Science.gov

Sample records for accurate numerical evaluation

  1. On numerically accurate finite element solutions in the fully plastic range

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are also discussed.

  2. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
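
    As an illustration of the extrapolation idea (not the authors' code), here is a minimal sketch of one Richardson step applied to a second-order central-difference derivative; combining estimates at h and h/2 cancels the leading error term, just as the abstract describes doing for mesh-based expectation values:

```python
import math

def central_diff(f, x, h):
    # Second-order central difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    # One Richardson extrapolation step: combine estimates at h and h/2
    # to cancel the leading O(h^2) error term, yielding O(h^4) accuracy.
    d_h = central_diff(f, x, h)
    d_h2 = central_diff(f, x, h / 2)
    return (4 * d_h2 - d_h) / 3

# Toy example: d/dx sin(x) at x = 1, exact value cos(1).
exact = math.cos(1.0)
err_plain = abs(central_diff(math.sin, 1.0, 0.1) - exact)
err_rich = abs(richardson(math.sin, 1.0, 0.1) - exact)
```

    The same weighted combination of coarse- and fine-mesh results is what allows crude-mesh expectation values to be refined without recomputing wavefunctions.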

  3. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. A von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
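
    For context, Burgers' equation is the standard nonlinear test problem used here. The sketch below is a plain first-order conservative upwind scheme chosen for brevity (an illustrative assumption, not the partial-implicitization technique itself), showing the conservation property any such scheme should preserve on a periodic grid:

```python
import math

def burgers_upwind(u, dt, dx, steps):
    # First-order conservative upwind scheme for the inviscid Burgers'
    # equation u_t + (u^2/2)_x = 0 on a periodic grid, assuming u > 0
    # so that information always travels to the right.
    n = len(u)
    for _ in range(steps):
        flux = [0.5 * v * v for v in u]                  # F_i = u_i^2 / 2
        u = [u[i] - dt / dx * (flux[i] - flux[i - 1])    # upwind difference
             for i in range(n)]
    return u

n = 100
dx = 1.0 / n
u0 = [1.5 + 0.5 * math.sin(2 * math.pi * i * dx) for i in range(n)]
dt = 0.4 * dx / max(u0)                                  # CFL condition
u1 = burgers_upwind(u0, dt, dx, 200)                     # run past wave steepening
```

    In conservative form the periodic sum of u is preserved exactly (the fluxes telescope), which is a useful sanity check when comparing schemes of different order.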

  4. Time-Accurate Numerical Simulations of Synthetic Jet in Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of a three-dimensional synthetic jet in quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on the average velocity during the discharge phase of the cycle, V(sub j), and the jet width, d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  5. Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim

    2014-11-01

    Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical and experimental results is often hampered for several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g. determining the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our ongoing, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations and low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences between numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).

  6. Accurate derivative evaluation for any Grad–Shafranov solver

    SciTech Connect

    Ricketson, L.F.; Cerfon, A.J.; Rachh, M.; Freidberg, J.P.

    2016-01-15

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad–Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  7. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
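
    The two decision functions compared in the abstract can be sketched directly; the discrete bimodal posterior and the response values below are invented for illustration, not data from the experiments:

```python
import random

random.seed(42)

# A hypothetical discrete (bimodal) posterior over numerical responses,
# standing in for a participant's uncertain internal representation.
posterior = {3: 0.05, 4: 0.40, 5: 0.05, 6: 0.10, 7: 0.35, 8: 0.05}

def sample_response(post):
    # Decision function 1: draw a single sample from the posterior
    # by inverting its cumulative distribution.
    r = random.random()
    acc = 0.0
    for value, p in sorted(post.items()):
        acc += p
        if r <= acc:
            return value
    return max(post)

def map_response(post):
    # Decision function 2: always respond with the posterior mode.
    return max(post, key=post.get)

samples = [sample_response(posterior) for _ in range(10000)]
```

    Repeated sampling reproduces the posterior's shape across trials, whereas the maximum rule always returns the same mode; participants in the study fell between these two extremes.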

  8. Accurate numerical simulation of short fiber optical parametric amplifiers.

    PubMed

    Marhic, M E; Rieznik, A A; Kalogerakis, G; Braimiotis, C; Fragnito, H L; Kazovsky, L G

    2008-03-17

    We improve the accuracy of numerical simulations for short fiber optical parametric amplifiers (OPAs). Instead of using the usual coarse-step method, we adopt a model for birefringence and dispersion which uses fine-step variations of the parameters. We also improve the split-step Fourier method by exactly treating the nonlinear ellipse rotation terms. We find that results obtained this way for two-pump OPAs can be significantly different from those obtained by using the usual coarse-step fiber model, and/or neglecting ellipse rotation terms.

  9. Accurate spectral numerical schemes for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon; Cerfon, Antoine J.; Landreman, Matt

    2015-08-01

    We examine the merits of using a family of polynomials that are orthogonal with respect to a non-classical weight function to discretize the speed variable in continuum kinetic calculations. We consider a model one-dimensional partial differential equation describing energy diffusion in velocity space due to Fokker-Planck collisions. This relatively simple case allows us to compare the results of the projected dynamics with an expensive but highly accurate spectral transform approach. It also allows us to integrate in time exactly, and to focus entirely on the effectiveness of the discretization of the speed variable. We show that for a fixed number of modes or grid points, the non-classical polynomials can be many orders of magnitude more accurate than classical Hermite polynomials or finite-difference solvers for kinetic equations in plasma physics. We provide a detailed analysis of the difference in behavior and accuracy of the two families of polynomials. For the non-classical polynomials, if the initial condition is not smooth at the origin when interpreted as a three-dimensional radial function, the exact solution leaves the polynomial subspace for a time, but returns (up to roundoff accuracy) to the same point evolved to by the projected dynamics in that time. By contrast, using classical polynomials, the exact solution differs significantly from the projected dynamics solution when it returns to the subspace. We also explore the connection between eigenfunctions of the projected evolution operator and (non-normalizable) eigenfunctions of the full evolution operator, as well as the effect of truncating the computational domain.

  10. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  11. Accurate simulation of transient landscape evolution by eliminating numerical diffusion: the TTLEM 1.0 model

    NASA Astrophysics Data System (ADS)

    Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard

    2017-01-01

    Landscape evolution models (LEMs) allow the study of earth surface responses to changing climatic and tectonic forcings. While much effort has been devoted to the development of LEMs that simulate a wide range of processes, the numerical accuracy of these models has received less attention. Most LEMs use first-order accurate numerical methods that suffer from substantial numerical diffusion. Numerical diffusion particularly affects the solution of the advection equation and thus the simulation of retreating landforms such as cliffs and river knickpoints. This has potential consequences for the integrated response of the simulated landscape. Here we test a higher-order flux-limiting finite volume method that is total variation diminishing (TVD-FVM) to solve the partial differential equations of river incision and tectonic displacement. We show that using the TVD-FVM to simulate river incision significantly influences the evolution of simulated landscapes and the spatial and temporal variability of catchment-wide erosion rates. Furthermore, a two-dimensional TVD-FVM accurately simulates the evolution of landscapes affected by lateral tectonic displacement, a process whose simulation was hitherto largely limited to LEMs with flexible spatial discretization. We implement the scheme in TTLEM (TopoToolbox Landscape Evolution Model), a spatially explicit, raster-based LEM for the study of fluvially eroding landscapes in TopoToolbox 2.

  12. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because measurements with the detail necessary to be useful are rare for high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  13. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient in the calculation of higher order radiative effects and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both the Euclidean and physical kinematic regions, in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
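
    The quasi-Monte Carlo idea can be sketched in a few lines: replace pseudorandom points with a deterministic low-discrepancy point set. The rank-1 Kronecker rule and toy integrand below are illustrative assumptions (the paper's implementation works on sector-decomposed integrands with CUDA/GPU acceleration):

```python
import math

def qmc_integrate(f, alphas, n):
    # Rank-1 Kronecker (quasi-Monte Carlo) rule: deterministic points
    # x_i = frac(i * alpha) cover the unit cube far more evenly than
    # pseudorandom draws, so for smooth integrands the error typically
    # decays close to O(1/n) rather than the Monte Carlo O(1/sqrt(n)).
    total = 0.0
    for i in range(1, n + 1):
        point = [math.modf(i * a)[0] for a in alphas]  # fractional parts
        total += f(point)
    return total / n

# Toy integrand with known integral over [0,1]^2: iint x*y dx dy = 1/4.
alphas = [math.sqrt(2), math.sqrt(3)]  # irrational generators
est = qmc_integrate(lambda p: p[0] * p[1], alphas, 16384)
```

    Because the points are deterministic, each integrand evaluation is independent of the others, which is what makes the method embarrassingly parallel on GPUs.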

  14. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  15. Towards numerically accurate many-body perturbation theory: Short-range correlation effects

    SciTech Connect

    Gulans, Andris

    2014-10-28

    The example of the uniform electron gas is used for showing that the short-range electron correlation is difficult to handle numerically, while it noticeably contributes to the self-energy. Nonetheless, in condensed-matter applications studied with advanced methods, such as the GW and random-phase approximations, it is common to neglect contributions due to high-momentum (large q) transfers. Then, the short-range correlation is poorly described, which leads to inaccurate correlation energies and quasiparticle spectra. To circumvent this problem, an accurate extrapolation scheme is proposed. It is based on an analytical derivation for the uniform electron gas presented in this paper, and it provides an explanation why accurate GW quasiparticle spectra are easy to obtain for some compounds and very difficult for others.

  16. An accurate solution of elastodynamic problems by numerical local Green's functions

    NASA Astrophysics Data System (ADS)

    Loureiro, F. S.; Silva, J. E. A.; Mansur, W. J.

    2015-09-01

    Green's function based methodologies for elastodynamics in both the time and frequency domains, whether numerical or analytical, appear in many branches of physics and engineering. Thus, the development of exact expressions for Green's functions is of great importance. Unfortunately, such expressions are known only for relatively few kinds of geometry, medium and boundary conditions. Due to the difficulty of finding exact Green's functions, especially in the time domain, this paper presents a solution of the transient elastodynamic equations by a time-stepping technique based on the Explicit Green's Approach method, written in terms of the Green's and step response functions, both computed numerically by the finite element method. The major feature is the computation of these functions separately by the central difference time integration scheme and locally, owing to the principle of causality. More precisely, Green's functions are computed only at t = Δt adopting two time substeps, while step response functions are computed directly without substeps. The proposed time-stepping method proves to be quite accurate, with distinct numerical properties not present in the standard central difference scheme, as addressed in the numerical example.

  17. Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations

    SciTech Connect

    Bao, Weizhu. E-mail: bao@math.nus.edu.sg; Yang, Li. E-mail: yangli@nus.edu.sg

    2007-08-10

    In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are: (i) the application of a time-splitting spectral discretization for the Schroedinger-type equation in KGS; (ii) the utilization of a Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for the linear/nonlinear terms in time. The numerical methods are either explicit or implicit but explicitly solvable, unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.
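
    Operator splitting of the kind described, solving each sub-flow exactly and composing the substeps symmetrically, can be illustrated on a scalar ODE. The toy problem below is an assumption chosen because both pieces are exactly solvable, standing in for the linear/nonlinear split used for KGS:

```python
import math

def strang_step(u, dt):
    # Strang splitting for u' = -u - u^2: a half step of the linear part
    # (exact exponential decay), a full step of the nonlinear part
    # (an exactly solvable Bernoulli piece), then another half linear step.
    # The symmetric composition is second-order accurate in dt.
    u = u * math.exp(-dt / 2)      # half step of u' = -u
    u = u / (1 + u * dt)           # full step of u' = -u^2
    u = u * math.exp(-dt / 2)      # half step of u' = -u
    return u

def integrate(u0, t_end, n_steps):
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = strang_step(u, dt)
    return u

# Exact solution of u' = -u - u^2 with u(0) = 1: u(t) = e^-t / (2 - e^-t).
exact = math.exp(-1.0) / (2 - math.exp(-1.0))
err_coarse = abs(integrate(1.0, 1.0, 50) - exact)
err_fine = abs(integrate(1.0, 1.0, 100) - exact)
```

    Halving the step size should cut the error by roughly a factor of four, confirming second-order accuracy of the symmetric splitting.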

  18. A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin

    2016-07-01

    In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), the problem is converted to a sequence of linear ordinary differential equations whose solutions converge to that of the original problem. For the first time, the rational Euler (RE) and FRE functions have been constructed from Euler polynomials. In addition, the equation is solved on the semi-infinite domain without truncation to a finite domain, by taking the FRE as basis functions for the collocation method. This reduces the problem to a system of algebraic equations. We demonstrate that the new algorithm is efficient for obtaining the values of y'(0), y(x), and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.

  19. Numerical system utilising a Monte Carlo calculation method for accurate dose assessment in radiation accidents.

    PubMed

    Takahashi, F; Endo, A

    2007-01-01

    A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. Necessary resources are defined through a dialogue method on a commonly used personal computer for both tools. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.

  20. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
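
    The constant-conditions analytic modal solution that PolyPole-1 builds on can be sketched as follows, assuming a uniform initial concentration and a perfect-sink grain boundary (the standard textbook diffusion-in-a-sphere case, not the paper's exact formulation):

```python
import math

def fractional_release(tau, n_modes=200):
    # Modal (eigenfunction) solution for diffusion out of a sphere with
    # uniform initial concentration and a perfect-sink boundary:
    #   f(tau) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 tau) / n^2,
    # where tau = D*t/a^2 is dimensionless time and the series is
    # truncated at n_modes terms.
    s = sum(math.exp(-n * n * math.pi ** 2 * tau) / (n * n)
            for n in range(1, n_modes + 1))
    return 1.0 - (6.0 / math.pi ** 2) * s
```

    For constant conditions this series is exact; the PolyPole idea is to reuse such modal terms and correct them polynomially when the diffusion coefficient varies in time.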

  1. Keeping the edge: an accurate numerical method to solve the stream power law

    NASA Astrophysics Data System (ADS)

    Campforts, B.; Govers, G.

    2015-12-01

    Bedrock rivers set the base level of surrounding hill slopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also allows long-term uplift histories to be reconstructed from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long-term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical finite difference methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones, which are key to understanding transient landscapes. Here, we solve the stream power law by means of a finite volume method (FVM) which is total variation diminishing (TVD). TVD schemes are designed to capture sharp discontinuities, making them very suitable for modeling river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast propagating Niagara Falls. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. The use of a TVD_FVM to understand long term landscape evolution is therefore an important addition to the toolbox at the disposal of geomorphologists.
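
    The contrast between a first-order upwind scheme and a minmod-limited TVD scheme can be demonstrated on linear advection of a step profile, a stand-in for a retreating knickpoint. This is an illustrative sketch under those simplifying assumptions, not the TTLEM/stream-power implementation:

```python
def minmod(a, b):
    # Slope limiter: the smaller-magnitude slope, or zero at extrema.
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect(u, nu, steps, limited):
    # Linear advection u_t + c u_x = 0 (c > 0) on a periodic grid with
    # Courant number nu. limited=False gives first-order upwind;
    # limited=True gives a MUSCL scheme with a minmod limiter (TVD).
    n = len(u)
    for _ in range(steps):
        slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i])
                  if limited else 0.0 for i in range(n)]
        # Interface value at i+1/2, reconstructed from the upwind cell i.
        flux = [u[i] + 0.5 * (1 - nu) * slopes[i] for i in range(n)]
        u = [u[i] - nu * (flux[i] - flux[i - 1]) for i in range(n)]
    return u

n = 100
u0 = [1.0 if 20 <= i < 50 else 0.0 for i in range(n)]  # "knickpoint"-like step
nu = 0.5
steps = 80                                   # total shift = nu * steps = 40 cells
exact = [u0[(i - 40) % n] for i in range(n)]
err_upwind = sum(abs(a - b) for a, b in zip(advect(u0, nu, steps, False), exact))
err_tvd = sum(abs(a - b) for a, b in zip(advect(u0, nu, steps, True), exact))
```

    The limited scheme keeps the front a few cells wide without over- or undershooting, while plain upwind smears it progressively; this is the numerical-diffusion effect the abstract describes at knickpoints.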

  2. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    PubMed

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. Under in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE leads to compressive deformations of the insert that are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the correct level of surface deformation.

  3. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
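
    The interpolation substep at the heart of such orbital-advection schemes can be sketched in one dimension: split the orbital shift into an exact integer-cell part plus an interpolated fractional part. Linear interpolation is used here for brevity (an assumption; production codes typically use higher-order interpolants):

```python
import math

def fargo_shift(u, shift_cells):
    # FARGO-like orbital advection sketch: transport a periodic profile
    # by a (generally non-integer) number of cells. The integer part is
    # an exact index roll, costing no accuracy and exempt from the
    # Courant condition; only the fractional remainder is interpolated.
    n = len(u)
    whole = math.floor(shift_cells)
    frac = shift_cells - whole
    rolled = [u[(i - whole) % n] for i in range(n)]   # exact integer shift
    return [(1 - frac) * rolled[i] + frac * rolled[(i - 1) % n]
            for i in range(n)]

n = 64
u = [math.sin(2 * math.pi * i / n) for i in range(n)]
shifted = fargo_shift(u, 10.25)                       # shift by 10.25 cells
exact = [math.sin(2 * math.pi * (i - 10.25) / n) for i in range(n)]
err = max(abs(a - b) for a, b in zip(shifted, exact))
```

    Because only the sub-cell remainder is interpolated, the effective Courant number seen by the interpolation stays below one regardless of how supersonic the mean orbital motion is.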

  4. Advanced numerical techniques for accurate unsteady simulations of a wingtip vortex

    NASA Astrophysics Data System (ADS)

    Ahmad, Shakeel

    A numerical technique is developed to simulate the vortices associated with stationary and flapping wings. The Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations are used over an unstructured grid. The present work assesses the locations of the origins of vortex generation, models those locations and develops a systematic mesh refinement strategy to simulate vortices more accurately using the URANS model. The vortex center plays a key role in the analysis of the simulation data. A novel approach to locating a vortex center is also developed referred to as the Max-Max criterion. Experimental validation of the simulated vortex from a stationary NACA0012 wing is achieved. The tangential velocity along the core of the vortex falls within five percent of the experimental data in the case of the stationary NACA0012 simulation. The wing surface pressure coefficient also matches with the experimental data. The refinement techniques are then focused on unsteady simulations of pitching and dual-mode wing flapping. Tip vortex strength, location, and wing surface pressure are analyzed. Links to vortex behavior and wing motion are inferred. Key words: vortex, tangential velocity, Cp, vortical flow, unsteady vortices, URANS, Max-Max, Vortex center

  5. Numerical evaluation of high energy particle effects in magnetohydrodynamics

    SciTech Connect

    White, R.B.; Wu, Y.

    1994-03-01

    The interaction of high energy ions with magnetohydrodynamic modes is analyzed. A numerical code is developed which evaluates the contribution of the high energy particles to mode stability by orbit averaging of the motion, computed from Hamiltonian guiding center equations in either analytic or numerically generated equilibria. A dispersion relation is then used to evaluate the effect of the particles on the linear mode. Generic behavior of the solutions of the dispersion relation is discussed and the dominant contributions of different components of the particle distribution function are identified. Numerical convergence of Monte Carlo simulations is analyzed. The resulting code ORBIT provides an accurate means of comparing experimental results with the predictions of kinetic magnetohydrodynamics. The method can be extended to include self-consistent modification of the particle orbits by the mode, and hence the full nonlinear dynamics of the coupled system.

  6. Intrabeam scattering formulas for fast numerical evaluation

    SciTech Connect

    Nagaitsev, Sergei; /Fermilab

    2005-03-01

    Small-angle multiple intrabeam scattering (IBS) emittance growth rates are normally expressed through integrals, which require numerical evaluation at various locations of the accelerator lattice. In this paper, I demonstrate that the IBS growth rates can be presented as closed-form expressions with the help of the so-called symmetric elliptic integral. This integral can be evaluated numerically by a very efficient recursive method based on the duplication theorem. Several examples of IBS rates are given for a smooth-lattice approximation, equal transverse temperatures, and plasma temperature relaxation.
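The recursive duplication-theorem evaluation can be sketched for Carlson's symmetric integral R_F (a minimal illustration of the technique, not the author's code; the IBS formulas use related symmetric forms):

```python
import math

def carlson_rf(x, y, z, tol=1e-10):
    """Carlson's symmetric elliptic integral R_F(x, y, z) via the duplication
    theorem: R_F(x, y, z) = R_F((x+L)/4, (y+L)/4, (z+L)/4), where
    L = sqrt(x*y) + sqrt(y*z) + sqrt(z*x).  Each application drives the
    arguments toward a common value m, at which point R_F = 1/sqrt(m)."""
    while True:
        lam = (math.sqrt(x) * math.sqrt(y) + math.sqrt(y) * math.sqrt(z)
               + math.sqrt(z) * math.sqrt(x))
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        m = (x + y + z) / 3
        if max(abs(x - m), abs(y - m), abs(z - m)) < tol * m:
            return 1.0 / math.sqrt(m)
```

For example, the complete elliptic integral obeys K(k) = R_F(0, 1 - k**2, 1), so carlson_rf(0, 1, 1) recovers K(0) = pi/2. The arguments contract toward each other by roughly a factor of four per pass, which is the source of the method's efficiency.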

  7. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
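The assimilation step, a weighted average of two predictors, can be sketched generically with inverse-variance weights (hypothetical weights and simulated data; not the authors' exact scheme):

```python
import random

def inverse_variance_combine(pred_a, pred_b, var_a, var_b):
    """Weighted average with weights proportional to 1/variance; for
    independent, unbiased errors this minimises the combined error variance."""
    wa, wb = 1.0 / var_a, 1.0 / var_b
    return (wa * pred_a + wb * pred_b) / (wa + wb)

def rmse(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5
```

For two predictors with error variances 1 and 4, the combined estimator has variance 1/(1/1 + 1/4) = 0.8, below either input, which is the "reduction in prediction error variance" effect described above.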

  8. Numerical Evaluation of 2D Ground States

    NASA Astrophysics Data System (ADS)

    Kolkovska, Natalia

    2016-02-01

    A ground state is defined as the positive radial solution of a multidimensional nonlinear elliptic problem with nonlinearity f(u) = a|u|^(p-1)u or f(u) = a|u|^p u + b|u|^(2p) u. The numerical evaluation of ground states is based on the shooting method applied to an equivalent dynamical system. A combination of the fourth-order Runge-Kutta method and a Hermite extrapolation formula is applied to solve the resulting initial value problem. The efficiency of this procedure is demonstrated in the 1D case, where the maximal difference between the exact and numerical solutions is ≈ 10^(-11) for a discretization step of 0.00025. As a major application, we evaluate numerically the critical energy constant. This constant is defined as a functional of the ground state and is used in the study of the 2D Boussinesq equations.
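The shooting idea can be illustrated on a hypothetical 1D model (a hedged sketch using plain RK4 and bisection, not the paper's Hermite-extrapolation scheme): for u'' = u - u^3 with u'(0) = 0, the ground-state amplitude u(0) = sqrt(2) is the unique value for which the profile decays monotonically to zero, and it can be bracketed by classifying each trial trajectory as an overshoot or an undershoot.

```python
import math

def classify(u0, x_max=20.0, h=2e-3):
    """Integrate u'' = u - u**3 with u(0) = u0, u'(0) = 0 by RK4.
    Return +1 if the profile overshoots (crosses zero),
    -1 if it undershoots (turns back upward before decaying)."""
    f = lambda u, v: (v, u - u ** 3)
    u, v = u0, 0.0
    for _ in range(int(x_max / h)):
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = f(u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = f(u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        if u <= 0.0:
            return +1      # crossed zero: u0 too large
        if v > 0.0:
            return -1      # turned back up: u0 too small
    return -1              # decayed within x_max: treat as undershoot

def ground_state_amplitude(lo=1.0, hi=2.0, iters=40):
    """Bisection on the shooting outcome."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if classify(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For this model the exact ground state is u(x) = sqrt(2)/cosh(x), so the recovered amplitude can be checked against sqrt(2).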

  9. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time-accurate, general-purpose adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction-correction method that is simple to implement and ensures the time accuracy of the grid. Time-accurate solutions of the 2-D Euler equations for an unsteady shock-vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  10. Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean

    NASA Astrophysics Data System (ADS)

    Phalippou, L.; Demeestere, F.

    2011-12-01

    The SAR mode of SIRAL-2 on board CryoSat-2 has been designed to measure primarily sea ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimetry over the open ocean. KR's results were mostly based on 'rule of thumb' considerations of speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over ocean, with a focus on forward modelling of the power waveforms. The accuracies of the geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived accounting for accurate modelling of the SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. An NFM of the power waveform avoids analytical approximations, which helps minimise geophysically dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique was not used in the field of conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over open ocean. Since PE 2007, improvements have been brought to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern range impulse response, azimuth impulse response

  11. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  12. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-07

    Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize the temperature programming, in gas chromatography, for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but these methods demand greater computational time. Hence, a new method for solving the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it recasts the prediction as root-finding problems within defined intervals. The proposed approach allows for retention time (tr) calculation, with accuracy determined by the user of the method, and significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
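The root-finding formulation can be sketched with a hypothetical two-parameter retention model and a linear temperature ramp (the actual models and constants in the paper differ): the retention time t_r is the root of g(t) = x(t) - 1, where x(t) is the fraction of the column the analyte has traversed.

```python
import math

def retention_factor(T, a=-12.0, b=6000.0):
    """Hypothetical thermodynamic retention model k(T) = exp(a + b/T)."""
    return math.exp(a + b / T)

def migrated_fraction(t, t0=60.0, T_init=320.0, ramp=0.1, n=2000):
    """Fraction of the column traversed after time t (trapezoidal rule),
    from dx/dt = 1 / (t0 * (1 + k(T(t)))) with ramp T(t) = T_init + ramp*t."""
    h = t / n
    s = 0.0
    for i in range(n + 1):
        f = 1.0 / (t0 * (1.0 + retention_factor(T_init + ramp * i * h)))
        s += 0.5 * f if i in (0, n) else f
    return s * h

def retention_time(lo=1.0, hi=3000.0, iters=60):
    """Bisection for t_r such that migrated_fraction(t_r) = 1."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if migrated_fraction(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the bracketing interval halves on each iteration, the user controls the accuracy directly through the number of bisection steps, which is the trade-off highlighted in the abstract.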

  13. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
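One classical member of the third-order Runge-Kutta family is Kutta's three-stage scheme; the sketch below (illustrative, not one of the paper's five new examples) verifies its third-order convergence on the linear test problem u' = -u.

```python
import math

def rk3_step(f, t, u, h):
    """Kutta's classical three-stage, third-order Runge-Kutta step."""
    k1 = f(t, u)
    k2 = f(t + h / 2, u + h / 2 * k1)
    k3 = f(t + h, u - h * k1 + 2 * h * k2)
    return u + h / 6 * (k1 + 4 * k2 + k3)

def integrate(f, u0, t0, t1, n):
    """Advance u' = f(t, u) from t0 to t1 in n fixed steps."""
    h = (t1 - t0) / n
    t, u = t0, u0
    for _ in range(n):
        u = rk3_step(f, t, u, h)
        t += h
    return u
```

Halving the step size should reduce the global error by about a factor of 2^3 = 8, which is the defining signature of a third-order method.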

  14. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  15. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
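Numerically propagating an STM alongside the state can be sketched on a stand-in dynamical model (a harmonic oscillator replacing the full ephemeris force model, and plain fixed-step RK4 in place of the adaptive Dormand-Prince integrator): the STM obeys the variational equation Phi' = A(t) Phi and is integrated together with the state.

```python
import math

W = 2.0  # hypothetical oscillator frequency, standing in for the force model

def deriv(y):
    """State (u, v) plus the flattened 2x2 STM, propagated by Phi' = A * Phi
    with A = [[0, 1], [-W**2, 0]] for the dynamics u'' = -W**2 * u."""
    u, v, p11, p12, p21, p22 = y
    return [v, -W * W * u, p21, p22, -W * W * p11, -W * W * p12]

def rk4(y, h, steps):
    """Classical fixed-step RK4 for the augmented state + STM system."""
    for _ in range(steps):
        k1 = deriv(y)
        k2 = deriv([a + h / 2 * b for a, b in zip(y, k1)])
        k3 = deriv([a + h / 2 * b for a, b in zip(y, k2)])
        k4 = deriv([a + h * b for a, b in zip(y, k3)])
        y = [a + h / 6 * (b + 2 * c + 2 * d + e)
             for a, b, c, d, e in zip(y, k1, k2, k3, k4)]
    return y
```

For this linear model the exact STM at time t is [[cos(Wt), sin(Wt)/W], [-W sin(Wt), cos(Wt)]], which gives a direct accuracy check for the numerically propagated matrix.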

  16. Evaluation of flow topology from numerical data

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, Jim

    1987-01-01

    Results obtained from numerical calculations and modern (optical) diagnostics are often too complicated for manual inspection, manipulation and display. A simpler but still accurate description of these results is needed to facilitate data understanding. The paper discusses preliminary investigations into methods for the decomposition of 2-D and 3-D fluid flow databases into elementary structures for purposes of description, analysis and comparison. An approach which involves the development of a scene-like representation of the flow topology is presented. Using features such as critical points and dividing streamlines as a basis, a representation of the global topology of the flow is generated. The topology is then represented by a graph, with the various structures represented by the nodes and their relationships in the flow by the connecting lines of the graph. Once the flow field has been placed in this form, it can be studied and compared with other data sets using techniques of syntactic pattern recognition or displayed using 3-D graphics techniques.

  17. Accurate polarimeter with multicapture fitting for plastic lens evaluation

    NASA Astrophysics Data System (ADS)

    Domínguez, Noemí; Mayershofer, Daniel; Garcia, Cristina; Arasa, Josep

    2016-02-01

    Due to their manufacturing process, plastic injection molded lenses do not achieve a constant density throughout their volume. This change of density introduces tensions in the material, inducing local birefringence, which in turn is translated into a variation of the ordinary and extraordinary refractive indices that can be expressed as a retardation phase plane using the Jones matrix notation. The detection and measurement of the value of the retardation of the phase plane are therefore very useful ways to evaluate the quality of plastic lenses. We introduce a polariscopic device to obtain two-dimensional maps of the tension distribution in the bulk of a lens, based on detection of the local birefringence. In addition to a description of the device and the mathematical approach used, a set of initial measurements is presented that confirms the validity of the developed system for the testing of the uniformity of plastic lenses.

  18. Development of accurate waveform models for eccentric compact binaries with numerical relativity simulations

    NASA Astrophysics Data System (ADS)

    Huerta, Eliu; Agarwal, Bhanu; Chua, Alvin; George, Daniel; Haas, Roland; Hinder, Ian; Kumar, Prayush; Moore, Christopher; Pfeiffer, Harald

    2017-01-01

    We recently constructed an inspiral-merger-ringdown (IMR) waveform model to describe the dynamical evolution of compact binaries on eccentric orbits, and used this model to constrain the eccentricity with which the gravitational wave transients currently detected by LIGO could be effectively recovered with banks of quasi-circular templates. We now present the second generation of this model, which is calibrated using a large catalog of eccentric numerical relativity simulations. We discuss the new features of this model, and show that its enhanced accuracy makes it a powerful tool to detect eccentric signals with LIGO.

  19. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  20. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

    Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30,000 rot/min), which causes tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. These results were used to build a numerical model and calibrate the material data in the Abaqus program. A brittle cracking damage model was applied for the TBC layer, which allows elements to be removed once the failure criterion is reached. Surface-based cohesive behavior was used to model the delamination that may occur at the boundary between the bond coat and the top coat.

  1. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    SciTech Connect

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behavior of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method, we propose a numerical method that achieves high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining the spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of the total energy equation to achieve the spurious-pressure-oscillation-free property with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed as numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
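The look-up table idea can be sketched with bilinear interpolation on uniform (P, T) grids, using an ideal-gas density as a hypothetical stand-in for the tabulated REFPROP data:

```python
# Hypothetical stand-in property: ideal-gas density rho(P, T) = P / (R * T).
R_SPECIFIC = 287.0  # J/(kg K), dry air

def rho_exact(P, T):
    return P / (R_SPECIFIC * T)

def build_table(P_grid, T_grid):
    """Precompute the property on a rectangular (P, T) grid."""
    return [[rho_exact(P, T) for T in T_grid] for P in P_grid]

def bilinear(P, T, P_grid, T_grid, table):
    """Bilinear table look-up; uniform grid spacing assumed for brevity."""
    i = min(int((P - P_grid[0]) / (P_grid[1] - P_grid[0])), len(P_grid) - 2)
    j = min(int((T - T_grid[0]) / (T_grid[1] - T_grid[0])), len(T_grid) - 2)
    tP = (P - P_grid[i]) / (P_grid[i + 1] - P_grid[i])
    tT = (T - T_grid[j]) / (T_grid[j + 1] - T_grid[j])
    return ((1 - tP) * (1 - tT) * table[i][j] + tP * (1 - tT) * table[i + 1][j]
            + (1 - tP) * tT * table[i][j + 1] + tP * tT * table[i + 1][j + 1])
```

A real implementation would tabulate strongly non-linear near-critical properties, where the table resolution, rather than the interpolation order alone, controls the accuracy.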

  2. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy of our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  3. Post-identification feedback to eyewitnesses impairs evaluators' abilities to discriminate between accurate and mistaken testimony.

    PubMed

    Smalarz, Laura; Wells, Gary L

    2014-04-01

    Giving confirming feedback to mistaken eyewitnesses has robust distorting effects on their retrospective judgments (e.g., how certain they were, their view, etc.). Does feedback harm evaluators' abilities to discriminate between accurate and mistaken identification testimony? Participant-witnesses to a simulated crime made accurate or mistaken identifications from a lineup and then received confirming feedback or no feedback. Each then gave videotaped testimony about their identification, and a new sample of participant-evaluators judged the accuracy and credibility of the testimonies. Among witnesses who were not given feedback, evaluators were significantly more likely to believe the testimony of accurate eyewitnesses than they were to believe the testimony of mistaken eyewitnesses, indicating significant discrimination. Among witnesses who were given confirming feedback, however, evaluators believed accurate and mistaken witnesses at nearly identical rates, indicating no ability to discriminate. Moreover, there was no evidence of overbelief in the absence of feedback whereas there was significant overbelief in the confirming feedback conditions. Results demonstrate that a simple comment following a witness' identification decision ("Good job, you got the suspect") can undermine fact-finders' abilities to discern whether the witness made an accurate or a mistaken identification.

  4. Evaluation of Numerical Storm Surge Models.

    DTIC Science & Technology

    1980-12-01

    of Defense, has primary responsibility for design of coastal protective works and for recommendations, where appropriate, for the management of exposed...coastal areas. In addition, the Federal Insurance Administration (FIA), of the Federal Emergency Management Agency (FEMA), is responsible for...study management and the responsibility to compare and evaluate the results of the computations were assigned to the Committee on Tidal Hydraulics

  5. Accurate Evaluation of the Dispersion Energy in the Simulation of Gas Adsorption into Porous Zeolites.

    PubMed

    Fraccarollo, Alberto; Canti, Lorenzo; Marchese, Leonardo; Cossi, Maurizio

    2017-03-07

    The force fields used to simulate gas adsorption in porous materials are strongly dominated by the van der Waals (vdW) terms. Here we discuss the delicate problem of estimating these terms accurately, analyzing the effect of different models. To this end, we simulated the physisorption of CH4, CO2, and Ar into various Al-free microporous zeolites (ITQ-29, SSZ-13, and silicalite-1), comparing the theoretical results with accurate experimental isotherms. The vdW terms in the force fields were parametrized against the free gas densities and high-level quantum mechanical (QM) calculations, comparing different methods to evaluate the dispersion energies. In particular, MP2 and DFT with semiempirical corrections, with suitable basis sets, were chosen to approximate the best QM calculations; either Lennard-Jones or Morse expressions were used to include the vdW terms in the force fields. The comparison of the simulated and experimental isotherms revealed that a strong interplay exists between the definition of the dispersion energies and the functional form used in the force field; these results are fairly general and reproducible, at least for the systems considered here. On this basis, the reliability of different models can be discussed, and a recipe can be provided to obtain accurate simulated adsorption isotherms.
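The two pair-potential forms compared in the force fields can be written out directly (generic parameter values; the fitted parameters from the paper are not reproduced here):

```python
import math

def lennard_jones(r, eps, sigma):
    """12-6 Lennard-Jones pair energy: minimum -eps at r = 2**(1/6)*sigma."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def morse(r, De, a, r0):
    """Morse pair energy: minimum -De at r = r0; the extra parameter `a`
    decouples the well width from its position, unlike the 12-6 form."""
    e = math.exp(-a * (r - r0))
    return De * (e * e - 2.0 * e)
```

The extra Morse parameter is what allows the dispersion well shape from MP2 or dispersion-corrected DFT to be matched more flexibly than with the two-parameter Lennard-Jones form.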

  6. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo-SPECT and cross evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. SPECT signal was in excellent linear correlation with OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  7. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments.

    PubMed

    Eter, Wael A; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-04-15

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, (111)In-exendin-3, was measured by ex vivo-SPECT and cross evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. SPECT signal was in excellent linear correlation with OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of (111)In-exendin-3 and insulin positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers.

  8. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patricia W.; Wilton, D. R.; Khayat, Michael A.

    2007-01-01

    Simple and efficient numerical procedures for evaluating the gradient of Newton-type potentials are presented. Convergences of both normal and tangential components of the gradient are examined. The convergence of the vector potential is also examined, and it is shown that the scheme for handling near-hypersingular integrals also is effective for the nearly singular potential terms.

  9. Factors Influencing Undergraduates' Self-Evaluation of Numerical Competence

    ERIC Educational Resources Information Center

    Tariq, Vicki N.; Durrani, Naureen

    2012-01-01

    This empirical study explores factors influencing undergraduates' self-evaluation of their numerical competence, using data from an online survey completed by 566 undergraduates from a diversity of academic disciplines, across all four faculties at a post-1992 UK university. Analysis of the data, which included correlation and multiple regression…

  10. A defect corrected finite element approach for the accurate evaluation of magnetic fields on unstructured grids

    NASA Astrophysics Data System (ADS)

    Römer, Ulrich; Schöps, Sebastian; De Gersem, Herbert

    2017-04-01

    In electromagnetic simulations of magnets and machines, one is often interested in a highly accurate and local evaluation of the magnetic field uniformity. Based on local post-processing of the solution, a defect correction scheme is proposed as an easy-to-realize alternative to higher order finite element or hybrid approaches. Radial basis functions (RBFs) are key to the generality of the method, which in particular can handle unstructured grids. Also, contrary to conventional finite element basis functions, higher derivatives of the solution can be evaluated, as required, e.g., for deflection magnets. Defect correction is applied to obtain a solution with improved accuracy, and adjoint techniques are used to estimate the remaining error for a specific quantity of interest. Significantly improved (local) convergence orders are obtained. The scheme is also applied to the simulation of a Stern-Gerlach magnet currently in operation.
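    The post-processing idea, fitting an RBF interpolant to nodal field values and then differentiating the interpolant analytically, can be sketched in one dimension. This is a toy illustration, not the paper's defect-correction scheme; the Gaussian kernel, shape parameter, node count, and test function are all arbitrary choices:

    ```python
    import numpy as np

    # Fit a Gaussian-RBF interpolant s(x) = sum_j c_j * exp(-(eps*(x - x_j))**2)
    # to sampled nodal values, then evaluate its analytic first derivative.
    def rbf_fit(x_nodes, f_vals, eps):
        r = x_nodes[:, None] - x_nodes[None, :]
        A = np.exp(-(eps * r) ** 2)
        # A tiny Tikhonov term guards against the ill-conditioning that is
        # typical of smooth RBF kernel matrices on dense node sets.
        return np.linalg.solve(A + 1e-12 * np.eye(len(x_nodes)), f_vals)

    def rbf_deriv(x_eval, x_nodes, coeffs, eps):
        r = x_eval - x_nodes
        # d/dx exp(-(eps*r)^2) = -2*eps^2*r * exp(-(eps*r)^2)
        return np.sum(coeffs * (-2.0 * eps**2 * r) * np.exp(-(eps * r) ** 2))

    x_nodes = np.linspace(0.0, 1.0, 21)       # "unstructured" data sites
    f_vals = np.sin(2 * np.pi * x_nodes)       # sampled field values
    eps = 10.0                                 # shape parameter (illustrative)
    c = rbf_fit(x_nodes, f_vals, eps)
    d_approx = rbf_deriv(0.5, x_nodes, c, eps)
    d_exact = 2 * np.pi * np.cos(np.pi)        # f'(0.5) = -2*pi
    print(abs(d_approx - d_exact))
    ```

    Conventional piecewise-linear finite element bases would give only a discontinuous, element-wise constant derivative here; the RBF interpolant is globally smooth, which is what makes higher derivatives accessible.
    
    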

  11. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.

  12. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 of the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints arise because charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the energy a particle gains from the fields per step remains small, in order to resolve gradients in the collision cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
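    The reported constraints translate into concrete resolution rules once the gas state is known. A back-of-the-envelope sketch (the neutral density, cross section, and electron speed below are illustrative values, not those of the study):

    ```python
    # Illustrative gas/electron parameters (not taken from the paper).
    n_neutral = 3.3e22     # neutral number density, m^-3
    sigma = 1e-19          # momentum-transfer cross section, m^2
    v_electron = 1.0e6     # characteristic electron speed, m/s

    mean_free_path = 1.0 / (n_neutral * sigma)         # lambda = 1/(n*sigma)
    mean_collision_time = mean_free_path / v_electron  # tau = lambda / v

    # Constraints reported for accurate avalanche simulation:
    # dt ~ tau/100 and dx ~ lambda/25 -- far stricter than the usual
    # PIC-DSMC rules of thumb of dt ~ tau/10 and dx ~ lambda.
    dt_max = mean_collision_time / 100.0
    dx_max = mean_free_path / 25.0
    print(mean_free_path, dt_max, dx_max)
    ```

    For these numbers the mesh must resolve roughly ten-micron scales and the timestep picosecond scales, which is why the tightened constraints dominate the cost of such simulations.
    
    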

  13. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    PubMed

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, applying Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed, but also the filtering process itself, can be a seed of this numerical instability. An investigation of the relationship between turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question of whether a numerically stable subgrid-scale model can be ideally accurate.

  14. Accurate Evaluation of Microwave-Leakage-Induced Frequency Shifts in Fountain Clocks

    NASA Astrophysics Data System (ADS)

    Fang, Fang; Liu, Kun; Chen, Wei-Liang; Liu, Nian-Feng; Suo, Rui; Li, Tian-Chun

    2014-10-01

    We report theoretical calculations of the transition probability errors introduced by microwave leakage in Cs fountain clocks, which shift the clock frequency. The results show that the transition probability errors are affected by the Ramsey pulse amplitude, the relative phase between the Ramsey field and the leakage field, and the asymmetry of the leakage fields for the upward and downward passages. The effect is quite different for leakage fields present below and above the Ramsey cavity. The leakage-field-induced frequency shifts of the NIM5 fountain clock in different cases are measured. The results are consistent with the theoretical calculations and give an accurate evaluation of the leakage-field-induced frequency shifts, distinguished from other microwave-power-related effects for the first time.

  15. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

    2007-01-01

    Recently, significant progress has been made in the handling of singular and nearly singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and the handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, the Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leave the remaining terms analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handle both.
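    The cancellation idea — choosing a change of variables whose Jacobian absorbs the singularity — can be illustrated on a simple 1D model integral (a toy analogue, not the authors' scheme for the vector gradient kernels): for ∫₀¹ f(x)/√x dx, substituting x = u² gives dx = 2u du, and the Jacobian 2u cancels the 1/√x = 1/u singularity exactly, leaving a smooth integrand that ordinary quadrature handles well.

    ```python
    import math

    # Toy near-singular integral: I = ∫_0^1 cos(x)/sqrt(x) dx.
    # Direct midpoint quadrature converges slowly near x = 0; the
    # substitution x = u^2 (Jacobian 2u) cancels the singularity:
    # I = ∫_0^1 cos(u^2) * 2u / u du = 2 ∫_0^1 cos(u^2) du   (smooth)

    def midpoint(g, n):
        h = 1.0 / n
        return h * sum(g((k + 0.5) * h) for k in range(n))

    n = 200
    direct = midpoint(lambda x: math.cos(x) / math.sqrt(x), n)
    transformed = midpoint(lambda u: 2.0 * math.cos(u * u), n)

    # High-resolution value of the transformed (smooth) integral as reference.
    reference = midpoint(lambda u: 2.0 * math.cos(u * u), 200000)
    print(abs(direct - reference), abs(transformed - reference))
    ```

    With the same 200 quadrature points, the transformed integrand is evaluated to near machine-level accuracy while the untransformed one still carries an O(√h) error from the cell touching the singularity; this is the efficiency argument for cancellation over subtraction made in the abstract.
    
    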

  16. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

    Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high-frequency stimulation of a target area deep inside the brain. Very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be compared with expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
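    The Normalized Gradient Fields distance rewards alignment of image edges regardless of intensity contrast, which is what makes it suitable for CT-to-MR registration. A minimal 2D sketch of the measure itself (synthetic images and an arbitrary edge parameter ε; a real pipeline adds the rigid transform and multilevel Gauss-Newton optimization on top):

    ```python
    import numpy as np

    def ngf_distance(R, T, eps=1e-2):
        """Normalized Gradient Fields distance: near its minimum when the
        edges of R and T align, independent of intensity contrast."""
        gRy, gRx = np.gradient(R)
        gTy, gTx = np.gradient(T)
        dot = gRx * gTx + gRy * gTy
        nR = gRx**2 + gRy**2 + eps**2   # eps damps noise-level gradients
        nT = gTx**2 + gTy**2 + eps**2
        return float(np.mean(1.0 - dot**2 / (nR * nT)))

    # Synthetic "anatomy": a smooth blob, a contrast-scaled copy (same
    # structure, different intensities), and a spatially shifted copy.
    y, x = np.mgrid[0:64, 0:64]
    R = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 100.0)
    contrast = 2.0 * R
    shifted = np.roll(R, 10, axis=1)

    print(ngf_distance(R, contrast), ngf_distance(R, shifted))
    ```

    The contrast-scaled copy scores nearly as well as a perfect match while the shifted copy scores worse, illustrating why NGF works across modalities whose intensities are related only through shared structure.
    
    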

  17. Identification and Evaluation of Reference Genes for Accurate Transcription Normalization in Safflower under Different Experimental Conditions

    PubMed Central

    Li, Dandan; Hu, Bo; Wang, Qing; Liu, Hongchang; Pan, Feng; Wu, Wei

    2015-01-01

    Safflower (Carthamus tinctorius L.) has received a significant amount of attention as a medicinal plant and oilseed crop. Gene expression studies provide a theoretical molecular biology foundation for improving new traits and developing new cultivars. Real-time quantitative PCR (RT-qPCR) has become a crucial approach for gene expression analysis. In addition, appropriate reference genes (RGs) are essential for accurate and rapid relative quantification analysis of gene expression. In this study, fifteen candidate RGs involved in multiple plant metabolic pathways were selected and validated under different experimental treatments, at different seed development stages, and in different cultivars and tissues for real-time PCR experiments. These genes were ABCS, 60SRPL10, RANBP1, UBCL, MFC, UBCE2, EIF5A, COA, EF1-β, EF1, GAPDH, ATPS, MBF1, GTPB and GST. Suitability was evaluated with the geNorm and NormFinder programs. Overall, EF1, UBCE2, EIF5A, ATPS and 60SRPL10 were the most stable genes, and MBF1, as well as MFC, were the most unstable genes by geNorm and NormFinder software across all experimental samples. To verify the RGs selected by the two programs, expression analysis of 7 CtFAD2 genes in safflower seeds at different developmental stages under cold stress was performed using different RGs for normalization in RT-qPCR experiments. The results showed similar expression patterns when the most stable RGs selected by geNorm or NormFinder software were used, whereas differences were detected when the most unstable reference genes were used. The most stable combination of genes selected in this study will help to achieve more accurate and reliable results in a wide variety of samples in safflower. PMID:26457898
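    The geNorm stability measure M behind this ranking is simple to compute: for each candidate gene, take the standard deviation of its log2 expression ratio against every other candidate across samples, then average those; the lowest M is the most stable gene. A sketch with made-up expression values (gene count and numbers are illustrative only):

    ```python
    import numpy as np

    def genorm_m(expr):
        """geNorm stability M for each column (gene) of an
        (n_samples x n_genes) matrix of relative expression values:
        M_j = mean over k != j of std( log2(expr[:, j] / expr[:, k]) )."""
        log_expr = np.log2(expr)
        n_genes = expr.shape[1]
        M = np.zeros(n_genes)
        for j in range(n_genes):
            ratios = log_expr[:, j:j + 1] - log_expr   # log2 ratios vs all genes
            sds = ratios.std(axis=0, ddof=1)
            M[j] = np.delete(sds, j).mean()            # drop self-comparison
        return M

    rng = np.random.default_rng(0)
    n = 12                                             # samples
    stable = np.full(n, 100.0)                         # two co-regulated genes:
    stable2 = 2.0 * stable                             # constant ratio -> stable
    unstable = 100.0 * 2.0 ** rng.normal(0, 1.5, n)    # wildly varying gene
    expr = np.column_stack([stable, stable2, unstable])
    print(genorm_m(expr))   # the third gene should get the highest M
    ```

    Because M is built from pairwise ratios, a gene that varies in lockstep with another candidate still scores well; NormFinder uses a different, model-based variance decomposition, which is why the abstract runs both.
    
    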

  18. Numerical evaluation of gas core length in free surface vortices

    NASA Astrophysics Data System (ADS)

    Cristofano, L.; Nobili, M.; Caruso, G.

    2014-11-01

    The formation and evolution of free surface vortices represent an important topic in many hydraulic intakes, since strong whirlpools introduce swirl flow at the intake and can cause entrainment of floating matter and gas. In particular, gas entrainment phenomena are an important safety issue for Sodium-cooled Fast Reactors, because the introduction of gas bubbles into the core causes dangerous reactivity fluctuations. In this paper, a numerical evaluation of the gas core length in free surface vortices is presented, according to two different approaches. In the first one, a prediction method developed by the Japanese researcher Sakai and his team has been applied. This method is based on the Burgers vortex model, and it estimates the gas core length of a free surface vortex from two parameters calculated with single-phase CFD simulations: the circulation and the downward velocity gradient. The other approach consists of performing a two-phase CFD simulation of a free surface vortex, in order to numerically reproduce the gas-liquid interface deformation. A mapped convergent mesh was used to reduce numerical error, and a VOF (Volume of Fluid) method was selected to track the gas-liquid interface. Two different turbulence models have been tested and analyzed. Experimental measurements of the gas core length of free surface vortices have been performed using optical methods, and the numerical results have been compared with the experimental measurements. The computational domain and the boundary conditions of the CFD simulations were set consistently with the experimental test conditions.
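    The role of the two CFD-derived inputs, circulation and downward velocity gradient, can be illustrated with a simplified Burgers/Rankine estimate. The formula below is a textbook free-surface-drawdown approximation, not Sakai's actual correlation, and all numbers are invented for illustration:

    ```python
    import math

    # Illustrative inputs of the kind single-phase CFD would supply:
    gamma = 0.01    # circulation around the vortex axis, m^2/s
    alpha = 1.0     # downward (axial) velocity gradient, 1/s
    nu = 1.0e-6     # kinematic viscosity of water, m^2/s
    g = 9.81        # gravitational acceleration, m/s^2

    # Burgers vortex: the viscous core radius is set by the balance of
    # axial strain and viscous diffusion, r_c ≈ 2.24 * sqrt(nu/alpha).
    r_core = 2.24 * math.sqrt(nu / alpha)

    # Rankine-type free-surface dip: free-vortex drawdown outside the core
    # plus solid-body drawdown inside it, depth = Γ² / (4 π² g r_c²).
    depth = gamma**2 / (4.0 * math.pi**2 * g * r_core**2)
    print(r_core, depth)
    ```

    The quadratic dependence on circulation and the inverse-square dependence on core radius show why a modest increase in intake swirl, or a stronger downward gradient that thins the core, can rapidly deepen the gas core toward the suction mouth.
    
    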

  19. Determination of bedform resolution necessary to accurately resolve the flow field by comparing numerical simulations with field data

    NASA Astrophysics Data System (ADS)

    Margelowsky, G.; Foster, D.; Traykovski, P.; Felzenberg, J. A.

    2010-12-01

    The dynamics of wave-current and tidal flow bottom boundary layers are evaluated with a quasi-three-dimensional, non-hydrostatic, phase-resolving wave-current bottom boundary layer model, Dune. In each case, the model is evaluated with field observations of velocity profiles and seabed geometry. For wave-current boundary layers, the observations were obtained over a 26-day period in 13 m of water at the Martha's Vineyard Coastal Observatory (MVCO, Edgartown, MA) in 2002-2003. Bedforms were orbital-scale ripples with wavelengths of 50-125 cm and heights of 5-20 cm, with peak root-mean-square orbital velocities and mean flows typically ranging from 50-70 cm/s and 10-20 cm/s, respectively. The observations for tidal flows were obtained over a 3-day period in 13-16 m of water in Portsmouth Harbor (Portsmouth, NH) in 2008. Bedforms were dunes with wavelengths on the order of 1 m and heights on the order of 10 cm, with typical peak tidal currents of approximately 1 m/s. The flow field is simulated with a finite volume approach to solve the Reynolds-Averaged Navier-Stokes equations with a k-ω second-order turbulence closure scheme. The model simulations are performed for a range of theoretical and observed bedforms to examine the boundary layer sensitivity to the resolution of the bottom roughness. The observed and predicted vertical velocity profiles are evaluated with correlations and Brier Skill scores over the range of data sets.
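    Skill scores of this family follow the generic form SS = 1 − MSE(model)/MSE(reference), so a score of 1 is perfect and 0 means no better than the reference prediction. A minimal version (the velocity profile, roughness values, and baseline choice are invented; the abstract does not specify the study's exact baseline):

    ```python
    import numpy as np

    def skill_score(pred, obs, baseline):
        """Generic MSE skill score: 1 = perfect, 0 = no better than the
        baseline, negative = worse than the baseline."""
        mse_model = np.mean((pred - obs) ** 2)
        mse_base = np.mean((baseline - obs) ** 2)
        return 1.0 - mse_model / mse_base

    # Invented near-bed velocity profile (m/s) at heights z (m):
    z = np.linspace(0.05, 1.0, 20)
    obs = 0.6 * np.log(z / 0.01)             # "observed" log-layer profile
    pred = 0.6 * np.log(z / 0.012)           # model with slightly wrong roughness
    baseline = np.full_like(z, obs.mean())   # depth-mean baseline prediction

    print(skill_score(pred, obs, baseline))
    ```

    Unlike a correlation coefficient, which ignores constant offsets, this score penalizes the roughness-induced bias in the predicted profile, which is why the two metrics are reported together.
    
    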

  20. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    PubMed

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing the quality of commercial organic fertilizers. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique gave accurate predictions of the total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggest that the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.
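    The NIR-PLS idea — regressing a quality attribute on many collinear spectral bands through a few latent components — can be sketched with a minimal NIPALS PLS1 implementation. The "spectra" below are synthetic random data, and a real workflow would use a tested library implementation with proper cross-validation:

    ```python
    import numpy as np

    def pls1_fit(X, y, n_components):
        """Minimal NIPALS PLS1: returns coefficients b and the centering
        terms, so that y_hat = (X - X_mean) @ b + y_mean."""
        X_mean, y_mean = X.mean(axis=0), y.mean()
        Xc, yc = X - X_mean, y - y_mean
        W, P, q = [], [], []
        for _ in range(n_components):
            w = Xc.T @ yc                    # weight: covariance direction
            if np.linalg.norm(w) < 1e-12:    # y residual exhausted
                break
            w /= np.linalg.norm(w)
            t = Xc @ w                       # scores
            tt = t @ t
            p = Xc.T @ t / tt                # X loadings
            q_a = yc @ t / tt                # y loading
            Xc = Xc - np.outer(t, p)         # deflate X
            yc = yc - q_a * t                # deflate y
            W.append(w); P.append(p); q.append(q_a)
        W, P, q = np.array(W).T, np.array(P).T, np.array(q)
        b = W @ np.linalg.solve(P.T @ W, q)
        return b, X_mean, y_mean

    rng = np.random.default_rng(1)
    X = rng.standard_normal((40, 8))                   # stand-in for NIR bands
    true_b = np.array([1.0, -2.0, 0.5, 0, 0, 3.0, 0, 0])
    y = X @ true_b + 0.01 * rng.standard_normal(40)    # e.g. organic matter %
    b, Xm, ym = pls1_fit(X, y, n_components=8)
    y_hat = (X - Xm) @ b + ym
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(r2)
    ```

    With real NIR data the point of PLS is to use far fewer components than wavelengths, so the latent projection absorbs the band-to-band collinearity that would make ordinary least squares unstable.
    
    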

  1. Study on Applicability of Numerical Simulation to Evaluation of Gas Entrainment From Free Surface

    SciTech Connect

    Kei Ito; Takaaki Sakai; Hiroyuki Ohshima

    2006-07-01

    An onset condition of gas entrainment (GE) due to free surface vortices has been studied to support the design of a fast breeder reactor (FBR) with higher coolant velocity than conventional designs, because the GE might cause reactor operation instability and therefore should be avoided. The onset condition of the GE has been investigated experimentally and theoretically; however, the dependency of the vortex-type GE on the local geometry of each experimental system and on the local velocity distribution has prevented researchers from formulating a universal onset condition. A real-scale test is considered an accurate method to evaluate the occurrence of the vortex-type GE, but real-scale tests are generally expensive and not useful in the design study of large and complicated FBR systems, because frequently rearranging internal equipment as the design changes is difficult in a real-scale test. Numerical simulation seems a promising alternative to the real-scale test. In this research, to evaluate the applicability of numerical simulation to design work, simulations were conducted of a basic experimental system for the vortex-type GE. This basic experiment consisted of a rectangular flow channel and two components essential to the vortex-type GE in the channel, i.e., vortex generation and suction equipment. The generated vortex grew rapidly, interacting with the suction flow, and the grown vortex formed a free surface dent (gas core). When the tip of the gas core, or bubbles detached from the tip of the gas core, reached the suction mouth, the gas was entrained into the suction tube. The results of numerical simulation under the experimental conditions were compared to the experiment in terms of velocity distributions and free surface shape. As a result, the numerical simulation showed qualitatively good agreement with experimental data. The numerical simulation results were similar to the experimental

  2. Evaluation of method of moments codes: University of Houston junction and numerical electromagnetic code

    NASA Astrophysics Data System (ADS)

    Rockway, J. W.; Logan, J. C.; Deneris, C. A.

    1991-10-01

    The principal goal of the Electromagnetic Compatibility (EMC) Project is to minimize exterior electromagnetic interference (EMI) problems during the life-cycle of Navy surface ships. An important aspect of mitigating exterior EMI problems is the characterization of antenna performance. At the present time, for shipboard MF and HF (2 to 30 MHz) antennas, there exist two techniques for performing a topside antenna study: brass scale modeling and numerical modeling. Brass scale models have a few drawbacks: they are time-consuming to build, difficult to modify rapidly, near fields are troublesome to measure accurately on them, and they are somewhat limited in their application. This report is an evaluation of the progress in development of two advanced computer programs for antenna modeling: the JUNCTION code, under development at the University of Houston, and the Numerical Electromagnetic Code - Version 4 (NEC4), being developed at the Lawrence Livermore National Laboratory.

  3. The Good, the Strong, and the Accurate: Preschoolers' Evaluations of Informant Attributes

    ERIC Educational Resources Information Center

    Fusaro, Maria; Corriveau, Kathleen H.; Harris, Paul L.

    2011-01-01

    Much recent evidence shows that preschoolers are sensitive to the accuracy of an informant. Faced with two informants, one of whom names familiar objects accurately and the other inaccurately, preschoolers subsequently prefer to learn the names and functions of unfamiliar objects from the more accurate informant. This study examined the inference…

  4. Evaluating the Impact of Aerosols on Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Freitas, Saulo; Silva, Arlindo; Benedetti, Angela; Grell, Georg; Members, Wgne; Zarzur, Mauricio

    2015-04-01

    The Working Group on Numerical Experimentation (WMO, http://www.wmo.int/pages/about/sec/rescrosscut/resdept_wgne.html) has organized an exercise to evaluate the impact of aerosols on NWP. This exercise will involve regional and global models currently used for weather forecasting by operational centers worldwide, and aims at addressing the following questions: a) How important are aerosols for predicting the physical system (NWP, seasonal, climate), as distinct from predicting the aerosols themselves? b) How important is atmospheric model quality for air quality forecasting? c) What are the current capabilities of NWP models to simulate aerosol impacts on weather prediction? Toward this goal we have selected three strong or persistent events of aerosol pollution worldwide that could be fairly represented in current NWP models and that allowed for an evaluation of the aerosol impact on weather prediction. The selected events include a strong dust storm that blew off the coast of Libya and over the Mediterranean, an extremely severe episode of air pollution in Beijing and surrounding areas, and an extreme case of biomass burning smoke in Brazil. The experimental design calls for simulations with and without explicitly accounting for aerosol feedbacks in the cloud and radiation parameterizations. In this presentation we will summarize the results of this study, focusing on the evaluation of model performance in terms of its ability to faithfully simulate aerosol optical depth, and the assessment of the aerosol impact on the predictions of near-surface wind, temperature, humidity, rainfall and the surface energy budget.

  5. Factors influencing undergraduates' self-evaluation of numerical competence

    NASA Astrophysics Data System (ADS)

    Tariq, Vicki N.; Durrani, Naureen

    2012-04-01

    This empirical study explores factors influencing undergraduates' self-evaluation of their numerical competence, using data from an online survey completed by 566 undergraduates from a diversity of academic disciplines, across all four faculties at a post-1992 UK university. Analysis of the data, which included correlation and multiple regression analyses, revealed that undergraduates exhibiting greater confidence in their mathematical and numeracy skills, as evidenced by their higher self-evaluation scores and their higher scores on the confidence sub-scale contributing to the measurement of attitude, possess more cohesive, rather than fragmented, conceptions of mathematics, and display more positive attitudes towards mathematics/numeracy. They also exhibit lower levels of mathematics anxiety. Students exhibiting greater confidence also tended to be those who were relatively young (i.e. 18-29 years), whose degree programmes provided them with opportunities to practise and further develop their numeracy skills, and who possessed higher pre-university mathematics qualifications. The multiple regression analysis revealed that two positive predictors (overall attitude towards mathematics/numeracy and possession of a higher pre-university mathematics qualification) and five negative predictors (mathematics anxiety, lack of opportunity to practise/develop numeracy skills, being a more mature student, being enrolled in Health and Social Care compared with Science and Technology, and possessing no formal mathematics/numeracy qualification compared with a General Certificate of Secondary Education or equivalent qualification) together accounted for approximately 64% of the variation in students' perceptions of their numerical competence. Although the results initially suggested that male students were significantly more confident than females, one confounding variable was almost certainly the students' highest pre-university mathematics or numeracy qualification, since a higher

  6. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using the feature values of colors, shapes and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  7. Is scintillometer measurement accurate enough for evaluating remote sensing based energy balance ET models?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The three evapotranspiration (ET) measurement/retrieval techniques used in this study, lysimeter, scintillometer and remote sensing vary in their level of complexity, accuracy, resolution and applicability. The lysimeter with its point measurement is the most accurate and direct method to measure ET...

  8. Azimuthal cement evaluation with an acoustic phased-arc array transmitter: numerical simulations and field tests

    NASA Astrophysics Data System (ADS)

    Che, Xiao-Hua; Qiao, Wen-Xiao; Ju, Xiao-Dong; Wang, Rui-Jia

    2016-03-01

    We developed a novel cement evaluation logging tool, named the azimuthally acoustic bond tool (AABT), which uses a phased-arc array transmitter with azimuthal detection capability. We combined numerical simulations and field tests to verify the AABT tool. The numerical simulation results showed that the radiation direction of the subarray corresponding to the maximum amplitude of the first arrival matches the azimuth of the channeling when it is behind the casing. With larger channeling size in the circumferential direction, the amplitude difference of the casing wave at different azimuths becomes more evident. The test results showed that the AABT can accurately locate the casing collars and evaluate the cement bond quality with azimuthal resolution at the casing-cement interface, and can visualize the size, depth, and azimuth of channeling. In the case of good casing-cement bonding, the AABT can further evaluate the cement bond quality at the cement-formation interface with azimuthal resolution by using the amplitude map and the velocity of the formation wave.

  9. THE EVALUATION OF METHODS FOR CREATING DEFENSIBLE, REPEATABLE, OBJECTIVE AND ACCURATE TOLERANCE VALUES

    EPA Science Inventory

    In the field of bioassessment, tolerance has traditionally referred to the degree to which organisms can withstand environmental degradation. This concept has been around for many years and its use is widespread. In numerous cases, tolerance values (TVs) have been assigned to i...

  10. Numerical Weather Predictions Evaluation Using Spatial Verification Methods

    NASA Astrophysics Data System (ADS)

    Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    In recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
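    A representative metric from the neighborhood family of spatial verification methods is the Fractions Skill Score (FSS), which compares event fractions within windows rather than demanding point-by-point matches. A compact sketch with synthetic fields (pure-NumPy box filtering via an integral image; the storm shapes and thresholds are invented):

    ```python
    import numpy as np

    def box_fractions(binary, k):
        """Fraction of 'event' pixels in a (2k+1)x(2k+1) window around each
        pixel, computed with a padded cumulative-sum (integral image)."""
        p = np.pad(binary.astype(float), k, mode="constant")
        c = p.cumsum(axis=0).cumsum(axis=1)
        c = np.pad(c, ((1, 0), (1, 0)), mode="constant")   # zero row/column
        n = 2 * k + 1
        s = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
        return s / (n * n)

    def fss(fcst, obs, threshold, k):
        f = box_fractions(fcst >= threshold, k)
        o = box_fractions(obs >= threshold, k)
        mse = np.mean((f - o) ** 2)
        mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

    # Synthetic reflectivity fields: the same storm, displaced in the forecast.
    y, x = np.mgrid[0:80, 0:80]
    obs = 40.0 * np.exp(-((x - 40) ** 2 + (y - 40) ** 2) / 200.0)
    fcst = np.roll(obs, 6, axis=1)       # spatially offset forecast
    print(fss(fcst, obs, threshold=20.0, k=2), fss(fcst, obs, threshold=20.0, k=10))
    ```

    A point-wise score would heavily penalize the 6-pixel displacement at every grid cell; FSS instead shows skill recovering as the evaluation window grows past the displacement scale, which is exactly the extra information traditional verification lacks.
    
    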

  11. Evaluation of kinetic uncertainty in numerical models of petroleum generation

    USGS Publications Warehouse

    Peters, K.E.; Walters, C.C.; Mankiewicz, P.J.

    2006-01-01

    Oil-prone marine petroleum source rocks contain type I or type II kerogen having Rock-Eval pyrolysis hydrogen indices greater than 600 or 300-600 mg hydrocarbon/g total organic carbon (HI, mg HC/g TOC), respectively. Samples from 29 marine source rocks worldwide that contain mainly type II kerogen (HI = 230-786 mg HC/g TOC) were subjected to open-system programmed pyrolysis to determine the activation energy distributions for petroleum generation. Assuming a burial heating rate of 1°C/m.y. for each measured activation energy distribution, the calculated average temperature for 50% fractional conversion of the kerogen in the samples to petroleum is approximately 136 ± 7°C, but the range spans about 30°C (~121-151°C). Fifty-two outcrop samples of thermally immature Jurassic Oxford Clay Formation were collected from five locations in the United Kingdom to determine the variations of kinetic response for one source rock unit. The samples contain mainly type I or type II kerogens (HI = 230-774 mg HC/g TOC). At a heating rate of 1°C/m.y., the calculated temperatures for 50% fractional conversion of the Oxford Clay kerogens to petroleum differ by as much as 23°C (127-150°C). The data indicate that kerogen type, as defined by hydrogen index, is not systematically linked to kinetic response, and that default kinetics for the thermal decomposition of type I or type II kerogen can introduce unacceptable errors into numerical simulations. Furthermore, custom kinetics based on one or a few samples may be inadequate to account for variations in organofacies within a source rock. We propose three methods to evaluate the uncertainty contributed by kerogen kinetics to numerical simulations: (1) use the average kinetic distribution for multiple samples of source rock and the standard deviation for each activation energy in that distribution; (2) use source rock kinetics determined at several locations to describe different parts of the study area; and (3) use a weighted
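    The sensitivity of the 50%-conversion temperature to kinetics can be reproduced with a toy first-order Arrhenius model integrated along a linear 1°C/m.y. burial heating path. The frequency factor A = 1e14 s⁻¹ and the activation energies below are illustrative assumptions, not values from the paper.

```python
import math

R = 8.314             # gas constant, J/(mol K)
SEC_PER_MYR = 3.156e13

def t50(E_kJ, A=1.0e14, rate=1.0, T0=293.15, dT=0.1):
    """Temperature (K) at 50% kerogen conversion for first-order Arrhenius
    kinetics under a linear heating rate of `rate` degC/m.y.
    A (s^-1) and E (kJ/mol) are illustrative, not measured, values."""
    r = rate / SEC_PER_MYR                   # heating rate in K/s
    E = E_kJ * 1000.0
    survival, T = 1.0, T0
    while 1.0 - survival < 0.5:
        k = A * math.exp(-E / (R * T))       # rate constant at this T
        survival *= math.exp(-k * dT / r)    # advance one dT step (dt = dT / r)
        T += dT
        if T > 1500.0:                       # safety stop
            break
    return T

# Raising the activation energy shifts the conversion to higher temperature,
# the same effect that spreads the measured 50%-conversion temperatures.
print(t50(209.0), t50(230.0))
```

A shift of ~20 kJ/mol in a single activation energy moves the 50%-conversion temperature by tens of degrees, which is why default kinetics can mislead basin models.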

  12. Evaluating Cloud and Precipitation Processes in Numerical Models using Current and Potential Future Satellite Missions

    NASA Astrophysics Data System (ADS)

    van den Heever, S. C.; Tao, W. K.; Skofronick Jackson, G.; Tanelli, S.; L'Ecuyer, T. S.; Petersen, W. A.; Kummerow, C. D.

    2015-12-01

    Cloud, aerosol and precipitation processes play a fundamental role in the water and energy cycle. It is critical to accurately represent these microphysical processes in numerical models if we are to better predict cloud and precipitation properties on weather through climate timescales. Much has been learned about cloud properties and precipitation characteristics from NASA satellite missions such as TRMM, CloudSat, and more recently GPM. Furthermore, data from these missions have been successfully utilized in evaluating the microphysical schemes in cloud-resolving models (CRMs) and global models. However, there are still many uncertainties associated with these microphysics schemes. These uncertainties can be attributed, at least in part, to the fact that microphysical processes cannot be directly observed or measured, but instead have to be inferred from those cloud properties that can be measured. Evaluation of microphysical parameterizations is becoming increasingly important as enhanced computational capabilities are facilitating the use of more sophisticated schemes in CRMs, and as future global models are being run on what has traditionally been regarded as cloud-resolving scales using CRM microphysical schemes. In this talk we will demonstrate how TRMM, CloudSat and GPM data have been used to evaluate different aspects of current CRM microphysical schemes, providing examples of where these approaches have been successful. We will also highlight CRM microphysical processes that have not been well evaluated and suggest approaches for addressing such issues. Finally, we will introduce a potential NASA satellite mission, the Cloud and Precipitation Processes Mission (CAPPM), which would facilitate the development and evaluation of different microphysical-dynamical feedbacks in numerical models.

  13. A novel stress-accurate FE technology for highly non-linear analysis with incompressibility constraint. Application to the numerical simulation of the FSW process

    NASA Astrophysics Data System (ADS)

    Chiumenti, M.; Cervera, M.; Agelet de Saracibar, C.; Dialami, N.

    2013-05-01

    In this work a novel finite element technology based on a three-field mixed formulation is presented. The Variational Multi Scale (VMS) method is used to circumvent the LBB stability condition, allowing the use of linear piecewise interpolations for the displacement, stress and pressure fields, respectively. The result is an enhanced stress field approximation which enables stress-accurate results in nonlinear computational mechanics. The use of an independent nodal variable for the pressure field allows for an ad hoc treatment of the incompressibility constraint. This is a mandatory requirement due to the isochoric nature of the plastic strain in metal forming processes. The highly non-linear stress field typically encountered in the Friction Stir Welding (FSW) process is used as an example to show the performance of this new FE technology. The numerical simulation of the FSW process is tackled by means of an Arbitrary-Lagrangian-Eulerian (ALE) formulation. The computational domain is split into three different zones: the workpiece (defined by a rigid viscoplastic behaviour in the Eulerian framework), the pin (within the Lagrangian framework) and finally the stir zone (ALE formulation). A fully coupled thermo-mechanical analysis is introduced showing the heat fluxes generated by the plastic dissipation in the stir zone (Sheppard rigid-viscoplastic constitutive model) as well as the frictional dissipation at the contact interface (Norton frictional contact model). Finally, tracers have been implemented to show the material flow around the pin, allowing a better understanding of the welding mechanism. Numerical results are compared with experimental evidence.

  14. Unified treatment for accurate and fast evaluation of the Fermi-Dirac functions

    NASA Astrophysics Data System (ADS)

    Guseinov, I. I.; Mamedov, B. A.

    2010-05-01

    A new analytical approach to the computation of the Fermi-Dirac (FD) functions is presented, suggested by previous experience with various algorithms. Using the binomial expansion theorem, these functions are expressed through binomial coefficients and the familiar incomplete Gamma functions. This simplification, together with storing the binomial coefficients in computer memory, may extend the limits to large arguments and yield speedier calculation, should such limits be required in practice. Some numerical results are presented for significant mapping examples and briefly discussed.
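    The paper's binomial-coefficient/incomplete-Gamma formulas are not reproduced here, but the FD integrals themselves are easy to cross-check: for η ≤ 0 the standard non-degenerate series F_k(η) = Γ(k+1) Σ_{n≥1} (-1)^{n+1} e^{nη} / n^{k+1} must agree with direct quadrature of the defining integral. A minimal sketch:

```python
import math

def fd_series(k, eta, terms=60):
    """Non-degenerate (eta <= 0) series for the Fermi-Dirac integral
    F_k(eta) = Gamma(k+1) * sum_{n>=1} (-1)^(n+1) exp(n*eta) / n^(k+1)."""
    assert eta <= 0.0
    s = sum((-1) ** (n + 1) * math.exp(n * eta) / n ** (k + 1)
            for n in range(1, terms + 1))
    return math.gamma(k + 1) * s

def fd_quad(k, eta, upper=60.0, n=200_000):
    """Brute-force trapezoid quadrature of the defining integral
    F_k(eta) = Int_0^inf t^k / (exp(t - eta) + 1) dt, as a cross-check."""
    h = upper / n
    total = 0.0
    for i in range(1, n):
        t = i * h
        total += t ** k / (math.exp(t - eta) + 1.0)
    # t = 0 endpoint vanishes for k > 0; add half-weight upper endpoint
    return h * (total + 0.5 * upper ** k / (math.exp(upper - eta) + 1.0))

print(fd_series(0.5, -1.0), fd_quad(0.5, -1.0))   # the two should agree closely
```

This is a correctness baseline only; the analytic route described in the abstract is what makes the evaluation fast for large arguments.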

  15. EEMD based pitch evaluation method for accurate grating measurement by AFM

    NASA Astrophysics Data System (ADS)

    Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde

    2016-09-01

    The pitch measurement and AFM calibration precision are significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method to relieve the accuracy deterioration caused by high- and low-frequency components of the scanning profile during pitch evaluation. The simulation analysis shows that applying EEMD can improve the pitch accuracy of the FFT-FT algorithm; the pitch error is small when the iteration number of the FFT-FT algorithm is 8. The AFM measurement of a 500 nm-pitch one-dimensional grating shows that the EEMD-based pitch evaluation method could improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity-center algorithm when particles and impression marks are distributed on the sample surface. The measurements indicate that the nonlinearity was stable, and that the nonlinearity of the x axis and of forward scanning was much smaller than that of their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs was demonstrated, and a pitch uncertainty in the sub-nanometer range was achieved. The pitch uncertainty was reduced by about 10% by EEMD.
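    As a baseline for the Fourier pitch evaluation discussed here, the sketch below estimates pitch as the dominant FFT period of a synthetic 500 nm grating profile. It deliberately omits the EEMD preprocessing and the iterative FFT-FT refinement of the paper; the sampling parameters are invented.

```python
import numpy as np

def pitch_fft(profile, dx):
    """Estimate grating pitch as the period of the dominant FFT component
    (simplified sketch; real profiles need the low/high-frequency components
    removed first, which is what EEMD provides)."""
    mags = np.abs(np.fft.rfft(profile - profile.mean()))
    k = 1 + int(np.argmax(mags[1:]))     # skip the DC bin
    return len(profile) * dx / k

x = np.arange(10_000) * 1.0              # 1 nm sampling over 10 um
profile = np.cos(2 * np.pi * x / 500.0)  # ideal 500 nm-pitch grating
print(pitch_fft(profile, 1.0))           # → 500.0
```

On this ideal profile the peak falls exactly on an FFT bin; on measured profiles the bin quantisation is what the iterative FFT-FT refinement corrects.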

  16. Efficient and accurate evaluation of potential energy matrix elements for quantum dynamics using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Alborzpour, Jonathan P.; Tew, David P.; Habershon, Scott

    2016-11-01

    Solution of the time-dependent Schrödinger equation using a linear combination of basis functions, such as Gaussian wavepackets (GWPs), requires costly evaluation of integrals over the entire potential energy surface (PES) of the system. The standard approach, motivated by computational tractability for direct dynamics, is to approximate the PES with a second order Taylor expansion, for example centred at each GWP. In this article, we propose an alternative method for approximating PES matrix elements based on PES interpolation using Gaussian process regression (GPR). Our GPR scheme requires only single-point evaluations of the PES at a limited number of configurations in each time-step; the necessity of performing often-expensive evaluations of the Hessian matrix is completely avoided. In applications to 2-, 5-, and 10-dimensional benchmark models describing a tunnelling coordinate coupled non-linearly to a set of harmonic oscillators, we find that our GPR method results in PES matrix elements for which the average error is, in the best case, two orders-of-magnitude smaller and, in the worst case, directly comparable to that determined by any other Taylor expansion method, without requiring additional PES evaluations or Hessian matrices. Given the computational simplicity of GPR, as well as the opportunities for further refinement of the procedure highlighted herein, we argue that our GPR methodology should replace methods for evaluating PES matrix elements using Taylor expansions in quantum dynamics simulations.
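    A minimal one-dimensional GPR interpolation with a squared-exponential kernel conveys the core idea of predicting a PES from single-point evaluations; the model potential, kernel length scale and training grid below are invented, and this is far simpler than the matrix-element scheme of the paper.

```python
import numpy as np

def gpr_fit(X, y, length=0.5, noise=1e-10):
    """Gaussian process regression with a squared-exponential kernel.
    Returns a predictor for the posterior mean at a scalar query point."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / length) ** 2)
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
    return lambda x: np.exp(-0.5 * ((x - X) / length) ** 2) @ alpha

# Hypothetical 1D model potential, sampled at 9 single-point evaluations
V = lambda q: 0.5 * q ** 2 + 0.1 * q ** 3
train = np.linspace(-1.5, 1.5, 9)
predict = gpr_fit(train, V(train))
# predict(0.3) closely matches V(0.3) without any Hessian information
```

The key point mirrored from the abstract: the fit consumes only potential values, never gradients or Hessians, yet yields a globally smooth surrogate that can be integrated analytically against Gaussian basis functions.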

  17. Fast and accurate simulations of diffusion-weighted MRI signals for the evaluation of acquisition sequences

    NASA Astrophysics Data System (ADS)

    Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime

    2016-03-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as a Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
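    The analytic/Monte Carlo cross-validation underlying such hybrid simulators can be illustrated in the simplest setting, free diffusion, where a narrow-pulse random-walk Monte Carlo signal must reproduce the analytic attenuation S = exp(-bD). All parameter values below are illustrative, not taken from the paper.

```python
import math
import random

def mc_signal(b, D, T=0.05, steps=100, walkers=10_000, seed=1):
    """Narrow-pulse PGSE signal for free 1D diffusion by random-walk Monte
    Carlo; the analytic answer is S = exp(-b * D) (b in s/mm^2, D in mm^2/s)."""
    rng = random.Random(seed)
    dt = T / steps
    sigma = math.sqrt(2.0 * D * dt)   # step size of the random walk
    q = math.sqrt(b / T)              # gradient wavenumber with b = q^2 T
    acc = 0.0
    for _ in range(walkers):
        x = 0.0
        for _ in range(steps):
            x += rng.gauss(0.0, sigma)
        acc += math.cos(q * x)        # dephasing of this spin
    return acc / walkers

print(mc_signal(1000.0, 2.0e-3), math.exp(-2.0))   # MC estimate vs analytic
```

In restricted geometries (spheres, cylinders) the analytic expressions replace this MC machinery entirely, which is where the reported 90% runtime saving comes from.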

  18. Melt-rock reaction in the asthenospheric mantle: Perspectives from high-order accurate numerical simulations in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.

    2013-12-01

    The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and the unstable wave regime where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales

  19. Evaluating de novo sequencing in proteomics: already an accurate alternative to database-driven peptide identification?

    PubMed

    Muth, Thilo; Renard, Bernhard Y

    2017-03-21

    While peptide identifications in mass spectrometry (MS)-based shotgun proteomics are mostly obtained using database search methods, high-resolution spectrum data from modern MS instruments nowadays offer the prospect of improving the performance of computational de novo peptide sequencing. The major benefit of de novo sequencing is that it does not require a reference database to deduce full-length or partial tag-based peptide sequences directly from experimental tandem mass spectrometry spectra. Although various algorithms have been developed for automated de novo sequencing, the prediction accuracy of proposed solutions has rarely been evaluated in independent benchmarking studies. The main objective of this work is to provide a detailed evaluation of the performance of de novo sequencing algorithms on high-resolution data. For this purpose, we processed four experimental data sets acquired from different instrument types in collision-induced dissociation and higher-energy collisional dissociation (HCD) fragmentation modes using the software packages Novor, PEAKS and PepNovo. Moreover, the accuracy of these algorithms is also tested on ground truth data based on simulated spectra generated from peak intensity prediction software. We found that Novor shows the overall best performance compared with PEAKS and PepNovo with respect to the accuracy of correct full-peptide, tag-based and single-residue predictions. In addition, the same tool outpaced the commercial competitor PEAKS in running time, with speedups by factors of around 12-17. Despite around 35% prediction accuracy for complete peptide sequences on HCD data sets, taken as a whole, the evaluated algorithms perform moderately on experimental data but show a significantly better performance on simulated data (up to 84% accuracy). Further, we describe the most frequently occurring de novo sequencing errors and evaluate the influence of missing fragment ion peaks and spectral noise on the accuracy.
Finally

  20. Comparison of numerical techniques for the evaluation of the Doppler broadening functions psi(x,theta) and chi(x,theta)

    NASA Technical Reports Server (NTRS)

    Canright, R. B., Jr.; Semler, T. T.

    1972-01-01

    Several approximations to the Doppler broadening functions psi(x, theta) and chi(x, theta) are compared with respect to accuracy and speed of evaluation. A technique due to A. M. Turing (1943) is shown to be at least as accurate as direct numerical quadrature and somewhat faster than Gaussian quadrature. FORTRAN 4 listings are included.
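    For reference, psi(x, theta) can always be cross-checked by brute-force quadrature of its defining Gaussian-Lorentzian convolution; this is the slow baseline against which faster techniques like the one above are compared. The grid parameters below are arbitrary choices.

```python
import math

def psi(x, theta, half_width=12.0, n=4000):
    """Doppler-broadened line-shape function
    psi(x, theta) = sqrt(theta/(4 pi)) * Int exp(-theta (x-y)^2 / 4) / (1 + y^2) dy,
    evaluated by trapezoid quadrature over the Gaussian kernel's support."""
    w = half_width * math.sqrt(2.0 / theta)   # x +/- many Gaussian widths
    h = 2.0 * w / n
    total = 0.0
    for i in range(n + 1):
        y = x - w + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.exp(-theta * (x - y) ** 2 / 4.0) / (1.0 + y * y)
    return math.sqrt(theta / (4.0 * math.pi)) * h * total

# In the large-theta limit the Gaussian collapses and psi(x) -> 1/(1 + x^2)
print(psi(1.0, 1.0e4))   # close to 0.5
```

The same quadrature with an extra factor 2y/(1+y²) in the integrand gives chi(x, theta).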

  1. Accurate evaluation of viscoelasticity of radial artery wall during flow-mediated dilation in ultrasound measurement

    NASA Astrophysics Data System (ADS)

    Sakai, Yasumasa; Taki, Hirofumi; Kanai, Hiroshi

    2016-07-01

    In our previous study, the viscoelasticity of the radial artery wall was estimated to diagnose endothelial dysfunction using a high-frequency (22 MHz) ultrasound device. In the present study, we employed a commercial ultrasound device (7.5 MHz) and estimated the viscoelasticity using arterial pressure and diameter, both measured at the same position. In a phantom experiment, the proposed method estimated the elasticity and viscosity of the phantom with errors of 1.8 and 30.3%, respectively. In an in vivo measurement, the transient change in the viscoelasticity was measured for three healthy subjects during flow-mediated dilation (FMD). The proposed method revealed the softening of the arterial wall originating from the FMD reaction within 100 s after avascularization. These results indicate the high performance of the proposed method in evaluating vascular endothelial function just after avascularization, where the function is difficult to estimate by conventional FMD measurement.
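    A hedged sketch of how elasticity and viscosity can be separated from simultaneous pressure and strain (diameter) waveforms using a Kelvin-Voigt model: the stress is regressed on strain and strain rate. The model form, parameter values and waveform are synthetic; the paper's actual estimator may differ.

```python
import numpy as np

def fit_kelvin_voigt(pressure, strain, dt):
    """Least-squares fit of p(t) ~ E*eps(t) + eta*deps/dt + p0
    (generic Kelvin-Voigt sketch, not the paper's exact estimator)."""
    deps = np.gradient(strain, dt)
    A = np.column_stack([strain, deps, np.ones_like(strain)])
    (E, eta, p0), *_ = np.linalg.lstsq(A, pressure, rcond=None)
    return E, eta

# Synthetic waveform with known E = 120 kPa and eta = 2 kPa*s
t = np.linspace(0.0, 1.0, 500)
dt = t[1] - t[0]
eps = 0.05 + 0.02 * np.sin(2 * np.pi * t)
p = 120e3 * eps + 2e3 * np.gradient(eps, dt) + 1e4
E, eta = fit_kelvin_voigt(p, eps, dt)
print(E, eta)   # recovers ~120e3 and ~2e3
```

Tracking E and eta in a sliding window over the FMD time course is one simple way to expose the transient wall softening described above.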

  2. Evaluation of a low-cost and accurate ocean temperature logger on subsurface mooring systems

    SciTech Connect

    Tian, Chuan; Deng, Zhiqun; Lu, Jun; Xu, Xiaoyang; Zhao, Wei; Xu, Ming

    2014-06-23

    Monitoring seawater temperature is important to understanding evolving ocean processes. To monitor internal waves or ocean mixing, a large number of temperature loggers are typically mounted on subsurface mooring systems to obtain high-resolution temperature data at different water depths. In this study, we redesigned and evaluated a compact, low-cost, self-contained, high-resolution and high-accuracy ocean temperature logger, TC-1121. The newly designed TC-1121 loggers are smaller, more robust, and their sampling intervals can be automatically changed by indicated events. They have been widely used in many mooring systems to study internal wave and ocean mixing. The logger’s fundamental design, noise analysis, calibration, drift test, and a long-term sea trial are discussed in this paper.

  3. Congenital spinal dermal tract: how accurate is clinical and radiological evaluation?

    PubMed

    Tisdall, Martin M; Hayward, Richard D; Thompson, Dominic N P

    2015-06-01

    OBJECT A dermal sinus tract is a common form of occult spinal dysraphism. The presumed etiology relates to a focal failure of disjunction resulting in a persistent adhesion between the neural and cutaneous ectoderm. Clinical and radiological features can appear innocuous, leading to delayed diagnosis and failure to appreciate the implications or extent of the abnormality. If it is left untreated, complications can include meningitis, spinal abscess, and inclusion cyst formation. The authors present their experience in 74 pediatric cases of spinal dermal tract in an attempt to identify which clinical and radiological factors are associated with an infective presentation and to assess the reliability of MRI in evaluating this entity. METHODS Consecutive cases of spinal dermal tract treated with resection between 1998 and 2010 were identified from the departmental surgical database. Demographics, clinical history, and radiological and operative findings were collected from the patient records. The presence or absence of active infection (abscess, meningitis) at the time of neurosurgical presentation and any history of local sinus discharge or infection was assessed. Magnetic resonance images were reviewed to evaluate the extent of the sinus tract and determine the presence of an inclusion cyst. Radiological and operative findings were compared. RESULTS The surgical course was uncomplicated in 90% of 74 cases eligible for analysis. Magnetic resonance imaging underreported the presence of both an intradural tract (MRI 46%, operative finding 86%) and an intraspinal inclusion cyst (MRI 15%, operative finding 24%). A history of sinus discharge (OR 12.8, p = 0.0003) and the intraoperative identification of intraspinal inclusion cysts (OR 5.6, p = 0.023) were associated with an infective presentation. There was no significant association between the presence of an intradural tract discovered at surgery and an infective presentation. 
CONCLUSIONS Surgery for the treatment of

  4. A Compilation Strategy for Numerical Programs Based on Partial Evaluation

    DTIC Science & Technology

    1989-07-01

    advance. For example, since Pluto is very small relative to the other planets, its mass was approximated as zero in the compile-time data structures... Laboratory. [Sussman] G.J. Sussman and J. Wisdom, "Numerical evidence that the motion of Pluto is chaotic". In Science, Volume 241, 22 July 1988. [WU 87] ... 30.15522934 1.657000860 1.437858110) (3-vector -.009619598984 -.1150657040 -.04688875226))) (define pluto (make-rectangular-heliocentric 'pluto 0 (3-vector

  5. The development and evaluation of numerical algorithms for MIMD computers

    NASA Technical Reports Server (NTRS)

    Voigt, Robert G.

    1990-01-01

    Two activities were pursued under this grant. The first was a visitor program to conduct research on numerical algorithms for MIMD computers. The program is summarized in the following attachments: Attachment A - List of Researchers Supported; Attachment B - List of Reports Completed; and Attachment C - Reports. The second activity was a workshop on the Control of Fluid Dynamic Systems held on March 28-29, 1989. The workshop is summarized in the following attachments: Attachment D - Workshop Summary; and Attachment E - List of Workshop Participants.

  6. How Accurately Can Older Adults Evaluate the Quality of Their Text Recall? The Effect of Providing Standards on Judgment Accuracy

    PubMed Central

    Baker, Julie; Dunlosky, John; Hertzog, Christopher

    2010-01-01

    Adults have difficulty accurately judging how well they have learned text materials; unfortunately, such low levels of accuracy may obscure age-related deficits. Higher levels of accuracy have been obtained when younger adults make postdictions about which test questions they answered correctly. Accordingly, we focus on the accuracy of postdictive judgments to evaluate whether age deficits would emerge with higher levels of accuracy and whether people’s postdictive accuracy would benefit from providing an appropriate standard of evaluation. Participants read texts with definitions embedded in them, attempted to recall each definition, and then made a postdictive judgment about the quality of their recall. When making these judgments, participants either received no standard or were presented the correct definition as a standard for evaluation. Age-related equivalence was found in the relative accuracy of these term-specific judgments, and older adults’ absolute accuracy benefited from providing standards to the same degree as did younger adults. PMID:20126418

  7. Pitfalls and guidelines for the numerical evaluation of moderate-order system frequency response

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1981-01-01

    The design and evaluation of a feedback control system via frequency response methods relies heavily upon numerical methods. In application, one can usually develop low-order simulation models which for the most part are devoid of numerical problems. However, when complex feedback interactions, for example, between instrument control systems and their flexible mounting structure, must be evaluated, simulation models become moderate to large order and numerical problems become common. A large body of relevant numerical error analysis literature is summarized in language understandable to nonspecialists. The intent is to provide engineers using simulation models with an engineering feel for potential numerical problems without getting intertwined in the complexities of the associated mathematical theory. Guidelines are also provided by suggesting alternate state-of-the-art methods which have good numerical evaluation characteristics.
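    One numerically robust pattern such guidelines point toward is evaluating the frequency response directly from the state-space matrices, solving a linear system at each frequency rather than forming transfer-function polynomials or explicit inverses. A minimal sketch (the two-state model is invented for illustration):

```python
import numpy as np

def freq_response(A, B, C, omegas):
    """Evaluate H(jw) = C (jw I - A)^{-1} B by solving one linear system per
    frequency; avoids the numerically fragile polynomial transfer-function
    route for moderate-order models."""
    n = A.shape[0]
    I = np.eye(n)
    return np.array([C @ np.linalg.solve(1j * w * I - A, B) for w in omegas])

# Two-state example: H(s) = 1 / (s^2 + 0.2 s + 1), lightly damped
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
H = freq_response(A, B, C, [1.0])
print(abs(H[0]))   # → 5.0, the resonant gain 1/0.2 at w = 1
```

For large, poorly scaled models the same structure accepts balancing or a Hessenberg reduction of A, which is where the numerical-conditioning guidelines matter most.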

  8. Computer numeric control subaperture aspheric surface polishing-microroughness evaluation

    NASA Astrophysics Data System (ADS)

    Prochaska, Frantisek; Polak, Jaroslav; Matousek, Ondrej; Tomka, David

    2014-09-01

    The aim of this work was to investigate the surface microroughness and shape accuracy achieved on an aspheric lens by subaperture computer numeric control (CNC) polishing. Different optical substrates (OHARA S-LAH58, SF4, ZERODUR) were polished using a POLITEX™ polishing pad, synthetic pitch, and natural optical pitch. Surface roughness was measured with a light interferometer. The best results were achieved on the S-LAH58 glass and the ZERODUR using natural optical pitch. In the case of SF4 glass, the natural optical pitch showed a tendency to scratch the surface. Experiments also indicated a problem of surface form deterioration when using natural optical pitch, regardless of the type of optical material.

  9. Lift capability prediction for helicopter rotor blade-numerical evaluation

    NASA Astrophysics Data System (ADS)

    Rotaru, Constantin; Cîrciu, Ionicǎ; Luculescu, Doru

    2016-06-01

    The main objective of this paper is to describe the key physical features for modelling the unsteady aerodynamic effects found on a helicopter rotor blade operating under nominally attached flow conditions away from stall. The unsteady effects were considered as phase differences between the forcing function and the aerodynamic response, being functions of the reduced frequency, the Mach number and the mode of forcing. For a helicopter rotor, the reduced frequency at any blade element cannot be calculated exactly, but a first-order approximation gives useful information about the degree of unsteadiness. The sources of unsteady effects were decomposed into perturbations to the local angle of attack and velocity field. The numerical calculations and graphics were performed in the FLUENT and MAPLE software environments. This mathematical model is applicable to the aerodynamic design of wind turbine rotor blades, hybrid energy system optimization and aeroelastic analysis.
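    The first-order approximation mentioned above is commonly written k = ωc/(2V), with the local blade-element velocity taken as Ωr in hover. The rotor numbers below are hypothetical, chosen only to show how k varies along the span.

```python
def reduced_frequency(omega, chord, V):
    """First-order reduced frequency k = omega * c / (2 V); k below ~0.05 is
    commonly treated as quasi-steady, larger k as increasingly unsteady."""
    return omega * chord / (2.0 * V)

# Hypothetical rotor: 1/rev forcing at Omega = 27 rad/s, chord 0.5 m, R = 8 m
Omega, chord, R = 27.0, 0.5, 8.0
for r in (0.25 * R, 0.5 * R, 0.75 * R, R):
    k = reduced_frequency(Omega, chord, Omega * r)   # local velocity ~ Omega*r
    print(f"r = {r:4.1f} m   k = {k:.4f}")
```

For 1/rev forcing k reduces to c/(2r), so the inboard sections are always the most unsteady, consistent with the spanwise variation described in the abstract.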

  10. High Specificity in Circulating Tumor Cell Identification Is Required for Accurate Evaluation of Programmed Death-Ligand 1

    PubMed Central

    Schultz, Zachery D.; Warrick, Jay W.; Guckenberger, David J.; Pezzi, Hannah M.; Sperger, Jamie M.; Heninger, Erika; Saeed, Anwaar; Leal, Ticiana; Mattox, Kara; Traynor, Anne M.; Campbell, Toby C.; Berry, Scott M.; Beebe, David J.; Lang, Joshua M.

    2016-01-01

    Background Expression of programmed-death ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may be a less invasive method to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs from the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM) positive and EpCAM negative CTCs (as in epithelial-mesenchymal transition (EMT)). Methods To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or Vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Results Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies including cytokeratin. The amount of CD11b+ cells misidentified as CTCs varied among patients; accounting for 33–100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples. 
Conclusions Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria

  11. Analytical solutions of moisture flow equations and their numerical evaluation

    SciTech Connect

    Gibbs, A.G.

    1981-04-01

    The role of analytical solutions of idealized moisture flow problems is discussed. Some different formulations of the moisture flow problem are reviewed. A number of different analytical solutions are summarized, including the case of idealized coupled moisture and heat flow. The evaluation of special functions which commonly arise in analytical solutions is discussed, including some pitfalls in the evaluation of expressions involving combinations of special functions. Finally, perturbation theory methods are summarized which can be used to obtain good approximate analytical solutions to problems which are too complicated to solve exactly, but which are close to an analytically solvable problem.
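    A classic instance of the pitfall mentioned: analytic moisture-flow solutions often contain the combination exp(x²)·erfc(x), which overflows when evaluated naively even though the combination itself is O(1/x). An asymptotic evaluation of the scaled product avoids this; the sketch below is illustrative, not taken from the report.

```python
import math

def exp_erfc_naive(x):
    """Naive exp(x^2) * erfc(x): overflows for large x even though the
    product itself decays like 1/(x sqrt(pi))."""
    return math.exp(x * x) * math.erfc(x)

def exp_erfc_stable(x, terms=10):
    """Asymptotic expansion of exp(x^2)*erfc(x) for large x:
    ~ 1/(x sqrt(pi)) * (1 - 1/(2 x^2) + 3/(2 x^2)^2 * ... ), truncated."""
    s, term = 1.0, 1.0
    for n in range(1, terms):
        term *= -(2 * n - 1) / (2.0 * x * x)   # next asymptotic term
        s += term
    return s / (x * math.sqrt(math.pi))

print(exp_erfc_stable(30.0))   # finite and accurate; the naive form overflows
```

Production codes typically use a scaled complementary error function (often called erfcx) for exactly this reason.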

  12. Evaluation and purchase of confocal microscopes: Numerous factors to consider

    EPA Science Inventory

    The purchase of a confocal microscope can be a complex and difficult decision for an individual scientist, group or evaluation committee. This is true even for scientists that have used confocal technology for many years. The task of reaching the optimal decision becomes almost i...

  13. Numerical evaluation of lateral diffusion inside diffusive gradients in thin films samplers.

    PubMed

    Santner, Jakob; Kreuzeder, Andreas; Schnepf, Andrea; Wenzel, Walter W

    2015-05-19

    Using numerical simulation of diffusion inside diffusive gradients in thin films (DGT) samplers, we show that the effect of lateral diffusion inside the sampler on the solute flux into the sampler is a nonlinear function of the diffusion layer thickness and the physical sampling window size. In contrast, earlier work concluded that this effect was constant irrespective of parameters of the sampler geometry. The flux increase caused by lateral diffusion inside the sampler was determined to be ∼8.8% for standard samplers, which is considerably lower than the previous estimate of ∼20%. Lateral diffusion is also propagated to the diffusive boundary layer (DBL), where it leads to a slightly stronger decrease in the mass uptake than suggested by the common 1D diffusion model that is applied for evaluating DGT results. We introduce a simple correction procedure for lateral diffusion and demonstrate how the effect of lateral diffusion on diffusion in the DBL can be accounted for. These corrections often result in better estimates of the DBL thickness (δ) and the DGT-measured concentration than earlier approaches and will contribute to more accurate concentration measurements in solute monitoring in waters.
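    In the standard 1D model referred to above, the DGT-measured concentration is C = M·Δg/(D·A·t); the lateral-diffusion correction can be folded in as a flux-enhancement factor. The sketch below uses the ~8.8% figure for a standard sampler and invented measurement values; the full correction in the paper is geometry-dependent.

```python
def dgt_concentration(M, dg, D, A, t, lateral_factor=1.088):
    """Standard 1D DGT equation C = M * dg / (D * A * t), dividing out the
    ~8.8% lateral-diffusion flux enhancement of a standard sampler (the exact
    factor is geometry-dependent). Consistent units assumed: M in ng, dg in
    cm, D in cm^2/s, A in cm^2, t in s, giving C in ng/cm^3."""
    return M * dg / (D * A * t * lateral_factor)

# Invented example: 100 ng accumulated over a 24 h deployment
C = dgt_concentration(M=100.0, dg=0.094, D=6.0e-6, A=3.14, t=86400.0)
print(C)   # ~8.8% below the uncorrected 1D estimate
```

Ignoring the factor (lateral_factor=1.0) reproduces the conventional 1D result, which overestimates the concentration by the same ~8.8%.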

  15. Non-destructive evaluation of the cladding thickness in LEU fuel plates by accurate ultrasonic scanning technique

    SciTech Connect

    Borring, J.; Gundtoft, H.E.; Borum, K.K.; Toft, P.

    1997-08-01

    In an effort to improve their ultrasonic scanning technique for accurate determination of the cladding thickness in LEU fuel plates, new equipment and modifications to the existing hardware and software have been tested and evaluated. The authors are now able to measure an aluminium thickness down to 0.25 mm instead of the previous 0.35 mm. Furthermore, they have shown how the measuring sensitivity can be improved from 0.03 mm to 0.01 mm. It has now become possible to check their standard fuel plates for DR3 against the minimum cladding thickness requirements non-destructively. Such measurements open the possibility for the acceptance of a thinner nominal cladding than normally used today.
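The underlying measurement is a pulse-echo time-of-flight conversion; a sketch assuming a simple longitudinal-wave round trip (6320 m/s is a textbook sound velocity for aluminium, not the calibrated value from this study):

```python
def cladding_thickness_mm(time_of_flight_us, velocity_m_s=6320.0):
    """Pulse-echo thickness: the ultrasonic pulse traverses the cladding
    layer twice (down and back), so d = v * t / 2.

    time_of_flight_us: echo delay in microseconds.
    Returns thickness in millimetres.
    """
    return velocity_m_s * (time_of_flight_us * 1e-6) / 2.0 * 1000.0
```

Resolving 0.25 mm of aluminium thus requires timing echoes about 0.08 microseconds apart, which is why the hardware and software upgrades matter.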

  16. Evaluation of a Second-Order Accurate Navier-Stokes Code for Detached Eddy Simulation Past a Circular Cylinder

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Singer, Bart A.

    2003-01-01

    We evaluate the applicability of a production computational fluid dynamics code for conducting detached eddy simulation for unsteady flows. A second-order accurate Navier-Stokes code developed at NASA Langley Research Center, known as TLNS3D, is used for these simulations. We focus our attention on high Reynolds number flow (Re = 5 × 10^4 to 1.4 × 10^5) past a circular cylinder to simulate flows with large-scale separations. We consider two types of flow situations: one in which the flow at the separation point is laminar, and the other in which the flow is already turbulent when it detaches from the surface of the cylinder. Solutions are presented for two- and three-dimensional calculations using both the unsteady Reynolds-averaged Navier-Stokes paradigm and the detached eddy simulation treatment. All calculations use the standard Spalart-Allmaras turbulence model as the base model.
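Detached eddy simulation with a Spalart-Allmaras base model typically replaces the wall distance in the destruction term with a hybrid length scale, switching between RANS near walls and LES away from them. A sketch of that switch, assuming the standard DES97 form and calibration constant (the abstract does not give TLNS3D's implementation details):

```python
def des_length_scale(wall_distance, cell_sizes, c_des=0.65):
    """DES97 length scale for the Spalart-Allmaras model:
    d_tilde = min(d_wall, C_DES * Delta), with Delta the largest cell
    dimension. Near the wall d_wall wins (RANS branch); in coarse
    far-field cells C_DES * Delta wins (LES branch). C_DES = 0.65 is
    the commonly quoted calibration, assumed here."""
    delta = max(cell_sizes)
    return min(wall_distance, c_des * delta)
```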

  17. Numerical evaluation of one-loop diagrams near exceptional momentum configurations

    SciTech Connect

    Walter T Giele; Giulia Zanderighi; E.W.N. Glover

    2004-07-06

    One problem which plagues the numerical evaluation of one-loop Feynman diagrams using recursive integration-by-parts relations is a numerical instability near exceptional momentum configurations. In this contribution we discuss a generic solution to this problem. As an example we consider the case of forward light-by-light scattering.
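The instability is of the catastrophic-cancellation kind: near an exceptional configuration a Gram-determinant-like denominator vanishes while the numerator cancels toward a finite limit, and fixed-precision arithmetic loses all significant digits. A generic toy illustration of the effect and of an algebraic rewrite that removes it (not the authors' actual solution, which reorganizes the loop integrals themselves):

```python
import math

def naive(x):
    # (1 - cos x) / x^2: as x -> 0 the numerator cancels catastrophically,
    # analogous to dividing by a vanishing Gram determinant
    return (1.0 - math.cos(x)) / x**2

def stable(x):
    # algebraically identical rewrite 2*sin^2(x/2)/x^2 avoids the
    # subtraction of nearly equal quantities
    s = math.sin(0.5 * x)
    return 2.0 * s * s / x**2

# exact limit as x -> 0 is 0.5; in double precision the naive form
# returns garbage for x ~ 1e-8 while the stable form keeps full accuracy
```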

  18. Numerical evaluation of single central jet for turbine disk cooling

    NASA Technical Reports Server (NTRS)

    Subbaraman, M. R.; Hadid, A. H.; Mcconnaughey, P. K.

    1992-01-01

    The cooling arrangement of the Space Shuttle Main Engine High Pressure Oxidizer Turbopump (HPOTP) incorporates two jet rings, each of which produces 19 high-velocity coolant jets. At some operating conditions, the frequency of excitation associated with the 19 jets coincides with the natural frequency of the turbine blades, contributing to fatigue cracking of blade shanks. In this paper, an alternate turbine disk cooling arrangement, applicable to disk faces of zero hub radius, is evaluated, which consists of a single coolant jet impinging at the center of the turbine disk. Results of the CFD analysis show that replacing the jet ring with a single central coolant jet in the HPOTP leads to an acceptable thermal environment at the disk rim. Based on the predictions of flow and temperature fields for operating conditions, the single central jet cooling system was recommended for implementation into the development program of the Technology Test Bed Engine at NASA Marshall Space Flight Center.
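The resonance concern follows from the forcing frequency: each blade passes all 19 jets once per revolution, so the excitation is 19 times the shaft frequency. A sketch with an illustrative shaft speed (the actual HPOTP speed and blade natural frequencies are not given in the abstract):

```python
def jet_excitation_hz(n_jets, shaft_rpm):
    """Jet-passing excitation frequency seen by a rotating blade:
    n_jets events per revolution times revolutions per second.
    Illustrative only; the HPOTP resonance analysis is more involved."""
    return n_jets * shaft_rpm / 60.0
```

A single central jet removes the 19-per-revolution forcing entirely, which is the motivation for the alternate cooling arrangement.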

  19. Evaluation of the diurnal variation of near-surface temperature and winds from WRF numerical simulations over complex terrain

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Pace, C.; Pu, Z.

    2011-12-01

    Near-surface atmospheric conditions, especially temperature and winds, are characterized by their diurnal variations. Accurate representation and forecasting of these diurnal variations are essential components of numerical modeling and weather prediction. However, it is commonly challenging to accurately simulate and predict diurnal variations of near-surface atmospheric conditions over complex terrain, especially over mountainous areas. In this study we evaluate the diurnal variation of near-surface temperature and winds in numerical simulations generated by the mesoscale community Weather Research and Forecasting (WRF) model. The model-simulated surface temperature at 2-meter height and winds at 10-meter height are compared with surface mesonet observations in several different weather scenarios (winter inversion, cold front, low-level jet, etc.) over the Intermountain West of the US. Preliminary results show large discrepancies between model-generated diurnal variations and observations in some cases. The mechanisms and causes of these differences are further investigated. Implications of these results for model improvement and data assimilation are also discussed.
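A common way to quantify such model-observation discrepancies is an hourly composite of model-minus-observation errors over many days. A minimal sketch (variable names are illustrative, not from the study):

```python
def diurnal_bias(hours, model, obs):
    """Mean (model - obs) error for each local hour of day 0..23.

    hours: local hour of each paired sample; model/obs: paired values
    (e.g. 2-m temperature). Returns a 24-element list; hours with no
    samples yield None."""
    sums = [0.0] * 24
    counts = [0] * 24
    for h, m, o in zip(hours, model, obs):
        sums[h] += m - o
        counts[h] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]
```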

  20. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of ''consensus scoring'', i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
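The reversed-sequence strategy can be sketched as a simple target/decoy FP-rate estimate, under the usual assumption that each decoy hit above threshold approximates one false target hit (a simplification of the validation actually performed in the study):

```python
def decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimated false-positive rate at a score threshold from a
    reversed-sequence (decoy) search: the number of decoys passing
    approximates the number of false targets passing, so
    FP rate ~ decoys_passing / targets_passing."""
    t = sum(1 for s in target_scores if s >= threshold)
    d = sum(1 for s in decoy_scores if s >= threshold)
    return d / t if t else 0.0
```

Raising the threshold until this estimate drops below the desired rate gives the per-dataset cutoff the authors recommend over fixed score thresholds.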

  1. Evaluation of a pan-serotype point-of-care rapid diagnostic assay for accurate detection of acute dengue infection.

    PubMed

    Vivek, Rosario; Ahamed, Syed Fazil; Kotabagi, Shalini; Chandele, Anmol; Khanna, Ira; Khanna, Navin; Nayak, Kaustuv; Dias, Mary; Kaja, Murali-Krishna; Shet, Anita

    2017-03-01

    The catastrophic rise in dengue infections in India and globally has created a need for an accurate, validated low-cost rapid diagnostic test (RDT) for dengue. We prospectively evaluated the diagnostic performance of NS1/IgM RDT (dengue day 1) using 211 samples from a pediatric dengue cohort representing all 4 serotypes in southern India. The dengue-positive panel consisted of 179 dengue real-time polymerase chain reaction (RT-PCR) positive samples from symptomatic children. The dengue-negative panel consisted of 32 samples from dengue-negative febrile children and asymptomatic individuals that were negative for dengue RT-PCR/NS1 enzyme-linked immunosorbent assay/IgM/IgG. NS1/IgM RDT sensitivity was 89.4% and specificity was 93.8%. The NS1/IgM RDT showed high sensitivity throughout the acute phase of illness, in primary and secondary infections, in different severity groups, and detected all 4 dengue serotypes, including coinfections. This NS1/IgM RDT is a useful point-of-care assay for rapid and reliable diagnosis of acute dengue and an excellent surveillance tool in our battle against dengue.
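The reported figures follow from the usual confusion-matrix definitions; a sketch that roughly reproduces them (the exact TP/FN/TN/FP counts below are inferred for illustration from the panel sizes, not taken from the paper):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Diagnostic performance from confusion counts:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 179 RT-PCR-positive and 32 negative samples; 160/179 ~ 89.4% and
# 30/32 ~ 93.8% are consistent with the reported performance
sens, spec = sensitivity_specificity(tp=160, fn=19, tn=30, fp=2)
```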

  2. Evaluating the use of high-resolution numerical weather forecast for debris flow prediction.

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Efthymios I.; Bartsotas, Nikolaos S.; Borga, Marco; Kallos, George

    2015-04-01

    The sudden onset and high destructive power of debris flows pose a significant threat to human life and infrastructure. Therefore, developing early warning procedures for the mitigation of debris flow risk is of great economic and societal importance. Given that rainfall is the predominant factor controlling debris flow triggering, it is indisputable that development of effective debris flow warning procedures requires accurate knowledge of the properties (e.g. duration, intensity) of the triggering rainfall. Moreover, efficient and timely response of emergency operations depends highly on the lead-time provided by the warning systems. Currently, the majority of early warning systems for debris flows are based on nowcasting procedures. While the latter may be successful in predicting the hazard, they provide warnings with a relatively short lead-time (~6h). Increasing the lead-time is necessary in order to improve the pre-incident operations and communication of the emergency, thus coupling warning systems with weather forecasting is essential for advancing early warning procedures. In this work we evaluate the potential of using high-resolution (1km) rainfall fields forecasted with a state-of-the-art numerical weather prediction model (RAMS/ICLAMS) to predict the occurrence of debris flows. Analysis is focused on the Upper Adige region, Northeast Italy, an area where debris flows are frequent. Seven storm events that generated a large number (>80) of debris flows during the period 2007-2012 are analyzed. Radar-based rainfall estimates, available from the operational C-band radar located at Mt Macaion, are used as the reference to evaluate the forecasted rainfall fields. Evaluation is mainly focused on assessing the error in forecasted rainfall properties (magnitude, duration) and the correlation in space and time with the reference field. Results show that the forecasted rainfall fields captured very well the magnitude and
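The kind of forecast-versus-radar evaluation described can be sketched with two summary statistics over paired rainfall accumulations (illustrative only, not the study's verification suite):

```python
def bias_and_correlation(forecast, radar):
    """Mean bias (forecast - radar) and Pearson correlation between
    paired rainfall values, e.g. event accumulations at radar pixels."""
    n = len(forecast)
    mf = sum(forecast) / n
    mr = sum(radar) / n
    bias = mf - mr
    cov = sum((f - mf) * (r - mr) for f, r in zip(forecast, radar))
    vf = sum((f - mf) ** 2 for f in forecast)
    vr = sum((r - mr) ** 2 for r in radar)
    corr = cov / (vf * vr) ** 0.5
    return bias, corr
```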

  3. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure.

    PubMed

    vom Saal, Frederick S; Welshons, Wade V

    2014-12-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources.

  5. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    NASA Astrophysics Data System (ADS)

    Hrubý, Jan

    2012-04-01

    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem both concerning the physical concepts and the required computational power. Available accurate formulations of steam properties IAPWS-95 and IAPWS-IF97 require much computation time. For this reason, the modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of the mass, energy, and momentum conservation for both phases.
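The key property, continuity of p and T across piece boundaries, is exactly what independent per-cell Taylor expansions lack. A minimal illustration using piecewise-bilinear interpolation on a shared (rho, u) grid, which is continuous by construction (the paper's actual representation is higher-order and thermodynamically consistent; this only demonstrates the continuity idea):

```python
import bisect

def bilinear(table, rho_axis, u_axis, rho, u):
    """Continuous piecewise-bilinear lookup p(rho, u) on a rectangular
    grid. Because neighbouring cells share edge values, the interpolant
    matches across cell boundaries, unlike per-cell Taylor expansions."""
    i = min(max(bisect.bisect_right(rho_axis, rho) - 1, 0), len(rho_axis) - 2)
    j = min(max(bisect.bisect_right(u_axis, u) - 1, 0), len(u_axis) - 2)
    tr = (rho - rho_axis[i]) / (rho_axis[i + 1] - rho_axis[i])
    tu = (u - u_axis[j]) / (u_axis[j + 1] - u_axis[j])
    return ((1 - tr) * (1 - tu) * table[i][j]
            + tr * (1 - tu) * table[i + 1][j]
            + (1 - tr) * tu * table[i][j + 1]
            + tr * tu * table[i + 1][j + 1])
```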

  6. Description and Evaluation of Numerical Groundwater Flow Models for the Edwards Aquifer, South-Central Texas

    USGS Publications Warehouse

    Lindgren, Richard J.; Taylor, Charles J.; Houston, Natalie A.

    2009-01-01

    A substantial number of public water system wells in south-central Texas withdraw groundwater from the karstic, highly productive Edwards aquifer. However, the use of numerical groundwater flow models to aid in the delineation of contributing areas for public water system wells in the Edwards aquifer is problematic because of the complex hydrogeologic framework and the presence of conduit-dominated flow paths in the aquifer. The U.S. Geological Survey, in cooperation with the Texas Commission on Environmental Quality, evaluated six published numerical groundwater flow models (all deterministic) that have been developed for the Edwards aquifer San Antonio segment or Barton Springs segment, or both. This report describes the models developed and evaluates each with respect to accessibility and ease of use, range of conditions simulated, accuracy of simulations, agreement with dye-tracer tests, and limitations of the models. These models are (1) GWSIM model of the San Antonio segment, a FORTRAN computer-model code that pre-dates the development of MODFLOW; (2) MODFLOW conduit-flow model of San Antonio and Barton Springs segments; (3) MODFLOW diffuse-flow model of San Antonio and Barton Springs segments; (4) MODFLOW Groundwater Availability Modeling [GAM] model of the Barton Springs segment; (5) MODFLOW recalibrated GAM model of the Barton Springs segment; and (6) MODFLOW-DCM (dual conductivity model) conduit model of the Barton Springs segment. The GWSIM model code is not commercially available, is limited in its application to the San Antonio segment of the Edwards aquifer, and lacks the ability of MODFLOW to easily incorporate newly developed processes and packages to better simulate hydrologic processes. MODFLOW is a widely used and tested code for numerical modeling of groundwater flow, is well documented, and is in the public domain. These attributes make MODFLOW a preferred code with regard to accessibility and ease of use. The MODFLOW conduit-flow model

  7. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces.

    PubMed

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-12-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.
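For context, a widely used liquid-phase thermophoresis expression is the McNab-Meisen form; a sketch (the authors' own "accurate and quantitative formula" is not given in the abstract and may differ, so this is illustrative only):

```python
def thermophoretic_velocity(k_f, k_p, nu, grad_t, t):
    """McNab-Meisen-type thermophoretic drift velocity of a particle in
    a liquid: V_T = -0.26 * k_f / (2*k_f + k_p) * nu * grad(T) / T.
    k_f, k_p: fluid and particle thermal conductivities (W/m/K),
    nu: kinematic viscosity (m^2/s), grad_t: temperature gradient (K/m),
    t: absolute temperature (K). Negative sign: drift toward cold."""
    return -0.26 * k_f / (2.0 * k_f + k_p) * nu * grad_t / t
```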

  9. Fast Numerical Evaluation of Time-Derivative Nonadiabatic Couplings for Mixed Quantum-Classical Methods.

    PubMed

    Ryabinkin, Ilya G; Nagesh, Jayashree; Izmaylov, Artur F

    2015-11-05

    We have developed a numerical differentiation scheme that eliminates evaluation of overlap determinants in calculating the time-derivative nonadiabatic couplings (TDNACs). Evaluation of these determinants was the bottleneck in previous implementations of mixed quantum-classical methods using numerical differentiation of electronic wave functions in the Slater determinant representation. The central idea of our approach is, first, to reduce the analytic time derivatives of Slater determinants to time derivatives of molecular orbitals and then to apply a finite-difference formula. Benchmark calculations prove the efficiency of the proposed scheme showing impressive several-order-of-magnitude speedups of the TDNAC calculation step for midsize molecules.
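The finite-difference idea can be sketched at the level of state coefficient vectors in a fixed orthonormal basis, assuming the standard antisymmetrized central-difference TDNAC formula (the paper's actual contribution, reducing determinant derivatives to molecular-orbital derivatives, is not reproduced here):

```python
import numpy as np

def tdnac_fd(c_t, c_tdt, dt):
    """Finite-difference time-derivative nonadiabatic couplings between
    electronic states from coefficient matrices at times t and t+dt
    (columns = states, assumed expressed in a fixed orthonormal basis):
    tau_ij ~ (<i(t)|j(t+dt)> - <i(t+dt)|j(t)>) / (2*dt)."""
    s_fwd = c_t.conj().T @ c_tdt   # <i(t)|j(t+dt)>
    s_bwd = c_tdt.conj().T @ c_t   # <i(t+dt)|j(t)>
    return (s_fwd - s_bwd) / (2.0 * dt)
```

For two states rotating into each other at angular rate omega, the off-diagonal coupling recovered this way approaches omega, while the diagonal vanishes by antisymmetry.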

  10. Numerical evaluation of the incomplete airy functions and their application to high frequency scattering and diffraction

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1992-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals of such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. Here, a convergent series solution form for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution form with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.

  11. Difficulties in applying numerical simulations to an evaluation of occupational hazards caused by electromagnetic fields.

    PubMed

    Zradziński, Patryk

    2015-01-01

    Due to the various physical mechanisms of interaction between a worker's body and the electromagnetic field at various frequencies, the principles of numerical simulations have been discussed for three areas of worker exposure: to low frequency magnetic field, to low and intermediate frequency electric field and to radiofrequency electromagnetic field. This paper presents the identified difficulties in applying numerical simulations to evaluate physical estimators of direct and indirect effects of exposure to electromagnetic fields at various frequencies. Exposure of workers operating a plastic sealer have been taken as an example scenario of electromagnetic field exposure at the workplace for discussion of those difficulties in applying numerical simulations. The following difficulties in reliable numerical simulations of workers' exposure to the electromagnetic field have been considered: workers' body models (posture, dimensions, shape and grounding conditions), working environment models (objects most influencing electromagnetic field distribution) and an analysis of parameters for which exposure limitations are specified in international guidelines and standards.

  13. A Framework for Evaluating Regional-Scale Numerical Photochemical Modeling Systems

    EPA Science Inventory

    This paper discusses the need for critically evaluating regional-scale (~ 200-2000 km) three dimensional numerical photochemical air quality modeling systems to establish a model's credibility in simulating the spatio-temporal features embedded in the observations. Because of li...

  14. Evaluation and Visualization of Surface Defects - a Numerical and Experimental Study on Sheet-Metal Parts

    SciTech Connect

    Andersson, A.

    2005-08-05

    The ability to predict surface defects in outer panels is of vital importance in the automotive industry, especially for brands in the premium car segment. Today, measures to prevent these defects cannot be taken until a test part has been manufactured, which requires a great deal of time and expense. The decision as to whether a certain surface is of acceptable quality or not is based on subjective evaluation. It is quite possible to detect a defect by measurement, but it is not possible to correlate measured defects and the subjective evaluation. If all results could be based on the same criteria, it would be possible to compare a surface by FE simulations, experiments, and subjective evaluation with the same result. In order to find a solution concerning the prediction of surface defects, a laboratory tool was manufactured and analysed both experimentally and numerically. The tool represents the area around a fuel filler lid and the aim was to recreate surface defects, so-called 'teddy bear ears'. A major problem with the evaluation of such defects is that the panels are evaluated manually and to a great extent subjectivity is involved in the classification and judgement of the defects. In this study the same computer software was used for the evaluation of both the experimental and the numerical results. In this software the surface defects were indicated by a change in the curvature of the panel. The results showed good agreement between numerical and experimental results. Furthermore, the evaluation software gave a good indication of the appearance of the surface defects compared to an analysis done in existing tools for surface quality measurements. Since the agreement between numerical and experimental results was good, this indicates that these tools can be used for an early verification of surface defects in outer panels.
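The curvature-change indicator can be sketched in one dimension as a thresholded second difference of a section profile (hypothetical function name and threshold; the actual software evaluates full 2D panels):

```python
def curvature_defects(profile, threshold):
    """Flag sample indices where the discrete curvature (second
    difference) of a surface section profile exceeds a threshold,
    mimicking the 'change in curvature' defect criterion. A smooth
    ramp has zero second difference and produces no flags."""
    flagged = []
    for i in range(1, len(profile) - 1):
        kappa = profile[i - 1] - 2.0 * profile[i] + profile[i + 1]
        if abs(kappa) > threshold:
            flagged.append(i)
    return flagged
```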

  15. A numerical algorithm with preference statements to evaluate the performance of scientists.

    PubMed

    Ricker, Martin

    Academic evaluation committees have been increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, the market prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, which uses relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data of 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that the reward assignation and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
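The proposed algorithm amounts to valuing academic outputs at committee-set relative prices; a sketch with an illustrative cap mechanism to encourage quality over quantity (names, prices, and the cap structure here are assumptions, not Ricker's exact formulation):

```python
def performance_score(products, prices, caps=None):
    """Weighted-sum performance score: each product type (e.g.
    'article', 'citation') is valued at a committee-set relative price;
    counts may be capped at a maximum, reflecting the recommendation
    to limit the number of evaluated products."""
    score = 0.0
    for kind, count in products.items():
        if caps and kind in caps:
            count = min(count, caps[kind])
        score += prices.get(kind, 0.0) * count
    return score
```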

  16. Experimental and numerical evaluation of the force due to the impact of a dam-break wave on a structure

    NASA Astrophysics Data System (ADS)

    Aureli, Francesca; Dazzi, Susanna; Maranzoni, Andrea; Mignosa, Paolo; Vacondio, Renato

    2015-02-01

    Flood events caused by the collapse of dams or river levees can have damaging consequences on buildings and infrastructure located in prone areas. Accordingly, a careful prediction of the hydrodynamic load acting on structures is important for flood hazard assessment and potential damage evaluation. However, this represents a challenging task and requires the use of suitable mathematical models. This paper investigates the capability of three different models, i.e. a 2D depth-averaged model, a 3D Eulerian two-phase model, and a 3D Smoothed Particle Hydrodynamics (SPH) model, to estimate the impact load exerted by a dam-break wave on an obstacle. To this purpose, idealised dam-break experiments were carried out by generating a flip-through impact against a rigid squat structure, and measurements of the impact force were obtained directly by using a load cell. The dynamics of the impact event was analyzed and related to the measured load time history. A repeatability analysis was performed due to the great variability typically shown by impact phenomena, and a confidence range was estimated. The comparison between numerical results and experimental data shows the capability of 3D models to reproduce the key features of the flip-through impact. The 2D modelling based on the shallow water approach is not entirely suitable to accurately reproduce the load hydrograph and predict the load peak values; this difficulty increases with the strength of the wave impact. Nevertheless, the error in the peak load estimation is in the order of 10% only, thus the 2D approach may be considered appropriate for practical applications. Moreover, when the shallow water approximation is expected to work well, 2D results are comparable with the experimental data, as well as with the numerical predictions of far more sophisticated and computationally demanding 3D solvers. All the numerical models overestimate the falling limb of the load hydrograph after the impact. The SPH model ensures

  17. Addition theorem of Slater type orbitals: a numerical evaluation of Barnett-Coulson/Löwdin functions

    NASA Astrophysics Data System (ADS)

    Bouferguene, Ahmed

    2005-04-01

    When using the one-centre two-range expansion method to evaluate multicentre integrals over Slater type orbitals (STOs), it may become necessary to compute numerical values of the corresponding Fourier coefficients, also known as Barnett-Coulson/Löwdin Functions (BCLFs) (Bouferguene and Jones 1998 J. Chem. Phys. 109 5718). To carry out this task, it is crucial to have not only a stable numerical procedure but also a fast algorithm. In previous work (Bouferguene and Rinaldi 1994 Int. J. Quantum Chem. 50 21), BCLFs were represented by a double integral which led to a numerically stable algorithm, but this turned out to be disappointingly time consuming. The present work aims at exploring another path in which BCLFs are represented either by an infinite series involving modified Bessel functions K

  18. Numerical investigation of acoustic field in enclosures: Evaluation of active and reactive components of sound intensity

    NASA Astrophysics Data System (ADS)

    Meissner, Mirosław

    2015-03-01

    The paper focuses on a theoretical description and numerical evaluation of active and reactive components of sound intensity in enclosed spaces. As the study was dedicated to low-frequency room responses, a modal expansion of the sound pressure was used. Numerical simulations have shown that the presence of energy vortices whose size and distribution depend on the character of the room response is a distinctive feature of the active intensity field. When several modes with frequencies close to a source frequency are excited, the vortices within the room are positioned irregularly. However, if the response is determined by one or two dominant modes, a regular distribution of vortices in the room can be observed. The irrotational component of the active intensity was found using the Helmholtz decomposition theorem. As was evidenced by numerical simulations, the suppression of the vortical flow of sound energy in the nearfield permits obtaining a clear image of the sound source.
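
    As a minimal one-dimensional illustration of the quantities discussed above (not the paper's modal-expansion code), the active and reactive intensity components can be computed from a complex pressure field via the linearized Euler equation; the fields, frequency, and medium properties below are assumed for the sketch. A travelling wave carries purely active intensity, while a pure standing wave carries none, mirroring the distinction drawn in the abstract.

```python
import numpy as np

def intensity_components(p, dx, rho, omega):
    """Active and reactive sound intensity from a complex 1D pressure field.

    Particle velocity follows from the linearized Euler equation
    (e^{-i*omega*t} convention): v = -i * (dp/dx) / (omega * rho).
    """
    dpdx = np.gradient(p, dx)
    v = -1j * dpdx / (omega * rho)
    I = 0.5 * p * np.conj(v)
    return I.real, I.imag          # active, reactive

rho, c = 1.2, 343.0
omega = 2 * np.pi * 1000.0
k = omega / c
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

# Travelling wave: purely active intensity, magnitude |A|^2 / (2*rho*c)
Ia, Ir = intensity_components(np.exp(1j * k * x), dx, rho, omega)

# Standing wave: the active part vanishes; energy only oscillates locally
Ia_s, Ir_s = intensity_components(np.cos(k * x).astype(complex), dx, rho, omega)
```

    The same construction in 2D or 3D (with a gradient per axis) yields the intensity vector fields whose vortices the paper maps.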

  19. Generalization Evaluation of Machine Learning Numerical Observers for Image Quality Assessment

    PubMed Central

    Kalayeh, Mahdi M.; Marin, Thibault; Brankov, Jovan G.

    2014-01-01

    In this paper, we present two new numerical observers (NO) based on machine learning for image quality assessment. The proposed NOs aim to predict human observer performance in a cardiac perfusion-defect detection task for single-photon emission computed tomography (SPECT) images. Human observer (HumO) studies are now considered to be the gold standard for task-based evaluation of medical images. However, such studies are impractical for use in the early stages of development of imaging devices and algorithms, because they require extensive involvement of trained human observers who must evaluate a large number of images. To address this problem, numerical observers (also called model observers) have been developed as a surrogate for human observers. The channelized Hotelling observer (CHO), with or without an internal noise model, is currently the most widely used NO of this kind. In our previous work we argued that development of a NO model to predict human observers' performance can be viewed as a machine learning (or system identification) problem. This consideration led us to develop a channelized support vector machine (CSVM) observer, a kernel-based regression model that greatly outperformed the popular and widely used CHO. This was especially evident when the numerical observers were evaluated in terms of generalization performance. To evaluate generalization, we used a typical situation for the practical use of a numerical observer: after optimizing the NO (which for a CHO might consist of adjusting the internal noise model) based upon a broad set of reconstructed images, we tested it on a broad (but different) set of images obtained by a different reconstruction method. In this manuscript we aim to evaluate two new regression models that achieve accuracy higher than the CHO and comparable to our earlier CSVM method, while dramatically reducing model complexity and computation time. 
The new models are defined in a Bayesian machine-learning framework: a channelized

  20. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
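
    The core idea above (fix a geometric sequence of exponents, then obtain the coefficients by linear least squares) can be sketched as follows. The target function 1/sqrt(1+u^2) is only a stand-in for the kernel's algebraic part, and the exponent base and range are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Geometric exponent spacing: lam_j = a * b**j (a, b, n_terms are illustrative)
a, b, n_terms = 0.1, 2.0, 8
lam = a * b ** np.arange(n_terms)

u = np.linspace(0.0, 20.0, 400)
f = 1.0 / np.sqrt(1.0 + u**2)          # stand-in for the algebraic part

# With the exponents fixed, the coefficients follow from linear least squares
A = np.exp(-np.outer(u, lam))          # design matrix A[i, j] = exp(-lam_j * u_i)
coef, *_ = np.linalg.lstsq(A, f, rcond=None)

max_err = np.max(np.abs(A @ coef - f))
```

    In the paper's fully automated scheme the exponent multiplier is itself optimized by least squares; here it is simply fixed for clarity.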

  1. Numerical evaluation of aperture coupling in resonant cavities and frequency perturbation analysis

    NASA Astrophysics Data System (ADS)

    Dash, R.; Nayak, B.; Sharma, A.; Mittal, K. C.

    2014-01-01

    This paper presents a general formulation for numerical evaluation of the coupling between two identical resonant cavities by a small elliptical aperture in a plane common wall of arbitrary thickness. It is organized into two parts. In the first we discuss the aperture coupling, which is expressed in terms of electric and magnetic dipole moments and polarizabilities using Carlson symmetric elliptic integrals. The Carlson integrals have been numerically evaluated and, under the zero-thickness approximation, the results match the complete elliptic integrals of the first and second kind. It is found that with zero wall thickness, the results obtained are the same as those of Bethe and Collin for an elliptical and circular aperture of zero thickness. In the second part, Slater's perturbation method is applied to find the frequency changes due to apertures of finite thickness on the cavity wall.
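
    The zero-thickness reduction mentioned above can be checked numerically with SciPy's Carlson routines (available since SciPy 1.8), using the standard identities K(m) = R_F(0, 1-m, 1) and E(m) = 2 R_G(0, 1-m, 1); the value of m below is arbitrary.

```python
from scipy.special import elliprf, elliprg, ellipk, ellipe

# Carlson symmetric forms reduce to the complete elliptic integrals:
#   K(m) = R_F(0, 1 - m, 1)        (first kind)
#   E(m) = 2 * R_G(0, 1 - m, 1)    (second kind)
m = 0.6
K_carlson = elliprf(0.0, 1.0 - m, 1.0)
E_carlson = 2.0 * elliprg(0.0, 1.0 - m, 1.0)
```

    Note that SciPy's `ellipk`/`ellipe` take the parameter m = k^2, matching the convention used here.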

  2. Numerical modeling in the design and evaluation of scaffolds for orthopaedics applications.

    PubMed

    Swieszkowski, Wojciech; Kurzydlowski, Krzysztof J

    2012-01-01

    Numerical modeling has become a very useful tool for the design and preclinical evaluation of scaffolds for tissue engineering. This chapter illustrates how finite element analysis and genetic algorithms may be applied to predict the mechanical performance of novel scaffolds with a honeycomb-like pattern, a fully interconnected channel network, and controllable porosity, fabricated in layers of directionally aligned microfibers deposited using a computer-controlled extrusion process.

  3. Radiative transfer in highly scattering materials - numerical solution and evaluation of approximate analytic solutions

    NASA Technical Reports Server (NTRS)

    Weston, K. C.; Reynolds, A. C., Jr.; Alikhan, A.; Drago, D. W.

    1974-01-01

    Numerical solutions for radiative transport in a class of anisotropically scattering materials are presented. Conditions for convergence and divergence of the iterative method are given and supported by computed results. The relation of two flux theories to the equation of radiative transfer for isotropic scattering is discussed. The adequacy of the two flux approach for the reflectance, radiative flux and radiative flux divergence of highly scattering media is evaluated with respect to solutions of the radiative transfer equation.

  4. Selection of a numerical unsaturated flow code for tilted capillary barrier performance evaluation

    SciTech Connect

    Webb, S.W.

    1996-09-01

    Capillary barriers consisting of tilted fine-over-coarse layers have been suggested as landfill covers as a means to divert water infiltration away from sensitive underground regions under unsaturated flow conditions, especially for arid and semi-arid regions. Typically, the HELP code is used to evaluate landfill cover performance and design. Unfortunately, due to its simplified treatment of unsaturated flow and its essentially one-dimensional nature, HELP is not adequate to treat the complex multidimensional unsaturated flow processes occurring in a tilted capillary barrier. In order to develop the necessary mechanistic code for the performance evaluation of tilted capillary barriers, an efficient and comprehensive unsaturated flow code needs to be selected for further use and modification. The present study evaluates a number of candidate mechanistic unsaturated flow codes for application to tilted capillary barriers. Factors considered included unsaturated flow modeling, inclusion of evapotranspiration, nodalization flexibility, ease of modification, and numerical efficiency. A number of unsaturated flow codes are available for use with different features and assumptions. The codes chosen for this evaluation are TOUGH2, FEHM, and SWMS_2D. All three codes chosen for this evaluation successfully simulated the capillary barrier problem chosen for the code comparison, although FEHM used a reduced grid. The numerical results are a strong function of the numerical weighting scheme. For the same weighting scheme, similar results were obtained from the various codes. Based on the CPU time of the various codes and the code capabilities, the TOUGH2 code has been selected as the appropriate code for tilted capillary barrier performance evaluation, possibly in conjunction with the infiltration, runoff, and evapotranspiration models of HELP. 44 refs.

  5. Neither Fair nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers

    ERIC Educational Resources Information Center

    Au, Wayne

    2011-01-01

    Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…

  6. A FRAMEWORK FOR EVALUATING REGIONAL-SCALE NUMERICAL PHOTOCHEMICAL MODELING SYSTEMS

    PubMed Central

    Dennis, Robin; Fox, Tyler; Fuentes, Montse; Gilliland, Alice; Hanna, Steven; Hogrefe, Christian; Irwin, John; Rao, S. Trivikrama; Scheffe, Richard; Schere, Kenneth; Steyn, Douw; Venkatram, Akula

    2011-01-01

    This paper discusses the need for critically evaluating regional-scale (~200-2000 km) three-dimensional numerical photochemical air quality modeling systems to establish a model’s credibility in simulating the spatio-temporal features embedded in the observations. Because of limitations of currently used approaches for evaluating regional air quality models, a framework for model evaluation is introduced here for determining the suitability of a modeling system for a given application, distinguishing the performance between different models through confidence-testing of model results, guiding model development, and analyzing the impacts of regulatory policy options. The framework identifies operational, diagnostic, dynamic, and probabilistic types of model evaluation. Operational evaluation techniques include statistical and graphical analyses aimed at determining whether model estimates are in agreement with the observations in an overall sense. Diagnostic evaluation focuses on process-oriented analyses to determine whether the individual processes and components of the model system are working correctly, both independently and in combination. Dynamic evaluation assesses the ability of the air quality model to simulate changes in air quality stemming from changes in source emissions and/or meteorology, the principal forces that drive the air quality model. Probabilistic evaluation attempts to assess the confidence that can be placed in model predictions using techniques such as ensemble modeling and Bayesian model averaging. The advantages of these types of model evaluation approaches are discussed in this paper. PMID:21461126

  7. Numerical evaluation of implantable hearing devices using a finite element model of human ear considering viscoelastic properties.

    PubMed

    Zhang, Jing; Tian, Jiabin; Ta, Na; Huang, Xinsheng; Rao, Zhushi

    2016-08-01

    The finite element method was employed in this study to analyze the change in performance of implantable hearing devices when the viscoelasticity of soft tissues is taken into account. An integrated finite element model of the human ear including the external ear, middle ear and inner ear was first developed via reverse engineering and analyzed by acoustic-structure-fluid coupling. Viscoelastic properties of soft tissues in the middle ear were taken into consideration in this model. The model-derived dynamic responses, including middle ear and cochlea functions, showed better agreement with experimental data at frequencies above 3000 Hz than did Rayleigh-type damping. On this basis, a coupled finite element model consisting of the human ear and a piezoelectric actuator attached to the long process of the incus was further constructed. Based on the electromechanical coupling analysis, the equivalent sound pressure and power consumption of the actuator corresponding to viscoelasticity and Rayleigh damping were calculated using this model. The analytical results showed that the implant performance of the actuator evaluated using a finite element model with viscoelastic properties gives a lower output above about 3 kHz than does the Rayleigh damping model. A finite element model considering viscoelastic properties is therefore more accurate for numerically evaluating implantable hearing devices.
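
    The frequency dependence at the heart of the comparison above can be seen from the Rayleigh damping model itself: with C = alpha*M + beta*K, the implied modal damping ratio is zeta(omega) = alpha/(2*omega) + beta*omega/2, which is exact only at the two calibration frequencies. The sketch below uses illustrative calibration frequencies and a target ratio, not values from the paper.

```python
import numpy as np

def rayleigh_zeta(omega, alpha, beta):
    """Modal damping ratio implied by Rayleigh damping C = alpha*M + beta*K."""
    return alpha / (2.0 * omega) + beta * omega / 2.0

# Calibrate alpha, beta to hit a target ratio zeta0 at two frequencies
f1, f2, zeta0 = 500.0, 3000.0, 0.05          # illustrative values
w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
beta = 2.0 * zeta0 / (w1 + w2)
alpha = w1 * w2 * beta

freqs = np.array([500.0, 3000.0, 8000.0])
zetas = rayleigh_zeta(2 * np.pi * freqs, alpha, beta)
```

    Between the calibration points the ratio is close to the target, but above them the stiffness-proportional term grows linearly with frequency, so damping (and hence output attenuation) is misrepresented at high frequencies, consistent with the deviation above 3 kHz reported in the abstract.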

  8. Accurate numerical simulation of the far-field tsunami caused by the 2011 Tohoku earthquake, including the effects of Boussinesq dispersion, seawater density stratification, elastic loading, and gravitational potential change

    NASA Astrophysics Data System (ADS)

    Baba, Toshitaka; Allgeyer, Sebastien; Hossen, Jakir; Cummins, Phil R.; Tsushima, Hiroaki; Imai, Kentaro; Yamashita, Kei; Kato, Toshihiro

    2017-03-01

    In this study, we considered the accurate calculation of far-field tsunami waveforms by using the shallow water equations and accounting for the effects of Boussinesq dispersion, seawater density stratification, elastic loading, and gravitational potential change in a finite difference scheme. By comparing numerical simulations that included and excluded each of these effects with the observed waveforms of the 2011 Tohoku tsunami, we found that all of these effects are significant and resolvable in the far field by the current generation of deep ocean-bottom pressure gauges. Our calculations using previously published, high-resolution models of the 2011 Tohoku tsunami source exhibited excellent agreement with the observed waveforms to a degree that has previously been possible only with near-field or regional observations. We suggest that the ability to model far-field tsunamis with high accuracy has important implications for tsunami source and hazard studies.

  9. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including extending the operating life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with the volume of fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan type turbines.

  10. Three-Dimensional Numerical Evaluation of Thermal Performance of Uninsulated Wall Assemblies: Preprint

    SciTech Connect

    Ridouane, E. H.; Bianchi, M.

    2011-11-01

    This study describes a detailed three-dimensional computational fluid dynamics modeling to evaluate the thermal performance of uninsulated wall assemblies accounting for conduction through framing, convection, and radiation. The model allows for material properties variations with temperature. Parameters that were varied in the study include ambient outdoor temperature and cavity surface emissivity. Understanding the thermal performance of uninsulated wall cavities is essential for accurate prediction of energy use in residential buildings. The results can serve as input for building energy simulation tools for modeling the temperature dependent energy performance of homes with uninsulated walls.

  11. Combined experimental and numerical evaluation of a prototype nano-PCM enhanced wallboard

    SciTech Connect

    Biswas, Kaushik; Lu, Jue; Soroushian, Parviz; Shrestha, Som S

    2014-01-01

    In the United States, forty-eight (48) percent of the residential end-use energy consumption is spent on space heating and air conditioning. Reducing envelope-generated heating and cooling loads through application of phase change material (PCM)-enhanced building envelopes can facilitate maximizing the energy efficiency of buildings. Combined experimental testing and numerical modeling of PCM-enhanced envelope components are two important aspects of the evaluation of their energy benefits. An innovative phase change material (nano-PCM) was developed with PCM encapsulated with expanded graphite (interconnected) nanosheets, which is highly conductive for enhanced thermal storage and energy distribution, and is shape-stable for convenient incorporation into lightweight building components. A wall with cellulose cavity insulation and prototype PCM-enhanced interior wallboards was built and tested in a natural exposure test (NET) facility in a hot-humid climate location. The test wall contained PCM wallboards and regular gypsum wallboard, for a side-by-side annual comparison study. Further, numerical modeling of the walls containing the nano-PCM wallboard was performed to determine its actual impact on wall-generated heating and cooling loads. The model was first validated using experimental data, and then used for annual simulations using Typical Meteorological Year (TMY3) weather data. This article presents the measured performance and numerical analysis evaluating the energy-saving potential of the nano-PCM-enhanced wallboard.

  12. The numerical evaluation of safety valve size in the pipelines of cryogenic installations

    NASA Astrophysics Data System (ADS)

    Malecha, Z. M.

    2017-02-01

    The flow of cold helium in pipes is a fundamental issue of any cryogenic installation. Pipelines for helium transportation can reach lengths of hundreds of meters. The proper selection of size for individual pipelines and safety valves is a crucial part in the consideration of costs for the entire installation and its safe operation. The size of the safety valve must be properly designed in order to avoid a dangerous pressure build-up during normal operation, as well as in the case of emergency. The most commonly occurring dangerous situation is an undesired heat flux in the helium as a result of a broken insulation. In this case, the heat flux can be very intense and the build-up of the pressure in the pipe can be very rapid. In the present work, numerical calculations were used to evaluate the build-up of pressure and temperature in the pipe, in the case of a sudden and intense heat flux. The main goal of the applied numerical procedure was to evaluate the proper sizes of the safety valves in order to avoid a rise in pressure above the safety limit. The proposed numerical model and calculations were based on OpenFOAM, an open source CFD toolbox.
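
    The order of magnitude of such a pressure build-up can be estimated with a deliberately simple closed-volume model (a toy sketch, not the paper's OpenFOAM calculation): for an ideal gas at constant volume, U = pV/(gamma - 1), so a heat input rate Q gives dp/dt = (gamma - 1) Q / V. All numbers below are illustrative assumptions.

```python
# Ideal-gas, constant-volume energy balance: U = p*V/(gamma - 1),
# so a heat deposition rate Q gives dp/dt = (gamma - 1) * Q / V.
gamma = 5.0 / 3.0          # monatomic gas (helium)
V = 0.05                   # affected pipe-section volume, m^3 (illustrative)
Q = 2000.0                 # heat input from broken insulation, W (illustrative)
p0 = 1.2e5                 # initial pressure, Pa
p_set = 3.0e5              # safety-valve set pressure, Pa

dpdt = (gamma - 1.0) * Q / V                 # pressure rise rate, Pa/s
t_relief = (p_set - p0) / dpdt               # time until the valve must open, s
```

    Even this crude estimate shows that seconds, not minutes, may be available before the set pressure is reached, which is why the paper resolves the transient numerically; actual valve sizing additionally requires the relief mass-flow capacity, which this sketch does not model.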

  13. Numerical evaluation of the Rayleigh integral for planar radiators using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.; Maynard, J. D.

    1982-01-01

    Rayleigh's integral formula is evaluated numerically for planar radiators of any shape, with any specified velocity in the source plane, using the fast Fourier transform algorithm. The major advantage of this technique is its speed of computation - over 400 times faster than a straightforward two-dimensional numerical integration. The technique is developed for computation of the radiated pressure in the nearfield of the source and can be easily extended to provide, with little computation time, the vector intensity in the nearfield. Computations with the FFT of the nearfield pressure of baffled rectangular plates with clamped and free boundaries are compared with the 'exact' solution to illuminate any errors. The bias errors introduced by the FFT are investigated and a technique is developed to significantly reduce them.
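
    A minimal sketch of the FFT-based evaluation, written here in the equivalent angular-spectrum form: the source-plane velocity spectrum is multiplied by a propagation transfer function with kz = sqrt(k^2 - kx^2 - ky^2) and inverse-transformed. The grid size, piston source, and frequency are illustrative assumptions, and the sketch omits the bias-error reduction the paper develops.

```python
import numpy as np

def rayleigh_fft(v, dx, k, z, rho=1.2, c=343.0):
    """Nearfield pressure of a baffled planar source via the angular spectrum.

    p(x, y, z) = IFFT2[ FFT2[v] * rho*c*(k/kz) * exp(i*kz*z) ],
    with kz = sqrt(k^2 - kx^2 - ky^2) (imaginary for evanescent components).
    """
    n = v.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    kz[np.abs(kz) < 1e-12] = 1e-12            # avoid division by zero on the cone
    H = rho * c * (k / kz) * np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(v) * H)

# Uniform circular piston of radius a in a rigid baffle (illustrative source)
n, L = 256, 0.5
dx = L / n
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
a, f = 0.05, 40000.0
k = 2 * np.pi * f / 343.0
v = (X**2 + Y**2 <= a**2).astype(float)       # unit normal velocity on the piston

p = rayleigh_fft(v, dx, k, z=0.1)
```

    The on-axis magnitude of such a piston field can be checked against the closed-form piston solution; the wrap-around and bias errors the abstract discusses arise from the implicit periodic replication of the source plane.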

  14. Numerical evaluation of two-center integrals over Slater type orbitals

    NASA Astrophysics Data System (ADS)

    Kurt, S. A.; Yükçü, N.

    2016-03-01

    Slater Type Orbitals (STOs), which are one type of exponential type orbitals (ETOs), are commonly used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of a translation method for STOs and some auxiliary functions introduced by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in tables. Finally, we compare our numerical results with other known literature results, and further details of the evaluation method are discussed.
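
    For context, the simplest two-center overlap case has a well-known closed form: for normalized 1s STOs with equal exponent zeta separated by R, S(R) = exp(-rho) * (1 + rho + rho^2/3) with rho = zeta*R. The sketch below evaluates this textbook formula only; it is not the authors' translation-method algorithm, which handles general quantum numbers.

```python
import numpy as np

def overlap_1s_equal_zeta(R, zeta):
    """Two-center overlap of normalized 1s Slater orbitals with equal exponents.

    Closed form: S = exp(-rho) * (1 + rho + rho^2 / 3), rho = zeta * R.
    """
    rho = zeta * R
    return np.exp(-rho) * (1.0 + rho + rho**2 / 3.0)

R = np.linspace(0.0, 10.0, 101)    # internuclear distances, atomic units
S = overlap_1s_equal_zeta(R, zeta=1.0)
```

    Closed forms like this provide the reference values against which general-purpose two-center integral codes are usually validated.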

  15. A numerical analysis method for evaluating rod lenses using the Monte Carlo method.

    PubMed

    Yoshida, Shuhei; Horiuchi, Shuma; Ushiyama, Zenta; Yamamoto, Manabu

    2010-12-20

    We propose a numerical analysis method for evaluating GRIN rod lenses using the Monte Carlo method. Measurements of the modulation transfer function (MTF) of a GRIN lens made with this method closely match those made by conventional methods. Experimentally, the MTF is measured using a square wave chart and is then calculated based on the distribution of output strength on the chart. In contrast, the usual computational method evaluates the MTF based on a spot diagram made by an incident point light source; however, the results differ greatly from those of experiments. We therefore developed an evaluation method that mimics the experimental system, based on the Monte Carlo method, and verified that it matches the experimental results more closely than the conventional method.
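
    The Monte Carlo idea can be illustrated with a deliberately simple stand-in for the lens (not the authors' rod-lens ray tracer): for a blur described by a random displacement eps, MTF(f) = |E[exp(2*pi*i*f*eps)]|, which can be estimated from random samples and, for a Gaussian blur, compared against the closed form exp(-2*pi^2*sigma^2*f^2). The blur width and frequency below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mtf_monte_carlo(sigma, f, n=200_000):
    """Estimate MTF(f) = |E[exp(2*pi*i*f*eps)]| for Gaussian blur of width sigma."""
    eps = rng.normal(0.0, sigma, size=n)
    return np.abs(np.mean(np.exp(2j * np.pi * f * eps)))

sigma = 0.005              # blur width, mm (illustrative)
f = 20.0                   # spatial frequency, cycles/mm (illustrative)
mtf_mc = mtf_monte_carlo(sigma, f)
mtf_exact = np.exp(-2 * np.pi**2 * sigma**2 * f**2)
```

    In the full method, eps would come from tracing random rays through the graded-index profile to the image plane instead of sampling an assumed distribution.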

  16. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
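
    The arithmetic behind the HPT argument is simple to sketch: HPT is the sum of thickness x porosity x hydrocarbon saturation over net-pay intervals, so excluding thin beds directly shrinks the estimate. The interval values below are illustrative assumptions, not the North Brae data.

```python
# Hydrocarbon pore thickness: sum of (thickness * porosity * hydrocarbon
# saturation) over net-pay intervals, including thin-bedded ones.
intervals = [
    # (thickness_ft, porosity, hydrocarbon_saturation) -- illustrative values
    (120.0, 0.18, 0.75),   # thick-bedded turbidite
    (40.0,  0.15, 0.60),   # medium-bedded
    (25.0,  0.12, 0.50),   # thin-bedded, easily excluded by conventional cutoffs
]

hpt_all = sum(h * phi * sh for h, phi, sh in intervals)
hpt_no_thin = sum(h * phi * sh for h, phi, sh in intervals[:2])
missed_fraction = 1.0 - hpt_no_thin / hpt_all
```

    Even with these modest illustrative numbers, ignoring the thin-bedded interval discards several percent of the in-place volume, which is the kind of systematic bias the integrated log-and-core approach is meant to remove.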

  17. Implementation and evaluation of the Level Set method: Towards efficient and accurate simulation of wet etching for microengineering applications

    NASA Astrophysics Data System (ADS)

    Montoliu, C.; Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Colom, R. J.

    2013-10-01

    The use of atomistic methods, such as the Continuous Cellular Automaton (CCA), is currently regarded as a computationally efficient and experimentally accurate approach for the simulation of anisotropic etching of various substrates in the manufacture of Micro-electro-mechanical Systems (MEMS). However, when the features of the chemical process are modified, a time-consuming calibration process needs to be used to transform the new macroscopic etch rates into a corresponding set of atomistic rates. Furthermore, changing the substrate requires a labor-intensive effort to reclassify most atomistic neighborhoods. In this context, the Level Set (LS) method provides an alternative approach where the macroscopic forces affecting the front evolution are directly applied at the discrete level, thus avoiding the need for reclassification and/or calibration. Correspondingly, we present a fully-operational Sparse Field Method (SFM) implementation of the LS approach, discussing the algorithm in detail and providing a thorough characterization of the computational cost and simulation accuracy, including a comparison to the performance of the most recent CCA model. We conclude that the SFM implementation achieves accuracy similar to that of the CCA method, with fewer fluctuations in the etch front, while requiring roughly 4 times less memory. Although SFM can be up to 2 times slower than CCA for the simulation of anisotropic etchants, it can also be up to 10 times faster than CCA for isotropic etchants. In addition, we present a parallel, GPU-based implementation (gSFM) and compare it to an optimized, multicore CPU version (cSFM), demonstrating that the SFM algorithm can be successfully parallelized and the simulation times consequently reduced, while keeping the accuracy of the simulations. 
Although modern multicore CPUs provide an acceptable option, the massively parallel architecture of modern GPUs is more suitable, as reflected by computational times for gSFM up to 7.4 times faster than

  18. Numerical evaluation of cavitation void ratio significance on hydrofoil dynamic response

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Wang, Zhengwei; Escaler, Xavier; Zhou, Lingjiu

    2015-12-01

    The added mass effects on a NACA0009 hydrofoil under cavitation conditions determined in a cavitation tunnel have been numerically simulated using the finite element method (FEM). Based on the validated model, the effects of the averaged properties of the cavity, considered as a two-phase mixture, have been evaluated. The results indicate that the void ratio of the cavity plays an increasing role in the frequency reduction ratio and in the mode shape as the mode number increases. Moreover, the sound speed plays a more important role than the average cavity density.
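
    The sensitivity to void ratio noted above can be illustrated with Wood's equation for the sound speed of a homogeneous liquid-gas mixture, which drops sharply as soon as a small gas fraction is present. This is a standard mixture relation used here for illustration, with generic water/air properties, not the paper's FEM cavity model.

```python
import numpy as np

def wood_sound_speed(alpha, rho_l=998.0, c_l=1482.0, rho_g=1.2, c_g=343.0):
    """Sound speed of a homogeneous two-phase mixture (Wood's equation).

    1/(rho_m * c_m^2) = alpha/(rho_g * c_g^2) + (1 - alpha)/(rho_l * c_l^2),
    with rho_m = alpha*rho_g + (1 - alpha)*rho_l and void fraction alpha.
    """
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l
    compressibility = alpha / (rho_g * c_g**2) + (1.0 - alpha) / (rho_l * c_l**2)
    return 1.0 / np.sqrt(rho_m * compressibility)

alphas = np.array([0.0, 0.01, 0.1, 0.5])
speeds = wood_sound_speed(alphas)
```

    Already at 1% void fraction the mixture sound speed falls far below that of either pure phase, which is why the cavity's effective sound speed can dominate the modal behaviour.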

  19. The numerical evaluation of the maximum-likelihood estimate of a subset of mixture proportions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    Necessary and sufficient conditions are given for a maximum-likelihood estimate of a subset of mixture proportions. From these conditions, likelihood equations satisfied by the maximum-likelihood estimate are derived, and a successive-approximations procedure suggested by these equations for numerically evaluating the maximum-likelihood estimate is discussed. It is shown that, with probability one for large samples, this procedure converges locally to the maximum-likelihood estimate whenever a certain step-size lies between zero and two. Furthermore, optimal rates of local convergence are obtained for a step-size which is bounded below by a number between one and two.

  20. Accurate Bit-Error Rate Evaluation for TH-PPM Systems in Nakagami Fading Channels Using Moment Generating Functions

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Gunawan, Erry; Law, Choi Look; Teh, Kah Chan

    Analytical expressions based on the Gauss-Chebyshev quadrature (GCQ) rule technique are derived to evaluate the bit-error rate (BER) for the time-hopping pulse position modulation (TH-PPM) ultra-wide band (UWB) systems under a Nakagami-m fading channel. The analyses are validated by the simulation results and adopted to assess the accuracy of the commonly used Gaussian approximation (GA) method. The influence of the fading severity on the BER performance of TH-PPM UWB system is investigated.
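
    The GCQ rule underlying such BER expressions is simple to state: with nodes x_k = cos((2k-1)pi/(2n)) and equal weights pi/n, it integrates f(x)/sqrt(1-x^2) over [-1, 1] exactly for polynomials up to degree 2n-1 and converges rapidly for smooth integrands. The sketch below only verifies the rule on known integrals; the MGF-based BER reduction itself is not reproduced here.

```python
import numpy as np
from scipy.special import i0

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev rule for the integral of f(x)/sqrt(1-x^2) on [-1, 1]."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))
    return (np.pi / n) * np.sum(f(x))

# Exact for polynomials up to degree 2n-1: integral of x^2/sqrt(1-x^2) is pi/2
poly = gauss_chebyshev(lambda x: x**2, n=8)

# Rapid convergence for smooth integrands: integral of e^x/sqrt(1-x^2) is pi*I_0(1)
expo = gauss_chebyshev(np.exp, n=16)
```

    In the BER application, the integrand is built from the moment generating function of the interference-plus-noise term evaluated at the Chebyshev nodes.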

  1. Analytical expression for gas-particle equilibration time scale and its numerical evaluation

    NASA Astrophysics Data System (ADS)

    Anttila, Tatu; Lehtinen, Kari E. J.; Dal Maso, Miikka

    2016-05-01

    We have derived a time scale τeq that describes the characteristic time for a single compound i with a saturation vapour concentration Ceff,i to reach thermodynamic equilibrium between the gas and particle phases. The equilibration process was assumed to take place via gas-phase diffusion and absorption into a liquid-like phase present in the particles. It was further shown that τeq combines two previously derived and often applied time scales, τa and τs, that account for the changes in the gas and particle phase concentrations of i resulting from the equilibration, respectively. The validity of τeq was tested by comparing its predictions against results from a numerical model that explicitly simulates the transfer of i between the gas and particle phases. By conducting a large number of simulations where the values of the key input parameters were varied randomly, it was found that τeq yields highly accurate results when i is a semi-volatile compound in the sense that the ratio of the total (gas and particle phase) concentration of i to the saturation vapour concentration of i, μ, is below unity. On the other hand, the comparison of analytical and numerical time scales revealed that using τa or τs alone to calculate the equilibration time scale may lead to considerable errors. It was further shown that τeq tends to overpredict the equilibration time when i behaves as a non-volatile compound in the sense that μ > 1. Despite its simplicity, the time scale derived here has useful applications. First, it can be used to assess whether semi-volatile compounds reach thermodynamic equilibrium during dynamic experiments that involve changes in compound volatility. Second, the time scale can be used in modeling of secondary organic aerosol (SOA) to check whether SOA-forming compounds equilibrate over a certain time interval.
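
    The numerical-versus-analytical comparison described above can be illustrated with a toy version of the explicit model: integrate a first-order relaxation of the particle-phase concentration toward equilibrium, read the e-folding time off the trajectory, and compare it with the imposed time scale. This is a deliberately simplified stand-in, not the paper's full gas-particle transfer model, and the parameter values are assumptions.

```python
import numpy as np

tau = 50.0                  # imposed equilibration time scale, s (illustrative)
c_eq = 1.0                  # equilibrium particle-phase concentration (normalized)
dt, t_end = 0.01, 400.0
t = np.arange(0.0, t_end, dt)

# Explicit Euler for dC/dt = (c_eq - C) / tau, starting from C(0) = 0
c = np.empty_like(t)
c[0] = 0.0
for i in range(1, t.size):
    c[i] = c[i - 1] + dt * (c_eq - c[i - 1]) / tau

# e-folding time: first time C reaches (1 - 1/e) of its equilibrium value
i_fold = np.argmax(c >= (1.0 - np.exp(-1.0)) * c_eq)
tau_numeric = t[i_fold]
```

    In the paper's setting the relaxation rate itself depends on gas-phase diffusion and the compound's volatility, which is what makes the combined analytical time scale non-trivial to construct.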

  2. A numerical re-evaluation of the Macdonald-Vaughan model for Raman depth profiling

    NASA Astrophysics Data System (ADS)

    Caro, Jacob; Heldens, Jeroen; Leenman, Dennis

    2013-02-01

    We re-evaluate the Macdonald-Vaughan model for Raman depth profiling [J. Raman Spectrosc. 38, 584 (2007)]. The model is a geometrical description of the sample regions from which Raman signal is collected in a confocal geometry, and it indicates that Raman signal also originates from far outside the focus. Although correct shapes of Raman depth profiles were obtained, the results were quantitatively unsatisfactory, in view of the highly deviating values of the fitted extinction coefficients of the sample material. Our re-evaluation, based on a new numerical implementation of the model, indicates that the model is not only capable of predicting the proper profiles but also yields the right extinction coefficients. As a result, the model is now highly useful for the interpretation of depth profiles, including those of biomedical samples such as human skin.

  3. Numerical evaluation of the bispectrum in multiple field inflation—the transport approach with code

    NASA Astrophysics Data System (ADS)

    Dias, Mafalda; Frazer, Jonathan; Mulryne, David J.; Seery, David

    2016-12-01

    We present a complete framework for numerical calculation of the power spectrum and bispectrum in canonical inflation with an arbitrary number of light or heavy fields. Our method includes all relevant effects at tree-level in the loop expansion, including (i) interference between growing and decaying modes near horizon exit; (ii) correlation and coupling between species near horizon exit and on superhorizon scales; (iii) contributions from mass terms; and (iv) all contributions from coupling to gravity. We track the evolution of each correlation function from the vacuum state through horizon exit and the superhorizon regime, with no need to match quantum and classical parts of the calculation; when integrated, our approach corresponds exactly with the tree-level Schwinger or `in-in' formulation of quantum field theory. In this paper we give the equations necessary to evolve all two- and three-point correlation functions together with suitable initial conditions. The final formalism is suitable to compute the amplitude, shape, and scale dependence of the bispectrum in models with |fNL| of order unity or less, which are a target for future galaxy surveys such as Euclid, DESI and LSST. As an illustration we apply our framework to a number of examples, obtaining quantitatively accurate predictions for their bispectra for the first time. Two accompanying reports describe publicly-available software packages that implement the method.

  4. Evaluation of numerical sediment quality targets for the St. Louis River Area of Concern

    USGS Publications Warehouse

    Crane, J.L.; MacDonald, D.D.; Ingersoll, C.G.; Smorong, D.E.; Lindskoog, R.A.; Severn, C.G.; Berger, T.A.; Field, L.J.

    2002-01-01

    Numerical sediment quality targets (SQTs) for the protection of sediment-dwelling organisms have been established for the St. Louis River Area of Concern (AOC), 1 of 42 current AOCs in the Great Lakes basin. The two types of SQTs were established primarily from consensus-based sediment quality guidelines. Level I SQTs are intended to identify contaminant concentrations below which harmful effects on sediment-dwelling organisms are unlikely to be observed. Level II SQTs are intended to identify contaminant concentrations above which harmful effects on sediment-dwelling organisms are likely to be observed. The predictive ability of the numerical SQTs was evaluated using the matching sediment chemistry and toxicity data set for the St. Louis River AOC. This evaluation involved determination of the incidence of toxicity to amphipods (Hyalella azteca) and midges (Chironomus tentans) within five ranges of Level II SQT quotients (i.e., mean probable effect concentration quotients [PEC-Qs]). The incidence of toxicity was determined based on the results of 10-day toxicity tests with amphipods (endpoints: survival and growth) and 10-day toxicity tests with midges (endpoints: survival and growth). For both toxicity tests, the incidence of toxicity increased as the mean PEC-Q ranges increased. The incidence of toxicity observed in these tests was also compared to that for other geographic areas in the Great Lakes region and in North America for 10- to 14-day amphipod (H. azteca) and 10- to 14-day midge (C. tentans or C. riparius) toxicity tests. In general, the predictive ability of the mean PEC-Qs was similar across geographic areas. The results of these predictive ability evaluations indicate that collectively the mean PEC-Qs provide a reliable basis for classifying sediments as toxic or not toxic in the St. Louis River AOC, in the larger geographic areas of the Great Lakes, and elsewhere in North America.
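
The mean PEC-Q diagnostic used above is straightforward to compute. A minimal sketch with an invented sediment sample; the PEC values shown are the published consensus-based ones for Cd, Pb and Zn, but the concentrations are hypothetical:

```python
def mean_pec_quotient(concentrations, pecs):
    # mean probable effect concentration quotient (mean PEC-Q):
    # each measured concentration is divided by its consensus-based
    # PEC, and the quotients are averaged across the contaminants
    quotients = [c / p for c, p in zip(concentrations, pecs)]
    return sum(quotients) / len(quotients)

# hypothetical sediment sample (mg/kg dry weight) for Cd, Pb, Zn;
# the PECs are the published consensus-based values for these metals
conc = [2.49, 256.0, 459.0]
pecs = [4.98, 128.0, 459.0]
q = mean_pec_quotient(conc, pecs)
```

Samples are then binned by mean PEC-Q range and the incidence of toxicity is tabulated per bin, as in the evaluation described above.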

  5. Evaluation of accurate mass and relative isotopic abundance measurements in the LTQ-orbitrap mass spectrometer for further metabolomics database building.

    PubMed

    Xu, Ying; Heilier, Jean-François; Madalinski, Geoffrey; Genin, Eric; Ezan, Eric; Tabet, Jean-Claude; Junot, Christophe

    2010-07-01

    High-resolution mass spectrometry has recently been widely employed for compound identification, thanks to accurate mass measurements. As additional information, relative isotope abundance (RIA) is often needed to reduce the number of candidates prior to tandem MS(n). Here, we report on the evaluation of the LTQ-Orbitrap, in terms of accurate mass and RIA measurements, for building further metabolomics spectral databases. Accurate mass measurements were achieved in the ppm range using external calibration within 24 h, and remained at <5 ppm over a one-week period. The experimental relative abundances of (M+1) isotopic ions were evaluated in different data sets. First, 137 solutions of commercial compounds were analyzed by flow injection analysis in both the positive and negative ion modes. It was found that ion abundance was the main factor affecting the accuracy of RIA measurements, and it was possible to define intensity thresholds above which errors were systematically <20% of the theoretical values. The same type of results was obtained with analyses from two biological media. No significant effect of ion transmission between the LTQ ion trap and the Orbitrap analyzer on RIA measurement errors was found, whereas the reliability of RIA measurements was dramatically improved by reducing the mass detection window. It was also observed that the signal integration method had a significant impact on RIA measurement errors, with the most reliable results being obtained with peak-height integration. Finally, automatic integration using the data preprocessing software XCMS and MZmine gave results similar to those obtained by manual integration, suggesting that it is relevant to use RIA information in automatic elemental composition determination software working from metabolomic peak tables.
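
The RIA comparison underlying this evaluation can be sketched as follows; the per-atom isotope abundances are standard textbook values and the 20% tolerance mirrors the error threshold described above, but the glucose example is illustrative, not taken from the study:

```python
# per-atom (M+1) heavy-isotope contributions in percent, standard
# textbook natural-abundance values (13C, 2H, 15N, 17O, 33S)
M1_CONTRIB = {"C": 1.07, "H": 0.0115, "N": 0.364, "O": 0.038, "S": 0.76}

def ria_m1(formula):
    # theoretical (M+1)/M relative isotope abundance (percent)
    # for an elemental formula given as {element: atom count}
    return sum(M1_CONTRIB[el] * n for el, n in formula.items())

def within_tolerance(measured, theoretical, tol=0.20):
    # acceptance criterion in the spirit of the study:
    # error below 20% of the theoretical value
    return abs(measured - theoretical) <= tol * theoretical

glucose = {"C": 6, "H": 12, "O": 6}   # illustrative compound
ria = ria_m1(glucose)                 # ~6.8% of the monoisotopic peak
```

A measured RIA is accepted or rejected against the theoretical value for each candidate formula, which is how RIA prunes the candidate list before MS(n).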

  6. Numerical evaluation of the groundwater drainage system for underground storage caverns

    NASA Astrophysics Data System (ADS)

    Park, Eui Seob; Chae, Byung Gon

    2015-04-01

    A novel concept of storing cryogenic liquefied natural gas in a lined hard-rock cavern has been developed and tested for several years as an alternative. In this concept, groundwater in the rock mass around the cavern has to be fully drained during the early stage of construction and operation to avoid possible adverse effects of groundwater near the cavern. The rock mass should then be re-saturated to form an ice ring, the zone around the cavern in which ice, instead of water, fills joints within the frozen rock mass. The drainage system is composed of a drainage tunnel excavated beneath the cavern and drain holes drilled into the rock surface of the drainage tunnel. In order to sufficiently de-saturate the rock mass around the cavern, the position and horizontal spacing of the drain holes should be designed efficiently. In this paper, a series of numerical study results related to the drainage system of the full-scale cavern are presented. The rock in the study area consists mainly of banded gneiss and mica schist. The gneiss is slightly weathered and contains few joints and fractures. The schist contains several well-developed schistosities that mainly stand vertically, so that vertical joints are better developed than horizontal ones in the area. Lugeon tests revealed that the upper aquifer and the bedrock are separated at a depth of 40-50 m below the surface. The groundwater level was observed in twenty monitoring wells and interpolated over the whole area. A numerical study using Visual Modflow and Seep/W has been performed to evaluate the efficiency of the drainage system for the underground liquefied natural gas storage cavern in two hypothetically designed layouts and to determine the design parameters. In the Modflow analysis, the groundwater flow change in an unconfined aquifer was simulated during excavation of the cavern and operation of the drainage system. In the Seep/W analysis, the amount of seepage and drainage was also estimated in a representative vertical section of each cavern. From the results

  7. SEQUESTRATION OF METALS IN ACTIVE CAP MATERIALS: A LABORATORY AND NUMERICAL EVALUATION

    SciTech Connect

    Dixon, K.; Knox, A.

    2012-02-13

    Active capping involves the use of capping materials that react with sediment contaminants to reduce their toxicity or bioavailability. Although several amendments have been proposed for use in active capping systems, little is known about their long-term ability to sequester metals. Recent research has shown that the active amendment apatite has potential application for metal-contaminated sediments. The focus of this study was to evaluate the effectiveness of apatite in the sequestration of metal contaminants through the use of short-term laboratory column studies in conjunction with predictive numerical modeling. A breakthrough column study was conducted using North Carolina apatite as the active amendment. Under saturated conditions, a spike solution containing elemental As, Cd, Co, Se, Pb, Zn, and a non-reactive tracer was injected into the column. A sand column was tested under similar conditions as a control. Effluent water samples were periodically collected from each column for chemical analysis. Relative to the non-reactive tracer, the breakthrough of each metal was substantially delayed by the apatite. Furthermore, breakthrough of each metal was substantially delayed by the apatite compared to the sand column. Finally, a simple 1-D numerical model was created to qualitatively predict the long-term performance of apatite based on the findings from the column study. The results of the modeling showed that apatite could delay the breakthrough of some metals for hundreds of years under typical groundwater flow velocities.
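
The qualitative conclusion of the 1-D model can be illustrated with the standard linear-sorption retardation formula; the parameter values below are invented, not those of the study:

```python
def retardation_factor(bulk_density, kd, porosity):
    # linear-sorption retardation factor R = 1 + rho_b * Kd / n
    # (bulk density in kg/L, Kd in L/kg, porosity dimensionless)
    return 1.0 + bulk_density * kd / porosity

def breakthrough_time(length, velocity, R):
    # retarded arrival time of the concentration front through the cap
    return R * length / velocity

# invented apatite-cap parameters: 0.3 m thick cap, seepage velocity
# 0.1 m/yr, Kd = 100 L/kg for a strongly sorbed metal
R = retardation_factor(bulk_density=1.5, kd=100.0, porosity=0.4)
t_break = breakthrough_time(0.3, 0.1, R)   # years until breakthrough
```

Even this simple calculation shows how a strongly sorbing amendment can delay metal breakthrough by orders of magnitude relative to the unretarded tracer.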

  8. Numerical evaluation of the radiation from unbaffled, finite plates using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.

    1983-01-01

    An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.
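
The core of the technique, evaluating Rayleigh's integral in its angular spectrum form with 2-D FFTs, can be sketched as follows. This is only a single forward propagation step, not the paper's full iterative unbaffled-plate scheme; material constants are for air, and the uniform-velocity input is a contrived check that reduces to a plane wave:

```python
import numpy as np

def angular_spectrum_pressure(vz, dx, k, z, rho=1.21, c=343.0):
    # Rayleigh's first integral in angular-spectrum form: transform
    # the normal velocity vz (N x N grid, spacing dx), multiply by the
    # spectral impedance rho*c*k/kz and the propagator exp(i*kz*z),
    # then transform back. The complex sqrt makes kz imaginary for
    # kx^2 + ky^2 > k^2, so evanescent components decay with z.
    N = vz.shape[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz = np.sqrt(k**2 - KX**2 - KY**2 + 0j)
    V = np.fft.fft2(vz)
    return np.fft.ifft2(rho * c * k / kz * V * np.exp(1j * kz * z))

# sanity check: a uniform (periodic) velocity reduces to a plane wave,
# so the surface pressure is rho*c times the velocity everywhere
N, dx, k = 64, 0.01, 10.0
p0 = angular_spectrum_pressure(np.ones((N, N)), dx, k, z=0.0)
```

Each propagation costs two FFTs and an element-wise multiply, which is the source of the large speed-up over direct quadrature noted above.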

  9. Evaluation and Numerical Simulation of Tsunami for Coastal Nuclear Power Plants of India

    SciTech Connect

    Sharma, Pavan K.; Singh, R.K.; Ghosh, A.K.; Kushwaha, H.S.

    2006-07-01

    The recent tsunami generated on December 26, 2004 by the Sumatra earthquake of magnitude 9.3 resulted in inundation at various coastal sites of India. The site selection and design of Indian nuclear power plants demand the evaluation of run-up and of the structural barriers for the coastal plants; it is also desirable to evaluate early warning systems for tsunamigenic earthquakes. Tsunamis originate from submarine faults, underwater volcanic activity, sub-aerial landslides impinging on the sea, and submarine landslides. In the case of a submarine earthquake-induced tsunami, the wave is generated in the fluid domain by displacement of the seabed. There are three phases of a tsunami: generation, propagation, and run-up. The Reactor Safety Division (RSD) of Bhabha Atomic Research Centre (BARC), Trombay has initiated computational simulation of all three phases (source generation, propagation, and run-up evaluation) for the protection of public life, property, and the various industrial infrastructures located in the coastal regions of India. These studies could be effectively utilized for the design and implementation of an early warning system for the coastal region of the country, apart from catering to the needs of Indian nuclear installations. This paper presents some results for tsunami waves based on different analytical/numerical approaches with shallow water wave theory. (authors)

  10. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving nonconvex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since their function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as to the finite dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
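
The accept/reject logic of a trust region iteration with an inexact gradient oracle can be sketched as follows; this is a generic Cauchy-point sketch on a toy quadratic, not the paper's Hilbert-space algorithm:

```python
import numpy as np

def trust_region_step(f, grad, x, delta, eta=0.1):
    # One iteration of a basic trust-region method using a
    # steepest-descent (Cauchy-point) step. `grad` may be an inexact
    # gradient oracle; the step is accepted only if the actual
    # reduction is a sufficient fraction of the model's prediction.
    g = grad(x)
    gnorm = np.linalg.norm(g)
    step = -delta * g / gnorm          # step to the trust-region boundary
    predicted = delta * gnorm          # linear-model reduction
    actual = f(x) - f(x + step)
    if actual / predicted >= eta:      # sufficient agreement: accept, expand
        return x + step, min(2.0 * delta, 1.0)
    return x, delta / 2.0              # reject and shrink the region

# toy quadratic with a ~10% relative-error (inexact) gradient
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)
grad = lambda x: x + 0.1 * np.linalg.norm(x) * rng.standard_normal(x.shape)
x, delta = np.array([2.0, -1.0]), 0.5
for _ in range(50):
    x, delta = trust_region_step(f, grad, x, delta)
# accepted steps always decrease f, so the iterate approaches the minimum
```

The point of the sketch: because acceptance is based on the true function decrease, moderate relative error in the gradient does not destroy convergence, which is the flavor of robustness the theory above establishes rigorously.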

  11. Numerical evaluation of a 13.5-nm high-brightness microplasma extreme ultraviolet source

    SciTech Connect

    Hara, Hiroyuki; Arai, Goki; Dinh, Thanh-Hung; Higashiguchi, Takeshi; Jiang, Weihua; Miura, Taisuke; Endo, Akira; Ejima, Takeo; Li, Bowen; Dunne, Padraig; O'Sullivan, Gerry; Sunahara, Atsushi

    2015-11-21

    The extreme ultraviolet (EUV) emission, its spatial distribution, and the plasma parameters in a microplasma high-brightness light source are characterized by use of a two-dimensional radiation hydrodynamic simulation. The expected EUV source size, which is determined by the expansion of the microplasma due to hydrodynamic motion, was evaluated to be 16 μm (full width) and was almost reproduced by the experimental result, which showed an emission source diameter of 18-20 μm at a laser pulse duration of 150 ps (full width at half-maximum). The numerical simulation suggests that high-brightness EUV sources should be produced by use of a dot-target-based microplasma with a source diameter of about 20 μm.

  12. Numerical evaluation of linearized image reconstruction based on finite element method for biomedical photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya

    2013-09-01

    An image reconstruction algorithm for biomedical photoacoustic imaging is discussed. The algorithm solves the inverse problem of the photoacoustic phenomenon in biological media and images the distribution of large optical absorption coefficients, which can indicate diseased tissues such as cancers with angiogenesis and the tissues labeled by exogenous photon absorbers. The linearized forward problem, which relates the absorption coefficients to the detected photoacoustic signals, is formulated by using photon diffusion and photoacoustic wave equations. Both partial differential equations are solved by a finite element method. The inverse problem is solved by truncated singular value decomposition, which reduces the effects of the measurement noise and the errors between forward modeling and actual measurement systems. The spatial resolution and the robustness to various factors affecting the image reconstruction are evaluated by numerical experiments with 2D geometry.
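
The truncated singular value decomposition step can be sketched generically; the random matrix below merely stands in for the FEM-built linearized sensitivity matrix:

```python
import numpy as np

def tsvd_solve(A, b, k):
    # Truncated-SVD solution of A x = b: keep the k largest singular
    # values and discard the small, noise-amplifying ones.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ b))

# mildly ill-posed toy forward model with noisy data
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
A[:, -1] = A[:, -2] + 1e-8 * rng.standard_normal(40)  # near-dependent columns
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-3 * rng.standard_normal(40)
x_full = tsvd_solve(A, b, 20)   # keeps the tiny singular value: blows up
x_trunc = tsvd_solve(A, b, 19)  # truncation suppresses the blow-up
```

Discarding the smallest singular values trades a little model fidelity for strong suppression of measurement-noise amplification, which is exactly the robustness the reconstruction above relies on.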

  13. Design and numerical evaluation of a volume coil array for parallel MR imaging at ultrahigh fields

    PubMed Central

    Pang, Yong; Wong, Ernest W.H.; Yu, Baiying

    2014-01-01

    In this work, we propose and investigate a volume coil array design method using different types of birdcage coils for MR imaging. Unlike conventional radiofrequency (RF) coil arrays, whose array elements are surface coils, the proposed volume coil array consists of a set of independent volume coils: a conventional birdcage coil, a transverse birdcage coil, and a helix birdcage coil. The magnetic fluxes of these three birdcage coils are intrinsically cancelled, yielding a highly decoupled volume coil array. In contrast to conventional non-array volume coils, the volume coil array would be beneficial in improving the MR signal-to-noise ratio (SNR) and would also gain the capability of parallel imaging. The volume coil array is evaluated at the ultrahigh field of 7 T using FDTD numerical simulations, and the g-factor map at different acceleration rates is also calculated to investigate its parallel imaging performance. PMID:24649435

  14. Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system

    NASA Astrophysics Data System (ADS)

    Duran, Ahmet; Tuncel, Mehmet

    2016-10-01

    It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on runs of up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example the presence of low volatility, high volatility, and stock market prices at a discount/premium to net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least squares error, and the maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.
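
The nonlinear least squares error used to score candidate parameter vectors can be sketched generically; the exponential "model" below is a toy stand-in, not the asset flow differential equations:

```python
import numpy as np

def nls_error(params, model, t, observed):
    # nonlinear least-squares error scoring a candidate parameter
    # vector against an observed time series
    return float(np.sum((model(t, params) - observed) ** 2))

# toy two-parameter model (a stand-in, not the asset flow equations)
model = lambda t, p: p[0] * np.exp(-p[1] * t)
t = np.linspace(0.0, 5.0, 50)
observed = model(t, [2.0, 0.7])
e_bad = nls_error([1.0, 1.0], model, t, observed)
e_good = nls_error([2.0, 0.7], model, t, observed)  # exact parameters
```

In a parallel search, many initial parameter vectors are scored this way concurrently (e.g. distributed over MPI ranks), and convergence is judged by how the best error evolves.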

  15. Evaluation of Temperature Gradient in Advanced Automated Directional Solidification Furnace (AADSF) by Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1996-01-01

    A numerical model of heat transfer combining conduction, radiation and convection in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot- and cold-zone set point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reverse usage of the hot and cold zones was simulated to aid the choice of a proper orientation of the crystal/melt interface with respect to the residual acceleration vector without actually changing the furnace location on board the orbiter. It appears that an additional booster heater would be extremely helpful for ensuring the desired temperature gradient when the hot and cold zones are reversed. Further efforts are required to investigate the advantages and disadvantages of a symmetrical furnace design (i.e., with similar lengths of the hot and cold zones).

  16. Development of a numerical simulator of human swallowing using a particle method (Part 2. Evaluation of the accuracy of a swallowing simulation using the 3D MPS method).

    PubMed

    Kamiya, Tetsu; Toyama, Yoshio; Michiwaki, Yukihiro; Kikuchi, Takahiro

    2013-01-01

    The aim of this study was to develop and evaluate the accuracy of a three-dimensional (3D) numerical simulator of the swallowing action using the 3D moving particle simulation (MPS) method, which can simulate splashes and rapid changes in the free surfaces of food materials. The 3D numerical simulator of the swallowing action using the MPS method was developed based on accurate organ models that undergo forced transformation over elapsed time. The validity of the simulation results was evaluated qualitatively based on comparisons with videofluorography (VF) images. To evaluate the validity of the simulation results quantitatively, the normalized brightness around the vallecula was used as the evaluation parameter. The positions and configurations of the food bolus during each time step were compared in the simulated and VF images. The simulation results corresponded to the VF images at each time step in the visual evaluations, which suggested that the simulation was qualitatively correct. The normalized brightness of the simulated and VF images corresponded exactly at all time steps. This showed that the simulation results, which contained information on changes in the organs and the food bolus, were numerically correct. Based on these results, the accuracy of this simulator is high and it can be used to study the mechanism of disorders that cause dysphagia. The simulator also calculated the shear rate at a specific point and timing with Newtonian and non-Newtonian fluids. We think that the information provided by this simulator could be useful for the development of food products and medicines, and in rehabilitation facilities.

  17. A technique for evaluating bone ingrowth into 3D printed, porous Ti6Al4V implants accurately using X-ray micro-computed tomography and histomorphometry.

    PubMed

    Palmquist, Anders; Shah, Furqan A; Emanuelsson, Lena; Omar, Omar; Suska, Felicia

    2017-03-01

    This paper investigates the application of X-ray micro-computed tomography (micro-CT) to accurately evaluate bone formation within 3D printed, porous Ti6Al4V implants manufactured using Electron Beam Melting (EBM), retrieved after six months of healing in sheep femur and tibia. All samples were scanned twice (i.e., before and after resin embedding), using fast, low-resolution scans (Skyscan 1172; Bruker micro-CT, Kontich, Belgium), and were analysed by 2D and 3D morphometry. The main questions posed were: (i) Can low-resolution, fast scans provide morphometric data on bone formed inside (and around) metal implants with a complex, open-pore architecture? (ii) Can micro-CT be used to accurately quantify both the bone area (BA) and bone-implant contact (BIC)? (iii) What degree of error is introduced in the quantitative data by varying the threshold values? (iv) Does resin embedding influence the accuracy of the analysis? To validate the accuracy of micro-CT measurements, each data set was correlated with a corresponding centrally cut histological section. The results show that quantitative histomorphometry corresponds strongly with 3D measurements made by micro-CT, with a high correlation between the two techniques for bone area/volume measurements around and inside the porous network. In contrast, direct bone-implant contact is challenging to estimate accurately or reproducibly. Large errors may be introduced in micro-CT measurements when segmentation is performed without calibrating the data set against a corresponding histological section. Generally, the bone area measurement is strongly influenced by the lower threshold limit, while the upper threshold limit has little or no effect. Resin embedding does not compromise the accuracy of micro-CT measurements, although there is a change in the contrast distributions and optimisation of the threshold ranges is required.
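
The sensitivity of the bone-area measurement to the lower threshold limit can be illustrated with a toy segmentation; the grey-value image and window values below are invented:

```python
import numpy as np

def bone_area_fraction(image, lower, upper):
    # fraction of pixels inside the grey-value window [lower, upper],
    # a stand-in for the bone-area (BA) segmentation step
    mask = (image >= lower) & (image <= upper)
    return float(mask.mean())

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(128, 128))   # toy 8-bit image
ba_ref = bone_area_fraction(img, 100, 255)
ba_low = bone_area_fraction(img, 120, 255)    # raising the lower limit shrinks BA
ba_up = bone_area_fraction(img, 100, 250)     # upper-limit change: small effect
```

Because most segmented grey values sit just above the lower limit, small shifts there move the measurement far more than equal shifts of the upper limit, mirroring the threshold behaviour reported above.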

  18. Numerical and Experimental Evaluation of Picoliter Inkjet Head for Micropatterning of Printed Electronics

    NASA Astrophysics Data System (ADS)

    Yoo, Young-Seuck; Kim, Changsung Sean; Sok Park, Yoon; Sim, Won-Chul; Park, Changsung; Joung, Jaewoo; Park, Jin-Goo; Oh, Yongsoo

    2010-05-01

    A design process based on multiphysics modeling and micro-electro-mechanical systems (MEMS) fabrication has been established to develop a picoliter inkjet printhead for micropatterning in printed electronics. The piezoelectric actuator is designed with numerical analysis using CoventorWare, taking into consideration the electrical characteristics of the piezoelectric material and the physical characteristics of the silicon structure. The displacements of the piezoelectric actuator according to the voltage waveform are evaluated and verified by laser Doppler vibrometry (LDV). Piezoelectric printheads have been fabricated from silicon and silicon-on-insulator (SOI) wafers by a MEMS process and a silicon-to-silicon bonding method. As a preliminary approach, liquid metal jetting phenomena are identified by simulating droplet ejection and droplet formation in a consequent manner. Parametric studies are followed by a design optimization process using computational fluid dynamics (CFD) to deduce the key issues for inkjet head performance: printhead configuration, input voltage amplitude, ink viscosity, and meniscus movement. By adjusting the driving voltage along with optimizing the drive waveform, the droplet volume and velocity can be controlled and evaluated by a drop watcher system. As a result, an inkjet printhead capable of ejecting 1 pL droplets, as required by electronic applications such as fabricating metal lines on printed circuit boards (PCBs), has been developed.

  19. On the importance of 3D, geometrically accurate, and subject-specific finite element analysis for evaluation of in-vivo soft tissue loads.

    PubMed

    Moerman, Kevin M; van Vijven, Marc; Solis, Leandro R; van Haaften, Eline E; Loenen, Arjan C Y; Mushahwar, Vivian K; Oomens, Cees W J

    2017-04-01

    Pressure ulcers are a type of local soft tissue injury due to sustained mechanical loading and remain a common issue in patient care. People with spinal cord injury (SCI) are especially at risk of pressure ulcers due to impaired mobility and sensory perception. The development of load-improving support structures relies on realistic tissue load evaluation, e.g. using finite element analysis (FEA). FEA requires realistic subject-specific mechanical properties and geometries. This study focuses on the effect of geometry. MRI is used for the creation of geometrically accurate models of the human buttock for three able-bodied volunteers and three volunteers with SCI. The effect of geometry on the observed internal tissue deformations for each subject is studied by comparing FEA findings for equivalent loading conditions. The large variations found between subjects confirm the importance of subject-specific FEA.

  20. Numerical modeling of geothermal heat pump system: evaluation of site specific groundwater thermal impact

    NASA Astrophysics Data System (ADS)

    Pedron, Roberto; Sottani, Andrea; Vettorello, Luca

    2014-05-01

    A pilot plant using a geothermal open-loop heat pump system has been realized in the city of Vicenza (Northern Italy) to meet the heating and cooling needs of the main monumental building in the historical center, the Palladian Basilica. The low-enthalpy geothermal system consists of a pumping well and a reinjection well, both intercepting the same confined aquifer; three other monitoring wells have been drilled and provided with water level and temperature dataloggers. After about a year and a half of activity, within a starting experimental period of three years, we now have the opportunity to analyze long-term groundwater temperature data series and to evaluate the reliability of numerical modeling for thermal impact prediction. The initial model, based on the MODFLOW and SHEMAT finite difference codes, was calibrated using pumping tests and other field investigation data, yielding a valid and reliable groundwater flow simulation. However, thermal parameters such as thermal conductivity and volumetric heat capacity had no site-specific direct estimation and were therefore assigned to model cells from bibliographic standards, usually derived from laboratory tests and barely representative of real aquifer properties. Nevertheless, preliminary heat transport results were compared with observed temperature trends, showing an efficient representation of the thermal plume extension and shape. The ante operam simulation could not consider the real utilization of the heat pump, which turned out to be markedly different from the expected project values, so the first numerical model could not properly simulate the groundwater temperature evolution. Consequently, a second model has been implemented in order to calibrate the mathematical simulation against monitored groundwater temperature datasets, trying to achieve higher levels of reliability in the interpretation of heat transport phenomena. This second step analysis focuses on aquifer thermal parameters

  1. A New Look at Stratospheric Sudden Warmings. Part II: Evaluation of Numerical Model Simulations

    NASA Technical Reports Server (NTRS)

    Charlton, Andrew J.; Polvani, Lorenza M.; Perlwitz, Judith; Sassi, Fabrizio; Manzini, Elisa; Shibata, Kiyotaka; Pawson, Steven; Nielsen, J. Eric; Rind, David

    2007-01-01

    The simulation of major midwinter stratospheric sudden warmings (SSWs) in six stratosphere-resolving general circulation models (GCMs) is examined. The GCMs are compared to a new climatology of SSWs, based on the dynamical characteristics of the events. First, the number, type, and temporal distribution of SSW events are evaluated. Most of the models show a lower frequency of SSW events than the climatology, which has a mean frequency of 6.0 SSWs per decade. Statistical tests show that three of the six models produce significantly fewer SSWs than the climatology, between 1.0 and 2.6 SSWs per decade. Second, four process-based diagnostics are calculated for all of the SSW events in each model. It is found that SSWs in the GCMs compare favorably with the dynamical benchmarks for SSWs established in the first part of the study. These results indicate that GCMs are capable of quite accurately simulating the dynamics required to produce SSWs, but with lower frequency than the climatology. Further dynamical diagnostics hint that, in at least one case, this is due to a lack of meridional heat flux in the lower stratosphere. Even though the SSWs simulated by most GCMs are dynamically realistic when compared to the NCEP-NCAR reanalysis, the reasons for the relative paucity of SSWs in GCMs remain an important and open question.
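
The frequency comparison can be illustrated with a one-sided Poisson test; the counts below are invented, not taken from the paper's models:

```python
from math import exp, factorial

def poisson_cdf(k, lam):
    # P(X <= k) for X ~ Poisson(lam)
    return sum(lam**i * exp(-lam) / factorial(i) for i in range(k + 1))

# Hypothetical model: 10 SSWs in 40 winters versus the climatological
# rate of 6.0 SSWs per decade (0.6 per winter). Counts are invented.
expected = 0.6 * 40            # 24 events expected in 40 winters
p = poisson_cdf(10, expected)  # one-sided probability of seeing <= 10
significantly_fewer = p < 0.05
```

A small tail probability means the model's SSW count is unlikely to be a chance fluctuation of the climatological rate, the kind of conclusion the statistical tests above draw for three of the six models.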

  2. Numerical Evaluation of Fluid Mixing Phenomena in Boiling Water Reactor Using Advanced Interface Tracking Method

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Takase, Kazuyuki

    Thermal-hydraulic design of current boiling water reactors (BWRs) is performed with subchannel analysis codes that incorporate correlations based on empirical results, including actual-size tests. For the core of the Innovative Water Reactor for Flexible Fuel Cycle (FLWR), an actual-size test of an embodiment of its design would therefore be required to confirm or modify such correlations. Because these tests take a long time and entail great cost, a method that enables the thermal-hydraulic design of nuclear reactors without them is desirable. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code using an advanced interface tracking method, TPFIT, was developed to calculate detailed information on the two-phase flow. In this paper, we first verify the TPFIT code by comparison with existing two-channel air-water mixing experimental results. Second, the TPFIT code is applied to the simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing is evaluated using the detailed numerical simulation data. The data indicate that the pressure difference between fluid channels is responsible for the fluid mixing, so the effects of both the time-averaged pressure difference and its fluctuations must be incorporated in the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the evaluation precision of the existing two-phase flow correlations for fluid mixing is found to be relatively low.

  3. Numerical Evaluation Of Shape Memory Alloy Recentering Braces In Reinforced Concrete Buildings Subjected To Seismic Loading

    NASA Astrophysics Data System (ADS)

    Charles, Winsbert Curt

    Seismic protective techniques utilizing specialized energy dissipation devices within the lateral resisting frames have been successfully used to limit inelastic deformation in reinforced concrete buildings by increasing damping and/or altering the stiffness of these structures. However, there is a need to investigate and develop systems with self-centering capabilities: systems that are able to assist in returning a structure to its original position after an earthquake. In this project, the efficacy of a shape memory alloy (SMA) based device as a structural recentering device is evaluated through numerical analysis using the OpenSees framework. OpenSees is a software framework for simulating the seismic response of structural and geotechnical systems; it has been developed as the computational platform for research in performance-based earthquake engineering at the Pacific Earthquake Engineering Research Center (PEER). A non-ductile reinforced concrete building, modelled in OpenSees and verified with available experimental data, is used for the analysis in this study. The model is fitted with Tension/Compression (TC) SMA devices. The performance of the SMA recentering device is evaluated for a set of near-field and far-field ground motions. Critical performance measures of the analysis include residual displacements, interstory drift and acceleration (horizontal and vertical) for the different types of ground motions. The results show that the TC device's performance is unaffected by the type of ground motion. The analysis also shows that including the device in the lateral force resisting system of the building resulted in a 50% decrease in peak horizontal displacement and inter-story drift, and in the elimination of residual deformations, while acceleration increased by up to 110%.

  4. Critical evaluation of three hemodynamic models for the numerical simulation of intra-stent flows.

    PubMed

    Chabi, Fatiha; Champmartin, Stéphane; Sarraf, Christophe; Noguera, Ricardo

    2015-07-16

    We evaluate here three hemodynamic models used for the numerical simulation of bare and stented artery flows. We focus on two flow features responsible for intra-stent restenosis: the wall shear stress and the recirculation lengths around a stent. The models studied are the Poiseuille profile, the simplified pulsatile profile, and the complete pulsatile profile based on the analysis of Womersley. The flow rate of blood in a human left coronary artery is considered to compute the velocity profiles. "Ansys Fluent 14.5" is used to solve the Navier-Stokes and continuity equations. As expected, our results show that the Poiseuille profile is of questionable validity for simulating the complex flow dynamics involved in intra-stent restenosis. Both pulsatile models give similar results close to the strut but diverge far from it. However, the computational time for the complete pulsatile model is five times that of the simplified pulsatile model. Considering the additional "cost" of the complete model, we recommend using the simplified pulsatile model for future intra-stent flow simulations.
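
    The contrast between the steady and pulsatile models can be illustrated with a quasi-steady wall shear stress estimate. The sketch below is not from the paper: the viscosity, artery radius, and flow waveform are assumed illustrative values, and the "simplified pulsatile" profile is approximated here as a Poiseuille (parabolic) shape rescaled by the instantaneous flow rate.

```python
import numpy as np

MU = 3.5e-3   # blood dynamic viscosity [Pa.s] (typical literature value)
R = 1.7e-3    # left coronary artery radius [m] (illustrative assumption)

def wall_shear_poiseuille(Q):
    """Wall shear stress tau_w = 4*mu*Q / (pi*R^3) of a parabolic
    (Poiseuille) velocity profile at volumetric flow rate Q [m^3/s]."""
    return 4.0 * MU * Q / (np.pi * R ** 3)

# quasi-steady "simplified pulsatile" model: the parabolic shape is kept,
# but rescaled by an illustrative instantaneous flow-rate waveform Q(t)
t = np.linspace(0.0, 1.0, 201)                       # one cardiac cycle [s]
Q = 1.0e-6 * (1.0 + 0.5 * np.sin(2.0 * np.pi * t))   # mean flow 1 mL/s
tau = wall_shear_poiseuille(Q)

# the steady Poiseuille model sees only the mean flow ...
print(wall_shear_poiseuille(1.0e-6))   # ~0.9 Pa
# ... while the quasi-steady pulsatile WSS swings around that value
print(tau.min(), tau.max())
```

    A full Womersley treatment would additionally phase-shift and flatten the profile at high pulsation frequency, which is what separates the "complete" from the "simplified" model above.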

  5. Numerical models for the evaluation of the contact angle from axisymmetric drop profiles: a statistical comparison.

    PubMed

    Bortolotti, Mauro; Brugnara, Marco; Della Volpe, Claudio; Siboni, Stefano

    2009-08-01

    Axisymmetric drop shape analysis (ADSA) is a well-established methodology for estimating the contact angle and the surface tension of liquids from sessile drop images. It consists of an iterative procedure in which a best fit between a theoretical axisymmetric Laplacian curve and an experimental drop profile is performed. When only an evaluation of the geometric contact angle is needed, a similar numerical approach can be adopted using simpler algebraic models in place of a Laplace profile, thus allowing more straightforward implementations and shorter computation times. In this work the relative merits of the different methodologies are compared. Besides the standard ADSA procedure, four different mathematical models are examined, namely the circular and elliptical models, the first-order perturbative solution of the Laplace equation, and a cubic spline model. Their relative statistical performances are tested on both calculated and experimental drop profiles. For simulated drops, the actual capability of the models to predict the correct contact angle is also investigated.
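
    The simplest of these algebraic models, the circular model, reduces to fitting a circle to the digitized profile and reading off the angle it makes with the baseline. The sketch below is illustrative, not from the paper: it assumes an algebraic (Kasa) least-squares circle fit and a baseline convention of y = 0, for which cos(theta) = -cy/r with fitted circle center (cx, cy) and radius r.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - F)

def contact_angle_circular(x, y):
    """Geometric contact angle (degrees) of a sessile-drop profile against
    the baseline y = 0 under the circular (spherical-cap) model."""
    _, cy, r = fit_circle(np.asarray(x, float), np.asarray(y, float))
    return np.degrees(np.arccos(np.clip(-cy / r, -1.0, 1.0)))

# synthetic spherical-cap profile with a known 120 degree contact angle
theta_true = 120.0
r_true = 1.0
cy_true = -r_true * np.cos(np.radians(theta_true))     # +0.5: center above baseline
phi = np.linspace(-np.pi / 6, np.pi + np.pi / 6, 200)  # arc with y >= 0
xs, ys = r_true * np.cos(phi), cy_true + r_true * np.sin(phi)
print(contact_angle_circular(xs, ys))   # ~120
```

    The elliptical, perturbative, and spline models of the paper follow the same pattern with more flexible profile functions; only the tangent computation at the contact point changes.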

  6. Numerical evaluation of E-fields induced by body motion near high-field MRI scanner.

    PubMed

    Crozier, S; Liu, F

    2004-01-01

    In modern magnetic resonance imaging (MRI), both patients and radiologists are exposed to strong, nonuniform static magnetic fields inside or outside of the scanner, in which body movement may induce electric currents in tissues that could possibly be harmful. This paper presents theoretical investigations into the spatial distribution of the E-fields induced in a human model moving at various positions around the magnet. The numerical calculations are based on an efficient, quasistatic, finite-difference scheme and an anatomically realistic, full-body, male model. 3D field profiles from an actively shielded 4 T magnet system are used, and the body model is projected through the field profile with normalized velocity. The simulations show that it is possible to induce E-fields/currents near the level of physiological significance under some circumstances, and they provide insight into the spatial characteristics of the induced fields. The results are easy to extrapolate to very high field strengths for safety evaluation at a variety of field strengths and motion velocities.
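
    The order of magnitude of such motion-induced fields can be sketched with the first-order motional term E = v x B, ignoring the tissue conductivity and field gradients that the full finite-difference scheme resolves. The fringe-field model and the numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def motional_efield(v, B):
    """First-order quasistatic estimate of the motional electric field
    E = v x B (V/m) seen in the frame of a body moving with velocity
    v (m/s) through magnetic flux density B (T)."""
    return np.cross(v, B)

# walking laterally at 1 m/s through the fringe field of a 4 T magnet,
# with the axial field modeled (illustratively) as B0 / (1 + (z/a)^2)
B0, a = 4.0, 0.6
v = np.array([0.0, 1.0, 0.0])   # lateral motion, worst case for v x B
for z in (0.0, 0.5, 1.0, 2.0):
    B = np.array([0.0, 0.0, B0 / (1.0 + (z / a) ** 2)])
    print(z, motional_efield(v, B))
```

    At the magnet center this gives |E| = 4 V/m, which is why movement near high-field magnets, rather than the static field itself, dominates the induced-current exposure.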

  7. Need for accurate and standardized determination of amino acids and bioactive peptides for evaluating protein quality and potential health effects of foods and dietary supplements.

    PubMed

    Gilani, G Sarwar; Xiao, Chaowu; Lee, Nora

    2008-01-01

    Accurate standardized methods for the determination of amino acids in foods are required to assess the nutritional safety and compositional adequacy of sole-source foods such as infant formulas and enteral nutritionals, and of protein and amino acid supplements and their hydrolysates, and to assess protein claims of foods. The protein digestibility-corrected amino acid score (PDCAAS), which requires information on amino acid composition, is the official method for assessing protein claims of foods and supplements sold in the United States. PDCAAS has also been adopted internationally by the Food and Agriculture Organization/World Health Organization as the most suitable method for routine evaluation of the protein quality of foods. Standardized methods for the analysis of amino acids by ion-exchange chromatography have been developed. However, there is a need to develop validated methods of amino acid analysis in foods using liquid chromatographic techniques, which have replaced ion-exchange methods for quantifying amino acids in most laboratories. Bioactive peptides from animal and plant proteins have been found to potentially impact human health. A wide range of physiological effects, including blood pressure-lowering effects, cholesterol-lowering ability, antithrombotic effects, enhancement of mineral absorption, and immunomodulatory effects, have been described for bioactive peptides. There is considerable commercial interest in developing functional foods containing bioactive peptides. There is also a need to develop accurate standardized methods for the characterization (amino acid sequencing) and quantification of bioactive peptides and to carry out dose-response studies in animal models and clinical trials to assess safety, potential allergenicity, potential intolerance, and efficacy of bioactive peptides. Information from these studies is needed for determining the upper safe levels of bioactive peptides and as the basis for developing potential health claims for bioactive

  8. Evaluation of the influence mode on the CVC GaN HEMT using numerical modeling

    NASA Astrophysics Data System (ADS)

    Parnes, Ya M.; Tikhomirov, V. G.; Petrov, V. A.; Gudkov, A. G.; Marzhanovskiy, I. N.; Kukhareva, E. S.; Vyuginov, V. N.; Volkov, V. V.; Zybin, A. A.

    2016-08-01

    The effects of certain operating modes on the current-voltage characteristics (CVC) of microwave field-effect transistors based on AlGaN/GaN heterostructures (HEMTs) were simulated numerically. The results of these studies suggest that numerical simulation can be used quite efficiently in the development of HEMT microwave transistors while accounting for real device designs.

  9. Seismic fragility evaluation of a piping system in a nuclear power plant by shaking table test and numerical analysis

    SciTech Connect

    Kim, M. K.; Kim, J. H.; Choi, I. K.

    2012-07-01

    In this study, a seismic fragility evaluation of a piping system in a nuclear power plant was performed. The evaluation proceeded in three steps. First, several piping element capacity tests were performed: monotonic and cyclic loading tests were conducted at the internal pressure level of actual nuclear power plants to evaluate performance, with cracks and wall thinning considered as degradation factors of the piping system. Second, a shaking table test was performed to evaluate the seismic capacity of a selected piping system; multi-support seismic excitation was used to account for differences in support elevation. Finally, a numerical analysis was performed to assess the seismic fragility of the piping system. As a result, the seismic fragility of a piping system of an NPP in Korea was obtained by combining the shaking table test and numerical analysis. (authors)

  10. Evaluation of numerical weather predictions performed in the context of the project DAPHNE

    NASA Astrophysics Data System (ADS)

    Tegoulias, Ioannis; Pytharoulis, Ioannis; Bampzelis, Dimitris; Karacostas, Theodore

    2014-05-01

    The region of Thessaly in central Greece is one of the main areas of agricultural production in Greece. Severe weather phenomena affect the agricultural production in this region, with adverse effects for farmers and the national economy. For this reason the project DAPHNE aims at tackling the problem of drought by means of weather modification, through the development of the necessary tools to support the application of a rainfall enhancement program. In the present study the numerical weather prediction system WRF-ARW is used in order to assess its ability to represent extreme weather phenomena in the region of Thessaly. WRF is integrated in three domains covering Europe, the Eastern Mediterranean and Central-Northern Greece (Thessaly and a large part of Macedonia) using telescoping nesting with grid spacings of 15 km, 5 km and 1.667 km, respectively. The cases examined span the transitional and warm period (April to September) of the years 2008 to 2013, including days with thunderstorm activity. Model results are evaluated against all available surface observations and radar products, taking into account the spatial characteristics and intensity of the storms. Preliminary results indicate a good level of agreement between the simulated and observed fields as far as the standard parameters (such as temperature, humidity and precipitation) are concerned. Moreover, the model generally exhibits a potential to represent the occurrence of convective activity, but not its exact spatiotemporal characteristics. Acknowledgements: This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).

  11. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    PubMed Central

    2010-01-01

    Background Normalization to reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, selecting suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out on plants, so qPCR studies on important crops such as cotton have been hampered by the lack of suitable reference genes. Results Using two distinct algorithms, implemented in geNorm and NormFinder, we assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development, and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 showed more stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene
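
    The geNorm algorithm mentioned above ranks candidates by a pairwise-variation stability measure M: for each gene, the mean standard deviation of its log-ratio against every other candidate across samples, with lower M meaning more stable expression. A minimal sketch of that measure follows; the synthetic expression matrix is invented purely for illustration.

```python
import numpy as np

def genorm_stability(expr):
    """geNorm-style gene-stability measure M for candidate reference genes.
    expr: (n_samples, n_genes) array of relative expression quantities.
    For gene j, M_j is the mean, over all other genes k, of the standard
    deviation across samples of log2(expr_j / expr_k)."""
    logs = np.log2(np.asarray(expr, float))
    n_genes = logs.shape[1]
    M = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(logs[:, j] - logs[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        M[j] = np.mean(sds)
    return M

# toy data: gene 0 varies strongly across samples, genes 1 and 2 co-vary
rng = np.random.default_rng(0)
shared = 2.0 ** rng.normal(0.0, 0.05, size=8)    # common sample effect
expr = np.column_stack([
    2.0 ** rng.normal(0.0, 1.5, size=8),         # unstable candidate
    100.0 * shared,                              # stable pair ...
    50.0 * shared,                               # ... (constant ratio)
])
M = genorm_stability(expr)
print(M)   # M[0] clearly the largest (least stable)
```

    geNorm proper then iteratively discards the gene with the highest M and recomputes, which is how the GhPP2A1/GhUBQ14 pair emerges as the most stable combination.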

  12. Accurate Analysis and Evaluation of Acidic Plant Growth Regulators in Transgenic and Nontransgenic Edible Oils with Facile Microwave-Assisted Extraction-Derivatization.

    PubMed

    Liu, Mengge; Chen, Guang; Guo, Hailong; Fan, Baolei; Liu, Jianjun; Fu, Qiang; Li, Xiu; Lu, Xiaomin; Zhao, Xianen; Li, Guoliang; Sun, Zhiwei; Xia, Lian; Zhu, Shuyun; Yang, Daoshan; Cao, Ziping; Wang, Hua; Suo, Yourui; You, Jinmao

    2015-09-16

    Determination of plant growth regulators (PGRs) in a signal transduction system (STS) is significant for transgenic food safety but may be challenged by poor accuracy and analyte instability. In this work, a microwave-assisted extraction-derivatization (MAED) method is developed for six acidic PGRs in oil samples, allowing an efficient (<1.5 h) and facile (one-step) pretreatment. Accuracies are greatly improved, particularly for gibberellin A3 (-2.72 to -0.65%), compared with those previously reported (-22 to -2%). Excellent selectivity and quite low detection limits (0.37-1.36 ng mL(-1)) are enabled by fluorescence detection with mass spectrometric monitoring. Results show significant differences in acidic PGRs between transgenic and nontransgenic oils, particularly for 1-naphthaleneacetic acid (1-NAA), implying PGR-induced variations in components and genes. This study provides, for the first time, an accurate and efficient determination of labile PGRs involved in STS and a promising concept for objectively evaluating the safety of transgenic foods.

  13. Development of a numerical simulator of human swallowing using a particle method (part 1. Preliminary evaluation of the possibility of numerical simulation using the MPS method).

    PubMed

    Kamiya, Tetsu; Toyama, Yoshio; Michiwaki, Yukihiro; Kikuchi, Takahiro

    2013-01-01

    The aim of the present study was to evaluate the possibility of numerical simulation of the swallowing process using a moving particle simulation (MPS) method, which defined the food bolus as a number of particles in a fluid, a solid, and an elastic body. In order to verify the accuracy of the simulation results, a simple water bolus falling model was solved using the three-dimensional (3D) MPS method. We also examined the simplified swallowing simulation using a two-dimensional (2D) MPS method to confirm the interactions between the liquid, solid, elastic bolus, and organ structure. In a comparison of the 3D MPS simulation and experiments, the falling time of the water bolus and the configuration of the interface between the liquid and air corresponded exactly to the experimental measurements and the visualization images. The results showed that the accuracy of the 3D MPS simulation was qualitatively high for the simple falling model. Based on the results of the simplified swallowing simulation using the 2D MPS method, each bolus, defined as a liquid, solid, and elastic body, exhibited different behavior when the organs were transformed forcedly. This confirmed that the MPS method could be used for coupled simulations of the fluid, the solid, the elastic body, and the organ structures. The results suggested that the MPS method could be used to develop a numerical simulator of the swallowing process.
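
    The MPS method referenced above discretizes the fluid into particles whose interactions are weighted by a kernel function of inter-particle distance, and enforces incompressibility through the particle number density. A minimal sketch of the standard MPS kernel and number density is given below; the lattice spacing and the effective radius re = 2.1*l0 are common textbook choices, not values from this study.

```python
import numpy as np

def mps_weight(r, re):
    """Standard MPS kernel function: w(r) = re/r - 1 for 0 < r < re, else 0."""
    r = np.asarray(r, float)
    safe = np.clip(r, 1e-12, None)
    return np.where((r > 0.0) & (r < re), re / safe - 1.0, 0.0)

def number_density(positions, i, re):
    """Particle number density n_i = sum_{j != i} w(|r_j - r_i|); the MPS
    incompressibility constraint keeps n_i near its initial lattice value."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d[i] = np.inf                # exclude the particle itself
    return mps_weight(d, re).sum()

# 2D lattice of particles with spacing l0, effective radius re = 2.1*l0
l0 = 0.01
xs, ys = np.meshgrid(np.arange(9) * l0, np.arange(9) * l0)
pos = np.column_stack([xs.ravel(), ys.ravel()])
n0 = number_density(pos, 9 * 4 + 4, 2.1 * l0)   # central particle
print(n0)   # ~6.5 for this lattice and kernel
```

    In a full MPS solver this density drives the pressure Poisson equation, which is what lets the same particle framework represent liquid, solid, and elastic boluses interacting with moving organ walls.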

  14. A numerical model for CO effect evaluation in HT-PEMFCs: Part 1 - Experimental validation

    NASA Astrophysics Data System (ADS)

    Cozzolino, R.; Chiappini, D.; Tribioli, L.

    2016-06-01

    In this paper, a self-made numerical model of a high temperature polymer electrolyte membrane fuel cell is presented. In particular, the experimental activity has been addressed to the impact on cell performance of the CO content in the anode gas feeding, for the whole operating range, and a numerical code has been implemented and validated against these experimental results. The proposed numerical model employs a zero-dimensional framework coupled with a semi-empirical approach, which aims at providing a smart and flexible tool useful for investigating the membrane behavior under different working conditions. Results show an acceptable agreement between numerical and experimental data, confirming the potentiality and reliability of the developed tool, despite its simplicity.

  15. Development and Evaluation of a Remedial Numerical Skills Workbook for Navy Training.

    DTIC Science & Technology

    1981-02-01

    [Text fragment from the report:] Field test groups were used in the computations (see table 4). The estimated reliability of Form A ranged from .75 to .86 on the Kuder-Richardson Formula 20 and from .79 to .87 on the Spearman-Brown formula for the three populations investigated. [Fragmentary index entries: "Summary of Data of Recruits on the Numerical Skills Test"; "Reliability Estimates for the Numerical Skills Test".]

  16. Accurate and fast computation of transmission cross coefficients

    NASA Astrophysics Data System (ADS)

    Apostol, Štefan; Hurley, Paul; Ionescu, Radu-Cristian

    2015-03-01

    Precise and fast computation of aerial images is essential. Typical lithographic simulators employ a Köhler illumination system for which aerial imagery is obtained using a large number of transmission cross coefficients (TCCs). These are generally computed by a slow numerical evaluation of a double integral. We review the general framework in which the 2D imagery is solved and then propose a fast and accurate method to obtain the TCCs. We derive analytical solutions and thus avoid the complexity-accuracy trade-off encountered with numerical integration. Compared to other analytical integration methods, the one presented is faster, more general and more tractable.

  17. Numerical analysis on the effect of angle of attack on evaluating radio-frequency blackout in atmospheric reentry

    NASA Astrophysics Data System (ADS)

    Jung, Minseok; Kihara, Hisashi; Abe, Ken-ichi; Takahashi, Yusuke

    2016-06-01

    A three-dimensional numerical simulation model that considers the effect of the angle of attack was developed to evaluate plasma flows around reentry vehicles. In this simulation model, thermochemical nonequilibrium of flowfields is considered by using a four-temperature model for high-accuracy simulations. Numerical simulations were performed for the orbital reentry experiment of the Japan Aerospace Exploration Agency, and the results were compared with experimental data to validate the simulation model. A comparison of measured and predicted results showed good agreement. Moreover, to evaluate the effect of the angle of attack, we performed numerical simulations around the Atmospheric Reentry Demonstrator of the European Space Agency by using an axisymmetric model and a three-dimensional model. Although there were no differences in the flowfields in the shock layer between the results of the axisymmetric and the three-dimensional models, the formation of the electron number density, which is an important parameter in evaluating radio-frequency blackout, was greatly changed in the wake region when a non-zero angle of attack was considered. Additionally, the number of altitudes at which radio-frequency blackout was predicted in the numerical simulations declined when using the three-dimensional model for considering the angle of attack.

  18. Numerical evaluation of multilayer holographic data storage with a varifocal lens generated with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Nobukawa, Teruyoshi; Nomura, Takanori

    2015-08-01

    Multilayer recording using a varifocal lens generated with a phase-only spatial light modulator (SLM) is proposed. The phase-only SLM is used not only to improve the interference efficiency between the signal and reference beams but also to shift the focal plane along the optical axis. The focal plane can be shifted by adding a spherical phase to the phase modulation pattern displayed on the SLM. The focal shift obtained by adding a spherical phase was confirmed numerically. In addition, the shift selectivity and recording performance of the proposed multilayer recording method were evaluated numerically in coaxial holographic data storage.

  19. Use of Numerical Groundwater Modeling to Evaluate Uncertainty in Conceptual Models of Recharge and Hydrostratigraphy

    SciTech Connect

    Pohlmann, Karl; Ye, Ming; Pohll, Greg; Chapman, Jenny

    2007-01-19

    Numerical groundwater models are based on conceptualizations of hydrogeologic systems that are by necessity developed from limited information and therefore are simplifications of real conditions. Each aspect (e.g. recharge, hydrostratigraphy, boundary conditions) of the groundwater model is often based on a single conceptual model that is considered to be the best representation given the available data. However, the very nature of their construction means that each conceptual model is inherently uncertain and the available information may be insufficient to refute plausible alternatives, thereby raising the possibility that the flow model is underestimating overall uncertainty. In this study we use the Death Valley Regional Flow System model developed by the U.S. Geological Survey as a framework to predict regional groundwater flow southward into Yucca Flat on the Nevada Test Site. An important aspect of our work is to evaluate the uncertainty associated with multiple conceptual models of groundwater recharge and subsurface hydrostratigraphy and quantify the impacts of this uncertainty on model predictions. In our study, conceptual model uncertainty arises from two sources: (1) alternative interpretations of the hydrostratigraphy in the northern portion of Yucca Flat where, owing to sparse data, the hydrogeologic system can be conceptualized in different ways, and (2) uncertainty in groundwater recharge in the region as evidenced by the existence of several independent approaches for estimating this aspect of the hydrologic system. The composite prediction of groundwater flow is derived from the regional model that formally incorporates the uncertainty in these alternative input models using the maximum likelihood Bayesian model averaging method. An assessment of the joint predictive uncertainty of the input conceptual models is also produced. During this process, predictions of the alternative models are weighted by model probability, which is the degree of
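
    The maximum likelihood Bayesian model averaging step described above can be sketched as follows: each alternative conceptual model receives a posterior weight derived from an information criterion (e.g. KIC), and the composite prediction combines within-model variance with between-model spread. The criterion values and predictions below are invented for illustration.

```python
import numpy as np

def bma_weights(ic, prior=None):
    """Posterior model probabilities from information-criterion values
    (e.g. KIC or BIC), as in maximum likelihood Bayesian model averaging:
    w_k proportional to p(M_k) * exp(-(IC_k - IC_min) / 2)."""
    ic = np.asarray(ic, float)
    p = np.ones_like(ic) if prior is None else np.asarray(prior, float)
    w = p * np.exp(-0.5 * (ic - ic.min()))
    return w / w.sum()

def bma_prediction(means, variances, w):
    """Composite BMA prediction: weighted mean, plus a predictive variance
    summing within-model variance and between-model spread."""
    means = np.asarray(means, float)
    mu = np.sum(w * means)
    var = np.sum(w * (np.asarray(variances, float) + (means - mu) ** 2))
    return mu, var

# three alternative recharge/hydrostratigraphy models (illustrative numbers)
ic = np.array([210.3, 212.9, 218.4])             # e.g. KIC values
w = bma_weights(ic)
mu, var = bma_prediction([5.2, 4.1, 7.8], [0.4, 0.6, 0.9], w)
print(w, mu, var)
```

    The between-model term in the variance is exactly what a single "best" conceptual model omits, which is why single-model flow predictions tend to understate overall uncertainty.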

  20. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory experimental set-up of quasi two-dimensional flow through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment on the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d(-1) and 10.5 m d(-1). Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10(-4) m and 1.48×10(-5) m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications.
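
    The fitting procedure described above can be sketched with the steady-state analytical solution for a strip source when longitudinal dispersion is neglected. The source width, observation distance, and noise level below are illustrative assumptions, not the experimental values; the fitted transverse dispersivity is compared against the "true" value used to generate the synthetic data, mirroring the paper's approach.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

W = 0.01   # width of the tracer inlet strip [m] (assumed)
X = 0.5    # downstream observation distance [m] (assumed)

def strip_source(y, alpha_t):
    """Steady-state transverse concentration profile C/C0 downstream of a
    strip source of width W, longitudinal dispersion neglected:
    C/C0 = 0.5*(erf((y + W/2)/(2*sqrt(alpha_t*X)))
              - erf((y - W/2)/(2*sqrt(alpha_t*X))))."""
    s = 2.0 * np.sqrt(alpha_t * X)
    return 0.5 * (erf((y + W / 2.0) / s) - erf((y - W / 2.0) / s))

# synthetic "measurements": known dispersivity plus measurement noise
alpha_true = 1.48e-5
y = np.linspace(-0.02, 0.02, 81)
rng = np.random.default_rng(1)
c_obs = strip_source(y, alpha_true) + 1.0e-3 * rng.standard_normal(y.size)

(alpha_fit,), _ = curve_fit(strip_source, y, c_obs, p0=[1.0e-5])
print(alpha_fit)   # close to alpha_true
```

    Repeating this fit over many synthetic noise realizations gives exactly the kind of error estimate on the fitted dispersivity that the paper uses to revise the experimental design.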

  1. Numerical evaluation of a sensible heat balance method to determine rates of soil freezing and thawing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In-situ determination of ice formation and thawing in soils is difficult despite its importance for many environmental processes. A sensible heat balance (SHB) method using a sequence of heat pulse probes has been shown to accurately measure water evaporation in subsurface soil, and it has the poten...

  2. Electron transport and energy degradation in the ionosphere: Evaluation of the numerical solution, comparison with laboratory experiments and auroral observations

    NASA Technical Reports Server (NTRS)

    Lummerzheim, D.; Lilensten, J.

    1994-01-01

    Auroral electron transport calculations are a critical part of auroral models. We evaluate a numerical solution to the transport and energy degradation problem. The numerical solution is verified by reproducing simplified problems to which analytic solutions exist, internal self-consistency tests, comparison with laboratory experiments of electron beams penetrating a collision chamber, and by comparison with auroral observations, particularly the emission ratio of the N2 second positive to N2(+) first negative emissions. Our numerical solutions agree with range measurements in collision chambers. The calculated N(2)2P to N2(+)1N emission ratio is independent of the spectral characteristics of the incident electrons, and agrees with the value observed in aurora. Using different sets of energy loss cross sections and different functions to describe the energy distribution of secondary electrons that emerge from ionization collisions, we discuss the uncertainties of the solutions to the electron transport equation resulting from the uncertainties of these input parameters.

  3. Numerical Simulation Approaches to Evaluating the Electromagnetic Loads on the EAST Vacuum Vessel

    NASA Astrophysics Data System (ADS)

    Li, Jun; Xu, Weiwei; Song, Yuntao; Lu, Mingxuan

    2013-12-01

    Numerical simulation approaches are developed to compute the electromagnetic forces on the EAST vacuum vessel during major disruptions and vertical displacement events, with the halo current also considered. The finite element model built with ANSYS includes the vacuum vessel, the plasma facing components and their support structure, and the toroidal and poloidal field coils. The numerical methods are explained in order to demonstrate their validity. The eddy current induced by the magnetic flux variation and the conduction current caused by the halo current are also presented for discussion. The electromagnetic forces resulting from the numerical simulation prove useful for structural design optimization. Similar methods can be applied in upgrades of the EAST device.

  4. Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems

    NASA Technical Reports Server (NTRS)

    Cerro, J. A.; Scotti, S. J.

    1991-01-01

    Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.

  5. Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems

    NASA Astrophysics Data System (ADS)

    Cerro, J. A.; Scotti, S. J.

    1991-07-01

    Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.

  6. Small and efficient basis sets for the evaluation of accurate interaction energies: aromatic molecule-argon ground-state intermolecular potentials and rovibrational states.

    PubMed

    Cybulski, Hubert; Baranowska-Łączkowska, Angelika; Henriksen, Christian; Fernández, Berta

    2014-11-06

    By evaluating a representative set of CCSD(T) ground state interaction energies for van der Waals dimers formed by aromatic molecules and the argon atom, we test the performance of the polarized basis sets of Sadlej et al. (J. Comput. Chem. 2005, 26, 145; Collect. Czech. Chem. Commun. 1988, 53, 1995) and the augmented polarization-consistent bases of Jensen (J. Chem. Phys. 2002, 117, 9234) in providing accurate intermolecular potentials for the benzene-, naphthalene-, and anthracene-argon complexes. The basis sets are extended by addition of midbond functions. As reference we consider CCSD(T) results obtained with Dunning's bases. For the benzene complex a systematic basis set study resulted in the selection of the (Z)Pol-33211 and the aug-pc-1-33321 bases to obtain the intermolecular potential energy surface. The interaction energy values and the shape of the CCSD(T)/(Z)Pol-33211 calculated potential are very close to the best available CCSD(T)/aug-cc-pVTZ-33211 potential with the former basis set being considerably smaller. The corresponding differences for the CCSD(T)/aug-pc-1-33321 potential are larger. In the case of the naphthalene-argon complex, following a similar study, we selected the (Z)Pol-3322 and aug-pc-1-333221 bases. The potentials show four symmetric absolute minima with energies of -483.2 cm(-1) for the (Z)Pol-3322 and -486.7 cm(-1) for the aug-pc-1-333221 basis set. To further check the performance of the selected basis sets, we evaluate intermolecular bound states of the complexes. The differences between calculated vibrational levels using the CCSD(T)/(Z)Pol-33211 and CCSD(T)/aug-cc-pVTZ-33211 benzene-argon potentials are small and for the lowest energy levels do not exceed 0.70 cm(-1). Such differences are substantially larger for the CCSD(T)/aug-pc-1-33321 calculated potential. For naphthalene-argon, bound state calculations demonstrate that the (Z)Pol-3322 and aug-pc-1-333221 potentials are of similar quality. The results show that these

  7. Repair, Evaluation, Maintenance, and Rehabilitation Research Program: Explicit Numerical Algorithm for Modeling Incompressible Approach Flow

    DTIC Science & Technology

    1989-03-01

    by Colorado State University, Fort Collins, CO, for US Army Engineer Waterways Experiment Station, Vicksburg, MS. Thompson, J. F. 1983 (Mar). "A...Waterways Experiment Station, Vicksburg, MS. Thompson, J. F., and Bernard, R. S. 1985 (Aug). "WESSEL: Code for Numerical Simulation of Two-Dimensional Time

  8. Numerical models to evaluate the temperature increase induced by ex vivo microwave thermal ablation.

    PubMed

    Cavagnaro, M; Pinto, R; Lopresto, V

    2015-04-21

    Microwave thermal ablation (MTA) therapies exploit the local absorption of an electromagnetic field at microwave (MW) frequencies to destroy unhealthy tissue, by way of a very high temperature increase (about 60 °C or higher). To develop reliable interventional protocols, numerical tools able to correctly foresee the temperature increase obtained in the tissue would be very useful. In this work, different numerical models of the dielectric and thermal property changes with temperature were investigated, looking at the simulated temperature increments and at the size of the achievable zone of ablation. To assess the numerical data, measurement of the temperature increases close to a MTA antenna were performed in correspondence with the antenna feed-point and the antenna cooling system, for increasing values of the radiated power. Results show that models not including the changes of the dielectric and thermal properties can be used only for very low values of the power radiated by the antenna, whereas a good agreement with the experimental values can be obtained up to 20 W if water vaporization is included in the numerical model. Finally, for higher power values, a simulation that dynamically includes the tissue's dielectric and thermal property changes with the temperature should be performed.

  9. Numerical models to evaluate the temperature increase induced by ex vivo microwave thermal ablation

    NASA Astrophysics Data System (ADS)

    Cavagnaro, M.; Pinto, R.; Lopresto, V.

    2015-04-01

    Microwave thermal ablation (MTA) therapies exploit the local absorption of an electromagnetic field at microwave (MW) frequencies to destroy unhealthy tissue, by way of a very high temperature increase (about 60 °C or higher). To develop reliable interventional protocols, numerical tools able to correctly foresee the temperature increase obtained in the tissue would be very useful. In this work, different numerical models of the dielectric and thermal property changes with temperature were investigated, looking at the simulated temperature increments and at the size of the achievable zone of ablation. To assess the numerical data, measurement of the temperature increases close to a MTA antenna were performed in correspondence with the antenna feed-point and the antenna cooling system, for increasing values of the radiated power. Results show that models not including the changes of the dielectric and thermal properties can be used only for very low values of the power radiated by the antenna, whereas a good agreement with the experimental values can be obtained up to 20 W if water vaporization is included in the numerical model. Finally, for higher power values, a simulation that dynamically includes the tissue’s dielectric and thermal property changes with the temperature should be performed.

  10. An evaluation of analog and numerical techniques for unsteady heat transfer measurement with thin-film gauges in transient facilities

    NASA Technical Reports Server (NTRS)

    George, William K.; Rae, William J.; Woodward, Scott H.

    1991-01-01

    The importance of frequency response considerations in the use of thin-film gages for unsteady heat transfer measurements in transient facilities is considered, and methods for evaluating it are proposed. A departure frequency response function is introduced and illustrated by an existing analog circuit. A Fresnel integral temperature which possesses the essential features of the film temperature in transient facilities is introduced and is used to evaluate two numerical algorithms. Finally, criteria are proposed for the use of finite-difference algorithms for the calculation of the unsteady heat flux from a sampled temperature signal.
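    A representative finite-difference algorithm of the kind evaluated in the abstract above is the classic Cook-Felderman scheme, which reconstructs the unsteady surface heat flux of a semi-infinite substrate from a sampled thin-film temperature signal. The sketch below is a generic illustration of that scheme, not the specific algorithms assessed in the paper; the thermal product and flux values are arbitrary, and the check uses the known constant-flux solution T(t) = 2 q0 sqrt(t / (pi rho c k)).

```python
import math

def cook_felderman(times, temps, rck):
    """Surface heat flux q(t_n) from sampled surface temperature T(t_i) on a
    semi-infinite substrate; rck is the thermal product rho * c * k."""
    q = [0.0]
    for n in range(1, len(times)):
        s = 0.0
        for i in range(1, n + 1):
            s += (temps[i] - temps[i - 1]) / (
                math.sqrt(times[n] - times[i]) + math.sqrt(times[n] - times[i - 1])
            )
        q.append(2.0 * math.sqrt(rck / math.pi) * s)
    return q

# Synthetic check: a constant flux q0 applied at t = 0 produces the surface
# temperature history T(t) = 2 * q0 * sqrt(t / (pi * rck)).
rck = 1.5e6                        # illustrative rho*c*k, SI units
q0 = 1.0e4                         # 10 kW/m^2
ts = [i * 1e-4 for i in range(501)]
Ts = [2.0 * q0 * math.sqrt(t / (math.pi * rck)) for t in ts]
q_rec = cook_felderman(ts, Ts, rck)   # should recover approximately q0
```

    The sampled-signal setting is exactly where the frequency-response and finite-difference criteria discussed in the paper matter: the reconstruction differentiates the data, so noise and sampling rate directly limit accuracy.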

  11. Gulf of Mexico numerical model. Project summary. [For evaluating thermal impact of OTEC

    SciTech Connect

    Blumberg, A.F.; Mellor, G.L.; Herring, H.J.

    1981-02-01

    The objective of this investigation is to develop and assess the skill of a three-dimensional, prognostic numerical model of the Gulf of Mexico and to identify the thermal impact of operating OTEC (Ocean Thermal Energy Conversion) power plants on the physical environment. The investigation consists of the following technical elements: adaptation and refinement of an existing multi-layer numerical model to the Gulf of Mexico basin, including realistic boundary conditions and bottom topography, to predict the time dependent circulation and temperature distribution in the Gulf throughout the year; comparison of the model predictions with the observed features of the Gulf; and application of the model to the case of OTEC power plants in the Gulf to estimate the physical perturbations to the environment.

  12. Numerical evaluation of external magnetic effect on electromagnetic wave transmission through reentry plasma layer

    NASA Astrophysics Data System (ADS)

    Zhao, Qing; Bo, Yong; Lei, Mingda; Liu, Shuzhang; Liu, Ying; Liu, Jianwei; Zhao, Yizhe

    2016-11-01

    Numerical study of electromagnetic (EM) wave transmission through the magnetized plasma layer is presented in this paper. The plasma parameters are derived from computational fluid dynamics simulation of the flow field around a blunt body flying at supersonic speed and serve as the background plasma condition in the numerical modeling for EM wave transmission. The EM wave is generated by our newly designed coaxial feed GPS patch antenna. The external magnetic field is applied and assumed to vary linearly as a function of wall distance. The effects of the external applied magnetic field and the plasma parameters on wave transmission are studied, and the results show that EM wave propagation in the non-uniformly magnetized plasma is a matter of impedance matching, and the EM wave transmission can be adjusted only when the proper strength of the magnetic field is applied.

  13. Numerical Skills of Navy Students: An Evaluation of a Skill Development Workbook.

    DTIC Science & Technology

    1980-12-01

    significantly for the two parts or total test. (See table 2.) The reliability estimates for the two forms were computed using the Kuder-Richardson (K... Richardson Formula 20 correlation S-B = Spearman-Brown correlation 6 Technical Note 8-80 indicate that the two forms of the Navy Numerical Skills Test had...the basis of topics with one or more lessons on each topic. The introduction to each topic deals with the significance, concept, and/or formulas

  14. Stress analysis and damage evaluation of flawed composite laminates by hybrid-numerical methods

    NASA Technical Reports Server (NTRS)

    Yang, Yii-Ching

    1992-01-01

    Structural components in flight vehicles often contain inherent flaws, such as microcracks, voids, holes, and delaminations. These defects degrade a structure in the same way as damage incurred in service, such as impact, corrosion, and erosion. It is very important to know how useful a structural component remains, and whether it can survive, with these flaws and damages present. To understand the behavior and limitations of such structural components, researchers usually perform experimental tests or theoretical analyses on structures with simulated flaws. However, neither approach alone has been completely successful. As Durelli states, 'Seldom does one method give a complete solution, with the most efficiency.' An example of this principle is seen in photomechanics, where additional strain-gage testing can only average stresses at locations of high concentration. On the other hand, theoretical analyses, including numerical analyses, are implemented with simplified assumptions that may not reflect actual boundary conditions. Hybrid-numerical methods, which combine photomechanics and numerical analysis, have been used to correct this inefficiency since the 1950s, but their application was limited until the 1970s, when modern computer codes became available. In recent years, researchers have enhanced the data obtained from photoelasticity, laser speckle, holography, and moiré interferometry for input to finite element analysis on metals. Nevertheless, little of this work has been done on composite laminates. Therefore, this research is dedicated to this highly anisotropic material.

  15. Evaluation of gravimetric and volumetric dispensers of particles of nuclear material. [Accurate dispensing of fissile and fertile fuel into fuel rods

    SciTech Connect

    Bayne, C.K.; Angelini, P.

    1981-08-01

    Theoretical and experimental studies compared the abilities of volumetric and gravimetric dispensers to accurately dispense fissile and fertile fuel particles. Such devices are being developed for the fabrication of sphere-pac fuel rods for high-temperature gas-cooled, light water, and fast breeder reactors. The theoretical examination suggests that, although the fuel particles are dispensed more accurately by the gravimetric dispenser, the amount of nuclear material in the fuel particles dispensed by the two methods is not significantly different. The experimental results demonstrated that the volumetric dispenser can dispense both fuel particles and nuclear materials that meet standards for fabricating fuel rods. Performance of the more complex gravimetric dispenser was not significantly better than that of the simple yet accurate volumetric dispenser.

  16. Theory of axially symmetric cusped focusing: numerical evaluation of a Bessoid integral by an adaptive contour algorithm

    NASA Astrophysics Data System (ADS)

    Kirk, N. P.; Connor, J. N. L.; Curtis, P. R.; Hobbs, C. A.

    2000-07-01

    A numerical procedure for the evaluation of the Bessoid canonical integral J(x, y) is described. J(x, y) is defined, for x and y real, by eq1 where J0(·) is a Bessel function of order zero. J(x, y) plays an important role in the description of cusped focusing when axial symmetry is present. It arises in the diffraction theory of aberrations, in the design of optical instruments and of highly directional microwave antennas, and in the theory of image formation for high-resolution electron microscopes. The numerical procedure replaces the integration path along the real t axis with a more convenient contour in the complex t plane, thereby rendering the oscillatory integrand more amenable to numerical quadrature. The computations use a modified version of the CUSPINT computer code (Kirk et al 2000 Comput. Phys. Commun. at press), which evaluates the cuspoid canonical integrals and their first-order partial derivatives. Plots and tables of J(x, y) and its zeros are presented for the grid -8.0 ≤ x ≤ 8.0 and -8.0 ≤ y ≤ 8.0. Some useful series expansions of J(x, y) are also derived.
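    The contour trick described in the abstract above — trading an oscillatory integrand on the real axis for a decaying one in the complex plane — can be shown on the elementary Fresnel-type integral I = ∫₀^∞ exp(i t²) dt, whose closed form is sqrt(π/8)(1 + i). Rotating the path to t = u·exp(iπ/4) turns the oscillation into Gaussian decay. This one-dimensional sketch only illustrates the principle behind the adaptive contour algorithm; it is not the CUSPINT code or the Bessoid integral itself.

```python
import cmath
import math

def fresnel_by_rotation(u_max=6.0, n=6000):
    """Evaluate I = int_0^inf exp(i t^2) dt by rotating the contour to
    t = u * exp(i pi/4): the substitution maps the oscillatory integrand
    onto exp(-u^2), which plain trapezoidal quadrature handles easily."""
    phase = cmath.exp(1j * math.pi / 4)
    h = u_max / n
    # after substitution the integrand is exp(i (u*phase)^2) * phase = exp(-u^2) * phase
    f = lambda u: cmath.exp(1j * (u * phase) ** 2) * phase
    total = 0.5 * (f(0.0) + f(u_max))
    for k in range(1, n):
        total += f(k * h)
    return total * h

I = fresnel_by_rotation()
exact = math.sqrt(math.pi / 8) * (1 + 1j)   # known closed form
```

    Attempting the same trapezoidal rule on the real axis would fail: exp(i t²) never decays, whereas the rotated integrand is negligible beyond u ≈ 6.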

  17. An experimental evaluation of a helicopter rotor section designed by numerical optimization

    NASA Technical Reports Server (NTRS)

    Hicks, R. M.; Mccroskey, W. J.

    1980-01-01

    The wind tunnel performance of a 10-percent-thick helicopter rotor section designed by numerical optimization is presented. The model was tested at Mach numbers from 0.2 to 0.84, with Reynolds numbers ranging from 1,900,000 at Mach 0.2 to 4,000,000 at Mach numbers above 0.5. The airfoil section exhibited maximum lift coefficients greater than 1.3 at Mach numbers below 0.45 and a drag divergence Mach number of 0.82 for lift coefficients near 0. A moderate 'drag creep' is observed at low lift coefficients for Mach numbers greater than 0.6.

  18. Numerical performance evaluation of design modifications on a centrifugal pump impeller running in reverse mode

    NASA Astrophysics Data System (ADS)

    Kassanos, Ioannis; Chrysovergis, Marios; Anagnostopoulos, John; Papantonis, Dimitris; Charalampopoulos, George

    2016-06-01

    In this paper the effect of impeller design variations on the performance of a centrifugal pump running as a turbine is presented. Numerical simulations were performed after introducing various modifications to the design, for various operating conditions. Specifically, the effects of the inlet edge shape, the meridional channel width, the number of blades, and the addition of splitter blades on impeller performance were investigated. The results showed that an increase in efficiency can be achieved by increasing the number of blades and by introducing splitter blades.

  19. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    SciTech Connect

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in accuracy, convergence, and consistency. It is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
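    The path-sampling identity behind thermodynamic integration, ln Z = ∫₀¹ E_β[ln L] dβ with expectations taken under the power posterior p_β ∝ L^β × prior, can be checked on a toy problem where the marginal likelihood has a closed form. The sketch below uses a conjugate Gaussian prior/likelihood pair so each power posterior can be sampled exactly (standing in for the MCMC runs the paper uses); all parameter values are arbitrary illustrations, not the study's groundwater models.

```python
import math
import random

def log_lik(theta, y=1.0, sigma=1.0):
    """Gaussian log-likelihood of a single datum y given mean theta."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (y - theta)**2 / (2 * sigma**2)

def thermodynamic_integration(y=1.0, sigma=1.0, sigma0=2.0,
                              n_rungs=40, n_samp=4000, seed=1):
    """ln Z = int_0^1 E_beta[ln L] d beta; for a Gaussian prior N(0, sigma0^2)
    the power posterior at each beta is Gaussian, so it is sampled exactly."""
    rng = random.Random(seed)
    betas = [i / (n_rungs - 1) for i in range(n_rungs)]
    e_lnl = []
    for b in betas:
        prec = b / sigma**2 + 1.0 / sigma0**2        # power-posterior precision
        mu = (b * y / sigma**2) / prec               # power-posterior mean
        sd = 1.0 / math.sqrt(prec)
        e_lnl.append(sum(log_lik(rng.gauss(mu, sd), y, sigma)
                         for _ in range(n_samp)) / n_samp)
    # trapezoidal rule along the beta path from prior (beta=0) to posterior (beta=1)
    return sum(0.5 * (e_lnl[i] + e_lnl[i + 1]) * (betas[i + 1] - betas[i])
               for i in range(n_rungs - 1))

ln_z = thermodynamic_integration()
# analytic marginal likelihood: y ~ N(0, sigma0^2 + sigma^2) = N(0, 5)
ln_z_exact = -0.5 * math.log(2 * math.pi * 5.0) - 1.0 / (2 * 5.0)
```

    Setting beta = 0 in the rung loop recovers sampling from the prior (the arithmetic-mean regime) and beta = 1 the posterior (the harmonic-mean regime); the integral over intermediate rungs is what stabilizes the estimate.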

  20. Evaluation of 3 numerical methods for propulsion integration studies on transonic transport configurations

    NASA Technical Reports Server (NTRS)

    Yaros, S. F.; Carlson, J. R.; Chandrasekaran, B.

    1986-01-01

    An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equation; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.

  1. Evaluation of three numerical methods for propulsion integration studies on transonic transport configurations

    NASA Technical Reports Server (NTRS)

    Yaros, Steven F.; Carlson, John R.; Chandrasekaran, Balasubramanyan

    1986-01-01

    An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equation; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.

  2. EVALUATION OF U10MO FUEL PLATE IRRADIATION BEHAVIOR VIA NUMERICAL AND EXPERIMENTAL BENCHMARKING

    SciTech Connect

    Samuel J. Miller; Hakan Ozaltun

    2012-11-01

    This article analyzes dimensional changes due to irradiation of monolithic plate-type nuclear fuel and compares the results with finite element analysis of the plates during fabrication and irradiation. Monolithic fuel plates tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) are being used to benchmark proposed fuel performance for several high-power research reactors. Post-irradiation metallographic images of plates sectioned at the midpoint were analyzed to determine dimensional changes of the fuel and the cladding response. A constitutive model of the fabrication process and irradiation behavior of the tested plates was developed using the general-purpose commercial finite element analysis package Abaqus. Using calculated burn-up profiles of irradiated plates to model the power distribution, and including irradiation behaviors such as swelling and irradiation-enhanced creep, the model simulations allow analysis of plate parameters that are impossible or infeasible to study in an experimental setting. The development and progression of fabrication-induced stress concentrations at the plate edges was of primary interest, as these locations have a unique stress profile during irradiation. Additionally, a comparison between 2D and 3D models was performed to optimize the analysis methodology, in particular the ability of 2D and 3D models to account for the out-of-plane stresses that produce 3-dimensional creep behavior. Results show that the assumptions made in 2D models for the out-of-plane stresses and strains cannot capture the 3-dimensional physics accurately, and thus 2D approximations are not computationally accurate. Stress-strain fields depend on plate geometry and irradiation conditions; thus, if stress-based criteria are used to predict plate behavior (as opposed to material impurities, fine microstructural defects, or sharp power gradients), a unique 3D finite element formulation for each plate is required.

  3. Thermophysical properties of medium density fiberboards measured by quasi-stationary method: experimental and numerical evaluation

    NASA Astrophysics Data System (ADS)

    Troppová, Eva; Tippner, Jan; Hrčka, Richard

    2017-01-01

    This paper presents an experimental measurement of thermal properties of medium density fiberboards with different thicknesses (12, 18 and 25 mm) and sample sizes (50 × 50 mm and 100 × 100 mm) by quasi-stationary method. The quasi-stationary method is a transient method which allows measurement of three thermal parameters (thermal conductivity, thermal diffusivity and heat capacity). The experimentally gained values were used to verify a numerical model and furthermore served as input parameters for the numerical probabilistic analysis. The sensitivity of measured outputs (time course of temperature) to influential factors (density, heat transfer coefficient and thermal conductivities) was established and described by the Spearman's rank correlation coefficients. The dependence of thermal properties on density was confirmed by the data measured. Density was also proved to be an important factor for sensitivity analyses as it highly correlated with all output parameters. The accuracy of the measurement method can be improved based on the results of the probabilistic analysis. The relevancy of the experiment is mainly influenced by the choice of a proper ratio between thickness and width of samples.
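    The Spearman rank correlation used in the sensitivity analysis above is simply the Pearson correlation of rank-transformed samples. The sketch below is a generic implementation with made-up density/temperature pairs, not the study's probabilistic analysis.

```python
def ranks(xs):
    """1-based average ranks; tied values share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2.0 + 1.0       # mean rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# e.g. board density vs. a simulated temperature output (hypothetical samples):
rho = spearman([610, 655, 700, 745, 790], [21.4, 21.9, 22.1, 22.6, 23.0])
```

    Because it works on ranks, rho captures any monotone dependence (here a perfect one, rho = 1), which is why it suits sensitivity screening of input factors against model outputs.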

  4. Evaluating aerosol impacts on Numerical Weather Prediction in two extreme dust and biomass-burning events

    NASA Astrophysics Data System (ADS)

    Remy, Samuel; Benedetti, Angela; Jones, Luke; Razinger, Miha; Haiden, Thomas

    2014-05-01

    The WMO-sponsored Working Group on Numerical Experimentation (WGNE) set up a project aimed at understanding the importance of aerosols for numerical weather prediction (NWP). Three cases are being investigated by several NWP centres with aerosol capabilities: a severe dust case that affected Southern Europe in April 2012, a biomass burning case in South America in September 2012, and an extreme pollution event in Beijing (China) which took place in January 2013. At ECMWF these cases are being studied using the MACC-II system with radiatively interactive aerosols. Some preliminary results related to the dust and fire events will be presented here. A preliminary verification of the impact of the aerosol-radiation direct interaction on surface meteorological parameters such as 2 m temperature and surface winds over the region of interest will be presented. Aerosol optical depth (AOD) verification using AERONET data will also be discussed. For the biomass burning case, the impact of using injection heights estimated by a Plume Rise Model (PRM) for the biomass burning emissions will be presented.

  5. Doppler echo evaluation of pulmonary venous-left atrial pressure gradients: human and numerical model studies

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; Prior, D. L.; Scalia, G. M.; Thomas, J. D.; Garcia, M. J.

    2000-01-01

    The simplified Bernoulli equation relates fluid convective energy derived from flow velocities to a pressure gradient and is commonly used in clinical echocardiography to determine pressure differences across stenotic orifices. Its application to pulmonary venous flow has not been described in humans. Twelve patients undergoing cardiac surgery had simultaneous high-fidelity pulmonary venous and left atrial pressure measurements and pulmonary venous pulsed Doppler echocardiography performed. Convective gradients for the systolic (S), diastolic (D), and atrial reversal (AR) phases of pulmonary venous flow were determined using the simplified Bernoulli equation and correlated with measured actual pressure differences. A linear relationship was observed between the convective (y) and actual (x) pressure differences for the S (y = 0.23x + 0.0074, r = 0.82) and D (y = 0.22x + 0.092, r = 0.81) waves, but not for the AR wave (y = 0.030x + 0.13, r = 0.10). Numerical modeling resulted in similar slopes for the S (y = 0.200x - 0.127, r = 0.97), D (y = 0.247x - 0.354, r = 0.99), and AR (y = 0.087x - 0.083, r = 0.96) waves. Consistent with numerical modeling, the convective term strongly correlates with, but significantly underestimates, the actual gradient because of large inertial forces.
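    The convective term referred to above is the clinical form of the simplified Bernoulli equation, Δp ≈ 4v² (v in m/s, Δp in mmHg), applied to the peak velocity of each pulmonary venous flow phase. The sketch below shows that calculation with illustrative velocities, not the study's patient data.

```python
def simplified_bernoulli(v_mps):
    """Convective pressure gradient in mmHg from a Doppler velocity in m/s:
    delta_p = 4 * v^2 (the clinical simplified Bernoulli equation)."""
    return 4.0 * v_mps ** 2

# illustrative peak velocities for the S, D, and AR phases (m/s)
grads = {phase: simplified_bernoulli(v)
         for phase, v in {"S": 0.6, "D": 0.5, "AR": 0.3}.items()}
# e.g. a 0.6 m/s S-wave peak gives a convective gradient of 4 * 0.36 = 1.44 mmHg
```

    The study's regressions (slopes near 0.2) imply this convective estimate recovers only a fraction of the measured gradient at pulmonary venous velocities, since the neglected inertial term dominates at such low flow speeds.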

  6. Evaluation of Sulfur Flow Emplacement on Io from Galileo Data and Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Williams, David A.; Greeley, Ronald; Lopes, Rosaly M. C.; Davies, Ashley G.

    2001-01-01

    Galileo images of bright lava flows surrounding Emakong Patera have been analyzed and numerical modeling has been performed to assess whether these flows could have resulted from the emplacement of sulfur lavas on Io. Images from the solid-state imaging (SSI) camera show that these bright, white to yellow Emakong flows are up to 370 km long and contain dark, sinuous features that are interpreted to be lava conduits, ~300-500 m wide and >100 km long. Near-Infrared Mapping Spectrometer data yield a temperature estimate of 344 K ± 60 K (~71 °C) within the Emakong caldera. We suggest that these bright flows likely resulted from either sulfur lavas or silicate lavas that have undergone extensive cooling, pyroclastic mantling, and/or alteration with bright sulfurous materials. The Emakong bright flows have an estimated volume of ~250-350 km³, similar to some of the smaller Columbia River Basalt flows. If the Emakong flows did result from effusive sulfur eruptions, then they are orders of magnitude greater in volume than any terrestrial sulfur flows. Our numerical modeling indicates that such flows are capable of traveling tens to hundreds of kilometers, consistent with the predictions of Sagan. Our modeled flow distances are also consistent with the measured lengths of the Emakong channels and bright flows.

  7. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in accuracy, convergence, and consistency. It is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.

  8. Numerical evaluation of Auger recombination coefficients in relaxed and strained germanium

    NASA Astrophysics Data System (ADS)

    Dominici, Stefano; Wen, Hanqing; Bertazzi, Francesco; Goano, Michele; Bellotti, Enrico

    2016-05-01

    The potential applications of germanium and its alloys in infrared silicon-based photonics have led to a renewed interest in their optical properties. In this letter, we report on the numerical determination of Auger coefficients at T = 300 K for relaxed and biaxially strained germanium. We use a Green's function based model that takes into account all relevant direct and phonon-assisted processes and perform calculations up to a strain level corresponding to the transition from indirect to direct energy gap. We have considered excess carrier concentrations ranging from 10^16 cm^-3 to 5 × 10^19 cm^-3. For use in device level simulations, we also provide fitting formulas for the calculated electron and hole Auger coefficients as functions of carrier density.

  9. Numerical evaluation of an innovative cup layout for open volumetric solar air receivers

    NASA Astrophysics Data System (ADS)

    Cagnoli, Mattia; Savoldi, Laura; Zanino, Roberto; Zaversky, Fritz

    2016-05-01

    This paper proposes an innovative volumetric solar absorber design to be used in high-temperature air receivers of solar power tower plants. The innovative absorber, a so-called CPC-stacked-plate configuration, applies the well-known principle of the compound parabolic concentrator (CPC) for the first time in a volumetric solar receiver, heating air to high temperatures. The proposed absorber configuration is analyzed numerically, applying first the open-source ray-tracing software Tonatiuh to obtain the solar flux distribution on the absorber's surfaces. Next, a computational fluid dynamics (CFD) analysis of a representative single channel of the innovative receiver is performed using the commercial CFD software ANSYS Fluent. The solution of the conjugate heat transfer problem shows that the behavior of the new absorber concept is promising; however, further optimization of the geometry will be necessary to exceed the performance of classical absorber designs.

  10. The CAV program for numerical evaluation of laminar natural convection heat transfer in vertical rectangular cavities

    NASA Astrophysics Data System (ADS)

    Novak, Milos H.; Nowak, Edwin S.

    1993-12-01

    To analyze laminar natural convection heat transfer and the fluid flow distribution in vertical rectangular cavities with or without inner partitions, the personal-computer finite difference program CAV is used. The CAV program was tested successfully for slender cavities with aspect ratios as high as R = H/L = 90 and for Grashof numbers, based on the cavity height, up to Gr_H = 3 × 10^9. To make the CAV program useful for a range of applications, various types of boundary conditions can also be imposed on the program calculations. Program applications dealing with the 2-D numerical analysis of natural convection heat transfer in very slender window cavities, with and without small inner partitions, are presented, and recommendations are made for window design.

  11. Carbon capture and storage reservoir properties from poroelastic inversion: A numerical evaluation

    NASA Astrophysics Data System (ADS)

    Lepore, Simone; Ghose, Ranajit

    2015-11-01

    We investigate the prospect of estimating carbon capture and storage (CCS) reservoir properties from P-wave intrinsic attenuation and velocity dispersion. Numerical analogues for two CCS reservoirs are examined: the Utsira saline formation at Sleipner (Norway) and the coal-bed methane basin at Atzbach-Schwanestadt (Austria). P-wave intrinsic dispersion curves in the field-seismic frequency band, obtained from theoretical studies based on simulation of oscillatory compressibility and shear tests upon representative rock samples, are considered as observed data. We carry out forward modelling using poroelasticity theories, making use of previously established empirical relations, pertinent to CCS reservoirs, to link pressure, temperature and CO2 saturation to other properties. To derive the reservoir properties, poroelastic inversions are performed through a global multiparameter optimization using simulated annealing. We find that combining attenuation and velocity dispersion in the error function helps significantly in eliminating the local minima and obtaining a stable inversion result. This is because of the presence of convexity in the solution space when an integrated error function is minimized, which is governed by the underlying physics. The results show that, even in the presence of fairly large model discrepancies, the inversion provides reliable values for the reservoir properties, with the error being less than 10% for most of them. The estimated values of velocity and attenuation and their sensitivity to effective stress and CO2 saturation generally agree with earlier experimental observations. Although developed and tested for numerical analogues of CCS reservoirs, the approach presented here can be adapted to predict key properties in fluid-bearing porous reservoirs in general.
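    The inversion strategy described (simulated annealing on an error function that combines attenuation and dispersion misfits) can be sketched on a toy problem. The forward model, parameter values, and misfit weights below are invented for illustration and are not the paper's poroelasticity theory:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the poroelastic forward model: it maps two "reservoir"
# parameters (porosity phi, CO2 saturation sg) to velocity-dispersion and
# attenuation curves.  The functional forms are illustrative only.
freqs = np.linspace(10.0, 100.0, 20)

def forward(phi, sg):
    vel = 2.0 - 0.8 * phi + 0.3 * sg + 0.05 * np.sqrt(freqs / 10.0)  # km/s
    att = 0.02 + 0.1 * phi * sg + 1e-5 * freqs                       # 1/Q
    return vel, att

true = (0.35, 0.60)
vel_obs, att_obs = forward(*true)

def error(p):
    vel, att = forward(*p)
    # Combining the velocity and attenuation misfits, as the paper does,
    # removes the ambiguity each observable has on its own.
    return np.mean((vel - vel_obs) ** 2) + 100.0 * np.mean((att - att_obs) ** 2)

# Plain simulated annealing with a geometric cooling schedule.
x = np.array([0.20, 0.30])
e = error(x)
T = 0.01
best, best_e = x.copy(), e
for _ in range(4000):
    cand = np.clip(x + rng.normal(0.0, 0.05, size=2), 0.0, 1.0)
    ce = error(cand)
    if ce < e or rng.random() < np.exp((e - ce) / T):
        x, e = cand, ce
        if e < best_e:
            best, best_e = x.copy(), e
    T *= 0.999

print("recovered (phi, sg):", best.round(3), " true:", true)
```

    In this toy the velocity misfit constrains one linear combination of the parameters and the attenuation misfit constrains their product, so only their combination yields a unique minimum, which is the convexification effect the abstract describes.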

  12. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    SciTech Connect

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  13. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

    PubMed

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  14. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

  15. Numerical evaluation of lactoperoxidase inactivation during continuous pulsed electric field processing.

    PubMed

    Buckow, Roman; Semrau, Julius; Sui, Qian; Wan, Jason; Knoerzer, Kai

    2012-01-01

    A computational fluid dynamics (CFD) model describing the flow, electric field, and temperature distribution of a laboratory-scale pulsed electric field (PEF) treatment chamber with co-field electrode configuration was developed. The predicted temperature increase was validated by means of integral temperature studies using thermocouples at the outlet of each flow cell for grape juice and salt solutions. Simulations of PEF treatments revealed intensity peaks of the electric field and laminar flow conditions in the treatment chamber, causing local temperature hot spots near the chamber walls. Furthermore, the thermal inactivation kinetics of lactoperoxidase (LPO) dissolved in simulated milk ultrafiltrate were determined with a glass capillary method at temperatures ranging from 65 to 80 °C. The temperature dependence of the first-order inactivation rate constants was accurately described by the Arrhenius equation, yielding an activation energy of 597.1 kJ mol^-1. The thermal impact of different PEF processes on LPO activity was estimated by coupling the derived Arrhenius model with the CFD model, and the predicted enzyme inactivation was compared to experimental measurements. Results indicated that LPO inactivation during combined PEF/thermal treatments was largely due to thermal effects, but 5-12% of enzyme inactivation may be related to other electro-chemical effects occurring during PEF treatments.
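    The reported Arrhenius behaviour can be turned into a back-of-the-envelope inactivation calculator. The activation energy is the abstract's value; the reference rate constant k_ref is a hypothetical placeholder, since the abstract does not report one:

```python
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
Ea = 597.1e3       # J mol^-1, activation energy reported in the abstract
T_ref = 273.15 + 70
k_ref = 0.05       # s^-1 at 70 C -- hypothetical reference rate, not from the paper

def k(T_celsius):
    """First-order inactivation rate constant via the Arrhenius equation."""
    T = 273.15 + T_celsius
    return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def residual_activity(T_celsius, t):
    """Fraction of LPO activity remaining after t seconds at constant temperature."""
    return np.exp(-k(T_celsius) * t)

for T in (65, 70, 75, 80):
    print(f"{T} C: k = {k(T):.4g} s^-1, residual after 10 s = {residual_activity(T, 10):.3f}")
```

    With such a large activation energy the rate constant rises roughly twentyfold per 5 °C, which is why the hot spots near the chamber walls dominate the predicted inactivation.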

  16. Three-Dimensional Numerical Evaluation of Thermal Performance of Uninsulated Wall Assemblies

    SciTech Connect

    Ridouane, El Hassan; Bianchi, Marcus V.A.

    2011-11-01

    This study describes a detailed 3D computational fluid dynamics model that evaluates the thermal performance of uninsulated wall assemblies. It accounts for conduction through framing, convection, and radiation and allows for material property variations with temperature. This research was presented at the ASME 2011 International Mechanical Engineering Congress and Exhibition; Denver, Colorado; November 11-17, 2011

  17. Evaluation of Sulfur Flow Emplacement on Io from Galileo Data and Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Williams, David A.; Greeley, Ronald; Lopes, Rosaly M. C.; Davies, Ashley G.

    2001-01-01

    Galileo images of bright lava flows surrounding Emakong Patera have been analyzed and numerical modeling has been performed to assess whether these flows could have resulted from the emplacement of sulfur lavas on Io. Images from the solid-state imaging (SSI) camera show that these bright, white to yellow Emakong flows are up to 370 km long and contain dark, sinuous features interpreted to be lava conduits, approx. 300-500 m wide and >100 km long. Near-Infrared Mapping Spectrometer (NIMS) thermal emission data yield a color temperature estimate of 344 ± 60 K (≤131 °C) within the Emakong caldera. We suggest that these bright flows likely resulted from either sulfur lavas or silicate lavas that have undergone extensive cooling, pyroclastic mantling, and/or alteration with bright sulfurous materials. The Emakong bright flows have estimated volumes of approx. 250-350 cu km, similar to some of the smaller Columbia River Basalt flows. If the Emakong flows did result from effusive sulfur eruptions, then they are orders of magnitude greater in volume than any terrestrial sulfur flows. Our numerical modeling results show that sulfur lavas on Io could have been emplaced as turbulent flows capable of traveling tens to hundreds of kilometers, consistent with the predictions of Sagan [1979] and Fink et al. [1983]. Our modeled flow distances are also consistent with the measured lengths of the Emakong channels and bright flows. Modeled thermal erosion rates are approx. 1-4 m/d for flows erupted at approx. 140-180 °C, which are consistent with the melting rates of Kieffer et al. [2000]. The Emakong channels could be thermal erosion channels; however, the morphologic signatures of thermal erosion cannot be discerned from available images. Galileo flybys of Io planned for 2001 will provide excellent opportunities to obtain high-resolution morphologic and color data of Emakong Patera. Such observations could, along

  18. Numerical evaluation of the effectiveness of NO2 and N2O5 generation during the NO ozonation process.

    PubMed

    Wang, Haiqiang; Zhuang, Zhuokai; Sun, Chenglang; Zhao, Nan; Liu, Yue; Wu, Zhongbiao

    2016-03-01

    Wet scrubbing combined with ozone oxidation has become a promising technology for the simultaneous removal of SO2 and NOx from exhaust gas. In this paper, a new 20-species, 76-step detailed kinetic mechanism is proposed for the reactions between O3 and NOx. The concentration of N2O5 was measured using an in-situ IR spectrometer. The numerical evaluation results agreed well with both published experimental results and our own experiments. Key reaction parameters for the generation of NO2 and N2O5 during the NO ozonation process were investigated by numerical simulation. The effect of temperature on producing NO2 was found to be negligible. To produce NO2, the optimal residence time was 1.25 sec with a molar ratio of O3/NO of about 1. For the generation of N2O5, the residence time should be about 8 sec, the temperature of the exhaust gas should be strictly controlled, and the molar ratio of O3/NO should be about 1.75. This study provides a detailed investigation of the reaction parameters of NOx ozonation by numerical simulation, and the results should be helpful for the design and optimization of ozone oxidation combined with wet flue gas desulfurization (WFGD) for the removal of NOx.
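    The residence-time and O3/NO-ratio trends above can be reproduced qualitatively with a drastically reduced mechanism. The three reactions and order-of-magnitude ~298 K rate constants below are a textbook-style sketch for illustration, not the paper's 20-species, 76-step mechanism:

```python
# Reduced 3-reaction sketch of the NO ozonation chain (rate constants are
# ~298 K literature orders of magnitude, cm^3 molecule^-1 s^-1, illustrative):
#   R1: NO  + O3  -> NO2 + O2
#   R2: NO2 + O3  -> NO3 + O2
#   R3: NO2 + NO3 -> N2O5
k1, k2, k3 = 1.9e-14, 3.2e-17, 1.3e-12

def integrate(no0, o3_ratio, t_end, dt=2e-5):
    """Explicit-Euler integration of the reduced mechanism; returns (NO2, N2O5)."""
    no, o3, no2, no3, n2o5 = no0, o3_ratio * no0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        r1 = k1 * no * o3
        r2 = k2 * no2 * o3
        r3 = k3 * no2 * no3
        no += -r1 * dt
        o3 += -(r1 + r2) * dt
        no2 += (r1 - r2 - r3) * dt
        no3 += (r2 - r3) * dt
        n2o5 += r3 * dt
    return no2, n2o5

no0 = 1.2e16  # molecule/cm^3, roughly 500 ppm at 1 atm
# NO2 production favours O3/NO ~ 1 and a short residence time...
no2_a, _ = integrate(no0, 1.0, 1.25)
# ...while N2O5 needs excess O3 and a longer residence time.
_, n2o5_b = integrate(no0, 1.75, 8.0)
print(f"NO2 at O3/NO=1, t=1.25 s:   {no2_a:.3e}")
print(f"N2O5 at O3/NO=1.75, t=8 s:  {n2o5_b:.3e}")
```

    Even this crude sketch shows the separation of time scales: the fast NO + O3 step converts nearly all NO to NO2 within milliseconds, while the much slower NO2 + O3 step controls N2O5 build-up over seconds, hence the longer optimal residence time and higher O3/NO ratio.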

  19. Numerical evaluation of crack growth in polymer electrolyte fuel cell membranes based on plastically dissipated energy

    NASA Astrophysics Data System (ADS)

    Ding, Guoliang; Santare, Michael H.; Karlsson, Anette M.; Kusoglu, Ahmet

    2016-06-01

    Understanding the mechanisms of growth of defects in polymer electrolyte membrane (PEM) fuel cells is essential for improving cell longevity. Characterizing the crack growth in PEM fuel cell membrane under relative humidity (RH) cycling is an important step towards establishing strategies essential for developing more durable membrane electrode assemblies (MEA). In this study, a crack propagation criterion based on plastically dissipated energy is investigated numerically. The accumulation of plastically dissipated energy under cyclical RH loading ahead of the crack tip is calculated and compared to a critical value, presumed to be a material parameter. Once the accumulation reaches the critical value, the crack propagates via a node release algorithm. From the literature, it is well established experimentally that membranes reinforced with expanded polytetrafluoroethylene (ePTFE) reinforced perfluorosulfonic acid (PFSA) have better durability than unreinforced membranes, and through-thickness cracks are generally found under the flow channel regions but not land regions in unreinforced PFSA membranes. We show that the proposed plastically dissipated energy criterion captures these experimental observations and provides a framework for investigating failure mechanisms in ionomer membranes subjected to similar environmental loads.

  20. Analytical and Numerical Evaluation of Limit States of MSE Wall Structure

    NASA Astrophysics Data System (ADS)

    Drusa, Marián; Vlček, Jozef; Holičková, Martina; Kais, Ladislav

    2016-12-01

    Simplifying the design of mechanically stabilized earth wall structures (MSE walls or MSEW) is now an important factor that helps not only to save time and costs, but also to achieve the desired results more reliably. In practice, it is quite common for the designer of a section of motorway or railway line to commission the design from a supplier of geosynthetic materials. The supplier company has the necessary experience and skills, but the general designer often does not review the safety level and efficiency of the design, simply incorporating it into the overall design of the construction project. A large number of analytical computational methods for the analysis and design of MSE walls or similar structures are known; their main limitation is the verification of deformations and the global stability of the structure. This article aims to clarify two methods for calculating the internal stability of an MSE wall and to compare them with an FEM numerical model. Comparing the design approaches allows us to draft an effective retaining wall and indicates the appropriateness of using a reinforcing element.

  1. Numerical evaluation of tree canopy shape near noise barriers to improve downwind shielding.

    PubMed

    Van Renterghem, T; Botteldooren, D

    2008-02-01

    The screen-induced refraction of sound by wind results in a reduced noise shielding for downwind receivers. Placing a row of trees behind a highway noise barrier modifies the wind field, and this was proven to be an important curing measure in previous studies. In this paper, the wind field modification by the canopy of trees near noise barriers is numerically predicted by using common quantitative tree properties. A realistic range of pressure resistance coefficients are modeled, for two wind speed profiles. As canopy shape influences vertical gradients in the horizontal component of the wind velocity, three typical shapes are simulated. A triangular crown shape, where the pressure resistance coefficient is at maximum at the bottom of the canopy and decreases linearly toward the top, is the most interesting configuration. A canopy with uniform aerodynamic properties with height behaves similarly at low wind speeds. The third crown shape that was modeled is the ellipse form, which has a worse performance than the first two types, but still gives a significant improvement compared to barriers without trees. With increasing wind speed, the optimum pressure resistance coefficient increases. Coniferous trees are more suited than deciduous trees to increase the downwind noise barrier efficiency.

  2. Parametric Evaluation of Absorption Losses and Comparison of Numerical Results to Boeing 707 Aircraft Experimental HIRF Results

    NASA Astrophysics Data System (ADS)

    Kitaygorsky, J.; Amburgey, C.; Elliott, J. R.; Fisher, R.; Perala, R. A.

    A broadband (100 MHz-1.2 GHz) plane-wave electric field source was used to evaluate electric field penetration inside a simplified Boeing 707 aircraft model with a finite-difference time-domain (FDTD) method using EMA3D. The role of absorption losses inside the simplified aircraft was investigated. It was found that, in this frequency range, none of the cavities inside the Boeing 707 model is truly reverberant when frequency stirring is applied, and a purely statistical electromagnetics approach cannot be used to predict or analyze the field penetration or shielding effectiveness (SE). Our goal was therefore to understand the nature of losses in such a quasi-statistical environment by adding varying numbers of absorbing objects inside the simplified aircraft and evaluating the SE, decay-time constant τ, and quality factor Q. We then compare our numerical results with experimental results obtained by D. Mark Johnson et al. on a decommissioned Boeing 707 aircraft.
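    The cavity metrics named above are related by simple formulas: the composite quality factor follows from the energy decay-time constant as Q = 2πfτ, and SE is the decibel ratio of incident to internal field magnitude. A minimal sketch with illustrative numbers (not the Boeing 707 measurements):

```python
import math

# Relations used when characterising a lossy (quasi-reverberant) cavity.

def shielding_effectiveness_db(e_incident, e_internal):
    """SE in dB from rms field magnitudes."""
    return 20.0 * math.log10(e_incident / e_internal)

def quality_factor(freq_hz, tau_s):
    """Composite cavity Q from the energy decay-time constant: Q = 2*pi*f*tau."""
    return 2.0 * math.pi * freq_hz * tau_s

f = 1.0e9      # 1 GHz
tau = 0.5e-6   # 0.5 us decay constant, illustrative
print(f"Q  = {quality_factor(f, tau):.0f}")
print(f"SE = {shielding_effectiveness_db(10.0, 0.5):.1f} dB")
```

    Adding absorbers lowers τ (and hence Q) while raising SE, which is the trend the parametric study in the paper quantifies.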

  3. A GIS tool for the evaluation of the precipitation forecasts of a numerical weather prediction model using satellite data

    NASA Astrophysics Data System (ADS)

    Feidas, Haralambos; Kontos, Themistoklis; Soulakellis, Nikolaos; Lagouvardos, Konstantinos

    2007-08-01

    In this study, the possibility of implementing Geographic Information Systems (GIS) to develop an integrated, automatic operational system for the real-time evaluation of the precipitation forecasts of the numerical weather prediction model BOLAM (BOlogna Limited Area Model) in Greece is examined. Precipitation estimates derived from an infrared satellite technique are used for real-time qualitative and quantitative verification of the BOLAM precipitation forecasts through a GIS tool named the precipitation forecasts evaluator (PFE). The application of the developed tool to a case of intense precipitation in Greece suggested that the PFE could be a very important support tool for nowcasting and very short-range forecasting of such events.

  4. Numerical modeling of debris avalanches at Nevado de Toluca (Mexico): implications for hazard evaluation and mapping

    NASA Astrophysics Data System (ADS)

    Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.

    2007-05-01

    The present study concerns the numerical modeling of debris avalanches at Nevado de Toluca volcano (Mexico) using the TITAN2D simulation software, and its application to the creation of hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central México near the cities of Toluca and México City; its past activity has endangered an area that today has more than 25 million inhabitants. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The volcano was active from 2.6 Ma until 10.5 ka, with both effusive and explosive events, interrupted by long phases of inactivity characterized by erosion and the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are wide debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events occurred mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. The analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were based entirely upon the information stored in the geological database. The modeling software is built upon equations

  5. Determining the optimal planting density and land expectation value -- a numerical evaluation of decision model

    SciTech Connect

    Gong, P. (Dept. of Forest Economics)

    1998-08-01

    Different decision models can be constructed and used to analyze a regeneration decision in even-aged stand management. However, the optimal decision and management outcomes determined in an analysis may depend on the decision model used in the analysis. This paper examines the proper choice of decision model for determining the optimal planting density and land expectation value (LEV) for a Scots pine (Pinus sylvestris L.) plantation in northern Sweden. First, a general adaptive decision model for determining the regeneration alternative that maximizes the LEV is presented. This model recognizes future stand state and timber price uncertainties by including multiple stand state and timber price scenarios, and assumes that the harvest decision in each future period will be made conditional on the observed stand state and timber prices. Alternative assumptions about future stand states, timber prices, and harvest decisions can be incorporated into this general decision model, resulting in several different decision models that can be used to analyze a specific regeneration problem. Next, the consequences of choosing different modeling assumptions are determined using the example Scots pine plantation problem. Numerical results show that the most important sources of uncertainty that affect the optimal planting density and LEV are variations of the optimal clearcut time due to short-term fluctuations of timber prices. It is appropriate to determine the optimal planting density and harvest policy using an adaptive decision model that recognizes uncertainty only in future timber prices. After the optimal decisions have been found, however, the LEV should be re-estimated by incorporating both future stand state and timber price uncertainties.
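    The LEV the decision models maximize is, in its deterministic textbook form, the Faustmann formula for an infinite series of identical rotations. A minimal sketch with hypothetical prices and costs (not the paper's Scots pine data), comparing two planting-density alternatives:

```python
# Land expectation value (Faustmann formula) for one regeneration alternative:
# an infinite series of identical rotations, discounted to the present.
# All numbers below are illustrative, not the Scots pine case in the paper.

def lev(net_harvest_revenue, regeneration_cost, rotation_years, rate):
    """LEV = (R * (1+r)^-T - C) / (1 - (1+r)^-T), per unit area."""
    d = (1.0 + rate) ** (-rotation_years)
    return (net_harvest_revenue * d - regeneration_cost) / (1.0 - d)

# Denser planting costs more now but yields more at clearcut (hypothetical figures).
sparse = lev(net_harvest_revenue=40000, regeneration_cost=5000, rotation_years=90, rate=0.02)
dense = lev(net_harvest_revenue=52000, regeneration_cost=8000, rotation_years=90, rate=0.02)
print(f"LEV sparse: {sparse:.0f}   LEV dense: {dense:.0f}")
```

    The adaptive models in the paper replace the fixed revenue and rotation length with scenario-dependent values and state-contingent harvest decisions; this deterministic version is the baseline those models generalize.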

  6. Evaluation of the numeric rating scale for perception of effort during isometric elbow flexion exercise.

    PubMed

    Lampropoulou, Sofia; Nowicky, Alexander V

    2012-03-01

    The aim of the study was to examine the reliability and validity of the numerical rating scale (0-10 NRS) for rating perception of effort during isometric elbow flexion in healthy people. Thirty-three individuals (32 ± 8 years) participated in the study. Three re-test measurements within one session and three weekly sessions were undertaken to determine the reliability of the scale. The sensitivity of the scale following 10 min of fatiguing isometric exercise of the elbow flexors, as well as the correlation of effort with the electromyographic (EMG) activity of the flexor muscles, were tested. Perception of effort was tested during isometric elbow flexion at 10, 30, 50, 70, 90, and 100% MVC. The 0-10 NRS demonstrated excellent test-retest reliability [intraclass correlation (ICC) = 0.99 between measurements taken within a session and 0.96 between 3 consecutive weekly sessions]. Exploratory curve fitting for the relationship between effort ratings and voluntary force, and underlying EMG, showed that both are best described by power functions (y = ax^b). There were also strong correlations (range 0.89-0.95) between effort ratings and EMG recordings of all flexor muscles, supporting the concurrent criterion validity of the measure. The 0-10 NRS was sensitive enough to detect changes in the perceived effort following fatigue, which significantly increased at the level of voluntary contraction used in its assessment (p < 0.001). These findings suggest the 0-10 NRS is a valid and reliable scale for rating perception of effort in healthy individuals. Future research should seek to establish the validity of the 0-10 NRS in clinical settings.
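    The power function reported above, y = ax^b, is typically fitted by linearising in log-log space and applying least squares. A sketch with hypothetical ratings at the study's %MVC levels (the data points are invented, not the study's measurements):

```python
import numpy as np

# Fitting a Stevens-type power function, effort = a * force^b.
force = np.array([10, 30, 50, 70, 90, 100], dtype=float)   # % MVC levels used in the study
effort = np.array([0.8, 2.5, 4.4, 6.5, 8.9, 10.0])          # hypothetical 0-10 NRS ratings

# Linearise: log(effort) = log(a) + b * log(force), then ordinary least squares.
b, log_a = np.polyfit(np.log(force), np.log(effort), 1)
a = np.exp(log_a)

# Goodness of fit back in the original (untransformed) space.
pred = a * force ** b
r2 = 1 - np.sum((effort - pred) ** 2) / np.sum((effort - effort.mean()) ** 2)
print(f"effort ≈ {a:.3f} * force^{b:.3f},  R^2 = {r2:.3f}")
```

    The log-log fit weights proportional (not absolute) errors, which is usually appropriate for psychophysical ratings spanning an order of magnitude.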

  7. Numerical evaluation of the phase-field model for brittle fracture with emphasis on the length scale

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Vignes, Chet; Sloan, Scott W.; Sheng, Daichao

    2017-01-01

    The phase-field model has been attracting considerable attention due to its capability of capturing complex crack propagation without mesh dependence. However, its validation studies have primarily focused on the ability to predict reasonable, sharply defined crack paths; very little work has so far addressed its accuracy in predicting force responses, largely because of the difficulty of determining the length scale. Indeed, accurate crack path simulation can be achieved by setting the length scale sufficiently small, whereas a very small length scale may lead to unrealistic force-displacement responses and overestimated critical structural loads. This paper aims to provide a critical numerical investigation of the accuracy of phase-field modelling of brittle fracture, with special emphasis on a possible formula for estimating the length scale. To achieve this goal, phase-field simulations of a number of classical brittle fracture experiments on concrete are performed, and the simulated results are compared with experimental data both qualitatively and quantitatively. Discussions are also conducted with the aim of providing guidelines for the application of the phase-field model.
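    One commonly quoted estimate in the phase-field literature ties the length scale to measurable material properties via the tensile strength, ℓ = 27·E·Gc / (256·σt²), for AT2-type quasi-brittle formulations. Whether this is the specific formula the paper examines is not stated in the abstract; the sketch below merely illustrates the idea with typical concrete values:

```python
# Length-scale estimate from material strength (a commonly quoted relation for
# AT2-type phase-field formulations): l = 27 * E * Gc / (256 * sigma_t^2).
# Material values below are typical of concrete, for illustration only.

E = 30e9        # Young's modulus, Pa
Gc = 100.0      # critical energy release rate, N/m
sigma_t = 3e6   # tensile strength, Pa

l = 27.0 * E * Gc / (256.0 * sigma_t ** 2)
print(f"estimated phase-field length scale: {l * 1000:.1f} mm")
```

    Treating ℓ this way makes it a material parameter rather than a purely numerical regularization, which is exactly the tension (crack-path accuracy vs. realistic peak loads) the paper investigates.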

  8. Numerical model for the evaluation of Earthquake effects on a magmatic system.

    NASA Astrophysics Data System (ADS)

    Garg, Deepak; Longo, Antonella; Papale, Paolo

    2016-04-01

    A finite element numerical model is presented to compute the effect of an earthquake on the dynamics of magma in reservoirs with deformable walls. The magmatic system is hit by a Mw 7.2 earthquake (Petrolia/Cape Mendocino, 1992) with its hypocenter at 15 km diagonal distance. At subsequent times the seismic wave reaches the nearest side of the magmatic system boundary, travels through the magmatic fluid, and arrives at the other side of the boundary. The modelled physical system consists of the magmatic reservoir with a thin surrounding layer of rocks. Magma is treated as a homogeneous multicomponent, multiphase Newtonian mixture with exsolution and dissolution of volatiles (H2O+CO2). The magmatic reservoir consists of a small shallow magma chamber filled with degassed phonolite, connected by a vertical dike to a larger, deeper chamber filled with gas-rich shoshonite, in a condition of gravitational instability. The coupling between the earthquake and the magmatic system is computed by solving the elastostatic equation for the deformation of the magmatic reservoir walls, along with the conservation equations for the mass of components and the momentum of the magmatic mixture. The characteristic elastic parameters of rocks are assigned to the computational domain at the boundary of the magmatic system. Physically consistent Dirichlet and Neumann boundary conditions are assigned according to the evolution of the seismic signal. Seismic forced displacements and velocities are imposed on the part of the boundary that is hit by the wave; on the other part of the boundary, motion is governed by the action of fluid pressure and deviatoric stress forces due to fluid dynamics. The constitutive equations for the magma are solved in a monolithic way by a space-time discontinuous-in-time finite element method. To attain additional stability, least-squares and discontinuity-capturing operators are included in the formulation. A partitioned algorithm is used to couple the magma and the thin layer of rocks.

  9. Design of tissue engineering scaffolds based on hyperbolic surfaces: structural numerical evaluation.

    PubMed

    Almeida, Henrique A; Bártolo, Paulo J

    2014-08-01

    Tissue engineering represents a new field aiming at developing biological substitutes to restore, maintain, or improve tissue functions. In this approach, scaffolds provide temporary mechanical and vascular support for tissue regeneration while new tissue grows in. These scaffolds must be biocompatible and biodegradable, with appropriate porosity, pore structure and pore distribution, optimal vascularization, and both surface and structural compatibility. The challenge is to establish a proper balance between the porosity and the mechanical performance of scaffolds. This work investigates the use of two types of triply periodic minimal surfaces, Schwarz and Schoen, to design better biomimetic scaffolds with a high surface-to-volume ratio, high porosity and good mechanical properties. The mechanical behaviour of these structures is assessed with the finite element software Abaqus. The effect of two design parameters (wall thickness and surface radius) on porosity and mechanical behaviour is also evaluated.
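
    As an illustration of the porosity side of this trade-off, the sketch below estimates the pore volume fraction of a Schwarz P scaffold from its common nodal approximation cos x + cos y + cos z = 0, taking the solid wall to be the region within a threshold t of the surface. The nodal form and the thresholding are standard simplifications for illustration, not the paper's exact geometry.

```python
import numpy as np

def schwarz_p_porosity(t, n=80):
    """Pore volume fraction when the wall is |cos x + cos y + cos z| <= t."""
    u = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y, z = np.meshgrid(u, u, u, indexing="ij")
    f = np.cos(x) + np.cos(y) + np.cos(z)
    return float(np.mean(np.abs(f) > t))  # fraction of the unit cell that is pore

# Thickening the wall lowers porosity (and raises stiffness).
p_thin = schwarz_p_porosity(0.2)
p_thick = schwarz_p_porosity(0.8)
```

    Sweeping t in such a sketch reproduces the qualitative porosity/thickness relationship the abstract evaluates with FEM.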

  10. Electronic differential for tramcar bogies: system development and performance evaluation by means of numerical simulation

    NASA Astrophysics Data System (ADS)

    Barbera, Andrea N.; Bucca, Giuseppe; Corradi, Roberto; Facchinetti, Alan; Mapelli, Ferdinando

    2014-05-01

    The dynamic behaviour of railway vehicles depends on the wheelset configuration, i.e. solid axle wheelsets or independently rotating wheels (IRWs). The self-centring behaviour peculiar to the solid axle wheelset makes it very suitable for tangent-track running at low speed: the absence of a self-centring mechanism in IRWs may lead to anomalous wheel/rail wear, reduced vehicle safety and passenger discomfort. On the contrary, when negotiating the sharp curves typical of urban tramways, solid axle wheelsets produce higher lateral contact forces than IRWs. This paper illustrates an electronic differential system to be applied to tramcar bogies equipped with wheel-hub motors, which allows switching from solid-axle behaviour on tangent track to IRW behaviour in sharp curves (and vice versa). An electro-mechanical vehicle model is adopted for the design of the control system and for the evaluation of the vehicle's dynamic performance.

  11. Evaluation of the role of heterogeneities on transverse mixing in bench-scale tank experiments by numerical modeling.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2014-01-01

    In this work, numerical modeling is used to evaluate and interpret a series of detailed and well-controlled two-dimensional bench-scale conservative tracer tank experiments performed to investigate transverse mixing in porous media. The porous medium consists of a fine matrix and a more permeable lens vertically aligned with the tracer source and the flow direction. A sensitivity analysis shows that the tracer distribution after passing the lens is only slightly sensitive to variations in transverse dispersivity, but strongly sensitive to the contrast in hydraulic conductivity. A unique parameter set could be calibrated to closely fit the experimental observations. On the basis of the calibrated and validated model, synthetic experiments with different contrasts in hydraulic conductivity and more complex setups were performed, and the efficiency of mixing was evaluated. Flux-related dilution indices derived from these simulations show that the contrasts in hydraulic conductivity between the matrix and highly permeable lenses, as well as the spatial configuration of tracer plumes and lenses, dominate mixing, rather than the actual pore-scale dispersivities. These results indicate that local material distributions, the magnitude of permeability contrasts, and their spatial and scale relation to solute plumes are more important for macro-scale transverse dispersion than the micro-scale dispersivities of individual materials. Thorough site investigation to characterize local material distributions is hence of utmost importance for the evaluation of mixing-influenced or mixing-governed problems in groundwater, such as tracer test evaluation or assessment of natural attenuation of contaminants.
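
    The dilution index used above condenses a concentration profile into an effective mixing width, E = exp(-∫ p ln p dx), where p is the normalised concentration density (Kitanidis' dilution index; in 1D it has units of length). A minimal sketch with illustrative Gaussian profiles, not the study's simulated fields:

```python
import numpy as np

def dilution_index(c, dx):
    """Exponential of the entropy of the normalised concentration density."""
    p = c / (np.sum(c) * dx)
    p = p[p > 0]
    return float(np.exp(-np.sum(p * np.log(p)) * dx))

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
narrow = np.exp(-x**2 / (2 * 0.2**2))   # poorly mixed plume (sigma = 0.2)
wide = np.exp(-x**2 / (2 * 1.0**2))     # well-mixed plume (sigma = 1.0)
E_narrow = dilution_index(narrow, dx)   # ~ sqrt(2*pi*e) * 0.2
E_wide = dilution_index(wide, dx)       # ~ sqrt(2*pi*e) * 1.0
```

    For a Gaussian of standard deviation sigma the index is sqrt(2*pi*e)*sigma, so a better-mixed (wider) plume yields a larger index, which is how the study quantifies mixing efficiency.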

  12. Numerical simulation and evaluation of a new hydrological model coupled with GRAPES

    NASA Astrophysics Data System (ADS)

    Zheng, Ziyan; Zhang, Wanchang; Xu, Jingwen; Zhao, Linna; Chen, Jing; Yan, Zhongwei

    2012-10-01

    Hydrological processes exert enormous influences on the land surface water and energy balance, and have a close relationship with human society. We have developed a new hydrological runoff parameterization called XXT to improve the performance of a coupled land surface-atmosphere modeling system. The XXT parameterization, which is based upon the Xinanjiang hydrological model and TOPMODEL, includes an optimized function of runoff calculation with a new soil moisture storage capacity distribution curve (SMSCC). We then couple XXT with the Global/Regional Assimilation Prediction System (GRAPES) and compare it to GRAPES coupled with a simple water balance model (SWB). For the model evaluation and comparison, we perform 72-h online simulations using GRAPES-XXT and GRAPES-SWB during two torrential events in August 2007 and July 2008, respectively. The results show that GRAPES can reproduce the rainfall distribution and intensity fairly well in both cases. Differences in the representation of feedback processes between surface hydrology and the atmosphere result in differences in the distributions and amounts of precipitation simulated by GRAPES-XXT and GRAPES-SWB. The runoff simulations are greatly improved by the use of XXT in place of SWB, particularly with respect to the distribution and amount of runoff. The average runoff depth is nearly doubled in the rainbelt area, and unreasonable runoff distributions simulated by GRAPES-SWB are made more realistic by the introduction of XXT. Differences in surface soil moisture between GRAPES-XXT and GRAPES-SWB show that the XXT model changes infiltration and increases surface runoff. We also evaluate river flood discharge in the Yishu River basin. The peak values of flood discharge calculated from the output of GRAPES-XXT agree more closely with observations than those calculated from the output of GRAPES-SWB.

  13. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related, for example, to phase-space integrals.

    Program summary
    Program title: SecDec 2.0
    Catalogue identifier: AEIR_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 156829
    No. of bytes in distributed program, including test data, etc.: 2137907
    Distribution format: tar.gz
    Programming language: Wolfram Mathematica, Perl, Fortran/C++
    Computer: From a single PC to a cluster, depending on the problem
    Operating system: Unix, Linux
    RAM: Depending on the complexity of the problem
    Classification: 4.4, 5, 11.1
    Catalogue identifier of previous version: AEIR_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566
    Does the new version supersede the previous version?: Yes
    Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher-order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds).
    Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization

  14. A Numerical Study of Some Potential Sources of Error in Side-by-Side Seismometer Evaluations

    USGS Publications Warehouse

    Holcomb, L. Gary

    1990-01-01

    INTRODUCTION This report presents the results of a series of computer simulations of potential errors in test data that might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating the potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNR's). This report extends the analysis to high SNR's to determine the limitations of the direct method for calculating noise levels at signal-to-noise levels much higher than considered previously (see Holcomb 1989). Next, the method is used to analyze a simulated side-by-side test in which the outputs of both sensors consist of bandshaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real-world data, because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and bandshaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real-world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that
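
    The core idea behind such a direct method can be sketched in a few lines: two co-located sensors record a common ground signal plus independent instrument noise, so the cross term estimates the signal power and each auto term minus the cross term estimates that sensor's noise power. This is a zero-lag, time-domain analogue with synthetic numbers, not the report's per-frequency spectral derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
s = rng.normal(0.0, 1.0, n)      # common "ground motion", variance 1
n1 = rng.normal(0.0, 0.5, n)     # sensor-1 instrument noise, variance 0.25
n2 = rng.normal(0.0, 0.5, n)     # sensor-2 instrument noise, variance 0.25
x1, x2 = s + n1, s + n2          # the two side-by-side records

p12 = np.mean(x1 * x2)           # cross term -> common signal power
noise1 = np.mean(x1 * x1) - p12  # auto minus cross -> sensor-1 noise power
```

    At high SNR the cross and auto terms become nearly equal, so their small difference is swamped by estimation error; this is the limitation of the direct method the report explores.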

  15. Numerical evaluation of sequential bone drilling strategies based on thermal damage.

    PubMed

    Tai, Bruce L; Palmisano, Andrew C; Belmont, Barry; Irwin, Todd A; Holmes, James; Shih, Albert J

    2015-09-01

    Sequentially drilling multiple holes in bone is used clinically for surface preparation to aid the fusion of a joint, typically under non-irrigated conditions. Drilling induces a significant amount of heat, which accumulates over multiple passes and can result in thermal osteonecrosis and various complications. To understand the heat propagation over time, a 3D finite element model was developed to simulate sequential bone drilling. By incorporating proper material properties and a modified bone necrosis criterion, this model can visualize the propagation of damaged areas. For this study, comparisons between a 2.0 mm Kirschner wire and a 2.0 mm twist drill were conducted, with their heat sources determined using an inverse method and experimentally measured bone temperatures. Three clinically viable ways to reduce thermally induced bone damage were evaluated using finite element analysis: tool selection, the time interval between passes, and different drilling sequences. Results show that the ideal solution would be to use twist drills rather than Kirschner wires if the situation allows. A shorter time interval between passes was also found to be beneficial, as it reduces the total heat exposure time. Lastly, optimizing the drilling sequence reduced the thermal damage to bone, but the effect may be limited. This study demonstrates the feasibility of using the proposed model to study clinical issues and find potential solutions prior to clinical trials.
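
    A minimal 1D finite-difference sketch of the heat accumulation such a model captures: three identical drill passes separated by a cool-down pause, with a heat flux applied at the drilled surface during each pass. All material numbers are invented for illustration. Note that in this toy model a longer pause lowers the peak temperature, whereas the paper's necrosis criterion also weighs total exposure time, which is why a shorter interval can still reduce the overall thermal dose.

```python
import numpy as np

def peak_temp(pause_s, alpha=1e-4, L=0.02, nx=41, q=2000.0, passes=3):
    """Peak temperature rise at the drilled surface over sequential passes."""
    dx = L / (nx - 1)
    dt = 0.2 * dx * dx / alpha              # explicit-scheme stable time step
    a = alpha * dt / dx**2
    T = np.zeros(nx)                        # temperature rise above body temperature
    peak = 0.0
    for _ in range(passes):
        for heating, dur in ((True, 1.0), (False, pause_s)):
            for _ in range(int(round(dur / dt))):
                Tn = T.copy()
                Tn[1:-1] = T[1:-1] + a * (T[2:] - 2 * T[1:-1] + T[:-2])
                Tn[0] = T[0] + 2 * a * (T[1] - T[0]) + (q * dt if heating else 0.0)
                Tn[-1] = 0.0                # far end held at body temperature
                T = Tn
                peak = max(peak, T[0])
    return peak

short_pause_peak = peak_temp(2.0)
long_pause_peak = peak_temp(10.0)
```

    Residual heat from earlier passes raises the starting temperature of later ones, so the shorter pause yields the higher peak.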

  16. Experimental and numerical evaluations on palm microwave heating for Red Palm Weevil pest control

    PubMed Central

    Massa, Rita; Panariello, Gaetano; Pinchera, Daniele; Schettino, Fulvio; Caprio, Emilio; Griffo, Raffaele; Migliore, Marco Donald

    2017-01-01

    The invasive Red Palm Weevil is the major pest of palms. Several control methods have been applied, but some treatments raise concern because they can cause significant environmental pollution. In this context the use of microwaves is particularly attractive. Microwave heating is increasingly proposed for the management of a wide range of agricultural and wood pests, exploiting the thermal death induced in insects, whose thermal tolerance is lower than that of the host matrices. This paper describes research aimed at combating the Red Palm Weevil using microwave heating systems. An electromagnetic-thermal model was developed to better control the temperature profile inside the palm tissues. Both electromagnetic and thermal parameters are involved in this process, the latter being particularly critical because they depend on plant physiology. They were evaluated by fitting a thermal model with a few free parameters to experimental data. The results of the simplified model match well with both those of a commercial 3D software model and measurements on Phoenix canariensis palms treated with a ring microwave applicator. This work confirms that microwave heating is a promising, eco-compatible solution to fight the spread of the weevil. PMID:28361964

  17. Evaluating Geothermal Potential in Germany by Numerical Reservoir Modeling of Engineered Geothermal Systems

    NASA Astrophysics Data System (ADS)

    Jain, Charitra; Vogt, Christian; Clauser, Christoph

    2014-05-01

    We model hypothetical Engineered Geothermal System (EGS) reservoirs by solving the coupled partial differential equations governing fluid flow and heat transport. Building on the EGS strengths of inherent modularity and storage capability, it is possible to implement multiple wells in the reservoir to extend the rock volume accessible to the circulating water and thereby increase the heat yield. By varying parameters such as flow rates and well separations in the subsurface, this study examines their long-term impacts on reservoir development. This approach allows us to experiment with different placements of the engineered fractures and to propose several EGS layouts for optimized heat extraction. Considering the available crystalline area and accounting for competing land uses, this study evaluates the overall EGS potential and compares it with those of other renewables in use in Germany. There is enough area to support 13450 EGS plants, each with six reversed triplets (18 wells) and an average electric power of 35.3 MWe. When operated at full capacity, these systems could collectively supply 4155 TWh of electric energy per year, roughly six times the electric energy produced in Germany in 2011. Engineered Geothermal Systems make a compelling case for contributing to national power production in a future powered by a sustainable, decentralized energy system.
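
    The headline figure follows from simple arithmetic, which is worth checking: number of plants times average power times hours per year.

```python
plants = 13450            # EGS plants the available crystalline area can host
power_mwe = 35.3          # average electric power per plant, MWe
hours_per_year = 8760

energy_twh = plants * power_mwe * hours_per_year / 1e6   # MWh -> TWh
# ~4.16e3 TWh, consistent with the quoted 4155 TWh once rounding of the
# average power is accounted for, i.e. the quoted factor of roughly six
# relative to Germany's 2011 electricity production.
```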

  18. Evaluation of the successive approximations method for acoustic streaming numerical simulations.

    PubMed

    Catarino, S O; Minas, G; Miranda, J M

    2016-05-01

    This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and the acoustic streaming from the first- and second-order Navier-Stokes equations, ignoring the first-order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with those from direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between the two methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first-order step, the acoustic streaming prediction of the successive approximations method can be improved significantly.

  19. Numerical evaluation of the capping tendency of microcrystalline cellulose tablets during a diametrical compression test.

    PubMed

    Furukawa, Ryoichi; Chen, Yuan; Horiguchi, Akio; Takagaki, Keisuke; Nishi, Junichi; Konishi, Akira; Shirakawa, Yoshiyuki; Sugimoto, Masaaki; Narisawa, Shinji

    2015-09-30

    Capping is one of the major problems that occur during the tabletting process in the pharmaceutical industry. This study provides an effective method for evaluating the capping tendency during a diametrical compression test using the finite element method (FEM). In the experiments, tablets of microcrystalline cellulose (MCC) were compacted on a single-punch tabletting machine, and the capping tendency was determined by visual inspection of the tablet after a diametrical compression test. By comparing the effects of double-radius and single-radius concave punch shapes on the capping tendency, it was observed that capping of double-radius tablets occurred at a lower compaction force than for single-radius tablets. Using FEM, we investigated the variation in plastic strain within tablets during the diametrical compression test and visualised it using the ABAQUS output variable "actively yielding" (AC YIELD). For both single-radius and double-radius tablets, a capping tendency is indicated if the variation in plastic strain initiates from the centre of the tablet, while capping does not occur if the variation begins from the periphery. The compaction force at which the FEM analysis indicated a capping tendency was in reasonable agreement with the experimental results.

  20. Evaluation of a spectral line width for the Phillips spectrum by means of numerical simulation

    NASA Astrophysics Data System (ADS)

    Korotkevich, A. O.; Zakharov, V. E.

    2015-05-01

    The work aims to check one of the assumptions under which the kinetic equation for water waves was derived, in order to understand whether it can be applied to situations described by the Phillips spectrum. We evaluate the spectral line width of the spectrum from simulations in the framework of the primordial dynamical equations at different levels of nonlinearity in the system, corresponding to the weakly turbulent Kolmogorov-Zakharov spectra ω^(-4), the Phillips spectra ω^(-5), and intermediate cases. It is shown that, even in the case of relatively high average steepness, when the Phillips spectrum is present in the system, the spectral lines are still very narrow, at least in the region of the direct cascade spectrum. This allows us to state that, even in the case of the Phillips spectrum, one of the assumptions used in the derivation of the Hasselmann kinetic equation is still valid, at least in the case of moderate whitecapping.

  1. Numerical evaluation of bioaccumulation and depuration kinetics of PAHs in Mytilus galloprovincialis.

    PubMed

    Yakan, S D; Focks, A; Klasmeier, J; Okay, O S

    2017-01-01

    Polycyclic aromatic hydrocarbons (PAHs) are important organic pollutants in the aquatic environment due to their persistence and their bioaccumulation potential in both organisms and sediments. Benzo(a)anthracene (BaA) and phenanthrene (PHE), which are on the priority pollutant list of the U.S. EPA (Environmental Protection Agency), were selected as the model compounds of the present study. Bioaccumulation and depuration experiments with the local Mediterranean mussel species Mytilus galloprovincialis formed the basis of the study. Mussels were selected as bioindicator organisms due to their broad geographic distribution, immobility and low enzyme activity. The bioaccumulation and depuration kinetics of the selected PAHs in Mytilus galloprovincialis were described using first-order kinetic equations in a three-compartment model. The compartments were defined as: (1) biota (mussel), (2) the surrounding environment (seawater), and (3) algae (Phaeodactylum tricornutum) as the food source of the mussels. The experimental study was performed at three different concentrations. The middle concentration of the experimental data was used as the model input in order to reproduce the high and low concentrations of the selected PAHs. Correlations between the experimental and model data revealed that they are in good agreement. The accumulation and depuration trends of PAHs in mussels, as well as their durations, can thus be estimated effectively, so the present study can serve as a supportive tool for risk assessment in addition to monitoring studies.
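
    A first-order three-compartment model of this general kind can be sketched in a few lines: the mussel body burden C gains from water (Cw) and algal food (Ca) and is cleared at an elimination rate ke, with exposure followed by clean-water depuration. The rate constants and concentrations below are invented for illustration, not the study's fitted values.

```python
kw, ka, ke = 50.0, 0.8, 0.12     # water uptake, food uptake, elimination (1/day)
Cw, Ca = 0.01, 1.0               # constant exposure concentrations
dt = 0.01                        # time step, days
t_uptake, t_depuration = 30.0, 30.0

C = 0.0                          # mussel body burden
burden = []
for step in range(int((t_uptake + t_depuration) / dt)):
    exposed = step * dt < t_uptake
    inflow = (kw * Cw + ka * Ca) if exposed else 0.0  # depuration: no uptake
    C += dt * (inflow - ke * C)  # forward-Euler step of dC/dt
    burden.append(C)

C_peak = max(burden)
C_end = burden[-1]
C_ss = (kw * Cw + ka * ka / ka * Ca) / ke  # analytic steady state, (kw*Cw+ka*Ca)/ke
```

    The burden rises toward the steady state (kw*Cw + ka*Ca)/ke during exposure and decays exponentially at rate ke during depuration, which is the trend the study fits to its three exposure concentrations.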

  2. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
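
    What "order of accuracy" buys can be demonstrated with generic central-difference stencils (these are textbook formulas, not the paper's algorithms): halving the step divides the error by 2^p for a pth-order formula.

```python
import math

def d1_o2(f, x, h):
    """Second-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_o4(f, x, h):
    """Fourth-order central difference for f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x0, h = 1.0, 0.1
exact = math.cos(x0)                      # derivative of sin at x0
r2 = abs(d1_o2(math.sin, x0, h) - exact) / abs(d1_o2(math.sin, x0, h/2) - exact)
r4 = abs(d1_o4(math.sin, x0, h) - exact) / abs(d1_o4(math.sin, x0, h/2) - exact)
# r2 -> ~4 (second order), r4 -> ~16 (fourth order)
```

    High-resolution schemes additionally tune the stencil coefficients so the dispersion error stays small at coarse points-per-wavelength, which is what permits accurate propagation over O(10^6) periods.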

  3. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
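
    The baseline being improved on can be sketched as follows. This uses the simple Fritsch-Carlson harmonic-mean slope limiter rather than Huynh's median-based formulas, but it shows the trade the abstract describes: limiting the node slopes enforces monotonicity at some cost in accuracy near extrema.

```python
def limited_slopes(x, y):
    """Node slopes limited so the cubic Hermite interpolant stays monotone."""
    d = [(y[i+1] - y[i]) / (x[i+1] - x[i]) for i in range(len(x) - 1)]
    m = [d[0]] + [0.0] * (len(x) - 2) + [d[-1]]
    for i in range(1, len(x) - 1):
        if d[i-1] * d[i] > 0:                    # same sign: harmonic mean
            m[i] = 2 * d[i-1] * d[i] / (d[i-1] + d[i])
        # opposite signs (local extremum): slope stays 0
    return m

def hermite_eval(x, y, m, t):
    """Evaluate the piecewise cubic Hermite interpolant at t."""
    i = max(0, min(len(x) - 2, sum(1 for xi in x[1:-1] if xi <= t)))
    h = x[i+1] - x[i]
    s = (t - x[i]) / h
    h00 = (1 + 2*s) * (1 - s)**2
    h10 = s * (1 - s)**2
    h01 = s*s * (3 - 2*s)
    h11 = s*s * (s - 1)
    return h00*y[i] + h10*h*m[i] + h01*y[i+1] + h11*h*m[i+1]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.1, 0.5, 2.0, 2.1]        # monotone data with a sharp change
ms = limited_slopes(xs, ys)
samples = [hermite_eval(xs, ys, ms, 4.0 * k / 400) for k in range(401)]
```

    Huynh's contribution is to relax this kind of limiter, using median-based bounds, so that the interpolant keeps uniform third- or fourth-order accuracy while remaining monotone.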

  4. Evaluation of operational numerical weather predictions in relation to the prevailing synoptic conditions

    NASA Astrophysics Data System (ADS)

    Pytharoulis, Ioannis; Tegoulias, Ioannis; Karacostas, Theodore; Kotsopoulos, Stylianos; Kartsios, Stergios; Bampzelis, Dimitrios

    2015-04-01

    The Thessaly plain, located in central Greece, plays a vital role in the financial life of the country because of its significant agricultural production. The aim of the DAPHNE project (http://www.daphne-meteo.gr) is to tackle the problem of drought in this area by means of weather modification in convective clouds. This problem is reinforced by the increase in population and the water demand for irrigation, especially during the warm period of the year. The nonhydrostatic Weather Research and Forecasting model (WRF) is utilized for the research and operational purposes of the DAPHNE project. The WRF output fields are employed by the partners to provide high-resolution meteorological guidance and to plan the project's operations. The model domains cover: i) Europe, the Mediterranean Sea and northern Africa, ii) Greece and iii) the wider region of Thessaly (at selected periods), at horizontal grid spacings of 15 km, 5 km and 1 km, respectively, using 2-way telescoping nesting. The aim of this research work is to investigate the model performance in relation to the prevailing upper-air synoptic circulation. The statistical evaluation of the high-resolution operational forecasts of near-surface and upper-air fields is performed for a selected period of the operational phase of the project using surface observations, gridded fields and weather radar data. The verification is based on gridded, point and object-oriented techniques. The 10 upper-air circulation types that describe the prevailing conditions over Greece are employed in the synoptic classification. This methodology allows the identification of model errors that occur and/or are maximized under specific synoptic conditions and may otherwise be obscured in aggregate statistics. Preliminary analysis indicates that the largest errors are associated with cyclonic conditions. Acknowledgments This research work of the DAPHNE project (11SYN_8_1088) is co-funded by the European Union (European Regional Development Fund

  5. The FlatModel: a 2D numerical code to evaluate debris flow dynamics. Eastern Pyrenees basins application.

    NASA Astrophysics Data System (ADS)

    Bateman, A.; Medina, V.; Hürlimann, M.

    2009-04-01

    Debris flows are present in every country where high mountains and flash floods coincide. In the northern part of the Iberian Peninsula, in the Pyrenees, sporadic debris-flow events occur. We selected two different events. The first was triggered at La Guingueta by the exceptional 1982 flood that produced many debris flows spread all over the Catalonian Pyrenees. The second, more local event occurred in 2000 on the mountain of Montserrat in the Pre-litoral mountain chain. We present here some results of the FLATModel, developed entirely at the Research Group in Sediment Transport of the Hydraulic, Marine and Environmental Engineering Department (GITS-UPC). The 2D FLATModel is a finite volume code that uses the Godunov scheme. Several numerical arrangements were made to analyze the entrainment process during the events, the stop-and-go phenomena and the final deposit of the material. The material rheology implemented is the Voellmy approach, because it represents the frictional and turbulent behavior well. The FLATModel uses a GIS environment that facilitates data analysis, such as the comparison between field and numerical data. The two events have different characteristics: one is practically a one-dimensional problem 1400 m in length, while the other shows a more two-dimensional behavior, forming a large fan.
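
    The Voellmy rheology combines a Coulomb friction term (coefficient mu) with a turbulent drag term (coefficient xi). A minimal point-mass sketch on a uniform slope shows the balance the closure produces: the flow accelerates until the two resistance terms cancel gravity at a terminal velocity. Parameter values are illustrative, not calibrated to the two events.

```python
import math

g, mu, xi = 9.81, 0.15, 500.0          # gravity, friction coeff., turbulence coeff.
theta = math.radians(20.0)             # slope angle
h = 1.5                                # flow depth, m

v, dt = 0.0, 0.01
for _ in range(20000):                 # integrate dv/dt to a near-steady state
    dvdt = g * math.cos(theta) * (math.tan(theta) - mu) - g * v * v / (xi * h)
    v += dt * dvdt

# Terminal velocity where Coulomb friction + turbulent drag balance gravity:
v_terminal = math.sqrt(xi * h * math.cos(theta) * (math.tan(theta) - mu))
```

    In a 2D finite-volume code the same closure enters as a basal source term per cell, so mu controls where the flow stops (the deposit) and xi controls how fast it runs, which is why the pair can be fitted to observed runout.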

  6. Numerical and experimental evaluation of the impact performance of advanced high-strength steel sheets based on a damage model

    NASA Astrophysics Data System (ADS)

    Ma, Ning; Park, Taejoon; Kim, Dongun; Kim, Chongmin; Chung, Kwansoo

    2010-06-01

    The impact performance in a Charpy impact test was studied experimentally and numerically for the advanced high-strength steel sheets (AHSS) TWIP940 and TRIP590, as well as the high-strength grade known as 340R. To characterize the mechanical properties, uniaxial simple tension tests were conducted to determine the anisotropic properties and strain-rate sensitivities of these materials. In particular, the high-speed strain-rate sensitivity of the rate-sensitive TRIP590 and 340R was also characterized to account for the high strain rates involved in the Charpy impact test. To evaluate fracture behavior in the Charpy impact test, a new damage model including a triaxiality-dependent fracture criterion and hardening behavior with stiffness deterioration was introduced. The model was calibrated via numerical simulations and experiments involving simple tension and V-notch tests. The new damage model, along with the Hill 1948 anisotropic yield function, was incorporated into the ABAQUS/Explicit FEM code, which predicted the impact energy absorbed during the Charpy impact test reasonably well.
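
    The general shape of a triaxiality-dependent fracture criterion can be sketched as follows: the fracture strain falls as the stress triaxiality eta rises, and damage accumulates as D = sum(delta_eps / eps_f(eta)) until D reaches 1. The exponential form and constants below are invented for illustration; the paper's criterion is calibrated to these steels via tension and V-notch tests.

```python
import math

def eps_f(eta, c1=1.2, c2=1.5):
    """Assumed fracture strain: decreases with stress triaxiality eta."""
    return c1 * math.exp(-c2 * eta)

def increments_to_fracture(eta, d_eps=0.001):
    """Strain increments until accumulated damage D reaches 1."""
    D, n = 0.0, 0
    while D < 1.0:
        D += d_eps / eps_f(eta)
        n += 1
    return n

n_uniaxial = increments_to_fracture(1.0 / 3.0)   # uniaxial tension, eta = 1/3
n_notch = increments_to_fracture(0.8)            # notched specimen: higher eta
```

    The notched case fractures after fewer increments, reproducing why V-notch tests are needed alongside simple tension to calibrate the eta-dependence.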

  7. A numerical coefficient for evaluation of the environmental impact of electromagnetic fields radiated by base stations for mobile communications.

    PubMed

    Russo, P; Cerri, G; Vespasiani, V

    2010-12-01

    The aim of this study is the development of an Electromagnetic Environmental Impact Factor (EEIF). This is a global parameter that represents the level of electromagnetic impact on a specific area due to the presence of radiating systems, such as base station (BS) antennas for mobile communications. The numerical value of the EEIF depends only on the electromagnetic field intensity, a well-defined physical quantity that can easily be measured or computed. The paper describes the significant parameters of the field distribution adopted to evaluate the EEIF, and the assumptions used to develop a proper scale of values. Finally, some examples of application of the EEIF method are analyzed for real situations in a typical urban area.
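
    The underlying physical quantity is straightforward to compute. A hedged sketch of a far-field, free-space estimate of the field from each base station (E = sqrt(30*P*G)/d) combined as a root-sum-square across sources; the EEIF's actual weighting and scale of values are defined in the paper.

```python
import math

def e_field(p_watts, gain, dist_m):
    """Free-space far-field electric field magnitude, V/m."""
    return math.sqrt(30.0 * p_watts * gain) / dist_m

# (power W, numeric gain, distance m) per station -- illustrative values
stations = [(20.0, 10.0, 100.0), (10.0, 12.0, 250.0)]
e_total = math.sqrt(sum(e_field(p, g, d) ** 2 for p, g, d in stations))
```

    The root-sum-square combination assumes uncorrelated sources; an impact factor can then map e_total (or its spatial statistics) onto a normalized scale.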

  8. Evaluation of energy band offset of Si1-xSnx semiconductors by numerical calculation using density functional theory

    NASA Astrophysics Data System (ADS)

    Nagae, Yuki; Kurosawa, Masashi; Araidai, Masaaki; Nakatsuka, Osamu; Shiraishi, Kenji; Zaima, Shigeaki

    2017-04-01

    We examined a numerical calculation for evaluating the energy band offset of Si1-xSnx semiconductors and compared our results with those of previous theoretical calculations and experimental estimates. By using the charge neutrality level of Si1-xSnx as a common reference level for comparing first-principles calculations at different Sn contents, the calculated valence band offset of Si1-xSnx relative to Si was found to shift sensitively, with upward bowing, as the Sn content increases, compared with that obtained using a conventional linear interpolation model. This is in good agreement with the experimental results.

  9. Numerical evaluation of the use of granulated coal ash to reduce an oxygen-deficient water mass.

    PubMed

    Yamamoto, Hironori; Yamamoto, Tamiji; Mito, Yugo; Asaoka, Satoshi

    2016-06-15

    Granulated coal ash (GCA), a by-product of coal-fired thermal power stations, effectively decreases phosphate and hydrogen sulfide (H2S) concentrations in the pore water of coastal marine sediments. In this study, we developed a coupled pelagic-benthic ecosystem model to evaluate the effectiveness of GCA in diminishing the oxygen-deficient water mass that forms in the coastal bottom water of Hiroshima Bay, Japan. Numerical experiments revealed that the application of GCA was effective in reducing the oxygen-deficient water mass, alleviating the summer DO depletion by 0.4-3 mg l(-1). The effect of H2S adsorption onto the GCA lasted for 5.25 years in the case in which GCA was mixed with the sediment in a volume ratio of 1:1. The application of this new GCA-based environmental restoration technique could also make a substantial contribution to forming a recycling-oriented society.

  10. Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods

    NASA Astrophysics Data System (ADS)

    Assous, Franck; Chaskalovic, Joël

    2013-03-01

    In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, and assess and compare the significant differences between these methods using techniques such as decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy of two kinds of finite element methods. In this case, our approach allowed us to refine the classical Bramble-Hilbert theorem: the theorem gives a global error estimate, whereas our approach gives a local one.

  11. Ultrasonic field profile evaluation in acoustically inhomogeneous anisotropic materials using 2D ray tracing model: Numerical and experimental comparison.

    PubMed

    Kolkoori, S R; Rahman, M-U; Chinta, P K; Kreutzbruck, M; Rethmeier, M; Prager, J

    2013-02-01

    Ultrasound propagation in inhomogeneous anisotropic materials is difficult to examine because of the directional dependence of elastic properties. Simulation tools play an important role in developing advanced, reliable ultrasonic non-destructive testing techniques for the inspection of anisotropic materials, particularly austenitic cladded materials, austenitic welds and dissimilar welds. In this contribution we present an adapted 2D ray tracing model for quantitatively evaluating ultrasonic wave fields in inhomogeneous anisotropic materials. Inhomogeneity in the anisotropic material is represented by discretizing it into several homogeneous layers. In the ray tracing model, ultrasonic ray paths are traced as energy propagates through the discretized layers of the material, and at each interface the reflection and transmission problem is solved. The presented algorithm evaluates the transducer-excited ultrasonic fields accurately by taking into account the directivity of the transducer, the divergence of the ray bundle, the density of rays, and phase relations as well as transmission coefficients. The ray tracing model is able to calculate the ultrasonic wave fields generated by a point source as well as by a finite-dimension transducer. The ray tracing results are validated quantitatively against results obtained with the 2D Elastodynamic Finite Integration Technique (EFIT) on several configurations that commonly occur in ultrasonic non-destructive testing of anisotropic materials. Finally, a quantitative comparison of the ray tracing model results with experiments on 32 mm thick austenitic weld material and 62 mm thick austenitic cladded material is discussed.
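
    The interface step at the heart of such a ray tracer can be illustrated, in a drastically simplified isotropic setting, by a Snell's-law transmission function (a sketch only, with invented wave speeds; the paper's model handles anisotropic slowness surfaces and energy transmission coefficients, which this toy function does not):

```python
import numpy as np

def refract(incidence_deg, c1, c2):
    """Transmitted ray angle across an interface between two layers with
    wave speeds c1 and c2 (Snell's law, isotropic simplification; in the
    anisotropic case slowness surfaces replace this scalar relation).
    Returns None beyond the critical angle (total reflection)."""
    s = np.sin(np.radians(incidence_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None
    return float(np.degrees(np.arcsin(s)))

# A ray entering a faster layer bends away from the interface normal:
theta_t = refract(20.0, 3000.0, 5900.0)
```

    Tracing a ray through a stack of homogeneous layers then amounts to applying such a step at every interface while accumulating path length, beam divergence and transmission losses.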

  12. Evaluating some indicators for identifying mountain waves situations in snow days by means of numerical modeling and continuous data

    NASA Astrophysics Data System (ADS)

    Sanchez, Jose Luis; Posada, Rafael; Hierro, Rodrigo; García-Ortega, Eduardo; Lopez, Laura; Gascón, Estibaliz

    2013-04-01

    Madrid-Barajas airport is located about 70 km from the Central System mountain range, and days with snow and mountain waves are considered high-risk for landing operations. This motivated a study of the mesoscale factors affecting such situations. Observational data gathered during three consecutive winter campaigns in the Central System, together with data from high-resolution numerical models, allowed the environmental conditions necessary for mountain wave formation on snow days to be evaluated and characterized from observations and numerical simulations. Using Meteosat Second Generation satellite images, lee clouds were observed on 25 days in the 2008-2011 winter seasons. Six of these, which also presented NW low-level flow over the mountain range, were analyzed. The conditions necessary for oscillations and for vertical wave propagation were studied from radiometer data and MM5 model simulations. The radiometer data confirm a stable environment in the six selected events. With the MM5 model, the dynamic conditions allowing the flow to cross the mountain range were evaluated at three locations around the range. Simulations of vertical velocity show that the MM5 model is able to detect the mountain waves present in the six selected events. The vertical wavelength showed high variability owing to intense background winds at upper tropospheric levels; the average values estimated for λz were between 3 and 12 km. The intrinsic period was also estimated. The simulations were able to forecast the energy release associated with the mountain waves. Acknowledgments: This study was supported by the Plan Nacional de I+D of Spain, through the grants CGL2010-15930, Micrometeo IPT-310000-2010-022 and the Junta de Castilla y León through the grant LE220A11-2.

  13. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We evaluated suitable reference genes for real-time quantitative PCR (RT-qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested the genes most frequently used in the literature, such as β-Tubulin, Histone H3, Actin, Elongation factor-1α and Glyceraldehyde-3-phosphate dehydrogenase, together with the newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using the geNorm and NormFinder software packages or by the ΔCt method. geNorm analysis indicated that the three best-performing genes are sufficient for reliable normalization of RT-qPCR data. Suitable reference genes differed among sample groups, underscoring the importance of validating the expression stability of reference genes in the samples of interest. The stability rankings were basically similar between geNorm and NormFinder, suggesting the usefulness of these programs, which are based on different algorithms. The ΔCt method suggested somewhat different results in some groups, such as flower organ or fruit skin, though the overall results correlated well with geNorm or NormFinder. Expression of two cold-inducible genes, PpCBF2 and PpCBF4, was quantified using the three most and the three least stable reference genes suggested by geNorm. Although the normalized quantities differed between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggest that using the geometric mean of three reference genes for normalization is a quite reliable approach to evaluating gene expression by RT-qPCR. We propose that initial evaluation of gene expression stability by the ΔCt method, followed by evaluation with geNorm or NormFinder for a limited number of superior gene candidates, will be a practical way of finding out
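
    The geometric-mean normalization recommended above can be sketched as follows (hypothetical Ct values, and an assumed amplification efficiency of 100%, i.e. a factor of 2 per cycle; this is not the authors' code):

```python
import numpy as np

def normalization_factor(ct_refs, efficiency=2.0):
    """Per-sample normalization factor: geometric mean of the relative
    quantities of the reference genes (geNorm-style). ct_refs has shape
    (n_samples, n_reference_genes); each gene is scaled to the sample
    in which it is most abundant (lowest Ct)."""
    ct = np.asarray(ct_refs, dtype=float)
    rq = efficiency ** (ct.min(axis=0) - ct)
    return np.exp(np.log(rq).mean(axis=1))  # geometric mean across genes

def normalized_expression(ct_target, ct_refs, efficiency=2.0):
    """Target-gene relative quantity divided by the normalization factor."""
    ct_t = np.asarray(ct_target, dtype=float)
    return efficiency ** (ct_t.min() - ct_t) / normalization_factor(ct_refs, efficiency)

# Hypothetical Ct values: 4 samples x 3 reference genes, plus one target gene
ct_refs = [[20.0, 22.1, 18.9],
           [20.3, 22.4, 19.1],
           [19.8, 21.9, 18.7],
           [20.1, 22.0, 19.0]]
ct_target = [25.0, 24.0, 26.0, 25.5]
expr = normalized_expression(ct_target, ct_refs)
```

    The geometric mean damps the influence of any single drifting reference gene, which is why three stable genes are usually preferred over one.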

  14. Evaluation of the parameters of 1:1 charge transfer complexes from spectrophotometric data by non-linear numerical method

    NASA Astrophysics Data System (ADS)

    Grebenyuk, Serhiy A.; Perepichka, Igor F.; Popov, Anatolii F.

    2002-11-01

    The non-linear numerical method for evaluation of equilibrium constants and molar extinction coefficients of molecular complexes from a spectrophotometric experiment is described, which in contrast to linear models has no limitations with respect to concentrations of the components. The proposed procedure is applied to donor-acceptor interaction in solution between N-ethyl carbazole (EtCz) and 7,7,8,8-tetracyanoquinodimethane (TCNQ) or n-hexyl 2,5,7-trinitro-9-dicyanomethylenefluorene-4-carboxylate (HexDTFC) to evaluate the method and to obtain the parameters of charge transfer complexes (CTCs) formation. Association constants ( K) and molar extinction coefficients ( ɛ) of CTCs derived from non-linear approach (EtCz-TCNQ: K=2.49±0.19 M -1; ɛ=2950±160 M -1 cm -1. EtCz-HexDTFC: K=12.1±0.3 M -1; ɛ=1335±24 M -1 cm -1) are close to that from linear models but show lower standard errors in parameter estimations.
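
    A minimal sketch of such a non-linear fit (not the authors' program): the absorbance of a 1:1 complex is computed from the exact mass balance, so no excess-concentration assumption is needed, and K and ɛ are obtained by least squares. The concentration grid and noise level below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def absorbance_1to1(d0, K, eps, a0=1e-4, path=1.0):
    """Absorbance of a 1:1 CTC at donor concentration d0 (M), acceptor a0 (M).
    The complex concentration c solves the exact mass balance
        K*(d0 - c)*(a0 - c) = c,
    i.e. K*c**2 - (K*(d0 + a0) + 1)*c + K*d0*a0 = 0,
    so no excess-concentration approximation is made (unlike linear
    Benesi-Hildebrand-type treatments)."""
    b = K * (d0 + a0) + 1.0
    c = (b - np.sqrt(b**2 - 4.0 * K**2 * d0 * a0)) / (2.0 * K)
    return eps * path * c

# Synthetic "experiment" with invented concentrations: K = 12 M^-1 and
# eps = 1335 M^-1 cm^-1 (the EtCz-HexDTFC values quoted above), 1% noise.
rng = np.random.default_rng(0)
d0 = np.linspace(5e-3, 0.1, 12)
a_obs = absorbance_1to1(d0, 12.0, 1335.0) * (1.0 + 0.01 * rng.standard_normal(d0.size))
(K_fit, eps_fit), _ = curve_fit(absorbance_1to1, d0, a_obs, p0=[1.0, 1000.0])
```

    Because K and ɛ are correlated for weak complexes, the covariance returned by the fit is what supplies the standard errors quoted in the abstract.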

  15. Establishment of computerized numerical databases on thermophysical and other properties of molten as well as solid materials and data evaluation and validation for generating recommended reliable reference data

    NASA Technical Reports Server (NTRS)

    Ho, C. Y.

    1993-01-01

    The Center for Information and Numerical Data Analysis and Synthesis (CINDAS) measures and maintains databases on the thermophysical, thermoradiative, mechanical, optical, electronic, ablation, and physical properties of materials. Emphasis is on aerospace structural materials, especially composites, and on infrared detector/sensor materials. Within CINDAS, the Department of Defense sponsors several centers at Purdue: the High Temperature Material Information Analysis Center (HTMIAC), the Ceramics Information Analysis Center (CIAC) and the Metals Information Analysis Center (MIAC). The responsibilities of CINDAS are extremely broad, encompassing basic and applied research; measurement of the properties of thin wires and thin foils as well as bulk materials; acquisition and search of worldwide literature; critical evaluation of data; generation of estimated values to fill data voids; investigation of constitutive, structural, processing, environmental, and rapid heating and loading effects; and dissemination of data. Liquids, gases, molten materials and solids are all considered. The responsibility of maintaining widely used databases includes data evaluation, analysis, correlation, and synthesis. Material property data recorded in the literature are often conflicting, divergent, and subject to large uncertainties. It is admittedly difficult to measure material properties accurately; both systematic and random errors enter. Some errors result from a lack of characterization of the material itself (impurity effects). In some cases the assumed boundary conditions corresponding to a theoretical model are not attained in the experiments. Stray heat flows and losses must be accounted for. Some experimental methods are inappropriate, and in other cases appropriate methods are carried out with poor technique. Conflicts in data may be resolved by curve fitting of the data to theoretical or empirical models, or by correlation in terms of various affecting parameters. Reasons (e.g. phase

  16. Identification and evaluation of reference genes for accurate gene expression normalization of fresh and frozen-thawed spermatozoa of water buffalo (Bubalus bubalis).

    PubMed

    Ashish, Shende; Bhure, S K; Harikrishna, Pillai; Ramteke, S S; Muhammed Kutty, V H; Shruthi, N; Ravi Kumar, G V P P S; Manish, Mahawar; Ghosh, S K; Mihir, Sarkar

    2017-04-01

    Quantitative real-time PCR (qRT-PCR) has become an important tool for gene-expression analysis of selected genes in the life sciences. Although the dynamic range, sensitivity and reproducibility of qRT-PCR are good, its reliability depends largely on the selection of proper reference genes (RGs) for normalization. RG expression has been reported to vary considerably within the same cell type under different experimental treatments, yet no systematic study has been conducted to identify and evaluate appropriate RGs in the spermatozoa of domestic animals. Therefore, this study analyzed suitable, stable RGs in fresh and frozen-thawed spermatozoa. We assessed 13 candidate RGs (BACT, RPS18s, RPS15A, ATP5F1, HMBS, ATP2B4, RPL13, EEF2, TBP, EIF2B2, MDH1, B2M and GLUT5), covering different functions and pathways, using five algorithms. Regardless of the approach, the rankings of the most and least stable candidate RGs remained almost the same. The comprehensive ranking by RefFinder identified GLUT5 and ATP2B4 as the two most stable RGs, and B2M and MDH1 as the two least stable. The expression levels of four heat shock protein (HSP) genes were used as target genes to evaluate the efficiency of the RGs for normalization. The results demonstrated an exponential difference in the expression levels of the four HSP genes when the data were normalized with the most stable versus the least stable RGs. Our study provides convenient RGs for normalizing the gene expression of key metabolic pathways affected during the freezing and thawing of spermatozoa of buffalo and other closely related bovines.

  17. Numerical Evaluation of Mode 1 Stress Intensity Factor as a Function of Material Orientation For BX-265 Foam Insulation Material

    NASA Technical Reports Server (NTRS)

    Knudsen, Erik; Arakere, Nagaraj K.

    2006-01-01

    Foam, a cellular material, is found all around us; bone and cork are examples of biological cellular materials, and many forms of man-made foam have found practical application as insulating materials. NASA uses the BX-265 foam insulation material on the external tank (ET) of the Space Shuttle. This is a type of Spray-On Foam Insulation (SOFI), similar to the material used to insulate attics in residential construction. The foam is a good insulator and is very lightweight, making it suitable for space applications. Breakup of segments of this foam insulation from the shuttle ET, impacting the shuttle thermal protection tiles during liftoff, is believed to have caused the loss of the Space Shuttle Columbia during re-entry. NASA engineers are therefore very interested in understanding the processes that govern the breakup/fracture of this complex material from the shuttle ET. The foam is anisotropic in nature, and the required stress and fracture mechanics analysis must include the effects of the directional dependence of the material properties. Material testing at NASA MSFC has indicated that the foam can be modeled as a transversely isotropic material. As a first step toward understanding the fracture mechanics of this material, we present a general theoretical and numerical framework for computing stress intensity factors (SIFs) under mixed-mode loading conditions, taking into account the material anisotropy. We present mode I SIFs for middle tension, M(T), test specimens, using 3D finite element stress analysis (ANSYS) and the FRANC3D fracture analysis software developed by the Cornell Fracture Group. Mode I SIF values are presented for a range of foam material orientations. NASA has also recorded the failure load for various M(T) specimens. For a linear analysis, the mode I SIF scales with the far-field load, which allows us to numerically estimate the mode I fracture toughness of this material. The results represent a quantitative basis for evaluating the strength and

  18. Evaluation of a numerical simulation model for a system coupling atmospheric gas, surface water and unsaturated or saturated porous medium.

    PubMed

    Hibi, Yoshihiko; Tomigashi, Akira; Hirose, Masafumi

    2015-12-01

    Numerical simulations that couple flow in a surface fluid with that in a porous medium are useful for examining problems of pollution that involve interactions among the atmosphere, surface water and groundwater, including, for example, saltwater intrusion along coasts. We previously developed a numerical simulation method for simulating a coupled atmospheric gas, surface water, and groundwater system (called the ASG method) that employs a saturation equation for flow in a porous medium; this equation allows both the void fraction of water in the surface system and water saturation in the porous medium to be solved simultaneously. It remained necessary, however, to evaluate how global pressure, including gas pressure, water pressure, and capillary pressure, should be specified at the boundary between the surface and the porous medium. Therefore, in this study, we derived a new equation for global pressure and integrated it into the ASG method. We then simulated water saturation in a porous medium and the void fraction of water in a surface system by the ASG method and reproduced fairly well the results of two column experiments. Next, we simulated water saturation in a porous medium (sand) with a bank, by using both the ASG method and a modified Picard (MP) method. We found only a slight difference in water saturation between the ASG and MP simulations. This result confirmed that the derived equation for global pressure was valid for a porous medium, and that the global pressure value could thus be used with the saturation equation for porous media. Finally, we used the ASG method to simulate a system coupling atmosphere, surface water, and a porous medium (110 m wide and 50 m high) with a trapezoidal bank. The ASG method was able to simulate the complex flow of fluids in this system and the interaction between the porous medium and the surface water or the atmosphere.

  19. A New Method for Evaluating Actual Drug Release Kinetics of Nanoparticles inside Dialysis Devices via Numerical Deconvolution.

    PubMed

    Zhou, Yousheng; He, Chunsheng; Chen, Kuan; Ni, Jieren; Cai, Yu; Guo, Xiaodi; Wu, Xiao Yu

    2016-12-10

    Nanoparticle formulations have found increasing applications in modern therapies. To achieve the desired treatment efficacy and safety profiles, the drug release kinetics of nanoparticles must be tightly controlled. However, actual drug release kinetics of nanoparticles cannot be readily measured owing to technical difficulties, although various methods have been attempted. Among existing experimental approaches, the dialysis method is the most widely applied because of its simplicity and its avoidance of separating released drug from the nanoparticles. Yet this method only measures the released drug in the medium outside the dialysis device (the receiver), not the actual drug release from the nanoparticles inside the device (the donor). We therefore propose a new method using numerical deconvolution to evaluate the actual drug release kinetics of nanoparticles inside the donor, based on experimental release profiles of nanoparticles and of free drug solution in the receiver determined by existing dialysis tests. Two computer programs were developed based on two different numerical methods, namely least-squares fitting with a prescribed Weibull function or with orthogonal polynomials as the input function. The former was used for all analyses in this work, while the latter was used to verify the reliability of the predictions. Experimental drug release data from various nanoparticle formulations, obtained with different dialysis settings and membrane pore sizes, were used to substantiate the approach. The results demonstrate that the method is applicable to a broad range of nanoparticle and microparticle formulations and requires no additional experiments; it is independent of particle formulation, drug release mechanism, and testing conditions. This new method may also be used, in combination with existing dialysis devices, to develop a standardized method for quality control, in vitro-in vivo correlation, and the development of nanoparticles and other types of dispersion formulations.
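
    The idea can be sketched under simplifying assumptions (invented rate constants; the authors' programs use least-squares criteria with Weibull or orthogonal-polynomial input functions): the receiver profile is the convolution of the in-donor release rate with the membrane response measured from free drug, and the donor-release parameters are recovered by fitting the observed receiver curve.

```python
import numpy as np
from scipy.optimize import least_squares

def weibull_release(t, a, b):
    """Cumulative fraction of drug released from the nanoparticles
    inside the donor (prescribed Weibull input function)."""
    return 1.0 - np.exp(-(t / a) ** b)

def membrane_response(t, k):
    """Fraction of an instantaneous free-drug dose in the donor that has
    crossed the membrane by time t (assumes the free-drug dialysis
    profile is first order with rate constant k)."""
    return 1.0 - np.exp(-k * t)

def receiver_profile(t, a, b, k):
    """Receiver profile = convolution of the in-donor release rate with
    the membrane step response (uniform time grid assumed)."""
    dt = t[1] - t[0]
    rate = np.gradient(weibull_release(t, a, b), dt)
    return np.convolve(rate, membrane_response(t, k))[: t.size] * dt

# Invented truth: a = 2 h, b = 1.2, membrane k = 0.8 1/h. In practice k
# comes from the free-drug run and `obs` from the nanoparticle run.
t = np.linspace(0.0, 24.0, 241)
obs = receiver_profile(t, 2.0, 1.2, 0.8)
fit = least_squares(lambda p: receiver_profile(t, p[0], p[1], 0.8) - obs,
                    x0=[1.0, 1.0], bounds=([0.1, 0.1], [50.0, 5.0]))
a_fit, b_fit = fit.x
```

    With the fitted Weibull parameters, the in-donor release curve is reconstructed directly, without ever sampling inside the dialysis device.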

  20. Numerical, hydraulic, and hemolytic evaluation of an intravascular axial flow blood pump to mechanically support Fontan patients.

    PubMed

    Throckmorton, Amy L; Kapadia, Jugal Y; Chopski, Steven G; Bhavsar, Sonya S; Moskowitz, William B; Gullquist, Scott D; Gangemi, James J; Haggerty, Christopher M; Yoganathan, Ajit P

    2011-01-01

    Currently available mechanical circulatory support systems are limited for adolescent and adult patients with a Fontan physiology. To address this growing need, we are developing a collapsible, percutaneously-inserted, axial flow blood pump to support the cavopulmonary circulation in Fontan patients. During the first phase of development, the design and experimental evaluation of an axial flow blood pump were performed. We completed numerical modeling of the pump using computational fluid dynamics analysis, hydraulic testing of a plastic pump prototype, and blood bag experiments (n=7) to measure the levels of hemolysis produced by the pump. Statistical analyses using regression were performed. The prototype with a 4-bladed impeller generated a pressure rise of 2-30 mmHg at flow rates of 0.5-4 L/min for 3000-6000 RPM. A comparison of the experimental performance data to the numerical predictions demonstrated excellent agreement, with a maximum deviation of less than 6%. A linear increase in plasma-free hemoglobin (pfHb) levels during the 6-h experiments was found, as desired. The maximum pfHb level was measured to be 21 mg/dL, and the average normalized index of hemolysis was determined to be 0.0097 g/100 L over all experiments. The hydraulic performance of the prototype and the level of hemolysis are indicative of significant progress in the design of this blood pump. These results support the continued development of this intravascular pump as a bridge-to-transplant, bridge-to-recovery, bridge-to-hemodynamic-stability, or bridge-to-surgical-reconstruction for Fontan patients.
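
    The normalized index of hemolysis quoted above is conventionally computed as follows (a sketch of the usual ASTM F1841-style definition; the input values below are hypothetical, not the study's data):

```python
def normalized_index_of_hemolysis(delta_pfhb_mg_dl, volume_l, hct_pct,
                                  flow_l_min, time_min):
    """NIH (g/100 L) = dpfHb * V * (100 - Hct)/100 * 100 / (Q * T),
    where dpfHb is the plasma-free hemoglobin rise over the sampling
    interval (converted here from mg/dL to g/L), V the circuit volume (L),
    Hct the hematocrit (%), Q the flow rate (L/min), T the interval (min)."""
    dpfhb_g_l = delta_pfhb_mg_dl / 100.0  # 1 mg/dL = 0.01 g/L
    return (dpfhb_g_l * volume_l * (100.0 - hct_pct) / 100.0 * 100.0
            / (flow_l_min * time_min))

# Hypothetical run: 2 mg/dL pfHb rise, 0.45 L loop, 30% hematocrit,
# 3 L/min flow, 6-h (360-min) sampling interval
nih = normalized_index_of_hemolysis(2.0, 0.45, 30.0, 3.0, 360.0)
```

    Normalizing by hematocrit, volume, and pumped-volume throughput is what makes NIH values comparable between loops and between pumps.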

  1. Seasonal variation of residence time in spring and groundwater evaluated by CFCs and numerical simulation in mountainous headwater catchment

    NASA Astrophysics Data System (ADS)

    Tsujimura, Maki; Watanabe, Yasuto; Ikeda, Koichi; Yano, Shinjiro; Abe, Yutaka

    2016-04-01

    Headwater catchments in mountainous regions are the most important recharge areas for surface and subsurface waters, and residence-time information is essential for understanding hydrological processes in these catchments. However, little research has evaluated the temporal and spatial variation of subsurface-water residence time in mountainous headwaters, especially those with steep slopes. We investigated the temporal variation of the residence time of spring and groundwater, tracing the hydrological flow processes, in mountainous catchments underlain by granite in Yamanashi Prefecture, central Japan. We conducted intensive hydrological monitoring and water sampling of spring, stream and ground waters in high-flow and low-flow seasons from 2008 through 2013 in the River Jingu Watershed, underlain by granite, with an area of approximately 15 km2 and elevations ranging from 950 m to 2000 m. CFCs, stable isotopic ratios of oxygen-18 and deuterium, and inorganic solute concentrations were determined for all water samples. A numerical simulation was also conducted to reproduce the average residence times of the spring and groundwater. The residence time of the spring water estimated from the CFC concentrations ranged from 10 to 60 years across the watershed, and it was higher (older) during the low-flow season and lower (younger) during the high-flow season. We attempted to reproduce this seasonal change by numerical simulation, and the calculated residence time of the spring water and the stream discharge agreed well with the observed values. The groundwater level was higher during the high-flow season, when the groundwater flowed dominantly through the weathered granite with higher permeability, whereas it was lower during the low-flow season, when flow was dominantly through the fresh granite with lower permeability. This caused the seasonal variation of the residence time of the spring

  2. Numerical evaluation of community-scale aquifer storage, transfer and recovery technology: A case study from coastal Bangladesh

    NASA Astrophysics Data System (ADS)

    Barker, Jessica L. B.; Hassan, Md. Mahadi; Sultana, Sarmin; Ahmed, Kazi Matin; Robinson, Clare E.

    2016-09-01

    Aquifer storage, transfer and recovery (ASTR) may be an efficient low cost water supply technology for rural coastal communities that experience seasonal freshwater scarcity. The feasibility of ASTR as a water supply alternative is being evaluated in communities in south-western Bangladesh where the shallow aquifers are naturally brackish and severe seasonal freshwater scarcity is compounded by frequent extreme weather events. A numerical variable-density groundwater model, first evaluated against data from an existing community-scale ASTR system, was applied to identify the influence of hydrogeological as well as design and operational parameters on system performance. For community-scale systems, it is a delicate balance to achieve acceptable water quality at the extraction well whilst maintaining a high recovery efficiency (RE) as dispersive mixing can dominate relative to the small size of the injected freshwater plume. For the existing ASTR system configuration used in Bangladesh where the injection head is controlled and the extraction rate is set based on the community water demand, larger aquifer hydraulic conductivity, aquifer depth and injection head improve the water quality (lower total dissolved solids concentration) in the extracted water because of higher injection rates, but the RE is reduced. To support future ASTR system design in similar coastal settings, an improved system configuration was determined and relevant non-dimensional design criteria were identified. Analyses showed that four injection wells distributed around a central single extraction well leads to high RE provided the distance between the injection wells and extraction well is less than half the theoretical radius of the injected freshwater plume. The theoretical plume radius relative to the aquifer dispersivity is also an important design consideration to ensure adequate system performance. The results presented provide valuable insights into the feasibility and design
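
    The design rule above refers the injection-well spacing to the theoretical radius of the injected freshwater plume; under the usual piston-displacement assumption this radius follows from a simple volume balance (a sketch with invented numbers, not the study's site parameters):

```python
import math

def plume_radius(injected_volume_m3, aquifer_thickness_m, porosity):
    """Theoretical radius of the injected freshwater plume, assuming an
    idealized cylinder displacing native water over the full screened
    thickness: R = sqrt(V / (pi * b * n))."""
    return math.sqrt(injected_volume_m3
                     / (math.pi * aquifer_thickness_m * porosity))

# Invented example: 2000 m3 injected into a 10 m thick aquifer, porosity 0.3
R = plume_radius(2000.0, 10.0, 0.3)
max_injection_well_distance = 0.5 * R  # the half-radius rule identified above
```

    Comparing this theoretical radius with the aquifer dispersivity, as the abstract notes, indicates how much of the plume edge will be degraded by mixing before recovery.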

  3. Accurate numerical solutions for elastic-plastic models. [LMFBR

    SciTech Connect

    Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.

    1980-03-01

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
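
    For illustration, a minimal elastic-predictor/radial-return step for von Mises plasticity with linear isotropic hardening is sketched below (full 3D small-strain form with invented material constants; the paper itself works under plane stress, which requires an extra iteration not shown here):

```python
import numpy as np

def radial_return(stress, dstrain, G, K, sigma_y, H):
    """One elastic-predictor / radial-return step for von Mises plasticity
    with linear isotropic hardening (3D small strain). stress, dstrain:
    3x3 arrays; G, K: shear/bulk moduli; sigma_y: current yield stress;
    H: hardening modulus. Returns (new stress, new yield stress)."""
    I = np.eye(3)
    de_vol = np.trace(dstrain)
    # elastic trial stress
    trial = stress + 2.0 * G * (dstrain - de_vol / 3.0 * I) + K * de_vol * I
    s = trial - np.trace(trial) / 3.0 * I          # deviatoric trial stress
    s_norm = np.sqrt((s * s).sum())
    f = s_norm - np.sqrt(2.0 / 3.0) * sigma_y      # trial yield function
    if f <= 0.0:
        return trial, sigma_y                       # purely elastic step
    dgamma = f / (2.0 * G + 2.0 * H / 3.0)          # plastic multiplier
    stress_new = trial - 2.0 * G * dgamma * (s / s_norm)  # radial return
    sigma_y_new = sigma_y + np.sqrt(2.0 / 3.0) * H * dgamma
    return stress_new, sigma_y_new

# Invented elastic-plastic constants (MPa) and a strain step beyond yield
G, K, sig_y, H = 80e3, 160e3, 250.0, 10e3
stress0 = np.zeros((3, 3))
dstrain = np.diag([2e-3, -6e-4, -6e-4])
stress_new, sig_y_new = radial_return(stress0, dstrain, G, K, sig_y, H)
```

    The "radial" step scales the deviatoric trial stress back onto the expanded yield surface along its own direction, which is why the error analysis in the abstract tracks an angle in the pi plane and a yield-surface radius.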

  4. Numerical assessment of accurate measurements of laminar flame speed

    NASA Astrophysics Data System (ADS)

    Goulier, Joules; Bizon, Katarzyna; Chaumeix, Nabiha; Meynet, Nicolas; Continillo, Gaetano

    2016-12-01

    In combustion, the laminar flame speed is an important parameter that reflects the oxidation chemistry of a given fuel, along with its transport and thermal properties. Laminar flame speeds are used (i) in turbulence models in CFD codes, and (ii) to validate detailed or reduced mechanisms, often derived from studies in ideal reactors under diluted conditions, as in jet-stirred reactors and shock tubes. End-users of such mechanisms need an assessment of their capability to predict the correct heat release by combustion under realistic conditions. In this respect, the laminar flame speed is a very convenient parameter, and it is therefore very important to know the experimental errors involved in its determination. Stationary configurations (Bunsen burners, counter-flow flames, heat flux burners) or moving flames (tubes, spherical vessels, soap bubbles) can be used. The spherical expanding flame configuration has recently become popular, since it can be used at high pressures and temperatures. With this method, the flame speed is not measured directly but derived from the recording of the flame radius, and the method used to process the radius history affects the estimated flame speed. The aim of this work is to propose a way to derive the laminar flame speed from experimental recordings of expanding flames, and to assess the error magnitude.
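
    One common way to process the recorded radius history is the linear stretch extrapolation S_b = S_b0 - L_b * kappa with kappa = (2/R) dR/dt; the sketch below applies it to synthetic data (invented flame speed and Markstein length; real processing must also handle ignition transients and confinement effects, which are part of the error budget discussed above):

```python
import numpy as np

def flame_speed_from_radius(t, r):
    """Linear stretch processing of an expanding spherical flame: fit
    S_b = S_b0 - L_b * kappa with kappa = (2/R) dR/dt, returning the
    unstretched burned-gas speed S_b0 and the Markstein length L_b."""
    drdt = np.gradient(r, t)                 # stretched flame speed
    kappa = 2.0 * drdt / r                   # stretch rate
    A = np.vstack([np.ones_like(kappa), kappa]).T
    (sb0, slope), *_ = np.linalg.lstsq(A, drdt, rcond=None)
    return sb0, -slope

# Synthetic radius history obeying dR/dt = S_b0 / (1 + 2*L_b/R)
# with invented S_b0 = 2.0 m/s and L_b = 1 mm, from R0 = 5 mm.
dt = 1e-4
t = np.arange(0.0, 0.02, dt)
r = np.empty_like(t)
r[0] = 0.005
for i in range(1, t.size):
    r[i] = r[i - 1] + 2.0 / (1.0 + 2e-3 / r[i - 1]) * dt
sb0, lb = flame_speed_from_radius(t, r)
```

    Nonlinear stretch models give different extrapolated values for strongly stretched flames, which is precisely why the choice of processing method enters the error assessment.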

  5. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed, and methods for performing FCI calculations are considered in detail. The application of FCI methods to several three-electron problems in molecular physics is discussed, and a number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, several problems have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  6. Tillandsia stricta Sol (Bromeliaceae) leaves as monitors of airborne particulate matter-A comparative SEM methods evaluation: Unveiling an accurate and odd HP-SEM method.

    PubMed

    de Oliveira, Martha Lima; de Melo, Edésio José Tenório; Miguens, Flávio Costa

    2016-09-01

    Airborne particulate matter (PM) has been ranked among the most important air pollutants by governmental environmental agencies and academic researchers. The use of terrestrial plants for monitoring PM is widely accepted, particularly when coupled with SEM/EDS. Herein, Tillandsia stricta leaves were used as monitors of PM, focusing on a comparative evaluation of Environmental SEM (ESEM) and High-Pressure SEM (HPSEM). In addition, specimens air-dried in a formaldehyde atmosphere (AD/FA) were introduced as an SEM preparation procedure. Observation of hydrated specimens by ESEM was the best way to obtain information from T. stricta leaves. If any artifacts were introduced by AD/FA, they were indiscernible from those caused by critical point drying (CPD). Leaf anatomy was always well preserved. PM density was determined on the adaxial and abaxial leaf epidermis for each of the SEM procedures. When compared with ESEM, particle extraction varied from 0 to 20% in air-dried leaves, while 23-78% of the particles deposited on leaf surfaces were extracted by CPD procedures. ESEM was clearly the best choice among the methods, but its morphological artifacts increased as a function of operation time, whereas HPSEM operation time was unlimited. AD/FA avoided the shrinkage observed in air-dried leaves, and particle extraction was low compared with CPD. The structural and particle-density results suggest AD/FA as an important methodological approach to air pollution biomonitoring that can be used in any electron microscopy lab. Conversely, previous PM assessments using terrestrial plants as biomonitors performed by conventional SEM could have underestimated airborne particulate matter concentrations.

  7. A robust and accurate formulation of molecular and colloidal electrostatics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y C

    2016-08-07

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries and has the following benefits: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures; (ii) quadratic or spline function surface elements can be used to represent the surface more accurately, with the variation of the functions within each element represented to a consistent level of precision by appropriate interpolation functions; (iii) electric fields can be calculated accurately and directly from the potential, even at boundaries, without having to solve hypersingular integral equations, which imparts high precision in calculating the Maxwell stress tensor and, consequently, intermolecular or colloidal forces; (iv) geometric configurations in which different parts of the boundary are very close together can be handled reliably without numerical instabilities, so potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact; and (v) the simplicity of a formulation that does not require complex algorithms to handle singularities results in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  9. Evaluation of numerical models by FerryBox and fixed platform in situ data in the southern North Sea

    NASA Astrophysics Data System (ADS)

    Haller, M.; Janssen, F.; Siddorn, J.; Petersen, W.; Dick, S.

    2015-11-01

    For understanding and forecasting of hydrodynamics in coastal regions, numerical models have served as an important tool for many years. In order to assess the model performance, we compared simulations to observational data of water temperature and salinity. Observations were available from FerryBox transects in the southern North Sea and, additionally, from a fixed platform of the MARNET network. More detailed analyses have been made at three different stations, located off the English eastern coast, at the Oyster Ground and in the German Bight. FerryBoxes installed on ships of opportunity (SoO) provide high-frequency surface measurements along selected tracks on a regular basis. The results of two operational hydrodynamic models have been evaluated for two different time periods: BSHcmod v4 (January 2009 to April 2012) and FOAM AMM7 NEMO (April 2011 to April 2012). While they adequately simulate temperature, both models underestimate salinity, especially near the coast in the southern North Sea. Statistical errors differ between the two models and between the measured parameters. The root mean square error (RMSE) of water temperatures amounts to 0.72 °C (BSHcmod v4) and 0.44 °C (AMM7), while for salinity the performance of BSHcmod is slightly better (0.68 compared to 1.1). The study results reveal weaknesses in both models, in terms of variability, absolute levels and limited spatial resolution. Simulation of the transition zone between the coasts and the open sea is still a demanding task for operational modelling. Thus, FerryBox data, combined with other observations with differing temporal and spatial scales, can serve as an invaluable tool not only for model evaluation, but also for model optimization by assimilation of such high-frequency observations.
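    The RMSE scores quoted above are straightforward to reproduce for any matched series of model output and observations. A minimal sketch; the function name and the handling of missing observations are our own choices, not part of the study:

```python
import math

def rmse(model, obs):
    """Root mean square error between model output and observations.

    Pairs where the observation is missing (None) are skipped, as is
    common when comparing simulations against gappy in situ records.
    """
    pairs = [(m, o) for m, o in zip(model, obs) if o is not None]
    return math.sqrt(sum((m - o) ** 2 for m, o in pairs) / len(pairs))
```

    For example, a model series (1.0, 2.0, 3.0) against observations (1.0, 2.0, 5.0) gives an RMSE of sqrt(4/3), about 1.15.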

  10. Evaluation of the Electroporation Efficiency of a Grid Electrode for Electrochemotherapy: From Numerical Model to In Vitro Tests.

    PubMed

    Ongaro, A; Campana, L G; De Mattei, M; Dughiero, F; Forzan, M; Pellati, A; Rossi, C R; Sieni, E

    2016-04-01

    Electrochemotherapy (ECT) is a local anticancer treatment based on the combination of chemotherapy and short, tumor-permeabilizing, voltage pulses delivered using needle electrodes or plate electrodes. The application of ECT to large skin surface tumors is time consuming due to technical limitations of currently available voltage applicators. The availability of large pulse applicators with fewer and more widely spaced needle electrodes would be useful in the clinic, since they could allow treating large and widespread tumors while limiting the duration and the invasiveness of the procedure. In this article, a grid electrode with 2-cm spaced needles has been studied by means of numerical models. The electroporation efficiency has been assessed on the human osteosarcoma cell line MG63 cultured in monolayer. The computational results show the distribution of the electric field in a model of the treated tissue. These results are helpful to evaluate the effect of the needle distance on the electric field distribution. Furthermore, the in vitro tests showed that the proposed grid electrode is suitable to electroporate, in a single application, a cell culture covering an area of 55 cm². In conclusion, our data might represent a substantial improvement in ECT towards a more homogeneous and time-saving treatment, with benefits for patients with cancer.

  11. Numerical Evaluation of P-Multigrid Method for the Solution of Discontinuous Galerkin Discretizations of Diffusive Equations

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Helenbrook, B. T.

    2005-01-01

    This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in the implementation of the various combinations of relaxation schemes, discretizations and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is found that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect because gradients in most neighboring elements must be updated. A compromise quasi Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
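    The abstract's central observation, that Gauss-Seidel relaxation converges faster than Jacobi, can be reproduced on a much simpler model problem. The sketch below compares plain Jacobi and Gauss-Seidel sweeps on a 1D Poisson equation with Dirichlet boundaries; this illustrates the classical rate difference, not the paper's discontinuous Galerkin P-multigrid setting, and all names are our own:

```python
def jacobi_step(u, b, h2):
    # One Jacobi sweep for -u'' = b on a uniform grid (Dirichlet ends):
    # every interior point is updated from the OLD values of its neighbors.
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * b[i])
    return new

def gauss_seidel_step(u, b, h2):
    # One Gauss-Seidel sweep: freshly updated neighbors are used immediately.
    u = u[:]
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * b[i])
    return u

def iterations_to_converge(step, n=32, tol=1e-6):
    # Count sweeps until successive iterates differ by less than tol.
    h2 = (1.0 / n) ** 2
    b = [1.0] * (n + 1)
    u = [0.0] * (n + 1)
    for k in range(1, 100000):
        v = step(u, b, h2)
        if max(abs(a - c) for a, c in zip(u, v)) < tol:
            return k
        u = v
    return None
```

    On this model problem Gauss-Seidel needs roughly half as many sweeps as Jacobi for the same tolerance, consistent with the classical result that its asymptotic convergence factor is the square of Jacobi's.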

  12. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    EPA Pesticide Factsheets

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad
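    The screening described above, regressions judged by slope and coefficient of determination, amounts to a simple ordinary least-squares fit. A minimal sketch; the numbers in the usage example are purely hypothetical placeholders, not the study's measurements:

```python
import statistics

def slope_and_r2(x, y):
    # Ordinary least-squares slope and coefficient of determination (R^2)
    # for a simple linear regression of y on x.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    # For simple linear regression, R^2 equals the squared Pearson correlation.
    return sxy / sxx, sxy * sxy / (sxx * syy)
```

    For a perfectly linear relation such as x = (1, 2, 3, 4), y = (3, 5, 7, 9), the helper returns a slope of 2 and an R² of 1.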

  13. Sensitivity kernels for coda-wave interferometry and scattering tomography: theory and numerical evaluation in two-dimensional anisotropically scattering media

    NASA Astrophysics Data System (ADS)

    Margerin, Ludovic; Planès, Thomas; Mayor, Jessie; Calvet, Marie

    2016-01-01

    Coda-wave interferometry is a technique which exploits tiny waveform changes in the coda to detect temporal variations of seismic properties in evolving media. Observed waveform changes are of two kinds: traveltime perturbations and distortion of seismograms. In the last 10 yr, various theories have been published to relate either background velocity changes to traveltime perturbations, or changes in the scattering properties of the medium to waveform decorrelation. These theories have been limited by assumptions pertaining to the scattering process itself (in particular, isotropic scattering) or to the propagation regime (single scattering and/or diffusion). In this manuscript, we unify and extend previous results from the literature using a radiative transfer approach. This theory allows us to incorporate the effect of anisotropic scattering and to cover a broad range of propagation regimes, including the contribution of coherent, singly scattered and multiply scattered waves. Using basic physical reasoning, we show that two different sensitivity kernels are required to describe traveltime perturbations and waveform decorrelation, respectively, a distinction which has not been well appreciated so far. Previous results from the literature are recovered as limiting cases of our general approach. To evaluate numerically the sensitivity functions, we introduce an improved version of a spectral technique known as the method of `rotated coordinate frames', which allows global evaluation of the Green's function of the radiative transfer equation in a finite domain. The method is validated through direct pointwise comparison with Green's functions obtained by the Monte Carlo method. To illustrate the theory, we consider a series of scattering media displaying increasing levels of scattering anisotropy and discuss the impact on the traveltime and decorrelation kernels. We also consider the related problem of imaging variations of scattering properties based on intensity

  14. Modelling Study at Kutlular Copper Field with SP Data: Evaluation Steps to Reach More Accurate Results for the SP Inversion Method.

    NASA Astrophysics Data System (ADS)

    Sahin, O. K.; Asci, M.

    2014-12-01

    In this study, the determination of theoretical parameters for the inversion of the Trabzon-Sürmene-Kutlular ore bed anomalies was examined. Deciding which model equation to use for the inversion is the most important first step, and it is expected to yield more accurate results. Sections were therefore evaluated with a sphere-cylinder nomogram. After that, the same sections were analyzed with a cylinder-dike nomogram to determine the theoretical parameters for the inversion of each model equation. A comparison of the results showed that only one of them was close to the parameters of the nomogram evaluations, while the other inversion result parameters differed from their nomogram parameters.

  15. Accurate spectral color measurements

    NASA Astrophysics Data System (ADS)

    Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.

    1999-08-01

    Surface color measurement is important in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5 unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference of two samples, but if we also want to know exact color coordinate values at the same time, accuracy problems arise. The values from two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular included and specular excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise of measuring time, conditions and cost. Sometimes we have to use portable systems, or the shape and size of samples makes it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show what accuracy demands a good colorimeter should meet.

  16. Numerical simulation of heat exchanger

    SciTech Connect

    Sha, W.T.

    1985-01-01

    Accurate and detailed knowledge of the fluid flow field and thermal distribution inside a heat exchanger becomes invaluable as a large, efficient, and reliable unit is sought. This information is needed to provide proper evaluation of the thermal and structural performance characteristics of a heat exchanger. It is to be noted that an analytical prediction method, when properly validated, will greatly reduce the need for model testing, facilitate interpolating and extrapolating test data, aid in optimizing heat-exchanger design and performance, and provide scaling capability. Thus tremendous savings of cost and time are realized. With the advent of large digital computers and advances in the development of computational fluid mechanics, it has become possible to predict analytically, through numerical solution, the conservation equations of mass, momentum, and energy for both the shellside and tubeside fluids. The numerical modeling technique will be a valuable, cost-effective design tool for development of advanced heat exchangers.

  17. How accurate are sphygmomanometers?

    PubMed

    Mion, D; Pierin, A M

    1998-04-01

    The objective of this study was to assess the accuracy and reliability of mercury and aneroid sphygmomanometers. Measurement of accuracy of calibration and evaluation of physical conditions were carried out in 524 sphygmomanometers, 351 from a hospital setting, and 173 from private medical offices. Mercury sphygmomanometers were considered inaccurate if the meniscus was not '0' at rest. Aneroid sphygmomanometers were tested against a properly calibrated mercury manometer, and were considered calibrated when the error was ≤3 mm Hg. Both types of sphygmomanometers were evaluated for conditions of cuff/bladder, bulb, pump and valve. Of the mercury sphygmomanometers tested, 21% were found to be inaccurate. Of this group, unreliability was noted due to: excessive bouncing (14%), illegibility of the gauge (7%), blockage of the filter (6%), and lack of mercury in the reservoir (3%). Bladder damage was noted in 10% of the hospital devices and in 6% of private medical practices. Rubber aging occurred in 34% and 25%, leaks/holes in 19% and 18%, and leaks in the pump bulb in 16% and 30% of hospital devices and private practice devices, respectively. Of the aneroid sphygmomanometers tested, 44% in the hospital setting and 61% in private medical practices were found to be inaccurate. Of these, the magnitude of inaccuracy was 4-6 mm Hg in 32%, 7-12 mm Hg in 19% and > 13 mm Hg in 7%. In summary, most of the mercury and aneroid sphygmomanometers showed inaccuracy (21% vs 58%) and unreliability (64% vs 70%).

  18. Evaluation and Reduction of Machine Difference in Press Working with Utilization of Dedicated Die Support Structure and Numerical Methodologies

    NASA Astrophysics Data System (ADS)

    Ohashi, Takahiro

    2011-05-01

    In this study, support structures of a die for press working are discussed to solve machine difference problems amongst presses. The developed multi-point die support structures are utilized not only for adjusting elastic deformation of a die, but also for in-process sensing of the behavior of a die. The structures have multiple support cells between a die and the slide of a press machine. The cell, known as a `support unit,' has strain gauges attached on its side and works both as a kind of spring and as a load and displacement sensor. The cell contacts the die with a ball-contact; therefore it transmits only the vertical force at each support point. The isolation of the moment and horizontal load at each support point contributes to a simple numerical model; it helps us to know the practical boundary condition at the points under actual production. In addition, the moment and horizontal forces at the points are useless for press working; the isolation of these forces contributes to reducing jolt and related machine differences. The horizontal distribution of support units is changed to reduce elastic deformation of a die; this contributes to reducing jolt, alignment errors of a die and geometrical errors of a product. The validity of these adjustments is confirmed by evaluating the product shape of a deep drawing and by measuring jolts between the upper and lower stamping dies. Furthermore, die deformation in a process is analyzed using elastic FE analysis with actual bearing loads compiled from each support unit.

  19. Simple numerical evaluation of modified Bessel functions Kν(x) of fractional order and the integral ∫_x^∞ Kν(η) dη

    NASA Astrophysics Data System (ADS)

    Kostroun, Vaclav O.

    1980-05-01

    Theoretical expressions for the angular and spectral distributions of synchrotron radiation involve modified Bessel functions of fractional order and the integral ∫_x^∞ Kν(η) dη. Simple series expressions for these quantities, which can be evaluated numerically with hand-held programmable calculators, are presented.
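    As a rough illustration of evaluating Kν(x) numerically (not the paper's series expressions), one can integrate the standard representation Kν(x) = ∫_0^∞ exp(-x cosh t) cosh(νt) dt with a simple trapezoidal rule. The function name, cutoff, and step count below are our own choices, adequate only for moderate ν and x:

```python
import math

def bessel_k(nu, x, n=2000, t_max=12.0):
    # Trapezoidal evaluation of K_nu(x) = int_0^inf exp(-x*cosh t)*cosh(nu*t) dt.
    # The integrand decays like exp(-x*cosh t), so truncating at t_max is safe
    # for moderate x; for large nu or tiny x a larger t_max would be needed.
    h = t_max / n
    total = 0.5 * (math.exp(-x)
                   + math.exp(-x * math.cosh(t_max)) * math.cosh(nu * t_max))
    for i in range(1, n):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return h * total
```

    The half-integer case provides a convenient check, since K½(x) = sqrt(π/(2x)) exp(-x) in closed form.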

  20. Numerical evaluation of the limit of concentration of colloidal samples for their study with digital lensless holographic microscopy.

    PubMed

    Restrepo, John F; Garcia-Sucerquia, Jorge

    2013-01-01

    The number of colloidal particles per unit of volume that can be imaged correctly with digital lensless holographic microscopy (DLHM) is determined numerically. Typical in-line DLHM holograms with controlled concentration are modeled and reconstructed numerically. By quantifying the ratio of the retrieved particles from the reconstructed hologram to the number of the seeding particles in the modeled intensity, the limit of concentration of the colloidal suspensions up to which DLHM can operate successfully is found numerically. A new shadow density parameter for spherical illumination is defined. The limit of performance of DLHM is determined from a graph of the shadow density versus the efficiency of the microscope.

  1. Numerical evaluation and optimization of depth-oriented temperature measurements for the investigation of thermal influences on groundwater

    NASA Astrophysics Data System (ADS)

    Köhler, Mandy; Haendel, Falk; Epting, Jannis; Binder, Martin; Müller, Matthias; Huggenberger, Peter; Liedl, Rudolf

    2015-04-01

    Increasing groundwater temperatures have been observed in many urban areas such as London (UK), Tokyo (Japan) and Basel (Switzerland). Elevated groundwater temperatures are a result of different direct and indirect thermal impacts. Groundwater heat pumps, building structures located within the groundwater and district heating pipes, among others, can be classed as direct impacts, whereas indirect impacts result from the changed climate of urban regions (i.e. reduced wind, diffuse heat sources). A better understanding of the thermal processes within the subsurface is urgently needed by decision makers as a basis for selecting appropriate measures to reduce the ongoing increase of groundwater temperatures. However, often only limited temperature data is available, derived from measurements in conventional boreholes, which differ in construction and instrumental setup, resulting in measurements that are often biased and not comparable. For three locations in the City of Basel, models were implemented to study selected thermal processes and to investigate whether heat-transport models can reproduce thermal measurements. Therefore, and to overcome the limitations of conventional borehole measurements, high-resolution depth-oriented temperature measurement systems have been introduced in the urban area of Basel. In total, seven devices were installed, each with up to 16 sensors located in the unsaturated and saturated zones (0.5 to 1 m separation distance). Measurements were performed over a period of 4 years (ongoing) and provide sufficient data to set up and calibrate high-resolution local numerical heat transport models which allow studying selected local thermal processes. In a first setup, two- and three-dimensional models were created to evaluate the impact of the atmosphere boundary on groundwater temperatures (see EGU Poster EGU2013-9230: Modelling Strategies for the Thermal Management of Shallow Rural and Urban Groundwater bodies). For Basel

  2. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and the MACCC (maximum absolute value of the cross correlation coefficient), derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply the proposed method to selecting appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types and the proper time step considering different time integration schemes are selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
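    The shape index and position offset described above can be sketched as a generic normalized cross-correlation between a simulated signal and a reference waveform. This illustrates the idea only, not the authors' exact GVE/MACCC implementation; function and variable names are our own:

```python
import numpy as np

def waveform_metrics(sim, ref, dt):
    """Normalized cross-correlation between a simulated and a reference waveform.

    Returns (maccc, lag): the maximum absolute correlation coefficient
    (shape agreement, 1.0 = identical shape) and the corresponding time
    offset (position error) in the same units as dt.
    """
    simn = (sim - sim.mean()) / (sim.std() * sim.size)
    refn = (ref - ref.mean()) / ref.std()
    cc = np.correlate(simn, refn, mode="full")
    k = int(np.argmax(np.abs(cc)))
    # Index ref.size - 1 of the 'full' output corresponds to zero lag.
    return float(np.abs(cc[k])), (k - (ref.size - 1)) * dt
```

    A simulated trace that is an exact copy of the reference delayed by k samples yields a correlation coefficient near 1 and a lag of k·dt.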

  3. Evaluation of gas production potential from gas hydrate deposits in National Petroleum Reserve Alaska using numerical simulations

    USGS Publications Warehouse

    Nandanwar, Manish S.; Anderson, Brian J.; Ajayi, Taiwo; Collett, Timothy S.; Zyrianova, Margarita V.

    2016-01-01

    An evaluation of the gas production potential of the Sunlight Peak gas hydrate accumulation in the eastern portion of the National Petroleum Reserve Alaska (NPRA) of the Alaska North Slope (ANS) is conducted using numerical simulations, as part of the U.S. Geological Survey (USGS) gas hydrate Life Cycle Assessment program. A field scale reservoir model for Sunlight Peak is developed using the Advanced Processes & Thermal Reservoir Simulator (STARS) that approximates the production design and response of this gas hydrate field. The reservoir characterization is based on available structural maps and the seismic-derived hydrate saturation map of the study region. A 3D reservoir model, with heterogeneous distribution of the reservoir properties (such as porosity, permeability and vertical hydrate saturation), is developed by correlating the data from the Mount Elbert well logs. Production simulations showed that the Sunlight Peak prospect has the potential of producing 1.53 × 10⁹ ST m³ of gas in 30 years by depressurization, with a peak production rate of around 19.4 × 10⁴ ST m³/day through a single horizontal well. To determine the effect of uncertainty in reservoir properties on the gas production, an uncertainty analysis is carried out. It is observed that for the range of data considered, the overall cumulative production from Sunlight Peak will always be within ±4.6% of the overall mean value of 1.43 × 10⁹ ST m³. A sensitivity analysis study showed that the proximity of the reservoir to the base of permafrost and the base of the hydrate stability zone (BHSZ) has a significant effect on gas production rates. The gas production rates decrease with the increase in the depth of the permafrost and the depth of the BHSZ. From the overall analysis of the results it is concluded that the Sunlight Peak gas hydrate accumulation behaves differently than other Class III reservoirs (Class III reservoirs are composed of a single layer of hydrate with no

  4. Multiplatform Observations from DYNAMO and Deployment of a Comprehensive Dataset for Numerical Model Evaluation and other Applications

    NASA Astrophysics Data System (ADS)

    Guy, N.; Chen, S. S.; Zhang, C.

    2014-12-01

    A large number of observations were collected during the DYNAMO (Dynamics of the Madden-Julian Oscillation) field campaign in the tropical Indian Ocean during 2011. These data ranged from in-situ measurements of individual hydrometeors to regional precipitation distribution to large-scale precipitation and wind fields. Many scientific findings have been reported in the three years since project completion, leading to a better physical understanding of the Madden-Julian Oscillation (MJO) initiation and providing insight to a roadmap to better predictability. The NOAA P-3 instrumented aircraft was deployed from 11 November - 13 December 2011, embarking on 12 flights. This mobile platform provided high resolution, high quality in-situ and remotely sensed observations of the meso-γ to meso-α scale environment and offered coherent cloud dynamic and microphysical data in convective cloud systems where surface-based instruments were unable to reach. Measurements included cloud and precipitation microphysical observations via the Particle Measuring System 2D cloud and precipitation probes, aircraft altitude flux measurements, dropsonde vertical thermodynamic profiles, and 3D precipitation and wind field observations from the tail-mounted Doppler X-band weather radar. Existing satellite (infrared, visible, and water vapor) data allowed the characterization of the large-scale environment. These comprehensive data have been combined into an easily accessible product with special attention paid to comparing observations to future numerical simulations. The P-3 and French Falcon aircraft flew a coordinated mission, above and below the melting level, respectively, near Gan Island on 8 December 2011, acquiring coincident cloud microphysical and dynamics data. The Falcon aircraft is instrumented with vertically pointing W-band radar, with a focus on ice microphysical properties. We present this case in greater detail to show the optimal coincident measurements. Additional

  5. Accuracy evaluation of numerical methods used in state-of-the-art simulators for spiking neural networks.

    PubMed

    Henker, Stephan; Partzsch, Johannes; Schüffny, René

    2012-04-01

    With the various simulators for spiking neural networks developed in recent years, a variety of numerical solution methods for the underlying differential equations are available. In this article, we introduce an approach to systematically assess the accuracy of these methods. In contrast to previous investigations, our approach focuses on a completely deterministic comparison and uses an analytically solved model as a reference. This enables the identification of typical sources of numerical inaccuracies in state-of-the-art simulation methods. In particular, with our approach we can separate the error of the numerical integration from the timing error of spike detection and propagation, the latter being prominent in simulations with fixed timestep. To verify the correctness of the testing procedure, we relate the numerical deviations to theoretical predictions for the employed numerical methods. Finally, we give an example of the influence of simulation artefacts on network behaviour and spike-timing-dependent plasticity (STDP), underlining the importance of spike-time accuracy for the simulation of STDP.
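    The deterministic comparison against an analytically solved model can be illustrated with the subthreshold dynamics of a leaky integrate-and-fire membrane, which have a closed-form solution. This sketch (forward Euler versus the exact exponential decay) is our own illustration of the idea, not the actual integration schemes of the simulators under study:

```python
import math

def lif_exact(v0, t, tau=0.02):
    # Closed-form subthreshold solution of dv/dt = -v / tau.
    return v0 * math.exp(-t / tau)

def lif_euler(v0, t, dt, tau=0.02):
    # Forward-Euler integration of the same equation with fixed timestep dt.
    v = v0
    for _ in range(int(round(t / dt))):
        v -= dt * v / tau
    return v
```

    Comparing both at t = 10 ms with tau = 20 ms shows the integration error shrinking as the timestep is refined, which is exactly the kind of deviation the analytical reference isolates from spike-timing errors.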

  6. Analytical and numerical evaluation of crack-tip plasticity of an axisymmetrically loaded penny-shaped crack

    NASA Astrophysics Data System (ADS)

    Chaiyat, Sumitra; Jin, Xiaoqing; Keer, Leon M.; Kiattikomol, Kraiwood

    2008-01-01

    Analytical and numerical approaches are used to solve an axisymmetric crack problem with a refined Barenblatt-Dugdale approach. The analytical method utilizes potential theory in classical linear elasticity, where a suitable potential is selected for the treatment of the mixed boundary problem. The closed-form solution for the problem with constant pressure applied near the tip of a penny-shaped crack is studied to illustrate the methodology of the analysis and also to provide a fundamental solution for the numerical approach. Taking advantage of the superposition principle, an exact solution is derived to predict the extent of the plastic zone where a Tresca yield condition is imposed, which also provides a useful benchmark for the numerical study presented in the second part. For an axisymmetric crack, the numerical discretization is required only in the radial direction, which renders the programming work efficient. Through an iterative scheme, the numerical method is able to determine the size of the crack tip plasticity, which is governed by the nonlinear von Mises criterion. The relationships between the applied load and the length of the plastic zone are compared for three different yielding conditions. To cite this article: S. Chaiyat et al., C. R. Mecanique 336 (2008).
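    For orientation, the classical two-dimensional Dugdale strip-yield result gives a closed-form plastic zone length ahead of a through crack, playing the same benchmark role that the axisymmetric penny-shaped solution plays in the paper. This is a sketch of the standard 2D formula, not the paper's axisymmetric result.

```python
import math

def dugdale_zone(a, sigma, sigma_y):
    """Dugdale strip-yield plastic zone length ahead of a through crack of
    half-length a under remote tension sigma (classical 2D plane-stress
    result): rho = a * (sec(pi*sigma/(2*sigma_y)) - 1)."""
    return a * (1.0 / math.cos(math.pi * sigma / (2.0 * sigma_y)) - 1.0)

for ratio in (0.2, 0.4, 0.6, 0.8):
    print(f"sigma/sigma_Y = {ratio:.1f}: rho/a = {dugdale_zone(1.0, ratio, 1.0):.4f}")
```

    As in the paper, the plastic zone grows nonlinearly with the applied load and diverges as the load approaches the yield stress.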

  7. Experimental and numerical evaluation of freely spacing-tunable multiwavelength fiber laser based on two seeding light signals

    SciTech Connect

    Yuan, Yijun; Yao, Yong; Guo, Bo; Yang, Yanfu; Tian, JiaJun; Yi, Miao

    2015-03-28

    A model of multiwavelength erbium-doped fiber laser (MEFL), which takes into account the impact of fiber attenuation on the four-wave-mixing (FWM), is proposed. Using this model, we numerically study the output characteristics of the MEFL based on FWM in a dispersion shift fiber with two seeding light signals (TSLS) and experimentally verify these characteristics. The numerical and experimental results show that the number of output channels can be increased with the increase of the erbium-doped fiber pump power. In addition, by decreasing the spacing of TSLS and increasing the power of TSLS, the number of output channels can be increased. However, when the power of TSLS exceeds a critical value, the number of output channels decreases. The results by numerical simulation are consistent with experimental observations from the MEFL.
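    The growth in channel count can be pictured with simple four-wave-mixing frequency bookkeeping: each mixing product appears at f_i + f_j - f_k, so two seed lines cascade into a comb at the seed spacing. This sketch ignores conversion efficiency, phase matching, and the power effects reported above; the frequencies are arbitrary grid units.

```python
def fwm_cascade(seeds, rounds=3):
    """Frequencies generated by cascaded four-wave mixing: each round adds
    every combination f_i + f_j - f_k to the existing set of lines."""
    freqs = set(seeds)
    for _ in range(rounds):
        new = {fi + fj - fk for fi in freqs for fj in freqs for fk in freqs}
        freqs |= new
    return sorted(freqs)

# two seed signals one grid unit apart (illustrative units, not real channels)
comb = fwm_cascade([0.0, 1.0], rounds=2)
print(comb)
```

    Every generated line falls on the grid defined by the seed spacing, which is why decreasing the spacing of the two seeds packs more channels into the gain bandwidth.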

  8. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    SciTech Connect

    Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others

    2015-04-01

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
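    The flavour of the scheme, replacing the numerator with a piecewise polynomial and integrating the principal value analytically on subintervals, can be sketched for the scalar model integral PV ∫ e^x/x dx on [-1, 1], whose exact value is 2·Shi(1). This is a simplified piecewise-linear analogue, not the authors' GW implementation.

```python
import math

def shi(z, terms=20):
    """Hyperbolic sine integral via its Taylor series (reference value)."""
    return sum(z**(2*k + 1) / ((2*k + 1) * math.factorial(2*k + 1))
               for k in range(terms))

def pv_piecewise_linear(f, a, b, x0, n):
    """Principal value of the integral of f(x)/(x - x0) over [a, b].

    The numerator is replaced by its piecewise-linear interpolant on a
    uniform n-interval grid containing x0 as a node; each subinterval is
    then integrated analytically. The log singularities of the two
    intervals adjacent to x0 cancel in the principal-value sense
    (continuous f, uniform grid), so those log terms are simply skipped.
    """
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    total = 0.0
    for xl, xr in zip(xs[:-1], xs[1:]):
        fl, fr = f(xl), f(xr)
        c1 = (fr - fl) / (xr - xl)        # slope of the local interpolant
        fx0 = fl + c1 * (x0 - xl)         # interpolant evaluated at x0
        total += c1 * (xr - xl)           # regular part of the integral
        if xl != x0 and xr != x0:
            total += fx0 * math.log(abs((xr - x0) / (xl - x0)))
    return total

ref = 2.0 * shi(1.0)                      # exact PV of e^x / x on [-1, 1]
for n in (4, 16, 64):
    approx = pv_piecewise_linear(math.exp, -1.0, 1.0, 0.0, n)
    print(n, abs(approx - ref))
```

    Unlike a plain trapezoidal rule, the scheme has no trouble at x0 itself, since the singular factor is handled analytically on each subinterval.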

  9. Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography

    PubMed Central

    Jbabdi, S.; Bellec, P.; Toro, R.; Daunizeau, J.; Pélégrini-Issac, M.; Benali, H.

    2008-01-01

    Using geodesics for inferring white matter fibre tracts from diffusion-weighted MR data is an attractive method for at least two reasons: (i) the method optimises a global criterion, and hence is less sensitive to local perturbations such as noise or partial volume effects, and (ii) the method is fast, allowing inference on a large number of connections in a reasonable computational time. Here, we propose an improved fast marching algorithm to infer geodesic paths. Specifically, this procedure is designed to achieve accurate front propagation in an anisotropic elliptic medium, such as DTI data. We evaluate the numerical performance of this approach on simulated datasets, as well as its robustness to local perturbation induced by fibre crossing. On real data, we demonstrate the feasibility of extracting geodesics to connect an extended set of brain regions. PMID:18299703

  10. Evaluation of a Numeracy Intervention Program Focusing on Basic Numerical Knowledge and Conceptual Knowledge: A Pilot Study.

    ERIC Educational Resources Information Center

    Kaufmann, Liane; Handl, Pia; Thony, Brigitte

    2003-01-01

    In this study, six elementary grade children with developmental dyscalculia were trained individually and in small group settings with a one-semester program stressing basic numerical knowledge and conceptual knowledge. All the children showed considerable and partly significant performance increases on all calculation components. Results suggest…

  11. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
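    The structure of such a dual-kernel integral can be illustrated with a scalar stand-in, I(T) = ∫₀ᵀ e^{a(T-τ)} b e^{fτ} dτ, in which one kernel runs forward in τ and the other backward from T; the scalar case has a closed form against which a quadrature evaluation can be checked. The actual D/C formulation involves matrix exponentials; this sketch only shows the quadrature-versus-closed-form comparison.

```python
import math

def dual_kernel_integral_quad(a, f, b, T, n=200):
    """Composite-Simpson evaluation of I(T) = integral over [0, T] of
    exp(a*(T-tau)) * b * exp(f*tau) d tau (scalar dual-kernel stand-in)."""
    if n % 2:
        n += 1                            # Simpson needs an even interval count
    h = T / n
    def g(tau):
        return math.exp(a * (T - tau)) * b * math.exp(f * tau)
    s = g(0.0) + g(T)
    s += 4 * sum(g(i * h) for i in range(1, n, 2))
    s += 2 * sum(g(i * h) for i in range(2, n, 2))
    return s * h / 3

def dual_kernel_integral_exact(a, f, b, T):
    # closed form for the scalar case with a != f
    return b * (math.exp(f * T) - math.exp(a * T)) / (f - a)

a, f, b, T = -2.0, -0.5, 1.0, 1.0
print(dual_kernel_integral_quad(a, f, b, T), dual_kernel_integral_exact(a, f, b, T))
```

    In the constant-coefficient matrix case the same comparison is possible, since closed-form expressions exist for the benchmark examples mentioned above.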

  12. Numerical simulation of gas-dynamic, thermal processes and evaluation of the stress-strain state in the modeling compressor of the gas-distributing unit

    NASA Astrophysics Data System (ADS)

    Shmakov, A. F.; Modorskii, V. Ya.

    2016-10-01

    This paper presents the results of numerical modeling of the gas-dynamic processes occurring in the flow path, a thermal analysis, and an evaluation of the stress-strain state of a three-stage design of the compressor of a gas pumping unit. Physical and mathematical models of the processes were developed. Numerical simulation was carried out in the engineering software ANSYS 13. The problem is solved in a coupled statement, in which the results of the gas-dynamic calculation are transferred as boundary conditions for the evaluation of the thermal and stress-strain state of the three-stage compressor design. The basic parameters that affect the stress-strain state of the housing and the changes in the gaps of the labyrinth seals in the construction were identified. A method for analyzing the influence of the pumped gas flow on the strain of the construction was developed.

  13. Experimental-numerical evaluation of a new butterfly specimen for fracture characterisation of AHSS in a wide range of stress states

    NASA Astrophysics Data System (ADS)

    Peshekhodov, I.; Jiang, S.; Vucetic, M.; Bouguecha, A.; Behrens, B.-A.

    2016-11-01

    Results of an experimental-numerical evaluation of a new butterfly specimen for fracture characterisation of AHSS sheets in a wide range of stress states are presented. The test on the new butterfly specimen is performed in a uniaxial tensile machine and provides sufficient data for calibration of common fracture models. In the first part, results of a numerical specimen evaluation are presented, which was performed with a material model of a dual-phase steel DP600 taken from the literature with plastic flow and fracture descriptions. In the second part, results of an experimental-numerical specimen evaluation are shown, which was conducted on another dual-phase steel DP600 that was available with a description of plastic flow only and whose fracture behaviour was characterised in the frame of this work. The overall performance of the new butterfly specimen at different load cases with regard to characterisation of the fracture behaviour of AHSS was investigated. The dependency of the fracture strain on the stress triaxiality and Lode angle, as well as its spatial resolution, is quantified. A parametrised CrachFEM ductile shear fracture model and a modified Mohr-Coulomb ductile shear fracture model are presented as a result of this quantification. The test procedure and results analysis are believed to contribute to current discussions on requirements for AHSS fracture characterisation.

  14. A numerical model for CO effect evaluation in HT-PEMFCs: Part 2 - Application to different membranes

    NASA Astrophysics Data System (ADS)

    Cozzolino, R.; Chiappini, D.; Tribioli, L.

    2016-06-01

    In this paper, a self-made numerical model of a high temperature polymer electrolyte membrane fuel cell is presented. In particular, we focus on the impact of CO poisoning on fuel cell performance and its influence on electrochemical modelling. More specifically, the aim of this work is to demonstrate the effectiveness of our zero-dimensional electrochemical model of HT-PEMFCs by comparing numerical and experimental results obtained from two different commercial membrane electrode assemblies: the first is based on polybenzimidazole (PBI) doped with phosphoric acid, while the second uses a PBI electrolyte with aromatic polyether polymers/copolymers bearing pyridine units, also doped with H3PO4. The analysis has been carried out considering both the effect of CO poisoning and the operating temperature for the two membranes mentioned above.
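    A generic way to see how CO coverage enters a zero-dimensional model is to scale the exchange current density by the fraction of unblocked catalyst sites in a Tafel-plus-ohmic polarization expression. The constants below are illustrative and the expression is a textbook sketch, not the authors' model.

```python
import math

def cell_voltage(j, theta_co, E0=0.95, b=0.05, j0=1e-4, r=0.25):
    """Zero-dimensional polarization sketch: Tafel activation loss with the
    exchange current density scaled by the CO-free fraction of active
    sites, plus an ohmic loss term. All parameters are illustrative.

    j        current density [A/cm^2]
    theta_co fraction of catalyst sites blocked by adsorbed CO
    """
    j0_eff = j0 * (1.0 - theta_co)   # CO blocks part of the catalyst area
    return E0 - b * math.log(j / j0_eff) - r * j

for theta in (0.0, 0.5, 0.9):
    print(f"theta_CO = {theta:.1f}: V = {cell_voltage(0.2, theta):.3f} V")
```

    Raising the operating temperature reduces the equilibrium CO coverage in HT-PEMFCs, which in this sketch corresponds to a lower theta_co and hence a smaller activation penalty.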

  15. Evaluation of the Faraday angle by numerical methods and comparison with the Tore Supra and JET polarimeter electronics.

    PubMed

    Brault, C; Gil, C; Boboc, A; Spuig, P

    2011-04-01

    On the Tore Supra tokamak, a far infrared polarimeter diagnostic has been routinely used for diagnosing the current density by measuring the Faraday rotation angle. A high precision of measurement is needed to correctly reconstruct the current profile. To reach this precision, the electronics used to compute the phase and the amplitude of the detected signals must have a good resilience to the noise in the measurement. In this article, the response of the analogue cards to the noise coming from the detectors, and the impact of this noise on the Faraday angle measurements, are analyzed, and we present numerical methods to calculate the phase and the amplitude. These validations have been done using real signals acquired in Tore Supra and JET experiments. These methods have been developed to be used in real time in the future numerical cards that will replace the present Tore Supra analogue ones.

  16. Numerical evaluation of passive control of shock wave/boundary layer interaction on NACA0012 airfoil using jagged wall

    NASA Astrophysics Data System (ADS)

    Dehghan Manshadi, Mojtaba; Rabani, Ramin

    2016-10-01

    Shock formation due to flow compressibility and its interaction with boundary layers has adverse effects on aerodynamic characteristics, such as drag increase and flow separation. The objective of this paper is to appraise the practicability of weakening shock waves and, hence, reducing the wave drag in transonic flight regime using a two-dimensional jagged wall and thereby to gain an appropriate jagged wall shape for future empirical study. Different shapes of the jagged wall, including rectangular, circular, and triangular shapes, were employed. The numerical method was validated by experimental and numerical studies involving transonic flow over the NACA0012 airfoil, and the results presented here closely match previous experimental and numerical results. The impact of parameters, including shape and the length-to-spacing ratio of a jagged wall, was studied on aerodynamic forces and flow field. The results revealed that applying a jagged wall method on the upper surface of an airfoil changes the shock structure significantly and disintegrates it, which in turn leads to a decrease in wave drag. It was also found that the maximum drag coefficient decrease of around 17 % occurs with a triangular shape, while the maximum increase in aerodynamic efficiency (lift-to-drag ratio) of around 10 % happens with a rectangular shape at an angle of attack of 2.26°.

  17. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
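    The non-oscillatory behaviour discussed above can be demonstrated with a minmod-limited MUSCL reconstruction of a step in cell averages: the limiter drops to zero slope at extrema, so the reconstructed interface values introduce no new overshoots. This is a minimal sketch of limited MUSCL differencing, not the ENO or k-exact operators of the paper.

```python
def minmod(a, b):
    """Slope limiter: picks the smaller-magnitude slope, zero at extrema."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_interfaces(u):
    """Right-face values of the interior cells, reconstructed from cell
    averages with minmod-limited linear slopes (boundary cells omitted)."""
    faces = []
    for i in range(1, len(u) - 1):
        s = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        faces.append(u[i] + 0.5 * s)   # value at the right face of cell i
    return faces

u = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]     # discrete step profile
print(muscl_interfaces(u))
```

    The reconstructed values stay within the range of the input data, which is the essentially non-oscillatory property that adaptive-stencil ENO schemes generalise to higher order.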

  18. A Method to Calculate and Analyze Residents' Evaluations by Using a Microcomputer Data-Base Management System.

    ERIC Educational Resources Information Center

    Mills, Myron L.

    1988-01-01

    A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)

  19. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence when the mesh is refined must be done by numerical experimentation because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  20. Application of 2D numerical model to unsteady performance evaluation of vertical-axis tidal current turbine

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Qu, Hengliang; Shi, Hongda; Hu, Gexing; Hyun, Beom-Soo

    2016-12-01

    Tidal current energy is renewable and sustainable, which is a promising alternative energy resource for the future electricity supply. The straight-bladed vertical-axis turbine is regarded as a useful tool to capture the tidal current energy especially under low-speed conditions. A 2D unsteady numerical model based on Ansys-Fluent 12.0 is established to conduct the numerical simulation, which is validated by the corresponding experimental data. For the unsteady calculations, the SST model, 2×10^5, and 0.01 s are selected as the turbulence model, mesh number, and time step, respectively. Detailed contours of the velocity distributions around the rotor blade foils have been provided for a flow field analysis. The tip speed ratio (TSR) determines the azimuth angle of the appearance of the torque peak, which occurs once for a blade in a single revolution. It is also found that simply increasing the incident flow velocity could not improve the turbine performance accordingly. The peaks of the averaged power and torque coefficients appear at TSRs of 2.1 and 1.8, respectively. Furthermore, several shapes of the duct augmentation are proposed to improve the turbine performance by contracting the flow path gradually from the open mouth of the duct to the rotor. The duct augmentation can significantly enhance the power and torque output. In particular, the elliptic shape enables the best performance of the turbine. The numerical results prove the capability of the present 2D model for the unsteady hydrodynamics and an operating performance analysis of the vertical tidal stream turbine.

  1. Graphical arterial blood gas visualization tool supports rapid and accurate data interpretation.

    PubMed

    Doig, Alexa K; Albert, Robert W; Syroid, Noah D; Moon, Shaun; Agutter, Jim A

    2011-04-01

    A visualization tool that integrates numeric information from an arterial blood gas report with novel graphics was designed for the purpose of promoting rapid and accurate interpretation of acid-base data. A study compared data interpretation performance when arterial blood gas results were presented in a traditional numerical list versus the graphical visualization tool. Critical-care nurses (n = 15) and nursing students (n = 15) were significantly more accurate identifying acid-base states and assessing trends in acid-base data when using the graphical visualization tool. Critical-care nurses and nursing students using traditional numerical data had an average accuracy of 69% and 74%, respectively. Using the visualization tool, average accuracy improved to 83% for critical-care nurses and 93% for nursing students. Analysis of response times demonstrated that the visualization tool might help nurses overcome the "speed/accuracy trade-off" during high-stress situations when rapid decisions must be rendered. Perceived mental workload was significantly reduced for nursing students when they used the graphical visualization tool. In this study, the effects of implementing the graphical visualization were greater for nursing students than for critical-care nurses, which may indicate that the experienced nurses needed more training and use of the new technology prior to testing to show similar gains. Results of the objective and subjective evaluations support the integration of this graphical visualization tool into clinical environments that require accurate and timely interpretation of arterial blood gas data.

  2. Urban pluvial flood prediction: a case study evaluating radar rainfall nowcasts and numerical weather prediction models as model inputs.

    PubMed

    Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer

    2016-12-01

    Flooding produced by high-intensity local rainfall and drainage system capacity exceedance can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate these events numerically, both historically and in real-time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar data observations with different spatial and temporal resolution, radar nowcasts of 0-2 h leadtime, and numerical weather models with leadtimes up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative difference between different inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show it is possible to generate detailed flood maps in real-time with high resolution radar rainfall data, but rather limited forecast performance in predicting floods with leadtimes more than half an hour.

  3. Numerical evaluation of moiré pattern in touch sensor module with electrode mesh structure in oblique view

    NASA Astrophysics Data System (ADS)

    Pournoury, M.; Zamiri, A.; Kim, T. Y.; Yurlov, V.; Oh, K.

    2016-03-01

    Capacitive touch sensor screens with metal materials have recently become qualified as a substitute for ITO; however, several obstacles still have to be solved. One of the most important issues is the moiré phenomenon. The visibility problem of the metal mesh in a touch sensor module (TSM) is numerically considered in this paper. Based on the contrast sensitivity function (CSF) of the human eye, the moiré pattern of the TSM electrode mesh structure is simulated with MATLAB software for an 8 inch screen display in oblique view. The standard deviation of the moiré generated by the superposition of the electrode mesh and the screen image is calculated to find the optimal parameters which provide the minimum moiré visibility. To create the screen pixel array and mesh electrode, a rectangular function is used. The filtered image, in the frequency domain, is obtained by multiplication of the Fourier transform of the finite mesh pattern (the product of screen pixels and mesh electrode) with the calculated CSF for three different observer distances (L=200, 300 and 400 mm). It is observed that the discrepancy between analytical and numerical results is less than 0.6% for a 400 mm viewer distance. Moreover, in the case of oblique view, due to considering the thickness of the finite film between the mesh electrodes and the screen, different points of minimum standard deviation of the moiré pattern are predicted compared to normal view.
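    The origin of the moiré pattern is the beat, at the difference frequency |1/p1 - 1/p2|, between two nearly matched spatial frequencies; after a CSF-like low-pass, this low-frequency component dominates what the eye sees. Below is a 1D stand-in for the 2D mesh/pixel superposition, with arbitrary illustrative periods.

```python
import numpy as np

# Two 1D gratings with close periods p1, p2; their product beats at the
# difference frequency |1/p1 - 1/p2|, which is what survives a CSF-like
# low-pass and is perceived as moire (1D sketch of the 2D mesh/pixel case).
p1, p2 = 1.0, 1.1
L, N = 110.0, 4096                 # domain chosen so both periods fit exactly
x = np.arange(N) * L / N
signal = np.cos(2 * np.pi * x / p1) * np.cos(2 * np.pi * x / p2)

spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(N, d=L / N)
low = freqs < 0.5                  # crude low-pass standing in for the CSF
peak = freqs[low][np.argmax(spec[low])]

print(peak, abs(1 / p1 - 1 / p2))  # the low-frequency peak is the beat
```

    The sum-frequency component at 1/p1 + 1/p2 is also present, but it sits far above the low-pass cutoff, mirroring how the eye's CSF suppresses the fine structure of the mesh itself.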

  4. Differences in acoustic impedance of fresh and embedded human trabecular bone samples-Scanning acoustic microscopy and numerical evaluation.

    PubMed

    Ojanen, Xiaowei; Töyräs, Juha; Inkinen, Satu I; Malo, Markus K H; Isaksson, Hanna; Jurvelin, Jukka S

    2016-09-01

    Trabecular bone samples are traditionally embedded and polished for scanning acoustic microscopy (SAM). The effect of sample processing, including dehydration, on the acoustic impedance of bone is unknown. In this study, acoustic impedance of human trabecular bone samples (n = 8) was experimentally assessed before (fresh) and after embedding using SAM and two-dimensional (2-D) finite-difference time domain simulations. Fresh samples were polished with sandpapers of different grit (P1000, P2500, and P4000). Experimental results indicated that acoustic impedance of samples increased significantly after embedding [mean values 3.7 MRayl (fresh), 6.1 MRayl (embedded), p < 0.001]. After polishing with different papers, no significant changes in acoustic impedance were found, even though higher mean values were detected after polishing with finer (P2500 and P4000) papers. A linear correlation (r = 0.854, p < 0.05) was found between the acoustic impedance values of embedded and fresh bone samples polished using P2500 SiC paper. In numerical simulations dehydration increased the acoustic impedance of trabecular bone (38%), whereas changes in surface roughness of bone had a minor effect on the acoustic impedance (-1.56%/0.1 μm). Thereby, the numerical simulations corroborated the experimental findings. In conclusion, acoustic impedance measurement of fresh trabecular bone is possible and may provide realistic material values similar to those of living bone.

  5. Evaluation of ground-penetrating radar to detect free-phase hydrocarbons in fractured rocks - Results of numerical modeling and physical experiments

    USGS Publications Warehouse

    Lane, J.W.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.

    2000-01-01

    The suitability of common-offset ground-penetrating radar (GPR) to detect free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and the polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in-phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity than those reflections created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections, nevertheless subsurface heterogeneity, antenna coupling changes, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties as demonstrated by the numerical and experimental results suggests the potential of using GPR methods as a monitoring tool. 
GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
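    The polarity and amplitude contrast reported above follows from the normal-incidence reflection coefficient between the host rock and the fracture fill, R = (√ε1 - √ε2)/(√ε1 + √ε2) for low-loss media. The relative permittivities below are typical textbook values, not those of the study.

```python
import math

def reflection_coefficient(eps_host, eps_fill):
    """Normal-incidence reflection coefficient between host rock and
    fracture filling, from their relative permittivities (low-loss case)."""
    n1, n2 = math.sqrt(eps_host), math.sqrt(eps_fill)
    return (n1 - n2) / (n1 + n2)

# illustrative relative permittivities (assumed values, not from the study)
EPS_ROCK = 6.0
EPS_FILL = {"air": 1.0, "hydrocarbon": 2.0, "water": 80.0}

for fill, eps in EPS_FILL.items():
    r = reflection_coefficient(EPS_ROCK, eps)
    print(f"{fill:12s} R = {r:+.3f}")
```

    Water-filled fractures give a large negative R (reversed polarity), while air- and hydrocarbon-filled fractures give smaller positive values, matching the experimental observations above.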

  6. Numerical and experimental evaluation of laser forming process for the shape correction in ultra high strength steels

    SciTech Connect

    Song, J. H.; Lee, J.; Lee, S.; Kim, E. Z.; Lee, N. K.; Lee, G. A.; Park, S. J.; Chu, A.

    2013-12-16

    In this paper, laser forming characteristics in an ultra high strength steel with an ultimate strength of 1200 MPa are investigated numerically and experimentally. FE simulation is conducted to identify the response related to deformation and to characterize the effect of laser power, beam diameter, and scanning speed on the bending angle for a square sheet part. The thermo-mechanical behaviors during the straight-line heating process are presented in terms of temperature, stress, and strain. An experimental setup including a fiber laser with a maximum mean power of 3.0 kW is used in the experiments. The results of this work show that the laser power and the scanning speed can be easily adjusted by controlling the line energy for a bending operation of CP1180 steel sheets.

  7. Evaluation of Soft Tissue Sarcoma Tumors Electrical Conductivity Anisotropy Using Diffusion Tensor Imaging for Numerical Modeling on Electroporation

    PubMed Central

    Ghazikhanlou-sani, K.; Firoozabadi, S. M. P.; Agha-ghazvini, L.; Mahmoodzadeh, H.

    2016-01-01

    Introduction: There are many ways to assess the electrical conductivity anisotropy of a tumor. Applying the values of tissue electrical conductivity anisotropy is crucial in numerical modeling of the electric and thermal field distribution in electroporation treatments. This study aims to calculate the tissue electrical conductivity anisotropy in patients with sarcoma tumors using the diffusion tensor imaging technique. Materials and Method: A total of 3 subjects were involved in this study. All patients had clinically apparent sarcoma tumors at the extremities. The T1, T2 and DTI images were acquired using a 3-Tesla multi-coil, multi-channel MRI system. The fractional anisotropy (FA) maps were computed from the DTI images using the FSL (FMRI Software Library) software. The 3D matrix of the FA maps of each area (tumor, normal soft tissue and bone/s) was reconstructed and the anisotropy matrix was calculated from the FA values. Results: The mean FA values in the direction of the main axis in sarcoma tumors ranged between 0.475–0.690. With the assumption of isotropy of the electrical conductivity, the FA value of the electrical conductivity along each of the X, Y and Z coordinate axes would be equal to 0.577 (1/√3). The results showed that there is a mean error band of 20% in the electrical conductivity if the electrical conductivity anisotropy is not included in the calculations. The comparison of FA values showed a significant statistical difference between the mean FA value of tumor and normal soft tissues (P<0.05). Conclusion: DTI is a feasible technique for the assessment of the electrical conductivity anisotropy of tissues. It is crucial to quantify the electrical conductivity anisotropy data of tissues for numerical modeling of electroporation treatments. PMID:27672627
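    The FA values quoted above come from the standard eigenvalue formula for fractional anisotropy, normalised so that an isotropic tensor gives FA = 0 and a tensor with a single nonzero eigenvalue gives FA = 1. The example eigenvalues below are illustrative, not values from the study.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues of a diffusion (or conductivity)
    tensor: sqrt(1/2) * ||lambda_i - lambda_j|| / ||lambda||."""
    num = math.sqrt((l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2)
    den = math.sqrt(l1**2 + l2**2 + l3**2)
    return math.sqrt(0.5) * num / den if den else 0.0

print(fractional_anisotropy(1.0, 1.0, 1.0))     # isotropic tensor (FA -> 0)
print(fractional_anisotropy(1.0, 0.0, 0.0))     # single direction (FA -> 1)
print(round(fractional_anisotropy(1.7e-3, 0.3e-3, 0.2e-3), 3))
```

    The 0.577 figure above is a different quantity: the per-axis component 1/√3 of a unit vector under the isotropy assumption, not the FA of an isotropic tensor (which is zero).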

  8. An optimal scheme for numerical evaluation of Eshelby tensors and its implementation in a MATLAB package for simulating the motion of viscous ellipsoids in slow flows

    NASA Astrophysics Data System (ADS)

    Qu, Mengmeng; Jiang, Dazhi; Lu, Lucy X.

    2016-11-01

    To address the multiscale deformation and fabric development in Earth's ductile lithosphere, micromechanics-based self-consistent homogenization is commonly used to obtain macroscale rheological properties from properties of constituent elements. The homogenization is heavily based on the solution of an Eshelby viscous inclusion in a linear viscous medium and the extension of the solution to nonlinear viscous materials. The homogenization requires repeated numerical evaluation of Eshelby tensors for constituent elements and becomes ever more computationally challenging as the elements are deformed to more elongate or flattened shapes. In this paper, we develop an optimal scheme for evaluating Eshelby tensors, using a combination of a product Gaussian quadrature and the Lebedev quadrature. We first establish, through numerical experiments, an empirical relationship between the inclusion shape and the computational time it takes to evaluate its Eshelby tensors. We then use the relationship to develop an optimal scheme for selecting the most efficient quadrature to obtain the Eshelby tensors. The optimal scheme is applicable to general homogenizations. In this paper, it is implemented in a MATLAB package for investigating the evolution of solitary rigid or deformable inclusions and the development of shape preferred orientations in multi-inclusion systems during deformation. The MATLAB package, upgrading an earlier effort written in MathCad, can be downloaded online.
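    The product Gaussian quadrature half of such a scheme can be sketched as Gauss-Legendre nodes in u = cos θ combined with equally weighted azimuthal nodes, which together integrate smooth functions over the unit sphere; the Lebedev alternative and the paper's optimal selection rule are not reproduced here.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def sphere_quadrature(f, n_u=16, n_phi=32):
    """Product Gaussian quadrature over the unit sphere.

    Gauss-Legendre nodes in u = cos(theta) handle the polar direction;
    equally weighted nodes in the periodic azimuth phi are spectrally
    accurate for trigonometric polynomials."""
    u, wu = leggauss(n_u)
    phi = 2 * np.pi * np.arange(n_phi) / n_phi
    w_phi = 2 * np.pi / n_phi
    total = 0.0
    for ui, wi in zip(u, wu):
        s = np.sqrt(1.0 - ui**2)
        for p in phi:
            x, y, z = s * np.cos(p), s * np.sin(p), ui
            total += wi * w_phi * f(x, y, z)
    return total

area = sphere_quadrature(lambda x, y, z: 1.0)        # surface area, 4*pi
second = sphere_quadrature(lambda x, y, z: z * z)    # second moment, 4*pi/3
print(area, second)
```

    For elongated or flattened inclusions the integrand develops sharp features, which is why the node counts (and hence the cost) must grow with inclusion aspect ratio, motivating the paper's shape-dependent quadrature selection.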

  9. Subjective evaluation of the combined influence of satellite temperature sounding data and increased model resolution on numerical weather forecasting

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Halem, M.; Ghil, M.

    1979-01-01

    The present evaluation is concerned with (1) the significance of prognostic differences resulting from the inclusion of satellite-derived temperature soundings, (2) how specific differences between the SAT and NOSAT prognoses evolve, and (3) comparison of two experiments using the Goddard Laboratory for Atmospheric Sciences general circulation model. The subjective evaluation indicates that the beneficial impact of sounding data is enhanced with increased resolution. It is suggested that satellite sounding data possess valuable information content which at times can correct gross analysis errors in data-sparse regions.

  10. A numerical algorithm to evaluate the transient response for a synchronous scanning streak camera using a time-domain Baum-Liu-Tesche equation

    NASA Astrophysics Data System (ADS)

    Pei, Chengquan; Tian, Jinshou; Wu, Shengli; He, Jiai; Liu, Zhen

    2016-10-01

    The transient response is of great influence on the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper, we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC, and then derived the Baum-Liu-Tesche (BLT) equation in the frequency domain. From the frequency-domain BLT equation, its transient counterpart was derived. The circuit parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, applying a novel numerical method to enforce the continuity equation. Several numerical simulations were conducted to verify this proposed method. The computed results were then compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the accordance was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used with slight modifications to calculate the transient response and the error can be controlled by a computer program. The result showed that the transient voltage was up to 1000 V and the transient current was approximately 10 A, so some protective measures should be taken to improve the electromagnetic compatibility.
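    The frequency-domain/FFT reference method used above for comparison can be sketched in a few lines: build the load transfer function on an FFT frequency grid, multiply by the spectrum of the excitation, and invert. The single-pole load below (time constant `tau`) is an assumption for illustration; the paper's BLT network for the deflection plate is far more detailed.

    ```python
    import numpy as np

    tau = 1e-9            # assumed 1 ns load time constant (illustrative)
    n = 4096
    dt = 1e-11            # 10 ps sampling interval
    t = np.arange(n) * dt
    f = np.fft.rfftfreq(n, dt)
    H = 1.0 / (1.0 + 2j * np.pi * f * tau)       # single-pole transfer function

    # Gaussian excitation pulse, much faster than the load time constant
    v_in = np.exp(-((t - 5e-10) / 1e-10) ** 2)

    # Frequency-domain solution, transformed back to the time domain
    V_out = np.fft.rfft(v_in) * H
    v_out = np.fft.irfft(V_out, n)

    # The response is attenuated and smeared out, but the DC content
    # (H[0] = 1) is preserved, so the pulse areas agree.
    print(v_out.max() < v_in.max())
    ```

    The time-domain BLT route the paper proposes avoids the periodicity and windowing assumptions implicit in this FFT approach, which is what makes its error controllable step by step.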

  11. Evaluation of backward Lagrangian stochastic (bLS) model to estimate gas emissions from complex sources based on numerical simulations

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Liu, Wenqing; Zhang, Tianshu; Ren, Manyan

    2013-03-01

    The focus of this paper is the application of an inverse-dispersion technique based on a backward Lagrangian stochastic (bLS) model to calculate gas-emission rates from industrial complexes. While the bLS technique is attractive for these types of sources, the bLS calculation must assume a spatial configuration for the source. Therefore, results are presented herein of numerical simulations designed to study the sensitivity of emissions calculations to the assumption of source configuration for complex industrial sources. We discuss how measurement fetch, concentration sensor height, and optical path length influence the accuracy of emission estimation. Through simulations, we identify an improved sensor configuration that reduces emission-calculation errors caused by an incorrect source-configuration assumption. It is concluded that, with respect to our defined source, the optimal measurement fetch may be between 200 m and 300 m; also, the ideal measurement height is probably between 2.0 m and 2.5 m. With choices within these two ranges, a path length of about 200 m is adequate, and greater path lengths, above 200 m, result in no substantial improvement in emission calculations.
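    The core inverse-dispersion step behind the bLS technique is simple to state: the model supplies a simulated concentration-to-emission ratio (C/Q)_sim for the assumed source configuration, and the emission rate follows from a measured line-averaged concentration. A minimal sketch, with all numbers illustrative (the function name and values are not from the paper):

    ```python
    # Inverse-dispersion estimate: Q = (C - C_bg) / (C/Q)_sim.
    # An incorrect source-configuration assumption biases (C/Q)_sim and
    # hence Q, which is the sensitivity the paper's simulations quantify.
    def bls_emission_rate(c_measured, c_background, cq_sim):
        """Emission rate from measured and background concentrations and
        the bLS-simulated concentration-to-emission ratio."""
        return (c_measured - c_background) / cq_sim

    q = bls_emission_rate(c_measured=2.0, c_background=1.5, cq_sim=0.25)
    print(q)  # 2.0 in the chosen units
    ```

    Because the measurement enters only through the ratio, errors in fetch, sensor height, or path length propagate directly into Q, consistent with the configuration recommendations in the abstract.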

  12. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  13. Hindi Numerals.

    ERIC Educational Resources Information Center

    Bright, William

    In most languages encountered by linguists, the numerals, considered as a paradigmatic set, constitute a morpho-syntactic problem of only moderate complexity. The Indo-Aryan language family of North India, however, presents a curious contrast. The relatively regular numeral system of Sanskrit, as it has developed historically into the modern…

  14. New Rapid Evaluation for Long-Term Behavior in Deep Geological Repository by Geotechnical Centrifuge—Part 2: Numerical Simulation of Model Tests in Isothermal Condition

    NASA Astrophysics Data System (ADS)

    Sawada, Masataka; Nishimoto, Soshi; Okada, Tetsuji

    2017-01-01

    In high-level radioactive waste disposal repositories, there are long-term complex thermal, hydraulic, and mechanical (T-H-M) phenomena that involve the generation of heat from the waste, the infiltration of ground water, and swelling of the bentonite buffer. The ability to model such coupled phenomena is of particular importance to the repository design and assessments of its safety. We have developed a T-H-M-coupled analysis program that evaluates the long-term behavior around the repository (called "near-field"). We have also conducted centrifugal model tests that model the long-term T-H-M-coupled behavior in the near-field. In this study, we conduct H-M-coupled numerical simulations of the centrifugal near-field model tests. We compare numerical results with each other and with results obtained from the centrifugal model tests. From the comparison, we deduce that: (1) in the numerical simulation, water infiltration in the rock mass was in agreement with the experimental observation. (2) The constant-stress boundary condition in the centrifugal model tests may cause a larger expansion of the rock mass than in the in situ condition, but the mechanical boundary condition did not affect the buffer behavior in the deposition hole. (3) The numerical simulation broadly reproduced the measured bentonite pressure and the overpack displacement, but did not reproduce the decreasing trend of the bentonite pressure after 100 equivalent years. This indicates the effect of the time-dependent characteristics of the surrounding rock mass. Further investigations are needed to determine the effect of initial heterogeneity in the deposition hole and the time-dependent behavior of the surrounding rock mass.

  15. Storm and fair-weather driven sediment-transport within Poverty Bay, New Zealand, evaluated using coupled numerical models

    NASA Astrophysics Data System (ADS)

    Bever, Aaron J.; Harris, Courtney K.

    2014-09-01

    The Waipaoa River Sedimentary System in New Zealand, a focus site of the MARGINS Source-to-Sink program, contains both a terrestrial and marine component. Poverty Bay serves as the interface between the fluvial and oceanic portions of this dispersal system. This study used a three-dimensional hydrodynamic and sediment-transport numerical model, the Regional Ocean Modeling System (ROMS), coupled to the Simulated WAves Nearshore (SWAN) wave model to investigate sediment-transport dynamics within Poverty Bay and the mechanisms by which sediment travels from the Waipaoa River to the continental shelf. Two sets of model calculations were analyzed; the first represented a winter storm season, January-September, 2006; and the second an approximately 40 year recurrence interval storm that occurred on 21-23 October 2005. Model results indicated that hydrodynamics and sediment-transport pathways within Poverty Bay differed during wet storms that included river runoff and locally generated waves, compared to dry storms driven by oceanic swell. During wet storms the model estimated significant deposition within Poverty Bay, although much of the discharged sediment was exported from the Bay during the discharge pulse. Later resuspension events generated by Southern Ocean swell reworked and modified the initial deposit, providing subsequent pulses of sediment from the Bay to the continental shelf. In this manner, transit through Poverty Bay modified the input fluvial signal, so that the sediment characteristics and timing of export to the continental shelf differed from the Waipaoa River discharge. Sensitivity studies showed that feedback mechanisms between sediment-transport, currents, and waves were important within the model calculations.

  16. Numerical evaluation of static-chamber measurements of soil-atmospheric gas exchange--Identification of physical processes

    USGS Publications Warehouse

    Healy, Richard W.; Striegl, Robert G.; Russell, Thomas F.; Hutchinson, Gordon L.; Livingston, Gerald P.

    1996-01-01

    The exchange of gases between soil and atmosphere is an important process that affects atmospheric chemistry and therefore climate. The static-chamber method is the most commonly used technique for estimating the rate of that exchange. We examined the method under hypothetical field conditions where diffusion was the only mechanism for gas transport and the atmosphere outside the chamber was maintained at a fixed concentration. Analytical and numerical solutions to the soil gas diffusion equation in one and three dimensions demonstrated that gas flux density to a static chamber deployed on the soil surface was less in magnitude than the ambient exchange rate in the absence of the chamber. This discrepancy, which increased with chamber deployment time and air-filled porosity of soil, is attributed to two physical factors: distortion of the soil gas concentration gradient (the magnitude was decreased in the vertical component and increased in the radial component) and the slow transport rate of diffusion relative to mixing within the chamber. Instantaneous flux density to a chamber decreased continuously with time; steepest decreases occurred so quickly following deployment and in response to such slight changes in mean chamber headspace concentration that they would likely go undetected by most field procedures. Adverse influences of these factors were reduced by mixing the chamber headspace, minimizing deployment time, maximizing the height and radius of the chamber, and pushing the rim of the chamber into the soil. Nonlinear models were superior to a linear regression model for estimating flux densities from mean headspace concentrations, suggesting that linearity of headspace concentration with time was not necessarily a good indicator of measurement accuracy.
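    The closing point, that nonlinear models beat linear regression for flux estimation, can be illustrated with one widely used nonlinear chamber estimator (the three-point Hutchinson-Mosier form; the abstract does not name a specific model, so this choice is an assumption). For headspace data that follow the exponential approach C(t) = Cs - (Cs - C0)·exp(-kt), the linear slope is biased low while the nonlinear estimator recovers the initial flux:

    ```python
    import numpy as np

    # Synthetic noiseless headspace record for a diffusion-limited chamber;
    # the true initial slope is dC/dt(0) = k * (Cs - C0). Values illustrative.
    C0, Cs, k = 350.0, 500.0, 0.05            # ppm, ppm, 1/min
    T = 15.0                                  # sampling interval, min
    C = [Cs - (Cs - C0) * np.exp(-k * t) for t in (0.0, T, 2.0 * T)]

    # Linear estimate: overall slope across the deployment (biased low,
    # because the concentration gradient flattens as the chamber fills)
    linear_slope = (C[2] - C[0]) / (2.0 * T)

    # Three-point Hutchinson-Mosier estimator: exact for exponential data
    hm_slope = (C[1] - C[0])**2 / (T * (2.0 * C[1] - C[2] - C[0])) \
               * np.log((C[1] - C[0]) / (C[2] - C[1]))

    print(linear_slope, hm_slope)  # linear < true slope; HM gives k*(Cs-C0) = 7.5
    ```

    The underestimation by the linear fit mirrors the abstract's warning that apparent linearity of headspace concentration with time is not a good indicator of measurement accuracy.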

  17. Evaluation of cloud prediction and determination of critical relative humidity for a mesoscale numerical weather prediction model

    SciTech Connect

    Seaman, N.L.; Guo, Z.; Ackerman, T.P.

    1996-04-01

    Predictions of cloud occurrence and vertical location from the Pennsylvannia State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA, but nested within the 36-km FDDA runs.

  18. The spectral element method on variable resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE PAGES

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; ...

    2014-06-25

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable resolution grids using the shallow water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution dependent coefficient. For the spectral element method with variable resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that for regions of uniform resolution it matches the traditional constant coefficient hyperviscosity. With the tensor hyperviscosity the large scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
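    The construction described above, a hyperviscosity tensor built from the eigenvalues of the local element metric tensor, can be sketched as follows. The reference coefficient `nu_ref`, reference length `dx_ref`, and scaling exponent `p` are assumptions for illustration, not values from the paper:

    ```python
    import numpy as np

    def tensor_hyperviscosity(metric, nu_ref=1.0e15, dx_ref=1.0e5, p=3.2):
        """Per-direction hyperviscosity coefficients scaled by the local
        resolution lengths (square roots of the metric eigenvalues)."""
        lam, vecs = np.linalg.eigh(metric)       # squared element lengths
        dx = np.sqrt(lam)                        # local resolution per axis
        nu = nu_ref * (dx / dx_ref) ** p         # resolution-dependent coefficients
        return vecs @ np.diag(nu) @ vecs.T       # rebuild the tensor coefficient

    # Uniform-resolution element: metric proportional to the identity, so the
    # tensor form collapses to the traditional constant coefficient.
    nu_t = tensor_hyperviscosity(np.diag([1.0e10, 1.0e10]))
    print(np.allclose(nu_t, 1.0e15 * np.eye(2)))
    ```

    On a distorted element the two eigenvalues differ, so dissipation is applied anisotropically, stronger along the coarsely resolved direction, which is what keeps the large-scale solution insensitive to the refinement region.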

  19. The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE PAGES

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; ...

    2014-11-27

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance, implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.

  20. Evaluating the Impacts of NASA/SPoRT Daily Greenness Vegetation Fraction on Land Surface Model and Numerical Weather Forecasts

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Case, Jonathan L.; Molthan, Andrew L.

    2011-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center develops new products and techniques that can be used in operational meteorology. The majority of these products are derived from NASA polar-orbiting satellite imagery from the Earth Observing System (EOS) platforms. One such product is a Greenness Vegetation Fraction (GVF) dataset, which is produced from Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the new SPoRT-MODIS GVF dataset on land surface models apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. The second phase of the project is to examine the impacts of the SPoRT GVF dataset on NWP using the Weather Research and Forecasting (WRF) model. Two separate WRF model simulations were made for individual severe weather case days using the NCEP GVF (control) and SPoRT GVF (experimental), with all other model parameters remaining the same. Based on the sensitivity results in these case studies, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and lower direct surface heating, which typically resulted in lower (higher) predicted 2-m temperatures (2-m dewpoint temperatures). The opposite was true

  1. Evaluating the Impacts of NASA/SPoRT Daily Greenness Vegetation Fraction on Land Surface Model and Numerical Weather Forecasts

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Case, Jonathan L.; LaFontaine, Frank J.; Kumar, Sujay V.

    2012-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the SPoRT-MODIS GVF dataset on a land surface model (LSM) apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. In the West, higher latent heat fluxes prevailed, which enhanced the rates of evapotranspiration and soil moisture depletion in the LSM. By late Summer and Autumn, both the average sensible and latent heat fluxes increased in the West as a result of the more rapid soil drying and higher coverage of GVF. The impacts of the SPoRT GVF dataset on NWP was also examined for a single severe weather case study using the Weather Research and Forecasting (WRF) model. Two separate coupled LIS/WRF model simulations were made for the 17 July 2010 severe weather event in the Upper Midwest using the NCEP and SPoRT GVFs, with all other model parameters remaining the same. Based on the sensitivity results, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and

  2. Evaluation of the occurrence and biodegradation of parabens and halogenated by-products in wastewater by accurate-mass liquid chromatography-quadrupole-time-of-flight-mass spectrometry (LC-QTOF-MS).

    PubMed

    González-Mariño, Iria; Quintana, José Benito; Rodríguez, Isaac; Cela, Rafael

    2011-12-15

    An assessment of the sewage occurrence and biodegradability of seven parabens and three halogenated derivatives of methyl paraben (MeP) is presented. Several wastewater samples were collected at three different wastewater treatment plants (WWTPs) during April and May 2010, concentrated by solid-phase extraction (SPE) and analysed by liquid chromatography-electrospray-quadrupole-time-of-flight mass spectrometry (LC-QTOF-MS). The performance of the QTOF system proved to be comparable to triple-quadrupole instruments in terms of quantitative capabilities, with good linearity (R² > 0.99 in the 5-500 ng mL⁻¹ range), repeatability (RSD < 5.6%) and LODs (0.3-4.0 ng L⁻¹ after SPE). MeP and n-propyl paraben (n-PrP) were the most frequently detected and the most abundant analytes in raw wastewater (0.3-10 μg L⁻¹), in accordance with data reported in the literature and reflecting their wider use in cosmetic formulations. Samples were also evaluated in search of potential halogenated by-products of parabens, formed as a result of their reaction with residual chlorine contained in tap water. Monochloro- and dichloro-methyl paraben (ClMeP and Cl₂MeP) were found and quantified in raw wastewater at levels between 0.01 and 0.1 μg L⁻¹. Halogenated derivatives of n-PrP could not be quantified due to the lack of standards; nevertheless, the monochlorinated species (ClPrP) was identified in several samples from its accurate precursor and product ion mass/charge ratios (m/z). Removal efficiencies of parabens and MeP chlorinated by-products in WWTPs exceeded 90%, with the lowest percentages corresponding to the latter species. This trend was confirmed by an activated sludge biodegradation batch test, where non-halogenated parabens had half-lives lower than 4 days, whereas halogenated derivatives of MeP turned out to be more persistent, with up to 10 days of half-life in the case of dihalogenated derivatives. A further stability test performed with raw wastewater

  3. The use of available potential energy to evaluate the impact of satellite data on numerical model analysis during FGGE

    NASA Technical Reports Server (NTRS)

    Horn, Lyle H.; Koehler, Thomas L.; Whittaker, Linda M.

    1988-01-01

    To evaluate the effect of the FGGE satellite observing system, the following two data sets were compared by examining the available potential energy (APE) and extratropical cyclone activity within the entire global domain during the first Special Observing Period: (1) the complete FGGE IIIb set, which incorporates satellite soundings, and (2) a NOSAT set which incorporates only conventional data. The time series of the daily total APEs indicate that NOSAT values are larger than the FGGE values, although in the Northern Hemisphere the differences are negligible. Analyses of cyclone scale features revealed only minor differences between the Northern Hemisphere FGGE and NOSAT analyses. On the other hand, substantial differences were revealed in the two Southern Hemisphere analyses, where the satellite soundings apparently add detail to the FGGE set.

  4. Evaluation of numerical models by FerryBox and Fixed Platform in-situ data in the southern North Sea

    NASA Astrophysics Data System (ADS)

    Haller, M.; Janssen, F.; Siddorn, J.; Petersen, W.; Dick, S.

    2015-02-01

    FerryBoxes installed on ships of opportunity (SoO) provide high-frequency surface biogeochemical measurements along selected tracks on a regular basis. Within the European FerryBox Community, several FerryBoxes are operated by different institutions. Here we present a comparison of model simulations applied to the North Sea with FerryBox temperature and salinity data from a transect along the southern North Sea and a more detailed analysis at three different positions located off the English east coast, at the Oyster Ground and in the German Bight. In addition to the FerryBox data, data from a Fixed Platform of the MARNET network are applied. Two operational hydrodynamic models have been evaluated for different time periods: results of BSHcmod v4 are analysed for 2009-2012, while simulations of FOAM AMM7 NEMO have been available from the MyOcean data base for 2011 and 2012. The simulation of water temperatures is satisfactory; however, the models have limitations, especially near the coast in the southern North Sea, where both underestimate salinity. Statistical errors differ between the models and between the measured parameters: the root mean square error (RMSE) for temperature is 0.92 K for BSHcmod v4 but only 0.44 K for AMM7, whereas for salinity BSHcmod is slightly better than AMM7 (0.98 and 1.1 psu, respectively). The study results reveal weaknesses of both models in terms of variability, absolute levels and limited spatial resolution. In coastal areas, where simulating the transition zone between the coast and the open ocean is still a demanding task for operational modelling, FerryBox data combined with other observations of differing temporal and spatial scales serve as an invaluable tool for model evaluation and optimization. The optimization of hydrodynamical models with high-frequency regional datasets, such as FerryBox data, is beneficial for their subsequent integration in ecosystem modelling.
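    The skill metric quoted above is the standard root mean square error between simulated and observed surface values along a track. A minimal sketch with illustrative series (not actual BSHcmod/AMM7 or FerryBox output):

    ```python
    import numpy as np

    def rmse(model, obs):
        """Root mean square error between co-located model and observed values."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        return np.sqrt(np.mean((model - obs) ** 2))

    # Illustrative co-located surface temperatures (degC) along a transect
    obs_sst   = [10.2, 10.8, 11.5, 12.1]
    model_sst = [10.0, 11.0, 11.0, 12.5]

    print(round(rmse(model_sst, obs_sst), 3))
    ```

    Computing this separately per variable (temperature, salinity) and per region, as the study does, is what exposes model-specific weaknesses such as the coastal salinity bias.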

  5. Calibration and Evaluation of a Flood Forecasting System: Utility of Numerical Weather Prediction Model, Data Assimilation and Satellite-based Rainfall

    NASA Astrophysics Data System (ADS)

    Yucel, Ismail; Onen, Alper; Yilmaz, Koray; Gochis, David

    2015-04-01

    A fully-distributed, multi-physics, multi-scale hydrologic and hydraulic modeling system, WRF-Hydro, is used to assess the potential for skillful flood forecasting based on precipitation inputs derived from the Weather Research and Forecasting (WRF) model and the EUMETSAT Multi-sensor Precipitation Estimates (MPEs). Similar to past studies, it was found that WRF model precipitation forecast errors related to model initial conditions are reduced when the three-dimensional atmospheric data assimilation (3DVAR) scheme in the WRF model simulations is used. The study then undertook a comparative evaluation of the impact of MPE versus WRF precipitation estimates, both with and without data assimilation, in driving WRF-Hydro simulated streamflow. Several flood events that occurred in the Black Sea region were used for testing and evaluation. Following model calibration, the WRF-Hydro system was capable of skillfully reproducing observed flood hydrographs in terms of the volume of the runoff produced and the overall shape of the hydrograph. Streamflow simulation skill was significantly improved for those WRF model simulations where storm precipitation was accurately depicted with respect to timing, location and amount. Accurate streamflow simulations were more evident in WRF model simulations where the 3DVAR scheme was used compared to when it was not used. Because of the substantial dry bias of MPE, streamflow derived from this precipitation product is in general very poor. Overall, root mean squared errors for runoff were reduced by 22.2% when hydrological model calibration is performed with WRF precipitation. Errors were reduced by 36.9% (above uncalibrated model performance) when both WRF model data assimilation and hydrological model calibration were utilized. Our results also indicated that when assimilated precipitation and model calibration are used jointly, the calibrated parameters at the gauged sites could be transferred to ungauged neighboring basins

  6. Evaluation of a coupled model for numerical simulation of a multiphase flow system in a porous medium and a surface fluid.

    PubMed

    Hibi, Yoshihiko; Tomigashi, Akira

    2015-09-01

    Numerical simulations that couple flow in a surface fluid with that in a porous medium are useful for examining problems of pollution that involve interactions among atmosphere, water, and groundwater, including saltwater intrusion along coasts. Coupled numerical simulations of such problems must consider both vertical flow between the surface fluid and the porous medium and complicated boundary conditions at their interface. In this study, a numerical simulation method coupling Navier-Stokes equations for surface fluid flow and Darcy equations for flow in a porous medium was developed. Then, the basic ability of the coupled model to reproduce (1) the drawdown of a surface fluid observed in square-pillar experiments, using pillars filled with only fluid or with fluid and a porous medium and (2) the migration of saltwater (salt concentration 0.5%) in the porous medium using the pillar filled with fluid and a porous medium was evaluated. Simulations that assumed slippery walls reproduced well the results with drawdowns of 10-30 cm when the pillars were filled with packed sand, gas, and water. Moreover, in the simulation of saltwater infiltration by the method developed in this study, velocity was precisely reproduced because the experimental salt concentration in the porous medium after saltwater infiltration was similar to that obtained in the simulation. Furthermore, conditions across the boundary between the porous medium and the surface fluid were satisfied in these numerical simulations of square-pillar experiments in which vertical flow predominated. Similarly, the velocity obtained by the simulation for a system coupling flow in surface fluid with that in a porous medium when horizontal flow predominated satisfied the conditions across the boundary. Finally, it was confirmed that the present simulation method was able to simulate a practical-scale surface fluid and porous medium system. All of these numerical simulations, however, required a great deal of

  7. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm was demonstrated in numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
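    The key property above, evaluating a finite-time quadratic cost without diagonalizing the closed-loop matrix, can be illustrated with Van Loan's block matrix-exponential identity, since `scipy.linalg.expm` itself uses a Padé approximation. This is a sketch of the general technique, not the paper's specific algorithm; the matrices are illustrative, and A is deliberately a defective Jordan block to show that no diagonalization is needed.

    ```python
    import numpy as np
    from scipy.linalg import expm   # scaled-and-squared Pade approximation

    # Finite-time cost J = x0' * ( integral_0^T e^{A't} Q e^{At} dt ) * x0,
    # via Van Loan: expm([[-A', Q], [0, A]] * T) has upper-right block
    # e^{-A'T} * W, so W = F22' @ F12 recovers the integral directly.
    A = np.array([[-1.0, 1.0],
                  [ 0.0, -1.0]])     # defective (non-diagonalizable)
    Q = np.eye(2)
    T = 2.0
    x0 = np.array([1.0, 0.0])

    n = A.shape[0]
    M = np.block([[-A.T, Q],
                  [np.zeros((n, n)), A]])
    F = expm(M * T)
    W = F[n:, n:].T @ F[:n, n:]      # integral of e^{A't} Q e^{At} over [0, T]
    J = x0 @ W @ x0
    print(J)   # analytically (1 - e^{-4})/2 for this A, x0
    ```

    For this initial state the trajectory is x(t) = (e^{-t}, 0), so J = ∫₀² e^{-2t} dt = (1 - e⁻⁴)/2 ≈ 0.4908, which the block-exponential evaluation reproduces.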

  8. Sheet Hydroforming Process Numerical Model Improvement Through Experimental Results Analysis

    NASA Astrophysics Data System (ADS)

    Gabriele, Papadia; Antonio, Del Prete; Alfredo, Anglani

    2010-06-01

    The increasing application of numerical simulation in the metal forming field has helped engineers to solve problems one after another to manufacture a qualified formed product while reducing the required time [1]. Accurate simulation results are fundamental for the tooling and the product designs. The wide application of numerical simulation is encouraging the development of highly accurate simulation procedures to meet industrial requirements. Many factors can influence the final simulation results, and many studies have been carried out about materials [2], yield criteria [3] and plastic deformation [4,5], process parameters [6] and their optimization. In order to develop a reliable hydromechanical deep drawing (HDD) numerical model, the authors carried out specific activities based on the evaluation of the effective stiffness of the blankholder structure [7]. In this paper, after an appropriate tuning phase of the blankholder force distribution, the experimental activity has been taken into account to improve the accuracy of the numerical model. In the first phase, the effective capability of the blankholder structure to transfer the applied load given by hydraulic actuators to the blank has been explored. This phase ended with the definition of an appropriate subdivision of the blankholder active surface in order to take into account the effective pressure map obtained for the given loads configuration. In the second phase the numerical results obtained with the developed subdivision have been compared with the experimental data of the studied model. The numerical model has been then improved, finding the best solution for the blankholder force distribution.

  9. Evaluation of wind-induced internal pressure in low-rise buildings: A multi scale experimental and numerical approach

    NASA Astrophysics Data System (ADS)

    Tecle, Amanuel Sebhatu

    Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular is beyond acceptable. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for wind-resistant design of low-rise buildings and wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e. full-scale at the Wall of Wind (WoW) and small-scale at a Boundary Layer Wind Tunnel (BLWT), combined with a Computational Fluid Dynamics (CFD) approach, was adopted. This provided a new capability to assess wind pressures realistically on internal volumes ranging from the small spaces formed between roof tiles and the deck, to attics, to room partitions. Effects of sudden breaching, existing dominant openings on building envelopes, as well as compartmentalization of the building interior on the IP were systematically investigated. Results of this research indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction for low-wind-speed testing facilities was necessary; for example, a building without volume correction experienced a response four times faster and exhibited 30--40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated the IP, whereas one-sided openings aggravated it; (iii) larger dominant openings exhibited a higher IP on the building envelope, and an off-center opening on the wall exhibited (30--40%) higher IP than center-located openings; (iv) compartmentalization amplified the intensity of IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. The study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for Ainlet/Aoutlet>1 compared to cases where Ainlet

  10. Hydrogeologic evaluation and numerical simulation of the Death Valley regional ground-water flow system, Nevada and California

    SciTech Connect

    D`Agnese, F.A.; Faunt, C.C.; Turner, A.K.; Hill, M.C.

    1997-12-31

    Yucca Mountain is being studied as a potential site for a high-level radioactive waste repository. In cooperation with the U.S. Department of Energy, the U.S. Geological Survey is evaluating the geologic and hydrologic characteristics of the ground-water system. The study area covers approximately 100,000 square kilometers between lat 35°N., long 115°W. and lat 38°N., long 118°W. and encompasses the Death Valley regional ground-water flow system. Hydrology in the region is a result of both the arid climatic conditions and the complex geology. Ground-water flow is described as dominated by interbasinal flow and may be conceptualized as having two main components: a series of relatively shallow and localized flow paths that are superimposed on deeper regional flow paths. A significant component of the regional ground-water flow is through a thick Paleozoic carbonate rock sequence. Throughout the regional flow system, ground-water flow is probably controlled by extensive and prevalent structural features that result from regional faulting and fracturing. Hydrogeologic investigations over a large and hydrogeologically complex area impose severe demands on data management. This study utilized geographic information systems and geoscientific information systems to develop, store, manipulate, and analyze regional hydrogeologic data sets describing various components of the ground-water flow system.

  11. Numerical evaluation of oxide growth in metallic support microstructures of Solid Oxide Fuel Cells and its influence on mass transport

    NASA Astrophysics Data System (ADS)

    Reiss, Georg; Frandsen, Henrik Lund; Persson, Åsa Helen; Weiß, Christian; Brandstätter, Wilhelm

    2015-11-01

    Metal-supported Solid Oxide Fuel Cells (SOFCs) are developed as a durable and cost-effective alternative to the state-of-the-art cermet SOFCs. This novel technology offers new opportunities but also new challenges. One of them is corrosion of the metallic support, which will decrease the long-term performance of the SOFCs. In order to understand the implications of the corrosion on the mass transport through the metallic support, a corrosion model is developed that is capable of determining the change of the porous microstructure due to oxide scale growth. The model is based on high-temperature corrosion theory, and the required model parameters can be retrieved by standard corrosion weight gain measurements. The microstructure is reconstructed from X-ray computed tomography, and converted into a computational grid. The influence of the changing microstructure on the fuel cell performance is evaluated by determining an effective diffusion coefficient and the equivalent electrical area specific resistance (ASR) due to diffusion over time. It is thus possible to assess the applicability (in terms of corrosion behaviour) of potential metallic supports without costly long-term experiments. In addition, an analytical framework is proposed that is capable of estimating the porosity, the tortuosity and the corresponding ASR based on weight gain measurements.
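
The effective diffusion coefficient mentioned above can be sketched with the generic porous-media estimate D_eff = D · ε/τ and a simple pore-volume balance for oxide growth; both relations are standard textbook assumptions for illustration, not necessarily the paper's model, and the numbers below are invented.

```python
def effective_diffusivity(d_bulk, porosity, tortuosity):
    """Generic porous-media estimate: D_eff = D * eps / tau."""
    return d_bulk * porosity / tortuosity

def porosity_after_oxide_growth(eps0, oxide_volume_fraction):
    """Pore space lost to oxide scale growth (simple volume balance).

    Assumes the oxide grows entirely into the pore space; clamped at
    zero when the pores are fully filled.
    """
    return max(eps0 - oxide_volume_fraction, 0.0)
```

As the oxide scale thickens, the shrinking porosity (and typically rising tortuosity) drives D_eff down, which is how the corrosion model feeds into the mass-transport and ASR evaluation.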

  12. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
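
The median trick mentioned above can be sketched minimally: since minmod(a, b) = median(a, b, 0), a monotonicity-preserving reconstruction slope needs only one median call per cell. The first-order-at-extrema limiter below is a simplified stand-in, not Huynh's actual constraint (which preserves uniform second-order accuracy).

```python
def median(a, b, c):
    """Middle value of three; note median(a, b, 0) equals minmod(a, b)."""
    return sorted((a, b, c))[1]

def limited_slopes(u):
    """Minmod-limited cell slopes via the median identity.

    For each interior cell, the slope is the minmod of the one-sided
    differences, computed as median(backward, forward, 0).  Boundary
    cells are given zero slope for simplicity.
    """
    s = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        s[i] = median(u[i] - u[i - 1], u[i + 1] - u[i], 0.0)
    return s
```

At the local maximum in the test data the slope collapses to zero, which is exactly the behavior that prevents new extrema during reconstruction.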

  13. High order accurate finite difference schemes based on symmetry preservation

    NASA Astrophysics Data System (ADS)

    Ozbenli, Ersin; Vedula, Prakash

    2016-11-01

    A new algorithm for the development of high order accurate finite difference schemes for the numerical solution of partial differential equations using Lie symmetries is presented. Considering applicable symmetry groups (such as those relevant to space/time translations, Galilean transformation, scaling, rotation and projection) of a partial differential equation, invariant numerical schemes are constructed based on the notions of moving frames and modified equations. Several strategies for construction of invariant numerical schemes with a desired order of accuracy are analyzed. Performance of the proposed algorithm is demonstrated using analysis of one-dimensional partial differential equations, such as linear advection-diffusion equations, the inviscid Burgers equation and the viscous Burgers equation, as our test cases. Through numerical simulations based on these examples, the expected improvement in accuracy of invariant numerical schemes (up to fourth order) is demonstrated. Advantages due to implementation and enhanced computational efficiency inherent in our proposed algorithm are presented. Extension of the basic framework to multidimensional partial differential equations is also discussed.

  14. Using numerical analysis to develop and evaluate the method of high temperature sous-vide to soften carrot texture in different-sized packages.

    PubMed

    Hong, Yoon-Ki; Uhm, Joo-Tae; Yoon, Won Byong

    2014-04-01

    The high-temperature sous-vide (HTSV) method was developed to prepare carrots with a soft texture at the appropriate degree of pasteurization. The effect of heating conditions, such as temperature and time, was investigated on various package sizes. Heating temperatures of 70, 80, and 90 °C and heating times of 10 and 20 min were used to evaluate the HTSV method. A 3-dimensional conduction model and numerical simulations were used to estimate the temperature distribution and the rate of heat transfer to samples with various geometries. Four different-sized packages were prepared by stacking carrot sticks of identical size (9.6 × 9.6 × 90 mm) in a row. The sizes of the packages used were as follows: (1) 9.6 × 86.4 × 90, (2) 19.2 × 163.2 × 90, (3) 28.8 × 86.4 × 90, and (4) 38.4 × 86.4 × 90 mm. Although only a moderate change in color (L*, a*, and b*) was observed following HTSV cooking, there was a significant decrease in carrot hardness. The geometry of the package and the heating conditions significantly influenced the degree of pasteurization and the final texture of the carrots. Numerical simulations successfully described the effect of geometry on samples at different heating conditions.
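
The study's 3-dimensional conduction model is not reproduced here, but its core can be sketched in one dimension with an explicit finite-difference scheme; the slab geometry, thermal diffusivity and bath temperature below are assumed illustration values, not the paper's measured parameters.

```python
def heat_1d(n, t_init, t_bath, alpha, dx, dt, steps):
    """Explicit 1-D conduction in a slab immersed in a water bath.

    Both faces are held at the bath temperature (Dirichlet boundary).
    The scheme is stable only for r = alpha*dt/dx**2 <= 0.5.
    """
    r = alpha * dt / dx ** 2
    if r > 0.5:
        raise ValueError("explicit scheme unstable: alpha*dt/dx**2 > 0.5")
    t = [t_bath] + [t_init] * (n - 2) + [t_bath]
    for _ in range(steps):
        new = t[:]
        for i in range(1, n - 1):
            new[i] = t[i] + r * (t[i + 1] - 2.0 * t[i] + t[i - 1])
        t = new
    return t

# Illustration: a ~9.6 mm slab (dx = 2.4 mm, 5 nodes), an assumed
# diffusivity of 1.6e-7 m^2/s, heated for 200 steps of 9 s (30 min).
profile = heat_1d(5, 20.0, 90.0, 1.6e-7, 0.0024, 9.0, 200)
```

Stacking sticks into larger packages increases the effective slab thickness, which is the geometric effect the numerical simulations quantify.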

  15. Theoretical and numerical evaluation of polarimeter using counter-circularly-polarized-probing-laser under the coupling between Faraday and Cotton-Mouton effect

    NASA Astrophysics Data System (ADS)

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2016-04-01

    This study evaluated the effect of the coupling between the Faraday and Cotton-Mouton effects on the measurement signal of the Dodel-Kunz method, which uses a counter-circularly-polarized probing laser for measuring the Faraday effect. When the coupling is small (the Faraday effect is dominant and the characteristic eigenmodes are approximately circularly polarized), the measurement signal can be expressed algebraically, and it is shown that the finite effect of the coupling is still significant. When the Faraday effect is not dominant, a numerical calculation is necessary. The numerical calculation under an ITER-like condition (Bt = 5.3 T, Ip = 15 MA, a = 2 m, ne = 10^20 m^-3 and λ = 119 μm) showed that the difference between the pure Faraday rotation and the measurement signal of the Dodel-Kunz method was of the order of one degree, which exceeds the allowable error of the ITER poloidal polarimeter. In conclusion, similar to other polarimeter techniques, the Dodel-Kunz method is not free from the coupling between the Faraday and Cotton-Mouton effects.

  16. Theoretical and numerical evaluation of polarimeter using counter-circularly-polarized-probing-laser under the coupling between Faraday and Cotton-Mouton effect.

    PubMed

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2016-04-01

    This study evaluated the effect of the coupling between the Faraday and Cotton-Mouton effects on the measurement signal of the Dodel-Kunz method, which uses a counter-circularly-polarized probing laser for measuring the Faraday effect. When the coupling is small (the Faraday effect is dominant and the characteristic eigenmodes are approximately circularly polarized), the measurement signal can be expressed algebraically, and it is shown that the finite effect of the coupling is still significant. When the Faraday effect is not dominant, a numerical calculation is necessary. The numerical calculation under an ITER-like condition (Bt = 5.3 T, Ip = 15 MA, a = 2 m, ne = 10^20 m^-3 and λ = 119 μm) showed that the difference between the pure Faraday rotation and the measurement signal of the Dodel-Kunz method was of the order of one degree, which exceeds the allowable error of the ITER poloidal polarimeter. In conclusion, similar to other polarimeter techniques, the Dodel-Kunz method is not free from the coupling between the Faraday and Cotton-Mouton effects.

  17. Fracture toughness evaluation of 20MnMoNi55 pressure vessel steel in the ductile to brittle transition regime: Experiment & numerical simulations

    NASA Astrophysics Data System (ADS)

    Gopalan, Avinash; Samal, M. K.; Chakravartty, J. K.

    2015-10-01

    In this work, the fracture behaviour of 20MnMoNi55 reactor pressure vessel (RPV) steel in the ductile-to-brittle transition (DBTT) regime is characterised. Compact tension (CT) and single-edge notched bend (SENB) specimens of two different sizes were tested in the DBTT regime. The reference temperature T0 was evaluated according to the ASTM E1921 standard. The effect of specimen size and geometry on T0 was studied, and T0 was found to be lower for the SENB geometry. In order to understand the fracture behaviour numerically, finite element (FE) simulations were performed using Beremin's model for cleavage and Rousselier's model for ductile failure mechanisms. The simulated fracture behaviour was found to be in good agreement with the experiment.

  18. 3D numerical test objects for the evaluation of a software used for an automatic analysis of a linear accelerator mechanical stability

    NASA Astrophysics Data System (ADS)

    Torfeh, Tarraf; Beaumont, Stéphane; Guédon, Jeanpierre; Benhdech, Yassine

    2010-04-01

    Mechanical stability of a medical LINear ACcelerator (LINAC), particularly the quality of the gantry, collimator and table rotations and the accuracy of the isocenter position, is crucial for the radiation therapy process, especially in stereotactic radiosurgery and in Image Guided Radiation Therapy (IGRT), where this mechanical stability is perturbed by the additional weight of the kV x-ray tube and detector. In this paper, we present a new method to evaluate a software package which is used to perform an automatic measurement of the "size" (flex map) and the location of the kV and the MV isocenters of the linear accelerator. The method consists of developing a complete numerical 3D simulation of a LINAC and physical phantoms in order to produce Electronic Portal Imaging Device (EPID) images including calibrated distortions of the mechanical movement of the gantry and isocenter misalignments.

  19. Calibration and evaluation of a flood forecasting system: Utility of numerical weather prediction model, data assimilation and satellite-based rainfall

    NASA Astrophysics Data System (ADS)

    Yucel, I.; Onen, A.; Yilmaz, K. K.; Gochis, D. J.

    2015-04-01

    A fully-distributed, multi-physics, multi-scale hydrologic and hydraulic modeling system, WRF-Hydro, is used to assess the potential for skillful flood forecasting based on precipitation inputs derived from the Weather Research and Forecasting (WRF) model and the EUMETSAT Multi-sensor Precipitation Estimates (MPEs). Similar to past studies, it was found that WRF precipitation forecast errors related to model initial conditions are reduced when the three-dimensional variational data assimilation (3DVAR) scheme is used in the WRF simulations. A comparative evaluation of the impact of MPE versus WRF precipitation estimates, both with and without data assimilation, in driving WRF-Hydro simulated streamflow is then made. Ten rainfall-runoff events that occurred in the Black Sea Region were used for testing and evaluation. Given the availability of streamflow data across the rainfall-runoff events, calibration is performed only on the Bartin sub-basin, using two events, and the calibrated parameters are then transferred to three neighboring ungauged sub-basins in the study area. The remaining events from all sub-basins are then used to evaluate the performance of the WRF-Hydro system with the calibrated parameters. Following model calibration, the WRF-Hydro system was capable of skillfully reproducing observed flood hydrographs in terms of the volume of the runoff produced and the overall shape of the hydrograph. Streamflow simulation skill was significantly improved for those WRF simulations in which storm precipitation was accurately depicted with respect to timing, location and amount. Accurate streamflow simulations were more evident in WRF simulations where the 3DVAR scheme was used than where it was not. Because of the substantial dry bias of MPE relative to surface rain gauges, streamflow derived from this precipitation product is in general very poor.
Overall, root mean squared errors for runoff were reduced by
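
The runoff-error metric this evaluation relies on is straightforward to state; a minimal RMSE sketch:

```python
def rmse(simulated, observed):
    """Root mean squared error between simulated and observed runoff."""
    assert len(simulated) == len(observed) and observed
    return (sum((s - o) ** 2 for s, o in zip(simulated, observed))
            / len(observed)) ** 0.5
```

Comparing the RMSE of hydrographs driven by different precipitation inputs (WRF with and without 3DVAR, MPE) is how the relative skill statements above are quantified.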

  20. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both for the dynamical stability of the formulation and for the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
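
A fixed step-size and order multistep integrator of the kind described can be sketched with a second-order Adams-Bashforth method; the single Euler bootstrap step below is a crude stand-in for the restarter procedure and is an assumption of this sketch, not the OrbFit implementation.

```python
def ab2(f, y0, h, steps):
    """Fixed-step second-order Adams-Bashforth multistep integrator.

    y_{n+1} = y_n + h*(3*f_n - f_{n-1})/2; the method needs one prior
    derivative value, supplied here by a single Euler start-up step.
    A restart (new start-up step) would be required whenever the
    right-hand side changes discontinuously, e.g. on a primary switch.
    """
    f_prev = f(y0)
    y = y0 + h * f_prev            # Euler bootstrap
    out = [y0, y]
    for _ in range(steps - 1):
        f_curr = f(y)
        y = y + h * (3.0 * f_curr - f_prev) / 2.0
        f_prev = f_curr
        out.append(y)
    return out
```

Because the step-size is fixed, accuracy hinges on choosing h from the problem's fastest dynamics up front, which is exactly what the paper's analytical step-size formulae address.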

  1. Numerical Prediction of Cold Season Fog Events over Complex Terrain: the Performance of the WRF Model During MATERHORN-Fog and Early Evaluation

    NASA Astrophysics Data System (ADS)

    Pu, Zhaoxia; Chachere, Catherine N.; Hoch, Sebastian W.; Pardyjak, Eric; Gultepe, Ismail

    2016-09-01

    A field campaign to study cold season fog in complex terrain was conducted as a component of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program from 07 January to 01 February 2015 in Salt Lake City and Heber City, Utah, United States. To support the field campaign, an advanced research version of the Weather Research and Forecasting (WRF) model was used to produce real-time forecasts and model evaluation. This paper summarizes the model performance and preliminary evaluation of the model against the observations. Results indicate that accurately forecasting fog is challenging for the WRF model, which produces large errors in the near-surface variables, such as relative humidity, temperature, and wind fields in the model forecasts. Specifically, compared with observations, the WRF model overpredicted fog events with extended duration in Salt Lake City because it produced higher moisture, lower wind speeds, and colder temperatures near the surface. In contrast, the WRF model missed all fog events in Heber City, as it reproduced lower moisture, higher wind speeds, and warmer temperatures against observations at the near-surface level. The inability of the model to produce proper levels of near-surface atmospheric conditions under fog conditions reflects uncertainties in model physical parameterizations, such as the surface layer, boundary layer, and microphysical schemes.

  2. WAIS-IV reliable digit span is no more accurate than age corrected scaled score as an indicator of invalid performance in a veteran sample undergoing evaluation for mTBI.

    PubMed

    Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A

    2013-01-01

    Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤ 7), revised RDS (≤ 11), and Digit Span age-corrected scaled score (≤ 6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of the three indices, the revised RDS and the Digit Span age-corrected scaled score provided the most accurate measures of performance validity.
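
The sensitivity and specificity figures above are the usual confusion-matrix ratios; a minimal sketch (the counts in the comment are hypothetical, chosen only to reproduce rounded percentages like those reported, not the study's actual cell counts):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Diagnostic accuracy of a cutoff against a criterion measure.

    Sensitivity: proportion of criterion-invalid performers flagged.
    Specificity: proportion of criterion-valid performers passed.
    """
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 39 of 100 invalid performers flagged,
# 82 of 100 valid performers passed.
sens, spec = sensitivity_specificity(39, 61, 82, 18)
```

Raising a cutoff trades specificity for sensitivity, which is why the study compares fixed published cutoffs rather than a single index.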

  3. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  4. NUMERICAL CALCULATION OF MAGNETOBREMSSTRAHLUNG EMISSION AND ABSORPTION COEFFICIENTS

    SciTech Connect

    Leung, Po Kin; Gammie, Charles F.; Noble, Scott C. E-mail: gammie@illinois.edu

    2011-08-10

    Magnetobremsstrahlung (MBS) emission and absorption play a role in many astronomical systems. We describe a general numerical scheme for evaluating MBS emission and absorption coefficients for both polarized and unpolarized light in a plasma with a general distribution function. Along the way we provide an accurate scheme for evaluating Bessel functions of high order. We use our scheme to evaluate the accuracy of earlier fitting formulae and approximations. We also provide an accurate fitting formula for mildly relativistic (kT/(m_e c^2) ≳ 0.5) thermal electron emission (and therefore absorption). Our scheme is too slow, at present, for direct use in radiative transfer calculations but will be useful for anyone seeking to fit emission or absorption coefficients in a particular regime.
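
An elementary route to Bessel values of moderate order (far simpler than the high-order scheme the paper provides, and not their method) is the standard integral representation evaluated with Simpson's rule; a sketch:

```python
import math

def bessel_jn(n, x, panels=200):
    """J_n(x) for integer n via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt,
    using composite Simpson's rule (panels must be even).

    For large n the integrand oscillates rapidly and many more panels
    are needed, which is why high-order evaluation requires the kind of
    specialized scheme the paper describes.
    """
    h = math.pi / panels
    def f(t):
        return math.cos(n * t - x * math.sin(t))
    s = f(0.0) + f(math.pi)
    for k in range(1, panels):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / (3.0 * math.pi)
```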

  5. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
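
The three rules named above can be written out directly; a minimal sketch:

```python
def midpoint(f, a, b, n):
    """Midpoint rule: sample f at the center of each of n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    """Trapezoidal rule: endpoints weighted 1/2, interior points 1."""
    h = (b - a) / n
    return h * (0.5 * f(a)
                + sum(f(a + i * h) for i in range(1, n))
                + 0.5 * f(b))

def simpson(f, a, b, n):
    """Simpson's rule with weights 1,4,2,4,...,4,1; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0
```

For smooth integrands the midpoint and trapezoidal errors shrink like h^2 while Simpson's shrinks like h^4, which is the usual classroom comparison.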

  6. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  7. Evaluation of coal-mining impacts using numerical classification of benthic invertebrate data from streams draining a heavily mined basin in eastern Tennessee

    SciTech Connect

    Bradfield, A.D.

    1986-01-01

    Coal-mining impacts on Smoky Creek, eastern Tennessee were evaluated using water quality and benthic invertebrate data. Data from mined sites were also compared with water quality and invertebrate fauna found at Crabapple Branch, an undisturbed stream in a nearby basin. Although differences in water quality constituent concentrations and physical habitat conditions at sampling sites were apparent, commonly used measures of benthic invertebrate sample data such as number of taxa, sample diversity, number of organisms, and biomass were inadequate for determining differences in stream environments. Clustering algorithms were more useful in determining differences in benthic invertebrate community structure and composition. When data from a single season were examined, sites on tributary streams generally clustered separately from sites on Smoky Creek. These analyses, compared with differences in water quality, stream size, and substrate characteristics between tributary sites and the more degraded main-stem sites, indicated that numerical classification of invertebrate data can provide discharge-independent information useful in rapid evaluations of in-stream environmental conditions. 25 refs., 14 figs., 22 tabs.

  8. Evaluation of coal-mining impacts using numerical classification of benthic invertebrate data from streams draining a heavily mined basin in eastern Tennessee

    USGS Publications Warehouse

    Bradfield, A.D.

    1986-01-01

    Coal-mining impacts on Smoky Creek, eastern Tennessee were evaluated using water quality and benthic invertebrate data. Data from mined sites were also compared with water quality and invertebrate fauna found at Crabapple Branch, an undisturbed stream in a nearby basin. Although differences in water quality constituent concentrations and physical habitat conditions at sampling sites were apparent, commonly used measures of benthic invertebrate sample data such as number of taxa, sample diversity, number of organisms, and biomass were inadequate for determining differences in stream environments. Clustering algorithms were more useful in determining differences in benthic invertebrate community structure and composition. Normal (collections) and inverse (species) analyses based on presence-absence data of species of Ephemeroptera, Plecoptera, and Trichoptera were compared using constancy, fidelity, and relative abundance of species found at stations with similar fauna. These analyses identified differences in benthic community composition due to seasonal variations in invertebrate life histories. When data from a single season were examined, sites on tributary streams generally clustered separately from sites on Smoky Creek. These analyses, compared with differences in water quality, stream size, and substrate characteristics between tributary sites and the more degraded main-stem sites, indicated that numerical classification of invertebrate data can provide discharge-independent information useful in rapid evaluations of in-stream environmental conditions. (Author's abstract)
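
Clustering on presence-absence data of this kind typically starts from a pairwise site similarity such as Jaccard's; a minimal sketch (the genus names in the comment are hypothetical examples, not the study's species lists):

```python
def jaccard(site_a, site_b):
    """Jaccard similarity between two presence-absence species lists:
    shared species divided by total distinct species at either site."""
    a, b = set(site_a), set(site_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical example: two sites sharing one of three EPT genera.
sim = jaccard(["Baetis", "Isoperla"], ["Baetis", "Hydropsyche"])
```

A clustering algorithm would then group sites from the full pairwise similarity matrix, which is how tributary and main-stem sites separate.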

  9. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in the understanding of physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to the flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental data and discuss the sensitivity of our model to its parameters.

  10. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
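
Preston's equation, which the abstract combines with contact mechanics, is a one-line model; a sketch with assumed illustrative values (the Preston coefficient below is invented, and in the paper's setting it would be re-fitted as the measured friction coefficient drifts with belt wear):

```python
def preston_removal_rate(k_p, pressure, velocity):
    """Preston's equation: dz/dt = k_p * P * V.

    k_p is the empirically fitted Preston coefficient; P is the
    contact pressure and V the relative surface velocity.  All values
    here are assumed for illustration, not measured UFF parameters.
    """
    return k_p * pressure * velocity

# Hypothetical: k_p = 1e-13 m^2/N, P = 20 kPa, V = 1.5 m/s.
rate = preston_removal_rate(1e-13, 2.0e4, 1.5)  # removal rate in m/s
```

Tracking how the fitted k_p changes with belt wear is the practical payoff of measuring the dynamic friction coefficient accurately.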

  11. An Evaluation of a Numerical Prediction Method for Electric Field Strength of Low Frequency Radio Waves based on Wave-Hop Ionospheric Propagation

    NASA Astrophysics Data System (ADS)

    Kitauchi, H.; Nozaki, K.; Ito, H.; Kondo, T.; Tsuchiya, S.; Imamura, K.; Nagatsuma, T.; Ishii, M.

    2014-12-01

    We present our recent efforts to evaluate the numerical prediction method of electric field strength for ionospheric propagation of low frequency (LF) radio waves based on the wave-hop propagation theory described in Section 2.4 of Recommendation ITU-R P.684-6 (2012), "Prediction of field strength at frequencies below about 150 kHz," issued by the International Telecommunication Union Radiocommunication Sector (ITU-R). As part of the Japanese Antarctic Research Expedition (JARE), we conduct continuous on-board measurements of the electric field strengths and phases of LF 40 kHz and 60 kHz radio signals (call sign JJY) along both legs of the voyage between Tokyo, Japan, and Syowa Station, the Japanese Antarctic station at 69° 00' S, 39° 35' E on East Ongul Island, Lützow-Holm Bay, East Antarctica. The measurements are made by a newly developed, highly sensitive receiving system, comprising an orthogonally crossed double-loop antenna and digital-signal-processing lock-in amplifiers, installed on board the Japanese Antarctic research vessel (RV) Shirase. Using this system during the 55th JARE from November 2013 to April 2014, we obtained new data sets of the electric field strength of LF JJY 40 kHz and 60 kHz radio waves over propagation distances up to approximately 13,000-14,000 km. Comparisons between these on-board measurements and the numerical predictions of field strength show that our results qualitatively support the recommended wave-hop theory for great-circle propagation paths of approximately 7,000-8,000 km and 13,000-14,000 km.

  12. Simultaneous detection of multiple debris via a cascade of numerical evaluations and a voting scheme for lines in an image sequence

    NASA Astrophysics Data System (ADS)

    Fujita, Koki; Ichimura, Naoyuki; Hanada, Toshiya

    2017-04-01

    This paper presents a novel method to simultaneously detect multiple trajectories of space debris in an observation image sequence, in order to establish a reliable model of the space debris environment in Geosynchronous Earth Orbit (GEO). Debris in GEO often appears faintly in image sequences because of the high altitude. A simple but reliable way to detect such faint debris is to decrease the binarization threshold applied to the image sequence during preprocessing. However, a low binarization threshold extracts a large number of objects other than debris, which hinder the detection of debris trajectories. To detect debris in binarized image frames containing massive numbers of obstacles, this work proposes a method that uses a cascade of numerical evaluations and a voting scheme to evaluate characteristics of the line segments obtained by connecting two image objects in different frames, which are candidates for debris trajectories. In the proposed method, line segments corresponding to objects other than debris are filtered out using three types of characteristics: displacement, direction, and continuity. First, the displacement and direction of debris motion are evaluated to remove irrelevant trajectories. Then, the continuity of the remaining line segments is checked to find debris by counting the number of image objects appearing on or close to each segment. Since checking continuity can be regarded as a voting scheme, the proposed cascade algorithm inherits the advantages of voting methods such as the Hough transform, namely robustness against heavy noise and clutter and the ability to detect multiple trajectories simultaneously. Experimental tests using real image sequences obtained in a past observation campaign demonstrate the effectiveness of the proposed method.
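The continuity check can be sketched as a vote count of detections lying near a candidate line segment (an illustrative reconstruction of the idea, not the authors' implementation; the tolerance `eps` is a made-up parameter):

```python
import numpy as np

def point_to_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.hypot(*d)  # unit normal to the line
    return abs(np.dot(p - a, n))

def vote_for_segment(a, b, detections, eps=1.5):
    """Count detections within eps pixels of the candidate trajectory a-b.
    A candidate passing a vote threshold is kept as a debris trajectory."""
    return sum(point_to_line_dist(np.asarray(p, dtype=float), a, b) <= eps
               for p in detections)
```

Because each candidate segment accumulates votes independently, several debris trajectories can clear the threshold in the same sequence, which is the multi-detection property the abstract attributes to Hough-style voting.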

  13. Numerical Optimization

    DTIC Science & Technology

    1992-12-01

    fisica matematica. ABSTRACT - We consider a new method for the numerical solution both of non-linear systems of equations and of complementarity... Matematica, Serie VII, Volume 9, Roma (1989), 521-543. An Inexact Continuous Method for the Solution of Large Systems of Equations and Complementarity... - 00185 Roma - Italy. APPENDIX 2: A Quadratically Convergent Method for Linear Programming. Stefano Herzel, Dipartimento di Matematica "G. Castelnuovo"

  14. Numerical modeling of the thermal-hydraulic behavior of wire-on-tube condensers operating with HFC-134a using homogeneous equilibrium model: evaluation of some void fraction correlations

    NASA Astrophysics Data System (ADS)

    Guzella, Matheus dos Santos; Cabezas-Gómez, Luben; da Silva, José Antônio; Maia, Cristiana Brasil; Hanriot, Sérgio de Morais

    2016-02-01

    This study presents a numerical evaluation of the influence of several void fraction correlations on the thermal-hydraulic behavior of wire-on-tube condensers operating with HFC-134a. The numerical model is based on the finite volume method together with the homogeneous equilibrium model, with empirical correlations providing the closure relations. Results show that the choice of void fraction correlation influences the refrigerant charge and pressure drop calculations but does not influence the heat transfer rate.
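For context, the homogeneous equilibrium model assumes no slip between the phases, whereas correlations such as Zivi's imply a slip ratio and therefore a lower void fraction at the same quality. A brief sketch of the two standard textbook expressions:

```python
def void_fraction_homogeneous(x, rho_g, rho_l):
    """Homogeneous-equilibrium void fraction (slip ratio S = 1):
    alpha = 1 / (1 + (1-x)/x * rho_g/rho_l)."""
    return 1.0 / (1.0 + (1.0 - x) / x * (rho_g / rho_l))

def void_fraction_zivi(x, rho_g, rho_l):
    """Zivi correlation, equivalent to a slip ratio S = (rho_l/rho_g)**(1/3):
    alpha = 1 / (1 + (1-x)/x * (rho_g/rho_l)**(2/3))."""
    return 1.0 / (1.0 + (1.0 - x) / x * (rho_g / rho_l) ** (2.0 / 3.0))
```

Because the void fraction sets how much of the tube volume holds liquid, the gap between such correlations feeds directly into the refrigerant charge calculation, consistent with the sensitivity the abstract reports.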

  15. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

    This paper derives accurate equations for the normal and shear force resultants as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to differ from those of plates. This is because the stresses over the thickness of the shell must be integrated over a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.

  16. Accurate ab Initio Spin Densities.

    PubMed

    Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus

    2012-06-12

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study, such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex, which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740].

  17. Determining the Numerical Stability of Quantum Chemistry Algorithms.

    PubMed

    Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

    2011-08-09

    We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, which is of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) if coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Due to the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided.
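The noise-injection idea can be imitated in a few lines: perturb each intermediate result at the level of the floating-point precision, repeat the run, and read the spread of the results as an estimate of how many digits survive. This is a toy sketch of the principle only, not the authors' compiler-level code injection:

```python
import math
import random
import statistics

def jitter(x, rel=2**-52):
    """Perturb x by random relative noise at double-precision level."""
    return x * (1.0 + rel * random.uniform(-1.0, 1.0))

def noisy_sum(xs):
    """Example computation under test: naive summation with noise
    injected after every intermediate operation."""
    s = 0.0
    for v in xs:
        s = jitter(s + v)
    return s

def stability_estimate(xs, runs=200):
    """Repeat the noisy computation and report (mean, reliable decimal
    digits), estimated from the relative spread of the results."""
    random.seed(0)  # deterministic for the demonstration
    samples = [noisy_sum(xs) for _ in range(runs)]
    mean = statistics.fmean(samples)
    spread = statistics.stdev(samples)
    digits = -math.log10(spread / abs(mean)) if spread else 16.0
    return mean, digits
```

A catastrophically cancelling input such as `[1e16, 1.0, -1e16]` loses essentially all digits under this probe, while a well-conditioned sum keeps close to full precision, which is exactly the kind of contrast the published method quantifies statistically.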

  18. Why Breast Cancer Risk by the Numbers Is Not Enough: Evaluation of a Decision Aid in Multi-Ethnic, Low-Numerate Women

    PubMed Central

    Yi, Haeseung; Xiao, Tong; Thomas, Parijatham; Aguirre, Alejandra; Smalletz, Cindy; David, Raven; Crew, Katherine

    2015-01-01

    Background Breast cancer risk assessment, including genetic testing, can be used to classify people into different risk groups, with screening and preventive interventions tailored to the needs of each group; yet the implementation of risk-stratified breast cancer prevention in primary care settings is complex. Objective To address barriers to breast cancer risk assessment, risk communication, and prevention strategies in primary care settings, we developed a Web-based decision aid, RealRisks, that aims to improve preference-based decision-making for breast cancer prevention, particularly in low-numerate women. Methods RealRisks incorporates experience-based dynamic interfaces to communicate risk, aimed at reducing inaccurate risk perceptions, with tailored modules on breast cancer risk, genetic testing, and chemoprevention. To begin, participants learn about risk by interacting with two games of experience-based risk interfaces demonstrating average 5-year and lifetime breast cancer risk. We conducted four focus groups with English-speaking women (age ≥18 years), administering a questionnaire before and after interaction with the decision aid, followed by a semistructured group discussion. We employed a mixed-methods approach to assess accuracy of perceived breast cancer risk and acceptability of RealRisks. The qualitative analysis of the semistructured discussions assessed understanding of risk, risk models, and risk-appropriate prevention strategies. Results Among 34 participants, mean age was 53.4 years, 62% (21/34) were Hispanic, and 41% (14/34) demonstrated low numeracy. According to the Gail breast cancer risk assessment tool (BCRAT), the mean 5-year and lifetime breast cancer risks were 1.11% (SD 0.77) and 7.46% (SD 2.87), respectively. After interaction with RealRisks, the difference between perceived and estimated breast cancer risk according to BCRAT improved for 5-year risk (P=.008). In the qualitative analysis, we identified potential barriers to adopting risk

  19. Accurate thermoplasmonic simulation of metallic nanoparticles

    NASA Astrophysics Data System (ADS)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are obtained, respectively, by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current costs O(NsNv), where Ns and Nv denote the number of surface and volumetric unknowns, respectively. Our approach reduces the cost to O(Nv) by using the VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the SIE-based approach at comparable accuracy, especially when many incident waves are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, the beam direction, and the light wavelength.

  20. Advances in the numerical investigation of the immersion quenching process

    NASA Astrophysics Data System (ADS)

    Zhang, D. S.; Kopun, R.; Kosir, N.; Edelbauer, W.

    2017-01-01

    A numerical investigation of the immersion quenching process is presented in this paper. Immersion quenching is recognized as one of the common ways to achieve the desired microstructure and to improve the mechanical properties after thermal treatment. Furthermore, it is important to prevent distortion and cracking of the cast parts. Accurate prediction of all three boiling regimes and of the heat transfer inside the structure during quenching is important for evaluating the residual stresses and deformations of thermally treated parts. The numerical details focus on the handling of the enthalpy with variable specific heat capacity in the solid. For two application cases, comparison between measured and simulated temperatures at different monitoring positions shows very good agreement. The study demonstrates the capability of the present model to overcome the numerical challenges occurring during immersion quenching and to predict the complex physics with good accuracy.

  1. Numerical and experimental evaluation of the relationship between porous electrode structure and effective conductivity of ions and electrons in lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Inoue, Gen; Kawase, Motoaki

    2017-02-01

    This study aims to develop a correlation equation between porous electrode structure and effective conductivity so as to design an optimal structure for the thick electrode layer of a high-capacity battery. We carried out a three-dimensional reconstruction of a lithium cobalt oxide and a graphite electrode based on cross-sectional images obtained via focused ion beam-scanning electron microscopy (FIB-SEM). The Li-ion and electron conductivities are evaluated from the effective conductive paths determined by simulation, and these values are compared with experimental results obtained by electrochemical impedance spectroscopy carried out with a symmetric cell and by direct conductivity measurement under compression. Moreover, the amount of binder and the diameter of the active material particles are increased and decreased numerically using the actual reconstructed electrode structure, and the effect of these structures on the effective conductivity is examined. The most dominant factors degrading ionic conductivity are the binder distribution in the cathode and the particle morphology in the anode, and a correlation equation as a function of porosity is obtained. These values are compared with those obtained from theoretical model equations, and the difference between the current effective ionic conductivity and its physical limiting value is determined.
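The theoretical model equations that porosity correlations of this kind are usually compared against are of Bruggeman type. A short sketch of that baseline (the exponent 1.5 is the classical value for packings of spherical particles, used here as an assumption):

```python
def effective_conductivity_bruggeman(sigma_bulk, porosity, exponent=1.5):
    """Bruggeman-type relation sigma_eff = sigma_bulk * eps**alpha, the
    standard theoretical baseline for effective ionic conductivity in a
    porous electrode (alpha = 1.5 for ideal spherical packings)."""
    return sigma_bulk * porosity ** exponent

def implied_tortuosity(porosity, exponent=1.5):
    """Tortuosity implied by the Bruggeman relation: tau = eps**(1 - alpha),
    i.e. sigma_eff = sigma_bulk * eps / tau."""
    return porosity ** (1.0 - exponent)
```

Real electrodes with non-ideal binder distributions and particle morphologies fall below this baseline, which is the gap to the "physical limiting value" the abstract quantifies.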

  2. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-07

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics.

  3. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed on real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle-fitting and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A large number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of these three factors on the accuracy of the three algorithms are systematically investigated. The results reveal that the three algorithms are complementary: the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small to medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm meets the accuracy requirement. However, the ADSA-P algorithm introduces significant errors for small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a given contact angle error are obtained through extensive computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail; it retains the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, avoiding erroneous judgments in static contact angle measurements.
The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop
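A circle-fitting contact angle estimate of the kind compared above can be sketched as an algebraic (Kåsa) least-squares circle fit to the drop profile followed by reading the tangent angle where the circle meets the baseline. This is an illustrative reconstruction, not the paper's implementation:

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit to Nx2 profile points.
    Solves x^2 + y^2 = 2ax + 2by + c; returns center (a, b) and radius r."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

def contact_angle_deg(pts):
    """Contact angle where the fitted circle meets the baseline y = 0:
    cos(theta) = -b / r, so a center below the baseline (b < 0) gives a
    hydrophilic angle < 90 deg and a center above gives > 90 deg."""
    _, b, r = fit_circle(pts)
    return np.degrees(np.arccos(np.clip(-b / r, -1.0, 1.0)))
```

Because the fit is linear least squares, it is cheap and noise-tolerant, consistent with the abstract's finding that circle fitting works well for small to medium drops where the profile really is near-spherical.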

  4. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis

    NASA Astrophysics Data System (ADS)

    Xu, Z. N.

    2014-12-01

    In this study, an error analysis is performed on real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle-fitting and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A large number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of these three factors on the accuracy of the three algorithms are systematically investigated. The results reveal that the three algorithms are complementary: the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small to medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm meets the accuracy requirement. However, the ADSA-P algorithm introduces significant errors for small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a given contact angle error are obtained through extensive computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail; it retains the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, avoiding erroneous judgments in static contact angle measurements.
The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop

  5. Accurate Scientific Visualization in Research and Physics Teaching

    NASA Astrophysics Data System (ADS)

    Wendler, Tim

    2011-10-01

    Accurate visualization is key in the expression and comprehension of physical principles. Many 3D animation software packages come with built-in numerical methods for a variety of fundamental classical systems. Scripting languages give access to low-level computational functionality, thereby revealing a virtual physics laboratory for teaching and research. Specific examples will be presented: Galilean relativistic hair, energy conservation in complex systems, scattering from a central force, and energy transfer in bi-molecular reactions.

  6. Accurate identification and quantification of 11-nor-delta(9)-tetrahydrocannabinol-9-carboxylic acid in urine drug testing: evaluation of a direct high efficiency liquid chromatographic-mass spectrometric method.

    PubMed

    Stephanson, Nikolai; Josefsson, Martin; Kronstrand, Robert; Beck, Olof

    2008-08-01

    A direct liquid chromatographic-tandem mass spectrometric (LC-MS/MS) method for the measurement of urinary Delta(9)-tetrahydrocannabinol carboxylic acid (THCA) was developed. The method involved dilution of the urine sample with water containing a (2)H(9)-deuterated analogue as internal standard, hydrolysis with ammonia, reversed-phase chromatography using Waters ultra-performance liquid chromatography (UPLC) equipment with gradient elution, negative electrospray ionization, and monitoring of two product ions in selected reaction monitoring mode. The measuring range was 2-1000 ng/mL for THCA, and the intra- and inter-assay imprecision, expressed as the coefficient of variation, was below 5%. Influence of the urine matrix on ionization efficiency was noted in infusion experiments but was compensated for by the internal standard. Comparison with established gas chromatography-mass spectrometry and liquid chromatography-mass spectrometry methods in authentic patient samples demonstrated accuracy in both qualitative and quantitative results. A small difference in mean ratios (~15%) may be explained by the use of different hydrolysis procedures between methods. In conclusion, the high-efficiency LC-MS/MS method was capable of accurately identifying and quantifying THCA in urine, with a capacity of 14 samples per hour.
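The reason a deuterated internal standard compensates for matrix effects is that analyte and standard co-elute and suffer the same ionization suppression, so their peak-area ratio stays valid. In the simplest one-point form the quantification reduces to a ratio calculation (a schematic sketch; real assays, including this one, use a multi-point calibration curve):

```python
def quantify_with_internal_standard(area_analyte, area_is, conc_is,
                                    response_factor=1.0):
    """One-point isotope-dilution quantification: the analyte/internal-
    standard peak-area ratio, scaled by the known internal standard
    concentration and a relative response factor (assumed 1.0 here)."""
    return (area_analyte / area_is) * conc_is / response_factor
```

Matrix suppression that halves both peak areas leaves the ratio, and hence the reported concentration, unchanged.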

  7. Numerical modeling of nonintrusive inspection systems

    SciTech Connect

    Hall, J.; Morgan, J.; Sale, K.

    1992-12-01

    A wide variety of nonintrusive inspection systems have been proposed in the past several years for the detection of hidden contraband in airline luggage and shipping containers. The majority of these proposed techniques depend on the interaction of radiation with matter to produce a signature specific to the contraband of interest, whether drugs or explosives. In the authors' role as diagnostic specialists in the Underground Test Program over the past forty years, L-Division of the Lawrence Livermore National Laboratory has developed technical expertise in the combined numerical and experimental modeling of these types of systems. Based on their experience, they are convinced that detailed numerical modeling provides a much more accurate estimate of the actual performance of complex experiments than simple analytical modeling. Furthermore, the construction of detailed numerical prototypes allows experimenters to explore the entire region of parameter space available to them before committing their ideas to hardware. This sort of systematic analysis has often led to improved experimental designs and reductions in fielding costs. L-Division has developed an extensive suite of computer codes to model proposed experiments and possible background interactions. These codes allow one to simulate complex radiation sources, model 3-dimensional system geometries with "real world" complexity, specify detailed elemental distributions, and predict the response of almost any type of detector. In this work several examples are presented illustrating the use of these codes in modeling experimental systems at LLNL, and their potential usefulness in evaluating nonintrusive inspection systems is discussed.

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  9. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  10. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  11. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  13. Evaluation of comprehensive two-dimensional gas chromatography with accurate mass time-of-flight mass spectrometry for the metabolic profiling of plant-fungus interaction in Aquilaria malaccensis.

    PubMed

    Wong, Yong Foo; Chin, Sung-Tong; Perlmutter, Patrick; Marriott, Philip J

    2015-03-27

    To explore the possible obligate interactions between the phytopathogenic fungus and Aquilaria malaccensis, which result in the generation of a complex array of secondary metabolites, we describe a comprehensive two-dimensional gas chromatography (GC × GC) method, coupled to accurate mass time-of-flight mass spectrometry (TOFMS), for the untargeted and comprehensive metabolic profiling of essential oils from naturally infected A. malaccensis trees. A polar/non-polar column configuration was employed, offering an improved separation pattern of components compared with other column sets. Four different grades of the oils displayed quite different metabolic patterns, suggesting the evolution of a signalling relationship between the host tree (emergence of various phytoalexins) and fungi (activation of biotransformation). In total, ca. 550 peaks/metabolites were detected, of which 155 were tentatively identified, representing between 20.1% and 53.0% of the total ion count. These are distributed over the chemical families of monoterpenic and sesquiterpenic hydrocarbons, oxygenated monoterpenes and sesquiterpenes (comprising ketones, aldehydes, oxides, alcohols, lactones, keto-alcohols and diols), norterpenoids, diterpenoids, short-chain glycols, carboxylic acids and others. The large number of metabolites detected, combined with the ease with which they are located in the 2D separation space, emphasises the importance of a comprehensive analytical approach for the phytochemical analysis of plant metabolomes. Furthermore, the potential of this methodology for grading agarwood oils by comparing the obtained metabolic profiles (pattern recognition for unique metabolite chemical families) is discussed. The phytocomplexity of the agarwood oils signified the production of a multitude of plant-fungus mediated secondary metabolites as chemical signals for natural ecological communication. To the best of our knowledge, this is the most complete

  14. Evaluation of a landscape evolution model to simulate stream piracies: Insights from multivariable numerical tests using the example of the Meuse basin, France

    NASA Astrophysics Data System (ADS)

    Benaïchouche, Abed; Stab, Olivier; Tessier, Bruno; Cojan, Isabelle

    2016-01-01

    In landscapes dominated by fluvial erosion, the landscape morphology is closely related to the hydrographic network system. In this paper, we investigate the hydrographic network reorganization caused by a headward piracy mechanism between two drainage basins in France, the Meuse and the Moselle. Several piracies occurred in the Meuse basin during the past one million years, and the basin's current characteristics are favorable to new piracies by the Moselle river network. This study evaluates the consequences over the next several million years of a relative lowering of the Moselle River (and thus of its basin) with respect to the Meuse River. The problem is addressed with a numerical modeling approach (landscape evolution model, hereafter LEM) that requires empirical determinations of parameters and threshold values. Classically, fitting of the parameters is based on analysis of the relationship between the slope and the drainage area and is conducted under the hypothesis of equilibrium. Application of this conventional approach to the capture issue yields incomplete results that have been consolidated by a parametric sensitivity analysis. The LEM equations give a six-dimensional parameter space that was explored with over 15,000 simulations using the landscape evolution model GOLEM. The results demonstrate that stream piracies occur in only four locations in the studied reach near the city of Toul. The locations are mainly controlled by the local topography and are model-independent. Nevertheless, the chronology of the captures depends on two parameters: the river concavity (given by the fluvial advection equation) and the hillslope erosion factor. Thus, the simulations lead to three different scenarios that are explained by a phenomenon of exclusion or a string of events.

  15. Noninvasive hemoglobin monitoring: how accurate is enough?

    PubMed

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E

    2013-10-01

    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.

  16. Numerical Investigation of Boiling

    NASA Astrophysics Data System (ADS)

    Sagan, Michael; Tanguy, Sebastien; Colin, Catherine

    2012-11-01

    In this work, boiling is numerically investigated using two-phase flow direct numerical simulation based on a level set / Ghost Fluid method. Nucleate boiling involves both thermal and multiphase-dynamics issues at different scales and at different stages of bubble growth. As a result, the different phenomena are investigated separately, considering their nature and the scale at which they occur. First, boiling of a static bubble immersed in an overheated liquid is analysed. Numerical simulations have been performed at different Jakob numbers in the case of a strong density discontinuity across the interface. The results show good agreement between the theoretical and simulated bubble radius evolution. After validation of the code on the Scriven test case, the interaction of a bubble with a wall is studied. A numerical method that takes the contact angle into account is evaluated by comparing simulations of a liquid droplet spreading after impact on a plate with experimental data. Then the heat transfer near the contact line is investigated, and simulations of nucleate boiling are performed for different contact angle values. Finally, the relevance of including a model for the evaporation of the micro layer is discussed.
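    The Scriven validation mentioned above compares the simulated bubble radius with a theoretical growth law. As a minimal sketch (not the paper's level set / Ghost Fluid solver), the high-Jakob Plesset-Zwick approximation to Scriven's solution gives R(t) = 2*sqrt(3/pi)*Ja*sqrt(alpha*t); the diffusivity and Jakob number below are illustrative assumptions.

```python
import numpy as np

def bubble_radius(t, Ja, alpha):
    """Plesset-Zwick high-Jakob approximation to Scriven's solution:
    R(t) = 2 * sqrt(3/pi) * Ja * sqrt(alpha * t)."""
    return 2.0 * np.sqrt(3.0 / np.pi) * Ja * np.sqrt(alpha * t)

alpha = 1.7e-7                       # thermal diffusivity, m^2/s (illustrative)
Ja = 10.0                            # Jakob number (illustrative)
t = np.array([0.001, 0.004, 0.016])  # times chosen so t quadruples each step
R = bubble_radius(t, Ja, alpha)
# Quadrupling t doubles R: the sqrt(t) scaling used to check a simulation
print(np.allclose(R[1:] / R[:-1], 2.0))  # True
```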

  17. On the accurate simulation of tsunami wave propagation

    NASA Astrophysics Data System (ADS)

    Castro, C. E.; Käser, M.; Toro, E. F.

    2009-04-01

    A very important part of any tsunami early warning system is the numerical simulation of the wave propagation in the open sea and close to geometrically complex coastlines, respecting bathymetric variations. Here we are interested in improving the numerical tools available to accurately simulate tsunami wave propagation on a Mediterranean basin scale. To this end, we need to accomplish several targets: high-order numerical simulation in space and time, preservation of steady-state conditions to avoid spurious oscillations, and description of complex geometries due to bathymetry and coastlines. We use the Arbitrary accuracy DERivatives Riemann problem method together with the Finite Volume method (ADER-FV) over non-structured triangular meshes. The novelty of this method is the improvement of the ADER-FV scheme, introducing the well-balanced property when geometrical sources are considered, for unstructured meshes and arbitrary high-order accuracy. In a previous work by Castro and Toro [1], the authors mention that ADER-FV schemes approach the well-balanced condition asymptotically, which was true for the test case considered in [1]. However, new evidence [2] shows that for real-scale problems such as the Mediterranean basin, with realistic bathymetry such as ETOPO-2 [3], this asymptotic behavior is not enough. Under these realistic conditions the standard ADER-FV scheme fails to accurately describe the propagation of gravity waves without being contaminated by spurious oscillations, also known as numerical waves. The main problem is that at the discrete level, i.e. from a numerical point of view, the numerical scheme does not correctly balance the influence of the fluxes and the sources. Numerical schemes that retain this balance are said to satisfy the well-balanced property or the exact C-property. This imbalance diminishes as we refine the spatial discretization or increase the order of the numerical method; however, the computational cost increases considerably this way
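    The exact C-property discussed above can be illustrated on the one-dimensional lake-at-rest state of the shallow-water equations: a scheme is well balanced when the discrete gradient of the hydrostatic flux g*h^2/2 cancels the discrete bed-slope source exactly. The sketch below (a plain centered-difference illustration, not the ADER-FV scheme) contrasts a naive pointwise source with a discretization matched to the flux gradient.

```python
import numpy as np

g, N = 9.81, 64
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
b = 0.2 * np.exp(-50.0 * (x - 0.5) ** 2)  # smooth bathymetry bump
h = 1.0 - b                                # lake at rest: h + b = const, u = 0

# Centered difference of the hydrostatic flux g*h^2/2 at interior points
dflux = (g / 2.0) * (h[2:] ** 2 - h[:-2] ** 2) / (2.0 * dx)
dbdx = (b[2:] - b[:-2]) / (2.0 * dx)       # centered bed slope

res_naive = dflux + g * h[1:-1] * dbdx                     # pointwise source
res_balanced = dflux + g * 0.5 * (h[2:] + h[:-2]) * dbdx   # matched source

print(np.max(np.abs(res_naive)) > 1e-6)      # True: spurious residual remains
print(np.max(np.abs(res_balanced)) < 1e-10)  # True: balance holds to round-off
```

A scheme whose residual vanishes identically on this state does not generate the "numerical waves" described in the abstract, regardless of mesh resolution.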

  18. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis from which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand-design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high-resolution images of nearby galaxies.
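    The quoted precision of 0.1 mag in distance modulus translates directly into a relative distance uncertainty through d = 10^(mu/5 + 1) pc. A quick check (the modulus value is illustrative, not one of the program's targets):

```python
import math

def distance_mpc(mu):
    """Distance in Mpc from a distance modulus mu: d = 10**(mu/5 + 1) pc."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6

mu = 29.0                         # illustrative modulus, roughly 6.3 Mpc
d = distance_mpc(mu)
frac = distance_mpc(mu + 0.1) / d - 1.0
print(round(d, 2))                # 6.31
print(round(100.0 * frac, 1))     # 4.7: a 0.1 mag error is ~5% in distance
```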

  19. Accurate reservoir evaluation from borehole imaging techniques and thin bed log analysis: Case studies in shaly sands and complex lithologies in Lower Eocene Sands, Block III, Lake Maracaibo, Venezuela

    SciTech Connect

    Coll, C.; Rondon, L.

    1996-08-01

    Computer-aided signal processing in combination with different types of quantitative log evaluation techniques is very useful for predicting reservoir quality in complex lithologies and will help to increase the confidence level to complete and produce a reservoir. The Lower Eocene Sands are among the largest reservoirs in Block III and have produced light oil since 1960. Analysis of borehole images shows the reservoir heterogeneity through the presence of massive sands with very few shale laminations and thinly bedded sands with many laminations. The effect of these shales is a low resistivity that has in most cases been interpreted as water-bearing sands. A reduction of the porosity due to diagenetic processes has produced a high-resistivity behaviour. The presence of bed boundaries and shales is detected by the microconductivity curves of the borehole imaging tools, allowing estimation of the shale percentage in these sands. Interactive computer-aided analysis and various image processing techniques are used to aid log interpretation for estimating formation properties. Integration of these results with core information and production data was used to evaluate reservoir producibility and to predict reservoir quality. A new estimate of the net pay thickness using this technique is presented, with the consequent improvement in the expectation of additional recovery. This methodology was successfully applied in a case-by-case study showing consistency in the area.

  20. First evaluation of automated specimen inoculation for wound swab samples by use of the Previ Isola system compared to manual inoculation in a routine laboratory: finding a cost-effective and accurate approach.

    PubMed

    Mischnik, Alexander; Mieth, Markus; Busch, Cornelius J; Hofer, Stefan; Zimmermann, Stefan

    2012-08-01

    Automation of plate streaking is ongoing in clinical microbiological laboratories, but evaluation for routine use remains largely open. In the present study, the recovery of microorganisms from polyurethane (PU) swab samples plated by the Previ Isola system is compared to manually plated control viscose swab samples from wounds according to the CLSI procedure M40-A (quality control of microbiological transport systems). One hundred twelve paired samples (224 swabs) were analyzed. In 80/112 samples (71%), concordant culture results were obtained with the two methods. In 32/112 samples (29%), CFU recovery of microorganisms from the two methods was discordant. In 24 (75%) of the 32 paired samples with a discordant result, Previ Isola plated PU swabs were superior; in 8 (25%), control viscose swabs were superior. The quality of colony growth on culture media for further investigations was superior with Previ Isola inoculated plates compared to manual plating techniques. Gram stain results were concordant between the two methods in 62/112 samples (55%) and discordant in 50/112 samples (45%). In 34 (68%) of the 50 paired samples with discordant results, Gram staining of PU swabs was superior to that of control viscose swabs; in 16 (32%), Gram staining of control viscose swabs was superior. We report the first clinical evaluation of Previ Isola automated specimen inoculation for wound swab samples. This study suggests that an automated specimen inoculation system gives good results with regard to CFU recovery, quality of Gram staining, and accuracy of diagnosis.

  1. Evaluation and validation of an accurate mass screening method for the analysis of pesticides in fruits and vegetables using liquid chromatography-quadrupole-time of flight-mass spectrometry with automated detection.

    PubMed

    López, Mónica García; Fussell, Richard J; Stead, Sara L; Roberts, Dominic; McCullagh, Mike; Rao, Ramesh

    2014-12-19

    This study reports the development and validation of a screening method for the detection of pesticides in 11 different fruit and vegetable commodities. The method was based on ultra performance liquid chromatography-quadrupole-time of flight-mass spectrometry (UPLC-QTOF-MS). The objective was to validate the method in accordance with the SANCO guidance document (12571/2013) on analytical quality control and validation procedures for pesticide residues analysis in food and feed. Samples were spiked with 199 pesticides, each at two different concentrations (0.01 and 0.05 mg kg(-1)) and extracted using the QuEChERS approach. Extracts were analysed by UPLC-QTOF-MS using generic acquisition parameters. Automated detection and data filtering were performed using the UNIFI™ software and the peaks detected evaluated against a proprietary scientific library containing information for 504 pesticides. The results obtained using different data processing parameters were evaluated for 4378 pesticide/commodities combinations at 0.01 and 0.05 mg kg(-1). Using mass accuracy (± 5 ppm) with retention time (± 0.2 min) and a low response threshold (100 counts) the validated Screening Detection Limits (SDLs) were 0.01 mg kg(-1) and 0.05 mg kg(-1) for 57% and 79% of the compounds tested, respectively, with an average of 10 false detects per sample analysis. Excluding the most complex matrices (onion and leek) the detection rates increased to 69% and 87%, respectively. The use of additional parameters such as isotopic pattern and fragmentation information further reduced the number of false detects but compromised the detection rates, particularly at lower residue concentrations. The challenges associated with the validation and subsequent implementation of a pesticide multi-residue screening method are also discussed.
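    The ±5 ppm mass-accuracy criterion used for detection is a simple relative tolerance on m/z. A minimal sketch (the m/z values below are illustrative, approximating a protonated carbendazim ion, and are not taken from the study):

```python
def within_ppm(measured_mz, theoretical_mz, tol_ppm=5.0):
    """True if the measured m/z is within tol_ppm of the theoretical m/z."""
    return abs(measured_mz - theoretical_mz) / theoretical_mz * 1e6 <= tol_ppm

theoretical = 192.0768                    # illustrative [M+H]+ m/z
print(within_ppm(192.0773, theoretical))  # ~2.6 ppm away -> True
print(within_ppm(192.0781, theoretical))  # ~6.8 ppm away -> False
```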

  2. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    NASA Astrophysics Data System (ADS)

    Du, Qiang; Yang, Jiang

    2017-03-01

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge-Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge-Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen-Cahn equations, nonlocal Cahn-Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
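    Because the operators are diagonalized in Fourier space, applying one reduces to multiplying the Fourier coefficients by the symbol and transforming back. The sketch below uses an assumed fractional-power symbol as a stand-in; the paper's contribution, the hybrid series/ODE evaluation of the true symbols, is not reproduced here.

```python
import numpy as np

def apply_nonlocal(u, symbol):
    """Apply an operator diagonal in Fourier space: ifft(symbol * fft(u))."""
    return np.fft.ifft(symbol * np.fft.fft(u)).real

N = 128
x = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers for period 2*pi
alpha = 1.5
symbol = -np.abs(k) ** alpha           # assumed fractional-Laplacian-like symbol

u = np.sin(3.0 * x)                    # eigenfunction with |k| = 3
Lu = apply_nonlocal(u, symbol)
print(np.allclose(Lu, -(3.0 ** alpha) * u))  # True: eigenvalue is -|k|^alpha
```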

  3. A numerical testbed for remote sensing of aerosols, and its demonstration for evaluating retrieval synergy from a geostationary satellite constellation of GEO-CAPE and GOES-R

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Xu, Xiaoguang; Ding, Shouguo; Zeng, Jing; Spurr, Robert; Liu, Xiong; Chance, Kelly; Mishchenko, Michael

    2014-10-01

    We present a numerical testbed for remote sensing of aerosols, together with a demonstration for evaluating retrieval synergy from a geostationary satellite constellation. The testbed combines inverse (optimal-estimation) software with a forward model containing linearized code for computing particle scattering (for both spherical and non-spherical particles), a kernel-based (land and ocean) surface bi-directional reflectance facility, and a linearized radiative transfer model for polarized radiance. Calculation of gas absorption spectra uses the HITRAN (HIgh-resolution TRANsmission molecular absorption) database of spectroscopic line parameters and other trace species cross-sections. The outputs of the testbed include not only the Stokes 4-vector elements and their sensitivities (Jacobians) with respect to the aerosol single scattering and physical parameters (such as size and shape parameters, refractive index, and plume height), but also DFS (Degree of Freedom for Signal) values for retrieval of these parameters. This testbed can be used as a tool to provide an objective assessment of aerosol information content that can be retrieved for any constellation of (planned or real) satellite sensors and for any combination of algorithm design factors (in terms of wavelengths, viewing angles, radiance and/or polarization to be measured or used). We summarize the components of the testbed, including the derivation and validation of analytical formulae for Jacobian calculations. Benchmark calculations from the forward model are documented. In the context of NASA's Decadal Survey Mission GEO-CAPE (GEOstationary Coastal and Air Pollution Events), we demonstrate the use of the testbed to conduct a feasibility study of using polarization measurements in and around the O2 A band for the retrieval of aerosol height information from space, as well as to assess potential improvement in the retrieval of aerosol fine and coarse mode aerosol optical depth (AOD) through the

  4. A Numerical Testbed for Remote Sensing of Aerosols, and its Demonstration for Evaluating Retrieval Synergy from a Geostationary Satellite Constellation of GEO-CAPE and GOES-R

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Ding, Shouguo; Zeng, Jing; Spurr, Robert; Liu, Xiong; Chance, Kelly; Mishchenko, Michael I.

    2014-01-01

    We present a numerical testbed for remote sensing of aerosols, together with a demonstration for evaluating retrieval synergy from a geostationary satellite constellation. The testbed combines inverse (optimal-estimation) software with a forward model containing linearized code for computing particle scattering (for both spherical and non-spherical particles), a kernel-based (land and ocean) surface bi-directional reflectance facility, and a linearized radiative transfer model for polarized radiance. Calculation of gas absorption spectra uses the HITRAN (HIgh-resolution TRANsmission molecular absorption) database of spectroscopic line parameters and other trace species cross-sections. The outputs of the testbed include not only the Stokes 4-vector elements and their sensitivities (Jacobians) with respect to the aerosol single scattering and physical parameters (such as size and shape parameters, refractive index, and plume height), but also DFS (Degree of Freedom for Signal) values for retrieval of these parameters. This testbed can be used as a tool to provide an objective assessment of aerosol information content that can be retrieved for any constellation of (planned or real) satellite sensors and for any combination of algorithm design factors (in terms of wavelengths, viewing angles, radiance and/or polarization to be measured or used). We summarize the components of the testbed, including the derivation and validation of analytical formulae for Jacobian calculations. Benchmark calculations from the forward model are documented. In the context of NASA's Decadal Survey Mission GEO-CAPE (GEOstationary Coastal and Air Pollution Events), we demonstrate the use of the testbed to conduct a feasibility study of using polarization measurements in and around the O2 A band for the retrieval of aerosol height information from space, as well as to assess potential improvement in the retrieval of aerosol fine and coarse mode aerosol optical depth (AOD) through the
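    The DFS values computed by the testbed follow from optimal-estimation theory: DFS is the trace of the averaging kernel assembled from the Jacobian K and the measurement and a priori error covariances. A generic sketch with synthetic matrices (the dimensions and covariances are assumptions, not the testbed's):

```python
import numpy as np

def degrees_of_freedom(K, Se, Sa):
    """DFS = trace(A), with averaging kernel
    A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K (optimal estimation)."""
    Sei = np.linalg.inv(Se)
    G = np.linalg.inv(K.T @ Sei @ K + np.linalg.inv(Sa)) @ K.T @ Sei
    return np.trace(G @ K)

rng = np.random.default_rng(1)
K = rng.normal(size=(20, 4))   # Jacobian: 20 measurements, 4 state parameters
Se = 0.01 * np.eye(20)         # measurement-noise covariance (illustrative)
Sa = np.eye(4)                 # a priori covariance (illustrative)
dfs = degrees_of_freedom(K, Se, Sa)
print(0.0 < dfs <= 4.0)        # True: DFS is bounded by the state dimension
```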

  5. Review of the significance of fibre size in fibre-related lung disease: a centrifuge cell for preparing accurate microscope-evaluation specimens from slurries used in inoculation studies.

    PubMed

    Timbrell, V

    1989-01-01

    Intratracheal, intrapleural and intraperitoneal inoculation studies in animals are widely used for identifying important factors in the pathogenicity of fine fibrous particles and estimating the potential of new materials to produce human pulmonary disease. Evidence on the significance of fibre size is reviewed, with emphasis on direct data derived from airborne fibres in asbestos mines and fibres retained in the mineworkers' lungs. This evidence indicates a need in mesothelioma-related inoculation experiments for means of measuring fibres down to 0.01 microns in diameter. A test cell, developed for preparing microscope-evaluation specimens from injection slurries, has a sector-shaped sedimentation chamber and is used in a swing-rotor centrifuge. To minimize re-formation of aggregates that are dispersed by shearing forces during sedimentation, a sample of the slurry is diluted beforehand to a degree indicated by the length of the longest fibres seen in the light microscope. Fibres and other particles are collected as a uniform deposit on a collodion film enveloping a microscope cover-glass. Current techniques are used to prepare specimens from films for light microscopy, scanning electron microscopy and the transmission electron microscopy which is so necessary for measurement of very fine fibres. Applications of the cell to fibre samples from other sources are outlined.

  6. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km [1, 2]. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions in which it is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
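    The stated equivalence between a 0.7 km horizon-location error and roughly 0.015 deg at 500 km altitude follows from orbital geometry: the slant range to the horizon is sqrt(2*R*h + h^2), and the error subtends 0.7/D radians.

```python
import math

R, h = 6371.0, 500.0                  # Earth radius and orbit altitude, km
D = math.sqrt(2.0 * R * h + h * h)    # slant range to the horizon, km
err_deg = math.degrees(0.7 / D)       # angle subtended by a 0.7 km error
print(round(D))                       # 2573
print(round(err_deg, 3))              # 0.016, consistent with the quoted value
```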

  7. On the use of advanced numerical models for the evaluation of dosimetric parameters and the verification of exposure limits at workplaces.

    PubMed

    Catarinucci, L; Tarricone, L

    2009-12-01

    With the forthcoming transposition of the 2004/40/EC Directive, employers will become responsible for the electromagnetic field level at the workplace. To make this task easier, the scientific community is compiling practical guidelines to be followed. This work aims at enriching such guidelines, especially for dosimetric issues. More specifically, some critical aspects related to the application of numerical dosimetric techniques for the verification of safety-limit compliance have been highlighted. In particular, three different aspects have been considered: the dependence of the dosimetric parameters on the shape and inner characterisation of the exposed subject, their dependence on the numerical algorithm used, and the correlation between reference limits and basic restrictions. Results and discussions demonstrate how, even when sophisticated numerical techniques are used, in some cases a complex interpretation of the results is mandatory.

  8. Efficient and accurate computation of the incomplete Airy functions

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.

  9. Assessing Probabilistic Reasoning in Verbal-Numerical and Graphical-Pictorial Formats: An Evaluation of the Psychometric Properties of an Instrument

    ERIC Educational Resources Information Center

    Agus, Mirian; Penna, Maria Pietronilla; Peró-Cebollero, Maribel; Guàrdia-Olmos, Joan

    2016-01-01

    Research on the graphical facilitation of probabilistic reasoning has been characterised by the effort expended to identify valid assessment tools. The authors developed an assessment instrument to compare reasoning performances when problems were presented in verbal-numerical and graphical-pictorial formats. A sample of undergraduate psychology…

  10. Evaluation of Model Complexity and Parameter Estimation: Indirect Inversion of a Numerical Model of Heat Conduction and Convection Using Subsurface Temperatures in Peat

    NASA Astrophysics Data System (ADS)

    Christensen, W.; Kamai, T.; Fogg, G. E.

    2012-12-01

    The presence of metal piezometers (thermal conductivity 16.0 W m-1 K-1) in peat (thermal conductivity 0.5 W m-1 K-1) can significantly influence temperatures recorded in the subsurface. Radially symmetrical 2D numerical models of heat conduction and convection that use a transient specified-temperature (Dirichlet) boundary condition and explicitly account for the difference in thermal properties differ from the commonly used 1D analytical solution by as much as 2°C at 0.15 m below ground surface. Field data from temperature loggers located inside and outside piezometers show similar differences, supporting the use of the more complex numerical model. In order to better simulate field data, an energy balance approach is used to calculate the temperature along the upper boundary using hourly radiation and air temperature data, along with daily average wind velocity and cloud cover data. Normally distributed random noise is added to recorded field data to address potential natural variation between conditions at the instrument site and the field site (piezometer). Five influential parameters are considered: albedo, crop coefficient, hydraulic conductivity, thermal diffusivity, and surface water depth. Ten sets of these five parameters are generated from a uniform random distribution and constrained by values reported in the literature or measured in the field. The ten parameter sets and noise are used to generate synthetic subsurface data in the numerical model. The synthetic temperature data are offset by a constant value determined from a uniform random distribution to represent potential offset in instrument accuracy (+/- 0.1 °C). The original parameter values are satisfactorily recovered by indirect inversion of the noise-free model using UCODE. Parameter estimates from the homogeneous numerical model (equivalent to the analytical model) and from the numerical model that explicitly represents the metal piezometer are compared. The same inversion scheme is

  11. Numerical integration of diffraction integrals for a circular aperture

    NASA Astrophysics Data System (ADS)

    Cooper, I. J.; Sheppard, C. J. R.; Sharma, M.

    It is possible to obtain an accurate irradiance distribution for the diffracted wave field from an aperture by the numerical evaluation of the two-dimensional diffraction integrals using a product-integration method in which Simpson's 1/3 rule is applied twice. The calculations can be done quickly using a standard PC by utilizing matrix operations on complex numbers with Matlab. The diffracted wave field can be calculated from the plane of the aperture to the far field without introducing many of the standard approximations that are used to give Fresnel or Fraunhofer diffraction. The numerical method is used to compare the diffracted irradiance distribution from a circular aperture as predicted by Kirchhoff, Rayleigh-Sommerfeld 1 and Rayleigh-Sommerfeld 2 diffraction integrals.
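    The product-integration idea, applying Simpson's 1/3 rule along each axis so that the two-dimensional weights form an outer product, can be sketched on a complex-valued test integrand with a known closed form (the actual diffraction kernels are omitted):

```python
import numpy as np

def simpson2d(f, ax, bx, ay, by, nx=64, ny=64):
    """Integrate f(x, y) over [ax,bx] x [ay,by] by applying the composite
    Simpson 1/3 rule along each axis (nx, ny must be even)."""
    def weights(n, a, b):
        w = np.ones(n + 1)
        w[1:-1:2], w[2:-1:2] = 4.0, 2.0
        return w * (b - a) / (3.0 * n)
    x = np.linspace(ax, bx, nx + 1)
    y = np.linspace(ay, by, ny + 1)
    W = np.outer(weights(nx, ax, bx), weights(ny, ay, by))
    return np.sum(W * f(x[:, None], y[None, :]))

# Separable test: integral of exp(i(x+y)) over [0,pi]^2 equals (2i)^2 = -4
val = simpson2d(lambda x, y: np.exp(1j * (x + y)), 0.0, np.pi, 0.0, np.pi)
print(np.allclose(val, -4.0))  # True
```

The same weight matrix applies unchanged to a Kirchhoff or Rayleigh-Sommerfeld integrand sampled over the aperture, which is what makes the matrix formulation in Matlab (or NumPy) fast.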

  12. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  13. A numerical study of mixing in supersonic combustors with hypermixing injectors

    NASA Technical Reports Server (NTRS)

    Lee, J.

    1992-01-01

    A numerical study was conducted to evaluate the performance of wall mounted fuel-injectors designed for potential Supersonic Combustion Ramjet (SCRAM-jet) engine applications. The focus of this investigation was to numerically simulate existing combustor designs for the purpose of validating the numerical technique and the physical models developed. Three different injector designs of varying complexity were studied to fully understand the computational implications involved in accurate predictions. A dual transverse injection system and two streamwise injector designs were studied. The streamwise injectors were designed with swept ramps to enhance fuel-air mixing and combustion characteristics at supersonic speeds without the large flow blockage and drag contribution of the transverse injection system. For this study, the Mass-Averaged Navier-Stokes equations and the chemical species continuity equations were solved. The computations were performed using a finite-volume implicit numerical technique and multiple block structured grid system. The interfaces of the multiple block structured grid systems were numerically resolved using the flux-conservative technique. Detailed comparisons between the computations and existing experimental data are presented. These comparisons show that numerical predictions are in agreement with the experimental data. These comparisons also show that a number of turbulence model improvements are needed for accurate combustor flowfield predictions.

  14. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and Red Storm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  15. Evaluating Large-Scale Studies to Accurately Appraise Children's Performance

    ERIC Educational Resources Information Center

    Ernest, James M.

    2012-01-01

    Educational policy is often developed using a top-down approach. Recently, there has been a concerted shift in policy for educators to develop programs and research proposals that evolve from "scientific" studies and focus less on their intuition, aided by professional wisdom. This article analyzes several national and international…

  16. Novel discretization schemes for the numerical simulation of membrane dynamics

    NASA Astrophysics Data System (ADS)

    Kolsti, Kyle F.

    Motivated by the demands of simulating flapping wings of Micro Air Vehicles, novel numerical methods were developed and evaluated for the dynamic simulation of membranes. For linear membranes, a mixed-form time-continuous Galerkin method was employed using trilinear space-time elements. Rather than time-marching, the entire space-time domain was discretized and solved simultaneously. Second-order rates of convergence in both space and time were observed in numerical studies. Slight high-frequency noise was filtered during post-processing. For geometrically nonlinear membranes, the model incorporated two new schemes that were independently developed and evaluated. Time marching was performed using quintic Hermite polynomials uniquely determined by end-point jerk constraints. The single-step, implicit scheme was significantly more accurate than the most common Newmark schemes. For a simple harmonic oscillator, the scheme was found to be symplectic, frequency-preserving, and conditionally stable. Time step size was limited by accuracy requirements rather than stability. The spatial discretization scheme employed a staggered grid, grouping of nonlinear terms, and polygon shape functions in a strong-form point collocation formulation. The observed rate of convergence was two for both displacement and strain. Validation against existing experimental data showed the method to be accurate until hyperelastic effects dominate.

  17. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  18. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S{sub 0} and A{sub 0}, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A{sub 0} to thickness variations was shown to be superior to that of S{sub 0}; however, the attenuation of A{sub 0} when a liquid loading was present was much higher than that of S{sub 0}. A{sub 0} was also less sensitive to the presence of coatings on the surface than S{sub 0}.

  19. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  20. Accurate and efficient computation of nonlocal potentials based on Gaussian-sum approximation

    NASA Astrophysics Data System (ADS)

    Exl, Lukas; Mauser, Norbert J.; Zhang, Yong

    2016-12-01

    We introduce an accurate and efficient method for the numerical evaluation of nonlocal potentials, including the 3D/2D Coulomb, 2D Poisson and 3D dipole-dipole potentials. Our method is based on a Gaussian-sum approximation of the singular convolution kernel combined with a Taylor expansion of the density. Starting from the convolution formulation of the nonlocal potential, for smooth and fast decaying densities, we make a full use of the Fourier pseudospectral (plane wave) approximation of the density and a separable Gaussian-sum approximation of the kernel in an interval where the singularity (the origin) is excluded. The potential is separated into a regular integral and a near-field singular correction integral. The first is computed with the Fourier pseudospectral method, while the latter is well resolved utilizing a low-order Taylor expansion of the density. Both parts are accelerated by fast Fourier transforms (FFT). The method is accurate (14-16 digits), efficient (O (Nlog ⁡ N) complexity), low in storage, easily adaptable to other different kernels, applicable for anisotropic densities and highly parallelizable.
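
    The core of the scheme, replacing a singular convolution kernel by a separable sum of Gaussians so that the regular part of the potential can be evaluated with zero-padded FFTs, can be sketched in 1D. The Gaussian weights and exponents below are illustrative placeholders, not a fitted Coulomb kernel, and the near-field Taylor correction described in the abstract is omitted:

    ```python
    import numpy as np

    # 1D sketch: evaluate pot[n] = h * sum_m K(x_n - x_m) rho(x_m) for a
    # Gaussian-sum kernel K, using zero-padded FFTs (O(N log N)).
    L, N = 8.0, 256
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    h = x[1] - x[0]
    rho = np.exp(-x**2)                       # smooth, fast-decaying density

    coef = [1.0, 0.5, 0.25]                   # hypothetical Gaussian-sum weights
    expo = [0.5, 2.0, 8.0]                    # hypothetical Gaussian exponents
    offsets = np.arange(-(N - 1), N) * h      # all pairwise grid separations
    K = sum(c * np.exp(-t * offsets**2) for c, t in zip(coef, expo))

    # Pad to a power of two >= 3N - 2 so the circular FFT product equals
    # the linear convolution of K (length 2N - 1) with rho (length N).
    M = 1
    while M < 3 * N - 2:
        M *= 2
    full = np.fft.irfft(np.fft.rfft(K, M) * np.fft.rfft(rho, M), M)[:3 * N - 2]
    pot = h * full[N - 1:2 * N - 1]

    # Direct O(N^2) reference confirms the FFT evaluation
    ref = h * np.convolve(K, rho)[N - 1:2 * N - 1]
    assert np.allclose(pot, ref, atol=1e-10)
    ```

    The same padding argument extends dimension by dimension because the Gaussian terms are separable, which is what makes the tensor-product FFT evaluation cheap in 2D/3D.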

  1. Quantifying Numerical Model Accuracy and Variability

    NASA Astrophysics Data System (ADS)

    Montoya, L. H.; Lynett, P. J.

    2015-12-01

    The 2011 Tohoku tsunami has changed how tsunami hazard to coastal communities is evaluated. Numerical models are a key component of the methodologies used to estimate tsunami risk, and model predictions are essential for the development of Tsunami Hazard Assessments (THA). By better understanding model bias and uncertainties, and if possible minimizing them, a more accurate and reliable THA will result. In this study we compare runup height, inundation line and flow velocity field measurements against GeoClaw and Method Of Splitting Tsunami (MOST) predictions in the Sendai plain. Runup elevation and average inundation distance were in general overpredicted by the models. However, both models agree relatively well with each other when predicting maximum sea surface elevation and maximum flow velocities. Furthermore, to explore the variability and uncertainties in numerical models, MOST is used to compare predictions from four different grid resolutions (30 m, 20 m, 15 m and 12 m). Our work shows that predictions of particular products (runup and inundation lines) do not require the use of high-resolution (finer than 30 m) Digital Elevation Maps (DEMs). When predicting runup heights and inundation lines, numerical convergence was achieved using the 30 m resolution grid. In contrast, poor convergence was found in the flow velocity predictions, particularly the 1 m depth maximum flow velocities. Also, runup height measurements and elevations from the DEM were used to estimate model bias. The results provided in this presentation will help understand the uncertainties in model predictions and locate possible sources of error within a model.

  2. Strategies for Evaluation of Rys Roots and Weights.

    PubMed

    King, Harry F

    2016-11-23

    The Rys quadrature method for evaluating molecular integrals requires accurate numerical values of the nodes of a Rys polynomial and the associated weight factors. The numerical value of a Rys polynomial for a specified value of its argument can be evaluated by three-term recursion using α and β coefficients. We review existing integration schemes for computing these recurrence parameters, discuss issues related to computational efficiency and numerical precision, and propose a slightly modified integration method using Gauss-Rys quadrature. We discuss the advantages and disadvantages of using Golub's matrix method for the computation of roots and weights.
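
    Golub's matrix method mentioned above diagonalizes the symmetric tridiagonal Jacobi matrix built from the α and β recurrence coefficients: the eigenvalues are the quadrature nodes, and the squared first components of the eigenvectors, scaled by the zeroth moment, are the weights. A minimal sketch, checked against the known Gauss-Legendre recurrence rather than Rys polynomials:

    ```python
    import numpy as np

    def golub_welsch(alpha, beta, mu0):
        """Nodes and weights of a Gaussian quadrature rule from the
        three-term recurrence coefficients of the monic orthogonal
        polynomials (Golub's matrix method). mu0 is the zeroth moment
        of the weight function."""
        J = (np.diag(alpha)
             + np.diag(np.sqrt(beta[1:]), 1)
             + np.diag(np.sqrt(beta[1:]), -1))
        nodes, vecs = np.linalg.eigh(J)
        weights = mu0 * vecs[0, :] ** 2
        return nodes, weights

    # Check with Gauss-Legendre on [-1, 1]: alpha_k = 0, beta_k = k^2/(4k^2 - 1)
    n = 5
    alpha = np.zeros(n)
    beta = np.array([2.0] + [k**2 / (4.0 * k**2 - 1.0) for k in range(1, n)])
    x, w = golub_welsch(alpha, beta, mu0=2.0)

    # An n-point rule integrates polynomials up to degree 2n - 1 = 9 exactly
    assert abs(np.sum(w) - 2.0) < 1e-12              # integral of 1
    assert abs(np.dot(w, x**8) - 2.0 / 9.0) < 1e-12  # integral of x^8
    ```

    For Rys quadrature one would substitute the α and β coefficients of the Rys weight function for the Legendre values used in this check.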

  3. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, then for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  4. Selecting MODFLOW cell sizes for accurate flow fields.

    PubMed

    Haitjema, H; Kelson, V; de Lange, W

    2001-01-01

    Contaminant transport models often use a velocity field derived from a MODFLOW flow field. Consequently, the accuracy of MODFLOW in representing a ground water flow field determines in part the accuracy of the transport predictions, particularly when advective transport is dominant. We compared MODFLOW ground water flow rates and MODPATH particle traces (advective transport) for a variety of conceptual models and different grid spacings to exact or approximate analytic solutions. All of our numerical experiments concerned flow in a single confined or semiconfined aquifer. While MODFLOW appeared robust in terms of both local and global water balance, we found that ground water flow rates, particle traces, and associated ground water travel times are accurate only when sufficiently small cells are used. For instance, a minimum of four or five cells is required to accurately model total ground water inflow in tributaries or other narrow surface water bodies that end inside the model domain. Also, about 50 cells are needed to represent zones of differing transmissivities; otherwise an incorrect flow field and (locally) inaccurate ground water travel times may result. Finally, to adequately represent leakage through aquitards or through the bottom of surface water bodies, the maximum allowable cell dimension should not exceed a characteristic leakage length lambda, defined as the square root of the aquifer transmissivity times the resistance of the aquitard or stream bottom. In some cases a cell size of one-tenth of lambda is necessary to obtain accurate results.
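
    The characteristic leakage length criterion can be turned into a one-line cell-size check. The transmissivity and resistance values below are hypothetical, chosen only to illustrate the guideline:

    ```python
    import math

    def leakage_length(T, c):
        """Characteristic leakage length lambda = sqrt(T * c), where T is
        the aquifer transmissivity [m^2/d] and c the resistance of the
        aquitard or stream bottom [d]."""
        return math.sqrt(T * c)

    # Illustrative (hypothetical) values: T = 500 m^2/d, c = 200 d
    lam = leakage_length(500.0, 200.0)
    max_cell = lam            # guideline: cell size should not exceed lambda
    strict_cell = lam / 10.0  # one-tenth of lambda when accuracy demands it
    assert abs(lam - math.sqrt(100000.0)) < 1e-9  # lambda ~ 316 m here
    ```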

  5. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  6. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  7. Dynamical correction of control laws for marine ships' accurate steering

    NASA Astrophysics Data System (ADS)

    Veremey, Evgeny I.

    2014-06-01

    The objective of this work is the analytical synthesis problem in the design of marine vehicle autopilots. Despite the numerous known solution methods, the problem is complicated by the extensive set of dynamical conditions, requirements and restrictions that must be satisfied by the appropriate choice of a steering control law. The aim of this paper is to simplify the synthesis procedure while providing accurate steering with desirable dynamics of the control system. The approach proposed here is based on a special unified multipurpose control law structure that allows the synthesis to be decoupled into simpler particular optimization problems. In particular, this structure includes a dynamical corrector to support the desirable features of the vehicle's motion under the action of sea wave disturbances. As a result, a new specialized method for the corrector design is proposed to provide accurate steering or a trade-off between accurate and economical steering of the ship. This method guarantees a certain flexibility of the control law with respect to the actual sailing environment; the corresponding tuning can be realized in real time onboard.

  8. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC), as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
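
    A logistic model of the kind described, relating contrail occurrence to meteorological predictors, can be sketched with synthetic data. The predictors (humidity with respect to ice and temperature), coefficients, and labels below are invented for illustration and are not the SURFACE or OUTBREAK models:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    rhi = rng.uniform(40, 140, n)    # relative humidity w.r.t. ice [%] (synthetic)
    temp = rng.uniform(-70, -30, n)  # ambient temperature [C] (synthetic)

    # Assumed "true" model used only to generate labels for this sketch
    logit = 0.08 * (rhi - 100) - 0.1 * (temp + 50)
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

    # Fit the logistic regression by Newton-Raphson on the log-likelihood
    X = np.column_stack([np.ones(n), rhi, temp])
    beta = np.zeros(3)
    for _ in range(25):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

    # Classification accuracy at a 0.5 probability threshold
    accuracy = np.mean((1.0 / (1.0 + np.exp(-X @ beta)) > 0.5) == (y > 0.5))
    ```

    With real analysis fields in place of the synthetic predictors, the same fit yields the occurrence probabilities that the SURFACE/OUTBREAK accuracy figures are scored on.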

  9. Triclinic Transpression in brittle shear zones evaluated via combined numerical and analogue modeling: the case of The Torcal de Antequera Massif, SE Spain.

    NASA Astrophysics Data System (ADS)

    Barcos, Leticia; Díaz-Azpiroz, Manuel; Faccenna, Claudio; Balanyá, Juan Carlos; Expósito, Inmaculada; Giménez-Bonilla, Alejandro

    2013-04-01

    Numerical kinematic models have been widely used to understand the parameters controlling the generation and evolution of ductile transpression zones. However, these models are based on continuum mechanics and are therefore less useful for analysing deformation partitioning and strain within brittle-ductile transpression zones. The combination of numerical and analogue models will potentially provide an effective approach for a better understanding of these processes and, to a broader extent, of high strain zones in general. In the present work, we follow a combined numerical and analogue approach to analyse a brittle dextral transpressive shear zone. The Torcal de Antequera Massif (TAM) is part of a roughly E-W oriented shear zone at the NE end of the Western Gibraltar Arc (Betic Cordillera). According to its structural and kinematic features, this shear zone presents two types of domains: (i) domains of type 1 are located at both TAM margins and are characterized by strike-slip structures subparallel to the main TAM boundaries (E-W); (ii) domains of type 2 correspond to the TAM inner part and present SE-vergent open folds and reverse shear zones, as well as normal faults accommodating fold-axis-parallel extension. Both domain types have been studied separately by applying a model of triclinic transpression with inclined extrusion. The kinematic parameters obtained in this study, among them the kinematic vorticity number Wk, allow us to constrain the geometrical transpression parameters. As such, the angle of oblique convergence (α, the horizontal angle between the displacement vector and the strike of the shear zone) ranges between 10-17° (simple shear dominated) for domain type 1 and between 31-35° (coaxial dominated) for domain type 2. According to the results obtained from the numerical model, and in order to validate its possible utility in brittle shear zones, we develop two analogue models with α values representative of both domains defined in the TAM: 15° for type 1 and 30° for type 2. In the

  10. Personalized numerical observer

    NASA Astrophysics Data System (ADS)

    Brankov, Jovan G.; Pretorius, P. Hendrik

    2010-02-01

    It is widely accepted that medical image quality should be assessed using task-based criteria, such as human-observer (HO) performance in a lesion-detection (scoring) task. HO studies are too time-consuming and cost-prohibitive to use for image quality assessment during the development of either reconstruction methods or imaging systems. Therefore, a numerical observer (NO), a HO surrogate, is highly desirable. In the past, we have proposed and successfully tested a NO based on a supervised-learning approach (namely a support vector machine) for cardiac gated SPECT image quality assessment. In the supervised-learning approach, the goal is to identify the relationship between measured image features and HO myocardium defect likelihood scores. Thus far we have treated multiple HO readers by simply averaging or pooling their respective scores. Due to observer variability, this may be suboptimal and less accurate. Therefore, in this work, we set out to predict individual observer scores independently, in the hope of better capturing some relevant lesion-detection mechanism of the human observers. This is all the more important because there are many ways to obtain equivalent observer performance (measured by area under the receiver operating curve), and simply predicting some joint (average or pooled) score alone is not likely to succeed.

  11. Numerical Relativity and Astrophysics

    NASA Astrophysics Data System (ADS)

    Lehner, Luis; Pretorius, Frans

    2014-08-01

    Throughout the Universe many powerful events are driven by strong gravitational effects that require general relativity to fully describe them. These include compact binary mergers, black hole accretion, and stellar collapse, where velocities can approach the speed of light and extreme gravitational fields (ΦNewt/c2≃1) mediate the interactions. Many of these processes trigger emission across a broad range of the electromagnetic spectrum. Compact binaries further source strong gravitational wave emission that could directly be detected in the near future. This feat will open up a gravitational wave window into our Universe and revolutionize our understanding of it. Describing these phenomena requires general relativity, and—where dynamical effects strongly modify gravitational fields—the full Einstein equations coupled to matter sources. Numerical relativity is a field within general relativity concerned with studying such scenarios that cannot be accurately modeled via perturbative or analytical calculations. In this review, we examine results obtained within this discipline, with a focus on its impact in astrophysics.

  12. Experimental and numerical analyses of different extended surfaces

    NASA Astrophysics Data System (ADS)

    Diani, A.; Mancin, S.; Zilio, C.; Rossetto, L.

    2012-11-01

    Air is a cheap and safe fluid, widely used in electronic, aerospace and air conditioning applications. Because of its poor heat transfer properties, it always flows through extended surfaces, such as finned surfaces, to enhance the convective heat transfer. In this paper, experimental results are reviewed and numerical studies of air forced convection through extended surfaces are presented. The thermal and hydraulic behaviours of a reference trapezoidal finned surface, experimentally evaluated by the present authors in an open-circuit wind tunnel, have been compared with numerical simulations carried out using the commercial CFD software COMSOL Multiphysics. Once the model was validated, numerical simulations were extended to other rectangular finned configurations, in order to study the effects of the fin thickness, fin pitch and fin height on the thermo-hydraulic behaviour of the extended surfaces. Moreover, several pin fin surfaces have been simulated over the same range of operating conditions previously analyzed. Numerical results for heat transfer and pressure drop, for both plain finned and pin fin surfaces, have been compared with empirical correlations from the open literature, and more accurate equations have been developed, proposed, and validated.

  13. Toward an accurate and efficient semiclassical surface hopping procedure for nonadiabatic problems.

    PubMed

    Herman, Michael F

    2005-10-20

    The derivation of a semiclassical surface hopping procedure from a formally exact solution of the Schrodinger equation is discussed. The fact that the derivation proceeds from an exact solution guarantees that all phase terms are completely and accurately included. Numerical evidence shows the method to be highly accurate. A Monte Carlo implementation of this method is considered, and recent work to significantly improve the statistical accuracy of the Monte Carlo approach is discussed.

  14. Evaluation of Rainfall Impacts on Groundwater Flow and Land Deformation in an Unsaturated Heterogeneous Slope and Slope Stability Using a Fully Coupled Hydrogeomechanical Numerical Model

    NASA Astrophysics Data System (ADS)

    Kihm, J.; Kim, J.

    2006-12-01

    A series of numerical simulations using a fully coupled hydrogeomechanical numerical model, named COWADE123D, is performed to analyze groundwater flow and land deformation in an unsaturated heterogeneous slope, and its stability, under various rainfall rates. The slope is located along a dam lake in the Republic of Korea. It consists of Cretaceous granodiorite and can be subdivided into four layers, weathered soil, weathered rock, intermediate rock, and hard rock from the ground surface downward, owing to weathering. The numerical simulation results show that both rainfall rate and heterogeneity play important roles in controlling groundwater flow and land deformation in the unsaturated slope. The slope becomes more saturated, and thus its overall hydrogeomechanical stability deteriorates, especially in the weathered rock and weathered soil layers, as the rainfall rate increases up to the maximum daily rainfall rate in the return period of one year. At rainfall rates above this critical value, however, the slope becomes fully saturated and its hydrogeomechanical responses are almost identical. From the viewpoint of hydrogeology, the pressure head, and hence the hydraulic head, increase as the rainfall rate increases. As a result, the groundwater table rises, the unsaturated zone shrinks, the seepage face expands from the slope toe toward the slope crest, and the groundwater flow velocity increases along the seepage face. In particular, the groundwater flow velocity increases significantly in the weathered soil and weathered rock layers as the rainfall rate increases, because their hydraulic conductivity is relatively higher than that of the intermediate rock and hard rock layers. From the viewpoint of geomechanics, the horizontal displacement increases, while the vertical displacement decreases, toward the slope toe as the rainfall rate increases.
This may result from the buoyancy effect associated with the groundwater table rise as the

  15. Numerical simulation and analysis of accurate blood oxygenation measurement by using optical resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Yu, Tianhao; Li, Qian; Li, Lin; Zhou, Chuanqing

    2016-10-01

    The accuracy of the photoacoustic signal is crucial for measuring oxygen saturation in functional photoacoustic imaging; it is influenced by factors such as defocus of the laser beam, the curved shape of large vessels, and the nonlinear saturation effect of optical absorption in biological tissues. We apply a Monte Carlo model to simulate energy deposition in tissue and obtain the photoacoustic signals reaching a simulated focused surface detector, in order to investigate the corresponding influence of these factors. We also apply compensation to photoacoustic images of in vivo cat cerebral cortex blood vessels, in which signals from different lateral positions of vessels are corrected based on the simulation results. This processing can improve the smoothness and accuracy of oxygen saturation results.

  16. Pre-Stall Behavior of a Transonic Axial Compressor Stage via Time-Accurate Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Chen, Jen-Ping; Hathaway, Michael D.; Herrick, Gregory P.

    2008-01-01

    CFD calculations using high-performance parallel computing were conducted to simulate the pre-stall flow of a transonic compressor stage, NASA compressor Stage 35. The simulations were run with a full-annulus grid that models the 3D, viscous, unsteady blade row interaction without the need for an artificial inlet distortion to induce stall. The simulation demonstrates the development of the rotating stall from the growth of instabilities. Pressure-rise performance and pressure traces are compared with published experimental data before the study of flow evolution prior to the rotating stall. Spatial FFT analysis of the flow indicates a rotating long-length disturbance of one rotor circumference, which is followed by a spike-type breakdown. The analysis also links the long-length wave disturbance with the initiation of the spike inception. The spike instabilities occur when the trajectory of the tip clearance flow becomes perpendicular to the axial direction. When approaching stall, the passage shock changes from a single oblique shock to a dual-shock, which distorts the perpendicular trajectory of the tip clearance vortex but shows no evidence of flow separation that may contribute to stall.
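
    The spatial FFT analysis used to identify the long-length disturbance can be illustrated on a synthetic circumferential pressure trace; the probe count and mode-1 signal below are assumptions for illustration, not Stage 35 data:

    ```python
    import numpy as np

    # Synthetic circumferential pressure trace: one wave per rotor
    # circumference (spatial mode 1) plus measurement-like noise.
    n_probes = 128
    theta = np.linspace(0.0, 2.0 * np.pi, n_probes, endpoint=False)
    rng = np.random.default_rng(0)
    p = 1.0 * np.cos(theta) + 0.1 * rng.standard_normal(n_probes)

    # Spatial FFT over the annulus; the index of the largest non-mean
    # coefficient is the dominant circumferential mode number.
    spectrum = np.abs(np.fft.rfft(p))
    dominant_mode = int(np.argmax(spectrum[1:]) + 1)  # skip the mean (mode 0)
    assert dominant_mode == 1  # one lobe per circumference, as in the text
    ```

    Applied at successive time instants, the growth of this mode-1 coefficient is what reveals the long-length-scale disturbance preceding the spike inception.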

  17. An accurate, robust, and easy-to-implement method for integration over arbitrary polyhedra: Application to embedded interface methods

    NASA Astrophysics Data System (ADS)

    Sudhakar, Y.; Moitinho de Almeida, J. P.; Wall, Wolfgang A.

    2014-09-01

    We present an accurate method for the numerical integration of polynomials over arbitrary polyhedra. Using the divergence theorem, the method transforms the domain integral into integrals evaluated over the facets of the polyhedra. The necessity of performing symbolic computation during such transformation is eliminated by using one dimensional Gauss quadrature rule. The facet integrals are computed with the help of quadratures available for triangles and quadrilaterals. Numerical examples, in which the proposed method is used to integrate the weak form of the Navier-Stokes equations in an embedded interface method (EIM), are presented. The results show that our method is as accurate and generalized as the most widely used volume decomposition based methods. Moreover, since the method involves neither volume decomposition nor symbolic computations, it is much easier for computer implementation. Also, the present method is more efficient than other available integration methods based on the divergence theorem. Efficiency of the method is also compared with the volume decomposition based methods and moment fitting methods. To our knowledge, this is the first article that compares both accuracy and computational efficiency of methods relying on volume decomposition and those based on the divergence theorem.
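
    The reduction of a domain integral to boundary integrals evaluated by one-dimensional Gauss quadrature can be sketched in the 2D analogue, where the "facets" of a polygon are its edges. Here F is an x-antiderivative of the integrand f, so by the divergence theorem the area integral of f equals the contour integral of F dy:

    ```python
    import numpy as np

    def polygon_integral(F, verts, order=4):
        """Integrate f over a counterclockwise polygon via the divergence
        theorem: with dF/dx = f,  iint_P f dA = oint_{dP} F dy,
        and each edge integral is done with 1D Gauss-Legendre quadrature."""
        t, w = np.polynomial.legendre.leggauss(order)  # nodes/weights on [-1, 1]
        t = 0.5 * (t + 1.0)                            # map to [0, 1]
        w = 0.5 * w
        total = 0.0
        n = len(verts)
        for i in range(n):
            (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
            xs = x0 + t * (x1 - x0)
            ys = y0 + t * (y1 - y0)
            total += (y1 - y0) * np.sum(w * F(xs, ys))
        return total

    # f(x, y) = x^2 y has x-antiderivative F = x^3 y / 3; over the unit
    # square the exact integral is (1/3) * (1/2) = 1/6.
    F = lambda x, y: x**3 * y / 3.0
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    val = polygon_integral(F, square)
    assert abs(val - 1.0 / 6.0) < 1e-12
    ```

    No volume (area) decomposition or symbolic work is needed, which is the property the abstract highlights; the 3D method plays the same game with polyhedral facets and triangle/quadrilateral quadratures.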

  18. Evaluation of Analytical and Numerical Techniques for Defining the Radius of Influence for an Open-Loop Ground Source Heat Pump System

    SciTech Connect

    Freedman, Vicky L.; Mackley, Rob D.; Waichler, Scott R.; Horner, Jacob A.

    2013-09-26

    In an open-loop groundwater heat pump (GHP) system, groundwater is extracted, run through a heat exchanger, and injected back into the ground, resulting in no mass balance changes to the flow system. Although the groundwater use is non-consumptive, the withdrawal and injection of groundwater may cause negative hydraulic and thermal impacts to the flow system. Because GHP is a relatively new technology and regulatory guidelines for determining environmental impacts of GHPs may not exist, consumptive-use metrics may need to be used for permit applications. For consumptive-use permits, a radius of influence is often used, defined as the radius beyond which hydraulic impacts to the system are considered negligible. In this paper, the hydraulic radius of influence concept was examined using analytical and numerical methods for a non-consumptive GHP system in southeastern Washington State. At this location, the primary hydraulic concerns were impacts to nearby contaminant plumes and a water supply well field. The results of this study showed that the analytical techniques with idealized radial flow were generally unsuited because they overpredicted the influence of the well system. The numerical techniques yielded more reasonable results because they could account for aquifer heterogeneities and flow boundaries. In particular, a capture zone analysis was identified as the best method for determining potential changes in current contaminant plume trajectories, being a more quantitative and reliable tool for determining the radius of influence with greater accuracy and better insight for a non-consumptive GHP assessment.
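
    The overprediction by idealized radial-flow formulas can be illustrated with steady-state Thiem superposition for an extraction/injection doublet; the rate, transmissivity, and geometry below are hypothetical:

    ```python
    import math

    Q = 500.0    # pumping/injection rate [m^3/d] (hypothetical)
    T = 250.0    # aquifer transmissivity [m^2/d] (hypothetical)
    d = 100.0    # spacing between extraction (x = 0) and injection (x = d) [m]
    R = 1000.0   # assumed radius of influence for the single-well formula [m]

    def single_well_drawdown(r):
        """Idealized radial (Thiem) drawdown of the extraction well alone."""
        return Q / (2 * math.pi * T) * math.log(R / r)

    def doublet_drawdown(x):
        """Superposed drawdown on the line through both wells, for x > d;
        the injection well's mounding largely cancels the drawdown."""
        return Q / (2 * math.pi * T) * math.log((x - d) / x)

    x = 500.0  # observation point beyond the injection well
    # The non-consumptive doublet's impact is much smaller than the
    # single-well radial formula suggests at the same distance.
    assert abs(doublet_drawdown(x)) < single_well_drawdown(x)
    ```

    The far-field cancellation (the superposed term decays like d/x rather than logarithmically) is the basic reason a capture zone analysis, rather than a radial formula, gives the more defensible influence estimate.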

  19. Evaluation.

    ERIC Educational Resources Information Center

    McAnany, Emile G.; And Others

    1980-01-01

    Two lead articles set the theme for this issue devoted to evaluation as Emile G. McAnany examines the usefulness of evaluation and Robert C. Hornik addresses four widely accepted myths about evaluation. Additional articles include a report of a field evaluation done by the Accion Cultural Popular (ACPO); a study of the impact of that evaluation by…

  20. The use of FLO2D numerical code in lahar hazard evaluation at Popocatépetl volcano: a 2001-lahar scenario

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Capra, L.

    2014-07-01

    Lahar modelling is an excellent tool for designing hazard maps. It allows the definition of potential inundation zones for different lahar magnitude scenarios and sediment concentrations. Here we present the results obtained for the 2001 syneruptive lahar at Popocatépetl volcano, based on simulations performed with FLO2D software. An accurate delineation of this event is needed since it is one of the possible scenarios considered during a volcanic crisis. One of the main issues in lahar simulation with FLO2D is the calibration of the input hydrograph and the rheologic flow properties. Here we verified that geophone data can be properly calibrated by means of peak discharge calculations obtained with the superelevation method. Simulation results clearly show the influence of concentration and rheologic properties on lahar depth and distribution. Modifying rheologic properties during lahar simulation strongly affects lahar distribution: more viscous lahars have a more restricted areal distribution, greater flow depths, and noticeably smaller velocities. FLO2D proved to be a very successful tool for delimiting lahar inundation zones as well as for generating different lahar scenarios, related not only to lahar volume or magnitude but also to the different sediment concentrations and rheologies widely documented to influence lahar-prone areas.

  1. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.

  2. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated tungsten electrodes are pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  3. Numerical simulations of cryogenic cavitating flows

    NASA Astrophysics Data System (ADS)

    Kim, Hyunji; Kim, Hyeongjun; Min, Daeho; Kim, Chongam

    2015-12-01

    The present study deals with a numerical method for cryogenic cavitating flows. Recently, we have developed an accurate and efficient baseline numerical scheme for all-speed water-gas two-phase flows. Extending that work, we modify the numerical dissipation to be properly scaled so that it does not show any deficiencies in low Mach number regions. To deal with cryogenic two-phase flows, the previous EOS-dependent shock discontinuity sensing term is replaced with a newly designed EOS-free one. To validate the proposed numerical method, cryogenic cavitating flows around a hydrofoil are computed, and the pressure and temperature depression effects in cryogenic cavitation are demonstrated. Compared with Hord's experimental data, the computed results turn out to be satisfactory. Afterwards, numerical simulations of the flow around the KARI turbopump inducer of a liquid rocket are carried out under various flow conditions with water and cryogenic fluids, and the differences in inducer flow physics depending on the working fluid are examined.

  4. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant-shape-parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions, s_ij(u, v) = Σ_mn h_mn H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the
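    The tridiagonal derivative solve mentioned in Topic 2 can be sketched in one dimension: for a C2 cubic Hermite spline on a non-equispaced grid, the unknown first derivatives m_i satisfy a tridiagonal system, solved in O(N) by the Thomas algorithm (the 2-D patch case applies the same solve along rows and columns). The clamped end condition and the function names below are our illustrative choices, not the thesis's specialized-RHS algorithm.

    ```python
    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system (a = sub-, b = main, c = super-diagonal)."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    def spline_slopes(x, y, m0, mn):
        """First derivatives m_i of the C2 cubic Hermite spline on a
        non-equispaced grid, with clamped end slopes m0 and mn."""
        n = len(x)
        h = np.diff(x)
        a = np.zeros(n); b = np.ones(n); c = np.zeros(n); d = np.zeros(n)
        d[0], d[-1] = m0, mn                 # clamped boundary rows
        for i in range(1, n - 1):            # C2 continuity at interior knots
            a[i] = 1.0 / h[i - 1]
            b[i] = 2.0 * (1.0 / h[i - 1] + 1.0 / h[i])
            c[i] = 1.0 / h[i]
            d[i] = 3.0 * ((y[i] - y[i - 1]) / h[i - 1]**2
                          + (y[i + 1] - y[i]) / h[i]**2)
        return thomas_solve(a, b, c, d)
    ```

    A quick sanity check: a clamped spline through samples of a cubic, with exact end slopes, reproduces the cubic's derivatives at every knot, since the cubic itself is the unique C2 interpolant.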

  5. Design and numerical evaluation of full-authority flight control systems for conventional and thruster-augmented helicopters employed in NOE operations

    NASA Technical Reports Server (NTRS)

    Perri, Todd A.; Mckillip, R. M., Jr.; Curtiss, H. C., Jr.

    1987-01-01

    The development and methodology are presented for full-authority implicit model-following and explicit model-following optimal controllers for use on helicopters operating in the Nap-of-the-Earth (NOE) environment. Pole placement, input-output frequency response, and step input response were used to evaluate handling qualities performance. The pilot was equipped with velocity-command inputs. A mathematical/computational trajectory optimization method was employed to evaluate the ability of each controller to fly NOE maneuvers. The method determines the optimal swashplate and thruster input histories from the helicopter's dynamics and the prescribed geometry and desired flying qualities of the maneuver. Three maneuvers were investigated for both the implicit and explicit controllers, with and without auxiliary propulsion installed: pop-up/dash/descent, bob-up at 40 knots, and glideslope. The explicit controller proved superior to the implicit controller in both performance and ease of design.

  6. Multimodal spatial calibration for accurately registering EEG sensor positions.

    PubMed

    Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain.

  7. Accurate van der Waals coefficients from density functional theory

    PubMed Central

    Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn

    2012-01-01

    The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
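    The route from dynamic polarizabilities to vdW coefficients can be sketched with the Casimir-Polder integral C6 = (3/π) ∫₀^∞ α_A(iu) α_B(iu) du. For single-oscillator (London) model polarizabilities α(iu) = α(0) / (1 + (u/ω)²), the integral has the closed form (3/2) α_A(0) α_B(0) ω_A ω_B / (ω_A + ω_B), which makes a convenient check on the quadrature. The one-oscillator model and the parameter values below are illustrative assumptions, not the paper's method.

    ```python
    import numpy as np

    def c6_casimir_polder(a0_a, w_a, a0_b, w_b, n=64):
        """C6 = (3/pi) * Int_0^inf alpha_A(iu) alpha_B(iu) du for London-model
        polarizabilities alpha(iu) = alpha0 / (1 + (u/omega)^2), evaluated by
        Gauss-Legendre quadrature after mapping [-1, 1) onto [0, inf)."""
        t, w = np.polynomial.legendre.leggauss(n)
        u0 = np.sqrt(w_a * w_b)              # frequency scale for the mapping
        u = u0 * (1 + t) / (1 - t)           # u(t): [-1, 1) -> [0, inf)
        du = 2.0 * u0 / (1 - t) ** 2         # Jacobian du/dt
        fa = a0_a / (1.0 + (u / w_a) ** 2)
        fb = a0_b / (1.0 + (u / w_b) ** 2)
        return 3.0 / np.pi * np.sum(w * fa * fb * du)

    def c6_london(a0_a, w_a, a0_b, w_b):
        """Closed form of the same integral (London dispersion formula)."""
        return 1.5 * a0_a * a0_b * w_a * w_b / (w_a + w_b)
    ```

    With realistic α(iu) curves (as in the paper) the quadrature step stays the same; only the model polarizability is replaced.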

  8. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083

  9. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples composed of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known groundtruth for these samples.

  10. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  11. Numerical evaluation of the PERTH (PERiodic Tracer Hierarchy) method for estimating time-variable travel time distribution in variably saturated soils

    NASA Astrophysics Data System (ADS)

    Kim, M.; Harman, C. J.

    2013-12-01

    The distribution of water travel times is one of the crucial hydrologic characteristics of a catchment. Recently, it has been argued that a rigorous treatment of travel time distributions should allow for their variability in time, because of the variable fluxes and partitioning of water in the water balance and the consequent variable storage of a catchment. We would like to be able to observe the structure of the temporal variations in travel time distributions under controlled conditions, such as in a soil column or under irrigation experiments. However, time-variable travel time distributions are difficult to observe using typical active and passive tracer approaches. Time-variability implies that tracers introduced at different times will have different travel time distributions. The distribution may also vary during injection periods. Moreover, repeated application of a single tracer in a system with significant memory leads to overprinting of breakthrough curves, which makes it difficult to extract the original breakthrough curves, and the number of ideal tracers that can be applied is usually limited. Recognizing these difficulties, the PERTH (PERiodic Tracer Hierarchy) method has been developed. The method provides a way to estimate time-variable travel time distributions in tracer experiments under controlled conditions by employing a multi-tracer hierarchy under periodic hydrologic forcing. The key assumption of the PERTH method is that, as time becomes sufficiently large relative to the injection time, the average travel time distributions of two distinct ideal tracers injected during overlapping periods become approximately equal. Thus one can be used as a proxy for the other, and the breakthrough curves of tracers applied at different times under periodic forcing can be separated from one another. In this study, we tested the PERTH method numerically for the case of infiltration at the plot scale using HYDRUS-1D and a particle

  12. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.

  13. Numerical estimation of densities

    NASA Astrophysics Data System (ADS)

    Ascasibar, Y.; Binney, J.

    2005-01-01

    We present a novel technique, dubbed FIESTAS, to estimate the underlying density field from a discrete set of sample points in an arbitrary multidimensional space. FIESTAS assigns a volume to each point by means of a binary tree. Density is then computed by integrating over an adaptive kernel. As a first test, we construct several Monte Carlo realizations of a Hernquist profile and recover the particle density in both real and phase space. At a given point, Poisson noise causes the unsmoothed estimates to fluctuate by a factor of ~2 regardless of the number of particles. This spread can be reduced to about 0.1 dex (~26 per cent) by our smoothing procedure. The density range over which the estimates are unbiased widens as the particle number increases. Our tests show that real-space densities obtained with an SPH kernel are significantly more biased than those yielded by FIESTAS. In phase space, about 10 times more particles are required in order to achieve a similar accuracy. As a second application we have estimated phase-space densities in a dark matter halo from a cosmological simulation. We confirm the results of Arad, Dekel & Klypin that the highest values of f are all associated with substructure rather than the main halo, and that the volume function v(f) ∝ f^-2.5 over about four orders of magnitude in f. We show that a modified version of the toy model proposed by Arad et al. explains this result and suggests that the departures of v(f) from power-law form are not mere numerical artefacts. We conclude that our algorithm accurately measures the phase-space density up to the limit where discreteness effects render the simulation itself unreliable. Computationally, FIESTAS is orders of magnitude faster than the method based on Delaunay tessellation that Arad et al. employed, making it practicable to recover smoothed density estimates for sets of 10^9 points in six dimensions.

  14. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  15. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  16. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.

  17. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  18. A theoretical and numerical resolution of an acoustic multiple scattering problem in three-dimensional case

    NASA Astrophysics Data System (ADS)

    Amamou, Manel L.

    2016-05-01

    This paper develops an analytical solution for sound, electromagnetic, or any other wave propagation described by the Helmholtz equation in the three-dimensional case. First, a theoretical investigation based on the multipole expansion method and spherical wave functions is established, through which we show that the resolution of the problem reduces to solving an infinite, complex and large linear system. Second, we explain how to suitably truncate this infinite-dimensional system to obtain an accurate, stable and fast numerical solution of the problem. Then, we evaluate numerically the theoretical solution of the scattering problem for multiple ideal rigid spheres. Finally, we present a numerical study of the head-related transfer function with respect to different physical and geometrical parameters of the problem.

  19. An Accurate, Simplified Model of Intrabeam Scattering

    SciTech Connect

    Bane, Karl LF

    2002-05-23

    Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS), we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η²_{x,y}/β_{x,y} has been replaced by H_{x,y}) asymptotically approaches the result of Bjorken-Mtingwa.

  20. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  1. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  2. Accurate Assessment--Compelling Evidence for Practice

    ERIC Educational Resources Information Center

    Flynn, Regina T.; Anderson, Ludmila; Martin, Nancy R.

    2010-01-01

    Childhood overweight and obesity is a public health concern not just because of its growing prevalence but also for its serious and lasting health consequences. Though height and weight measures are easy to obtain and New Hampshire Head Start sites measure height and weight of their enrollees, there are numerous challenges related to accurate…

  3. Accurate analysis of planar optical waveguide devices using higher-order FDTD scheme.

    PubMed

    Kong, Fanmin; Li, Kang; Liu, Xin

    2006-11-27

    A higher-order finite-difference time-domain (HO-FDTD) numerical method is proposed for the time-domain analysis of planar optical waveguide devices. The anisotropic perfectly matched layer (APML) absorbing boundary condition for the HO-FDTD scheme is implemented, and the numerical dispersion of the scheme is studied. Numerical simulations of a parallel-slab directional coupler are presented, and the computed results are in close accordance with analytical solutions. Compared with the conventional FDTD method, this scheme saves considerable computational resources without sacrificing solution accuracy, and is especially suited to the accurate analysis of optical devices.
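    The accuracy gain of a higher-order scheme like HO-FDTD comes from wider finite-difference stencils. As a minimal illustration (not the paper's full APML-bounded waveguide solver), the fourth-order central stencil for a second derivative on a periodic grid reduces the error by roughly 2^4 = 16 when the grid spacing is halved:

    ```python
    import numpy as np

    def d2_fourth_order(f, h):
        """Fourth-order central difference for f'' on a periodic grid:
        f''_i ~ (-f_{i-2} + 16 f_{i-1} - 30 f_i + 16 f_{i+1} - f_{i+2}) / (12 h^2)."""
        return (-np.roll(f, 2) + 16 * np.roll(f, 1) - 30 * f
                + 16 * np.roll(f, -1) - np.roll(f, -2)) / (12.0 * h ** 2)

    def max_error(n):
        """Max stencil error for sin(x) on [0, 2*pi), whose exact f'' is -sin(x)."""
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        h = 2.0 * np.pi / n
        return np.max(np.abs(d2_fourth_order(np.sin(x), h) + np.sin(x)))
    ```

    Doubling the resolution should shrink the maximum error by a factor close to 16, confirming the fourth-order convergence rate that motivates such schemes.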

  4. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  5. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users tend to be larger than those in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the direction of user similarity and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm, specifically to address the challenge of accuracy and diversity in CF. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score could reach 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
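    One simple way to make user similarity directional, in the spirit described above (though not necessarily the exact HDCF weighting), is to normalize the co-selection overlap by the degree of the *target* user, so that similarity pointing toward a large-degree hub is depressed. Function names and the toy data are ours.

    ```python
    import numpy as np

    def directed_similarity(A):
        """Directed user similarity on a binary user-item matrix A.
        s[i, j] = |Gamma(i) & Gamma(j)| / k(j): co-selection overlap divided by
        the target user's degree, so s[i, j] != s[j, i] in general.
        (Illustrative choice, not necessarily the exact HDCF weighting.)"""
        overlap = A @ A.T                     # co-selected item counts
        k = A.sum(axis=1)                     # user degrees
        return overlap / np.maximum(k, 1)[None, :]

    def recommend_scores(A):
        """Score unseen items for each user from peers' selections."""
        S = directed_similarity(A)
        np.fill_diagonal(S, 0.0)              # no self-recommendation
        scores = S @ A
        return np.where(A > 0, -np.inf, scores)  # mask already-selected items
    ```

    On a toy matrix, the similarity from a small-degree user toward a hub is smaller than in the reverse direction, which is the asymmetry the directed approach exploits.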

  6. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  7. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.

  8. A more accurate nonequilibrium air radiation code - NEQAIR second generation

    NASA Technical Reports Server (NTRS)

    Moreau, Stephane; Laux, Christophe O.; Chapman, Dean R.; Maccormack, Robert W.

    1992-01-01

    Two experiments, one an equilibrium flow in a plasma torch at Stanford, the other a nonequilibrium flow in a SDIO/IST Bow-Shock-Ultra-Violet missile flight, have provided the basis for modifying, enhancing, and testing the well-known radiation code, NEQAIR. The original code, herein termed NEQAIR1, lacked computational efficiency, accurate data for some species, and the flexibility to handle a variety of species. The modified code, herein termed NEQAIR2, incorporates recent findings in the spectroscopic and radiation models. It can handle any number of species and radiative bands in a gas whose thermodynamic state can be described by up to four temperatures. It provides a new capability of computing very fine spectra in a reasonable CPU time, while including transport phenomena along the line of sight and the characteristics of the instruments that were used in the measurements. Such a new tool should allow more accurate testing and diagnosis of the different physical models used in numerical simulations of radiating, low density, high energy flows.

  9. An accurate metric for the spacetime around rotating neutron stars.

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-01-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parameterisation of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a 3-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  10. An accurate moving boundary formulation in cut-cell methods

    NASA Astrophysics Data System (ADS)

    Schneiders, Lennart; Hartmann, Daniel; Meinke, Matthias; Schröder, Wolfgang

    2013-02-01

    A cut-cell method for Cartesian meshes to simulate viscous compressible flows with moving boundaries is presented. We focus on eliminating unphysical oscillations occurring in Cartesian grid methods extended to moving-boundary problems. In these methods, cells either lie completely in the fluid or solid region or are intersected by the boundary. For the latter cells, the time dependent volume fraction lying in the fluid region can be so small that explicit time-integration schemes become unstable and a special treatment of these cells is necessary. When the boundary moves, a fluid cell may become a cut cell or a solid cell may become a small cell at the next time level. This causes an abrupt change in the discretization operator and a suddenly modified truncation error of the numerical scheme. This temporally discontinuous alteration is shown to act like an unphysical source term, which deteriorates the numerical solution, i.e., it generates unphysical oscillations in the hydrodynamic forces exerted on the moving boundary. We develop an accurate moving boundary formulation based on the varying discretization operators yielding a cut-cell method which avoids these discontinuities. Results for canonical two- and three-dimensional test cases evidence the accuracy and robustness of the newly developed scheme.

  11. Accurate and simple calibration of DLP projector systems

    NASA Astrophysics Data System (ADS)

    Wilm, Jakob; Olesen, Oline V.; Larsen, Rasmus

    2014-03-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of parameters including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1 s. This makes our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured light image reconstruction quality.

  12. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  13. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken at the axis of the solid cylindrical housing using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple correction based on a first- or second-order inertia model. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of fast-changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
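
    The first-order inertia correction mentioned in the abstract can be sketched as follows, assuming a known time constant tau and a synthetic step response: a first-order thermometer obeys tau*dT_meas/dt + T_meas = T_fluid, so the fluid temperature can be recovered as T_meas + tau*dT_meas/dt. The time constant and temperatures below are illustrative, not the paper's values.

```python
import numpy as np

def correct_first_order(t, T_meas, tau):
    """First-order inertia correction: T_fluid ~= T_meas + tau * dT_meas/dt."""
    return T_meas + tau * np.gradient(T_meas, t)

# synthetic step response: a thermometer with time constant tau (assumed
# 5 s here) starts at 20 degC and is immersed in fluid at 100 degC
tau = 5.0
t = np.linspace(0.0, 30.0, 3001)
T_meas = 100.0 + (20.0 - 100.0) * np.exp(-t / tau)

# the corrected signal recovers the fluid temperature almost immediately,
# long before T_meas itself has settled
T_rec = correct_first_order(t, T_meas, tau)
```

    For a second-order device, a term with the second derivative is added; in practice the derivatives of noisy measurements must be smoothed, which is one reason the paper's inverse space marching method performs better.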

  14. Power and sample size determination in the Rasch model: evaluation of the robustness of a numerical method to non-normality of the latent trait.

    PubMed

    Guilleux, Alice; Blanchin, Myriam; Hardouin, Jean-Benoit; Sébille, Véronique

    2014-01-01

    Patient-reported outcomes (PRO) have gained importance in clinical and epidemiological research and aim at assessing quality of life, anxiety or fatigue for instance. Item Response Theory (IRT) models are increasingly used to validate and analyse PRO. Such models relate observed variables to a latent variable (unobservable variable) which is commonly assumed to be normally distributed. A priori sample size determination is important to obtain adequately powered studies to determine clinically important changes in PRO. In previous developments, the Raschpower method has been proposed for the determination of the power of the test of group effect for the comparison of PRO in cross-sectional studies with an IRT model, the Rasch model. The objective of this work was to evaluate the robustness of this method (which assumes a normal distribution for the latent variable) to violations of distributional assumption. The statistical power of the test of group effect was estimated by the empirical rejection rate in data sets simulated using a non-normally distributed latent variable. It was compared to the power obtained with the Raschpower method. In both cases, the data were analyzed using a latent regression Rasch model including a binary covariate for group effect. For all situations, both methods gave comparable results whatever the deviations from the model assumptions. Given the results, the Raschpower method seems to be robust to the non-normality of the latent trait for determining the power of the test of group effect.

  15. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  16. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is conceptually very simple, yet complicated in practice. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a

  17. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  18. Numerical Capture of Wing-tip Vortex Using Vorticity Confinement

    NASA Astrophysics Data System (ADS)

    Zhang, Baili; Lou, Jing; Kang, Chang Wei; Wilson, Alexander; Lundberg, Johan; Bensow, Rickard

    2012-11-01

    Tracking vortices accurately over large distances is very important in many areas of engineering, for instance flow over rotating helicopter blades, ship propeller blades and aircraft wings. However, due to the inherent numerical dissipation in the advection step of flow simulation, current Euler and RANS field solvers tend to damp these vortices too fast. One possible solution to reduce the unphysical decay of these vortices is the application of vorticity confinement methods. In this study, a vorticity confinement term is added to the momentum conservation equations which is a function of the local element size, the vorticity and the gradient of the absolute value of vorticity. The approach has been evaluated by a systematic numerical study on the tip vortex trailing from a rectangular NACA0012 half-wing. The simulated structure and development of the wing-tip vortex agree well with experiments both qualitatively and quantitatively without any adverse effects on the global flow field. It is shown that vorticity confinement can negate the effect of numerical dissipation, leading to a more or less constant vortex strength. This is an approximate method in that genuine viscous diffusion of the vortex is not modeled, but it can be appropriate for vortex dominant flows over short to medium length scales where viscous diffusion can be neglected.
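
    A minimal 2D sketch of a vorticity confinement source term of the kind described in the abstract, assuming the common formulation f = -eps * h * (n_hat x omega) with n_hat the unit gradient of |omega|; the coefficient eps, the grid, and the sign convention are illustrative, and the study's actual term (a function of local element size, vorticity, and the gradient of |omega|) may differ in detail.

```python
import numpy as np

def confinement_force(u, v, h, eps):
    """Vorticity-confinement body force on a uniform 2D grid.
    f = -eps * h * (n_hat x omega), n_hat = grad|omega| / ||grad|omega|||,
    which pushes momentum back toward the vortex core, countering the
    numerical dissipation of the advection scheme."""
    # z-component of vorticity on an array indexed [y, x]
    omega = np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)
    mag = np.abs(omega)
    gx = np.gradient(mag, h, axis=1)
    gy = np.gradient(mag, h, axis=0)
    norm = np.sqrt(gx**2 + gy**2) + 1e-30   # avoid division by zero
    nx, ny = gx / norm, gy / norm
    # cross product n_hat x (0, 0, omega) = (ny*omega, -nx*omega, 0)
    fx = -eps * h * ny * omega
    fy = eps * h * nx * omega
    return fx, fy

# Gaussian vortex on a uniform grid (illustrative parameters)
h = 0.1
y, x = np.mgrid[-2:2:41j, -2:2:41j]
r2 = x**2 + y**2
u, v = -y * np.exp(-r2), x * np.exp(-r2)
fx, fy = confinement_force(u, v, h, eps=0.05)
```

    The force is simply added to the right-hand side of the momentum equations; because it scales with the local element size h, it vanishes under grid refinement, as a consistent correction should.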

  19. Quantitative comparisons of numerical models of brittle deformation

    NASA Astrophysics Data System (ADS)

    Buiter, S.

    2009-04-01

    Numerical modelling of brittle deformation in the uppermost crust can be challenging owing to the requirement of an accurate pressure calculation, the ability to achieve post-yield deformation and localisation, and the choice of rheology (plasticity law). One way to approach these issues is to conduct model comparisons that can evaluate the effects of different implementations of brittle behaviour in crustal deformation models. We present a comparison of three brittle shortening experiments for fourteen different numerical codes, which use finite element, finite difference, boundary element and distinct element techniques. Our aim is to constrain and quantify the variability among models in order to improve our understanding of causes leading to differences between model results. Our first experiment of translation of a stable sand-like wedge serves as a reference that allows for testing against analytical solutions (e.g., taper angle, root-mean-square velocity and gravitational rate of work). The next two experiments investigate an unstable wedge in a sandbox-like setup which deforms by inward translation of a mobile wall. All models accommodate shortening by in-sequence formation of forward shear zones. We analyse the location, dip angle and spacing of thrusts in detail as previous comparisons have shown that these can be highly variable in numerical and analogue models of crustal shortening and extension. We find that an accurate implementation of boundary friction is important for our models. Our results are encouraging in the overall agreement in their dynamic evolution, but show at the same time the effort that is needed to understand shear zone evolution. GeoMod2008 Team: Markus Albertz, Michele Cooke, Susan Ellis, Taras Gerya, Luke Hodkinson, Kristin Hughes, Katrin Huhn, Boris Kaus, Walter Landry, Bertrand Maillot, Christophe Pascal, Anton Popov, Guido Schreurs, Christopher Beaumont, Tony Crook, Mario Del Castello and Yves Leroy

  20. Accurate measurement of psoralen-crosslinked DNA: direct biochemical measurements and indirect measurement by hybridization

    SciTech Connect

    Matsuo, N.; Ross, P.M.

    1988-11-01

    This paper evaluates methods to measure crosslinkage due to psoralen plus light in total DNA and in specific sequences. DNA exposed in cells or in vitro to a bifunctional psoralen and near ultraviolet light accumulates interstrand crosslinks. Crosslinkage is the DNA mass fraction that is attached in both strands to a crosslink. We show here biochemical methods to measure psoralen photocrosslinkage accurately in total DNA. We also describe methods to measure photocrosslinkage indirectly, in specific sequences, by nucleic acid hybridization. We show that a single 4,5',8-trimethylpsoralen (TMP) crosslink causes at least 50 kbp of alkali-denatured DNA contiguous in both strands with it to snap back into the duplex form when the denatured preparation is returned to neutral pH. This process was so efficient that the DNA was not nicked by the single-strand nuclease S1 at 100-fold excess after snapping back. Uncrosslinked DNA was digested to acid-soluble material by the enzyme. Crosslinkage therefore equals the fraction of S1-resistant nucleotide in this kind of experiment. We alkali-denatured DNA samples crosslinked to varying degrees by varying TMP concentration at constant light exposure. We then measured crosslinkage by ethidium bromide (EtBr) fluorometry at pH 11.8; by EtBr fluorometry at neutral pH of S1 digests of the DNA; and by the fraction of radioactivity remaining acid insoluble in S1-digests of DNA labeled uniformly with (3H)deoxythymidine. These assays measure distinct physical properties of crosslinked DNA. Numerical agreement is expected only when all three measurements are accurate. Under optimum conditions, the three methods yielded identical results over the range of measurement. Using alkaline EtBr fluorescence in crude cell lysates, we detected crosslinks at frequencies in the range of 1.6 X 10(-7) per base pair.

  1. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic…

  2. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: 1) representing increasingly precisely the magnitudes of non-symbolic…

  3. Accurate vessel segmentation with constrained B-snake.

    PubMed

    Yuanzhi Cheng; Xin Hu; Ji Wang; Yadong Wang; Tamura, Shinichi

    2015-08-01

    We describe an active contour framework with accurate shape and size constraints on the vessel cross-sectional planes to produce the vessel segmentation. It starts with a multiscale vessel axis tracing in a 3D computed tomography (CT) data, followed by vessel boundary delineation on the cross-sectional planes derived from the extracted axis. The vessel boundary surface is deformed under constrained movements on the cross sections and is voxelized to produce the final vascular segmentation. The novelty of this paper lies in the accurate contour point detection of thin vessels based on the CT scanning model, in the efficient implementation of missing contour points in the problematic regions and in the active contour model with accurate shape and size constraints. The main advantage of our framework is that it avoids disconnected and incomplete segmentation of the vessels in the problematic regions that contain touching vessels (vessels in close proximity to each other), diseased portions (pathologic structure attached to a vessel), and thin vessels. It is particularly suitable for accurate segmentation of thin and low contrast vessels. Our method is evaluated and demonstrated on CT data sets from our partner site, and its results are compared with three related methods. Our method is also tested on two publicly available databases and its results are compared with the recently published method. The applicability of the proposed method to some challenging clinical problems, the segmentation of the vessels in the problematic regions, is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate vessel boundaries that have a level of variability similar to those obtained manually.

  4. Accurate taxonomic assignment of short pyrosequencing reads.

    PubMed

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
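
    The precision/recall mapping described above can be illustrated with a brute-force toy version. Here precision is the fraction of a node's leaves that match the read, recall is the fraction of all matching leaves that lie under the node, and the read is assigned to the node with the best F-measure. The linear-time suffix-array machinery of the paper is not reproduced; the function names and the toy taxonomy are invented for illustration.

```python
def children_map(parent):
    """Invert child -> parent pointers into a parent -> children map."""
    children = {}
    for child, par in parent.items():
        children.setdefault(par, set()).add(child)
    return children

def leaves_under(node, children):
    """Collect the leaf descendants of a node (a leaf has no children)."""
    out, stack = set(), [node]
    while stack:
        n = stack.pop()
        kids = children.get(n)
        if kids:
            stack.extend(kids)
        else:
            out.add(n)
    return out

def best_assignment(matches, parent):
    """Assign a read to the node maximizing the F-measure of precision
    (matching leaves / leaves under node) and recall (matching leaves
    under node / all matching leaves)."""
    children = children_map(parent)
    nodes = set(parent) | set(parent.values())
    best, best_f = None, -1.0
    for node in nodes:
        lv = leaves_under(node, children)
        tp = len(lv & matches)
        if tp == 0:
            continue
        prec, rec = tp / len(lv), tp / len(matches)
        f = 2 * prec * rec / (prec + rec)
        if f > best_f:
            best, best_f = node, f
    return best

# toy taxonomy: root -> {A, B}, A -> {a1, a2}, B -> {b1}
parent = {"A": "root", "B": "root", "a1": "A", "a2": "A", "b1": "B"}
```

    Note the contrast with the conservative lowest-common-ancestor rule: a read matching a1 and b1 still maps to the root under both rules, but a read matching only a1 and a2 maps to A (precision 1, recall 1) rather than to a possibly much larger clade.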

  5. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  6. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constrains on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  7. Sparse and accurate high resolution SAR imaging

    NASA Astrophysics Data System (ADS)

    Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian

    2012-05-01

    We investigate the usage of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user parameter-free. IAA has been shown to reconstruct SAR images with excellent side lobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then we enforce sparsity by using MAP with a sparsity inducing prior. By coupling these two methods, we can produce sparse and accurate high resolution images that are conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications where the size of the problems is quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. Via the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high side lobe nature of FFT. However with our approach clear edges, boundaries, and textures of the vehicles are obtained.
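
    A minimal 1D sketch of the IAA iteration, assuming the standard weighted-least-squares form of the algorithm: each iteration builds a model covariance from the current power estimates and re-estimates each grid point's amplitude adaptively. This illustrates the algorithm class, not the authors' SAR implementation; the grid sizes and the small diagonal loading (added for numerical safety) are assumptions.

```python
import numpy as np

def iaa(y, A, n_iter=10):
    """Iterative Adaptive Approach: user-parameter-free, nonparametric
    weighted-least-squares spectral estimation.
    y: (N,) complex data snapshot; A: (N, K) dictionary of steering
    vectors. Returns power estimates p_k = |s_k|^2 on the K-point grid."""
    N = len(y)
    # matched-filter (periodogram-like) initialization
    p = np.abs(A.conj().T @ y)**2 / np.sum(np.abs(A)**2, axis=0)**2
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T + 1e-9 * np.eye(N)     # model covariance
        Ri_y = np.linalg.solve(R, y)
        Ri_A = np.linalg.solve(R, A)
        num = A.conj().T @ Ri_y                          # a_k^H R^-1 y
        den = np.real(np.sum(A.conj() * Ri_A, axis=0))   # a_k^H R^-1 a_k
        p = np.abs(num / den)**2
    return p

# single unit-amplitude exponential at normalized frequency 0.25
N, K = 32, 128
n = np.arange(N)
freqs = np.arange(K) / K
A = np.exp(2j * np.pi * np.outer(n, freqs))
y = np.exp(2j * np.pi * 0.25 * n)
p = iaa(y, A)   # power spectrum peaks sharply at the 0.25 bin
```

    The sparsity-enforcing MAP step of the paper would then threshold or re-weight p with a sparsity-inducing prior; that step is omitted here.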

  8. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  9. Direct numerical simulation of scalar transport using unstructured finite-volume schemes

    NASA Astrophysics Data System (ADS)

    Rossi, Riccardo

    2009-03-01

    An unstructured finite-volume method for direct and large-eddy simulations of scalar transport in complex geometries is presented and investigated. The numerical technique is based on a three-level fully implicit time advancement scheme and central spatial interpolation operators. The scalar variable at cell faces is obtained by a symmetric central interpolation scheme, which is formally first-order accurate, or by further employing a high-order correction term which leads to formal second-order accuracy irrespective of the underlying grid. In this framework, deferred-correction and slope-limiter techniques are introduced in order to avoid numerical instabilities in the resulting algebraic transport equation. The accuracy and robustness of the code are initially evaluated by means of basic numerical experiments where the flow field is assigned a priori. A direct numerical simulation of turbulent scalar transport in a channel flow is finally performed to validate the numerical technique against a numerical dataset established by a spectral method. In spite of the linear character of the scalar transport equation, the computed statistics and spectra of the scalar field are found to be significantly affected by the spectral properties of the interpolation schemes. Although the results show an improved spectral resolution and greater spatial accuracy for the high-order operator in the analysis of basic scalar transport problems, the low-order central scheme is found superior for high-fidelity simulations of turbulent scalar transport.

  10. Numerical comparison between DHF and RHF methods

    NASA Astrophysics Data System (ADS)

    Kobus, J.; Jaskolski, W.

    1987-10-01

    A detailed numerical comparison of the Dirac-Hartree-Fock method and the relativistic Hartree-Fock (RHF) method of Cowan and Griffith (1976) is presented, considering the total energy, the orbital energies, and the one-electron and two-electron integrals. The RHF method is found to yield accurate values of the relativistic transition energies. Using accurate values of the correlation corrections for p-electron and d-electron systems, the usefulness of the RHF method in obtaining relativistic corrections to the differential term energies is demonstrated. Advantages of the method for positron scattering on heavy systems are also pointed out.

  11. The numerical dynamic for highly nonlinear partial differential equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.

  12. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well.

  13. Time accurate simulations of compressible shear flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Steinberger, Craig J.; Vidoni, Thomas J.; Madnia, Cyrus K.

    1993-01-01

    The objectives of this research are to employ direct numerical simulation (DNS) to study the phenomenon of mixing (or lack thereof) in compressible free shear flows and to suggest new means of enhancing mixing in such flows. The shear flow configurations under investigation are those of parallel mixing layers and planar jets under both non-reacting and reacting nonpremixed conditions. During the three years of this research program, several important issues regarding mixing and chemical reactions in compressible shear flows were investigated.

  14. Statistically accurate simulations for atmospheric flows

    NASA Astrophysics Data System (ADS)

    Dubinkina, S.

    2009-04-01

    A Hamiltonian particle-mesh method for quasi-geostrophic potential vorticity flow is proposed. The microscopic vorticity field at any time is an area- and energy-conserving rearrangement of the initial field. We construct a statistical mechanics theory to explain the long-time behavior of the numerical solution. The statistical theory correctly predicts the spatial distribution of particles as a function of their point vorticity. A nonlinear relation between the coarse grained mean stream function and mean vorticity fields is predicted, consistent with the preservation of higher moments of potential vorticity reported in [R. V. Abramov, A. J. Majda 2003, PNAS 100 3841--3846].

  15. Numerical simulation of electrospray in the cone-jet mode.

    PubMed

    Herrada, M A; López-Herrera, J M; Gañán-Calvo, A M; Vega, E J; Montanero, J M; Popinet, S

    2012-08-01

    We present a robust and computationally efficient numerical scheme for simulating steady electrohydrodynamic atomization processes (electrospray). The main simplification assumed in this scheme is that all the free electrical charges are distributed over the interface. A comparison of the results with those calculated with a volume-of-fluid method showed that the numerical scheme presented here accurately describes the flow pattern within the entire liquid domain. Experiments were performed to partially validate the numerical predictions. The simulations accurately reproduced the experimental shape of the liquid cone jet, providing correct values of the emitted electric current even for configurations very close to the cone-jet stability limit.

  16. Numerical investigation of the joint impact of thermophoresis and radiative losses in induction plasma synthesis of copper nanoparticles

    NASA Astrophysics Data System (ADS)

    Bianconi, S.; Boselli, M.; Gherardi, M.; Colombo, V.

    2017-04-01

    A numerical model for the simulation of copper nanoparticle synthesis in an induction thermal plasma system has been developed, taking into account the joint effects of radiative losses from the metallic vapour and thermophoretic transport of the synthesized nanoparticles on process performance; the final aim of the work is an accurate modelling tool for design-oriented simulation and optimization of the synthesis process. Radiative losses from copper vapour and thermophoresis are considered for different reaction chamber geometries (a cylindrical one and one with a conical top section), combined with different quench gas injection strategies and different power levels, in order to evaluate the impact of these phenomena on process design strategies. The numerical results show that both phenomena have a significant impact on the process parameters in all the investigated cases, and that they should be taken into account in order to develop increasingly accurate models for process design and optimization.

  17. a Numerical Method for Scattering from Acoustically Soft and Hard Thin Bodies in Two Dimensions

    NASA Astrophysics Data System (ADS)

    YANG, S. A.

    2002-03-01

    This paper presents a numerical method for predicting the acoustic scattering from two-dimensional (2-D) thin bodies. Both the Dirichlet and Neumann problems are considered. Applying the thin-body formulation leads to the boundary integral equations involving weakly singular and hypersingular kernels. Completely regularizing these kinds of singular kernels is thus the main concern of this paper. The basic subtraction-addition technique is adopted. The purpose of incorporating a parametric representation of the boundary surface with the integral equations is two-fold. The first is to facilitate the numerical implementation for arbitrarily shaped bodies. The second one is to facilitate the expansion of the unknown function into a series of Chebyshev polynomials. Some of the resultant integrals are evaluated by using the Gauss-Chebyshev integration rules after moving the series coefficients to the outside of the integral sign; others are evaluated exactly, including the modified hypersingular integral. The numerical implementation basically includes only two parts, one for evaluating the ordinary integrals and the other for solving a system of algebraic equations. Thus, the current method is highly efficient and accurate because these two solution procedures are easy and straightforward. Numerical calculations consist of the acoustic scattering by flat and curved plates. Comparisons with analytical solutions for flat plates are made.
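
    The Gauss-Chebyshev rules mentioned above take a particularly simple form once the Chebyshev weight is factored out of the integrand; a minimal sketch (the test integrand and node count are illustrative, not from the paper):

```python
import math

def gauss_chebyshev(f, n):
    """Gauss-Chebyshev rule for integrals carrying a 1/sqrt(1-x^2) weight:
    int_{-1}^{1} f(x)/sqrt(1-x^2) dx ~ (pi/n) * sum f(x_k),
    with nodes x_k = cos((2k-1)pi/(2n)) -- all weights are equal."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    return (math.pi / n) * sum(f(x) for x in nodes)

# The rule is exact for polynomials up to degree 2n-1:
# int x^2 / sqrt(1-x^2) dx over [-1, 1] equals pi/2.
approx = gauss_chebyshev(lambda x: x * x, 8)
print(approx)  # ~1.5707963 (pi/2)
```

    Because the weights are constant, the rule amounts to a plain average of the integrand at the Chebyshev nodes, which is part of what makes the method in the paper efficient once the unknown is expanded in Chebyshev polynomials.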

  18. New possibilities of accurate particle characterisation by applying direct boundary models to analytical centrifugation.

    PubMed

    Walter, Johannes; Thajudeen, Thaseem; Süss, Sebastian; Segets, Doris; Peukert, Wolfgang

    2015-04-21

    Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.

  19. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a substantial reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
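
    Richardson extrapolation, used above to isolate truncation error, combines estimates at two resolutions so that the leading error term cancels; a minimal sketch for a second-order central difference (the sin test function and step size are illustrative):

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x); error ~ O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """Combine estimates at h and h/2 to cancel the O(h^2) term,
    yielding an O(h^4) approximation of f'(x)."""
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

d2 = central_diff(math.sin, 1.0, 0.1)   # plain second-order estimate
d4 = richardson(math.sin, 1.0, 0.1)     # extrapolated fourth-order estimate
print(abs(d2 - math.cos(1.0)), abs(d4 - math.cos(1.0)))  # error drops sharply
```

    Comparing the extrapolated value against the base estimate is exactly how truncation error can be isolated without knowing the exact solution.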

  20. Extracting accurate strain measurements in bone mechanics: A critical review of current methods.

    PubMed

    Grassi, Lorenzo; Isaksson, Hanna

    2015-10-01

    Osteoporosis related fractures are a social burden that advocates for more accurate fracture prediction methods. Mechanistic methods, e.g. finite element models, have been proposed as a tool to better predict bone mechanical behaviour and strength. However, there is little consensus about the optimal constitutive law to describe bone as a material. Extracting reliable and relevant strain data from experimental tests is of fundamental importance to better understand bone mechanical properties, and to validate numerical models. Several techniques have been used to measure strain in experimental mechanics, with substantial differences in terms of accuracy, precision, time- and length-scale. Each technique presents upsides and downsides that must be carefully evaluated when designing the experiment. Moreover, additional complexities are often encountered when applying such strain measurement techniques to bone, due to its complex composite structure. This review of literature examined the four most commonly adopted methods for strain measurements (strain gauges, fibre Bragg grating sensors, digital image correlation, and digital volume correlation), with a focus on studies with bone as a substrate material, at the organ and tissue level. For each of them the working principles, a summary of the main applications to bone mechanics at the organ- and tissue-level, and a list of pros and cons are provided.

  1. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  2. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  3. LSM: perceptually accurate line segment merging

    NASA Astrophysics Data System (ADS)

    Hamid, Naila; Khan, Nazar

    2016-11-01

    Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.

  4. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
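
    The coordinate processing described above reduces, at its core, to scaling encoder counts to an angle and pairing it with the arm geometry; a minimal sketch with hypothetical counts-per-revolution and arm dimensions (not values from the patent):

```python
import math

def probe_tip_cylindrical(counts, counts_per_rev, arm_length, z_offset):
    """Convert encoder-wheel counts to cylindrical coordinates (r, theta, z)
    of the probe tip, for a rigid arm rotating about the wheel's axis."""
    theta = 2.0 * math.pi * counts / counts_per_rev   # angle from detected marks
    return arm_length, theta, z_offset

# Hypothetical example: a quarter revolution of a 16384-count encoder.
r, theta, z = probe_tip_cylindrical(counts=4096, counts_per_rev=16384,
                                    arm_length=250.0, z_offset=30.0)
print(r, theta, z)  # theta is pi/2 for a quarter revolution
```

    A real machine would chain such transforms through each revolute joint; the sketch shows only the single-joint conversion the sensor output feeds into.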

  5. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  6. Obtaining accurate translations from expressed sequence tags.

    PubMed

    Wasmuth, James; Blaxter, Mark

    2009-01-01

    The genomes of an increasing number of species are being investigated through the generation of expressed sequence tags (ESTs). However, ESTs are prone to sequencing errors and typically define incomplete transcripts, making downstream annotation difficult. Annotation would be greatly improved with robust polypeptide translations. Many current solutions for EST translation require a large number of full-length gene sequences for training purposes, a resource that is not available for the majority of EST projects. As part of our ongoing EST programs investigating these "neglected" genomes, we have developed a polypeptide prediction pipeline, prot4EST. It incorporates freely available software to produce final translations that are more accurate than those derived from any single method. We describe how this integrated approach goes a long way to overcoming the deficit in training data.

  7. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis for using Fresnel diffraction in distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research on extending the effective range of Fresnel diffraction systems, and the equipment this requires, is also described.

  8. Accurate radio positions with the Tidbinbilla interferometer

    NASA Technical Reports Server (NTRS)

    Batty, M. J.; Gulkis, S.; Jauncey, D. L.; Rayner, P. T.

    1979-01-01

    The Tidbinbilla interferometer (Batty et al., 1977) is designed specifically to provide accurate radio position measurements of compact radio sources in the Southern Hemisphere with high sensitivity. The interferometer uses the 26-m and 64-m antennas of the Deep Space Network at Tidbinbilla, near Canberra. The two antennas are separated by 200 m on a north-south baseline. By utilizing the existing antennas and the low-noise traveling-wave masers at 2.29 GHz, it has been possible to produce a high-sensitivity instrument with a minimum of capital expenditure. The north-south baseline ensures that a good range of UV coverage is obtained, so that sources lying in the declination range between about -80 and +30 deg may be observed with nearly orthogonal projected baselines of no less than about 1000 lambda. The instrument also provides high-accuracy flux density measurements for compact radio sources.

  9. Magnetic ranging tool accurately guides replacement well

    SciTech Connect

    Lane, J.B.; Wesson, J.P. )

    1992-12-21

    This paper reports on magnetic ranging surveys and directional drilling technology which accurately guided a replacement well bore to intersect a leaking gas storage well with casing damage. The second well bore was then used to pump cement into the original leaking casing shoe. The repair well bore kicked off from the surface hole, bypassed casing damage in the middle of the well, and intersected the damaged well near the casing shoe. The repair well was subsequently completed in the gas storage zone near the original well bore, salvaging the valuable bottom hole location in the reservoir. This method would prevent the loss of storage gas, and it would prevent a potential underground blowout that could permanently damage the integrity of the storage field.

  10. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  11. Influence of the Numerical Scheme on the Solution Quality of the SWE for Tsunami Numerical Codes: The Tohoku-Oki 2011 Example.

    NASA Astrophysics Data System (ADS)

    Reis, C.; Clain, S.; Figueiredo, J.; Baptista, M. A.; Miranda, J. M. A.

    2015-12-01

    Numerical tools have become very important for scenario evaluations of hazardous phenomena such as tsunamis. Nevertheless, the predictions depend strongly on the quality of the numerical tool, and the design of efficient numerical schemes still receives considerable attention as a means of providing robust and accurate solutions. In this study we compare the efficiency of two finite volume numerical codes with second-order discretization, implemented with different methods of solving the non-conservative shallow water equations: the MUSCL (Monotonic Upstream-Centered Scheme for Conservation Laws) method and the MOOD (Multi-dimensional Optimal Order Detection) method, the latter of which optimizes the accuracy of the approximation as a function of the local smoothness of the solution. MUSCL is based on a priori criteria: the limiting procedure is performed before the solution is updated to the next time step, which can lead to unnecessary accuracy reduction. By contrast, the MOOD technique uses a posteriori detectors to prevent the solution from oscillating in the vicinity of discontinuities: a candidate solution is computed, and corrections are performed only for the cells where non-physical oscillations are detected. Using a simple one-dimensional analytical benchmark, 'Single wave on a sloping beach', we show that the classical 1D shallow-water system can be accurately solved with the finite volume method equipped with the MOOD technique, which provides a better approximation with sharper shocks and less numerical diffusion. For code validation, we also use the Tohoku-Oki 2011 tsunami and reproduce two DART records, demonstrating that the quality of the solution may deeply affect the scenario one can assess. This work is funded by the Portugal-France research agreement, through the research project GEONUM FCT-ANR/MAT-NAN/0122/2012.
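
    The a priori MUSCL limiting contrasted above can be illustrated with the classic minmod limiter, which clips the reconstruction slope before the solution is updated; a minimal 1-D sketch (hypothetical data, not the authors' code):

```python
def minmod(a, b):
    """Classic minmod limiter: zero at extrema/discontinuities (opposite or
    vanishing slopes), otherwise the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_faces(u, i, dx):
    """Limited linear reconstruction of cell i at its left and right faces."""
    slope = minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
    return u[i] - 0.5 * dx * slope, u[i] + 0.5 * dx * slope

u = [0.0, 0.0, 1.0, 1.0]                     # data with a discontinuity
left, right = muscl_faces(u, 2, 1.0)
print(left, right)                           # slope clipped to zero: 1.0 1.0
print(muscl_faces([0.0, 1.0, 2.0, 3.0], 1, 1.0))  # smooth data: full slope kept
```

    Note how the limiter fires at the jump whether or not an oscillation would actually have appeared; this a priori clipping is exactly the source of the "unnecessary accuracy reduction" that a posteriori MOOD detection avoids.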

  12. Numerical computation of Pop plot

    SciTech Connect

    Menikoff, Ralph

    2015-03-23

    The Pop plot — distance-of-run to detonation versus initial shock pressure — is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, which is the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plot can be the basis for a validation test or as an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.
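
    A Pop plot is conventionally a straight line in log-log space (distance-of-run versus initial shock pressure), so automating its computation ends in a simple power-law fit; a minimal sketch with synthetic data and hypothetical coefficients (not values for any real explosive):

```python
import math

def fit_pop_plot(pressures, run_distances):
    """Least-squares fit of log10(x_run) = a + b*log10(P),
    the standard straight-line Pop-plot form."""
    X = [math.log10(p) for p in pressures]
    Y = [math.log10(x) for x in run_distances]
    n = len(X)
    xb, yb = sum(X) / n, sum(Y) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(X, Y)) / \
        sum((x - xb) ** 2 for x in X)
    a = yb - b * xb
    return a, b

# Synthetic data following x_run = 100 * P^-2 (hypothetical coefficients).
P = [5.0, 10.0, 15.0, 20.0]
x_run = [100.0 * p ** -2 for p in P]
a, b = fit_pop_plot(P, x_run)
print(a, b)  # recovers intercept ~2.0 and slope ~-2.0
```

    Comparing such fitted coefficients from simulation against the experimental ones is one simple way to turn the Pop plot comparison into a quantitative validation metric or calibration target.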

  13. Numerical reconstruction of optical surfaces.

    PubMed

    Nam, Jayoung; Rubinstein, Jacob

    2008-07-01

    There are several problems in optics that involve the reconstruction of surfaces such as wavefronts, reflectors, and lenses. The reconstruction problem often leads to a system of first-order differential equations for the unknown surface. We compare several numerical methods for integrating differential equations of this kind. One class of methods involves a direct integration. It is shown that such a technique often fails in practice. We thus consider one method that provides an approximate direct integration; we show that it is always converging and that it provides a stable, accurate solution even in the presence of measurement noise. In addition, we consider a number of methods that are based on converting the original equation into a minimization problem.

  14. Numerical experiments in homogeneous turbulence

    NASA Technical Reports Server (NTRS)

    Rogallo, R. S.

    1981-01-01

    The direct simulation methods developed by Orszag and Patterson (1972) for isotropic turbulence were extended to homogeneous turbulence in an incompressible fluid subjected to uniform deformation or rotation. The results of simulations for irrotational strain (plane and axisymmetric), shear, rotation, and relaxation toward isotropy following axisymmetric strain are compared with linear theory and experimental data. Emphasis is placed on the shear flow because of its importance and because of the availability of accurate and detailed experimental data. The computed results are used to assess the accuracy of two popular models used in the closure of the Reynolds-stress equations. Data from a variety of the computed fields and the details of the numerical methods used in the simulation are also presented.

  15. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed-information case. Travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions, so travelers make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
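
    The boundedly rational route choice described above can be sketched directly; the travel times and threshold value below are illustrative, not taken from the study:

```python
import random

def choose_route(t1, t2, BR):
    """Boundedly rational choice between route 0 and route 1:
    indifferent (uniform random) when the travel-time gap is within BR,
    otherwise pick the faster route."""
    if abs(t1 - t2) <= BR:
        return random.randint(0, 1)   # equal probability within the threshold
    return 0 if t1 < t2 else 1

random.seed(0)                        # reproducible demo
print(choose_route(10.0, 20.0, 3.0))  # gap exceeds BR: deterministic, route 0
picks = {choose_route(10.0, 11.0, 3.0) for _ in range(200)}
print(picks)                          # gap within BR: both routes appear
```

    The threshold keeps small (or stale) travel-time differences from triggering mass route switching, which is the mechanism that damps oscillations in the two-route system.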

  16. Accurate analytical approximation of asteroid deflection with constant tangential thrust

    NASA Astrophysics Data System (ADS)

    Bombardelli, Claudio; Baù, Giulio

    2012-11-01

    We present analytical formulas to estimate the variation of achieved deflection for an Earth-impacting asteroid following a continuous tangential low-thrust deflection strategy. Relatively simple analytical expressions are obtained with the aid of asymptotic theory and the use of the Peláez orbital element set, an approach that is particularly suitable for the asteroid deflection problem and is not limited to small eccentricities. The accuracy of the proposed formulas is evaluated numerically, showing negligible error for both early and late deflection campaigns. The results will be of aid in planning future low-thrust asteroid deflection missions.
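    The effect being approximated can be reproduced in a much cruder setting than the paper's Peláez-element formulation. The sketch below assumes an initially circular heliocentric orbit and integrates the Gauss variational equation for the semi-major axis under constant tangential acceleration; the accumulated phase lag then gives the along-track drift (for a circular orbit this reduces to the classical estimate Δs ≈ (3/2)·a_t·t²):

    ```python
    import math

    MU_SUN = 1.32712440018e20  # m^3/s^2, gravitational parameter of the Sun

    def tangential_deflection(a0, accel, years, steps=100000):
        """Along-track drift (metres) of a body on an initially circular orbit of
        radius a0 (m) under constant tangential acceleration accel (m/s^2).
        Simplified circular-orbit model, not the paper's formulation."""
        dt = years * 365.25 * 86400.0 / steps
        a = a0
        n0 = math.sqrt(MU_SUN / a0**3)           # unperturbed mean motion
        dtheta = 0.0
        for _ in range(steps):
            # Gauss equation for e = 0: da/dt = 2 * F_t * a^(3/2) / sqrt(mu)
            a += 2.0 * accel * a**1.5 / math.sqrt(MU_SUN) * dt
            # phase lag relative to the unperturbed orbit
            dtheta += (math.sqrt(MU_SUN / a**3) - n0) * dt
        return abs(a0 * dtheta)

    # e.g. 1 AU orbit, 1e-10 m/s^2 of tangential thrust, 10-year lead time:
    # tangential_deflection(1.496e11, 1e-10, 10.0)
    ```

    Even this linearized model shows why tangential thrust is attractive for deflection: the displacement grows quadratically with warning time, so early campaigns win.
    
    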

  17. Primordial Black Holes from First Principles (numerics)

    NASA Astrophysics Data System (ADS)

    Bloomfield, Jolyon; Moss, Zander; Lam, Casey; Russell, Megan; Face, Stephen; Guth, Alan

    2017-01-01

    In order to compute accurate number densities and mass spectra for primordial black holes from an inflationary power spectrum, one needs to perform Monte Carlo integration over field configurations. This requires a method of determining whether a black hole will form, and if so, what its mass will be, for each sampled configuration. For such an integral to converge within any reasonable time, these determinations must be made highly efficiently. We present a numerical pipeline that is capable of making reasonably accurate predictions of black hole formation and masses at the rate of a few seconds per sample (including the sampling process), utilizing a fully nonlinear numerical relativity code in 1+1 dimensions.
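    The structure of such a Monte Carlo can be sketched with a stand-in formation criterion. The toy below replaces the paper's numerical relativity evolution with a simple overdensity threshold plus critical-collapse mass scaling (threshold and exponent values are illustrative order-of-magnitude choices, not the paper's results):

    ```python
    import random

    DELTA_C = 0.45   # illustrative collapse threshold (order of magnitude)
    GAMMA = 0.36     # critical-collapse scaling exponent (Choptuik scaling)

    def sample_amplitude(sigma, rng):
        # Stand-in for sampling a field configuration from the inflationary
        # power spectrum: a single Gaussian peak amplitude of variance sigma^2.
        return rng.gauss(0.0, sigma)

    def pbh_monte_carlo(n_samples, sigma, m_horizon=1.0, seed=0):
        """Toy Monte Carlo estimate of PBH formation fraction and masses.
        The real pipeline replaces the threshold test below with a 1+1D
        numerical relativity evolution of each sampled configuration."""
        rng = random.Random(seed)
        masses = []
        for _ in range(n_samples):
            delta = sample_amplitude(sigma, rng)
            if delta > DELTA_C:                             # does it collapse?
                masses.append(m_horizon * (delta - DELTA_C) ** GAMMA)
        fraction = len(masses) / n_samples                  # formation probability
        return fraction, masses
    ```

    The convergence point made in the abstract is visible here: the formation fraction is tiny for realistic spectra, so per-sample cost dominates the error budget, which is why a few seconds per sample matters.
    
    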

  18. The Impact of TRMM Data on Mesoscale Numerical Simulation of Super Typhoon Paka

    NASA Technical Reports Server (NTRS)

    Pu, Zhao-Xia; Tao, Wei-Kuo; Jia, Yi-Qin; Simpson, Joanne; Braun, Scott A.; Halverson, Jeffrey; Hou, Arthur; Olson, William; Starr, David O'C. (Technical Monitor)

    2001-01-01

    Accurate measurement of the spatial and temporal variations of tropical rainfall around the globe had remained a critical problem in meteorology until the recent launch of the Tropical Rainfall Measuring Mission (TRMM). TRMM offers a unique opportunity to improve understanding of tropical meteorology and also offers a great opportunity to evaluate the impact of rainfall data on tropical weather forecasts. This study assesses the impact of TRMM Microwave Imager (TMI) derived surface rainfall data on the numerical