Science.gov

Sample records for accurate numerical evaluation

  1. On the very accurate numerical evaluation of the Generalized Fermi-Dirac Integrals

    NASA Astrophysics Data System (ADS)

    Mohankumar, N.; Natarajan, A.

    2016-10-01

    We present a new and very accurate algorithm for the evaluation of the Generalized Fermi-Dirac Integral, with a relative error less than 10⁻²⁰. The method involves Double Exponential, Trapezoidal and Gauss-Legendre quadratures. For the residue correction of the Gauss-Legendre scheme, a simple and precise continued fraction algorithm is used.
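
    The double-exponential ingredient is easy to reproduce. Below is a minimal sketch (not the authors' three-quadrature algorithm) that evaluates the generalized Fermi-Dirac integral F_k(η, θ) = ∫₀^∞ x^k √(1 + θx/2) / (e^(x−η) + 1) dx with tanh-sinh (double-exponential) quadrature in mpmath; working at 30 digits comfortably supports relative errors below 10⁻²⁰. The parameter values are illustrative.

        # Generalized Fermi-Dirac integral via double-exponential quadrature.
        from mpmath import mp, mpf, exp, sqrt, quad

        mp.dps = 30  # 30 significant digits of working precision

        def gfd(k, eta, theta):
            """F_k(eta, theta) = int_0^inf x^k sqrt(1 + theta*x/2) / (exp(x - eta) + 1) dx."""
            f = lambda x: x**k * sqrt(1 + theta*x/2) / (exp(x - eta) + 1)
            # mpmath's quad uses tanh-sinh (double-exponential) nodes by default
            return quad(f, [0, mp.inf])

        print(gfd(mpf(1)/2, mpf(2), mpf('0.1')))  # k = 1/2, a typical stellar-physics case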

  2. Numerical evolution of multiple black holes with accurate initial data

    SciTech Connect

    Galaviz, Pablo; Bruegmann, Bernd; Cao, Zhoujian

    2010-07-15

    We present numerical evolutions of three equal-mass black holes using the moving puncture approach. We calculate puncture initial data for three black holes solving the constraint equations by means of a high-order multigrid elliptic solver. Using these initial data, we show the results for three black hole evolutions with sixth-order waveform convergence. We compare results obtained with the BAM and AMSS-NCKU codes with previous results. The approximate analytic solution to the Hamiltonian constraint used in previous simulations of three black holes leads to different dynamics and waveforms. We present some numerical experiments showing the evolution of four black holes and the resulting gravitational waveform.

  3. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
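
    A toy sketch of the two decision functions contrasted above: given a discrete bimodal prior and a noisy observation, a responder can either sample the posterior or report its maximum. The prior shape, Gaussian likelihood, and numbers below are illustrative assumptions, not the experimental stimuli.

        # Discrete Bayesian estimation: posterior sampling vs. posterior maximum (MAP).
        import numpy as np

        rng = np.random.default_rng(0)
        counts = np.arange(1, 21)                        # possible true counts 1..20
        prior = np.exp(-0.5*((counts - 5)/1.5)**2) + np.exp(-0.5*((counts - 15)/1.5)**2)
        prior /= prior.sum()                             # discrete bimodal prior

        def posterior(obs, sigma=2.0):
            like = np.exp(-0.5*((obs - counts)/sigma)**2)
            post = prior*like
            return post/post.sum()

        post = posterior(obs=9.0)
        sample_response = rng.choice(counts, p=post)     # draw a posterior sample
        map_response = counts[np.argmax(post)]           # take the posterior maximum
        print(sample_response, map_response)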

  4. Accurate numerical simulation of short fiber optical parametric amplifiers.

    PubMed

    Marhic, M E; Rieznik, A A; Kalogerakis, G; Braimiotis, C; Fragnito, H L; Kazovsky, L G

    2008-03-17

    We improve the accuracy of numerical simulations for short fiber optical parametric amplifiers (OPAs). Instead of using the usual coarse-step method, we adopt a model for birefringence and dispersion which uses fine-step variations of the parameters. We also improve the split-step Fourier method by exactly treating the nonlinear ellipse rotation terms. We find that results obtained this way for two-pump OPAs can be significantly different from those obtained by using the usual coarse-step fiber model, and/or neglecting ellipse rotation terms.
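
    The split-step structure is easiest to see in the scalar case. The sketch below advances the scalar NLSE A_z = −i(β₂/2)A_tt + iγ|A|²A with a Strang split step in which the nonlinear substep is a pure phase rotation and hence exact, in the spirit of the exact ellipse-rotation treatment above; the paper's two-pump vector model with fine-step birefringence is considerably richer. Sign conventions and parameters are illustrative; the sech input is a fundamental soliton for these values, a convenient correctness check.

        # One Strang split step of the split-step Fourier method for the scalar NLSE.
        import numpy as np

        def ssfm_step(A, dz, dt, beta2=-1.0, gamma=1.0):
            w = 2*np.pi*np.fft.fftfreq(A.size, d=dt)
            half_linear = np.exp(1j*(beta2/2)*w**2 * dz/2)
            A = np.fft.ifft(half_linear*np.fft.fft(A))   # half dispersion step
            A = A*np.exp(1j*gamma*np.abs(A)**2 * dz)     # exact nonlinear phase rotation
            A = np.fft.ifft(half_linear*np.fft.fft(A))   # half dispersion step
            return A

        t = np.linspace(-20, 20, 1024, endpoint=False)
        A = 1/np.cosh(t)                                 # fundamental soliton input
        for _ in range(200):
            A = ssfm_step(A, dz=0.01, dt=t[1] - t[0])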

  5. Accurate numerical solution of compressible, linear stability equations

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Chuang, S.; Hussaini, M. Y.

    1982-01-01

    The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
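
    The block-tridiagonal solve mentioned above is a standard forward-elimination/back-substitution (block Thomas) recursion. A compact sketch, with randomly generated diagonally dominant blocks standing in for the compact-scheme matrices:

        # Block Thomas algorithm for A[i] x[i-1] + B[i] x[i] + C[i] x[i+1] = d[i].
        import numpy as np

        def block_thomas(A, B, C, d):
            n = len(B)
            Bp, dp = [B[0]], [d[0]]
            for i in range(1, n):                        # forward elimination
                m = A[i] @ np.linalg.inv(Bp[i-1])
                Bp.append(B[i] - m @ C[i-1])
                dp.append(d[i] - m @ dp[i-1])
            x = [None]*n
            x[-1] = np.linalg.solve(Bp[-1], dp[-1])
            for i in range(n - 2, -1, -1):               # back substitution
                x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i+1])
            return np.array(x)

        rng = np.random.default_rng(0)
        n, k = 6, 3
        A = [None] + [rng.random((k, k)) for _ in range(n - 1)]   # lower blocks
        B = [rng.random((k, k)) + 3*np.eye(k) for _ in range(n)]  # diagonal blocks
        C = [rng.random((k, k)) for _ in range(n - 1)]            # upper blocks
        d = [rng.random(k) for _ in range(n)]
        x = block_thomas(A, B, C, d)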

  6. Numerical evaluation of uniform beam modes.

    SciTech Connect

    Tang, Y. (Reactor Analysis and Engineering)

    2003-12-01

    The equation for calculating the normal modes of a uniform beam under transverse free vibration involves the hyperbolic sine and cosine functions, which grow exponentially without bound. Tables of the natural frequencies and the corresponding normal modes are available for numerical evaluation up to the 16th mode. For modes higher than the 16th, the accuracy of the numerical evaluation is lost to round-off error in the floating-point arithmetic of digital computers. Moreover, the beam-mode functions commonly presented in structural dynamics books are not suitable for numerical evaluation. In this paper, these functions are rearranged and expressed in a different form. With these new equations, one can calculate the normal modes accurately up to at least the 100th mode. Mike's Arbitrary Precision Math, an arbitrary precision math library, is used in the paper to verify the accuracy.
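
    The overflow, and the kind of rearrangement that cures it, can be seen already in the frequency equation. For a clamped-free beam the characteristic equation cos(x)·cosh(x) = −1 overflows in double precision for high modes when evaluated as written, but the equivalent form cos(x) + sech(x) = 0 stays bounded because sech decays; the paper applies the same idea to the mode-shape functions themselves. A sketch:

        # Numerically stable frequency equation for a clamped-free beam.
        import numpy as np
        from scipy.optimize import brentq

        def freq_eq(x):
            sech = 2*np.exp(-x)/(1 + np.exp(-2*x))   # bounded for all x >= 0
            return np.cos(x) + sech                  # equivalent to cos(x)*cosh(x) + 1 = 0

        # roots approach (n - 1/2)*pi; bracket each and refine
        for n in range(1, 101):
            guess = (n - 0.5)*np.pi
            x_n = brentq(freq_eq, guess - 0.35, guess + 0.35)
            if n in (1, 2, 100):
                print(n, x_n)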

  7. Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.

    PubMed

    Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique

    2013-06-01

    The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. In consequence, this leads to concrete recommendations, whereby the obtained results are not discussed for their medical relevance but for the evaluation of their quality. This investigation might hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530

  8. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weighted spherical-harmonic waveform modes ₋₂Yℓm resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979
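
    The reduced-order idea can be sketched compactly: build a basis from a handful of expensive solves, fit each basis coefficient as a smooth function of the parameter, and evaluate new parameters at negligible cost. Below, a damped-chirp family stands in for NR waveforms, and a plain SVD basis with polynomial coefficient fits stands in for the paper's greedy basis and more careful fits; everything here is an illustrative assumption.

        # Minimal reduced-order surrogate: SVD basis + per-coefficient fits.
        import numpy as np

        t = np.linspace(0, 10, 2000)
        train_q = np.linspace(1, 3, 10)
        wave = lambda q: np.exp(-0.1*t)*np.sin((1 + 0.3*q)*t)   # stand-in "solver"

        X = np.array([wave(q) for q in train_q])                # training set
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        basis = Vt[:6]                                          # truncated temporal basis
        coeff = X @ basis.T                                     # training coefficients

        # fit each coefficient as a smooth function of the parameter q
        fits = [np.polyfit(train_q, coeff[:, i], deg=5) for i in range(basis.shape[0])]

        def surrogate(q):
            c = np.array([np.polyval(f, q) for f in fits])
            return c @ basis                                    # near-instant evaluation

        err = np.max(np.abs(surrogate(2.17) - wave(2.17)))
        print("max abs error at q=2.17:", err)                  # small vs. unit amplitude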

  9. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. Polymer matrix discontinuous quartz fiber-reinforced composites, chosen to accentuate toughness differences, were prepared for flexural mechanical testing with 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, with fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms
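
    The energy route referred to above amounts to numerical integration of the load/deflection record. A hedged sketch with made-up placeholder data (not the paper's measurements), normalizing the fracture work by the two created crack surfaces:

        # Work of fracture from a load/deflection curve by numerical integration.
        import numpy as np
        from scipy.integrate import trapezoid

        deflection = np.linspace(0.0, 2.0e-3, 200)          # m, toy record
        load = 400.0*np.sin(np.pi*deflection/2.0e-3)        # N, toy load curve
        U = trapezoid(load, deflection)                     # area under curve, J
        b, w = 10.0e-3, 2.0e-3                              # specimen cross-section, m
        wof = U/(2.0*b*w)                                   # work of fracture, J/m^2
        print(f"WOF ~ {wof:.0f} J/m^2")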

  10. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient in the calculation of higher-order radiative effects and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with good accuracy, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), and the Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098).
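
    The quasi-Monte Carlo ingredient is easy to demonstrate in isolation: low-discrepancy points converge much faster than pseudo-random points on smooth integrands. The toy integrand below merely stands in for a sector-decomposed Feynman-parameter integrand and is not from the paper; its exact value follows from the triangular density of u+v.

        # Scrambled Sobol QMC vs. plain Monte Carlo on a smooth [0,1]^2 integrand.
        import numpy as np
        from scipy.stats import qmc

        f = lambda u: 1.0/(1.0 + (u[:, 0] + u[:, 1])**2)
        exact = np.log(2) - 0.5*np.log(5) + 2*np.arctan(2) - np.pi/2

        rng = np.random.default_rng(1)
        n = 2**14
        mc = f(rng.random((n, 2))).mean()                      # pseudo-random estimate
        sob = qmc.Sobol(d=2, scramble=True, seed=1).random(n)  # low-discrepancy set
        qmc_est = f(sob).mean()
        print(abs(mc - exact), abs(qmc_est - exact))           # QMC error typically far smaller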

  11. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because measurements of the necessary detail are rare for high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  12. Towards numerically accurate many-body perturbation theory: Short-range correlation effects

    SciTech Connect

    Gulans, Andris

    2014-10-28

    The example of the uniform electron gas is used for showing that the short-range electron correlation is difficult to handle numerically, while it noticeably contributes to the self-energy. Nonetheless, in condensed-matter applications studied with advanced methods, such as the GW and random-phase approximations, it is common to neglect contributions due to high-momentum (large q) transfers. Then, the short-range correlation is poorly described, which leads to inaccurate correlation energies and quasiparticle spectra. To circumvent this problem, an accurate extrapolation scheme is proposed. It is based on an analytical derivation for the uniform electron gas presented in this paper, and it provides an explanation why accurate GW quasiparticle spectra are easy to obtain for some compounds and very difficult for others.
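
    Generically, such a cutoff extrapolation fits energies computed at several momentum cutoffs Q to E(Q) = E∞ + aQ⁻ᵇ and reads off the Q → ∞ limit; the paper's analytical derivation for the uniform electron gas is what pins down the actual asymptotic form. A sketch with synthetic data standing in for computed correlation energies:

        # Fit a power-law tail to cutoff-dependent energies and extrapolate.
        import numpy as np
        from scipy.optimize import curve_fit

        Q = np.array([2.0, 3.0, 4.0, 6.0, 8.0, 12.0])   # momentum cutoffs
        E = -1.0 + 0.7*Q**(-3.0)                        # synthetic data, known tail

        model = lambda q, e_inf, a, b: e_inf + a*q**(-b)
        popt, _ = curve_fit(model, Q, E, p0=(-0.9, 1.0, 2.0))
        print("extrapolated E_inf:", popt[0])           # recovers -1.0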

  13. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Veitzer, Seth A.

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  14. Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations

    SciTech Connect

    Bao, Weizhu (E-mail: bao@math.nus.edu.sg); Yang, Li (E-mail: yangli@nus.edu.sg)

    2007-08-10

    In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are: (i) the application of a time-splitting spectral discretization for the Schroedinger-type equation in KGS; (ii) the utilization of a Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for the linear/nonlinear terms in the time derivatives. The numerical methods are either explicit or implicit but solvable explicitly, unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.
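
    Ingredients (ii)-(iii) can be sketched on the linear Klein-Gordon part alone. For u_tt = u_xx − u (a stand-in; the KGS Klein-Gordon equation carries a coupling term), each Fourier mode obeys û'' = −(k² + 1)û, which is advanced analytically, exactly and without a stability limit on the time step:

        # Fourier pseudospectral in space, exact per-mode rotation in time.
        import numpy as np

        N, L = 256, 40.0
        x = np.linspace(-L/2, L/2, N, endpoint=False)
        k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
        omega = np.sqrt(k**2 + 1.0)          # mode frequencies, never zero

        u = np.exp(-x**2)                    # initial displacement
        v = np.zeros_like(x)                 # initial velocity

        def kg_step(u, v, dt):
            uh, vh = np.fft.fft(u), np.fft.fft(v)
            c, s = np.cos(omega*dt), np.sin(omega*dt)
            uh, vh = uh*c + vh*s/omega, -uh*omega*s + vh*c   # exact ODE solution
            return np.fft.ifft(uh).real, np.fft.ifft(vh).real

        for _ in range(1000):
            u, v = kg_step(u, v, dt=0.05)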

  15. A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin

    2016-07-01

    In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), the problem is converted into a sequence of linear ordinary differential equations. For the first time, the rational Euler (RE) and FRE functions are constructed from the Euler polynomials. In addition, the equation is solved on the semi-infinite domain without truncation to a finite interval, by taking the FRE as basis functions for the collocation method. This reduces the solution of the problem to the solution of a system of algebraic equations. We demonstrate that the new algorithm is efficient for obtaining the values of y'(0), y(x) and y'(x). Comparison with numerical and analytical solutions shows that the present solution is highly accurate.

  16. Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency

    NASA Astrophysics Data System (ADS)

    Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao

    2008-05-01

    Accurate estimation of molecular binding affinity is critically important for drug discovery. Energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate the slight differences in binding affinity needed to distinguish a prospective substance from dozens of candidates for medicine. Hence more accurate in-silico estimation of drug efficacy is currently in demand. Previously we proposed a concept for estimating molecular binding affinity based on the fluctuation at the interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples of computer simulations: the association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example of drug-enzyme binding), the complexation of an antigen and its antibody (an example of protein-protein binding), and the combination of the estrogen receptor and its ligand chemicals (an example of ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique for the advanced stages of drug discovery and design.

  17. Keeping the edge: an accurate numerical method to solve the stream power law

    NASA Astrophysics Data System (ADS)

    Campforts, B.; Govers, G.

    2015-12-01

    Bedrock rivers set the base level of surrounding hill slopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also makes it possible to reconstruct long-term uplift histories from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long-term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical Finite Difference Methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones, which are key to understanding transient landscapes. Here, we solve the stream power law by means of a Finite Volume Method (FVM) which is Total Variation Diminishing (TVD). TVD methods are designed to simulate sharp discontinuities, making them very suitable for modeling river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast propagating Niagara Falls. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. The TVD_FVM is therefore an important addition to the toolbox at the disposal of geomorphologists studying long-term landscape evolution.
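
    For orientation, here is the smearing-prone first-order explicit baseline that the TVD_FVM improves upon: the stream power law dz/dt = U − K·A^m·S^n on a 1D profile with x increasing downstream, slope taken on the downstream side (upwind with respect to upstream-migrating knickpoints). The drainage-area law and all parameter values are hypothetical.

        # First-order upwind FDM for detachment-limited river incision.
        import numpy as np

        nx, dx, dt = 200, 500.0, 50.0          # grid (m) and time step (yr)
        K, m, n, U = 2e-5, 0.5, 1.0, 1e-3      # erodibility, exponents, uplift (m/yr)
        x = np.arange(nx)*dx
        A = 1e6 + 80.0*x**1.7                  # hypothetical drainage area (Hack-like)
        z = 1e-3*(x[-1] - x)                   # initial gentle profile

        for _ in range(2000):
            S = np.zeros(nx)
            S[:-1] = (z[:-1] - z[1:])/dx       # downstream slope, >= 0 when incising
            S = np.maximum(S, 0.0)
            z[:-1] += dt*(U - K*A[:-1]**m*S[:-1]**n)   # update interior/upstream nodes
            # z[-1] is held fixed as base level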

  1. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
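
    The constant-conditions analytic modal solution that PolyPole-1 builds on is the classical eigenfunction (Booth-type) series for diffusion in a spherical grain of radius a with a perfectly absorbing boundary; the released fraction is:

        # Fractional intra-granular gas release under constant conditions.
        import numpy as np

        def released_fraction(D, a, t, nmax=200):
            """f(t) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / a^2) / n^2."""
            n = np.arange(1, nmax + 1)
            tau = np.pi**2 * D * t / a**2
            return 1.0 - (6.0/np.pi**2)*np.sum(np.exp(-n**2 * tau)/n**2)

        # one year at a diffusivity and grain size of typical orders of magnitude
        print(released_fraction(D=1e-19, a=5e-6, t=3.15e7))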

  2. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    PubMed

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. During in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE leads to compressive deformations of the insert which are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation.
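
    A toy contrast of the two bookkeepings discussed above: Archard-type wear depth h = k·∫p ds under a constant pressure (the time-invariant assumption) versus a pressure history that relaxes as creep spreads the contact area. All numbers and the area-growth law are illustrative placeholders, not fitted material data.

        # Time-invariant vs. creep-relaxed Archard wear accumulation.
        import numpy as np
        from scipy.integrate import trapezoid

        kw = 1.0e-9                              # wear factor per unit pressure, illustrative
        s = np.linspace(0.0, 1.0e6, 1001)        # sliding distance, mm
        p_const = 8.0                            # MPa, time-invariant assumption
        area = 50.0*(1.0 + 0.5*(1.0 - np.exp(-s/2.0e5)))  # creep grows the contact, mm^2
        p_creep = p_const*50.0/area              # same load spread over a growing area

        h_const = kw*trapezoid(np.full_like(s, p_const), s)   # linear wear depth, mm
        h_creep = kw*trapezoid(p_creep, s)
        print(h_const, h_creep)                  # creep lowers pressure and wear depth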

  3. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, C F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇·B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
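
    The interpolation substep itself is simple to sketch: shift each radial row of a disk field by its mean orbital displacement, an integer number of cells via a roll plus an interpolation for the fractional remainder. The linear interpolation and Keplerian-like rotation profile below are illustrative; the paper's scheme is higher order and handles the staggered magnetic field.

        # FARGO-style orbital advection: integer roll + fractional interpolation.
        import numpy as np

        def orbital_advect(q, shift_cells):
            """Shift row i of q by shift_cells[i] (in cells) along axis 1, periodically."""
            out = np.empty_like(q)
            for i, s in enumerate(shift_cells):
                n = int(np.floor(s))
                frac = s - n
                out[i] = (1 - frac)*np.roll(q[i], n) + frac*np.roll(q[i], n + 1)
            return out

        ny, nx = 64, 256
        q = np.random.default_rng(2).random((ny, nx))
        omega = 1.0/np.linspace(1.0, 2.0, ny)**1.5   # Keplerian-like rotation rates
        dt, dx = 0.3, 2*np.pi/nx
        q_new = orbital_advect(q, omega*dt/dx)       # timestep no longer Courant-limited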

  4. Numerical evaluation of high energy particle effects in magnetohydrodynamics

    SciTech Connect

    White, R.B.; Wu, Y.

    1994-03-01

    The interaction of high energy ions with magnetohydrodynamic modes is analyzed. A numerical code is developed which evaluates the contribution of the high energy particles to mode stability using orbit averaging of motion in either analytic or numerically generated equilibria through Hamiltonian guiding center equations. A dispersion relation is then used to evaluate the effect of the particles on the linear mode. Generic behavior of the solutions of the dispersion relation is discussed and dominant contributions of different components of the particle distribution function are identified. Numerical convergence of Monte-Carlo simulations is analyzed. The resulting code ORBIT provides an accurate means of comparing experimental results with the predictions of kinetic magnetohydrodynamics. The method can be extended to include self consistent modification of the particle orbits by the mode, and hence the full nonlinear dynamics of the coupled system.

  5. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
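
    For reference, the parameterized model referred to above is commonly used in the Stockdon et al. (2006) form (outside highly dissipative conditions), combining setup with incident-band and infragravity swash; a direct transcription:

        # Stockdon et al. (2006) 2% exceedance runup parameterization.
        import numpy as np

        def runup_2pc(H0, Tp, bf):
            """H0: deep-water wave height (m), Tp: peak period (s), bf: foreshore slope."""
            L0 = 9.81*Tp**2/(2*np.pi)             # deep-water wavelength
            setup = 0.35*bf*np.sqrt(H0*L0)
            inc = 0.75*bf*np.sqrt(H0*L0)          # incident-band swash
            ig = 0.06*np.sqrt(H0*L0)              # infragravity swash
            swash = np.sqrt(inc**2 + ig**2)
            return 1.1*(setup + swash/2)          # R2%, m

        print(runup_2pc(H0=2.0, Tp=10.0, bf=0.08))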

  6. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
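
    The core adaptive-grid operation can be sketched as equidistribution of a monitor function: redistribute grid points so that the arc-length monitor w = √(1 + u_x²) is equally integrated between neighboring points, clustering them at steep gradients. The paper's method adds smoothing and the time-accurate grid-PDE coupling; this is only the static kernel.

        # One equidistribution pass of a 1D adaptive grid.
        import numpy as np

        x = np.linspace(0.0, 1.0, 101)
        u = np.tanh(50*(x - 0.5))                      # solution with a steep front
        w = np.sqrt(1 + np.gradient(u, x)**2)          # arc-length monitor function
        c = np.concatenate([[0.0], np.cumsum(0.5*(w[1:] + w[:-1])*np.diff(x))])
        x_new = np.interp(np.linspace(0, c[-1], x.size), c, x)  # equidistributed grid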

  7. Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean

    NASA Astrophysics Data System (ADS)

    Phalippou, L.; Demeestere, F.

    2011-12-01

    The SAR mode of SIRAL-2 on board CryoSat-2 has been designed to measure primarily sea-ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimetry for the open ocean. KR's results were mostly based on 'rule of thumb' considerations of speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over ocean, with a focus on forward modelling of the power waveforms. The accuracies of geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived accounting for accurate modelling of SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. An NFM of the power waveform avoids analytical approximation, a guarantee of minimising geophysics-dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique was not used in the field of conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over the open ocean. Since PE 2007, improvements have been brought to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern, range impulse response, azimuth impulse response

  8. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today, such as the common actuator disk concept, are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create 'constructive' interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then 'interpolated' using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  9. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-01

    Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize the temperature programming, in gas chromatography, for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but these methods demand greater computational time. Hence, a new method for solving the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it transforms the task into root-determination problems within defined intervals. The proposed approach allows for retention time (tr) calculation, with accuracy determined by the user of the method, and significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
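
    The root-finding idea can be sketched with the standard temperature-programmed retention model: tr solves g(t) = ∫₀ᵗ dt' / (t0·(1 + k(T(t')))) − 1 = 0, so one brackets the root and applies a derivative-free solver instead of stepping the ODE to completion. The hold-up time is assumed constant here, and the ramp and retention-factor constants are hypothetical, not fitted values.

        # Retention time by bracketing and root finding.
        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import brentq

        t0 = 60.0                                     # hold-up time, s (assumed constant)
        T = lambda t: 320.0 + 0.167*t                 # linear ramp, K (about 10 K/min)
        k = lambda temp: np.exp(-12.0 + 5500.0/temp)  # retention factor, ln k = B/T - A

        def g(t):
            val, _ = quad(lambda s: 1.0/(t0*(1.0 + k(T(s)))), 0.0, t)
            return val - 1.0

        tr = brentq(g, 1.0, 5000.0)                   # root within the bracket
        print(f"predicted retention time: {tr:.1f} s")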

  10. Accurate evaluation of homogenous and nonhomogeneous gas emissivities

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Lee, K. P.

    1984-01-01

    Spectral transmittance and total band absorptance of selected infrared bands of carbon dioxide and water vapor are calculated by using the line-by-line and quasi-random band models, and these are compared with available experimental results to establish the validity of the quasi-random band model. Various wide-band model correlations are employed to calculate the total band absorptance and total emissivity of these two gases under homogeneous and nonhomogeneous conditions. These results are compared with available experimental results under identical conditions. From these comparisons, it is found that the quasi-random band model can provide quite accurate results and is quite suitable for most atmospheric applications.

  11. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
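
    A concrete third-order Runge-Kutta instance (Kutta's classical scheme, one of many members of the family the report analyzes) with an empirical order check on y' = −y; halving the step should cut the global error by about 2³ = 8:

        # Kutta's third-order Runge-Kutta method and an observed-order check.
        import numpy as np

        def rk3_step(f, t, y, h):
            k1 = f(t, y)
            k2 = f(t + h/2, y + h*k1/2)
            k3 = f(t + h, y - h*k1 + 2*h*k2)
            return y + h*(k1 + 4*k2 + k3)/6

        def integrate(h):
            t, y = 0.0, 1.0
            while t < 1.0 - 1e-12:
                y = rk3_step(lambda t, y: -y, t, y, h)
                t += h
            return y

        e1 = abs(integrate(0.1) - np.exp(-1))
        e2 = abs(integrate(0.05) - np.exp(-1))
        print("observed order:", np.log2(e1/e2))   # approximately 3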

  12. Spectrally accurate numerical solution of the single-particle Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Batcho, P. F.

    1998-06-01

    We have formulated a three-dimensional fully numerical (i.e., chemical basis-set free) method and applied it to the solution of the single-particle Schrödinger equation. The numerical method combines the rapid "exponential" convergence rates of spectral methods with the geometric flexibility of finite-element methods and can be viewed as an extension of the spectral element method. Singularities associated with multicenter systems are efficiently integrated by a Duffy transformation and the discrete operator is formulated by a variational statement. The method is applicable to molecular modeling for quantum chemical calculations on polyatomic systems. The complete system is shown to be efficiently inverted by the preconditioned conjugate gradient method and exponential convergence rates in numerical approximations are demonstrated for suitable benchmark problems including the hydrogenlike orbitals of nitrogen.

  13. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  14. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for computing the STM that incorporates an accurate power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
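
    The numerical-STM idea can be sketched by augmenting the dynamics with the variational equations dΦ/dt = A(t)Φ and integrating both together. Below this is done for plain two-body motion with SciPy's DOP853, an eighth-order Dormand-Prince embedded pair of the family named above; higher-fidelity force models add terms to A(t) but no new machinery. Orbit values are illustrative.

        # State transition matrix via the variational equations.
        import numpy as np
        from scipy.integrate import solve_ivp

        mu = 398600.4418  # km^3/s^2, Earth

        def rhs(t, s):
            r, v = s[:3], s[3:6]
            Phi = s[6:].reshape(6, 6)
            rn = np.linalg.norm(r)
            a = -mu*r/rn**3
            G = mu*(3*np.outer(r, r)/rn**5 - np.eye(3)/rn**3)   # d(accel)/d(position)
            A = np.block([[np.zeros((3, 3)), np.eye(3)],
                          [G, np.zeros((3, 3))]])
            return np.concatenate([v, a, (A @ Phi).ravel()])

        s0 = np.concatenate([[7000.0, 0, 0, 0, 7.546, 0], np.eye(6).ravel()])
        sol = solve_ivp(rhs, (0.0, 5400.0), s0, method="DOP853", rtol=1e-12, atol=1e-12)
        Phi_f = sol.y[6:, -1].reshape(6, 6)   # state transition matrix at final time
        print(Phi_f[0, :])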

  15. Accurate polarimeter with multicapture fitting for plastic lens evaluation

    NASA Astrophysics Data System (ADS)

    Domínguez, Noemí; Mayershofer, Daniel; Garcia, Cristina; Arasa, Josep

    2016-02-01

    Due to their manufacturing process, plastic injection molded lenses do not achieve a constant density throughout their volume. This change of density introduces tensions in the material, inducing local birefringence, which in turn is translated into a variation of the ordinary and extraordinary refractive indices that can be expressed as a retardation phase plane using the Jones matrix notation. The detection and measurement of the value of the retardation of the phase plane are therefore very useful ways to evaluate the quality of plastic lenses. We introduce a polariscopic device to obtain two-dimensional maps of the tension distribution in the bulk of a lens, based on detection of the local birefringence. In addition to a description of the device and the mathematical approach used, a set of initial measurements is presented that confirms the validity of the developed system for the testing of the uniformity of plastic lenses.
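
    The measurement principle is compactly expressed in Jones calculus: a birefringent region of retardance δ with fast axis at angle θ, placed between crossed polarizers, transmits I/I₀ = sin²(2θ)·sin²(δ/2), which is the relation a multicapture fit inverts to recover the local retardance. A small self-checking sketch (angles are arbitrary test values):

        # Jones-calculus check of the crossed-polarizer retardance formula.
        import numpy as np

        def polarizer(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[c*c, c*s], [c*s, s*s]])

        def retarder(delta, theta):
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, -s], [s, c]])
            J = np.diag([np.exp(-0.5j*delta), np.exp(0.5j*delta)])
            return R @ J @ R.T

        theta, delta = np.deg2rad(30), np.deg2rad(40)
        E = polarizer(np.pi/2) @ retarder(delta, theta) @ polarizer(0) @ np.array([1, 0])
        I = np.abs(E[0])**2 + np.abs(E[1])**2
        print(I, np.sin(2*theta)**2 * np.sin(delta/2)**2)   # the two values agree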

  16. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  17. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.

  18. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

    Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30,000 rot/min), which causes tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. These results were used to build a numerical model and calibrate the material data in the Abaqus program. A brittle cracking damage model was applied for the TBC layer, which allows elements to be removed once a damage criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and top coat.

  19. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  1. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method, based on the REFPROP database, for an accurate estimation of the non-linear behavior of thermodynamic and fluid transport properties at transcritical conditions. Building on the look-up table method, we propose a numerical method that achieves high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity into the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while keeping the velocity field free of spurious oscillations. The pressure evolution equation, derived from the full compressible Navier-Stokes equations, is solved in place of the total energy equation to achieve freedom from spurious pressure oscillations with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed as numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
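
    The look-up table idea can be sketched as interpolation of a precomputed property grid in pressure and temperature; the toy density table below merely stands in for tables generated from the REFPROP database, and the values are fabricated for illustration:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Coarse toy (p, T) grid; a production table would be densely sampled
        # from an accurate property database such as REFPROP.
        p = np.linspace(5e6, 10e6, 6)          # Pa, supercritical pressures
        T = np.linspace(100.0, 300.0, 11)      # K, spanning the transcritical range
        P, TT = np.meshgrid(p, T, indexing="ij")
        rho_table = P / (188.9 * TT)           # placeholder values, not real data

        rho = RegularGridInterpolator((p, T), rho_table)   # piecewise-linear lookup
        print(rho([[7.0e6, 150.0]]))           # density at one (p, T) query point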

  2. Estimation method of point spread function based on Kalman filter for accurately evaluating real optical properties of photonic crystal fibers.

    PubMed

    Shen, Yan; Lou, Shuqin; Wang, Xin

    2014-03-20

    The evaluation accuracy of the real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air-hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of the point spread function (PSF), based on a Kalman filter, is presented to rebuild the micrograph image of the PCF cross section and thus evaluate the real optical properties of practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we show that the proposed method achieves more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduces the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters.
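
    A minimal sketch of the Kalman-filter idea, shown here for a single PSF parameter (a blur width) modeled as a random constant observed through noisy measurements; the method in the record operates on image data and is considerably more elaborate:

        import numpy as np

        rng = np.random.default_rng(0)
        sigma_true = 2.0                                  # "true" PSF width (hypothetical)
        z = sigma_true + 0.3 * rng.standard_normal(50)    # noisy width measurements

        x, P = 0.0, 10.0        # initial estimate and its variance
        Q, R = 1e-4, 0.3**2     # process and measurement noise variances

        for zk in z:
            P = P + Q              # predict (random-constant state model)
            K = P / (P + R)        # Kalman gain
            x = x + K * (zk - x)   # correct with the innovation
            P = (1 - K) * P
        print(f"estimated PSF width: {x:.3f} (variance {P:.4f})")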

  3. Toward Accurate Measurement of Participation: Rethinking the Conceptualization and Operationalization of Participatory Evaluation

    ERIC Educational Resources Information Center

    Daigneault, Pierre-Marc; Jacob, Steve

    2009-01-01

    While participatory evaluation (PE) constitutes an important trend in the field of evaluation, its ontology has not been systematically analyzed. As a result, the concept of PE is ambiguous and inadequately theorized. Furthermore, no existing instrument accurately measures stakeholder participation. First, this article attempts to overcome these…

  4. Post-identification feedback to eyewitnesses impairs evaluators' abilities to discriminate between accurate and mistaken testimony.

    PubMed

    Smalarz, Laura; Wells, Gary L

    2014-04-01

    Giving confirming feedback to mistaken eyewitnesses has robust distorting effects on their retrospective judgments (e.g., how certain they were, their view, etc.). Does feedback harm evaluators' abilities to discriminate between accurate and mistaken identification testimony? Participant-witnesses to a simulated crime made accurate or mistaken identifications from a lineup and then received confirming feedback or no feedback. Each then gave videotaped testimony about their identification, and a new sample of participant-evaluators judged the accuracy and credibility of the testimonies. Among witnesses who were not given feedback, evaluators were significantly more likely to believe the testimony of accurate eyewitnesses than they were to believe the testimony of mistaken eyewitnesses, indicating significant discrimination. Among witnesses who were given confirming feedback, however, evaluators believed accurate and mistaken witnesses at nearly identical rates, indicating no ability to discriminate. Moreover, there was no evidence of overbelief in the absence of feedback whereas there was significant overbelief in the confirming feedback conditions. Results demonstrate that a simple comment following a witness' identification decision ("Good job, you got the suspect") can undermine fact-finders' abilities to discern whether the witness made an accurate or a mistaken identification. PMID:24341835

  5. Accurate Histological Techniques to Evaluate Critical Temperature Thresholds for Prostate In Vivo

    NASA Astrophysics Data System (ADS)

    Bronskill, Michael; Chopra, Rajiv; Boyes, Aaron; Tang, Kee; Sugar, Linda

    2007-05-01

    Various histological techniques have been compared to evaluate the boundaries of thermal damage produced by ultrasound in vivo in a canine model. When all images are accurately co-registered, H&E stained micrographs provide the best assessment of acute cellular damage. Estimates of the boundaries of 100% and 0% cell killing correspond to maximum temperature thresholds of 54.6 ± 1.7°C and 51.5 ± 1.9°C, respectively.

  6. Outcomes Evaluation in "Faith"-Based Social Services: Are We Evaluating "Faith" Accurately?

    ERIC Educational Resources Information Center

    Ferguson, Kristin M.; Wu, Qiaobing; Spruijt-Metz, Donna; Dyrness, Grace

    2007-01-01

    In response to a recent call for research on the effectiveness of faith-based organizations, this article synthesizes how effectiveness has been defined and measured in evaluation research of faith-based programs. Although evidence indicates that religion can have a positive impact on individuals' well-being, no prior comprehensive review exists…

  7. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging, as well as by histology, within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with the OPT data, as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  8. Numerical models for the evaluation of geothermal systems

    SciTech Connect

    Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.

    1986-08-01

    We have carried out detailed simulations of various fields in the USA (Baca, New Mexico; Heber, California), Mexico (Cerro Prieto), Iceland (Krafla), and Kenya (Olkaria). These simulation studies have illustrated the usefulness of numerical models for the overall evaluation of geothermal systems. The methodology for modeling the behavior of geothermal systems, different approaches to geothermal reservoir modeling, and how they can be applied in comprehensive evaluation work are discussed.

  9. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at the attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.

  10. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 of the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus, gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields remains small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
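
    The reported constraints translate into concrete numbers via kinetic-theory estimates of the electron mean free path and mean collision time; the values for neutral density, cross section, and electron energy below are assumptions for illustration, not the paper's exact conditions:

        import numpy as np

        n_neutral = 2.5e23       # neutral number density [m^-3] (assumed)
        sigma = 1.0e-19          # electron-neutral cross section [m^2] (assumed)
        eV = 1.602e-19
        v_e = np.sqrt(2 * 10 * eV / 9.109e-31)   # speed of a ~10 eV electron [m/s]

        mfp = 1.0 / (n_neutral * sigma)          # mean free path
        tau = mfp / v_e                          # mean time between collisions

        dx_max = mfp / 25.0      # mesh-size constraint reported above
        dt_max = tau / 100.0     # timestep constraint reported above
        print(f"mfp = {mfp:.2e} m  ->  dx <= {dx_max:.2e} m")
        print(f"tau = {tau:.2e} s  ->  dt <= {dt_max:.2e} s")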

  11. Numerical and optical evaluation of particle image velocimetry images

    NASA Astrophysics Data System (ADS)

    Farrell, Patrick V.

    1991-12-01

    Applications of particle image velocimetry (PIV) techniques for the measurement of fluid velocities typically require two steps. The first is the photography step, in which two exposures of a particle field, displaced between the exposures, are taken. The second is the evaluation of the double-exposure particle pattern and the production of the corresponding particle velocities. Each of these steps involves optimization that is usually specific to the experiment being conducted, and there is significant interaction between photographic parameters and evaluation characteristics. This paper focuses on the latter step, the evaluation of the double-exposure photograph. Among the various evaluation techniques suggested for the analysis of PIV images is the evaluation of the scattered interference pattern (Young's fringes) by numerical Fourier transform. An alternative to the numerical calculation of the Fourier transform of the Young's fringes has been suggested: using a modified liquid crystal television as an optical correlator, the transform can be performed optically, thus speeding up the interrogation process. Both transform techniques are affected by the quality of the input function, specifically the Young's fringes. This paper compares the performance of optical and numerical Fourier transform analysis of Young's fringes using speckle images. The repeatability and an estimate of the accuracy of the particle displacement are shown for each method.
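
    The numerical route can be sketched with a synthetic double-exposure image: the squared magnitude of its Fourier transform is the Young's-fringe pattern, and transforming again yields an autocorrelation whose side peaks sit at plus/minus the particle displacement (synthetic data, illustrative only):

        import numpy as np

        rng = np.random.default_rng(1)
        n, dx, dy = 256, 6, 3                      # image size, true displacement [px]
        img = np.zeros((n, n))
        rows = rng.integers(0, n - dy, 400)
        cols = rng.integers(0, n - dx, 400)
        img[rows, cols] = 1.0                      # first exposure
        img[rows + dy, cols + dx] += 1.0           # second, displaced exposure

        fringes = np.abs(np.fft.fft2(img)) ** 2    # Young's-fringe power spectrum
        autocorr = np.abs(np.fft.ifft2(fringes))   # second transform: autocorrelation
        autocorr[0, 0] = 0.0                       # suppress the trivial zero-lag peak
        peak = np.unravel_index(np.argmax(autocorr), autocorr.shape)
        print("recovered displacement (row, col):", peak)   # (3, 6), up to sign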

  12. Evaluation of new reference genes in papaya for accurate transcript normalization under different experimental conditions.

    PubMed

    Zhu, Xiaoyang; Li, Xueping; Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen

    2012-01-01

    Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification of gene expression. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been carried out in other plants and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms: geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s), or combinations of reference genes, should be validated for the particular experimental conditions at hand. In general, the reference genes EIF (Eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (Glyceraldehyde-3-phosphate dehydrogenase), were unsuitable under many experimental conditions. In addition, two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions.
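
    For reference, the geNorm stability measure used above can be written in a few lines: for each candidate gene, M is the mean, over all other candidates, of the standard deviation across samples of the pairwise log2 expression ratios (random numbers stand in for the RT-qPCR data):

        import numpy as np

        rng = np.random.default_rng(0)
        expr = rng.lognormal(5.0, 0.4, size=(246, 21))   # samples x candidate genes

        def genorm_m(expr):
            """geNorm stability value M; lower means more stably expressed."""
            log2 = np.log2(expr)
            M = np.empty(log2.shape[1])
            for j in range(log2.shape[1]):
                # std-dev across samples of the log-ratios to every other gene
                ratios = log2[:, [j]] - np.delete(log2, j, axis=1)
                M[j] = ratios.std(axis=0, ddof=1).mean()
            return M

        print("most stable candidate index:", np.argmin(genorm_m(expr)))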

  13. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

    2007-01-01

    Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and the handling of higher-order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As with the potentials, the treatment of near-hypersingular integrals has proven more challenging than the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, the Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leave the remaining terms analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure handle both efficiently.
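
    The cancellation idea can be written down for the simplest singular kernel. In local polar coordinates centred under the near-singular observation point, the area element contributes a Jacobian factor rho that cancels the 1/R growth of the integrand as rho -> 0; in LaTeX notation (a textbook illustration, not the specific gradient-kernel transformation of this record):

        \int_{S} \frac{f(\mathbf{r}')}{R}\, dS'
          = \int_{0}^{2\pi} \int_{0}^{\rho_{\max}(\theta)}
            \frac{f(\rho,\theta)}{\sqrt{\rho^{2} + z_{0}^{2}}}\; \rho \, d\rho \, d\theta,
        \qquad R = \sqrt{\rho^{2} + z_{0}^{2}},

    where z_0 is the height of the observation point above the element plane. The gradient kernels behave like 1/R^2 and 1/R^3, so a single Jacobian factor no longer suffices, which is why the transformation must also leave the remaining higher-order terms analytic.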

  14. Numerical simulation of pharyngeal airflow applied to obstructive sleep apnea: effect of the nasal cavity in anatomically accurate airway models.

    PubMed

    Cisonni, Julien; Lucey, Anthony D; King, Andrew J C; Islam, Syed Mohammed Shamsul; Lewis, Richard; Goonewardene, Mithran S

    2015-11-01

    Repetitive brief episodes of soft-tissue collapse within the upper airway during sleep characterize obstructive sleep apnea (OSA), an extremely common and disabling disorder. Failure to maintain the patency of the upper airway is caused by the combination of sleep-related loss of compensatory dilator muscle activity and aerodynamic forces promoting closure. The prediction of soft-tissue movement in patient-specific airway 3D mechanical models is emerging as a useful contribution to clinical understanding and decision making. Such modeling requires reliable estimations of the pharyngeal wall pressure forces. While nasal obstruction has been recognized as a risk factor for OSA, the need to include the nasal cavity in upper-airway models for OSA studies requires consideration, as it is most often omitted because of its complex shape. A quantitative analysis of the flow conditions generated by the nasal cavity and the sinuses during inspiration upstream of the pharynx is presented. Results show that adequate velocity boundary conditions and simple artificial extensions of the flow domain can reproduce the essential effects of the nasal cavity on the pharyngeal flow field. Therefore, the overall complexity and computational cost of accurate flow predictions can be reduced.

  15. Numerical evaluation of gas core length in free surface vortices

    NASA Astrophysics Data System (ADS)

    Cristofano, L.; Nobili, M.; Caruso, G.

    2014-11-01

    The formation and evolution of free surface vortices is an important topic for many hydraulic intakes, since strong whirlpools introduce swirl flow at the intake and can entrain floating matter and gas. Gas entrainment phenomena are, in particular, an important safety issue for Sodium-cooled Fast Reactors, because the introduction of gas bubbles into the core causes dangerous reactivity fluctuations. In this paper, a numerical evaluation of the gas core length in free surface vortices is presented, following two different approaches. In the first, a prediction method developed by the Japanese researcher Sakai and his team is applied. This method is based on the Burgers vortex model and estimates the gas core length of a free surface vortex from two parameters calculated with single-phase CFD simulations: the circulation and the downward velocity gradient. The second approach consists of performing a two-phase CFD simulation of a free surface vortex, in order to numerically reproduce the deformation of the gas-liquid interface. A mapped convergent mesh was used to reduce numerical error, and a VOF (Volume of Fluid) method was selected to track the gas-liquid interface. Two different turbulence models were tested and analyzed. Experimental measurements of the gas core length of free surface vortices were carried out using optical methods, and the numerical results were compared with the experimental measurements. The computational domain and the boundary conditions of the CFD simulations were set consistently with the experimental test conditions.
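
    A rough sketch of how the first approach connects the Burgers vortex to a free-surface dent: given the tangential velocity v_t(r) of a Burgers-type vortex, the radial momentum balance for a shallow surface gives dh/dr = v_t^2/(g*r), which can be integrated inward from the far field. The circulation and core radius below are assumed values (in the actual method they come from single-phase CFD), and the downward-velocity-gradient part of Sakai's criterion is omitted:

        import numpy as np

        g = 9.81
        Gamma = 0.01         # circulation [m^2/s] (assumed)
        r_c = 2.0e-3         # viscous core radius [m] (assumed)

        r = np.linspace(1e-5, 0.05, 5000)
        v_t = Gamma / (2 * np.pi * r) * (1 - np.exp(-(r / r_c) ** 2))

        # Shallow-surface depression: dh/dr = v_t^2 / (g*r), h -> 0 far from the axis
        dh_dr = v_t**2 / (g * r)
        dr = r[1] - r[0]
        h = np.cumsum(dh_dr[::-1])[::-1] * dr    # integral from r out to the far field
        print(f"estimated surface dip at the axis: {1e3 * h[0]:.1f} mm")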

  16. Identification and Evaluation of Reference Genes for Accurate Transcription Normalization in Safflower under Different Experimental Conditions.

    PubMed

    Li, Dandan; Hu, Bo; Wang, Qing; Liu, Hongchang; Pan, Feng; Wu, Wei

    2015-01-01

    Safflower (Carthamus tinctorius L.) has received a significant amount of attention as a medicinal plant and oilseed crop. Gene expression studies provide a theoretical molecular biology foundation for improving new traits and developing new cultivars. Real-time quantitative PCR (RT-qPCR) has become a crucial approach for gene expression analysis. In addition, appropriate reference genes (RGs) are essential for accurate and rapid relative quantification of gene expression. In this study, fifteen candidate RGs involved in multiple metabolic pathways of plants were selected and validated under different experimental treatments, at different seed development stages, and in different cultivars and tissues for real-time PCR experiments. These genes were ABCS, 60SRPL10, RANBP1, UBCL, MFC, UBCE2, EIF5A, COA, EF1-β, EF1, GAPDH, ATPS, MBF1, GTPB and GST. The suitability evaluation was performed with the geNorm and NormFinder programs. Overall, EF1, UBCE2, EIF5A, ATPS and 60SRPL10 were the most stable genes, and MBF1 and MFC the most unstable, according to both programs across all experimental samples. To verify the RGs selected by the two programs, the expression of 7 CtFAD2 genes in safflower seeds at different developmental stages under cold stress was analyzed by RT-qPCR using different RGs for normalization. The expression patterns were similar when the most stable RGs selected by geNorm or NormFinder were used, whereas differences appeared when the most unstable reference genes were used. The most stable combination of genes selected in this study will help to achieve more accurate and reliable results in a wide variety of samples in safflower.

  17. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

    Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high-frequency stimulation of a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
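
    A compact sketch of the Normalized Gradient Fields idea mentioned above: the measure rewards locally parallel (or anti-parallel) image gradients while ignoring their magnitudes, which is what makes it usable across CT and MR intensities. The edge parameter and the test images are placeholders:

        import numpy as np

        def ngf_distance(A, B, eps=1e-2):
            """Normalized Gradient Fields distance between two 2-D images.

            Near zero where gradients are parallel or anti-parallel; eps damps
            the contribution of low-contrast, noise-dominated regions.
            """
            gAx, gAy = np.gradient(A.astype(float))
            gBx, gBy = np.gradient(B.astype(float))
            nA = np.sqrt(gAx**2 + gAy**2 + eps**2)
            nB = np.sqrt(gBx**2 + gBy**2 + eps**2)
            cos2 = ((gAx * gBx + gAy * gBy) / (nA * nB)) ** 2
            return float(np.mean(1.0 - cos2))

        img = np.random.default_rng(0).random((64, 64))
        print(ngf_distance(img, img), ngf_distance(img, np.rot90(img)))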

  20. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    PubMed

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing the quality of commercial organic fertilizers. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique gave accurate predictions of the total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggest that the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers. PMID:24586313
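
    A minimal sketch of the NIR-PLS pipeline with scikit-learn, using synthetic spectra in place of the measured data; the spectral size, component count, and target construction are arbitrary placeholders:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((104, 700))     # 104 samples x 700 NIR wavelengths (synthetic)
        y = 3.0 * X[:, 100] - 2.0 * X[:, 400] + 0.05 * rng.standard_normal(104)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        pls = PLSRegression(n_components=8)    # in practice chosen by cross-validation
        pls.fit(X_tr, y_tr)
        print("R^2 on held-out samples:", pls.score(X_te, y_te))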

  3. Study on Applicability of Numerical Simulation to Evaluation of Gas Entrainment From Free Surface

    SciTech Connect

    Kei Ito; Takaaki Sakai; Hiroyuki Ohshima

    2006-07-01

    The onset condition of gas entrainment (GE) due to free surface vortices has been studied in order to establish a fast breeder reactor (FBR) design with higher coolant velocity than conventional designs, because GE might destabilize reactor operation and should therefore be avoided. The onset condition of GE has been investigated experimentally and theoretically; however, the dependence of vortex-type GE on the local geometry of each experimental system and on the local velocity distribution has prevented researchers from formulating a universal onset condition. A real-scale test is considered an accurate method of evaluating the occurrence of vortex-type GE, but such tests are generally expensive and of limited use in the design study of large and complicated FBR systems, because frequent rearrangement of internal equipment to follow design changes is difficult in a real-scale test. Numerical simulation appears to be a promising alternative to the real-scale test. In this research, to evaluate the applicability of numerical simulation to design work, simulations were conducted of a basic experimental system for vortex-type GE. This basic experiment consisted of a rectangular flow channel containing the two pieces of equipment important for vortex-type GE: vortex generation and suction equipment. The generated vortex grew rapidly through interaction with the suction flow, and the grown vortex formed a free surface dent (gas core). When the tip of the gas core, or bubbles detached from it, reached the suction mouth, gas was entrained into the suction tube. The results of numerical simulation under the experimental conditions were compared with the experiment in terms of velocity distributions and free surface shape. The numerical simulation showed qualitatively good agreement with the experimental data. The numerical simulation results were similar to the experimental

  4. Accurate evaluation of the angular-dependent direct correlation function of water

    NASA Astrophysics Data System (ADS)

    Zhao, Shuangliang; Liu, Honglai; Ramirez, Rosa; Borgis, Daniel

    2013-07-01

    The direct correlation function (DCF) plays a pivotal role in addressing thermodynamic properties with non-mean-field statistical theories of the liquid state. This work provides an accurate yet efficient procedure for evaluating the angular-dependent DCF of bulk SPC/E water. The DCF, represented here in a basis of discrete angles, is computed in two steps: first, the molecular Ornstein-Zernike equation is solved with the total correlation function extracted from simulation as input; the resulting DCF is then refined, in a second step, at small wavelengths for all orientations in order to match the correct thermodynamic properties. The function is also discussed in terms of its rotational-invariant components. In particular, we show that the component c112(r), which accounts for dipolar symmetry, already reaches its long-range asymptotic behavior at the short distance of 4 Å. With the DCF in hand, the angular-dependent bridge function of bulk water is then computed and discussed in comparison with reference hard-sphere bridge functions. We conclude that, even though such hard-sphere bridge functions may be relevant for improving the calculation of Helmholtz free energies in integral equations or density functional theory, they are doomed to fail at the structural level.
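
    For reference, the molecular Ornstein-Zernike relation solved in the first step links the total correlation function h to the DCF c; in LaTeX notation, for a molecular fluid with positions r and orientations omega,

        h(\mathbf{r}_{12}, \omega_1, \omega_2) = c(\mathbf{r}_{12}, \omega_1, \omega_2)
          + \frac{\rho}{\Omega} \int c(\mathbf{r}_{13}, \omega_1, \omega_3)\,
            h(\mathbf{r}_{32}, \omega_3, \omega_2)\, d\mathbf{r}_3\, d\omega_3,

    where rho is the number density and Omega the angular normalization (8*pi^2 for a general rigid molecule). The procedure above inverts this relation for c with the simulated h as input.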

  5. Can a combination of ultrasonographic parameters accurately evaluate concussion and guide return-to-play decisions?

    PubMed

    Cartwright, Michael S; Dupuis, Janae E; Bargoil, Jessica M; Foster, Dana C

    2015-09-01

    Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is not currently an objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return-to-play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data in animal and human testing indicate that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the

  8. Thermal numerical simulator for laboratory evaluation of steamflood oil recovery

    SciTech Connect

    Sarathi, P.

    1991-04-01

    A thermal numerical simulator running on an IBM AT compatible personal computer is described. The simulator was designed to assist in the laboratory design and evaluation of steamflood oil recovery. An overview of the historical evolution of numerical thermal simulation, NIPER's approach to solving these problems on a desktop computer, the derivation of equations and a description of the approaches used to solve them, and verification of the simulator using published data sets and sensitivity analysis are presented. The developed model is a three-phase, two-dimensional multicomponent simulator capable of being run in one or two dimensions. Mass transfer among the phases and components is dictated by pressure- and temperature-dependent vapor-liquid equilibria. Gravity and capillary pressure phenomena are included. Energy is transferred by conduction, convection, vaporization and condensation. The model employs a block-centered grid system with a five-point discretization scheme. Both areal and vertical cross-sectional simulations are possible. A sequential solution technique is employed to solve the finite difference equations. The study clearly indicated the importance of heat loss, injected steam quality, and injection rate to the process. The dependence of overall recovery on oil volatility and viscosity is emphasized. The process is very sensitive to relative permeability values. Time-step sensitivity runs indicated that the current version is time-step sensitive and exhibits conditional stability. 75 refs., 19 figs., 19 tabs.

  9. Evaluating the Impact of Aerosols on Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Freitas, Saulo; Silva, Arlindo; Benedetti, Angela; Grell, Georg; Members, Wgne; Zarzur, Mauricio

    2015-04-01

    The Working Group on Numerical Experimentation (WMO, http://www.wmo.int/pages/about/sec/rescrosscut/resdept_wgne.html) has organized an exercise to evaluate the impact of aerosols on NWP. This exercise involves regional and global models currently used for weather forecasting by operational centers worldwide and aims at addressing the following questions: (a) How important are aerosols for predicting the physical system (NWP, seasonal, climate), as distinct from predicting the aerosols themselves? (b) How important is atmospheric model quality for air quality forecasting? (c) What are the current capabilities of NWP models to simulate aerosol impacts on weather prediction? Toward this goal we have selected three strong or persistent events of aerosol pollution worldwide that could be fairly represented in current NWP models and that allowed for an evaluation of the aerosol impact on weather prediction. The selected events include a strong dust storm that blew off the coast of Libya and over the Mediterranean, an extremely severe episode of air pollution in Beijing and surrounding areas, and an extreme case of biomass burning smoke in Brazil. The experimental design calls for simulations with and without explicitly accounting for aerosol feedbacks in the cloud and radiation parameterizations. In this presentation we summarize the results of this study, focusing on the evaluation of model performance in terms of its ability to faithfully simulate aerosol optical depth, and on the assessment of the aerosol impact on predictions of near-surface wind, temperature, humidity, rainfall and the surface energy budget.

  10. Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models

    NASA Astrophysics Data System (ADS)

    Ramli, Huda Mohd.; Esler, J. Gavin

    2016-07-01

    A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10^6) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
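
    To make the trade-offs concrete, the baseline most LPDM schemes start from is an Euler-Maruyama step of the Langevin (Ornstein-Uhlenbeck) velocity equation; a one-dimensional sketch with assumed turbulence parameters:

        import numpy as np

        rng = np.random.default_rng(0)
        T_L, sigma_w = 100.0, 0.5      # Lagrangian timescale [s], velocity std [m/s] (assumed)
        dt, n_steps, n_part = 1.0, 3600, 10_000

        w = sigma_w * rng.standard_normal(n_part)   # initial turbulent velocities
        z = np.zeros(n_part)                        # particle positions

        for _ in range(n_steps):
            # Euler-Maruyama step of dw = -(w/T_L) dt + sigma_w * sqrt(2/T_L) dW
            w += -w / T_L * dt + sigma_w * np.sqrt(2.0 * dt / T_L) * rng.standard_normal(n_part)
            z += w * dt

        print("ensemble plume spread after 1 h:", z.std())

    Shrinking dt improves fidelity at a linear cost in run time, which is exactly the accuracy-versus-cost trade-off that the evaluation methodology above quantifies.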

  11. Factors influencing undergraduates' self-evaluation of numerical competence

    NASA Astrophysics Data System (ADS)

    Tariq, Vicki N.; Durrani, Naureen

    2012-04-01

    This empirical study explores factors influencing undergraduates' self-evaluation of their numerical competence, using data from an online survey completed by 566 undergraduates from a diversity of academic disciplines, across all four faculties at a post-1992 UK university. Analysis of the data, which included correlation and multiple regression analyses, revealed that undergraduates exhibiting greater confidence in their mathematical and numeracy skills, as evidenced by their higher self-evaluation scores and their higher scores on the confidence sub-scale contributing to the measurement of attitude, possess more cohesive, rather than fragmented, conceptions of mathematics, and display more positive attitudes towards mathematics/numeracy. They also exhibit lower levels of mathematics anxiety. Students exhibiting greater confidence also tended to be those who were relatively young (i.e. 18-29 years), whose degree programmes provided them with opportunities to practise and further develop their numeracy skills, and who possessed higher pre-university mathematics qualifications. The multiple regression analysis revealed two positive predictors (overall attitude towards mathematics/numeracy and possession of a higher pre-university mathematics qualification) and five negative predictors (mathematics anxiety, lack of opportunity to practise/develop numeracy skills, being a more mature student, being enrolled in Health and Social Care compared with Science and Technology, and possessing no formal mathematics/numeracy qualification compared with a General Certificate of Secondary Education or equivalent qualification) accounted for approximately 64% of the variation in students' perceptions of their numerical competence. Although the results initially suggested that male students were significantly more confident than females, one compounding variable was almost certainly the students' highest pre-university mathematics or numeracy qualification, since a higher

  12. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and of the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  13. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and
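
    A sketch of the best-performing idea reported above, placing the threshold at the minimum of the second derivative of the cell's intensity profile, applied here to a synthetic blurred-edge profile (the logistic profile and its parameters are fabricated for illustration):

        import numpy as np

        r = np.linspace(0.0, 10.0, 500)                     # radial position [um]
        profile = 200.0 / (1.0 + np.exp((r - 5.0) / 0.4))   # bright cell, dark background

        d2 = np.gradient(np.gradient(profile, r), r)        # second derivative of profile
        idx = np.argmin(d2)                                 # its minimum marks the edge
        print(f"threshold intensity: {profile[idx]:.1f} at r = {r[idx]:.2f} um")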

  14. THE EVALUATION OF METHODS FOR CREATING DEFENSIBLE, REPEATABLE, OBJECTIVE AND ACCURATE TOLERANCE VALUES

    EPA Science Inventory

    In the field of bioassessment, tolerance has traditionally referred to the degree to which organisms can withstand environmental degradation. This concept has been around for many years and its use is widespread. In numerous cases, tolerance values (TVs) have been assigned to i...

  15. Numerical Weather Predictions Evaluation Using Spatial Verification Methods

    NASA Astrophysics Data System (ADS)

    Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    During the last years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To assess those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).

  16. Evaluation of kinetic uncertainty in numerical models of petroleum generation

    USGS Publications Warehouse

    Peters, K.E.; Walters, C.C.; Mankiewicz, P.J.

    2006-01-01

    Oil-prone marine petroleum source rocks contain type I or type II kerogen having Rock-Eval pyrolysis hydrogen indices greater than 600 or 300-600 mg hydrocarbon/g total organic carbon (HI, mg HC/g TOC), respectively. Samples from 29 marine source rocks worldwide that contain mainly type II kerogen (HI = 230-786 mg HC/g TOC) were subjected to open-system programmed pyrolysis to determine the activation energy distributions for petroleum generation. Assuming a burial heating rate of 1°C/m.y. for each measured activation energy distribution, the calculated average temperature for 50% fractional conversion of the kerogen in the samples to petroleum is approximately 136 ± 7°C, but the range spans about 30°C (~121-151°C). Fifty-two outcrop samples of thermally immature Jurassic Oxford Clay Formation were collected from five locations in the United Kingdom to determine the variations of kinetic response for one source rock unit. The samples contain mainly type I or type II kerogens (HI = 230-774 mg HC/g TOC). At a heating rate of 1°C/m.y., the calculated temperatures for 50% fractional conversion of the Oxford Clay kerogens to petroleum differ by as much as 23°C (127-150°C). The data indicate that kerogen type, as defined by hydrogen index, is not systematically linked to kinetic response, and that default kinetics for the thermal decomposition of type I or type II kerogen can introduce unacceptable errors into numerical simulations. Furthermore, custom kinetics based on one or a few samples may be inadequate to account for variations in organofacies within a source rock. We propose three methods to evaluate the uncertainty contributed by kerogen kinetics to numerical simulations: (1) use the average kinetic distribution for multiple samples of source rock and the standard deviation for each activation energy in that distribution; (2) use source rock kinetics determined at several locations to describe different parts of the study area; and (3) use a weighted
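
    The 50% conversion temperatures above follow from integrating a distribution of parallel first-order reactions over the burial heating history. A minimal sketch of that calculation; the frequency factor and the discrete activation-energy distribution below are illustrative assumptions, not the paper's measured values.

        import numpy as np

        R = 8.314e-3              # kJ/(mol K)
        A = 1.0e14                # 1/s, assumed frequency factor
        beta = 1.0 / 3.156e13     # 1 degC/m.y. expressed in K/s

        # Assumed activation-energy distribution (kcal/mol -> kJ/mol).
        Ea = np.arange(48.0, 64.1, 2.0) * 4.184
        w = np.array([2, 5, 10, 20, 30, 20, 8, 4, 1], float)
        w /= w.sum()

        T = np.linspace(323.0, 523.0, 4001)   # K, along the heating ramp
        dT = T[1] - T[0]
        k = A * np.exp(-Ea[:, None] / (R * T[None, :]))
        I = np.cumsum(k, axis=1) * dT / beta  # integral of k dT / beta
        x = 1.0 - (w[:, None] * np.exp(-I)).sum(axis=0)  # bulk conversion

        T50 = np.interp(0.5, x, T)
        print("T at 50%% conversion: %.1f degC" % (T50 - 273.15))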

  17. Evaluating Cloud and Precipitation Processes in Numerical Models using Current and Potential Future Satellite Missions

    NASA Astrophysics Data System (ADS)

    van den Heever, S. C.; Tao, W. K.; Skofronick Jackson, G.; Tanelli, S.; L'Ecuyer, T. S.; Petersen, W. A.; Kummerow, C. D.

    2014-12-01

    Cloud, aerosol and precipitation processes play a fundamental role in the water and energy cycle. It is critical to accurately represent these microphysical processes in numerical models if we are to better predict cloud and precipitation properties on weather through climate timescales. Much has been learned about cloud properties and precipitation characteristics from NASA satellite missions such as TRMM, CloudSat, and more recently GPM. Furthermore, data from these missions have been successfully utilized in evaluating the microphysical schemes in cloud-resolving models (CRMs) and global models. However, there are still many uncertainties associated with these microphysics schemes. These uncertainties can be attributed, at least in part, to the fact that microphysical processes cannot be directly observed or measured, but instead have to be inferred from those cloud properties that can be measured. Evaluation of microphysical parameterizations is becoming increasingly important as enhanced computational capabilities are facilitating the use of more sophisticated schemes in CRMs, and as future global models are being run on what has traditionally been regarded as cloud-resolving scales using CRM microphysical schemes. In this talk we will demonstrate how TRMM, CloudSat and GPM data have been used to evaluate different aspects of current CRM microphysical schemes, providing examples of where these approaches have been successful. We will also highlight CRM microphysical processes that have not been well evaluated and suggest approaches for addressing such issues. Finally, we will introduce a potential NASA satellite mission, the Cloud and Precipitation Processes Mission (CAPPM), which would facilitate the development and evaluation of different microphysical-dynamical feedbacks in numerical models.

  19. A novel stress-accurate FE technology for highly non-linear analysis with incompressibility constraint. Application to the numerical simulation of the FSW process

    NASA Astrophysics Data System (ADS)

    Chiumenti, M.; Cervera, M.; Agelet de Saracibar, C.; Dialami, N.

    2013-05-01

    In this work a novel finite element technology based on a three-field mixed formulation is presented. The Variational Multi Scale (VMS) method is used to circumvent the LBB stability condition, allowing the use of linear piece-wise interpolations for the displacement, stress and pressure fields, respectively. The result is an enhanced stress field approximation which enables stress-accurate results in nonlinear computational mechanics. The use of an independent nodal variable for the pressure field allows for an ad hoc treatment of the incompressibility constraint. This is a mandatory requirement due to the isochoric nature of the plastic strain in metal forming processes. The highly non-linear stress field typically encountered in the Friction Stir Welding (FSW) process is used as an example to show the performance of this new FE technology. The numerical simulation of the FSW process is tackled by means of an Arbitrary-Lagrangian-Eulerian (ALE) formulation. The computational domain is split into three different zones: the workpiece (defined by a rigid visco-plastic behaviour in the Eulerian framework), the pin (within the Lagrangian framework) and finally the stir-zone (ALE formulation). A fully coupled thermo-mechanical analysis is introduced showing the heat fluxes generated by the plastic dissipation in the stir-zone (Sheppard rigid-viscoplastic constitutive model) as well as the frictional dissipation at the contact interface (Norton frictional contact model). Finally, tracers have been implemented to show the material flow around the pin, allowing a better understanding of the welding mechanism. Numerical results are compared with experimental evidence.

  20. The identification of complete domains within protein sequences using accurate E-values for semi-global alignment

    PubMed Central

    Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.

    2007-01-01

    The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
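
    The distinction between local and semi-global alignment is purely one of boundary conditions in the dynamic program: the full domain must be aligned, while the protein may be entered and left for free. A minimal score-only sketch with a toy scoring scheme (GLOBAL itself is HMM-based, with calibrated E-values):

        def semiglobal_score(domain, protein, match=2, mismatch=-1, gap=-2):
            """Best alignment of the COMPLETE domain against any
            subsequence of the protein: gaps before and after the aligned
            stretch of the protein are free, domain positions are not."""
            m, n = len(domain), len(protein)
            prev = [0] * (n + 1)            # free start anywhere in protein
            for i in range(1, m + 1):
                cur = [prev[0] + gap]       # domain residues cannot be skipped
                for j in range(1, n + 1):
                    s = match if domain[i - 1] == protein[j - 1] else mismatch
                    cur.append(max(prev[j - 1] + s,    # (mis)match
                                   prev[j] + gap,      # gap in protein
                                   cur[j - 1] + gap))  # gap in domain
                prev = cur
            return max(prev)                # free end anywhere in protein

        print(semiglobal_score("HEAGAWGHE", "PAWHEAE"))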

  1. EEMD based pitch evaluation method for accurate grating measurement by AFM

    NASA Astrophysics Data System (ADS)

    Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde

    2016-09-01

    The pitch measurement and AFM calibration precision are significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method to relieve the accuracy deterioration caused by high- and low-frequency components of the scanning profile during pitch evaluation. The simulation analysis shows that the application of EEMD can improve the pitch accuracy of the FFT-FT algorithm. The pitch error was small when the iteration number of the FFT-FT algorithm was 8. The AFM measurement of the 500 nm-pitch one-dimensional grating shows that the EEMD based pitch evaluation method could improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity center algorithm when particles and impression marks were distributed on the sample surface. The measurement indicates that the nonlinearity was stable, and that the nonlinearity of the x axis and of forward scanning was much smaller than that of their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs was demonstrated and a pitch uncertainty in the sub-nanometer range was achieved. The pitch uncertainty was reduced by about 10% by EEMD.
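
    The FFT stage of such a pitch evaluation is easy to illustrate. In the sketch below, a simple linear detrend stands in for the low-frequency removal that EEMD performs in the paper (EEMD itself is not reproduced), and all sample values are illustrative.

        import numpy as np

        def fft_pitch(profile, dx):
            """Estimate a grating pitch from a uniformly sampled line
            profile: detrend, then take the dominant non-DC FFT peak."""
            n = len(profile)
            t = np.arange(n)
            resid = profile - np.polyval(np.polyfit(t, profile, 1), t)
            spec = np.abs(np.fft.rfft(resid))
            freqs = np.fft.rfftfreq(n, d=dx)
            f0 = freqs[1 + np.argmax(spec[1:])]   # skip the DC bin
            return 1.0 / f0

        # Toy profile: 500 nm pitch plus slow drift and noise.
        dx = 2.0                                  # nm per sample
        x = np.arange(4096) * dx
        rng = np.random.default_rng(1)
        profile = (np.sin(2 * np.pi * x / 500.0) + 0.002 * x
                   + 0.2 * rng.standard_normal(x.size))
        print("pitch ~ %.1f nm" % fft_pitch(profile, dx))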

  2. How accurately do drivers evaluate their own driving behavior? An on-road observational study.

    PubMed

    Amado, Sonia; Arıkan, Elvan; Kaça, Gülin; Koyuncu, Mehmet; Turkan, B Nilay

    2014-02-01

    Self-assessment of driving skills has become a noteworthy research subject in traffic psychology, since by knowing one's strengths and weaknesses, drivers can take efficient compensatory action to moderate risk and to ensure safety in hazardous environments. The current study aims to investigate drivers' self-conception of their own driving skills and behavior in relation to expert evaluations of their actual driving, by using a naturalistic and systematic observation method during an actual on-road driving session, and to assess the different aspects of driving via comprehensive scales sensitive to different specific aspects of driving. Male participants aged 19-63 years (N=158) attended an on-road driving session lasting approximately 80 min (45 km). During the driving session, drivers' errors and violations were recorded by an expert observer. At the end of the driving session, observers completed the driver evaluation questionnaire, while drivers completed the driving self-evaluation questionnaire and the Driver Behavior Questionnaire (DBQ). Low to moderate correlations between driver and observer evaluations of driving skills and behavior, mainly on errors and violations involving speed and traffic lights, were found. Furthermore, the robust finding that drivers evaluate their driving performance as better than the expert does was replicated. Over-positive appraisal was higher among drivers with higher error/violation scores and among the ones that were evaluated by the expert as "unsafe". We suggest that the traffic environment might be regulated by increasing feedback indicators of errors and violations, which in turn might increase insight into driving performance. Improving self-awareness through training and feedback sessions might play a key role in reducing the probability of risk in driving activity.

  3. Fast and accurate simulations of diffusion-weighted MRI signals for the evaluation of acquisition sequences

    NASA Astrophysics Data System (ADS)

    Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime

    2016-03-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
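
    The principle of validating a Monte Carlo diffusion simulator against an analytic expression can be shown in a few lines for the simplest case, free diffusion under the narrow-pulse PGSE approximation, where E(q) = exp(-q^2 D Delta). The paper's hybrid scheme for restricted geometries is not reproduced here, and all parameter values below are assumptions.

        import numpy as np

        # Monte Carlo check of the DW-MRI signal against its analytic form
        # for FREE diffusion under the narrow-pulse PGSE approximation:
        # spins acquire phase q * displacement, and E(q) = exp(-q^2 D Delta).
        rng = np.random.default_rng(42)
        D = 2.0e-9        # m^2/s, free-water-like diffusivity (assumed)
        Delta = 30e-3     # s, diffusion time (assumed)
        n_spins = 200_000

        # 1D displacements over Delta are Gaussian with variance 2*D*Delta.
        steps = rng.normal(0.0, np.sqrt(2.0 * D * Delta), n_spins)

        for q in (5e4, 1e5, 2e5):   # rad/m
            E_mc = np.abs(np.mean(np.exp(1j * q * steps)))
            E_an = np.exp(-q * q * D * Delta)
            print(f"q = {q:.0e} rad/m: MC {E_mc:.4f}, analytic {E_an:.4f}")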

  4. Variable impedance cardiography waveforms: how to evaluate the preejection period more accurately

    NASA Astrophysics Data System (ADS)

    Ermishkin, V. V.; Kolesnikov, V. A.; Lukoshkova, E. V.; Mokh, V. P.; Sonina, R. S.; Dupik, N. V.; Boitsov, S. A.

    2012-12-01

    The impedance method has been successfully applied for left ventricular function assessment during functional tests. The preejection period (PEP), the interval between the Q peak in the ECG and a specific mark on the impedance cardiogram (ICG) which corresponds to aortic valve opening, is an important indicator of the contractility state and its neurogenic control. Accurate identification of ejection onset by ICG is often problematic, especially in cardiologic patients, due to peculiar waveforms. An essential obstacle is the variability of the shape of the ICG waveform during exercise and subsequent recovery. A promising solution may be the introduction of an additional pulse sensor placed in a nearby region. We tested this idea in 28 healthy subjects and 6 cardiologic patients using a dual-channel impedance cardiograph for simultaneous recording from the aortic and neck regions, and an earlobe photoplethysmograph. Our findings suggest that the incidence of abnormal, complicated ICG waveforms increases with age. The combination of standard ICG with ear photoplethysmography and/or an additional impedance channel significantly improves the efficacy and accuracy of PEP estimation.

  5. Melt-rock reaction in the asthenospheric mantle: Perspectives from high-order accurate numerical simulations in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.

    2013-12-01

    The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except that we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and in the unstable wave regime, where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath mid-ocean ridges but is computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well-organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales

  6. Is internal target volume accurate for dose evaluation in lung cancer stereotactic body radiotherapy?

    PubMed Central

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Hu, Weigang

    2016-01-01

    Purpose 4DCT-delineated internal target volume (ITV) is applied to determine the tumor motion and used as the planning target in treatment planning in lung cancer stereotactic body radiotherapy (SBRT). This work studies the accuracy of using the ITV to predict the real target dose in lung cancer SBRT. Materials and methods For both phantom and patient cases, the ITV and gross tumor volumes (GTVs) were contoured on the maximum intensity projection (MIP) CT and ten CT phases, respectively. A SBRT plan was designed using the ITV as the planning target on the average projection (AVG) CT. This plan was copied to each CT phase and the dose distribution was recalculated. The GTV_4D dose was acquired by accumulating the GTV doses over all ten phases and regarded as the real target dose. To analyze the ITV dose error, the ITV dose was compared to the real target dose by the endpoints D99, D95 and D1 (doses received by 99%, 95% and 1% of the target volume), and the dose coverage endpoint V100 (relative volume receiving at least the prescription dose). Results The phantom study shows that the ITV underestimates the real target dose by 9.47%∼19.8% in D99, 4.43%∼15.99% in D95, and underestimates the dose coverage by 5% in V100. The patient cases show that the ITV underestimates the real target dose and dose coverage by 3.8%∼10.7% in D99, 4.7%∼7.2% in D95, and 3.96%∼6.59% in V100 in moving-target cases. Conclusions Caution should be taken, as the ITV is not accurate enough to predict the real target dose in lung cancer SBRT with large tumor motions. Restricting the target motion or reducing the target dose heterogeneity could reduce the ITV dose underestimation effect in lung SBRT. PMID:26968812

  7. Development and evaluation of polycrystalline cadmium telluride dosimeters for accurate quality assurance in radiation therapy

    NASA Astrophysics Data System (ADS)

    Oh, K.; Han, M.; Kim, K.; Heo, Y.; Moon, C.; Park, S.; Nam, S.

    2016-02-01

    For quality assurance in radiation therapy, several types of dosimeters are used, such as ionization chambers, radiographic films, thermoluminescent dosimeters (TLD), and semiconductor dosimeters. Among them, semiconductor dosimeters are particularly useful as in vivo dosimeters or in high-dose-gradient areas such as the penumbra region because they are more sensitive and smaller in size compared to typical dosimeters. In this study, we developed and evaluated cadmium telluride (CdTe) dosimeters, one of the most promising semiconductor dosimeters due to their high quantum efficiency and charge collection efficiency. Such CdTe dosimeters come in single-crystal and polycrystalline forms depending upon the fabrication process. Both types of CdTe dosimeters are commercially available, but only the polycrystalline form is suitable for radiation dosimeters, since it is less affected by the volumetric effect and energy dependence. To develop and evaluate polycrystalline CdTe dosimeters, polycrystalline CdTe films were prepared by thermal evaporation. After that, a CdTeO3 layer, a thin oxide layer, was deposited on top of the CdTe film by RF sputtering to improve charge carrier transport properties and to reduce leakage current. The CdTeO3 layer, which acts as a passivation layer, also helps the dosimeter reduce sensitivity changes with repeated use due to radiation damage. Finally, top and bottom electrodes, In/Ti and Pt, were used to form a Schottky contact. Subsequently, the electrical properties under high-energy photon beams from a linear accelerator (LINAC), such as response coincidence, dose linearity, dose rate dependence, reproducibility, and percentage depth dose, were measured to evaluate the polycrystalline CdTe dosimeters. In addition, we compared the experimental data of the dosimeter fabricated in this study with those of a silicon diode dosimeter and a thimble ionization chamber, which are widely used in routine dosimetry systems and dose measurements for radiation

  8. Comparison of numerical techniques for the evaluation of the Doppler broadening functions psi(x,theta) and chi(x,theta)

    NASA Technical Reports Server (NTRS)

    Canright, R. B., Jr.; Semler, T. T.

    1972-01-01

    Several approximations to the Doppler broadening functions psi(x, theta) and chi(x, theta) are compared with respect to accuracy and speed of evaluation. A technique, due to A. M. Turing (1943), is shown to be at least as accurate as direct numerical quadrature and somewhat faster than Gaussian quadrature. FORTRAN 4 listings are included.
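
    For reference, direct numerical quadrature of these functions takes only a few lines. The prefactor convention below is one common choice; normalisations vary between references, so treat it as an assumption.

        import numpy as np
        from scipy.integrate import quad

        # psi(x,t) = t/(2 sqrt(pi)) Int exp(-t^2 (x-y)^2 / 4) / (1+y^2) dy
        # chi(x,t) = t/(2 sqrt(pi)) Int 2y exp(-t^2 (x-y)^2 / 4) / (1+y^2) dy

        def _kern(y, x, t):
            return np.exp(-0.25 * t * t * (x - y) ** 2) / (1.0 + y * y)

        def psi(x, t):
            val, _ = quad(_kern, -60, 60, args=(x, t), points=[0.0, x], limit=200)
            return t / (2.0 * np.sqrt(np.pi)) * val

        def chi(x, t):
            val, _ = quad(lambda y: 2.0 * y * _kern(y, x, t), -60, 60,
                          points=[0.0, x], limit=200)
            return t / (2.0 * np.sqrt(np.pi)) * val

        # Sanity check: for large t the Gaussian acts like a delta function,
        # so psi -> 1/(1+x^2) and chi -> 2x/(1+x^2), the natural line shapes.
        for x in (0.0, 1.0, 3.0):
            print(x, round(psi(x, 50.0), 4), round(chi(x, 50.0), 4))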

  9. Laboratory and numerical evaluation of borehole methods for subsurface horizontal flow characterization.

    SciTech Connect

    Pedler, William H. (Radon Abatement Systems, Inc., Golden, CO); Jepsen, Richard Alan (Sandia National Laboratories, Carlsbad, NM)

    2003-08-01

    The requirement to accurately measure subsurface groundwater flow at contaminated sites, as part of a time- and cost-effective remediation program, has spawned a variety of flow evaluation technologies. Validating the accuracy and understanding the limitations of these technologies is critical for data quality and application confidence. Leading the way in the effort to validate and better understand these methodologies, the US Army Environmental Center has funded a multi-year program to compare and evaluate all viable horizontal flow measurement technologies. This multi-year program has included a field comparison phase, an application of selected methods as part of an integrated site characterization program phase, and, most recently, a laboratory and numerical simulator phase. As part of this most recent phase, numerical modeling predictions and laboratory measurements were made in a simulated fracture borehole set-up within a controlled flow simulator. The scanning colloidal borescope flowmeter (SCBFM) and advanced hydrophysical logging (NxHpL™) tool were used to measure velocities and flow rate in a simulated fractured borehole in the flow simulator. Particle tracking and mass flux measurements were observed and recorded under a range of flow conditions in the simulator. Numerical models were developed to aid in the design of the flow simulator and predict the flow conditions inside the borehole. Results demonstrated that the flow simulator allowed for predictable, easily controlled, and stable flow rates both inside and outside the well. The measurement tools agreed well with each other over a wide range of flow conditions. The model results demonstrate that the scanning colloidal borescope did not interfere with the flow in the borehole in any of the tests. The model is capable of predicting flow conditions and agreed well with the measurements and observations in the flow simulator and borehole. Both laboratory and model results showed a

  10. Accurate evaluation of viscoelasticity of radial artery wall during flow-mediated dilation in ultrasound measurement

    NASA Astrophysics Data System (ADS)

    Sakai, Yasumasa; Taki, Hirofumi; Kanai, Hiroshi

    2016-07-01

    In our previous study, the viscoelasticity of the radial artery wall was estimated to diagnose endothelial dysfunction using a high-frequency (22 MHz) ultrasound device. In the present study, we employed a commercial ultrasound device (7.5 MHz) and estimated the viscoelasticity using arterial pressure and diameter, both of which were measured at the same position. In a phantom experiment, the proposed method successfully estimated the elasticity and viscosity of the phantom with errors of 1.8 and 30.3%, respectively. In an in vivo measurement, the transient change in the viscoelasticity was measured for three healthy subjects during flow-mediated dilation (FMD). The proposed method revealed the softening of the arterial wall originating from the FMD reaction within 100 s after avascularization. These results indicate the high performance of the proposed method in evaluating vascular endothelial function just after avascularization, where the function is difficult to estimate by a conventional FMD measurement.
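
    The abstract does not spell out the estimator. One common approach treats the wall as a Kelvin-Voigt element and fits pressure to strain and strain rate by least squares; a minimal sketch with synthetic waveforms (all values assumed, not the paper's method):

        import numpy as np

        # Least-squares Kelvin-Voigt fit: p(t) ~ E*eps(t) + eta*deps/dt + p0,
        # with strain eps computed from the measured diameter waveform.
        fs = 1000.0                          # Hz, sampling rate (assumed)
        t = np.arange(0, 2.0, 1.0 / fs)

        # Synthetic "measurements": 1 Hz pulsation with known E and eta.
        E_true, eta_true = 80.0, 2.0         # arbitrary consistent units
        eps = 0.02 * (1 - np.cos(2 * np.pi * t)) / 2
        deps = np.gradient(eps, 1.0 / fs)
        p = (60.0 + E_true * eps + eta_true * deps
             + 0.05 * np.random.default_rng(3).standard_normal(t.size))

        # Solve [eps, deps, 1] @ [E, eta, p0] = p in the least-squares sense.
        Amat = np.column_stack([eps, deps, np.ones_like(t)])
        (E_hat, eta_hat, p0_hat), *_ = np.linalg.lstsq(Amat, p, rcond=None)
        print(f"E ~ {E_hat:.1f} (true {E_true}), eta ~ {eta_hat:.2f} (true {eta_true})")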

  11. Evaluation of a low-cost and accurate ocean temperature logger on subsurface mooring systems

    SciTech Connect

    Tian, Chuan; Deng, Zhiqun; Lu, Jun; Xu, Xiaoyang; Zhao, Wei; Xu, Ming

    2014-06-23

    Monitoring seawater temperature is important to understanding evolving ocean processes. To monitor internal waves or ocean mixing, a large number of temperature loggers are typically mounted on subsurface mooring systems to obtain high-resolution temperature data at different water depths. In this study, we redesigned and evaluated a compact, low-cost, self-contained, high-resolution and high-accuracy ocean temperature logger, TC-1121. The newly designed TC-1121 loggers are smaller, more robust, and their sampling intervals can be automatically changed by indicated events. They have been widely used in many mooring systems to study internal wave and ocean mixing. The logger’s fundamental design, noise analysis, calibration, drift test, and a long-term sea trial are discussed in this paper.

  12. Evaluation of the EURO-CORDEX RCMs to accurately simulate the Etesian wind system

    NASA Astrophysics Data System (ADS)

    Dafka, Stella; Xoplaki, Elena; Toreti, Andrea; Zanis, Prodromos; Tyrlis, Evangelos; Luterbacher, Jürg

    2016-04-01

    The Etesians are among the most persistent regional-scale wind systems in the lower troposphere, blowing over the Aegean Sea during the extended summer season. An evaluation of the high-spatial-resolution EURO-CORDEX Regional Climate Models (RCMs) is presented here. The study documents the performance of the individual models in representing the basic spatiotemporal pattern of the Etesian wind system for the period 1989-2004. The analysis is mainly focused on evaluating the abilities of the RCMs in simulating the surface wind over the Aegean Sea and the associated large-scale atmospheric circulation. Mean sea level pressure (SLP), wind speed and geopotential height at 500 hPa are used. The simulated results are validated against reanalysis datasets (20CR-v2c and ERA20-C) and daily observational measurements (12:00 UTC) from mainland Greece and the Aegean Sea. The analysis highlights the general ability of the RCMs to capture the basic features of the Etesians, but also indicates considerable deficiencies for selected metrics, regions and subperiods. Some of these deficiencies include the significant underestimation (overestimation) of the mean SLP in the northeastern part of the analysis domain in all subperiods (for May and June) when compared to 20CR-v2c (ERA20-C), the significant overestimation of the anomalous ridge over the Balkans and central Europe, and the underestimation of the wind speed over the Aegean Sea. Future work will include an assessment of the Etesians for the coming decades using EURO-CORDEX projections under different RCP scenarios, and an estimate of the future potential for wind energy production.

  13. Semi-numerical evaluation of one-loop corrections

    SciTech Connect

    Ellis, R.K.; Giele, W.T.; Zanderighi, G. (Fermilab)

    2005-08-01

    We present a semi-numerical algorithm to calculate one-loop virtual corrections to scattering amplitudes. The divergences of the loop amplitudes are regulated using dimensional regularization. We treat in detail the case of amplitudes with up to five external legs and massless internal lines, although the method is more generally applicable. Tensor integrals are reduced to generalized scalar integrals, which in turn are reduced to a set of known basis integrals using recursion relations. The reduction algorithm is modified near exceptional configurations to ensure numerical stability. To test the procedure we apply these techniques to one-loop corrections to the Higgs to four quark process for which analytic results have recently become available.

  14. How Accurately Can Older Adults Evaluate the Quality of Their Text Recall? The Effect of Providing Standards on Judgment Accuracy.

    PubMed

    Baker, Julie; Dunlosky, John; Hertzog, Christopher

    2009-01-01

    Adults have difficulty accurately judging how well they have learned text materials; unfortunately, such low levels of accuracy may obscure age-related deficits. Higher levels of accuracy have been obtained when younger adults make postdictions about which test questions they answered correctly. Accordingly, we focus on the accuracy of postdictive judgments to evaluate whether age deficits would emerge with higher levels of accuracy and whether people's postdictive accuracy would benefit from providing an appropriate standard of evaluation. Participants read texts with definitions embedded in them, attempted to recall each definition, and then made a postdictive judgment about the quality of their recall. When making these judgments, participants either received no standard or were presented the correct definition as a standard for evaluation. Age-related equivalence was found in the relative accuracy of these term-specific judgments, and older adults' absolute accuracy benefited from providing standards to the same degree as did younger adults'.

  15. Pitfalls and guidelines for the numerical evaluation of moderate-order system frequency response

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1981-01-01

    The design and evaluation of a feedback control system via frequency response methods relies heavily upon numerical methods. In application, one can usually develop low-order simulation models which for the most part are devoid of numerical problems. However, when complex feedback interactions, for example between instrument control systems and their flexible mounting structure, must be evaluated, simulation models become moderate to large order and numerical problems become common. A large body of relevant numerical error analysis literature is summarized in language understandable to nonspecialists. The intent is to provide engineers using simulation models with an engineering feel for potential numerical problems without getting entangled in the complexities of the associated mathematical theory. Guidelines are also provided, suggesting alternate state-of-the-art methods which have good numerical evaluation characteristics.
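
    One example of such a numerically well-behaved alternative: evaluating H(jw) directly from the state-space matrices by solving a linear system at each frequency, instead of expanding transfer-function polynomials whose coefficients become ill-conditioned at moderate order. A small sketch; the toy system is illustrative, not from the report.

        import numpy as np

        def freq_response(A, B, C, D, omegas):
            """H(jw) = C (jwI - A)^-1 B + D, via one linear solve per
            frequency rather than polynomial coefficient expansion."""
            n = A.shape[0]
            I = np.eye(n)
            return np.array([C @ np.linalg.solve(1j * w * I - A, B) + D
                             for w in omegas])

        # Toy moderate-order system: a chain of lightly damped modes.
        rng = np.random.default_rng(7)
        n = 40
        A = (np.diag(-0.01 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1))
        B = rng.standard_normal((n, 1))
        C = rng.standard_normal((1, n))
        D = np.zeros((1, 1))
        H = freq_response(A, B, C, D, np.logspace(-2, 1, 5))
        print(np.abs(H).ravel())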

  16. Lift capability prediction for helicopter rotor blade-numerical evaluation

    NASA Astrophysics Data System (ADS)

    Rotaru, Constantin; Cîrciu, Ionicǎ; Luculescu, Doru

    2016-06-01

    The main objective of this paper is to describe the key physical features for modelling the unsteady aerodynamic effects found on a helicopter rotor blade operating under nominally attached flow conditions away from stall. The unsteady effects were considered as phase differences between the forcing function and the aerodynamic response, being functions of the reduced frequency, the Mach number and the mode of forcing. For a helicopter rotor, the reduced frequency at any blade element cannot be calculated exactly, but a first-order approximation of the reduced frequency gives useful information about the degree of unsteadiness. The sources of unsteady effects were decomposed into perturbations to the local angle of attack and velocity field. The numerical calculations and graphics were produced in the FLUENT and MAPLE software environments. This mathematical model is applicable to the aerodynamic design of wind turbine rotor blades, hybrid energy system optimization and aeroelastic analysis.
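
    The first-order approximation referred to here is the classical reduced frequency k = omega*c/(2V), with V taken as the local section speed Omega*r. A tiny sketch with illustrative numbers (not from the paper); k of order 0.05 or more is a common rule of thumb for significant unsteady effects.

        import numpy as np

        Omega = 27.0                  # rad/s, rotor speed (assumed)
        c = 0.5                       # m, blade chord (assumed)
        omega = Omega                 # 1/rev forcing frequency
        for r in (2.0, 4.0, 6.0, 8.0):          # m, radial stations
            k = omega * c / (2.0 * Omega * r)   # unsteadiness drops outboard
            print(f"r = {r:.0f} m -> k = {k:.3f}")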

  17. Numerical evaluation of effective unsaturated hydraulic properties for fractured rocks

    SciTech Connect

    Lu, Zhiming; Kwicklis, Edward M

    2009-01-01

    To represent a heterogeneous unsaturated fractured rock by its homogeneous equivalent, Monte Carlo simulations are used to obtain upscaled (effective) flow properties. In this study, we present a numerical procedure for upscaling the van Genuchten parameters of unsaturated fractured rocks by conducting Monte Carlo simulations of unsaturated flow in a domain under a gravity-dominated regime. The simulation domain can be chosen at the scale of the block size in field-scale modeling. The effective conductivity is computed from the steady-state flux at the lower boundary and plotted as a function of the pressure head or saturation averaged over the domain. The scatter plot is then fitted using the van Genuchten model, and the three parameters corresponding to this model, i.e., the saturated conductivity K_s, the air-entry parameter α, and the pore-size distribution parameter n, are taken as the effective K_s, effective α, and effective n, respectively.
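
    The fitting step can be sketched as follows, using the van Genuchten-Mualem form of K(h). The synthetic "data" stand in for the Monte Carlo scatter, and all numbers are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def vg_mualem(h, Ks, alpha, n):
            """van Genuchten-Mualem conductivity as a function of suction h."""
            m = 1.0 - 1.0 / n
            Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)  # effective saturation
            return Ks * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

        # Synthetic scatter: known parameters plus multiplicative noise.
        h = np.linspace(5.0, 300.0, 40)                    # cm suction
        K_obs = vg_mualem(h, 1e-3, 0.02, 1.8)
        K_obs *= np.exp(0.1 * np.random.default_rng(5).standard_normal(h.size))

        popt, _ = curve_fit(vg_mualem, h, K_obs, p0=(1e-3, 0.05, 1.5),
                            bounds=([1e-6, 1e-4, 1.01], [1.0, 1.0, 5.0]))
        Ks, alpha, n = popt
        print(f"effective Ks = {Ks:.2e}, alpha = {alpha:.3f}, n = {n:.2f}")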

  18. High Specificity in Circulating Tumor Cell Identification Is Required for Accurate Evaluation of Programmed Death-Ligand 1

    PubMed Central

    Schultz, Zachery D.; Warrick, Jay W.; Guckenberger, David J.; Pezzi, Hannah M.; Sperger, Jamie M.; Heninger, Erika; Saeed, Anwaar; Leal, Ticiana; Mattox, Kara; Traynor, Anne M.; Campbell, Toby C.; Berry, Scott M.; Beebe, David J.; Lang, Joshua M.

    2016-01-01

    Background Expression of programmed-death ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may be a less invasive method to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs from the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM) positive and EpCAM negative CTCs (as in epithelial-mesenchymal transition (EMT)). Methods To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or Vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Results Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies including cytokeratin. The amount of CD11b+ cells misidentified as CTCs varied among patients, accounting for 33–100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples. Conclusions Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria

  19. Evaluation and purchase of confocal microscopes: Numerous factors to consider

    EPA Science Inventory

    The purchase of a confocal microscope can be a complex and difficult decision for an individual scientist, group or evaluation committee. This is true even for scientists that have used confocal technology for many years. The task of reaching the optimal decision becomes almost i...

  20. Evaluation of the sample needed to accurately estimate outcome-based measurements of dairy welfare on farm.

    PubMed

    Endres, M I; Lobeck-Luchterhand, K M; Espejo, L A; Tucker, C B

    2014-01-01

    Dairy welfare assessment programs are becoming more common on US farms. Outcome-based measurements, such as locomotion, hock lesion, hygiene, and body condition scores (BCS), are included in these assessments. The objective of the current study was to investigate the proportion of cows in the pen or subsamples of pens on a farm needed to provide an accurate estimate of the previously mentioned measurements. In experiment 1, we evaluated cows in 52 high pens (50 farms) for lameness using a 1- to 5-scale locomotion scoring system (1 = normal and 5 = severely lame; 24.4 and 6% of animals were scored ≥ 3 or ≥ 4, respectively). Cows were also given a BCS using a 1- to 5-scale, where 1 = emaciated and 5 = obese; cows were rarely thin (BCS ≤ 2; 0.10% of cows) or fat (BCS ≥ 4; 0.11% of cows). Hygiene scores were assessed on a 1- to 5-scale with 1 = clean and 5 = severely dirty; 54.9% of cows had a hygiene score ≥ 3. Hock injuries were classified as 1 = no lesion, 2 = mild lesion, and 3 = severe lesion; 10.6% of cows had a score of 3. Subsets of data were created with 10 replicates of random sampling that represented 100, 90, 80, 70, 60, 50, 40, 30, 20, 15, 10, 5, and 3% of the cows measured/pen. In experiment 2, we scored the same outcome measures on all cows in lactating pens from 12 farms and evaluated using pen subsamples: high; high and fresh; high, fresh, and hospital; and high, low, and hospital. For both experiments, the association between the estimates derived from all subsamples and entire pen (experiment 1) or herd (experiment 2) prevalence was evaluated using linear regression. To be considered a good estimate, 3 criteria must be met: R²>0.9, slope = 1, and intercept = 0. In experiment 1, on average, recording 15% of the pen represented the percentage of clinically lame cows (score ≥ 3), whereas 30% needed to be measured to estimate severe lameness (score ≥ 4). Only 15% of the pen was needed to estimate the percentage of the herd with a hygiene
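
    The evaluation logic (regress subsample estimates on whole-pen prevalence and accept when R²>0.9 with unit slope and zero intercept) is easy to emulate. A minimal simulation sketch with assumed pen sizes and prevalences, not the study's data:

        import numpy as np

        rng = np.random.default_rng(11)
        pens = rng.uniform(0.05, 0.45, 52)        # assumed true prevalence/pen
        pen_size = 150

        for frac in (0.15, 0.30, 1.00):
            est, true = [], []
            for p in pens:
                cows = rng.random(pen_size) < p   # which cows are affected
                for _ in range(10):               # 10 random subsamples
                    pick = rng.choice(pen_size, int(frac * pen_size),
                                      replace=False)
                    est.append(cows[pick].mean())
                    true.append(cows.mean())
            slope, intercept = np.polyfit(true, est, 1)
            r2 = np.corrcoef(true, est)[0, 1] ** 2
            print(f"{frac:4.0%}: R^2={r2:.3f} slope={slope:.2f} "
                  f"intercept={intercept:.3f}")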

  1. Numerical evaluation of lateral diffusion inside diffusive gradients in thin films samplers.

    PubMed

    Santner, Jakob; Kreuzeder, Andreas; Schnepf, Andrea; Wenzel, Walter W

    2015-05-19

    Using numerical simulation of diffusion inside diffusive gradients in thin films (DGT) samplers, we show that the effect of lateral diffusion inside the sampler on the solute flux into the sampler is a nonlinear function of the diffusion layer thickness and the physical sampling window size. In contrast, earlier work concluded that this effect was constant irrespective of parameters of the sampler geometry. The flux increase caused by lateral diffusion inside the sampler was determined to be ∼8.8% for standard samplers, which is considerably lower than the previous estimate of ∼20%. Lateral diffusion is also propagated to the diffusive boundary layer (DBL), where it leads to a slightly stronger decrease in the mass uptake than suggested by the common 1D diffusion model that is applied for evaluating DGT results. We introduce a simple correction procedure for lateral diffusion and demonstrate how the effect of lateral diffusion on diffusion in the DBL can be accounted for. These corrections often result in better estimates of the DBL thickness (δ) and the DGT-measured concentration than earlier approaches and will contribute to more accurate concentration measurements in solute monitoring in waters.
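
    The geometric origin of the effect can be reproduced with a very small finite-difference model: a 2D gel cross-section whose exposed window is narrower than the gel itself, so solute also enters the window region laterally. Dimensions, grid, and boundary conditions below are illustrative, not the paper's model.

        import numpy as np

        # Steady-state diffusion in a DGT gel cross-section, solved by
        # Jacobi iteration of Laplace's equation on a uniform square grid.
        nx, nz = 121, 41
        W, L = 3.0, 1.0                     # gel width and thickness (mm)
        dx, dz = W / (nx - 1), L / (nz - 1) # equal spacings by construction
        win = np.abs(np.linspace(0, W, nx) - W / 2) <= 1.0  # 2 mm window

        c = np.zeros((nz, nx))
        for _ in range(20000):
            c[1:-1, 1:-1] = 0.25 * (c[2:, 1:-1] + c[:-2, 1:-1]
                                    + c[1:-1, 2:] + c[1:-1, :-2])
            c[0, win] = 1.0                 # bulk solution over the window
            c[0, ~win] = c[1, ~win]         # no-flux under the cap
            c[:, 0] = c[:, 1]               # no-flux side walls
            c[:, -1] = c[:, -2]
            c[-1, :] = 0.0                  # resin layer: perfect sink

        flux_2d = np.sum(c[-2, :] / dz) * dx  # per unit D and concentration
        flux_1d = 2.0 / L                     # 1D model: window width / L
        print("lateral-diffusion flux increase: %.1f%%"
              % (100 * (flux_2d / flux_1d - 1)))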

  3. 3-D numerical evaluation of density effects on tracer tests.

    PubMed

    Beinhorn, M; Dietrich, P; Kolditz, O

    2005-12-01

    In this paper we present numerical simulations carried out to assess the importance of density-dependent flow on tracer plume development. The scenario considered in the study is characterized by a short-term tracer injection phase into a fully penetrating well and a natural hydraulic gradient. The scenario is thought to be typical for tracer tests conducted in the field. Using a reference case as a starting point, different model parameters were changed in order to determine their importance to density effects. The study is based on a three-dimensional model domain. Results were interpreted using concentration contours and a first moment analysis. Tracer injections of 0.036 kg per meter of saturated aquifer thickness do not cause significant density effects assuming hydraulic gradients of at least 0.1%. Higher tracer input masses, as used for geoelectrical investigations, may lead to buoyancy-induced flow in the early phase of a tracer test which in turn impacts further plume development. This also holds true for shallow aquifers. Results of simulations with different tracer injection rates and durations imply that the tracer input scenario has a negligible effect on density flow. Employing model cases with different realizations of a log conductivity random field, it could be shown that small variations of hydraulic conductivity in the vicinity of the tracer injection well have a major control on the local tracer distribution but do not mask effects of buoyancy-induced flow. PMID:16183165

  4. Evaluation of a Second-Order Accurate Navier-Stokes Code for Detached Eddy Simulation Past a Circular Cylinder

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Singer, Bart A.

    2003-01-01

    We evaluate the applicability of a production computational fluid dynamics code for conducting detached eddy simulation for unsteady flows. A second-order accurate Navier-Stokes code developed at NASA Langley Research Center, known as TLNS3D, is used for these simulations. We focus our attention on high Reynolds number flow (Re = 5 × 10⁴ - 1.4 × 10⁵) past a circular cylinder to simulate flows with large-scale separations. We consider two types of flow situations: one in which the flow at the separation point is laminar, and the other in which the flow is already turbulent when it detaches from the surface of the cylinder. Solutions are presented for two- and three-dimensional calculations using both the unsteady Reynolds-averaged Navier-Stokes paradigm and the detached eddy simulation treatment. All calculations use the standard Spalart-Allmaras turbulence model as the base model.

  5. Non-destructive evaluation of the cladding thickness in LEU fuel plates by accurate ultrasonic scanning technique

    SciTech Connect

    Borring, J.; Gundtoft, H.E.; Borum, K.K.; Toft, P.

    1997-08-01

    In an effort to improve their ultrasonic scanning technique for accurate determination of the cladding thickness in LEU fuel plates, new equipment and modifications to the existing hardware and software have been tested and evaluated. The authors are now able to measure an aluminium thickness down to 0.25 mm instead of the previous 0.35 mm. Furthermore, they have shown how the measuring sensitivity can be improved from 0.03 mm to 0.01 mm. It has now become possible to check their standard fuel plates for DR3 against the minimum cladding thickness requirements non-destructively. Such measurements open the possibility for the acceptance of a thinner nominal cladding than normally used today.

  6. Numerical evaluation of one-loop diagrams near exceptional momentum configurations

    SciTech Connect

    Walter T Giele; Giulia Zanderighi; E.W.N. Glover

    2004-07-06

    One problem which plagues the numerical evaluation of one-loop Feynman diagrams using recursive integration-by-parts relations is numerical instability near exceptional momentum configurations. In this contribution we discuss a generic solution to this problem. As an example we consider the case of forward light-by-light scattering.

  7. Numerical evaluation and experimental validation of vascular access stenosis estimation.

    PubMed

    Chen, Weiling; Kan, Chung Dann; Kao, Rui-Hung

    2015-01-01

    Vascular access dysfunction commonly occurs in hemodialysis patients. Regularly monitoring and evaluating the vascular access condition is an important issue for these patients. The objective of this study was to identify acoustic parameters and hemodynamics related to changes in the stenosis of vascular access. An in-vitro experimental circulation system offered pulsatile and physiological conditions to simulate the arteriovenous access in hemodialysis patients. We created environments with various degrees of stenosis (DOS) inside the arteriovenous access to simulate the stenotic conditions in patients, and we used computational fluid dynamics (CFD) to simulate the pressure distribution, primary axial velocity distribution, and secondary flow distribution for the same DOS values and boundary conditions. There are two findings: the first is the recording of the bruit caused by the fluctuation of the fluid at different degrees of stenosis; the second is the correlation between the bruit and the hemodynamic parameters. Experimental results show that the time constants increase linearly, with a positive correlation, as the DOS increases. Finally, in comparison with CFD computerized analysis and acoustic methods, the proposed parameter provides a feasibility index for evaluating the risk of AVG dysfunction in on-line/real-time analysis.

  8. Numerical evaluation of single central jet for turbine disk cooling

    NASA Astrophysics Data System (ADS)

    Subbaraman, M. R.; Hadid, A. H.; McConnaughey, P. K.

    The cooling arrangement of the Space Shuttle Main Engine High Pressure Oxidizer Turbopump (HPOTP) incorporates two jet rings, each of which produces 19 high-velocity coolant jets. At some operating conditions, the frequency of excitation associated with the 19 jets coincides with the natural frequency of the turbine blades, contributing to fatigue cracking of blade shanks. In this paper, an alternate turbine disk cooling arrangement, applicable to disk faces of zero hub radius, is evaluated, which consists of a single coolant jet impinging at the center of the turbine disk. Results of the CFD analysis show that replacing the jet ring with a single central coolant jet in the HPOTP leads to an acceptable thermal environment at the disk rim. Based on the predictions of flow and temperature fields for operating conditions, the single central jet cooling system was recommended for implementation into the development program of the Technology Test Bed Engine at NASA Marshall Space Flight Center.

  9. [Numerical evaluation of soil quality under different conservation tillage patterns].

    PubMed

    Wu, Yu-Hong; Tian, Xiao-Hong; Chi, Wen-Bo; Nan, Xiong-Xiong; Yan, Xiao-Li; Zhu, Rui-Xiang; Tong, Yan-An

    2010-06-01

    A 9-year field experiment was conducted on the Guanzhong Plain of Shaanxi Province to study the effects of subsoiling, rotary tillage, straw return, no-till seeding, and traditional tillage on soil physical and chemical properties and grain yield in a winter wheat-summer maize rotation system, and a comprehensive evaluation was made of soil quality under these tillage patterns by the method of principal components analysis (PCA). Compared with traditional tillage, all the conservation tillage patterns improved soil fertility quality and soil physical properties. Under conservation tillage, the activities of soil urease and alkaline phosphatase increased significantly, the soil quality index increased by 19.8%-44.0%, and the grain yield of winter wheat and summer maize (except under no-till seeding with straw covering) increased by 13%-28% and 3%-12%, respectively. Subsoiling every other year, straw-chopping combined with rotary tillage, and straw-mulching combined with subsoiling not only increased crop yield, but also improved soil quality. Based on the economic and ecological benefits, the practices of subsoiling and straw return should be promoted.

  11. New glycoproteomics software, GlycoPep Evaluator, generates decoy glycopeptides de novo and enables accurate false discovery rate analysis for small data sets.

    PubMed

    Zhu, Zhikai; Su, Xiaomeng; Go, Eden P; Desaire, Heather

    2014-09-16

    Glycoproteins are biologically significant large molecules that participate in numerous cellular activities. In order to obtain site-specific protein glycosylation information, intact glycopeptides, with the glycan attached to the peptide sequence, are characterized by tandem mass spectrometry (MS/MS) methods such as collision-induced dissociation (CID) and electron transfer dissociation (ETD). While several automated tools are emerging, there is no consensus in the field about the best way to determine the reliability of the tools and/or provide the false discovery rate (FDR). A common approach to calculating FDRs for glycopeptide analysis, adopted from the target-decoy strategy in proteomics, employs a decoy database that is created based on the target protein sequence database. Nonetheless, this approach is not optimal for measuring the confidence of N-linked glycopeptide matches, because the glycopeptide data set is considerably smaller compared to that of peptides, and the requirement of a consensus sequence for N-glycosylation further limits the number of possible decoy glycopeptides tested in a database search. To address the need to accurately determine FDRs for automated glycopeptide assignments, we developed GlycoPep Evaluator (GPE), a tool that helps to measure FDRs in identifying glycopeptides without using a decoy database. GPE generates decoy glycopeptides de novo for every target glycopeptide, in a 1:20 target-to-decoy ratio. The decoys, along with the target glycopeptides, are scored against the ETD data, from which FDRs can be calculated accurately, based on the number of decoy matches and the ratio of the number of targets to decoys, for small data sets. GPE is freely accessible for download and can work with any search engine that interprets ETD data of N-linked glycopeptides. The software is provided at https://desairegroup.ku.edu/research.
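
    With a 1:20 target-to-decoy ratio, the FDR estimate simply scales the decoy count accordingly. A minimal sketch with toy numbers (not from the paper):

        def fdr_from_decoys(target_scores, decoy_scores, threshold, ratio=20):
            """Estimate FDR at a score threshold when `ratio` decoys were
            generated per target: FDR ~ (decoy hits / ratio) / target hits."""
            t = sum(s >= threshold for s in target_scores)
            d = sum(s >= threshold for s in decoy_scores)
            return (d / ratio) / t if t else 0.0

        # Toy counts: 95 target and 40 decoy matches above the threshold.
        print(fdr_from_decoys([1] * 95, [1] * 40, 1))   # -> ~0.021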

  12. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    PubMed

    Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan

    2016-05-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To

  13. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions

    PubMed Central

    Brezovský, Jan

    2016-01-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools’ predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations.

  14. Development and clinical evaluation of a highly accurate dengue NS1 rapid test: from the preparation of a soluble NS1 antigen to the construction of an RDT.

    PubMed

    Lee, Jihoo; Kim, Hak-Yong; Chong, Chom-Kyu; Song, Hyun-Ok

    2015-06-01

    Early diagnosis of dengue virus (DENV) is important. There are numerous products on the market claiming to detect DENV NS1, but these are not always reliable. In this study, a highly sensitive and accurate rapid diagnostic test (RDT) was developed using anti-dengue NS1 monoclonal antibodies. A recombinant NS1 protein was produced with high antigenicity and purity. Monoclonal antibodies were raised against this purified NS1 antigen. The RDT was constructed using a capturing antibody (4A6A10, Kd = (7.512 ± 0.419) × 10^-9) and a conjugating antibody (3E12E6, Kd = (7.032 ± 0.322) × 10^-9). The diagnostic performance was evaluated with NS1-positive clinical samples collected from various dengue-endemic countries and compared to the SD BioLine Dengue NS1 Ag kit. The constructed RDT exhibited higher sensitivity (92.9%) and clearly better diagnostic performance than the commercial kit (83.3%). The specificity of the constructed RDT was 100%. The constructed RDT could offer a reliable point-of-care testing tool for the early detection of dengue infections in remote areas and contribute to the control of dengue-related diseases. PMID:25824725
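
    The reported diagnostic metrics follow from simple confusion-matrix counts, which is worth keeping explicit when comparing kits. The counts below are hypothetical; only the resulting percentages match the abstract.

        def sensitivity(tp, fn):
            return tp / (tp + fn)   # true-positive rate among infected samples

        def specificity(tn, fp):
            return tn / (tn + fp)   # true-negative rate among uninfected samples

        # E.g., detecting 78 of 84 NS1-positive samples gives 92.9% sensitivity;
        # zero false positives among 50 negative samples gives 100% specificity.
        print(f"{sensitivity(78, 6):.1%}, {specificity(50, 0):.1%}")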

  16. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: sensitivity and specificity analysis.

    PubMed

    Kapp, Eugene A; Schütz, Frédéric; Connolly, Lisa M; Chakel, John A; Meza, Jose E; Miller, Christine A; Fenyo, David; Eng, Jimmy K; Adkins, Joshua N; Omenn, Gilbert S; Simpson, Richard J

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, at a minimum, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
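
    In its simplest form, the "consensus scoring" idea suggested above reduces to keeping only identifications reported by at least two engines. A toy sketch (the engine names are real, the peptide lists are invented):

        from collections import Counter

        # Hypothetical identification lists from three of the engines studied.
        results = {
            "MASCOT":   {"PEPTIDEK", "SAMPLER", "ELVISLIVESK"},
            "X!Tandem": {"PEPTIDEK", "ELVISLIVESK", "DECOYISH"},
            "SEQUEST":  {"PEPTIDEK", "SAMPLER"},
        }

        votes = Counter(pep for ids in results.values() for pep in ids)
        consensus_ids = {pep for pep, n in votes.items() if n >= 2}
        print(sorted(consensus_ids))  # ['ELVISLIVESK', 'PEPTIDEK', 'SAMPLER']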

  17. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, at a minimum, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  18. Evaluating the use of high-resolution numerical weather forecast for debris flow prediction.

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Efthymios I.; Bartsotas, Nikolaos S.; Borga, Marco; Kallos, George

    2015-04-01

    The sudden occurrence of debris flows, combined with their high destructive power, poses a significant threat to human life and infrastructure. Developing early warning procedures for mitigating debris flow risk is therefore of great economic and societal importance. Given that rainfall is the predominant factor controlling debris flow triggering, the development of effective debris flow warning procedures indisputably requires accurate knowledge of the properties (e.g. duration, intensity) of the triggering rainfall. Moreover, efficient and timely response of emergency operations depends strongly on the lead time provided by the warning systems. Currently, the majority of early warning systems for debris flows are based on nowcasting procedures. While these may be successful in predicting the hazard, they provide warnings with a relatively short lead time (~6 h). Increasing the lead time is necessary to improve pre-incident operations and communication of the emergency, so coupling warning systems with weather forecasting is essential for advancing early warning procedures. In this work we evaluate the potential of using high-resolution (1 km) rainfall fields forecasted with a state-of-the-art numerical weather prediction model (RAMS/ICLAMS) to predict the occurrence of debris flows. The analysis focuses on the Upper Adige region, Northeast Italy, an area where debris flows are frequent. Seven storm events that generated a large number (>80) of debris flows during the period 2007-2012 are analyzed. Radar-based rainfall estimates, available from the operational C-band radar located at Mt Macaion, are used as the reference to evaluate the forecasted rainfall fields. The evaluation mainly focuses on assessing the error in forecasted rainfall properties (magnitude, duration) and the correlation in space and time with the reference field. Results show that the forecasted rainfall fields captured very well the magnitude and

  19. Numerous Numerals.

    ERIC Educational Resources Information Center

    Henle, James M.

    This pamphlet consists of 17 brief chapters, each containing a discussion of a numeration system and a set of problems on the use of that system. The numeration systems used include Egyptian fractions, ordinary continued fractions and variants of that method, and systems using positive and negative bases. The book is informal and addressed to…

  20. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure.

    PubMed

    vom Saal, Frederick S; Welshons, Wade V

    2014-12-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources.

  1. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure

    PubMed Central

    vom Saal, Frederick S.; Welshons, Wade V.

    2016-01-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273

  2. Description and Evaluation of Numerical Groundwater Flow Models for the Edwards Aquifer, South-Central Texas

    USGS Publications Warehouse

    Lindgren, Richard J.; Taylor, Charles J.; Houston, Natalie A.

    2009-01-01

    A substantial number of public water system wells in south-central Texas withdraw groundwater from the karstic, highly productive Edwards aquifer. However, the use of numerical groundwater flow models to aid in the delineation of contributing areas for public water system wells in the Edwards aquifer is problematic because of the complex hydrogeologic framework and the presence of conduit-dominated flow paths in the aquifer. The U.S. Geological Survey, in cooperation with the Texas Commission on Environmental Quality, evaluated six published numerical groundwater flow models (all deterministic) that have been developed for the Edwards aquifer San Antonio segment or Barton Springs segment, or both. This report describes the models developed and evaluates each with respect to accessibility and ease of use, range of conditions simulated, accuracy of simulations, agreement with dye-tracer tests, and limitations of the models. These models are (1) GWSIM model of the San Antonio segment, a FORTRAN computer-model code that pre-dates the development of MODFLOW; (2) MODFLOW conduit-flow model of San Antonio and Barton Springs segments; (3) MODFLOW diffuse-flow model of San Antonio and Barton Springs segments; (4) MODFLOW Groundwater Availability Modeling [GAM] model of the Barton Springs segment; (5) MODFLOW recalibrated GAM model of the Barton Springs segment; and (6) MODFLOW-DCM (dual conductivity model) conduit model of the Barton Springs segment. The GWSIM model code is not commercially available, is limited in its application to the San Antonio segment of the Edwards aquifer, and lacks the ability of MODFLOW to easily incorporate newly developed processes and packages to better simulate hydrologic processes. MODFLOW is a widely used and tested code for numerical modeling of groundwater flow, is well documented, and is in the public domain. These attributes make MODFLOW a preferred code with regard to accessibility and ease of use. The MODFLOW conduit-flow model

  3. Deciphering the mechanisms of cellular uptake of engineered nanoparticles by accurate evaluation of internalization using imaging flow cytometry

    PubMed Central

    2013-01-01

    Background The uptake of nanoparticles (NPs) by cells remains to be better characterized in order to understand the mechanisms of potential NP toxicity as well as for a reliable risk assessment. Actual NP uptake is still difficult to evaluate because of the adsorption of NPs on the cellular surface. Results Here we used two approaches to distinguish adsorbed fluorescently labeled NPs from internalized ones. The extracellular fluorescence was either quenched by Trypan Blue or the uptake was analyzed using imaging flow cytometry. We used this novel technique to define the inside of the cell and thereby accurately study the uptake of fluorescently labeled (SiO2) and even non-fluorescent but light-diffracting (TiO2) NPs. The time course and dose dependence of uptake, as well as the influence of surface charges, were shown in the pulmonary epithelial cell line NCI-H292. By setting up an integrative approach combining these flow cytometric analyses with confocal microscopy, we deciphered the endocytic pathway involved in SiO2 NP uptake. Functional studies using energy depletion, pharmacological inhibitors, siRNA-clathrin heavy chain induced gene silencing, and colocalization of NPs with proteins specific for different endocytic vesicles allowed us to identify macropinocytosis as the internalization pathway for SiO2 NPs in NCI-H292 cells. Conclusion The integrative approach we propose here, using innovative imaging flow cytometry combined with confocal microscopy, could be used to identify the physico-chemical characteristics of NPs involved in their uptake, with a view to redesigning safe NPs. PMID:23388071

  4. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces.

    PubMed

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-12-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on the Nusselt number and the convection heat transfer coefficient are studied. The effects of the thermophoresis and Brownian forces, which create a relative drift or slip velocity between the particles and the base fluid, are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some existing results on natural convection are erroneous because they use wrong thermophoresis models or simply ignore the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogeneous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and the Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated), and the maximum heat transfer rate occurs at an inclination angle which varies with the Ra number. PMID:26183389

  6. Numerical evaluation of the incomplete airy functions and their application to high frequency scattering and diffraction

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1992-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals of such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. Here, a convergent series solution form for the incomplete Airy functions is derived. Asymptotic expansions involving several terms were also developed and serve as large argument approximations. The combination of the series solution form with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.

  7. Difficulties in applying numerical simulations to an evaluation of occupational hazards caused by electromagnetic fields

    PubMed Central

    Zradziński, Patryk

    2015-01-01

    Due to the various physical mechanisms of interaction between a worker's body and the electromagnetic field at various frequencies, the principles of numerical simulations have been discussed for three areas of worker exposure: to low frequency magnetic field, to low and intermediate frequency electric field and to radiofrequency electromagnetic field. This paper presents the identified difficulties in applying numerical simulations to evaluate physical estimators of direct and indirect effects of exposure to electromagnetic fields at various frequencies. Exposure of workers operating a plastic sealer have been taken as an example scenario of electromagnetic field exposure at the workplace for discussion of those difficulties in applying numerical simulations. The following difficulties in reliable numerical simulations of workers’ exposure to the electromagnetic field have been considered: workers’ body models (posture, dimensions, shape and grounding conditions), working environment models (objects most influencing electromagnetic field distribution) and an analysis of parameters for which exposure limitations are specified in international guidelines and standards. PMID:26323781

  8. A Framework for Evaluating Regional-Scale Numerical Photochemical Modeling Systems

    EPA Science Inventory

    This paper discusses the need for critically evaluating regional-scale (~ 200-2000 km) three dimensional numerical photochemical air quality modeling systems to establish a model's credibility in simulating the spatio-temporal features embedded in the observations. Because of li...

  9. Numerical evaluation of the three-dimensional searchlight problem in a half-space

    SciTech Connect

    Kornreich, D.E.; Ganapol, B.D.

    1997-11-01

    The linear Boltzmann equation for the transport of neutral particles is investigated with the objective of generating a benchmark-quality calculation for the three-dimensional searchlight problem in a semi-infinite medium. The derivation assumes stationarity, one energy group, and isotropic scattering. The scalar flux (both surface and interior) and the current at the surface are the quantities of interest. The source considered is a pencil-beam incident at a point on the surface of a semi-infinite medium. The scalar flux will have two-dimensional variation only if the beam is normal; otherwise, it is three-dimensional. The solutions are obtained by using Fourier and Laplace transform models. The transformed transport equation is formulated so that it can be related to a one-dimensional pseudo problem, thus providing some analytical leverage for the inversions. The numerical inversions use standard numerical techniques such as Gauss-Legendre quadrature, summation of infinite series, H-function iteration and evaluation, and Euler-Knopp acceleration. The numerical evaluations of the scalar flux and current at the surface are relatively simple, and the interior scalar flux is relatively difficult to calculate because of the embedded two-dimensional Fourier transform inversion, Laplace transform inversion, and H-function evaluation. Comparisons of these numerical solutions to results from the MCNP probabilistic code and the THREE-DANT discrete ordinates code are provided and help confirm proper operation of the analytical code.
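
    Of the standard techniques listed above, Gauss-Legendre quadrature is the simplest to illustrate. The sketch below is not code from the study; it applies an n-point rule to a toy integral with a known answer.

        import numpy as np

        def gauss_legendre(f, a, b, n):
            """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
            x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
            t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
            return 0.5 * (b - a) * np.sum(w * f(t))

        # Toy check: the integral of exp(-x) over [0, 1] equals 1 - exp(-1).
        approx = gauss_legendre(lambda t: np.exp(-t), 0.0, 1.0, 8)
        print(abs(approx - (1.0 - np.exp(-1.0))))   # ~1e-16 already at n = 8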

  10. Quick numerical evaluation of the probability that an asteroid will collide with a planet

    NASA Astrophysics Data System (ADS)

    Avdyushev, V. A.; Galushina, T. Yu.

    2014-07-01

    We propose a numerical method for quickly evaluating the probability that an asteroid will collide with a planet. The method is based on linear mappings of the expected moment of a close approach of the asteroid to the planet and on detecting collisions of the virtual objects with the massive body. The standard way of solving the problem of estimating the collision probability consists in simulating the evolution of the uncertainty cloud numerically, based on the stepwise integration of virtual orbits, which naturally entails huge processor time costs. The proposed method is tested on the 2011 AG5 and 2007 VK184 asteroids, currently at the top of the list of the most dangerous celestial objects. The test results show that linear mappings yield probability estimates several orders of magnitude faster than numerical integration of all virtual orbits.
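
    The Monte Carlo baseline described above (sampling virtual objects from the orbital uncertainty and counting collisions) can be sketched as follows. The "propagation" step is a placeholder; a real study would numerically integrate each virtual orbit to the encounter, and all numbers are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        def collision_probability(mean, cov, collision_radius_km, n=100_000):
            """Fraction of virtual objects passing within the collision radius."""
            virtual = rng.multivariate_normal(mean, cov, size=n)  # virtual objects
            miss_km = np.linalg.norm(virtual, axis=1)   # stand-in "propagation"
            return float(np.mean(miss_km < collision_radius_km))

        # Hypothetical encounter: mean close-approach point 7000 km from the
        # planet's centre with a 500 km (1-sigma) position uncertainty.
        mean = np.array([7000.0, 0.0, 0.0])
        cov = np.diag([500.0**2] * 3)
        print(collision_probability(mean, cov, collision_radius_km=6371.0))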

  11. Generalization Evaluation of Machine Learning Numerical Observers for Image Quality Assessment.

    PubMed

    Kalayeh, Mahdi M; Marin, Thibault; Brankov, Jovan G

    2013-06-01

    In this paper, we present two new numerical observers (NO) based on machine learning for image quality assessment. The proposed NOs aim to predict human observer performance in a cardiac perfusion-defect detection task for single-photon emission computed tomography (SPECT) images. Human observer (HumO) studies are now considered to be the gold standard for task-based evaluation of medical images. However such studies are impractical for use in early stages of development for imaging devices and algorithms, because they require extensive involvement of trained human observers who must evaluate a large number of images. To address this problem, numerical observers (also called model observers) have been developed as a surrogate for human observers. The channelized Hotelling observer (CHO), with or without internal noise model, is currently the most widely used NO of this kind. In our previous work we argued that development of a NO model to predict human observers' performance can be viewed as a machine learning (or system identification) problem. This consideration led us to develop a channelized support vector machine (CSVM) observer, a kernel-based regression model that greatly outperformed the popular and widely used CHO. This was especially evident when the numerical observers were evaluated in terms of generalization performance. To evaluate generalization we used a typical situation for the practical use of a numerical observer: after optimizing the NO (which for a CHO might consist of adjusting the internal noise model) based upon a broad set of reconstructed images, we tested it on a broad (but different) set of images obtained by a different reconstruction method. In this manuscript we aim to evaluate two new regression models that achieve accuracy higher than the CHO and comparable to our earlier CSVM method, while dramatically reducing model complexity and computation time. The new models are defined in a Bayesian machine-learning framework: a channelized
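
    For context, the CHO named above is a linear observer applied to channelized image features. The numpy sketch below shows its basic construction under simplifying assumptions: a random stand-in channel matrix, synthetic Gaussian images, and no internal-noise model (real studies typically use, e.g., Gabor channels).

        import numpy as np

        rng = np.random.default_rng(1)
        npix, nchan, ntrain = 64 * 64, 10, 200

        U = rng.standard_normal((npix, nchan))     # stand-in channel matrix
        present = rng.standard_normal((ntrain, npix)) + 0.1   # defect present
        absent = rng.standard_normal((ntrain, npix))          # defect absent

        vs, vn = present @ U, absent @ U           # channelized features
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))    # pooled channel covariance
        w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))   # Hotelling template

        def cho_statistic(image):
            """Decision variable; larger values favour 'defect present'."""
            return float((image @ U) @ w)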

  12. Generalization Evaluation of Machine Learning Numerical Observers for Image Quality Assessment

    PubMed Central

    Kalayeh, Mahdi M.; Marin, Thibault; Brankov, Jovan G.

    2014-01-01

    In this paper, we present two new numerical observers (NO) based on machine learning for image quality assessment. The proposed NOs aim to predict human observer performance in a cardiac perfusion-defect detection task for single-photon emission computed tomography (SPECT) images. Human observer (HumO) studies are now considered to be the gold standard for task-based evaluation of medical images. However such studies are impractical for use in early stages of development for imaging devices and algorithms, because they require extensive involvement of trained human observers who must evaluate a large number of images. To address this problem, numerical observers (also called model observers) have been developed as a surrogate for human observers. The channelized Hotelling observer (CHO), with or without internal noise model, is currently the most widely used NO of this kind. In our previous work we argued that development of a NO model to predict human observers' performance can be viewed as a machine learning (or system identification) problem. This consideration led us to develop a channelized support vector machine (CSVM) observer, a kernel-based regression model that greatly outperformed the popular and widely used CHO. This was especially evident when the numerical observers were evaluated in terms of generalization performance. To evaluate generalization we used a typical situation for the practical use of a numerical observer: after optimizing the NO (which for a CHO might consist of adjusting the internal noise model) based upon a broad set of reconstructed images, we tested it on a broad (but different) set of images obtained by a different reconstruction method. In this manuscript we aim to evaluate two new regression models that achieve accuracy higher than the CHO and comparable to our earlier CSVM method, while dramatically reducing model complexity and computation time. The new models are defined in a Bayesian machine-learning framework: a channelized

  13. Numerical evaluation of aperture coupling in resonant cavities and frequency perturbation analysis

    NASA Astrophysics Data System (ADS)

    Dash, R.; Nayak, B.; Sharma, A.; Mittal, K. C.

    2014-01-01

    This paper presents a general formulation for the numerical evaluation of the coupling between two identical resonant cavities through a small elliptical aperture in a common plane wall of arbitrary thickness. It is organized into two parts. In the first, the aperture coupling is expressed in terms of electric and magnetic dipole moments and polarizabilities using Carlson symmetric elliptic integrals. The Carlson integrals have been numerically evaluated and, under the zero-thickness approximation, the results match the complete elliptic integrals of the first and second kind. It is found that with zero wall thickness the results are the same as those of Bethe and Collin for an elliptical and a circular aperture of zero thickness. In the second part, Slater's perturbation method is applied to find the frequency changes due to apertures of finite thickness in the cavity wall.
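
    The reduction mentioned above is easy to verify numerically: Carlson's symmetric integral R_F collapses to the complete elliptic integral of the first kind through the identity K(m) = R_F(0, 1 - m, 1). A minimal check using SciPy, which provides elliprf in scipy.special from version 1.8:

        from scipy.special import ellipk, elliprf

        # K(m) = R_F(0, 1 - m, 1): the symmetric integral reproduces the
        # complete elliptic integral of the first kind.
        for m in (0.1, 0.5, 0.9):
            print(m, ellipk(m), elliprf(0.0, 1.0 - m, 1.0))
            # the last two columns agree to machine precision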

  14. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and the exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation currently most often used for this purpose. The method can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
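
    The core step, fitting a sum of exponentials with geometrically spaced exponents by linear least squares, can be sketched in a few lines. The target function, exponent base, and ratio below are illustrative stand-ins; the paper's method additionally optimizes the exponent multiplier itself by least squares.

        import numpy as np

        x = np.linspace(0.0, 20.0, 400)
        f = 1.0 / (1.0 + x)                      # stand-in for the algebraic part

        base, ratio, nterms = 0.05, 2.0, 8       # geometric exponent sequence
        exponents = base * ratio ** np.arange(nterms)
        A = np.exp(-np.outer(x, exponents))      # columns are exp(-a_k * x)

        coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
        print(np.max(np.abs(A @ coeffs - f)))    # small uniform error on [0, 20]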

  15. A comment on the importance of numerical evaluation of analytic solutions involving approximations.

    PubMed

    Overall, J E; Starbuck, R R; Doyle, S R

    1994-07-01

    An analytic solution proposed by Senn (1) for removing the effects of covariate imbalance in controlled clinical trials was subjected to Monte Carlo evaluation. For practical applications of his derivation, Senn proposed substitution of sample statistics for parameters of the bivariate normal model. Unfortunately, that substitution produces severe distortion in the size of tests of significance for treatment effects when covariate imbalance is present. Numerical verification of proposed substitutions into analytic models is recommended as a prudent approach. PMID:7951276

  16. Evaluation of thermal bioclimate based on observational data and numerical simulations: an application to Greece

    NASA Astrophysics Data System (ADS)

    Giannaros, Theodore M.; Melas, Dimitrios; Matzarakis, Andreas

    2015-02-01

    The evaluation of thermal bioclimate can be conducted employing either observational or modeling techniques. The advantage of the numerical modeling approach lies in the fact that it can be applied in areas where observational data are lacking, providing detailed insight into the prevailing thermal bioclimatic conditions. However, this approach should be exploited carefully, since model simulations can frequently be biased. The aim of this paper is to examine the suitability of a mesoscale atmospheric model for evaluating thermal bioclimate. For this, the Weather Research and Forecasting (WRF) numerical weather prediction model and the RayMan radiation model are employed to simulate thermal bioclimatic conditions in Greece during a 1-year period. The physiologically equivalent temperature (PET) is selected as the index for evaluating thermal bioclimate, while synoptic weather station data are used to verify model performance. The results of the present study shed light on the strengths and weaknesses of the numerical modeling approach. Overall, it is shown that model simulations can provide a useful alternative tool for studying thermal bioclimate. Specifically for Greece, the WRF/RayMan modeling system was found to perform adequately well in reproducing the spatial and temporal variations of PET.

  17. Neither Fair nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers

    ERIC Educational Resources Information Center

    Au, Wayne

    2011-01-01

    Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…

  18. Numerical evaluation of implantable hearing devices using a finite element model of human ear considering viscoelastic properties.

    PubMed

    Zhang, Jing; Tian, Jiabin; Ta, Na; Huang, Xinsheng; Rao, Zhushi

    2016-08-01

    The finite element method was employed in this study to analyze the change in performance of implantable hearing devices when the viscoelasticity of soft tissues is taken into account. An integrated finite element model of the human ear, including the external ear, middle ear and inner ear, was first developed via reverse engineering and analyzed by acoustic-structure-fluid coupling. The viscoelastic properties of soft tissues in the middle ear were taken into consideration in this model. The model-derived dynamic responses, including middle ear and cochlear functions, showed better agreement with experimental data at high frequencies above 3000 Hz than did a Rayleigh-type damping model. On this basis, a coupled finite element model consisting of the human ear and a piezoelectric actuator attached to the long process of the incus was constructed. Based on an electromechanical coupling analysis, the equivalent sound pressure and power consumption of the actuator corresponding to viscoelasticity and to Rayleigh damping were calculated using this model. The analytical results showed that the implant performance of the actuator evaluated using a finite element model with viscoelastic properties gives a lower output above about 3 kHz than does the Rayleigh damping model. A finite element model that accounts for viscoelastic properties is therefore more accurate for the numerical evaluation of implantable hearing devices. PMID:27276992
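
    For context on the Rayleigh-type damping used here as the comparison case: it models the damping matrix as C = alpha*M + beta*K, with alpha and beta fixed by prescribing damping ratios zeta_i = alpha/(2*w_i) + beta*w_i/2 at two target frequencies. Because the ratio is pinned at only two frequencies, the model can misrepresent losses elsewhere in the band, consistent with the high-frequency deviations reported above. A small sketch with hypothetical numbers:

        import numpy as np

        def rayleigh_coefficients(f1_hz, f2_hz, zeta1, zeta2):
            """Solve for alpha, beta giving zeta1 at f1 and zeta2 at f2."""
            w1, w2 = 2.0 * np.pi * f1_hz, 2.0 * np.pi * f2_hz
            A = np.array([[1.0 / (2.0 * w1), w1 / 2.0],
                          [1.0 / (2.0 * w2), w2 / 2.0]])
            alpha, beta = np.linalg.solve(A, [zeta1, zeta2])
            return alpha, beta

        # e.g. pin 2% damping at 500 Hz and at 3 kHz (hypothetical targets):
        print(rayleigh_coefficients(500.0, 3000.0, 0.02, 0.02))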

  19. Accurate Evaluation of Ion Conductivity of the Gramicidin A Channel Using a Polarizable Force Field without Any Corrections.

    PubMed

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui

    2016-06-14

    Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, owing to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that, by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductance, without any corrections, for both K(+) and Na(+) passing through the gA channel is much closer to the experimental results than that from any classical MD simulation, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823

  20. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    Understanding and accurately predicting the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important for design work that enhances turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds Stress turbulence model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
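
    The cavitation model named above builds on the Rayleigh-Plesset equation. The sketch below integrates the standard (unmodified) form for a single bubble under a step pressure difference, with illustrative water properties; it is not the paper's modified formulation.

        import numpy as np
        from scipy.integrate import solve_ivp

        rho, sigma, mu = 998.0, 0.0728, 1.0e-3   # water: density, surface tension, viscosity
        p_b, p_inf = 1.2e5, 1.0e5                # bubble / far-field pressure [Pa]

        def rhs(t, y):
            R, Rdot = y
            # rho*(R*R'' + 1.5*R'^2) = p_b - p_inf - 2*sigma/R - 4*mu*Rdot/R
            Rddot = ((p_b - p_inf - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / rho
                     - 1.5 * Rdot ** 2) / R
            return [Rdot, Rddot]

        sol = solve_ivp(rhs, (0.0, 1.0e-4), [1.0e-5, 0.0], rtol=1e-8, atol=1e-12)
        print(sol.y[0, -1])   # bubble radius [m] after 0.1 ms of growth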

  1. Numerical evaluation of the multiple-pair method of calculating temperature from a measured continuous spectrum.

    PubMed

    Andreic, Z

    1988-10-01

    When a measured spectrum is digitized and stored in the computer's memory, many pixel pairs can be used for color temperature evaluation. Although the color temperature calculated from an individual pair can vary considerably, the average color temperature tends to be quite accurate if enough pairs (50-100) are used. The sensitivity of the calculated mean color temperature to noise and to a linear variation of emissivity (as a function of wavelength) is described. It was found that the influence of noise is much greater than that of emissivity variations.
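
    A minimal version of the multiple-pair scheme follows directly from Wien's approximation under a gray-body assumption (equal emissivity at the two wavelengths): each pixel pair yields one color temperature, and the estimates are averaged. The synthetic spectrum below is a stand-in for measured data.

        import numpy as np

        C2 = 1.4388e-2   # second radiation constant [m K]

        def pair_temperature(lam1, lam2, i1, i2):
            """Wien-limit color temperature from one wavelength/intensity pair."""
            return C2 * (1.0 / lam1 - 1.0 / lam2) / np.log((lam2 / lam1) ** 5 * i2 / i1)

        # Synthetic noisy spectrum at T = 3000 K to exercise the estimator.
        rng = np.random.default_rng(2)
        lam = np.linspace(500e-9, 900e-9, 200)
        spec = lam ** -5 * np.exp(-C2 / (lam * 3000.0))
        spec *= 1.0 + 0.02 * rng.standard_normal(lam.size)   # 2% pixel noise

        pairs = rng.choice(lam.size, size=(100, 2))
        pairs = pairs[np.abs(lam[pairs[:, 0]] - lam[pairs[:, 1]]) > 100e-9]
        temps = [pair_temperature(lam[i], lam[j], spec[i], spec[j]) for i, j in pairs]
        print(np.mean(temps))   # close to 3000 K despite scatter in single pairs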

  2. Three-Dimensional Numerical Evaluation of Thermal Performance of Uninsulated Wall Assemblies: Preprint

    SciTech Connect

    Ridouane, E. H.; Bianchi, M.

    2011-11-01

    This study describes detailed three-dimensional computational fluid dynamics modeling to evaluate the thermal performance of uninsulated wall assemblies, accounting for conduction through framing, convection, and radiation. The model allows material properties to vary with temperature. Parameters varied in the study include the ambient outdoor temperature and the cavity surface emissivity. Understanding the thermal performance of uninsulated wall cavities is essential for accurate prediction of energy use in residential buildings. The results can serve as input to building energy simulation tools for modeling the temperature-dependent energy performance of homes with uninsulated walls.

  3. Combined experimental and numerical evaluation of a prototype nano-PCM enhanced wallboard

    SciTech Connect

    Biswas, Kaushik; Lu, Jue; Soroushian, Parviz; Shrestha, Som S

    2014-01-01

    In the United States, forty-eight (48) percent of the residential end-use energy consumption is spent on space heating and air conditioning. Reducing envelope-generated heating and cooling loads through application of phase change material (PCM)-enhanced building envelopes can facilitate maximizing the energy efficiency of buildings. Combined experimental testing and numerical modeling of PCM-enhanced envelope components are two important aspects of the evaluation of their energy benefits. An innovative phase change material (nano-PCM) was developed with PCM encapsulated with expanded graphite (interconnected) nanosheets, which is highly conductive for enhanced thermal storage and energy distribution, and is shape-stable for convenient incorporation into lightweight building components. A wall with cellulose cavity insulation and prototype PCM-enhanced interior wallboards was built and tested in a natural exposure test (NET) facility in a hot-humid climate location. The test wall contained PCM wallboards and regular gypsum wallboard, for a side-by-side annual comparison study. Further, numerical modeling of the walls containing the nano-PCM wallboard was performed to determine its actual impact on wall-generated heating and cooling loads. The model was first validated using experimental data, and then used for annual simulations using Typical Meteorological Year (TMY3) weather data. This article presents the measured performance and numerical analysis evaluating the energy-saving potential of the nano-PCM-enhanced wallboard.

  4. Numerical evaluation of two-center integrals over Slater type orbitals

    NASA Astrophysics Data System (ADS)

    Kurt, S. A.; Yükçü, N.

    2016-03-01

    Slater Type Orbitals (STOs), one of the types of exponential type orbitals (ETOs), are commonly used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap integrals and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of the translation method for STOs and some auxiliary functions introduced by V. Magnasco's group. We use the Mathematica programming language to implement the algorithms for these calculations. Numerical results for some quantum numbers are presented in the tables. Finally, we compare our numerical results with known literature results and discuss further details of the evaluation method.

  5. Development and evaluation of a liquid chromatography-mass spectrometry method for rapid, accurate quantitation of malondialdehyde in human plasma.

    PubMed

    Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H

    2016-09-01

    Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope labeling, and liquid chromatography (LC) with an electrospray ionization (ESI)-tandem mass spectrometry assay to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R^2 = 0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ~10%, respectively. The derivative was stable for >36 h at 5 °C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618

  6. Numerical evaluation of virtual corrections to multi-jet production in massless QCD

    NASA Astrophysics Data System (ADS)

    Badger, Simon; Biedermann, Benedikt; Uwer, Peter; Yundin, Valery

    2013-08-01

    We present a C++ library for the numerical evaluation of one-loop virtual corrections to multi-jet production in massless QCD. The pure gluon primitive amplitudes are evaluated using NGLUON (Badger et al. (2011) [62]). A generalized unitarity reduction algorithm is used to construct fermion-gluon primitive amplitudes of arbitrary multiplicity. From these basic building blocks, the one-loop contribution to the squared matrix element, summed over colour and helicities, is calculated. No approximation in colour is performed. While the primitive amplitudes are given for arbitrary multiplicities, we provide the squared matrix elements only for up to 7 external partons, allowing the evaluation of the five-jet cross section at next-to-leading-order accuracy. The library has recently been successfully applied to four-jet production at next-to-leading order in QCD (Badger et al., 2012 [92]).

  7. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm thick, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximizing field value and optimizing production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, an indirect measure of the hydrocarbon-filled thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied: the estimated net sand thickness, the average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of the actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.

  8. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC; Lee's quasi-analytical algorithm, QAA; and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated against SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations made by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error, as inferred from the median relative bias per cruise. The impact of applying different atmospheric correction schemes was secondary, representing an additive error of up to 24.3%. Using the SeaDAS (SeaWiFS Data Analysis System) default value for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of the chl estimates derived from the most accurate spaceborne aph(443) retrievals, with respect to in situ determinations, increased up to 29%.

  9. Numerical Study and Performance Evaluation for Pulse Detonation Engine with Exhaust Nozzle

    NASA Astrophysics Data System (ADS)

    Kimura, Yuichiro; Tsuboi, Nobuyuki; Hayashi, A. Koichi; Yamada, Eisuke

    This paper presents a propulsive performance evaluation of the H2/air Pulse Detonation Engine (PDE) with a converging-diverging exhaust nozzle, using system-level modeling and multi-cycle numerical simulations. The study solves the two-dimensional, axisymmetric compressible Euler equations with a detailed chemical reaction model. First, the single-shot propulsive performance of the simplified PDE, without an exhaust nozzle, is evaluated to show the validity of the numerical and performance evaluation methods. The influences of the initial conditions, ignition energy, grid resolution, and scale effects on the propulsive performance are studied with the multi-cycle simulations. The present results are compared with those calculated by Ma et al. and Harris et al.; the differences of approximately 2-3% arise because their chemical reaction modeling uses a one-step model with a single specific heat ratio (one-γ model). The effects of the specific heat ratio should therefore be estimated for various nozzle configurations and flight conditions.

  10. Numerical evaluation of cavitation void ratio significance on hydrofoil dynamic response

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Wang, Zhengwei; Escaler, Xavier; Zhou, Lingjiu

    2015-12-01

    The added mass effects on a NACA0009 hydrofoil under cavitation conditions determined in a cavitation tunnel have been numerically simulated using the finite element method (FEM). Based on the validated model, the effects of the averaged properties of the cavity, considered as a two-phase mixture, have been evaluated. The results indicate that the void ratio of the cavity plays an increasing role in the frequency reduction ratio and in the mode shape as the mode number increases. Moreover, the sound speed plays a more important role than the average cavity density.

  11. Evaluation of Faecalibacterium 16S rDNA genetic markers for accurate identification of swine faecal waste by quantitative PCR.

    PubMed

    Duan, Chuanren; Cui, Yamin; Zhao, Yi; Zhai, Jun; Zhang, Baoyun; Zhang, Kun; Sun, Da; Chen, Hang

    2016-10-01

    A genetic marker within the 16S rRNA gene of Faecalibacterium was identified for use in a quantitative PCR (qPCR) assay to detect swine faecal contamination in water. A total of 146,038 bacterial sequences were obtained using 454 pyrosequencing. By comparative bioinformatics analysis of Faecalibacterium sequences from numerous swine and other animal species, swine-specific Faecalibacterium 16S rRNA gene sequences were identified, and polymerase chain reaction (PCR) primer sets were designed and tested against faecal DNA samples from swine and non-swine sources. Two PCR primer sets, PFB-1 and PFB-2, showed the highest specificity for swine faecal waste and had no cross-reaction with other animal samples. PFB-1 and PFB-2 amplified 16S rRNA gene sequences from 50 swine samples with positive ratios of 86 and 90%, respectively. We compared the swine-specific Faecalibacterium qPCR assays for the purpose of quantifying the newly identified markers. The limits of quantification (LOQs) of the PFB-1 and PFB-2 markers in environmental water were 6.5 and 2.9 copies per 100 ml, respectively. Of the swine-associated assays tested, PFB-2 was more sensitive in detecting swine faecal waste and quantifying the microbial load. Furthermore, the microbial abundance and diversity of the microbiomes of swine and other animal faeces were estimated using operational taxonomic units (OTUs). Species specificity was demonstrated for the microbial populations present in the various animal faeces. PMID:27353369

  12. Quantum chemical approach for condensed-phase thermochemistry (III): Accurate evaluation of proton hydration energy and standard hydrogen electrode potential

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atsushi; Nakai, Hiromi

    2016-04-01

    Gibbs free energy of hydration of a proton and standard hydrogen electrode potential were evaluated using high-level quantum chemical calculations. The solvent effect was included using the cluster-continuum model, which treated short-range effects by quantum chemical calculations of proton-water complexes, and the long-range effects by a conductor-like polarizable continuum model. The harmonic solvation model (HSM) was employed to estimate enthalpy and entropy contributions due to nuclear motions of the clusters by including the cavity-cluster interactions. Compared to the commonly used ideal gas model, HSM treatment significantly improved the contribution of entropy, showing a systematic convergence toward the experimental data.

  13. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation

    NASA Astrophysics Data System (ADS)

    Jahanshahi, Mohammad R.; Masri, Sami F.

    2013-03-01

    In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment. This approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is to use a robotic system that could perform autonomous crack detection and quantification. To reach this goal, several image-based crack detection approaches have been developed; however, the crack thickness quantification, which is an essential element for a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach which was previously developed by the authors. The proposed approach in this study utilizes depth perception to quantify crack thickness and, as opposed to most previous studies, needs no scale attachment to the region under inspection, which makes this approach ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that the new proposed approach outperforms the previously developed one.

  14. Analytical expression for gas-particle equilibration time scale and its numerical evaluation

    NASA Astrophysics Data System (ADS)

    Anttila, Tatu; Lehtinen, Kari E. J.; Dal Maso, Miikka

    2016-05-01

    We have derived a time scale τeq that describes the characteristic time for a single compound i with a saturation vapour concentration Ceff,i to reach thermodynamic equilibrium between the gas and particle phases. The equilibration process was assumed to take place via gas-phase diffusion and absorption into a liquid-like phase present in the particles. It was further shown that τeq combines two previously derived and often applied time scales τa and τs that account for the changes in the gas and particle phase concentrations of i resulting from the equilibration, respectively. The validity of τeq was tested by comparing its predictions against results from a numerical model that explicitly simulates the transfer of i between the gas and particle phases. By conducting a large number of simulations in which the values of the key input parameters were varied randomly, it was found that τeq yields highly accurate results when i is a semi-volatile compound in the sense that the ratio of the total (gas and particle phase) concentration of i to the saturation vapour concentration of i, μ, is below unity. On the other hand, the comparison of analytical and numerical time scales revealed that using τa or τs alone to calculate the equilibration time scale may lead to considerable errors. It was further shown that τeq tends to overpredict the equilibration time when i behaves as a non-volatile compound in the sense that μ > 1. Despite its simplicity, the time scale derived here has useful applications. First, it can be used to assess whether semi-volatile compounds reach thermodynamic equilibrium during dynamic experiments that involve changes in compound volatility. Second, the time scale can be used in modeling of secondary organic aerosol (SOA) to check whether SOA-forming compounds equilibrate over a certain time interval.
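
    A minimal sketch of the kind of explicit gas-particle transfer simulation that such a time scale can be validated against. The first-order mass-transfer form, the equilibrium closure and every parameter value below are illustrative assumptions, not the authors' equations; the equilibration time is simply read off as the e-folding time of the approach to equilibrium.

        import numpy as np
        from scipy.integrate import solve_ivp

        k_mt = 0.05   # gas-side mass-transfer coefficient, 1/s (assumed)
        c_sat = 1.0   # saturation vapour concentration of i (assumed units)
        c_tot = 0.5   # total concentration; mu = c_tot / c_sat < 1 (semi-volatile)
        m_abs = 2.0   # pre-existing absorbing particle mass (assumed)

        def dcg_dt(t, y):
            c_par = c_tot - y[0]
            c_eq = c_sat * c_par / (c_par + m_abs)  # equilibrium vapour over particle
            return [-k_mt * (y[0] - c_eq)]

        t = np.linspace(0.0, 2000.0, 4001)
        sol = solve_ivp(dcg_dt, (t[0], t[-1]), [c_tot], t_eval=t)
        c_gas, c_inf = sol.y[0], sol.y[0][-1]

        # e-folding time: deviation from equilibrium falls below 1/e of its start
        dev = np.abs(c_gas - c_inf) / abs(c_tot - c_inf)
        tau_eq_numerical = t[np.argmax(dev < np.exp(-1.0))]
        print(f"numerical equilibration time ~ {tau_eq_numerical:.1f} s")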

  15. Numerical evaluation of the production of radionuclides in a nuclear reactor (Part I).

    PubMed

    Mirzadeh, S; Walsh, P

    1998-04-01

    Numerical evaluation of nuclear transmutations is essential for optimization of radionuclide production. Accordingly, we have developed a generalized computer program called LAURA for predicting the production rates of radionuclides in a nuclear reactor. The program is generalized in the sense that it can be used for calculation of the production rate of any member of a network undergoing spontaneous decay and/or induced neutron transformation in a nuclear reactor. In this paper (Part I), we describe the mathematical basis for the development of LAURA. Expressions based on the Rubinson (1949) approach have been used for the evaluation of the depletion functions. This method is also valid when some of the depletion constants have identical values; thus, the approximate solution of Raykin and Shlyakhter (1989) can be used to account for the effects of feedback due to alpha decay.
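
    A minimal sketch of the kind of network calculation described (not LAURA itself): a three-member transmutation chain with effective first-order depletion constants is advanced with a matrix exponential. This route stays well defined even when some depletion constants coincide, the degenerate case that closed-form Bateman-type expressions must treat specially. All values are illustrative.

        import numpy as np
        from scipy.linalg import expm

        # Effective depletion constants lambda_i = decay constant + sigma_i * phi
        # for the chain 1 -> 2 -> 3 (illustrative values, 1/s).
        l1, l2, l3 = 1.0e-6, 5.0e-7, 1.0e-8
        A = np.array([[-l1, 0.0, 0.0],
                      [ l1, -l2, 0.0],
                      [0.0,  l2, -l3]])

        N0 = np.array([1.0e20, 0.0, 0.0])  # initial atoms of each chain member
        t = 30 * 24 * 3600.0               # 30 days of irradiation, s

        N = expm(A * t) @ N0               # no poles even if l1 == l2
        print(N)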

  16. Experimental and numerical evaluation of drug release from nanofiber mats to brain tissue.

    PubMed

    Nakielski, Paweł; Kowalczyk, Tomasz; Zembrzycki, Krzysztof; Kowalewski, Tomasz A

    2015-02-01

    Drug delivery systems based on nanofibrous mats appear to be a promising healing practice for preventing brain neurodegeneration after surgery. One of the problems encountered when planning and constructing an optimal delivery system based on nanofibrous mats is the estimation of parameters crucial for predicting drug release dynamics. This study describes our experimental setup allowing for spatial and temporal evaluation of drug release from nanofibrous polymers, to obtain the data necessary to validate appropriate numerical models. We applied the laser light sheet method to illuminate the released fluorescent drug analog and a CCD camera for imaging a selected cross-section of the investigated volume. A transparent hydrogel was used as a brain tissue phantom. The proposed setup allows for continuous observation of drug analog (fluorescent dye) diffusion over a time span of several weeks. Images captured at selected time intervals were processed to determine concentration profiles and drug release kinetics. We used the presented method to evaluate drug release from several polymers to validate a numerical model used for optimizing the nanofiber system for neuroprotective dressing.
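
    A minimal sketch of the sort of numerical model such measurements can validate: 1-D Fickian diffusion of the drug analog from a constant-concentration mat surface into the gel, advanced with an explicit finite-difference scheme. Geometry, diffusivity and boundary treatment are illustrative assumptions, not the study's model.

        import numpy as np

        D = 1.0e-10            # diffusivity in the gel, m^2/s (assumed)
        L = 5.0e-3             # gel depth, m
        nx = 200
        dx = L / nx
        dt = 0.4 * dx**2 / D   # stable for FTCS (requires dt <= 0.5*dx^2/D)

        c = np.zeros(nx)
        c[0] = 1.0             # normalized concentration held at the mat surface

        t_end = 3600.0         # simulate one hour of release
        for _ in range(int(t_end / dt)):
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
            c[0] = 1.0         # constant source; far boundary stays ~0

        front_mm = dx * np.argmax(c < 0.01) * 1e3
        print(f"penetration front after 1 h: {front_mm:.2f} mm")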

  17. Topological invariants for interacting topological insulators. I. Efficient numerical evaluation scheme and implementations

    NASA Astrophysics Data System (ADS)

    He, Yuan-Yao; Wu, Han-Qing; Meng, Zi Yang; Lu, Zhong-Yi

    2016-05-01

    The aim of this series of two papers is to discuss topological invariants for interacting topological insulators (TIs). In the first paper (I), we provide a paradigm of an efficient numerical evaluation scheme for topological invariants, in which we demystify the procedures and techniques employed in calculating the Z2 invariant and spin Chern number via the zero-frequency single-particle Green's function in quantum Monte Carlo (QMC) simulations. Here we introduce an interpolation process to overcome the ubiquitous finite-size effect, so that the calculated spin Chern number shows ideally quantized values. We also show that making use of symmetry properties of the underlying systems can greatly reduce the computational effort. To demonstrate the effectiveness of our numerical evaluation scheme, especially the interpolation process, for calculating topological invariants, we apply it to two independent two-dimensional models of interacting topological insulators. In the subsequent paper (II), we apply the scheme developed here to wider classes of models of interacting topological insulators, for which certain limitations of constructing topological invariants via single-particle Green's functions will be presented.

  18. Evaluation of numerical sediment quality targets for the St. Louis River Area of Concern

    USGS Publications Warehouse

    Crane, J.L.; MacDonald, D.D.; Ingersoll, C.G.; Smorong, D.E.; Lindskoog, R.A.; Severn, C.G.; Berger, T.A.; Field, L.J.

    2002-01-01

    Numerical sediment quality targets (SQTs) for the protection of sediment-dwelling organisms have been established for the St. Louis River Area of Concern (AOC), 1 of 42 current AOCs in the Great Lakes basin. The two types of SQTs were established primarily from consensus-based sediment quality guidelines. Level I SQTs are intended to identify contaminant concentrations below which harmful effects on sediment-dwelling organisms are unlikely to be observed. Level II SQTs are intended to identify contaminant concentrations above which harmful effects on sediment-dwelling organisms are likely to be observed. The predictive ability of the numerical SQTs was evaluated using the matching sediment chemistry and toxicity data set for the St. Louis River AOC. This evaluation involved determination of the incidence of toxicity to amphipods (Hyalella azteca) and midges (Chironomus tentans) within five ranges of Level II SQT quotients (i.e., mean probable effect concentration quotients [PEC-Qs]). The incidence of toxicity was determined based on the results of 10-day toxicity tests with amphipods (endpoints: survival and growth) and 10-day toxicity tests with midges (endpoints: survival and growth). For both toxicity tests, the incidence of toxicity increased as the mean PEC-Q ranges increased. The incidence of toxicity observed in these tests was also compared to that for other geographic areas in the Great Lakes region and in North America for 10- to 14-day amphipod (H. azteca) and 10- to 14-day midge (C. tentans or C. riparius) toxicity tests. In general, the predictive ability of the mean PEC-Qs was similar across geographic areas. The results of these predictive ability evaluations indicate that collectively the mean PEC-Qs provide a reliable basis for classifying sediments as toxic or not toxic in the St. Louis River AOC, in the larger geographic areas of the Great Lakes, and elsewhere in North America.
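
    The mean PEC quotient underlying the ranges above is straightforward to compute: average the ratios of measured concentrations to their probable effect concentrations over the analysed contaminants. A minimal sketch follows; the concentrations are invented and the PEC values are placeholders to be replaced by the consensus-based guideline tables.

        # Mean probable effect concentration quotient (mean PEC-Q).
        # PECs below are placeholders; use the consensus-based guideline values.
        pec = {"cadmium": 4.98, "lead": 128.0, "zinc": 459.0}    # mg/kg dry wt
        conc = {"cadmium": 2.4, "lead": 210.0, "zinc": 380.0}    # measured, mg/kg

        quotients = [conc[m] / pec[m] for m in pec]
        mean_pec_q = sum(quotients) / len(quotients)
        print(f"mean PEC-Q = {mean_pec_q:.2f}")  # classify against Level II ranges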

  19. 3D models of slow motions in the Earth's crust and upper mantle in the source zones of seismically active regions and their comparison with highly accurate observational data: II. Results of numerical calculations

    NASA Astrophysics Data System (ADS)

    Molodenskii, S. M.; Molodenskii, M. S.; Begitova, T. A.

    2016-09-01

    In the first part of the paper, a new method was developed for solving the inverse problem of coseismic and postseismic deformations in the real (imperfectly elastic, radially and horizontally heterogeneous, self-gravitating) Earth with hydrostatic initial stresses from highly accurate modern satellite data. The method is based on the decomposition of the sought parameters in an orthogonalized basis. The method was suggested for estimating the ambiguity of the solution of the inverse problem for coseismic and postseismic deformations. For obtaining this estimate, the orthogonal complement is constructed to the n-dimensional space spanned by the system of functional derivatives of the residuals in the system of n observed and model data on the coseismic and postseismic displacements at a variety of sites on the ground surface with small variations in the models. Below, we present the results of the numerical modeling of the elastic displacements of the ground surface, which were based on calculating Green's functions of the real Earth for the plane dislocation surface and different orientations of the displacement vector as described in part I of the paper. The calculations were conducted for the model of a horizontally homogeneous but radially heterogeneous self-gravitating Earth with hydrostatic initial stresses and the mantle rheology described by the Lomnitz logarithmic creep function according to (M. Molodenskii, 2014). We compare our results with the previous numerical calculations (Okada, 1985; 1992) for the simplest model of a perfectly elastic nongravitating homogeneous Earth. It is shown that with source depths starting from the first hundreds of kilometers and with magnitudes of about 8.0 and higher, the discrepancies significantly exceed the errors of the observations and should therefore be taken into account. We present examples of the numerical calculations of the creep function of the crust and upper mantle for the coseismic deformations.

  20. Ergonomic design of beverage can lift tabs based on numerical evaluations of fingertip discomfort.

    PubMed

    Han, Jing; Nishiyama, Sadao; Yamazaki, Koetsu; Itoh, Ryouiti

    2008-03-01

    This paper introduces finite element analyses to evaluate numerically and objectively the feelings in the fingertip when opening aluminum beverage cans, in order to design the shape of the tab. Experiments of indenting the fingertip pulp vertically with a probe and with tabs of aluminum beverage can ends have allowed us to observe force responses and feelings in the fingertip. It was found that a typical force-displacement curve may be simplified as a combination of three curves with different gradients. Participants feel a touch at Curve 1 of the force-displacement curve, then feel pressure and their pulse at Curve 2, and finally feel discomfort followed by pain in the fingertip at Curve 3. Finite element analyses have been performed to simulate indenting the tab with the fingertip vertically, confirming that the simulation results agree well with the experimental observations. Finally, numerical simulations of the finger pulling up the tab of the can end have also been performed, and discomfort in the fingertip has been related to the maximum value of the contact stress of the finger model. Comparisons of three designs of tab ring shape showed that a tab with a larger contact area with the finger is better.
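
    A minimal sketch of the simplified force-displacement description above: a continuous piecewise-linear curve with three gradients, whose breakpoints mark the transitions from touch to pressure to discomfort. Breakpoints and slopes are assumed for illustration, not fitted to the experiments.

        import numpy as np

        def fingertip_force(d, d1=0.5, d2=2.0, k1=0.2, k2=1.0, k3=4.0):
            """Force (N) vs indentation d (mm) as three joined linear segments.

            Curve 1 (touch): slope k1 up to d1; Curve 2 (pressure): slope k2 up
            to d2; Curve 3 (discomfort/pain): slope k3 beyond d2.
            """
            f1 = k1 * np.minimum(d, d1)
            f2 = k2 * np.clip(d - d1, 0.0, d2 - d1)
            f3 = k3 * np.maximum(d - d2, 0.0)
            return f1 + f2 + f3

        d = np.linspace(0.0, 3.0, 7)
        print(np.column_stack([d, fingertip_force(d)]))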

  1. SEQUESTRATION OF METALS IN ACTIVE CAP MATERIALS: A LABORATORY AND NUMERICAL EVALUATION

    SciTech Connect

    Dixon, K.; Knox, A.

    2012-02-13

    Active capping involves the use of capping materials that react with sediment contaminants to reduce their toxicity or bioavailability. Although several amendments have been proposed for use in active capping systems, little is known about their long-term ability to sequester metals. Recent research has shown that the active amendment apatite has potential application for metals contaminated sediments. The focus of this study was to evaluate the effectiveness of apatite in the sequestration of metal contaminants through the use of short-term laboratory column studies in conjunction with predictive, numerical modeling. A breakthrough column study was conducted using North Carolina apatite as the active amendment. Under saturated conditions, a spike solution containing elemental As, Cd, Co, Se, Pb, Zn, and a non-reactive tracer was injected into the column. A sand column was tested under similar conditions as a control. Effluent water samples were periodically collected from each column for chemical analysis. Relative to the non-reactive tracer, the breakthrough of each metal was substantially delayed by the apatite. Furthermore, breakthrough of each metal was substantially delayed by the apatite compared to the sand column. Finally, a simple 1-D, numerical model was created to qualitatively predict the long-term performance of apatite based on the findings from the column study. The results of the modeling showed that apatite could delay the breakthrough of some metals for hundreds of years under typical groundwater flow velocities.
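
    A minimal sketch of the 1-D reasoning behind such delayed breakthrough: with linear equilibrium sorption, the contaminant front travels slower than the groundwater by the retardation factor R = 1 + (rho_b / theta) * Kd. The sorption and transport parameters are illustrative assumptions, not measured apatite values.

        # Retardation of a metal front relative to a non-reactive tracer.
        rho_b = 1.6    # bulk density, kg/L (assumed)
        theta = 0.35   # porosity (assumed)
        Kd = 50.0      # distribution coefficient for the amendment, L/kg (assumed)
        v = 0.1        # groundwater seepage velocity, m/day (assumed)
        L = 10.0       # travel distance through the cap, m

        R = 1.0 + rho_b * Kd / theta   # retardation factor
        t_tracer = L / v               # tracer breakthrough time, days
        t_metal = R * L / v            # retarded metal breakthrough time, days
        print(f"R = {R:.0f}; tracer: {t_tracer:.0f} d; metal: {t_metal / 365.25:.1f} yr")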

  2. [Evaluation of Vessel Depictability in Compressed Sensing MR Angiography Using Numerical Phantom Model].

    PubMed

    Saito, Toshiki; Machida, Yoshio; Miyamoto, Kota; Ichinoseki, Yuki

    2015-11-01

    As an acceleration technique for use with magnetic resonance imaging (MRI), compressed sensing MRI (CS-MRI) was introduced recently to obtain MR images from undersampled k-space data. Images generated with a nonlinear iterative procedure, based on sophisticated theory from informatics that exploits data sparsity, have complicated characteristics. Therefore, the factors affecting image quality (IQ) in CS-MRI must be elucidated. This article specifically describes the examination of the IQ of clinically important MR angiography (MRA). For MRA, the depictability of thin blood vessels is extremely important, but quantitative evaluation of thin blood vessel depictability is difficult. Therefore, we conducted numerical experiments using a simple numerical phantom model mimicking the cerebral arteries, so that the experimental conditions, including the thin vessel positions, can be prescribed. Results show that vessel depictability changed depending on the noise intensity when the wavelet transform was used as the sparsifying transform. Decreased vessel depictability might present difficulties at clinical signal-to-noise ratio (SNR) levels. Therefore, selecting data acquisition and reconstruction conditions carefully in terms of the SNR is crucially important for CS-MRI studies. PMID:26596199
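
    A minimal sketch of the kind of nonlinear iterative reconstruction at issue (not the article's implementation): iterative soft-thresholding (ISTA) applied to Fourier-undersampled data, with sparsity imposed directly in the image domain for brevity instead of through the wavelet transform used in the study. The phantom, mask and parameters are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 64
        x_true = np.zeros((n, n))
        x_true[20:44, 30:34] = 1.0                   # a thin "vessel"

        mask = rng.random((n, n)) < 0.35             # random k-space undersampling
        y = mask * np.fft.fft2(x_true) / n           # measured k-space (unitary FFT)

        def soft(z, t):                              # complex soft-thresholding
            mag = np.abs(z)
            return z / np.maximum(mag, 1e-12) * np.maximum(mag - t, 0.0)

        lam = 0.02
        x = np.zeros((n, n), dtype=complex)
        for _ in range(200):                         # ISTA: gradient step + shrink
            resid = mask * (y - mask * np.fft.fft2(x) / n)
            x = soft(x + n * np.fft.ifft2(resid), lam)

        err = np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true)
        print(f"relative reconstruction error: {err:.3f}")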

  3. Numerical simulation and fracture evaluation method of dual laterolog in organic shale

    NASA Astrophysics Data System (ADS)

    Tan, Maojin; Wang, Peng; Li, Jun; Liu, Qiong; Yang, Qinshan

    2014-01-01

    Fracture identification and parameter evaluation are important for the logging interpretation of organic shale, especially fracture evaluation from conventional logs in cases where imaging logs are not available. It is therefore helpful to study the dual laterolog responses of fractured shale reservoirs. First, a physical model is set up according to the properties of organic shale, and a three-dimensional finite element method (FEM) based on the principle of the dual laterolog is introduced and applied to simulate dual laterolog responses in various shale models, which can help identify fractures in shale formations. Then, through a number of numerical simulations of the dual laterolog for various shale models with different base rock resistivities and fracture openings, the corresponding equations for the various cases are constructed, and the fracture porosity can be calculated consequently. Finally, we apply the methodology proposed above to a case study of organic shale, and the fracture porosity and fracture opening are calculated. The results are consistent with the fracture parameters processed from Full borehole Micro-resistivity Imaging (FMI). This indicates that the method is applicable for fracture evaluation of organic shale.

  4. Evaluation and Numerical Simulation of Tsunami for Coastal Nuclear Power Plants of India

    SciTech Connect

    Sharma, Pavan K.; Singh, R.K.; Ghosh, A.K.; Kushwaha, H.S.

    2006-07-01

    The recent tsunami generated on December 26, 2004 by the Sumatra earthquake of magnitude 9.3 resulted in inundation at various coastal sites of India. The site selection and design of Indian nuclear power plants demand the evaluation of run-up and of the structural barriers for the coastal plants. Besides, it is also desirable to evaluate the early warning system for tsunamigenic earthquakes. Tsunamis originate from submarine faults, underwater volcanic activities, sub-aerial landslides impinging on the sea and submarine landslides. In the case of a submarine earthquake-induced tsunami, the wave is generated in the fluid domain due to displacement of the seabed. There are three phases of a tsunami: generation, propagation, and run-up. The Reactor Safety Division (RSD) of Bhabha Atomic Research Centre (BARC), Trombay has initiated computational simulation of all three phases of tsunami: source generation, propagation and finally run-up evaluation, for the protection of public life, property and the various industrial infrastructures located in the coastal regions of India. These studies could be effectively utilized for the design and implementation of an early warning system for the coastal region of the country, apart from catering to the needs of Indian nuclear installations. This paper presents some results for tsunami waves based on different analytical/numerical approaches with shallow water wave theory. (authors)

  5. Numerical evaluation of the radiation from unbaffled, finite plates using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.

    1983-01-01

    An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.
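
    A minimal sketch of the angular-spectrum step at the heart of such a calculation, shown here for the simpler baffled forward propagation only (the unbaffled problem wraps this step in the iteration described above). The FFT converts the surface velocity to plane-wave amplitudes, which are weighted and propagated analytically; all parameters are illustrative.

        import numpy as np

        rho, c, f = 1.21, 343.0, 2000.0      # air density, sound speed, frequency
        k = 2.0 * np.pi * f / c
        N, dx = 256, 2.0e-3                  # grid points and spacing (m)

        x = (np.arange(N) - N // 2) * dx
        X, Y = np.meshgrid(x, x)
        v = np.where(X**2 + Y**2 <= 0.05**2, 1.0, 0.0)  # 5 cm piston at 1 m/s

        V = np.fft.fft2(v)                               # angular spectrum of velocity
        kx = 2.0 * np.pi * np.fft.fftfreq(N, dx)
        KX, KY = np.meshgrid(kx, kx)
        kz = np.sqrt(k**2 - KX**2 - KY**2 + 0j)          # complex sqrt handles evanescence

        z = 0.05                                         # evaluation height, m
        P = rho * c * (k / (kz + 1e-12)) * V * np.exp(1j * kz * z)
        p = np.fft.ifft2(P)                              # pressure field at height z
        print("peak |p| (Pa):", np.abs(p).max())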

  6. Evaluation of Temperature Gradient in Advanced Automated Directional Solidification Furnace (AADSF) by Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1996-01-01

    A numerical model of heat transfer combining conduction, radiation and convection in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot and cold zone set point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reverse usage of the hot and cold zones was simulated to aid the choice of the proper orientation of the crystal/melt interface with respect to the residual acceleration vector, without actually changing the furnace location on board the orbiter. It appears that an additional booster heater would be extremely helpful to ensure the desired temperature gradient when the hot and cold zones are reversed. Further efforts are required to investigate the advantages/disadvantages of a symmetrical furnace design (i.e., with similar lengths of the hot and cold zones).

  7. Review of nonlinear ultrasonic guided wave nondestructive evaluation: theory, numerics, and experiments

    NASA Astrophysics Data System (ADS)

    Chillara, Vamshi Krishna; Lissenden, Cliff J.

    2016-01-01

    Interest in using the higher harmonic generation of ultrasonic guided wave modes for nondestructive evaluation continues to grow tremendously as the understanding of nonlinear guided wave propagation has enabled further analysis. The combination of the attractive properties of guided waves with those of higher harmonic generation provides a unique potential for characterization of incipient damage, particularly in plate and shell structures. Guided waves can propagate relatively long distances, provide access to hidden structural components, have various displacement polarizations, and provide many opportunities for mode conversions due to their multimode character. Moreover, higher harmonic generation is sensitive to changing aspects of the microstructure such as dislocation density, precipitates, inclusions, and voids. We review the recent advances in the theory of nonlinear guided waves, as well as the numerical simulations and experiments that demonstrate their utility.

  8. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.

  9. Design and numerical evaluation of a volume coil array for parallel MR imaging at ultrahigh fields

    PubMed Central

    Pang, Yong; Wong, Ernest W.H.; Yu, Baiying

    2014-01-01

    In this work, we propose and investigate a volume coil array design method using different types of birdcage coils for MR imaging. Unlike conventional radiofrequency (RF) coil arrays, whose array elements are surface coils, the proposed volume coil array consists of a set of independent volume coils including a conventional birdcage coil, a transverse birdcage coil, and a helix birdcage coil. The magnetic fluxes of these three birdcage coils are intrinsically cancelled, yielding a highly decoupled volume coil array. In contrast to conventional non-array volume coils, the volume coil array would be beneficial in improving the MR signal-to-noise ratio (SNR) and would also gain the capability of implementing parallel imaging. The volume coil array is evaluated at the ultrahigh field of 7T using FDTD numerical simulations, and the g-factor map at different acceleration rates was calculated to investigate its parallel imaging performance. PMID:24649435

  10. Numerical evaluation of a 13.5-nm high-brightness microplasma extreme ultraviolet source

    SciTech Connect

    Hara, Hiroyuki; Arai, Goki; Dinh, Thanh-Hung; Higashiguchi, Takeshi; Jiang, Weihua; Miura, Taisuke; Endo, Akira; Ejima, Takeo; Li, Bowen; Dunne, Padraig; O'Sullivan, Gerry; Sunahara, Atsushi

    2015-11-21

    The extreme ultraviolet (EUV) emission and its spatial distribution as well as plasma parameters in a microplasma high-brightness light source are characterized by the use of a two-dimensional radiation hydrodynamic simulation. The expected EUV source size, which is determined by the expansion of the microplasma due to hydrodynamic motion, was evaluated to be 16 μm (full width) and was almost reproduced by the experimental result which showed an emission source diameter of 18–20 μm at a laser pulse duration of 150 ps [full width at half-maximum]. The numerical simulation suggests that high brightness EUV sources should be produced by use of a dot target based microplasma with a source diameter of about 20 μm.

  11. Numerical evaluation of the Bose-ghost propagator in minimal Landau gauge on the lattice

    NASA Astrophysics Data System (ADS)

    Cucchieri, Attilio; Mendes, Tereza

    2016-07-01

    We present numerical details of the evaluation of the so-called Bose-ghost propagator in lattice minimal Landau gauge, for the SU(2) case in four Euclidean dimensions. This quantity has been proposed as a carrier of the confining force in the Gribov-Zwanziger approach and, as such, its infrared behavior could be relevant for the understanding of color confinement in Yang-Mills theories. Also, its nonzero value can be interpreted as direct evidence of Becchi-Rouet-Stora-Tyutin-symmetry breaking, which is induced when restricting the functional measure to the first Gribov region Ω. Our simulations are done for lattice volumes up to 120^4 and for physical lattice extents up to 13.5 fm. We investigate the infinite-volume and continuum limits.

  12. Numerical evaluation of AC loss properties in assembled superconductor strips exposed to perpendicular magnetic field

    NASA Astrophysics Data System (ADS)

    Kajikawa, K.; Funaki, K.; Shikimachi, K.; Hirano, N.; Nagaya, S.

    2009-10-01

    AC losses in superconductor strips assembled face-to-face are numerically evaluated by means of a finite element method. The external magnetic field is applied perpendicular to their flat faces. It is also assumed that the superconductor strips have voltage-current characteristics represented by the critical state model with constant critical current density. The influences of the number of strips and of the gap length between strips on the losses are quantitatively discussed, in comparison with the conventional theoretical expressions for some special cases, in order to isolate the geometrical effects on the perpendicular-field losses in actual assembled conductors with finite numbers of Y-based superconducting tapes.

  13. Numerical surrogates for human observers in myocardial motion evaluation from SPECT images

    PubMed Central

    Marin, Thibault; Kalayeh, Mahdi M.; Parages, Felipe M.; Brankov, Jovan G.

    2014-01-01

    In medical imaging, the gold standard for image-quality assessment is a task-based approach in which one evaluates human observer performance for a given diagnostic task (e.g., detection of a myocardial perfusion or motion defect). To facilitate practical task-based image-quality assessment, model observers are needed as approximate surrogates for human observers. In cardiac-gated SPECT imaging, diagnosis relies on evaluation of the myocardial motion as well as perfusion. Model observers for the perfusion-defect detection task have been studied previously, but little effort has been devoted toward development of a model observer for cardiac-motion defect detection. In this work we describe two model observers for predicting human observer performance in detection of cardiac-motion defects. Both proposed methods rely on motion features extracted using a previously reported deformable mesh model for myocardial motion estimation. The first method is based on a Hotelling linear discriminant that is similar in concept to that used commonly for perfusion-defect detection. In the second method, based on relevance vector machines (RVM) for regression, we compute average human observer performance by first directly predicting individual human observer scores, and then using multi-reader receiver operating characteristic (ROC) analysis. Our results suggest that the proposed RVM model observer can predict human observer performance accurately, while the new Hotelling motion-defect detector is somewhat less effective. PMID:23981533
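
    A minimal sketch of a Hotelling linear discriminant of the general kind described, applied to synthetic motion-feature vectors (the features, data and dimensions here are placeholders, not the paper's mesh-model features).

        import numpy as np

        rng = np.random.default_rng(1)
        d, n = 8, 200                        # feature dimension, cases per class
        X0 = rng.normal(0.0, 1.0, (n, d))    # defect-absent feature vectors
        X1 = rng.normal(0.4, 1.0, (n, d))    # defect-present feature vectors

        # Hotelling template: pooled covariance inverse times class-mean difference
        S = 0.5 * (np.cov(X0.T) + np.cov(X1.T))
        w = np.linalg.solve(S, X1.mean(axis=0) - X0.mean(axis=0))

        t0, t1 = X0 @ w, X1 @ w              # observer test statistics
        # AUC via the Mann-Whitney statistic: fraction of correctly ordered pairs
        auc = (t1[:, None] > t0[None, :]).mean()
        print(f"Hotelling observer AUC = {auc:.3f}")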

  14. Evaluating the effects of climate changes on Patos Lagoon's hydrodynamics using numerical modeling techniques, Brazil

    NASA Astrophysics Data System (ADS)

    Barros, G. P.; Marques, W. C.

    2013-05-01

    Estuarine circulation is normally controlled by wind action, tides and freshwater discharge. Since it is located in a microtidal region, wind is the most effective forcing controlling Patos Lagoon's circulation over synoptic timescales. However, on interannual timescales, precipitation and freshwater discharge are the most effective forcings in the region. The south region of Brazil shows precipitation anomalies associated with the occurrence of ENSO events: in El Niño years, spring tends to be wetter, and in La Niña years drought anomalies occur. Analyzing freshwater discharge time series from 1940 to 2006, it was observed that the non-linear trend term indicates a pattern with values normally above (below) the mean after (before) 1973. An increasing trend starting after 1970 possibly indicates a longer-term cycle influencing the interannual variability of the Patos Lagoon discharge. The objective of this study is to investigate the influence of freshwater discharge on the hydrodynamic circulation using a three-dimensional numerical model. The model used is TELEMAC3D, developed by the Laboratoire National d'Hydraulique of Electricité de France (EDF), and it is based on the finite element method. This numerical model has been widely used in the study area to describe estuarine circulation, morphodynamic processes and sediment dispersion. Boundary conditions are created using freshwater discharge data, salinity, temperature, ocean current velocity and direction, as well as wind and air temperature data. Two numerical simulations were performed using the same boundary conditions, except for the freshwater discharge. Two different climatic monthly means of freshwater discharge were used, one from 1940 to 1973 and the other from 1973 to 2006, in order to evaluate the evolution of water levels, salinity and current velocities, as well as to identify the influence of these parameters on the circulation and exchange processes between the estuarine and coastal zones.

  15. Evaluation of Site Effects Using Numerical and Experimental Analyses in Città di Castello (Italy)

    NASA Astrophysics Data System (ADS)

    Pergalani, F.; de Franco, R.; Compagnoni, M.; Caielli, G.

    In this paper the results of numerical and experimental analyses aimed at the evaluation of site effects at a site in the Umbria Region (Città di Castello - PG) are shown. The aim of the work was to compare the two types of analysis, in order to provide methodologies that may be used at the level of urban planning to take these aspects into account. Therefore a series of geologic, geomorphologic (1:5,000 scale), geotechnical and seismic analyses have been carried out to identify the areas affected by local effects and to characterize the lithotechnical units. The expected seismic inputs have been identified and 2D numerical analyses (Quad4M, Hudson et al., 1993) have been performed. An experimental analysis, using recordings of small events, has also been carried out. The results of the two approaches were expressed in terms of elastic pseudo-acceleration spectra and amplification factors, defined as the ratio between the spectral intensities (Housner, 1952) of output and input, calculated from the pseudo-velocity spectra over the period ranges 0.1-0.5 s and 0.1-2.5 s. The results have been analyzed and compared in order to arrive at a methodology that is exhaustive and precise. The conclusions can be summarized in the following points: (i) the results of the two approaches are coherent; (ii) the approaches differ in that the numerical analysis is easy and quick to use but, in this case, the 2D analysis simplifies the real geometry, whereas the experimental analysis allows the 3D conditions to be considered but, since the recorded events have low energy, it cannot capture the non-linear behavior of the materials, and the recordings must span a period that depends on the seismicity of the region (one month to two years); (iii) integrating the two methodologies allows a complete analysis to be performed, exploiting the advantages of both methods. Housner G.W. (1952). Spectrum intensities of strong-motion earthquakes.
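
    A minimal sketch of the amplification factor defined above: the Housner spectral intensity is the integral of the pseudo-velocity spectrum over a period band, and the factor is the output/input ratio of those integrals. The spectra below are placeholder arrays, not data from the study.

        import numpy as np

        def spectral_intensity(T, psv, t_min, t_max):
            """Housner spectral intensity: integral of PSV(T) over [t_min, t_max]."""
            sel = (T >= t_min) & (T <= t_max)
            Ts, Ps = T[sel], psv[sel]
            return np.sum(0.5 * (Ps[1:] + Ps[:-1]) * np.diff(Ts))  # trapezoid rule

        T = np.linspace(0.05, 3.0, 300)    # periods, s
        psv_in = 0.10 * np.ones_like(T)    # placeholder input PSV, m/s
        psv_out = psv_in * (1.0 + 1.5 * np.exp(-((T - 0.4) / 0.2) ** 2))

        for band in [(0.1, 0.5), (0.1, 2.5)]:
            fa = spectral_intensity(T, psv_out, *band) / spectral_intensity(T, psv_in, *band)
            print(f"amplification factor over {band} s: {fa:.2f}")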

  16. Numerical evaluation of a fixed-amplitude variable-phase integral.

    SciTech Connect

    Lyness, J. N.; Mathematics and Computer Science

    2008-01-01

    We treat the evaluation of a fixed-amplitude variable-phase integral of the form ∫_a^b exp[ikG(x)] dx, where G′(x) ≥ 0 and G′ has moderate differentiability in the integration interval. In particular, we treat in detail the case in which G′(a) = G′(b) = 0, but G″(a)G″(b) < 0. For this, we re-derive a standard asymptotic expansion in inverse half-integer powers of k. This derivation is direct, making no explicit appeal to the theories of stationary phase or steepest descent. It provides straightforward expressions for the coefficients in the expansion in terms of derivatives of G at the end-points. Thus it can be used to evaluate the integrals numerically in cases where k is large. We indicate the generalizations to the theory required to cover cases where the oscillator function G has higher-order zeros at either or both end-points, but this is not treated in detail. In the simpler case in which G′(a)G′(b) > 0, the same approach would recover a special case of a recent result due to Iserles and Norsett.
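
    For moderate k the integral can also be checked by brute force, splitting it into real and imaginary parts; a minimal sketch follows with an assumed example phase satisfying the end-point conditions treated above (this is a generic quadrature check, not the paper's expansion).

        import numpy as np
        from scipy.integrate import quad

        # Example phase on [0, 1]: G(x) = x^2 (3 - 2x) gives G'(0) = G'(1) = 0
        # with G''(0) = 6 > 0 and G''(1) = -6 < 0, and G'(x) >= 0 in between.
        G = lambda x: x**2 * (3.0 - 2.0 * x)

        def fixed_amplitude_integral(k):
            re, _ = quad(lambda x: np.cos(k * G(x)), 0.0, 1.0, limit=800)
            im, _ = quad(lambda x: np.sin(k * G(x)), 0.0, 1.0, limit=800)
            return re + 1j * im

        for k in (10.0, 100.0, 1000.0):
            val = fixed_amplitude_integral(k)
            # leading end-point contributions suggest decay roughly like k**-0.5
            print(k, val, abs(val) * np.sqrt(k))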

  17. Evaluation of the chondral modeling theory using fe-simulation and numeric shape optimization

    PubMed Central

    Plochocki, Jeffrey H; Ward, Carol V; Smith, Douglas E

    2009-01-01

    The chondral modeling theory proposes that hydrostatic pressure within articular cartilage regulates joint size, shape, and congruence through regional variations in rates of tissue proliferation. The purpose of this study is to develop a computational model using a nonlinear two-dimensional finite element analysis in conjunction with numeric shape optimization to evaluate the chondral modeling theory. The model employed in this analysis is generated from an MR image of the medial portion of the tibiofemoral joint in a subadult male. Stress-regulated morphological changes are simulated until skeletal maturity and evaluated against the chondral modeling theory. The computed results are found to support the chondral modeling theory. The shape-optimized model exhibits increased joint congruence, broader stress distributions in articular cartilage, and a relative decrease in joint diameter. The results for the computational model correspond well with experimental data and provide valuable insights into the mechanical determinants of joint growth. The model also provides a crucial first step toward developing a comprehensive model that can be employed to test the influence of mechanical variables on joint conformation. PMID:19438771

  18. Numerical modeling of geothermal heat pump system: evaluation of site specific groundwater thermal impact

    NASA Astrophysics Data System (ADS)

    Pedron, Roberto; Sottani, Andrea; Vettorello, Luca

    2014-05-01

    A pilot plant using a geothermal open-loop heat pump system has been realized in the city of Vicenza (Northern Italy) in order to meet the heating and cooling needs of the main monumental building in the historical center, the Palladian Basilica. The low-enthalpy geothermal system consists of a pumping well and a reinjection well, both intercepting the same confined aquifer; three other monitoring wells have been drilled and provided with water level and temperature dataloggers. After about a year and a half of operation, within a starting experimental period of three years, we now have the opportunity to analyze long-term groundwater temperature data series and to evaluate the reliability of the numerical model's thermal impact predictions. The initial model, based on the MODFLOW and SHEMAT finite difference codes, was calibrated using pumping tests and other field investigation data, yielding a valid and reliable groundwater flow simulation. Thermal parameters such as thermal conductivity and volumetric heat capacity, however, were not directly estimated on site and were therefore assigned to model cells from bibliographic standards, usually derived from laboratory tests and only roughly representative of real aquifer properties. Nevertheless, preliminary heat transport results have been compared with observed temperature trends, showing an efficient representation of the thermal plume extension and shape. The ante operam simulation could not account for the actual heat pump utilization, which turned out to be considerably different from the expected design values, so the first numerical model could not properly simulate the groundwater temperature evolution. Consequently, a second model has been implemented in order to calibrate the mathematical simulation against the monitored groundwater temperature datasets, trying to achieve a higher level of reliability in the interpretation of heat transport phenomena. This second-step analysis focuses on aquifer thermal parameters.

  19. Evaluating time-lapse ERT for monitoring DNAPL remediation via numerical simulation

    NASA Astrophysics Data System (ADS)

    Power, C.; Karaoulis, M.; Gerhard, J.; Tsourlos, P.; Giannopoulos, A.

    2012-12-01

    Dense non-aqueous phase liquids (DNAPLs) remain a challenging geoenvironmental problem in the near subsurface. Numerous thermal, chemical, and biological treatment methods are being applied at sites, but without a non-destructive, rapid technique to map the evolution of DNAPL mass in space and time, the degree of remedial success is difficult to quantify. Electrical resistivity tomography (ERT) has long been presented as highly promising in this context but has not yet become a practitioner's tool, due to challenges in interpreting the survey results at real sites where the initial condition (DNAPL mass, DNAPL distribution, subsurface heterogeneity) is typically unknown. Recently, a new numerical model was presented that couples DNAPL and ERT simulation at the field scale, providing a tool for optimizing ERT application and interpretation at DNAPL sites (Power et al., 2011, Fall AGU, H31D-1191). The objective of this study is to employ this tool to evaluate the effectiveness of time-lapse ERT for monitoring DNAPL source zone remediation, taking advantage of new inversion methodologies that exploit the differences in the target over time. Several three-dimensional releases of chlorinated solvent DNAPLs into heterogeneous clayey sand at the field scale were generated, varying in the depth and complexity of the source zone (target). Over time, dissolution of the DNAPL in groundwater was simulated with simultaneous mapping via periodic ERT surveys. Both surface and borehole ERT surveys were conducted for comparison purposes. The latest four-dimensional ERT inversion algorithms were employed to generate time-lapse isosurfaces of the DNAPL source zone for all cases. This methodology provided a qualitative assessment of the ability of ERT to track DNAPL mass removal for complex source zones in realistically heterogeneous environments. In addition, it provided a quantitative comparison between the actual DNAPL mass removed and that interpreted by ERT as a function of depth below ground surface.

  20. A New Look at Stratospheric Sudden Warmings. Part II: Evaluation of Numerical Model Simulations

    NASA Technical Reports Server (NTRS)

    Charlton, Andrew J.; Polvani, Lorenza M.; Perlwitz, Judith; Sassi, Fabrizio; Manzini, Elisa; Shibata, Kiyotaka; Pawson, Steven; Nielsen, J. Eric; Rind, David

    2007-01-01

    The simulation of major midwinter stratospheric sudden warmings (SSWs) in six stratosphere-resolving general circulation models (GCMs) is examined. The GCMs are compared to a new climatology of SSWs, based on the dynamical characteristics of the events. First, the number, type, and temporal distribution of SSW events are evaluated. Most of the models show a lower frequency of SSW events than the climatology, which has a mean frequency of 6.0 SSWs per decade. Statistical tests show that three of the six models produce significantly fewer SSWs than the climatology, between 1.0 and 2.6 SSWs per decade. Second, four process-based diagnostics are calculated for all of the SSW events in each model. It is found that SSWs in the GCMs compare favorably with the dynamical benchmarks for SSWs established in the first part of the study. These results indicate that GCMs are capable of quite accurately simulating the dynamics required to produce SSWs, but with lower frequency than the climatology. Further dynamical diagnostics hint that, in at least one case, this is due to a lack of meridional heat flux in the lower stratosphere. Even though the SSWs simulated by most GCMs are dynamically realistic when compared to the NCEP-NCAR reanalysis, the reasons for the relative paucity of SSWs in GCMs remain an important and open question.

  1. Numerical Evaluation of Fluid Mixing Phenomena in Boiling Water Reactor Using Advanced Interface Tracking Method

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Takase, Kazuyuki

    Thermal-hydraulic design of current boiling water reactors (BWRs) is performed with subchannel analysis codes incorporating correlations based on empirical results, including actual-size tests. For the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) core, an actual-size test of an embodiment of its design would then be required to confirm or modify such correlations. In this situation, development of a method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is desired, because such tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code using an advanced interface tracking method, TPFIT, was developed to calculate detailed information on the two-phase flow. In this paper, we first verify the TPFIT code by comparing it with existing 2-channel air-water mixing experimental results. Secondly, the TPFIT code was applied to the simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing is evaluated using the detailed numerical simulation data. These data indicate that the pressure difference between fluid channels is responsible for the fluid mixing, and thus the effects of the time-averaged pressure difference and its fluctuations must be incorporated in the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the evaluation precision of the existing two-phase flow correlations for fluid mixing is found to be relatively low.

  2. Numerical Evaluation Of Shape Memory Alloy Recentering Braces In Reinforced Concrete Buildings Subjected To Seismic Loading

    NASA Astrophysics Data System (ADS)

    Charles, Winsbert Curt

    Seismic protective techniques utilizing specialized energy dissipation devices within the lateral resisting frames have been successfully used to limit inelastic deformation in reinforced concrete buildings by increasing damping and/or altering the stiffness of these structures. However, there is a need to investigate and develop systems with self-centering capabilities; systems that are able to assist in returning a structure to its original position after an earthquake. In this project, the efficacy of a shape memory alloy (SMA) based device as a structural recentering device is evaluated through numerical analysis using the OpenSees framework. OpenSees is a software framework for simulating the seismic response of structural and geotechnical systems, and it has been developed as the computational platform for research in performance-based earthquake engineering at the Pacific Earthquake Engineering Research Center (PEER). A non-ductile reinforced concrete building, modelled in OpenSees and verified with available experimental data, is used for the analysis in this study. The model is fitted with tension/compression (TC) SMA devices. The performance of the SMA recentering device is evaluated for a set of near-field and far-field ground motions. Critical performance measures of the analysis include residual displacements, inter-story drift and acceleration (horizontal and vertical) for different types of ground motions. The results show that the TC device's performance is unaffected by the type of ground motion. The analysis also shows that the inclusion of the device in the lateral force resisting system of the building resulted in a 50% decrease in peak horizontal displacement and inter-story drift and the elimination of residual deformations, while acceleration increased by up to 110%.

  3. Numerical and experimental investigations for the evaluation of the wear coefficient of reverse total shoulder prostheses.

    PubMed

    Mattei, Lorenza; Di Puccio, Francesca; Joyce, Thomas J; Ciulli, Enrico

    2015-03-01

    In the present study, numerical and experimental wear investigations of reverse total shoulder arthroplasties (RTSAs) were combined in order to estimate specific wear coefficients, currently not available in the literature. A wear model previously developed by the authors for metal-on-plastic hip implants was adapted to RTSAs and applied in two directions: firstly, to evaluate specific wear coefficients for RTSAs from experimental results, and secondly, to predict the wear distribution. In both cases, the Archard wear law (AR) and the wear law of UHMWPE (PE) were considered, assuming four different k functions. The results indicated that both wear laws predict higher wear coefficients for RTSAs than for hip implants, particularly the AR law, with k values more than twice the hip ones. Such differences can significantly affect the results of predictive wear models for RTSAs when non-specific wear coefficients are used. Moreover, the wear maps simulated with the two laws are markedly different, although they provide the same wear volume. A higher wear depth (+51%) is obtained with the AR law, located at the dome of the cup, while with the PE law the most worn region is close to the edge. Taking advantage of the linear trend of the experimental volume losses, the wear coefficients obtained with the AR law should be valid despite the geometry update having been neglected in the model.
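
    A minimal sketch of how a dimensional wear coefficient can be pulled from a linear volume-loss trend under the Archard form V = k·F·s: fit the slope of volume against sliding distance and divide by the load. The numbers are placeholders, not the RTSA data.

        import numpy as np

        # Cumulative sliding distance (m) and volume loss (mm^3): placeholders
        s = np.array([0.0, 0.5e6, 1.0e6, 1.5e6, 2.0e6])
        V = np.array([0.0, 12.0, 25.0, 36.0, 49.0])
        F = 430.0                       # applied contact load, N (assumed)

        slope = np.polyfit(s, V, 1)[0]  # mm^3 per metre of sliding
        k = slope / F                   # Archard wear law: V = k * F * s
        print(f"k = {k:.2e} mm^3/(N m)")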

  4. Numerical simulation of small perturbation transonic flows

    NASA Technical Reports Server (NTRS)

    Seebass, A. R.; Yu, N. J.

    1976-01-01

    The results of a systematic study of small perturbation transonic flows are presented. Both the flow over thin airfoils and the flow over wedges were investigated. Various numerical schemes were employed in the study. The prime goal of the research was to determine the efficiency of various numerical procedures by accurately evaluating the wave drag, both by computing the pressure integral around the body and by integrating the momentum loss across the shock. Numerical errors involved in the computations that affect the accuracy of drag evaluations were analyzed. The factors that affect numerical stability and the rate of convergence of the iterative schemes were also systematically studied.

  5. Evaluation of the influence of operating mode on the CVC of GaN HEMTs using numerical modeling

    NASA Astrophysics Data System (ADS)

    Parnes, Ya M.; Tikhomirov, V. G.; Petrov, V. A.; Gudkov, A. G.; Marzhanovskiy, I. N.; Kukhareva, E. S.; Vyuginov, V. N.; Volkov, V. V.; Zybin, A. A.

    2016-08-01

    The effects of certain operating modes on the current-voltage characteristics (CVC) of microwave field-effect transistors based on AlGaN/GaN heterostructures (HEMTs) were simulated numerically. The results of these studies suggest that numerical simulation can be used quite efficiently in the development of HEMT microwave transistors, allowing realistic device designs to be taken into account.

  6. Development and Evaluation of a Remedial Numerical Skills Workbook for Navy Training. Final Report.

    ERIC Educational Resources Information Center

    Bowman, Harry L.; And Others

    A remedial Navy-relevant numerical skills workbook was developed and field tested for use in Navy recruit training commands and as part of the Navy Junior Reserve Officers Training curriculum. Research and curriculum specialists from the Department of the Navy and Memphis State University identified Navy-relevant topics requiring numerical skill…

  7. Seismic fragility evaluation of a piping system in a nuclear power plant by shaking table test and numerical analysis

    SciTech Connect

    Kim, M. K.; Kim, J. H.; Choi, I. K.

    2012-07-01

    In this study, a seismic fragility evaluation of a piping system in a nuclear power plant was performed. The evaluation of the seismic fragility of the piping system proceeded in three steps. First, several piping element capacity tests were performed: monotonic and cyclic loading tests were conducted under the same internal pressure level as actual nuclear power plants to evaluate the performance. Cracks and wall thinning were considered as degradation factors of the piping system. Second, a shaking table test was performed to evaluate the seismic capacity of a selected piping system. Multi-support seismic excitation was used to account for differences in support elevation. Finally, a numerical analysis was performed to assess the seismic fragility of the piping system. As a result, the seismic fragility of a piping system of an NPP in Korea was evaluated by means of a shaking table test and numerical analysis. (authors)
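
    A minimal sketch of the lognormal fragility form commonly used to summarize such assessments (an assumption here, not necessarily the authors' exact formulation): the conditional failure probability at ground-motion level a is Phi(ln(a / A_m) / beta).

        import numpy as np
        from scipy.stats import norm

        def fragility(a, A_m, beta):
            """P(failure | PGA = a) for a lognormal fragility curve.

            A_m  : median seismic capacity, g (assumed)
            beta : composite logarithmic standard deviation (assumed)
            """
            return norm.cdf(np.log(a / A_m) / beta)

        pga = np.array([0.1, 0.2, 0.3, 0.5, 0.8])  # ground-motion levels, g
        print(fragility(pga, A_m=0.6, beta=0.4))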

  8. Evaluation of numerical weather predictions performed in the context of the project DAPHNE

    NASA Astrophysics Data System (ADS)

    Tegoulias, Ioannis; Pytharoulis, Ioannis; Bampzelis, Dimitris; Karacostas, Theodore

    2014-05-01

    The region of Thessaly in central Greece is one of the main areas of agricultural production in Greece. Severe weather phenomena affect the agricultural production in this region with adverse effects for farmers and the national economy. For this reason the project DAPHNE aims at tackling the problem of drought by means of weather modification through the development of the necessary tools to support the application of a rainfall enhancement program. In the present study the numerical weather prediction system WRF-ARW is used, in order to assess its ability to represent extreme weather phenomena in the region of Thessaly. WRF is integrated in three domains covering Europe, Eastern Mediterranean and Central-Northern Greece (Thessaly and a large part of Macedonia) using telescoping nesting with grid spacing of 15km, 5km and 1.667km, respectively. The cases examined span throughout the transitional and warm period (April to September) of the years 2008 to 2013, including days with thunderstorm activity. Model results are evaluated against all available surface observations and radar products, taking into account the spatial characteristics and intensity of the storms. Preliminary results indicate a good level of agreement between the simulated and observed fields as far as the standard parameters (such as temperature, humidity and precipitation) are concerned. Moreover, the model generally exhibits a potential to represent the occurrence of the convective activity, but not its exact spatiotemporal characteristics. Acknowledgements This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013)

  9. Evaluating acute pain intensity relief: challenges when using an 11-point numerical rating scale.

    PubMed

    Chauny, Jean-Marc; Paquet, Jean; Lavigne, Gilles; Marquis, Martin; Daoust, Raoul

    2016-02-01

    Percentage of pain intensity difference (PercentPID) is a recognized way of evaluating pain relief with an 11-point numerical rating scale (NRS) but is not without flaws. A new metric, the slope of relative pain intensity difference (SlopePID), which consists in dividing PercentPID by the time between 2 pain measurements, is proposed. This study aims to validate SlopePID with 3 measures of subjective pain relief: a 5-category relief scale (not, a little, moderate, very, complete), a 2-category relief question ("I'm relieved," "I'm not relieved"), and a single-item question, "Wanting other medication to treat pain?" (Yes/No). This prospective cohort study included 361 patients in the emergency department who had an initial acute pain NRS > 3 and a pain intensity assessment within 90 minutes after analgesic administration. Mean age was 50.2 years (SD = 19.3) and 59% were women. Areas under the curve from receiver operating characteristic analyses revealed similar discriminative power for PercentPID (0.83; 95% confidence interval [CI], 0.79-0.88) and SlopePID (0.82; 95% CI, 0.77-0.86). Considering the "very" category of the 5-category relief scale as substantial relief, the average cutoff for substantial relief was a decrease of 64% (95% CI, 59-69) for PercentPID and of 49% per hour (95% CI, 44-54) for SlopePID. However, when a cutoff criterion of 50% was used as a measure of pain relief for an individual patient, PercentPID underestimated pain-relieved patients by 12.1% (P < 0.05) compared with the SlopePID measurement when pain intensity at baseline was an odd rather than an even number (32.9% vs 45.0%, respectively). SlopePID should be used instead of PercentPID as a metric to evaluate acute pain relief on a 0 to 10 NRS.
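
    Both metrics are elementary to compute from a pair of NRS scores; a minimal sketch follows, with the worked numbers also illustrating the odd/even baseline artifact reported above (the 90-minute interval is just an example within the study's assessment window).

        def percent_pid(nrs_before, nrs_after):
            """Percentage of pain intensity difference on the 0-10 NRS."""
            return 100.0 * (nrs_before - nrs_after) / nrs_before

        def slope_pid(nrs_before, nrs_after, minutes):
            """PercentPID divided by the elapsed time, in % per hour."""
            return percent_pid(nrs_before, nrs_after) / (minutes / 60.0)

        # Odd/even artifact with a 50% cutoff: a 4-point drop is exactly 50%
        # from a baseline of 8, but from a baseline of 7 the nearest scores
        # give 57.1% (to 3) or 42.9% (to 4) -- exactly 50% is unreachable.
        print(percent_pid(8, 4), percent_pid(7, 3), percent_pid(7, 4))
        print(slope_pid(8, 4, 90))  # -> 33.3 % per hour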

  10. An Improved Transformation and Optimized Sampling Scheme for the Numerical Evaluation of Singular and Near-Singular Potentials

    NASA Technical Reports Server (NTRS)

    Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.

    2007-01-01

    Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.

  11. Accurate Analysis and Evaluation of Acidic Plant Growth Regulators in Transgenic and Nontransgenic Edible Oils with Facile Microwave-Assisted Extraction-Derivatization.

    PubMed

    Liu, Mengge; Chen, Guang; Guo, Hailong; Fan, Baolei; Liu, Jianjun; Fu, Qiang; Li, Xiu; Lu, Xiaomin; Zhao, Xianen; Li, Guoliang; Sun, Zhiwei; Xia, Lian; Zhu, Shuyun; Yang, Daoshan; Cao, Ziping; Wang, Hua; Suo, Yourui; You, Jinmao

    2015-09-16

    Determination of plant growth regulators (PGRs) in a signal transduction system (STS) is significant for transgenic food safety, but may be challenged by poor accuracy and analyte instability. In this work, a microwave-assisted extraction-derivatization (MAED) method is developed for six acidic PGRs in oil samples, allowing an efficient (<1.5 h) and facile (one step) pretreatment. Accuracies are greatly improved, particularly for gibberellin A3 (-2.72 to -0.65%) as compared with those reported (-22 to -2%). Excellent selectivity and quite low detection limits (0.37-1.36 ng mL(-1)) are enabled by fluorescence detection-mass spectrum monitoring. Results show the significant differences in acidic PGRs between transgenic and nontransgenic oils, particularly 1-naphthaleneacetic acid (1-NAA), implying the PGRs induced variations of components and genes. This study provides, for the first time, an accurate and efficient determination for labile PGRs involved in STS and a promising concept for objectively evaluating the safety of transgenic foods.

  12. Evaluation and Visualization of Surface Defects — a Numerical and Experimental Study on Sheet-Metal Parts

    NASA Astrophysics Data System (ADS)

    Andersson, A.

    2005-08-01

    The ability to predict surface defects in outer panels is of vital importance in the automotive industry, especially for brands in the premium car segment. Today, measures to prevent these defects cannot be taken until a test part has been manufactured, which requires a great deal of time and expense. The decision as to whether a certain surface is of acceptable quality or not is based on subjective evaluation. It is quite possible to detect a defect by measurement, but it is not possible to correlate measured defects with the subjective evaluation. If all results could be based on the same criteria, it would be possible to assess a surface by FE simulation, experiment, and subjective evaluation with the same result. In order to find a solution for the prediction of surface defects, a laboratory tool was manufactured and analysed both experimentally and numerically. The tool represents the area around a fuel filler lid, and the aim was to recreate surface defects, so-called "teddy bear ears". A major problem with the evaluation of such defects is that the panels are evaluated manually, and to a great extent subjectivity is involved in the classification and judgement of the defects. In this study the same computer software was used for the evaluation of both the experimental and the numerical results. In this software the surface defects were indicated by a change in the curvature of the panel. The results showed good agreement between the numerical and experimental analyses. Furthermore, the evaluation software gave a good indication of the appearance of the surface defects compared with an analysis done in existing tools for surface quality measurements. Since the agreement between numerical and experimental results was good, these tools can be used for an early verification of surface defects in outer panels.

  13. Numerical simulation of the 2002 Northern Rhodes Slide (Greece) and evaluation of the generated tsunami

    NASA Astrophysics Data System (ADS)

    Zaniboni, Filippo; Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano

    2013-04-01

    Small landslides are very common along submarine margins, owing to steep slopes and continuous material deposition that increase mass instability and promote collapse, even without earthquake triggering. Events of this kind can have serious consequences when they occur close to the coast, because they involve sudden velocity changes and can reach high speeds, which translates into high tsunamigenic potential. This is the case, for example, of the slide at Rhodes Island (Greece), named the Northern Rhodes Slide (NRS), where unusual 3-4 m waves were registered on 24 March 2002, causing some damage along the coastal stretch of the city of Rhodes (Papadopoulos et al., 2007). The event was not associated with an earthquake, and eyewitnesses supported the hypothesis of a non-seismic source for the tsunami, placed 1 km offshore. Subsequent marine geophysical surveys (Sakellariou et al., 2002) revealed the presence of several detachment niches at about 300-400 m depth along the steep northern slope, one of which can be considered responsible for the observed tsunami, consistent with the previously mentioned supposition. In this work, carried out in the frame of the European funded project NearToWarn, we evaluated the tsunami effects of the NRS by means of numerical modelling: after reconstructing the sliding body on the basis of morphological assumptions (obtaining an estimated volume of 33 million m3), we simulated the sliding motion with the in-house code UBO-BLOCK1, which adopts a Lagrangian approach and splits the sliding mass into a "chain" of interacting blocks. This provides the complete dynamics of the landslide, including the shape changes that strongly influence tsunami generation. After the application of an intermediate code, accounting for the filtering of the slide impulse through the water depth, the tsunami propagation in the sea around the island of Rhodes and up to the near coasts of Turkey was simulated via the

  14. Numerical analysis on the effect of angle of attack on evaluating radio-frequency blackout in atmospheric reentry

    NASA Astrophysics Data System (ADS)

    Jung, Minseok; Kihara, Hisashi; Abe, Ken-ichi; Takahashi, Yusuke

    2016-06-01

    A three-dimensional numerical simulation model that considers the effect of the angle of attack was developed to evaluate plasma flows around reentry vehicles. In this simulation model, thermochemical nonequilibrium of the flowfield is considered by using a four-temperature model for high-accuracy simulations. Numerical simulations were performed for the orbital reentry experiment of the Japan Aerospace Exploration Agency, and the results were compared with experimental data to validate the simulation model. A comparison of measured and predicted results showed good agreement. Moreover, to evaluate the effect of the angle of attack, we performed numerical simulations around the Atmospheric Reentry Demonstrator of the European Space Agency using both an axisymmetric model and a three-dimensional model. Although there were no differences in the flowfields in the shock layer between the results of the axisymmetric and the three-dimensional models, the distribution of the electron number density, which is an important parameter in evaluating radio-frequency blackout, changed greatly in the wake region when a non-zero angle of attack was considered. Additionally, the number of altitudes at which radio-frequency blackout was predicted in the numerical simulations declined when the three-dimensional model accounting for the angle of attack was used.
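    The link between electron number density and blackout can be made concrete through the plasma cutoff frequency: a radio signal is reflected or strongly attenuated where its frequency falls below the local plasma frequency. A small sketch (the band choices below are illustrative, not from the paper):

```python
import numpy as np

E0 = 8.8541878128e-12   # vacuum permittivity [F/m]
ME = 9.1093837015e-31   # electron mass [kg]
QE = 1.602176634e-19    # elementary charge [C]

def plasma_frequency_hz(n_e_per_m3):
    """Electron plasma frequency f_p for electron number density n_e [m^-3]."""
    return np.sqrt(n_e_per_m3 * QE**2 / (E0 * ME)) / (2.0 * np.pi)

def critical_density_per_m3(f_hz):
    """Electron density above which a wave of frequency f_hz is cut off."""
    return E0 * ME * (2.0 * np.pi * f_hz) ** 2 / QE**2

# Blackout threshold densities for some typical reentry link bands:
for name, f in [("VHF (250 MHz)", 250e6), ("GPS L1 (1.575 GHz)", 1.575e9),
                ("S-band (2.3 GHz)", 2.3e9)]:
    print(f"{name}: n_crit = {critical_density_per_m3(f):.2e} m^-3")
```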

  15. Use of Numerical Groundwater Modeling to Evaluate Uncertainty in Conceptual Models of Recharge and Hydrostratigraphy

    SciTech Connect

    Pohlmann, Karl; Ye, Ming; Pohll, Greg; Chapman, Jenny

    2007-01-19

    Numerical groundwater models are based on conceptualizations of hydrogeologic systems that are by necessity developed from limited information and therefore are simplifications of real conditions. Each aspect (e.g. recharge, hydrostratigraphy, boundary conditions) of the groundwater model is often based on a single conceptual model that is considered to be the best representation given the available data. However, the very nature of their construction means that each conceptual model is inherently uncertain and the available information may be insufficient to refute plausible alternatives, thereby raising the possibility that the flow model is underestimating overall uncertainty. In this study we use the Death Valley Regional Flow System model developed by the U.S. Geological Survey as a framework to predict regional groundwater flow southward into Yucca Flat on the Nevada Test Site. An important aspect of our work is to evaluate the uncertainty associated with multiple conceptual models of groundwater recharge and subsurface hydrostratigraphy and quantify the impacts of this uncertainty on model predictions. In our study, conceptual model uncertainty arises from two sources: (1) alternative interpretations of the hydrostratigraphy in the northern portion of Yucca Flat where, owing to sparse data, the hydrogeologic system can be conceptualized in different ways, and (2) uncertainty in groundwater recharge in the region as evidenced by the existence of several independent approaches for estimating this aspect of the hydrologic system. The composite prediction of groundwater flow is derived from the regional model that formally incorporates the uncertainty in these alternative input models using the maximum likelihood Bayesian model averaging method. An assessment of the joint predictive uncertainty of the input conceptual models is also produced. During this process, predictions of the alternative models are weighted by model probability, which is the degree of
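    A minimal sketch of the model-averaging step described above: maximum likelihood Bayesian model averaging weights each alternative conceptual model by its posterior probability, and the averaged prediction variance picks up a between-model term. All numbers below are hypothetical, not from the study.

```python
import numpy as np

# Hypothetical per-model results: information-criterion values (e.g. KIC),
# prior probabilities, and each model's predictive mean/standard deviation.
kic   = np.array([412.0, 415.5, 418.2])   # illustrative values only
prior = np.array([1/3, 1/3, 1/3])
mu    = np.array([12.4, 10.9, 14.0])      # predicted flow, arbitrary units
sd    = np.array([1.1, 1.6, 1.3])

# Posterior model probabilities: w_k proportional to p(M_k) * exp(-dKIC_k / 2)
d = kic - kic.min()
w = prior * np.exp(-0.5 * d)
w /= w.sum()

# Model-averaged prediction; the variance adds a between-model spread
# term to the within-model variances:
mu_bar = np.sum(w * mu)
var = np.sum(w * (sd**2 + (mu - mu_bar) ** 2))
print(f"weights = {w}, mean = {mu_bar:.2f}, sd = {np.sqrt(var):.2f}")
```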

  16. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory experimental set-up of a quasi-two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment on the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d(-1) and 10.5 m d(-1). Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10(-4) m and 1.48×10(-5) m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. PMID:22575873
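    A sketch of the kind of fitting exercise described above, assuming a Domenico-type steady-state solution for the transverse concentration profile at the fringe of a plume; the synthetic data and parameter values are illustrative, not the paper's:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def fringe_profile(y, alpha_t, x=1.0, c0=1.0):
    """Steady-state transverse profile at distance x downstream of a
    half-plane source edge: C = C0/2 * erfc(y / (2*sqrt(alpha_t * x)))."""
    return 0.5 * c0 * erfc(y / (2.0 * np.sqrt(alpha_t * x)))

# Synthetic "measurements" with a known transverse dispersivity plus noise:
rng = np.random.default_rng(1)
alpha_true = 1.5e-5                    # m, same order as the paper's fitted value
y = np.linspace(-0.02, 0.02, 41)       # m, transverse sampling positions
c_obs = fringe_profile(y, alpha_true) + rng.normal(0.0, 0.01, y.size)

(alpha_fit,), _ = curve_fit(fringe_profile, y, c_obs, p0=[1e-4])
print(f"fitted alpha_T = {alpha_fit:.3e} m (true {alpha_true:.1e} m)")
```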

  17. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory experimental set-up of a quasi-two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment on the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d(-1) and 10.5 m d(-1). Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10(-4) m and 1.48×10(-5) m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications.

  18. Numerical evaluation of a sensible heat balance method to determine rates of soil freezing and thawing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In-situ determination of ice formation and thawing in soils is difficult despite its importance for many environmental processes. A sensible heat balance (SHB) method using a sequence of heat pulse probes has been shown to accurately measure water evaporation in subsurface soil, and it has the poten...

  19. Electron transport and energy degradation in the ionosphere: Evaluation of the numerical solution, comparison with laboratory experiments and auroral observations

    NASA Technical Reports Server (NTRS)

    Lummerzheim, D.; Lilensten, J.

    1994-01-01

    Auroral electron transport calculations are a critical part of auroral models. We evaluate a numerical solution to the transport and energy degradation problem. The numerical solution is verified by reproducing simplified problems to which analytic solutions exist, internal self-consistency tests, comparison with laboratory experiments of electron beams penetrating a collision chamber, and by comparison with auroral observations, particularly the emission ratio of the N2 second positive to N2(+) first negative emissions. Our numerical solutions agree with range measurements in collision chambers. The calculated N(2)2P to N2(+)1N emission ratio is independent of the spectral characteristics of the incident electrons, and agrees with the value observed in aurora. Using different sets of energy loss cross sections and different functions to describe the energy distribution of secondary electrons that emerge from ionization collisions, we discuss the uncertainties of the solutions to the electron transport equation resulting from the uncertainties of these input parameters.

  20. Rapid, Sensitive, and Accurate Evaluation of Drug Resistant Mutant (NS5A-Y93H) Strain Frequency in Genotype 1b HCV by Invader Assay.

    PubMed

    Yoshimi, Satoshi; Ochi, Hidenori; Murakami, Eisuke; Uchida, Takuro; Kan, Hiromi; Akamatsu, Sakura; Hayes, C Nelson; Abe, Hiromi; Miki, Daiki; Hiraga, Nobuhiko; Imamura, Michio; Aikata, Hiroshi; Chayama, Kazuaki

    2015-01-01

    Daclatasvir and asunaprevir dual oral therapy is expected to achieve high sustained virological response (SVR) rates in patients with HCV genotype 1b infection. However, presence of the NS5A-Y93H substitution at baseline has been shown to be an independent predictor of treatment failure for this regimen. By using the Invader assay, we developed a system to rapidly and accurately detect the presence of mutant strains and evaluate the proportion of patients harboring a pre-treatment Y93H mutation. This assay system, consisting of nested PCR followed by Invader reaction with well-designed primers and probes, attained a high overall assay success rate of 98.9% among a total of 702 Japanese HCV genotype 1b patients. Even in serum samples with low HCV titers, more than half of the samples could be successfully assayed. Our assay system showed a better lower detection limit of Y93H proportion than using direct sequencing, and Y93H frequencies obtained by this method correlated well with those of deep-sequencing analysis (r = 0.85, P < 0.001). The proportion of the patients with the mutant strain estimated by this assay was 23.6% (164/694). Interestingly, patients with the Y93H mutant strain showed significantly lower ALT levels (p = 8.8 × 10(-4)), higher serum HCV RNA levels (p = 4.3 × 10(-7)), and lower HCC risk (p = 6.9 × 10(-3)) than those with the wild type strain. Because the method is both sensitive and rapid, the NS5A-Y93H mutant strain detection system established in this study may provide important pre-treatment information valuable not only for treatment decisions but also for prediction of disease progression in HCV genotype 1b patients. PMID:26083687

  1. Small and efficient basis sets for the evaluation of accurate interaction energies: aromatic molecule-argon ground-state intermolecular potentials and rovibrational states.

    PubMed

    Cybulski, Hubert; Baranowska-Łączkowska, Angelika; Henriksen, Christian; Fernández, Berta

    2014-11-01

    By evaluating a representative set of CCSD(T) ground state interaction energies for van der Waals dimers formed by aromatic molecules and the argon atom, we test the performance of the polarized basis sets of Sadlej et al. (J. Comput. Chem. 2005, 26, 145; Collect. Czech. Chem. Commun. 1988, 53, 1995) and the augmented polarization-consistent bases of Jensen (J. Chem. Phys. 2002, 117, 9234) in providing accurate intermolecular potentials for the benzene-, naphthalene-, and anthracene-argon complexes. The basis sets are extended by addition of midbond functions. As reference we consider CCSD(T) results obtained with Dunning's bases. For the benzene complex a systematic basis set study resulted in the selection of the (Z)Pol-33211 and the aug-pc-1-33321 bases to obtain the intermolecular potential energy surface. The interaction energy values and the shape of the CCSD(T)/(Z)Pol-33211 calculated potential are very close to the best available CCSD(T)/aug-cc-pVTZ-33211 potential with the former basis set being considerably smaller. The corresponding differences for the CCSD(T)/aug-pc-1-33321 potential are larger. In the case of the naphthalene-argon complex, following a similar study, we selected the (Z)Pol-3322 and aug-pc-1-333221 bases. The potentials show four symmetric absolute minima with energies of -483.2 cm(-1) for the (Z)Pol-3322 and -486.7 cm(-1) for the aug-pc-1-333221 basis set. To further check the performance of the selected basis sets, we evaluate intermolecular bound states of the complexes. The differences between calculated vibrational levels using the CCSD(T)/(Z)Pol-33211 and CCSD(T)/aug-cc-pVTZ-33211 benzene-argon potentials are small and for the lowest energy levels do not exceed 0.70 cm(-1). Such differences are substantially larger for the CCSD(T)/aug-pc-1-33321 calculated potential. For naphthalene-argon, bound state calculations demonstrate that the (Z)Pol-3322 and aug-pc-1-333221 potentials are of similar quality. The results show that these

  2. Numerical models to evaluate the temperature increase induced by ex vivo microwave thermal ablation.

    PubMed

    Cavagnaro, M; Pinto, R; Lopresto, V

    2015-04-21

    Microwave thermal ablation (MTA) therapies exploit the local absorption of an electromagnetic field at microwave (MW) frequencies to destroy unhealthy tissue, by way of a very high temperature increase (about 60 °C or higher). To develop reliable interventional protocols, numerical tools able to correctly foresee the temperature increase obtained in the tissue would be very useful. In this work, different numerical models of the dielectric and thermal property changes with temperature were investigated, looking at the simulated temperature increments and at the size of the achievable zone of ablation. To assess the numerical data, measurement of the temperature increases close to a MTA antenna were performed in correspondence with the antenna feed-point and the antenna cooling system, for increasing values of the radiated power. Results show that models not including the changes of the dielectric and thermal properties can be used only for very low values of the power radiated by the antenna, whereas a good agreement with the experimental values can be obtained up to 20 W if water vaporization is included in the numerical model. Finally, for higher power values, a simulation that dynamically includes the tissue's dielectric and thermal property changes with the temperature should be performed.

  3. Numerical evaluation of voltage gradient constraints on electrokinetic injection of amendments

    NASA Astrophysics Data System (ADS)

    Wu, Ming Zhi; Reynolds, David A.; Prommer, Henning; Fourie, Andy; Thomas, David G.

    2012-03-01

    A new numerical model is presented that simulates groundwater flow and multi-species reactive transport under hydraulic and electrical gradients. Coupled into the existing, reactive transport model PHT3D, the model was verified against published analytical and experimental studies, and has applications in remediation cases where the geochemistry plays an important role. A promising method for remediation of low-permeability aquifers is the electrokinetic transport of amendments for in situ chemical oxidation. Numerical modelling showed that amendment injection resulted in the voltage gradient adjacent to the cathode decreasing below a linear gradient, producing a lower achievable concentration of the amendment in the medium. An analytical method is derived to estimate the achievable amendment concentration based on the inlet concentration. Even with low achievable concentrations, analysis showed that electrokinetic remediation is feasible due to its ability to deliver a significantly higher mass flux in low-permeability media than under a hydraulic gradient.

  4. Numerical evaluation of a novel high-temperature superconductor-based quasi-diamagnetic motor

    NASA Astrophysics Data System (ADS)

    Racz, Arpad; Vajda, Istvan

    2014-05-01

    An investigation is being pursued at the Budapest University of Technology and Economics, Department of Electric Power Engineering, into the application of high-temperature superconductors (HTS) in electrical power systems. In this paper we propose a novel electrical machine construction based on the quasi-diamagnetic behaviour of HTS materials. The basic operating principle of this machine is introduced with detailed numerical simulations, and a possible geometric layout is presented.

  5. Large deviations in boundary-driven systems: Numerical evaluation and effective large-scale behavior

    NASA Astrophysics Data System (ADS)

    Bunin, Guy; Kafri, Yariv; Podolsky, Daniel

    2012-07-01

    We study rare events in systems of diffusive fields driven out of equilibrium by the boundaries. We present a numerical technique and use it to calculate the probabilities of rare events in one and two dimensions. Using this technique, we show that the probability density of a slowly varying configuration can be captured with a small number of long-wavelength modes. For a configuration which varies rapidly in space this description can be complemented by a local-equilibrium assumption.

  6. Numerical and experimental evaluation of ferrofluid potential in mobilizing trapped non-wetting fluid

    NASA Astrophysics Data System (ADS)

    Prodanovic, M.; Soares, F.; Huh, C.

    2014-12-01

    Ferrofluid is a stable dispersion of paramagnetic nanosized particles in a liquid carrier; the particles become magnetized in the presence of a magnetic field. The functionalized coating and small size of the nanoparticles allow them to flow through porous media without significantly compromising permeability and with little retention. We numerically and experimentally investigate the potential of ferrofluid to mobilize a trapped non-wetting phase. The numerical method is based on a coupled level set model for two-phase flow and an immersed interface method for finding the magnetic field strength, and it provides the equilibrium configuration of an oleic (non-wetting) phase inside a given pore geometry in the presence of dispersed excitable nanoparticles in the surrounding water phase. The magnetic pressures near the fluid-fluid interface depend locally on the magnetic field intensity and direction, which in turn depend on the fluid configuration. Interfaces represent magnetic permeability discontinuities and hence cause disturbances in the spatial distribution of the magnetic field. Experiments are conducted in micromodels with a high pore-to-throat aspect ratio. Both numerical and experimental results show that stresses produced by the magnetization of ferrofluids can help overcome strong capillary pressures and displace trapped ganglia in the presence of an additional mobilizing force such as increased fluid flux or surfactant injection.

  7. Stress analysis and damage evaluation of flawed composite laminates by hybrid-numerical methods

    NASA Technical Reports Server (NTRS)

    Yang, Yii-Ching

    1992-01-01

    Structural components in flight vehicles often inherit flaws such as microcracks, voids, holes, and delamination. These defects degrade structures in the same way as damage incurred in service, such as impact, corrosion, and erosion. It is very important to know how useful a structural component remains, and whether it can survive, given these flaws and damages. To understand the behavior and limitations of such structural components, researchers usually perform experimental tests or theoretical analyses on structures with simulated flaws. However, neither approach has been completely successful. As Durelli states, 'Seldom does one method give a complete solution, with the most efficiency.' An example of this principle is seen in photomechanics, where additional strain-gage testing can only average stresses at locations of high concentration. On the other hand, theoretical analyses, including numerical analyses, are implemented with simplified assumptions that may not reflect actual boundary conditions. Hybrid-numerical methods, which combine photomechanics and numerical analysis, have been used to correct this inefficiency since the 1950s, but their application was limited until the 1970s, when modern computer codes became available. In recent years, researchers have enhanced the data obtained from photoelasticity, laser speckle, holography and moire interferometry as input for finite element analysis of metals. Nevertheless, little of this work has addressed composite laminates. Therefore, this research is dedicated to this highly anisotropic material.

  8. Evaluation of gravimetric and volumetric dispensers of particles of nuclear material. [Accurate dispensing of fissile and fertile fuel into fuel rods

    SciTech Connect

    Bayne, C.K.; Angelini, P.

    1981-08-01

    Theoretical and experimental studies compared the abilities of volumetric and gravimetric dispensers to accurately dispense fissile and fertile fuel particles. Such devices are being developed for the fabrication of sphere-pac fuel rods for high-temperature gas-cooled, light water, and fast breeder reactors. The theoretical examination suggests that, although the fuel particles are dispensed more accurately by the gravimetric dispenser, the amount of nuclear material in the fuel particles dispensed by the two methods is not significantly different. The experimental results demonstrated that the volumetric dispenser can dispense both fuel particles and nuclear materials to standards adequate for fabricating fuel rods. Performance of the more complex gravimetric dispenser was not significantly better than that of the simple yet accurate volumetric dispenser.

  9. Numerical performance evaluation of design modifications on a centrifugal pump impeller running in reverse mode

    NASA Astrophysics Data System (ADS)

    Kassanos, Ioannis; Chrysovergis, Marios; Anagnostopoulos, John; Papantonis, Dimitris; Charalampopoulos, George

    2016-06-01

    In this paper the effect of impeller design variations on the performance of a centrifugal pump running as a turbine is presented. Numerical simulations were performed after introducing various modifications to the design for various operating conditions. Specifically, the effects of the inlet edge shape, the meridional channel width, the number of blades and the addition of splitter blades on impeller performance were investigated. The results showed that an increase in efficiency can be achieved by increasing the number of blades and by introducing splitter blades.

  10. EVALUATION OF U10MO FUEL PLATE IRRADIATION BEHAVIOR VIA NUMERICAL AND EXPERIMENTAL BENCHMARKING

    SciTech Connect

    Samuel J. Miller; Hakan Ozaltun

    2012-11-01

    This article analyzes dimensional changes due to irradiation of monolithic plate-type nuclear fuel and compares the results with finite element analysis of the plates during fabrication and irradiation. Monolithic fuel plates tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) are being used to benchmark proposed fuel performance for several high-power research reactors. Post-irradiation metallographic images of plates sectioned at the midpoint were analyzed to determine dimensional changes of the fuel and the cladding response. A constitutive model of the fabrication process and irradiation behavior of the tested plates was developed using the general-purpose commercial finite element analysis package Abaqus. Using calculated burn-up profiles of irradiated plates to model the power distribution, and including irradiation behaviors such as swelling and irradiation-enhanced creep, the model simulations allow analysis of plate parameters that are either impossible or infeasible to study in an experimental setting. The development and progression of fabrication-induced stress concentrations at the plate edges was of primary interest, as these locations have a unique stress profile during irradiation. Additionally, a comparison between 2D and 3D models was performed to optimize the analysis methodology, in particular the ability of 2D and 3D models to account for the out-of-plane stresses that produce 3-dimensional creep behavior. Results show that the assumptions made in 2D models for the out-of-plane stresses and strains cannot capture the 3-dimensional physics accurately, and thus 2D approximations are not computationally accurate. Stress-strain fields are dependent on plate geometry and irradiation conditions; thus, if stress-based criteria are used to predict plate behavior (as opposed to material impurities, fine microstructural defects, or sharp power gradients), a unique 3D finite element formulation for each plate is required.

  11. Experimental and numerical evaluation of the heat fluxes in a basic two-dimensional motor

    NASA Astrophysics Data System (ADS)

    Nicoud, F.

    In the framework of a study assessing the ablation of the Internal Thermal Insulation (ITI) of the Ariane 5 P230 Solid Rocket Booster (SRB), a 2D basic motor has been designed and manufactured at ONERA. During the first phase of the study, emphasis has been put on the heat flux measurements on an inert wall facing a propellant grain. In order to numerically reproduce the increase of the heat transfer coefficient which is experimentally observed when one proceeds from the head-end to the aft-end of the port, a 2D explicit code with a two-equation turbulence model has been used. It is found that the computed heat transfer coefficient is closer to the experimental one when a wall law accounting for the mean density variations due to the large temperature gradient near the ITI is used. For this, the ITI is assumed to be completely inert and the wall temperature is imposed. The experimental data for two other tests, not numerically simulated, are also presented.

  12. A numerical analysis to evaluate Betz's Law for vertical axis wind turbines

    NASA Astrophysics Data System (ADS)

    Thönnißen, F.; Marnett, M.; Roidl, B.; Schröder, W.

    2016-09-01

    The upper limit for the energy conversion rate of horizontal axis wind turbines (HAWT) is known as the Betz limit. Often this limit is also applied to vertical axis wind turbines (VAWT). However, a literature review reveals that early analytical and recent numerical approaches predicted values for the maximum power output of VAWTs close to or even higher than the Betz limit. Thus, it can be questioned whether the application of Betz's Law to VAWTs is justified. To answer this question, the current approach combines a free vortex model with a 2D inviscid panel code to represent the flow field of a generic VAWT. To ensure the validity of the model, an active blade pitch control system is used to avoid flow separation. An optimal pitch curve avoiding flow separation is determined for one specific turbine configuration by applying an evolutionary algorithm. The analysis yields a net power output that is slightly (≈6%) above the Betz limit. Besides the numerical result of an increased energy conversion rate, the identification of two physical power-increasing mechanisms in particular shows that the application of Betz's Law to VAWTs is not justified.
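    For reference, the Betz limit itself follows from one-dimensional actuator-disk theory: the power coefficient C_p(a) = 4a(1-a)^2 peaks at an axial induction factor a = 1/3, giving C_p = 16/27 ≈ 0.593. A minimal numerical check:

```python
import numpy as np

# Actuator-disk power coefficient as a function of axial induction factor a:
a = np.linspace(0.0, 0.5, 200001)
cp = 4.0 * a * (1.0 - a) ** 2

i = cp.argmax()
print(f"a* = {a[i]:.4f}, C_p,max = {cp[i]:.6f}  (16/27 = {16/27:.6f})")
```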

  13. Evaluation of Sulfur Flow Emplacement on Io from Galileo Data and Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Williams, David A.; Greeley, Ronald; Lopes, Rosaly M. C.; Davies, Ashley G.

    2001-01-01

    Galileo images of bright lava flows surrounding Emakong Patera have been analyzed and numerical modeling has been performed to assess whether these flows could have resulted from the emplacement of sulfur lavas on Io. Images from the solid-state imaging (SSI) camera show that these bright, white to yellow Emakong flows are up to 370 km long and contain dark, sinuous features that are interpreted to be lava conduits, approx. 300-500 m wide and >100 km long. Near-Infrared Mapping Spectrometer thermal emission data yield a color temperature estimate of 344 K +/- 60 K (<= 131 C) within the Emakong caldera. We suggest that these bright flows likely resulted from either sulfur lavas or silicate lavas that have undergone extensive cooling, pyroclastic mantling, and/or alteration with bright sulfurous materials. The Emakong bright flows have an estimated volume of approx. 250-350 cu km, similar to some of the smaller Columbia River Basalt flows. If the Emakong flows did result from effusive sulfur eruptions, then they are orders of magnitude greater in volume than any terrestrial sulfur flows. Our numerical modeling shows that sulfur lavas on Io could have been emplaced as turbulent flows capable of traveling tens to hundreds of kilometers, consistent with the predictions of Sagan. Our modeled flow distances are also consistent with the measured lengths of the Emakong channels and bright flows.

  14. Numerical simulation study of polar low in Kara sea: developing mechanisms evaluation

    NASA Astrophysics Data System (ADS)

    Verezemskaya, Polina; Stepanenko, Victor

    2016-04-01

    The study focuses on investigating the mechanisms of interaction between potential vorticity anomalies and latent heat release as polar low development factors. The polar low observed in the Kara Sea on 29-30 September 2008 is analyzed using numerical modeling (the WRF ARW model) and observational data (IR cloudiness and microwave water vapor and surface wind speeds from MODIS (Aqua)). Two numerical experiments with 5 km spatial resolution were conducted, with the microphysics scheme turned on and off, to assess the role of latent heat in vortex intensification. The quality of the modeling was estimated by comparing the WRF output with the satellite data. Based on the reference experiment (with the microphysical parameterization turned on) and the observational data, the polar low developed in a vertically stable, non-baroclinic atmosphere and was characterized by very low surface heat fluxes. The results of the "dry" experiment suggest that without a latent heat source in the middle troposphere the polar low intensifies more slowly than in reality. In order to separate low- and upper-level forcing within the polar low dynamics we used the attribution concept based on the quasi-geostrophic omega equation. To ensure that QG theory is applicable to this polar low case, we estimated the correlation between the modeled vertical speed field and the QG field obtained from the omega equation using a finite-difference method.

  15. Evaluating aerosol impacts on Numerical Weather Prediction in two extreme dust and biomass-burning events

    NASA Astrophysics Data System (ADS)

    Remy, Samuel; Benedetti, Angela; Jones, Luke; Razinger, Miha; Haiden, Thomas

    2014-05-01

    The WMO-sponsored Working Group on Numerical Experimentation (WGNE) set up a project aimed at understanding the importance of aerosols for numerical weather prediction (NWP). Three cases are being investigated by several NWP centres with aerosol capabilities: a severe dust case that affected Southern Europe in April 2012, a biomass burning case in South America in September 2012, and an extreme pollution event in Beijing (China) which took place in January 2013. At ECMWF these cases are being studied using the MACC-II system with radiatively interactive aerosols. Some preliminary results related to the dust and the fire event will be presented here. A preliminary verification of the impact of the aerosol-radiation direct interaction on surface meteorological parameters such as 2m Temperature and surface winds over the region of interest will be presented. Aerosol optical depth (AOD) verification using AERONET data will also be discussed. For the biomass burning case, the impact of using injection heights estimated by a Plume Rise Model (PRM) for the biomass burning emissions will be presented.

  16. Doppler echo evaluation of pulmonary venous-left atrial pressure gradients: human and numerical model studies

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; Prior, D. L.; Scalia, G. M.; Thomas, J. D.; Garcia, M. J.

    2000-01-01

    The simplified Bernoulli equation relates fluid convective energy derived from flow velocities to a pressure gradient and is commonly used in clinical echocardiography to determine pressure differences across stenotic orifices. Its application to pulmonary venous flow has not been described in humans. Twelve patients undergoing cardiac surgery had simultaneous high-fidelity pulmonary venous and left atrial pressure measurements and pulmonary venous pulsed Doppler echocardiography performed. Convective gradients for the systolic (S), diastolic (D), and atrial reversal (AR) phases of pulmonary venous flow were determined using the simplified Bernoulli equation and correlated with measured actual pressure differences. A linear relationship was observed between the convective (y) and actual (x) pressure differences for the S (y = 0.23x + 0.0074, r = 0.82) and D (y = 0.22x + 0.092, r = 0.81) waves, but not for the AR wave (y = 0.030x + 0.13, r = 0.10). Numerical modeling resulted in similar slopes for the S (y = 0.200x - 0.127, r = 0.97), D (y = 0.247x - 0.354, r = 0.99), and AR (y = 0.087x - 0.083, r = 0.96) waves. Consistent with numerical modeling, the convective term strongly correlates with but significantly underestimates actual gradient because of large inertial forces.
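    The simplified Bernoulli equation converts a Doppler velocity v (m/s) into a convective pressure gradient of approximately 4v^2 (mmHg). A minimal sketch with hypothetical pulmonary venous velocities (note that, per the regression slopes above, this convective term captures only about a fifth of the actual S- and D-wave gradients):

```python
def convective_gradient_mmhg(v_m_per_s: float) -> float:
    """Simplified Bernoulli equation: dP ~ 4*v^2, v in m/s, dP in mmHg."""
    return 4.0 * v_m_per_s ** 2

# Illustrative peak pulmonary venous velocities (hypothetical values):
for phase, v in [("S", 0.6), ("D", 0.5), ("AR", 0.3)]:
    print(f"{phase} wave: v = {v} m/s -> convective gradient "
          f"~ {convective_gradient_mmhg(v):.2f} mmHg")
```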

  17. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
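    A minimal sketch of thermodynamic integration on a conjugate toy problem, where the power posterior can be sampled exactly and the marginal likelihood has a closed form to check against (the model and all numbers are ours, not the study's; small Monte Carlo and discretization errors remain):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: y_i ~ N(theta, sigma^2), prior theta ~ N(0, tau^2).
sigma, tau, n = 1.0, 2.0, 20
y = rng.normal(0.7, sigma, size=n)

def log_lik(theta):
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum((y - theta) ** 2) / sigma**2)

def sample_power_posterior(beta, size):
    """Draw from p_beta(theta) ~ prior(theta) * likelihood(theta)^beta,
    which stays Gaussian here, so no MCMC chain is needed."""
    prec = 1.0 / tau**2 + beta * n / sigma**2
    mean = (beta * y.sum() / sigma**2) / prec
    return rng.normal(mean, 1.0 / np.sqrt(prec), size=size)

# Thermodynamic integration: log m(y) = integral over beta in [0,1] of
# E_beta[log L], discretized densely near beta = 0 where the integrand is steep.
betas = np.linspace(0.0, 1.0, 21) ** 5
means = np.array([np.mean([log_lik(t) for t in sample_power_posterior(b, 4000)])
                  for b in betas])
log_m_ti = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(betas))  # trapezoid

# Closed-form log marginal likelihood of the conjugate model, for comparison:
s1, s2 = y.sum(), np.sum(y**2)
log_m_exact = (-0.5 * n * np.log(2 * np.pi * sigma**2)
               - 0.5 * np.log(1 + n * tau**2 / sigma**2)
               - 0.5 * (s2 / sigma**2
                        - tau**2 * s1**2 / (sigma**2 * (sigma**2 + n * tau**2))))
print(f"TI estimate: {log_m_ti:.3f}   exact: {log_m_exact:.3f}")
```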

  18. Thermal contact algorithms in SIERRA mechanics : mathematical background, numerical verification, and evaluation of performance.

    SciTech Connect

    Copps, Kevin D.; Carnes, Brian R.

    2008-04-01

    We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.

  19. Numerical evaluation of Auger recombination coefficients in relaxed and strained germanium

    NASA Astrophysics Data System (ADS)

    Dominici, Stefano; Wen, Hanqing; Bertazzi, Francesco; Goano, Michele; Bellotti, Enrico

    2016-05-01

    The potential applications of germanium and its alloys in infrared silicon-based photonics have led to a renewed interest in their optical properties. In this letter, we report on the numerical determination of Auger coefficients at T = 300 K for relaxed and biaxially strained germanium. We use a Green's function based model that takes into account all relevant direct and phonon-assisted processes and perform calculations up to a strain level corresponding to the transition from indirect to direct energy gap. We have considered excess carrier concentrations ranging from 10(16) cm(-3) to 5 × 10(19) cm(-3). For use in device level simulations, we also provide fitting formulas for the calculated electron and hole Auger coefficients as functions of carrier density.
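    For orientation, the net Auger recombination rate in device-level simulations is commonly written as R = (C_n n + C_p p)(np - n_i^2). A minimal sketch with order-of-magnitude placeholder coefficients (the letter's fitted, density-dependent Ge coefficients should be used in practice):

```python
def auger_rate(n, p, n_i, c_n, c_p):
    """Net Auger recombination rate R = (C_n*n + C_p*p) * (n*p - n_i^2),
    densities in cm^-3, coefficients in cm^6/s."""
    return (c_n * n + c_p * p) * (n * p - n_i**2)

# Illustrative values only, not the letter's fitted coefficients:
C_N, C_P = 1e-30, 1e-30      # cm^6/s, order-of-magnitude placeholders
N_I = 2.0e13                 # cm^-3, approximate intrinsic density of Ge at 300 K

for dn in [1e16, 1e18, 5e19]:            # excess carrier density, cm^-3
    r = auger_rate(dn, dn, N_I, C_N, C_P)  # high injection: n ~ p ~ dn
    print(f"dn = {dn:.0e} cm^-3 -> R = {r:.2e} cm^-3 s^-1, tau ~ {dn / r:.2e} s")
```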

  20. Evaluation of the impacts of climate changes on the coastal Chaouia aquifer, Morocco, using numerical modeling

    NASA Astrophysics Data System (ADS)

    Moustadraf, J.; Razack, M.; Sinan, M.

    2008-11-01

    The aquifer of the Chaouia coast, Morocco, constitutes an example of groundwater resources subjected to intensive and uncontrolled withdrawals in a semi-arid region. The analysis of trends in precipitation and piezometric levels of the Chaouia coastal aquifer, using moving averages, emphasized the impact of the climate on the groundwater resources of the system. The results showed that the periods 1977-1993 and 1996-2000 were characterized by a deficit in precipitation, whereas precipitation increased slightly during the periods 1973-1977 and 1993-1996. Numerical modeling of the Chaouia aquifer showed that the groundwater resources of the system are relatively insensitive to the variations in precipitation. Severe degradation of the resource is instead related to intensive pumping during the periods of drought, which has forced the abandonment of wells due to seawater intrusion.

  1. Numerical evaluation of the fidelity error threshold for the surface code

    NASA Astrophysics Data System (ADS)

    Jouzdani, Pejman; Mucciolo, Eduardo R.

    2014-07-01

    We study how the resilience of the surface code is affected by the coupling to a non-Markovian environment at zero temperature. The qubits in the surface code experience an effective dynamics due to the coupling to the environment that induces correlations among them. The range of the effective induced qubit-qubit interaction depends on parameters related to the environment and the duration of the quantum error correction cycle. We show numerically that different interaction ranges set different intrinsic bounds on the fidelity of the code. These bounds are unrelated to the error thresholds based on stochastic error models. We introduce a definition of stabilizers based on logical operators that allows us to efficiently implement a Metropolis algorithm to determine upper bounds to the fidelity error threshold.

  2. Numerical and experimental evaluation of a compact sensor antenna for healthcare devices.

    PubMed

    Alomainy, A; Yang Hao; Pasveer, F

    2007-12-01

    The paper presents a compact planar antenna designed for wireless sensors intended for healthcare applications. Antenna performance is investigated with regard to various parameters governing the overall sensor operation. The study illustrates the importance of including full sensor details in determining and analysing the antenna performance. A globally optimized sensor antenna shows an increase in antenna gain of 2.8 dB and 29% higher radiation efficiency in comparison with a conventional printed strip antenna. The wearable sensor performance is demonstrated, and effects on antenna radiated power, efficiency and front-to-back ratio of radiated energy are investigated both numerically and experimentally. Propagation characteristics of the body-worn sensor to on-body and off-body base units are also studied. It is demonstrated that the improved sensor antenna has an increase in transmitted and received power, and consequently the sensor coverage range is extended by approximately 25%.

  3. A Comparative Study of Analytical and Numerical Evaluation of Elastic Properties of Short Fiber Composites

    NASA Astrophysics Data System (ADS)

    Reddy, Babu; Badari Narayana, K.

    2016-09-01

    Unlike the case of continuous fiber composites, predicting the elastic properties of short fiber composites from the corresponding elastic properties of the constituents is not a straightforward task. Many authors have attempted to predict the properties either by analytical methods or by experimental methods, or by a combination of both leading to empirical solutions. The current trend is to use the well-known numerical technique, the finite element method (FEM), to model the short fiber composite and predict its properties. In this paper, an RVE (Representative Volume Element) approach is used to build the model, with appropriate boundary and loading conditions, and a homogenization process is applied to estimate the elastic properties. The present values are compared with the available experimental and analytical solutions. The methods that best match the current FE solutions are highlighted.
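    As an example of the analytical side of such a comparison, the widely used Halpin-Tsai equations estimate short-fiber composite moduli from the constituent moduli, the fiber volume fraction, and a shape parameter tied to the fiber aspect ratio (the material values below are illustrative, not the paper's):

```python
def halpin_tsai(e_f, e_m, v_f, zeta):
    """Halpin-Tsai estimate of a composite modulus from fiber/matrix moduli,
    fiber volume fraction v_f, and shape parameter zeta."""
    eta = (e_f / e_m - 1.0) / (e_f / e_m + zeta)
    return e_m * (1.0 + zeta * eta * v_f) / (1.0 - eta * v_f)

# Illustrative glass/epoxy short-fiber values (not from the paper):
e_f, e_m, v_f, aspect = 72.0, 3.5, 0.3, 20.0   # GPa, GPa, -, fiber l/d
e11 = halpin_tsai(e_f, e_m, v_f, zeta=2.0 * aspect)  # longitudinal modulus
e22 = halpin_tsai(e_f, e_m, v_f, zeta=2.0)           # transverse modulus
print(f"E11 ~ {e11:.1f} GPa, E22 ~ {e22:.1f} GPa")
```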

  4. Numerical and experimental evaluation of a compact sensor antenna for healthcare devices.

    PubMed

    Alomainy, A; Yang Hao; Pasveer, F

    2007-12-01

    The paper presents a compact planar antenna designed for wireless sensors intended for healthcare applications. Antenna performance is investigated with regards to various parameters governing the overall sensor operation. The study illustrates the importance of including full sensor details in determining and analysing the antenna performance. A globally optimized sensor antenna shows an increase in antenna gain by 2.8 dB and 29% higher radiation efficiency in comparison to a conventional printed strip antenna. The wearable sensor performance is demonstrated and effects on antenna radiated power, efficiency and front to back ratio of radiated energy are investigated both numerically and experimentally. Propagation characteristics of the body-worn sensor to on-body and off-body base units are also studied. It is demonstrated that the improved sensor antenna has an increase in transmitted and received power, consequently sensor coverage range is extended by approximately 25%. PMID:23852005

  5. Numerical study of evaluating the optical quality of supersonic flow fields.

    PubMed

    Wang, Tao; Zhao, Yan; Xu, Dong; Yang, Qiuying

    2007-08-10

    A numerical method based on the uniform and hexahedral grids generated from computational fluid dynamics is presented for the analysis of aero-optical performance. A single grid cell is treated as isotropic and homogeneous inside, and it is assumed that the light rays propagate grid by grid. Ray tracing is employed to track the transmission through the flow of supersonic fluids, and a recursive algorithm is derived. The line-of-sight errors and optical path differences produced by the mean density fields were calculated, the phase variances arising from the density fluctuations were computed, and the Strehl ratios were obtained. This method potentially provides a solution for the prediction of aero-optical effects.
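    A much-simplified sketch of the idea: with the Gladstone-Dale relation n = 1 + K_GD * rho, rays accumulate optical path length through a density field, and the optical path difference across the beam yields a phase variance and Strehl ratio. The synthetic density field is illustrative, and straight rays are assumed (the paper's method additionally refracts rays grid by grid, which is omitted here):

```python
import numpy as np

K_GD = 2.27e-4          # Gladstone-Dale constant for air [m^3/kg]
LAM = 633e-9            # wavelength for the phase estimate [m]

# Synthetic 2D density field [kg/m^3] on a uniform grid: a weak periodic
# disturbance on a freestream value (illustrative only).
ny, nx, dy = 200, 64, 1e-3                      # cells per ray, rays, cell size [m]
xs = np.arange(nx) * dy
rho = 0.4 + 0.002 * np.tile(np.sin(2 * np.pi * xs / 0.02), (ny, 1))

# Straight-ray approximation: each ray runs down one grid column and
# accumulates optical path length OPL = sum(n * dy), with n = 1 + K_GD * rho.
n_field = 1.0 + K_GD * rho
opl = n_field.sum(axis=0) * dy                  # one OPL per ray (per column)
opd = opl - opl.mean()                          # optical path difference

phase_rms = 2.0 * np.pi / LAM * opd.std()       # rms wavefront phase error [rad]
strehl = np.exp(-phase_rms**2)                  # Marechal approximation
print(f"OPD rms = {opd.std():.3e} m, Strehl ratio ~ {strehl:.3f}")
```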

  6. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

  7. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  8. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

    PubMed

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
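    A minimal single-particle sketch of the two-scale coupling described above: spherical intraparticle diffusion, a local-equilibrium surface condition, and a well-mixed reactor ODE, integrated with a BDF solver as in the first scheme. The paper's model additionally tracks distributions of particle size and age, which this sketch omits; all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper):
D, R, K, PHI = 1e-11, 2.5e-4, 5.0, 0.01  # diffusivity [m^2/s], radius [m],
                                         # partition coeff., resin volume fraction
N = 40                                   # radial nodes, r = 0 .. R
r = np.linspace(0.0, R, N)
dr = r[1] - r[0]

def rhs(t, u):
    C = u[-1]                    # reactor concentration (macroscale ODE variable)
    q = np.empty(N)              # intraparticle concentration profile
    q[:-1] = u[:-1]
    q[-1] = K * C                # local equilibrium at the particle surface
    dq = np.empty(N - 1)
    dq[0] = 6.0 * D * (q[1] - q[0]) / dr**2              # symmetry at r = 0
    dq[1:] = D * ((q[2:] - 2.0 * q[1:-1] + q[:-2]) / dr**2
                  + (q[2:] - q[:-2]) / (dr * r[1:-1]))   # spherical diffusion
    flux = D * (q[-1] - q[-2]) / dr                      # flux into the particle
    dC = -3.0 * PHI * flux / R                           # reactor mass balance
    return np.append(dq, dC)

u0 = np.append(np.zeros(N - 1), 1.0)     # fresh resin, initial C = 1 (arb. units)
sol = solve_ivp(rhs, (0.0, 3600.0), u0, method="BDF", rtol=1e-6)
print(f"reactor concentration after 1 h: {sol.y[-1, -1]:.4f}")
```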

  9. A Numerical model to evaluate proposed ground-water allocations in southwest Kansas

    USGS Publications Warehouse

    Jorgensen, D.G.; Grubb, H.F.; Baker, C.H.; Hilmes, G.E.; Jenkins, E.D.

    1982-01-01

    A computer model was developed to assist the Southwest Kansas Groundwater Management District No. 3 in the evaluation of applications to appropriate ground water. The model calculates the drawdown due to a proposed well at all existing wells in the section of the proposed well and at all wells in the adjacent eight sections. The depletion expected in the 9-square-mile area due to all existing wells and the proposed well is computed and compared with allowable limits defined by the management district. An optional program permits the evaluation of allowable depletion for one or more townships. All options are designed to run interactively, thus allowing for immediate evaluation of proposed ground-water withdrawals. (USGS)
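    The drawdown screening described above can be sketched with the classical Theis solution for a pumping well in a confined aquifer (the hydraulic parameters and distances below are hypothetical, not the district's):

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(Q, T, S, r, t):
    """Theis drawdown s = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t)
    and the well function W(u) = exp1(u)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Illustrative screening of a proposed well against neighboring wells:
Q, T, S = 2000.0, 500.0, 0.1       # m^3/d, m^2/d, dimensionless storativity
for r in [400.0, 800.0, 1600.0]:   # distances to existing wells [m]
    s = theis_drawdown(Q, T, S, r, t=365.0)
    print(f"r = {r:6.0f} m -> drawdown after 1 yr ~ {s:.2f} m")
```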

  10. Numerical evaluation of lactoperoxidase inactivation during continuous pulsed electric field processing.

    PubMed

    Buckow, Roman; Semrau, Julius; Sui, Qian; Wan, Jason; Knoerzer, Kai

    2012-01-01

    A computational fluid dynamics (CFD) model describing the flow, electric field and temperature distribution of a laboratory-scale pulsed electric field (PEF) treatment chamber with co-field electrode configuration was developed. The predicted temperature increase was validated by means of integral temperature studies using thermocouples at the outlet of each flow cell for grape juice and salt solutions. Simulations of PEF treatments revealed intensity peaks of the electric field and laminar flow conditions in the treatment chamber causing local temperature hot spots near the chamber walls. Furthermore, thermal inactivation kinetics of lactoperoxidase (LPO) dissolved in simulated milk ultrafiltrate were determined with a glass capillary method at temperatures ranging from 65 to 80 °C. Temperature dependence of first-order inactivation rate constants was accurately described by the Arrhenius equation yielding an activation energy of 597.1 kJ mol(-1). The thermal impact of different PEF processes on LPO activity was estimated by coupling the derived Arrhenius model with the CFD model and the predicted enzyme inactivation was compared to experimental measurements. Results indicated that LPO inactivation during combined PEF/thermal treatments was largely due to thermal effects, but 5-12% enzyme inactivation may be related to other electro-chemical effects occurring during PEF treatments.
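    A minimal sketch of the kinetic step, using the reported activation energy of 597.1 kJ mol(-1); the reference rate constant below is hypothetical, since the abstract does not report one:

```python
import numpy as np

R_GAS = 8.314        # J mol^-1 K^-1
EA = 597.1e3         # J mol^-1, activation energy reported in the abstract

def rate_constant(T, k_ref, T_ref):
    """First-order inactivation rate constant via the Arrhenius equation,
    referenced to a measured value k_ref at temperature T_ref (both in K)."""
    return k_ref * np.exp(-EA / R_GAS * (1.0 / T - 1.0 / T_ref))

def residual_activity(T, t_s, k_ref, T_ref):
    """Fraction of LPO activity remaining after t_s seconds at temperature T."""
    return np.exp(-rate_constant(T, k_ref, T_ref) * t_s)

# k_ref is a hypothetical reference rate, not a value from the paper:
K70 = 0.05   # s^-1 at 70 C (343.15 K), illustrative only
for t_c in [65, 70, 75]:
    a = residual_activity(t_c + 273.15, 30.0, K70, 343.15)
    print(f"{t_c} C, 30 s -> residual activity {100 * a:.1f} %")
```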

  11. Three-Dimensional Numerical Evaluation of Thermal Performance of Uninsulated Wall Assemblies

    SciTech Connect

    Ridouane, El Hassan; Bianchi, Marcus V.A.

    2011-11-01

    This study describes a detailed 3D computational fluid dynamics model that evaluates the thermal performance of uninsulated wall assemblies. It accounts for conduction through framing, convection, and radiation, and allows for material property variations with temperature. This research was presented at the ASME 2011 International Mechanical Engineering Congress and Exhibition, Denver, Colorado, November 11-17, 2011.

  12. Evaluation of Injection Efficiency of Carbon Dioxide Using an Integrated Injection Well and Geologic Formation Numerical Simulation Scheme

    NASA Astrophysics Data System (ADS)

    Kihm, J.; Park, S.; Kim, J.; SNU CO2 GEO-SEQ TEAM

    2011-12-01

    A series of integrated injection well and geologic formation numerical simulations was performed to evaluate the injection efficiency of carbon dioxide using a multiphase thermo-hydrological numerical model. The numerical simulation results show that groundwater flow, carbon dioxide flow, and heat transport in both the injection well and the sandstone formation can be analyzed simultaneously, and thus the injection efficiency (i.e., injection rate and injectivity) of carbon dioxide can be quantitatively evaluated using the integrated injection well and geologic formation numerical simulation scheme. The injection rate and injectivity of carbon dioxide increase rapidly during the early period (about 10 days) and then increase slightly up to about 2.07 kg/s (equivalent to 0.065 Mton/year) and about 2.84 × 10⁻⁷ kg/s/Pa, respectively, until 10 years for the base case. The sensitivity test results show that the injection pressure and temperature of carbon dioxide at the wellhead have significant impacts on its injection rate and injectivity. The vertical profile of the fluid pressure in the injection well reaches an almost hydrostatic equilibrium state within 1 month for all the cases. The vertical profile of the fluid temperature in the injection well becomes a monotonically increasing profile with depth, due to isenthalpic or adiabatic compression, within 6 months for all the cases. The injection rate of carbon dioxide increases linearly with the fluid pressure difference between the well bottom and the sandstone formation far from the injection well. In contrast, the injectivity of carbon dioxide varies unsystematically with the fluid pressure difference. On the other hand, the reciprocal of the kinematic viscosity of carbon dioxide at the well bottom has an excellent linear relationship with the injectivity of carbon dioxide. This indicates that the above-mentioned variation of the injectivity of carbon dioxide can be corrected using this linear relationship. The
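
    As a quick consistency check on the base-case figures above (our arithmetic, not the authors'), injectivity is the injection rate divided by the driving pressure difference:

      # Injectivity = mass injection rate / pressure difference between the
      # well bottom and the far-field formation. The implied pressure
      # difference below is our inference, not a value from the record.
      m_dot = 2.07           # kg/s, base-case injection rate
      injectivity = 2.84e-7  # kg/s/Pa, base-case injectivity
      dp = m_dot / injectivity
      print(dp / 1e6, "MPa")   # ~7.3 MPa implied fluid pressure difference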

  13. Evaluation of Sulfur Flow Emplacement on Io from Galileo Data and Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Williams, David A.; Greeley, Ronald; Lopes, Rosaly M. C.; Davies, Ashley G.

    2001-01-01

    Galileo images of bright lava flows surrounding Emakong Patera have been analyzed and numerical modeling has been performed to assess whether these flows could have resulted from the emplacement of sulfur lavas on Io. Images from the solid-state imaging (SSI) camera show that these bright, white to yellow Emakong flows are up to 370 km long and contain dark, sinuous features that are interpreted to be lava conduits, approx. 300-500 m wide and > 100 km long. Near-Infrared Mapping Spectrometer (NIMS) thermal emission data yield a color temperature estimate of 344 ± 60 K (less than or equal to 131 °C) within the Emakong caldera. We suggest that these bright flows likely resulted from either sulfur lavas or silicate lavas that have undergone extensive cooling, pyroclastic mantling, and/or alteration with bright sulfurous materials. The Emakong bright flows have estimated volumes of approx. 250-350 cu km, similar to some of the smaller Columbia River Basalt flows. If the Emakong flows did result from effusive sulfur eruptions, then they are orders of magnitude greater in volume than any terrestrial sulfur flows. Our numerical modeling results show that sulfur lavas on Io could have been emplaced as turbulent flows, which were capable of traveling tens to hundreds of kilometers, consistent with the predictions of Sagan [1979] and Fink et al. [1983]. Our modeled flow distances are also consistent with the measured lengths of the Emakong channels and bright flows. Modeled thermal erosion rates are approx. 1-4 m/d for flows erupted at approx. 140-180 °C, which are consistent with the melting rates of Kieffer et al. [2000]. The Emakong channels could have formed by thermal erosion; however, the morphologic signatures of thermal erosion channels cannot be discerned from available images. There are planned Galileo flybys of Io in 2001 which provide excellent opportunities to obtain high-resolution morphologic and color data of Emakong Patera. Such observations could, along

  14. Numerical evaluation of the effectiveness of NO2 and N2O5 generation during the NO ozonation process.

    PubMed

    Wang, Haiqiang; Zhuang, Zhuokai; Sun, Chenglang; Zhao, Nan; Liu, Yue; Wu, Zhongbiao

    2016-03-01

    Wet scrubbing combined with ozone oxidation has become a promising technology for the simultaneous removal of SO2 and NOx from exhaust gas. In this paper, a new 20-species, 76-step detailed kinetic mechanism between O3 and NOx was proposed. The concentration of N2O5 was measured using an in-situ IR spectrometer. The numerical results agreed well with both published experimental results and our own experiments. Key reaction parameters for the generation of NO2 and N2O5 during the NO ozonation process were investigated by a numerical simulation method. The effect of temperature on producing NO2 was found to be negligible. To produce NO2, the optimal residence time was 1.25 s with an O3/NO molar ratio of about 1. For the generation of N2O5, the residence time should be about 8 s, while the temperature of the exhaust gas should be strictly controlled and the O3/NO molar ratio should be about 1.75. This study provides detailed investigations of the reaction parameters of NOx ozonation by a numerical simulation method, and the results should be helpful for the design and optimization of ozone oxidation combined with the wet flue gas desulfurization (WFGD) method for the removal of NOx. PMID: 26969050
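
    For readers who want to experiment, here is a heavily reduced sketch of such an ozonation kinetics calculation (three reactions instead of the record's 76 steps; the rate constants are order-of-magnitude literature values near 298 K, quoted for illustration only):

      import numpy as np
      from scipy.integrate import solve_ivp

      # Reduced 3-reaction sketch of NO ozonation; rate constants in
      # cm^3 molecule^-1 s^-1 are approximate literature values at ~298 K:
      k1 = 1.8e-14   # NO  + O3  -> NO2 + O2
      k2 = 3.5e-17   # NO2 + O3  -> NO3 + O2
      k3 = 1.9e-12   # NO2 + NO3 -> N2O5 (effective, ~1 atm)

      def rhs(t, y):
          no, o3, no2, no3, n2o5 = y
          r1, r2, r3 = k1 * no * o3, k2 * no2 * o3, k3 * no2 * no3
          return [-r1, -r1 - r2, r1 - r2 - r3, r2 - r3, r3]

      # Initial number densities (molecules/cm^3), O3/NO = 1.75 as in the
      # record's N2O5-optimal case; values otherwise illustrative:
      y0 = [1.2e16, 2.1e16, 0.0, 0.0, 0.0]
      sol = solve_ivp(rhs, (0.0, 8.0), y0, method="LSODA", dense_output=True)
      print("N2O5 after 8 s residence time:", sol.sol(8.0)[4])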

  15. Numerical evaluation of crack growth in polymer electrolyte fuel cell membranes based on plastically dissipated energy

    NASA Astrophysics Data System (ADS)

    Ding, Guoliang; Santare, Michael H.; Karlsson, Anette M.; Kusoglu, Ahmet

    2016-06-01

    Understanding the mechanisms of growth of defects in polymer electrolyte membrane (PEM) fuel cells is essential for improving cell longevity. Characterizing crack growth in PEM fuel cell membranes under relative humidity (RH) cycling is an important step towards establishing strategies for developing more durable membrane electrode assemblies (MEA). In this study, a crack propagation criterion based on plastically dissipated energy is investigated numerically. The plastically dissipated energy accumulated under cyclical RH loading ahead of the crack tip is calculated and compared to a critical value, presumed to be a material parameter. Once the accumulation reaches the critical value, the crack propagates via a node release algorithm. It is well established experimentally that expanded polytetrafluoroethylene (ePTFE)-reinforced perfluorosulfonic acid (PFSA) membranes have better durability than unreinforced membranes, and that through-thickness cracks are generally found under the flow channel regions, but not the land regions, of unreinforced PFSA membranes. We show that the proposed plastically dissipated energy criterion captures these experimental observations and provides a framework for investigating failure mechanisms in ionomer membranes subjected to similar environmental loads.
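
    Schematically (our sketch of the criterion as described, with made-up numbers and the finite element solution replaced by a placeholder), the propagation logic reads:

      # Crack-growth criterion sketch: plastic dissipation accumulated ahead
      # of the crack tip over RH cycles is compared with a critical value,
      # and the crack-tip node is released when the threshold is reached.
      # W_CRIT and cycle_dissipation are illustrative placeholders; in the
      # paper this logic runs inside a finite element analysis.
      W_CRIT = 1.0e3          # critical plastically dissipated energy (assumed)

      def cycle_dissipation(tip):
          """Placeholder for the per-RH-cycle plastic dissipation integrated
          over the process zone ahead of node `tip` (from the FE solution)."""
          return 75.0

      tip, accumulated = 0, 0.0
      for cycle in range(200):               # RH loading cycles
          accumulated += cycle_dissipation(tip)
          if accumulated >= W_CRIT:          # criterion met: advance the crack
              tip += 1                       # node release -> next tip node
              accumulated = 0.0
      print("crack advanced by", tip, "nodes")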

  16. Numerical simulation approaches to evaluate nitrate contamination of groundwater through leakage well in layered aquifer system

    NASA Astrophysics Data System (ADS)

    Koh, E.; Lee, E.; Lee, K.

    2013-12-01

    A layered aquifer system (i.e., perched and regional aquifers) is locally observed in the Gosan area of Jeju Island, Korea, due to the scattered distribution of an impermeable clay layer. In the Gosan area, farming is intensive, and nitrate contamination has been frequently reported in the groundwater of the regional aquifer, which is the sole water resource on the island. The water quality of the regional groundwater is impacted by inflows of the nitrate-rich perched groundwater, which sits above the impermeable layer and is directly affected by surface contaminants. A poorly grouted well penetrating the impermeable layer provides a passage for contaminated groundwater through the impermeable layer. This hydrogeological characteristic consequently induces nitrate contamination of the regional aquifer in this region. To quantify the inflows of perched groundwater via leakage wells, a numerical model was developed to calculate the leakage of perched groundwater into the regional groundwater. These perched groundwater leakages were applied as point, time-variable contamination sources during the solute transport simulation for the regional aquifer. This work will provide useful information for suggesting effective ways to control nitrate contamination of groundwater in agricultural areas.

  17. Sensor and numerical simulator evaluation for porous medium desiccation and rewetting at the intermediate laboratory scale

    SciTech Connect

    Oostrom, Martinus; Wietsma, Thomas W.; Strickland, Christopher E.; Freedman, Vicky L.; Truex, Michael J.

    2012-02-01

    Soil desiccation, in conjunction with surface infiltration control, is considered at the Hanford Site as a potential technology to limit the flux of technetium and other contaminants in the vadose zone to the groundwater. An intermediate-scale experiment was conducted to test the response of a series of instruments to desiccation and subsequent rewetting of porous media. The instruments include thermistors, thermocouple psychrometers, dual-probe heat pulse sensors, heat dissipation units, and humidity probes. The experiment was simulated with the multifluid flow simulator STOMP, using independently obtained hydraulic and thermal porous medium properties. All instrument types used for this experiment were able to indicate when the desiccation front passed a certain location. In most cases the changes were sharp, indicating rapid changes in moisture content, water potential, or humidity. However, a response to the changing conditions was recorded only when the drying front was very close to a sensor. Of the tested instruments, only the heat dissipation unit and humidity probes were able to detect rewetting. The numerical simulation results reasonably match the experimental data, indicating that the simulator captures the pertinent gas flow and transport processes related to desiccation and rewetting and may be useful in the design and analysis of field tests.

  18. Evaluation of the deflections in the radiator vessel of the ALICE RICH array using numerical methods

    NASA Astrophysics Data System (ADS)

    Demelio, G.; Galantucci, L. M.; Grimaldi, A.; Nappi, E.; Posa, F.; Valentino, V.

    1996-02-01

    The RICH array in ALICE (A Large Ion Collider Experiment) at CERN-LHC is being designed following the basic criterion of optimizing the detector performance in terms of Cherenkov angle resolution while minimising the total material traversed by the incoming particles. Due to the physics requirements, low deformation of the liquid freon container is mandatory; therefore a careful engineering design is needed to predict the deflection of the radiator structure when filled with freon. The aim of this study is the design of the liquid freon container under different static load conditions, since the RICH array is placed in a barrel frame structure of about 4 m radius and 8 m length. Because of its high stiffness and low weight, a honeycomb sandwich with NOMEX® core and carbon fiber skins is used for the vessel structure. Different solutions are analyzed using numerical techniques based on the Navier double series expansion and the finite element method. They show good agreement and highlight the possibility of obtaining negligible stresses and strains.
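
    For reference (a textbook illustration, not the authors' model, which uses sandwich-panel properties), the Navier double series for a simply supported rectangular plate under uniform load q is w = (16q/π⁶D) Σ sin(mπx/a) sin(nπy/b) / [mn((m/a)² + (n/b)²)²] over odd m, n; a short script with assumed panel values:

      import numpy as np

      # Classical Navier double-series deflection of a simply supported
      # rectangular plate under uniform load. All values are assumptions,
      # not properties of the actual ALICE radiator vessel.
      a, b = 1.2, 1.2        # panel dimensions, m (assumed)
      D = 2.0e4              # effective flexural rigidity, N*m (assumed)
      q = 1.6e4              # freon hydrostatic load, Pa (assumed)

      def deflection(x, y, terms=50):
          w = 0.0
          for m in range(1, 2 * terms, 2):          # odd terms only
              for n in range(1, 2 * terms, 2):
                  w += (np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / b)
                        / (m * n * ((m / a)**2 + (n / b)**2)**2))
          return 16.0 * q / (np.pi**6 * D) * w

      print("center deflection (mm):", 1e3 * deflection(a / 2, b / 2))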

  19. Numerical simulation of widening and bed deformation of straight sand-bed rivers. II: Model evaluation

    USGS Publications Warehouse

    Darby, S.E.; Thorne, Colin R.; Simon, A.

    1996-01-01

    In this paper the numerical model presented in the companion paper is tested and applied. Assessment of model accuracy was based on two approaches. First, predictions of the evolution of a 13.5 km reach of the South Fork of the Forked Deer River, in west Tennessee, were compared to observations over a 24-yr period. Results suggest that although the model was able to qualitatively predict trends of widening and deepening, quantitative predictions were not reliable. Simulated widths and depths were within 15% of the corresponding observed values, but the observed changes in these parameters at the study sites were also close to these values. Simulated rates of depth adjustment were within 15% of observed rates, but observed rates of channel widening at the study sites were approximately three times those simulated by the model. In the second approach, the model was used to generate relationships between stable channel width and bank-full discharge. The model was able to successfully replicate the form of empirically derived regime-width equations. Simulations were used to demonstrate the model's ability to obtain more realistic predictions of bed evolution in widening channels.

  1. Numerical approach for the evaluation of Weibull distribution parameters for hydrologic purposes

    NASA Astrophysics Data System (ADS)

    Pierleoni, A.; Di Francesco, S.; Biscarini, C.; Manciola, P.

    2016-06-01

    In hydrology, the statistical description of low-flow phenomena is very important for evaluating the available water resource, especially in a river, and the related values can obviously be treated as random variables; therefore probability distributions dealing with extreme values (maximum and/or minimum) of the variable play a fundamental role. Computational procedures for estimating the parameters of these distributions are very useful, especially when embedded into analysis software [1][2] or used as standalone applications. In this paper a computational procedure for the evaluation of the Weibull [3] distribution is presented, focusing on the case when the lower limit of the distribution is not known or not set to a specific value a priori. The procedure takes advantage of the Gumbel [4] moment approach to the problem.
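
    As a point of comparison (not the authors' Gumbel moment-based procedure), a three-parameter Weibull fit with a free lower limit can be sketched with maximum likelihood on synthetic data:

      import numpy as np
      from scipy import stats

      # Fit a three-parameter Weibull distribution (shape c, location =
      # unknown lower limit, scale) to a sample of annual minimum flows.
      # The data here are synthetic, for illustration only.
      rng = np.random.default_rng(1)
      flows = stats.weibull_min.rvs(c=1.8, loc=2.0, scale=5.0,
                                    size=200, random_state=rng)

      # Maximum-likelihood fit with the lower limit left free, the case the
      # paper focuses on (scipy's MLE is one practical alternative to the
      # moment approach the authors adopt):
      c, loc, scale = stats.weibull_min.fit(flows)
      print(f"shape={c:.2f}  lower limit={loc:.2f}  scale={scale:.2f}")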

  2. Design and evaluation of the computer-based training program Calcularis for enhancing numerical cognition.

    PubMed

    Käser, Tanja; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; Richtmann, Verena; Grond, Ursina; Gross, Markus; von Aster, Michael

    2013-01-01

    This article presents the design and a first pilot evaluation of the computer-based training program Calcularis for children with developmental dyscalculia (DD) or difficulties in learning mathematics. The program has been designed according to insights on the typical and atypical development of mathematical abilities. The learning process is supported through multimodal cues, which encode different properties of numbers. To offer optimal learning conditions, a user model completes the program and allows flexible adaptation to a child's individual learning and knowledge profile. Thirty-two children with difficulties in learning mathematics completed the 6- to 12-week computer training. The children played the game for 20 min per day, 5 days a week. The training effects were evaluated using neuropsychological tests. Generally, children benefited significantly from the training regarding number representation and arithmetic operations. Furthermore, children liked playing with the program and reported that the training improved their mathematical abilities.

  3. Numerical modeling of debris avalanches at Nevado de Toluca (Mexico): implications for hazard evaluation and mapping

    NASA Astrophysics Data System (ADS)

    Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.

    2007-05-01

    The present study concerns the numerical modeling of debris avalanches on Nevado de Toluca volcano (Mexico) using the TITAN2D simulation software, and its application to the creation of hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central México near the cities of Toluca and México City; its past activity has endangered an area that today holds more than 25 million inhabitants. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma until 10.5 ka, with both effusive and explosive events; Nevado de Toluca has also had long phases of inactivity characterized by erosion and by the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are large debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events happened mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. The analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were entirely based upon the information stored in the geological database. The modeling software is built upon equations

  4. Determining the optimal planting density and land expectation value -- a numerical evaluation of decision model

    SciTech Connect

    Gong, P. . Dept. of Forest Economics)

    1998-08-01

    Different decision models can be constructed and used to analyze a regeneration decision in even-aged stand management. However, the optimal decision and management outcomes determined in an analysis may depend on the decision model used in the analysis. This paper examines the proper choice of decision model for determining the optimal planting density and land expectation value (LEV) for a Scots pine (Pinus sylvestris L.) plantation in northern Sweden. First, a general adaptive decision model for determining the regeneration alternative that maximizes the LEV is presented. This model recognizes future stand state and timber price uncertainties by including multiple stand state and timber price scenarios, and assumes that the harvest decision in each future period will be made conditional on the observed stand state and timber prices. Alternative assumptions about future stand states, timber prices, and harvest decisions can be incorporated into this general decision model, resulting in several different decision models that can be used to analyze a specific regeneration problem. Next, the consequences of choosing different modeling assumptions are determined using the example Scots pine plantation problem. Numerical results show that the most important sources of uncertainty that affect the optimal planting density and LEV are variations of the optimal clearcut time due to short-term fluctuations of timber prices. It is appropriate to determine the optimal planting density and harvest policy using an adaptive decision model that recognizes uncertainty only in future timber prices. After the optimal decisions have been found, however, the LEV should be re-estimated by incorporating both future stand state and timber price uncertainties.
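
    As background (a deterministic textbook special case, not the paper's adaptive model), the land expectation value for an infinite series of identical rotations is given by the Faustmann formula; a tiny sketch with invented numbers:

      # Deterministic Faustmann land expectation value for one regeneration
      # alternative, the quantity being maximized above. The adaptive model
      # in the record replaces the fixed harvest revenue with stand-state
      # and price scenarios. All numbers below are illustrative.
      def lev(planting_cost, harvest_revenue, rotation, rate):
          """LEV of bare land, repeating the same rotation forever."""
          d = (1.0 + rate) ** (-rotation)      # discount factor back to t = 0
          return (harvest_revenue * d - planting_cost) / (1.0 - d)

      print(lev(planting_cost=800.0, harvest_revenue=12000.0,
                rotation=90, rate=0.03))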

  5. Evaluation of the numeric rating scale for perception of effort during isometric elbow flexion exercise.

    PubMed

    Lampropoulou, Sofia; Nowicky, Alexander V

    2012-03-01

    The aim of the study was to examine the reliability and validity of the numerical rating scale (0-10 NRS) for rating perception of effort during isometric elbow flexion in healthy people. 33 individuals (32 ± 8 years) participated in the study. Three re-test measurements within one session and three weekly sessions were undertaken to determine the reliability of the scale. The sensitivity of the scale following 10 min of isometric fatiguing exercise of the elbow flexors, as well as the correlation of effort with the electromyographic (EMG) activity of the flexor muscles, were tested. Perception of effort was tested during isometric elbow flexion at 10, 30, 50, 70, 90, and 100% MVC. The 0-10 NRS demonstrated excellent test-retest reliability [intraclass correlation (ICC) = 0.99 between measurements taken within a session and 0.96 between 3 consecutive weekly sessions]. Exploratory curve fitting for the relationship between effort ratings and voluntary force, and the underlying EMG, showed that both are best described by power functions (y = ax^b). There were also strong correlations (range 0.89-0.95) between effort ratings and EMG recordings of all flexor muscles, supporting the concurrent criterion validity of the measure. The 0-10 NRS was sensitive enough to detect changes in perceived effort following fatigue: ratings significantly increased at the level of voluntary contraction used in its assessment (p < 0.001). These findings suggest the 0-10 NRS is a valid and reliable scale for rating perception of effort in healthy individuals. Future research should seek to establish the validity of the 0-10 NRS in clinical settings.
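
    Fitting the reported power function y = a*x^b to effort-versus-force data is straightforward; a sketch with invented ratings (the study's data are not reproduced here):

      import numpy as np
      from scipy.optimize import curve_fit

      # Fit the power function y = a * x**b to 0-10 NRS effort ratings
      # versus %MVC. The ratings below are made-up illustrative data.
      force = np.array([10, 30, 50, 70, 90, 100], dtype=float)   # %MVC
      effort = np.array([0.7, 2.1, 3.9, 6.0, 8.6, 10.0])         # 0-10 NRS

      popt, _ = curve_fit(lambda x, a, b: a * x**b, force, effort,
                          p0=(0.1, 1.0))
      print("a = {:.3f}, b = {:.2f}".format(*popt))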

  6. Numerical and experimental evaluation of a new low-leakage labyrinth seal

    NASA Technical Reports Server (NTRS)

    Rhode, D. L.; Ko, S. H.; Morrison, G. L.

    1988-01-01

    The effectiveness of a recently developed leakage model for evaluating new design features of most seals is demonstrated. A preliminary assessment of the present stator groove feature shows that it gives approximately a 20 percent leakage reduction with no shaft speed effects. Also, detailed distributions of predicted streamlines, axial velocity, relative pressure and turbulence energy enhance one's physical insight. In addition, the interesting measured effect of axial position of the rotor/stator pair on leakage rate and stator wall axial pressure distribution is examined.

  7. Numerical model for the evaluation of Earthquake effects on a magmatic system.

    NASA Astrophysics Data System (ADS)

    Garg, Deepak; Longo, Antonella; Papale, Paolo

    2016-04-01

    A finite element numerical model is presented to compute the effect of an earthquake on the dynamics of magma in reservoirs with deformable walls. The magmatic system is hit by a Mw 7.2 earthquake (Petrolia/Cape Mendocino, 1992) with hypocenter at 15 km diagonal distance. At subsequent times the seismic wave reaches the nearest side of the magmatic system boundary, travels through the magmatic fluid, and arrives at the other side of the boundary. The modelled physical system consists of the magmatic reservoir with a thin surrounding layer of rocks. Magma is considered as a homogeneous multicomponent, multiphase Newtonian mixture with exsolution and dissolution of volatiles (H2O+CO2). The magmatic reservoir is made of a small shallow magma chamber filled with degassed phonolite, connected by a vertical dike to a larger, deeper chamber filled with gas-rich shoshonite, in a condition of gravitational instability. The coupling between the earthquake and the magmatic system is computed by solving the elastostatic equation for the deformation of the magmatic reservoir walls, along with the conservation equations for the mass of components and the momentum of the magmatic mixture. The characteristic elastic parameters of rocks are assigned to the computational domain at the boundary of the magmatic system. Physically consistent Dirichlet and Neumann boundary conditions are assigned according to the evolution of the seismic signal. Seismic forced displacements and velocities are set on the part of the boundary which is hit by the wave. On the other part of the boundary, motion is governed by the action of fluid pressure and deviatoric stress forces due to fluid dynamics. The constitutive equations for the magma are solved in a monolithic way by a space-time discontinuous-in-time finite element method. To attain additional stability, least-squares and discontinuity-capturing operators are included in the formulation. A partitioned algorithm is used to couple the magma and the thin layer of rocks. The

  8. Design of tissue engineering scaffolds based on hyperbolic surfaces: structural numerical evaluation.

    PubMed

    Almeida, Henrique A; Bártolo, Paulo J

    2014-08-01

    Tissue engineering represents a new field aiming at developing biological substitutes to restore, maintain, or improve tissue functions. In this approach, scaffolds provide temporary mechanical and vascular support for tissue regeneration while tissue in-growth is taking place. These scaffolds must be biocompatible and biodegradable, with appropriate porosity, pore structure and distribution, and optimal vascularization, with both surface and structural compatibility. The challenge is to establish a proper balance between the porosity and the mechanical performance of scaffolds. This work investigates the use of two different types of triply periodic minimal surfaces, Schwarz and Schoen, in order to design better biomimetic scaffolds with high surface-to-volume ratio, high porosity and good mechanical properties. The mechanical behaviour of these structures is assessed with the finite element software Abaqus. The effect of two design parameters (thickness and surface radius) on porosity and mechanical behaviour is also evaluated.
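
    The two surface families named above are conveniently handled as level sets; a small voxel sketch (illustrative resolution and sheet thickness, not the authors' geometry pipeline) estimates scaffold porosity for the Schwarz primitive surface:

      import numpy as np

      # Voxel sketch of a Schwarz "primitive" triply periodic minimal
      # surface scaffold: solidify the region |f| <= t (a sheet whose
      # thickness is controlled by t) and estimate porosity as the void
      # fraction. N and t are illustrative.
      N, t = 96, 0.4
      x, y, z = np.meshgrid(*(np.linspace(0, 2 * np.pi, N),) * 3,
                            indexing="ij")
      f = np.cos(x) + np.cos(y) + np.cos(z)     # Schwarz P level-set field
      solid = np.abs(f) <= t
      print("porosity:", 1.0 - solid.mean())

      # The Schoen gyroid variant uses
      # f = sin(x)*cos(y) + sin(y)*cos(z) + sin(z)*cos(x)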

  9. Numerical evaluation of the production of radionuclides in a nuclear reactor (Part II).

    PubMed

    Mirzadeh, S; Walsh, P

    1998-04-01

    A computer program called LAURA has been developed to predict the production rates of any member of a nuclide network undergoing spontaneous decay and/or induced neutron transformation in a nuclear reactor. The theoretical bases for the development of LAURA were discussed in Part I. In particular, in Part I we described how an expression based on the Rubinson (1949) approach is used to evaluate the depletion function. In this paper (Part II), we describe the full simulation of radionuclide production, including the decomposition of a reaction network into independent linear chains, provisions for periodic reactor shutdown and restart, and the implementation of an approximate solution given by Raykin and Shlyakhter (1989) to account for the effect of feedback due to alpha decay. Also included are some examples which demonstrate possible uses for LAURA.
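
    The building block of such calculations is the linear chain; a compact sketch (illustrative rates, no shutdown/restart logic, not LAURA's algorithm) advances a three-member chain with a matrix exponential:

      import numpy as np
      from scipy.linalg import expm

      # Linear chain A -> B -> C under constant effective removal rates,
      # advanced with a matrix exponential. lam[i] bundles the decay
      # constant plus neutron capture (sigma * phi); values are illustrative.
      lam = np.array([1e-5, 3e-6, 0.0])          # effective removal rates, 1/s
      M = np.diag(-lam) + np.diag(lam[:-1], -1)  # feed each nuclide to the next
      N0 = np.array([1e20, 0.0, 0.0])            # initial atom inventory
      for t in (1e4, 1e5, 1e6):                  # irradiation times, s
          print(t, expm(M * t) @ N0)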

  10. Numerical evaluation of the light transport properties of alternative He-3 neutron detectors using ceramic scintillators

    NASA Astrophysics Data System (ADS)

    Ohzu, A.; Takase, M.; Haruyama, M.; Kurata, N.; Kobayashi, N.; Kureta, M.; Nakamura, T.; Toh, K.; Sakasai, K.; Suzuki, H.; Soyama, K.; Seya, M.

    2015-10-01

    The light transport properties of scintillator light inside alternative He-3 neutron detectors using scintillator sheets have been investigated with a ray-tracing simulation code. The detector consists of a light-reflecting tube, a thin rectangular ceramic scintillator sheet laminated on a glass plate, and two photo-multiplier tubes (PMTs) mounted at both ends of the detector tube. The flashes of light induced on the surface of the scintillator sheet via nuclear interaction between the scintillator and neutrons are detected by the two PMTs. The light output at both ends of various detectors, in which the scintillator sheets are installed in several different arrangements, was examined and evaluated in comparison with experimental results. The results derived from the simulation reveal that the light transport property depends strongly on the arrangement of the scintillator sheet inside the tube and on the shape of the tube.

  11. Evaluation of the role of heterogeneities on transverse mixing in bench-scale tank experiments by numerical modeling.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2014-01-01

    In this work, numerical modeling is used to evaluate and interpret a series of detailed and well-controlled two-dimensional bench-scale conservative tracer tank experiments performed to investigate transverse mixing in porous media. The porous medium used consists of a fine matrix and a more permeable lens vertically aligned with the tracer source and the flow direction. A sensitivity analysis shows that the tracer distribution after passing the lens is only slightly sensitive to variations in transverse dispersivity, but strongly sensitive to the contrast of hydraulic conductivities. A unique parameter set could be calibrated to closely fit the experimental observations. On the basis of the calibrated and validated model, synthetic experiments with different contrasts in hydraulic conductivity and more complex setups were performed and the efficiency of mixing evaluated. Flux-related dilution indices derived from these simulations show that the contrasts in hydraulic conductivity between the matrix and high-permeability lenses, as well as the spatial configuration of tracer plumes and lenses, dominate mixing, rather than the actual pore-scale dispersivities. These results indicate that local material distributions, the magnitude of permeability contrasts, and their spatial and scale relation to solute plumes are more important for macro-scale transverse dispersion than the micro-scale dispersivities of individual materials. Local material characterization by thorough site investigation is hence of utmost importance for the evaluation of mixing-influenced or mixing-governed problems in groundwater, such as tracer test evaluation or assessment of natural attenuation of contaminants. PMID: 23675977

  12. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals.

    Program summary
    Program title: SecDec 2.0
    Catalogue identifier: AEIR_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 156829
    No. of bytes in distributed program, including test data, etc.: 2137907
    Distribution format: tar.gz
    Programming language: Wolfram Mathematica, Perl, Fortran/C++
    Computer: From a single PC to a cluster, depending on the problem
    Operating system: Unix, Linux
    RAM: Depending on the complexity of the problem
    Classification: 4.4, 5, 11.1
    Catalogue identifier of previous version: AEIR_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566
    Does the new version supersede the previous version?: Yes
    Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds).
    Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization

  13. A Numerical Study of Some Potential Sources of Error in Side-by-Side Seismometer Evaluations

    USGS Publications Warehouse

    Holcomb, L. Gary

    1990-01-01

    INTRODUCTION This report presents the results of a series of computer simulations of potential errors in test data which might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating the potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated side-by-side test of two sensors in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNRs). This report extends the analysis to high SNRs to determine the limitations of the direct method for calculating noise levels at signal-to-noise levels much higher than considered previously (see Holcomb 1989). Next, the method is used to analyze a simulated side-by-side test in which the outputs of both sensors consist of bandshaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real-world data, because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and bandshaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real-world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that

  14. Numerical evaluation of sequential bone drilling strategies based on thermal damage.

    PubMed

    Tai, Bruce L; Palmisano, Andrew C; Belmont, Barry; Irwin, Todd A; Holmes, James; Shih, Albert J

    2015-09-01

    Sequentially drilling multiple holes in bone is used clinically for surface preparation to aid in fusion of a joint, typically under non-irrigated conditions. Drilling induces a significant amount of heat, which accumulates after multiple passes and can result in thermal osteonecrosis and various complications. To understand the heat propagation over time, a 3D finite element model was developed to simulate sequential bone drilling. By incorporating proper material properties and a modified bone necrosis criterion, this model can visualize the propagation of damaged areas. For this study, comparisons between a 2.0 mm Kirschner wire and a 2.0 mm twist drill were conducted with their heat sources determined using an inverse method and experimentally measured bone temperatures. Three clinically viable solutions to reduce thermally-induced bone damage were evaluated using finite element analysis, including tool selection, time interval between passes, and different drilling sequences. Results show that the ideal solution would be using twist drills rather than Kirschner wires if the situation allows. A shorter time interval between passes was also found to be beneficial as it reduces the total heat exposure time. Lastly, optimizing the drilling sequence reduced the thermal damage of bone, but the effect may be limited. This study demonstrates the feasibility of using the proposed model to study clinical issues and find potential solutions prior to clinical trials.

  15. Numerical evaluation of the capping tendency of microcrystalline cellulose tablets during a diametrical compression test.

    PubMed

    Furukawa, Ryoichi; Chen, Yuan; Horiguchi, Akio; Takagaki, Keisuke; Nishi, Junichi; Konishi, Akira; Shirakawa, Yoshiyuki; Sugimoto, Masaaki; Narisawa, Shinji

    2015-09-30

    Capping is one of the major problems that occur during the tabletting process in the pharmaceutical industry. This study provides an effective method for evaluating the capping tendency during a diametrical compression test using the finite element method (FEM). In the experiments, tablets of microcrystalline cellulose (MCC) were compacted with a single tabletting machine, and the capping tendency was determined by visual inspection of the tablet after a diametrical compression test. By comparing the effects of double-radius and single-radius concave punch shapes on the capping tendency, it was observed that capping of double-radius tablets occurred at a lower compaction force than for single-radius tablets. Using FEM, we investigated the evolution of plastic strain within tablets during the diametrical compression test and visualised it using the ABAQUS output variable AC YIELD (actively yielding). For both single-radius and double-radius tablets, a capping tendency is indicated if the plastic strain initiates at the centre of the tablet, while capping does not occur if it begins at the periphery. The compaction force at which the capping tendency was observed, as estimated by the FEM analysis, was in reasonable agreement with the experimental results. PMID: 26188313

  16. Evaluation of the successive approximations method for acoustic streaming numerical simulations.

    PubMed

    Catarino, S O; Minas, G; Miranda, J M

    2016-05-01

    This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and the acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between the two methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.
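
    For context, one common statement of the successive approximations hierarchy (a Nyborg-style formulation; the record itself may use a variant) splits each field into first and second order parts,

      p = p_0 + p_1 + p_2, \qquad \mathbf{u} = \mathbf{u}_1 + \mathbf{u}_2, \qquad \rho = \rho_0 + \rho_1 + \rho_2,

    with the first order (acoustic) equations

      \partial_t \rho_1 + \rho_0 \nabla \cdot \mathbf{u}_1 = 0, \qquad
      \rho_0 \, \partial_t \mathbf{u}_1 = -\nabla p_1 + \mu \nabla^2 \mathbf{u}_1 + \left(\mu_b + \tfrac{\mu}{3}\right) \nabla (\nabla \cdot \mathbf{u}_1),

    and the time-averaged second order (streaming) equations

      \mu \nabla^2 \langle \mathbf{u}_2 \rangle - \nabla \langle p_2 \rangle = \rho_0 \, \nabla \cdot \langle \mathbf{u}_1 \mathbf{u}_1 \rangle, \qquad
      \rho_0 \, \nabla \cdot \langle \mathbf{u}_2 \rangle = -\nabla \cdot \langle \rho_1 \mathbf{u}_1 \rangle.

    Dropping the first order convective term (\mathbf{u}_1 \cdot \nabla)\mathbf{u}_1 from the first order momentum balance is precisely the simplification whose consequences the study examines.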

  17. Evaluation of operational numerical weather predictions in relation to the prevailing synoptic conditions

    NASA Astrophysics Data System (ADS)

    Pytharoulis, Ioannis; Tegoulias, Ioannis; Karacostas, Theodore; Kotsopoulos, Stylianos; Kartsios, Stergios; Bampzelis, Dimitrios

    2015-04-01

    The Thessaly plain, located in central Greece, has a vital role in the financial life of the country because of its significant agricultural production. The aim of the DAPHNE project (http://www.daphne-meteo.gr) is to tackle the problem of drought in this area by means of weather modification in convective clouds. This problem is reinforced by the increase in population and the water demand for irrigation, especially during the warm period of the year. The nonhydrostatic Weather Research and Forecasting model (WRF) is utilized for the research and operational purposes of the DAPHNE project. The WRF output fields are employed by the partners in order to provide high-resolution meteorological guidance and plan the project's operations. The model domains cover: i) Europe, the Mediterranean Sea and northern Africa, ii) Greece and iii) the wider region of Thessaly (at selected periods), at horizontal grid spacings of 15 km, 5 km and 1 km, respectively, using 2-way telescoping nesting. The aim of this research work is to investigate the model performance in relation to the prevailing upper-air synoptic circulation. The statistical evaluation of the high-resolution operational forecasts of near-surface and upper-air fields is performed for a selected period of the operational phase of the project using surface observations, gridded fields and weather radar data. The verification is based on gridded, point and object-oriented techniques. The 10 upper-air circulation types which describe the prevailing conditions over Greece are employed in the synoptic classification. This methodology allows the identification of model errors that occur and/or are maximized under specific synoptic conditions and may otherwise be obscured in aggregate statistics. Preliminary analysis indicates that the largest errors are associated with cyclonic conditions. Acknowledgments: This research work of the DAPHNE project (11SYN_8_1088) is co-funded by the European Union (European Regional Development Fund

  1. Evaluation of plasma density in RF CCP discharges from ion current to Langmuir probe: experiment and numerical simulation

    NASA Astrophysics Data System (ADS)

    Voloshin, Dmitry; Kovalev, Alexander; Mankelevich, Yuri; Proshina, Olga; Rakhimova, Tatyana; Vasilieva, Anna

    2015-01-01

    Experimental measurements of the current-voltage relationship in an RF CCP discharge in argon at 81 MHz have been performed using the cylindrical Langmuir probe technique. Two different probe radii have been used: 50 and 250 μm. A high plasma density of 10¹⁰-10¹¹ cm⁻³ was estimated at the specific input powers under study. Nonmonotonic behavior of the probe current with pressure was observed for the first time under RF discharge plasma conditions. To analyze the probe measurements, a fast numerical model for the ion current collected by a cylindrical probe has been developed. This model is based on the particle-in-cell with Monte Carlo collisions method for ion motion and the Boltzmann relation for electrons. The features of the probe data under the studied conditions are discussed. A comparative analysis of different collisionless approaches for calculating the plasma density from the ion probe current is presented. It is shown that, in general, collisionless theories underestimate the plasma density. For a correct evaluation of the plasma density, experimental I-V probe measurements should be supplemented by numerical simulation. It was demonstrated that the collisionless analytical theory of orbital motion can formally give correct results for the plasma density under some plasma conditions even when ion collisions take place. The physical reasons for this accidental validity are explained.
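
    As an illustration of the collisionless orbital-motion-limited (OML) estimate discussed above (using the strongly biased cylindrical-probe asymptote I = (A e n / π) sqrt(2eV/M); all measurement values below are invented):

      import numpy as np

      # Plasma density from the ion saturation current of a cylindrical
      # probe via the OML large-bias limit. Probe geometry, current and
      # bias are illustrative, not the experiment's values.
      e, M = 1.602e-19, 40 * 1.66e-27        # charge (C), Ar ion mass (kg)
      rp, Lp = 50e-6, 5e-3                   # probe radius and length, m
      A = 2 * np.pi * rp * Lp                # collecting area of the cylinder
      I, V = 2.0e-5, 30.0                    # ion current (A), bias magnitude (V)

      n = np.pi * I / (A * e * np.sqrt(2 * e * V / M))
      print("n = {:.2e} m^-3 = {:.2e} cm^-3".format(n, n * 1e-6))

    As the record cautions, this collisionless formula can misestimate the density when ion collisions matter, which is why the authors back the probe data with particle-in-cell simulation.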

  2. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
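
    For a hands-on feel, a widely available monotone piecewise-cubic interpolant is Fritsch-Carlson PCHIP (a standard method, not the paper's new algorithms, which additionally recover uniform high-order accuracy near extrema):

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      # PCHIP preserves monotonicity: no overshoot at the plateau, at the
      # cost of reduced accuracy near extrema, as discussed above.
      x = np.array([0., 1., 2., 3., 4., 5.])
      y = np.array([0., 0., 0., 1., 1., 1.])    # monotone data with a plateau
      f = PchipInterpolator(x, y)
      xs = np.linspace(0, 5, 101)
      print(np.all(np.diff(f(xs)) >= -1e-12))   # True: monotonicity preserved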

  3. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
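
    As a generic illustration of high-order explicit differencing (a standard fourth-order central stencil, not one of the paper's algorithms):

      import numpy as np

      # Fourth-order central difference for d/dx on a periodic grid,
      # checked against the exact derivative of sin(x):
      # f'(x) ~ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)
      N = 64
      x = np.linspace(0, 2 * np.pi, N, endpoint=False)
      h = x[1] - x[0]
      u = np.sin(x)
      du = (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * h)
      print("max error:", np.abs(du - np.cos(x)).max())   # O(h^4)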

  4. Evaluation of methods for thermal management in a coal-based SOFC turbine hybrid through numerical simulation

    SciTech Connect

    Tucker, D.A.; VanOsdol, J.G.; Liese, E.A.; Lawson, L.; Zitney, S.E.; Gemmen, R.S.; Ford, J.C.; Haynes, C.

    2001-01-01

    Managing the temperatures and heat transfer in the fuel cell of a solid oxide fuel cell (SOFC) gas turbine (GT) hybrid fired on coal syngas presents certain challenges compared with a natural-gas-based system, in that the latter can take advantage of internal reforming to offset heat generated in the fuel cell. Three coal-based SOFC/GT configuration designs for thermal management in the main power block are evaluated using steady-state numerical simulations developed in ASPEN Plus. A comparison is made on the basis of efficiency, operability issues and component integration. To focus on the effects of different power block configurations, the analysis assumes a consistent syngas composition in each case, and does not explicitly include gasification or syngas cleanup. A fuel cell module rated at 240 MW was used as a common basis for the three different methods. Advantages and difficulties for each configuration are identified in the simulations.

  5. Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods

    NASA Astrophysics Data System (ADS)

    Assous, Franck; Chaskalovic, Joël

    2013-03-01

    In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally assess the accuracy of two kinds of finite element methods. In this case, this allowed us to refine the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.

  6. Ultrasonic field profile evaluation in acoustically inhomogeneous anisotropic materials using 2D ray tracing model: Numerical and experimental comparison.

    PubMed

    Kolkoori, S R; Rahman, M-U; Chinta, P K; Kreutzbruck, M; Rethmeier, M; Prager, J

    2013-02-01

    Ultrasound propagation in inhomogeneous anisotropic materials is difficult to examine because of the directional dependency of their elastic properties. Simulation tools play an important role in developing advanced, reliable ultrasonic non-destructive testing techniques for the inspection of anisotropic materials, particularly austenitic cladded materials, austenitic welds and dissimilar welds. In this contribution we present an adapted 2D ray tracing model for evaluating ultrasonic wave fields quantitatively in inhomogeneous anisotropic materials. Inhomogeneity in the anisotropic material is represented by discretizing it into several homogeneous layers. According to the ray tracing model, ultrasonic ray paths are traced as the energy propagates through the various discretized layers of the material, and at each interface the problem of reflection and transmission is solved. The presented algorithm evaluates the transducer-excited ultrasonic fields accurately by taking into account the directivity of the transducer, the divergence of the ray bundle, the density of rays and phase relations, as well as transmission coefficients. The ray tracing model is able to calculate the ultrasonic wave fields generated by a point source as well as by a finite-dimension transducer. The ray tracing model results are validated quantitatively against results obtained from the 2D Elastodynamic Finite Integration Technique (EFIT) for several configurations generally occurring in the ultrasonic non-destructive testing of anisotropic materials. Finally, a quantitative comparison of the ray tracing model results with experiments on 32 mm thick austenitic weld material and 62 mm thick austenitic cladded material is discussed.

  7. Evaluating some indicators for identifying mountain waves situations in snow days by means of numerical modeling and continuous data

    NASA Astrophysics Data System (ADS)

    Sanchez, Jose Luis; Posada, Rafael; Hierro, Rodrigo; García-Ortega, Eduardo; Lopez, Laura; Gascón, Estibaliz

    2013-04-01

    Madrid-Barajas airport is located about 70 km from the Central System, and snow days with mountain waves are considered risk days for landing operations. This motivated the study of the mesoscale factors affecting this type of situation. The availability of observational data gathered during three consecutive winter campaigns in the Central System, along with data from high-resolution numerical models, has allowed the evaluation of the environmental conditions necessary for mountain wave formation on snow days, characterized from observational data and numerical simulations. By means of Meteosat Second Generation satellite images, lee clouds were observed on 25 days of the 2008-2011 winter seasons. Six of them, which also presented NW low-level flow over the mountain range, were analyzed. Necessary conditions for oscillations, as well as vertical wave propagation, were studied from radiometer data and MM5 model simulations. The radiometer data confirm the presence of a stable environment in the six selected events. With the MM5 model, the dynamic conditions allowing the flow to cross the mountain range were evaluated at three different locations around the range. Simulations of vertical velocity show that the MM5 model is able to detect mountain waves. The waves present in the six selected events are examined. The vertical wavelength presented high variability due to intense background winds at high tropospheric levels. The average values estimated for λz were between 3 and 12 km. The intrinsic period estimated was around 30 and 12 km. The simulations were able to forecast the energy release associated with the mountain waves. Acknowledgments: This study was supported by the Plan Nacional de I+D of Spain, through the grants CGL2010-15930, Micrometeo IPT-310000-2010-022 and the Junta de Castilla y León through the grant LE220A11-2.

  8. Evidence, Evaluation and the "Tyranny of Effect Size": A Proposal to More Accurately Measure Programme Impacts in Adult and Family Literacy

    ERIC Educational Resources Information Center

    Carpentieri, J. D.

    2013-01-01

    As literacy grows in importance, policymakers' demands for programme quality grow, too. Evidence on the effectiveness of adult and family literacy programmes is limited at best: research gaps abound, and programme evaluations are more often than not based on flawed theories of programme impact. In the absence of robust evidence on the full…

  9. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We have evaluated suitable reference genes for real-time (RT) quantitative PCR (qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested the genes most frequently used in the literature, such as β-Tubulin, Histone H3, Actin, Elongation factor-1α and Glyceraldehyde-3-phosphate dehydrogenase, together with the newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using the geNorm and NormFinder software packages or by the ΔCt method. geNorm analysis indicated that the three best-performing genes are sufficient for reliable normalization of RT-qPCR data. Suitable reference genes differed among sample groups, underscoring the importance of validating the expression stability of reference genes in the samples of interest. The stability rankings were basically similar between geNorm and NormFinder, suggesting the usefulness of these programs based on different algorithms. The ΔCt method suggested somewhat different results in some groups, such as flower organ or fruit skin, though the overall results correlated well with geNorm and NormFinder. Expression of two cold-inducible genes, PpCBF2 and PpCBF4, was quantified using the three most and the three least stable reference genes suggested by geNorm. Although the normalized quantities differed between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggest that using the geometric mean of three reference genes for normalization is quite a reliable approach to evaluating gene expression by RT-qPCR. We propose that the initial evaluation of gene expression stability by ΔCt method, and subsequent evaluation by geNorm or NormFinder for limited number of superior gene candidates will be a practical way of finding out
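
    The ΔCt screening step mentioned above can be sketched in a few lines, following the usual pairwise-ΔCt stability measure; the gene names and Ct values below are illustrative only.

```python
import numpy as np

def delta_ct_stability(ct):
    """Rank reference-gene stability by the pairwise Delta-Ct method.

    ct : dict mapping gene name -> list of Ct values over the same samples
    Returns {gene: mean SD of pairwise Delta-Ct with every other gene};
    lower values indicate more stable expression.
    """
    genes = list(ct)
    score = {}
    for g in genes:
        sds = [np.std(np.asarray(ct[g]) - np.asarray(ct[h]), ddof=1)
               for h in genes if h != g]
        score[g] = float(np.mean(sds))
    return score

# Toy Ct values for three hypothetical candidates over four samples
ct = {"Actin":   [20.1, 20.4, 20.2, 20.6],
      "TIP41":   [24.0, 24.2, 24.1, 24.4],
      "Histone": [18.0, 19.5, 18.3, 20.1]}
print(sorted(delta_ct_stability(ct).items(), key=lambda kv: kv[1]))
```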

  10. Use of new T-cell-based cell lines expressing two luciferase reporters for accurately evaluating susceptibility to anti-human immunodeficiency virus type 1 drugs.

    PubMed

    Chiba-Mizutani, Tomoko; Miura, Hideka; Matsuda, Masakazu; Matsuda, Zene; Yokomaku, Yoshiyuki; Miyauchi, Kosuke; Nishizawa, Masako; Yamamoto, Naoki; Sugiura, Wataru

    2007-02-01

    Two new T-cell-based reporter cell lines were established to measure human immunodeficiency virus type 1 (HIV-1) infectivity. One cell line naturally expresses CD4 and CXCR4, making it susceptible to X4-tropic viruses; the other, into which a CCR5 expression vector was introduced, is susceptible to both X4- and R5-tropic viruses. The reporter cells were constructed by transfecting the human T-cell line HPB-Ma, which demonstrates high susceptibility to HIV-1, with genomes expressing two different luciferase reporters: HIV-1 long terminal repeat-driven firefly luciferase and cytomegalovirus promoter-driven Renilla luciferase. Upon HIV infection, the cells expressed firefly luciferase at levels that were highly correlated (r2 = 0.91 to 0.98) with production of the capsid antigen p24. The cells also constitutively expressed Renilla luciferase, which was used to monitor cell numbers and viability. The reliability of the cell lines for two in vitro applications, drug resistance phenotyping and drug screening, was confirmed. As HIV-1 replicated efficiently in these cells, they could be used for multiple-round replication assays as an alternative to a single-cycle replication protocol. Coefficients of variation for drug susceptibility evaluated with the cell lines ranged from 17 to 41%. The new cell lines were beneficial for evaluating antiretroviral drug resistance: firefly luciferase gave a wider dynamic range for evaluating virus infectivity, and the introduction of Renilla luciferase improved assay reproducibility. The cell lines were also beneficial for screening new antiretroviral agents, as false inhibition caused by the cytotoxicity of test compounds was easily detected by monitoring Renilla luciferase activity.

  11. Evaluation of the Oberbeck-Boussinesq Approximation for the numerical simulation of variable-density flow and solute transport in porous media

    NASA Astrophysics Data System (ADS)

    Guevara, Carlos; Graf, Thomas

    2013-04-01

    Subsurface water systems are endangered by salt water intrusion in coastal aquifers, leachate infiltration from waste disposal sites and salt transport in agricultural sites. This leads to situations where a denser fluid overlies a less dense fluid, creating a density gradient. Under certain conditions this density gradient produces instabilities in the form of dense plume fingers that move downwards. This free convection enhances solute transport over larger distances and shorter times. In cases where a significantly large density gradient exists, the effect of free convection on transport is non-negligible, and the assumption of a constant density distribution in space and time is no longer valid. Therefore variable-density flow must be considered. The flow equation and the transport equation govern the numerical modeling of variable-density flow and solute transport. Computer simulation programs mathematically describe variable-density flow using the Oberbeck-Boussinesq Approximation (OBA). Three levels of simplification can be considered, denoted OB1, OB2 and OB3. OB1 is the usually applied simplification, in which variable density is taken into account only in the hydraulic potential. In OB2, variable density is also considered in the flow equation, and in OB3 it is additionally considered in the transport equation. Using the results of a laboratory-scale experiment of variable-density flow and solute transport (Simmons et al., Transp. Porous Media, 2002), it is investigated which level of mathematical accuracy is required to represent the physical experiment most accurately. Differences between the physical and mathematical models are evaluated using quantitative indicators (e.g. mass fluxes, Nusselt number). Results show that OB1 is adequate for small density gradients, whereas OB3 is required for large density gradients.

  12. Numerical Evaluation of Mode 1 Stress Intensity Factor as a Function of Material Orientation For BX-265 Foam Insulation Material

    NASA Technical Reports Server (NTRS)

    Knudsen, Erik; Arakere, Nagaraj K.

    2006-01-01

    Foam, a cellular material, is found all around us; bone and cork are examples of biological cellular materials, and many forms of man-made foam have found practical applications as insulating materials. NASA uses the BX-265 foam insulation material on the external tank (ET) for the Space Shuttle. This is a type of Spray-On Foam Insulation (SOFI), similar to the material used to insulate attics in residential construction. The foam is a good insulator and is very lightweight, making it suitable for space applications. Breakup of segments of this foam insulation on the shuttle ET, impacting the shuttle thermal protection tiles during liftoff, is believed to have caused the Space Shuttle Columbia failure during re-entry. NASA engineers are therefore very interested in understanding the processes that govern the breakup/fracture of this complex material from the shuttle ET. The foam is anisotropic in nature, and the required stress and fracture mechanics analyses must include the effects of direction-dependent material properties. Material testing at NASA MSFC has indicated that the foam can be modeled as a transversely isotropic material. As a first step toward understanding the fracture mechanics of this material, we present a general theoretical and numerical framework for computing stress intensity factors (SIFs) under mixed-mode loading conditions, taking into account the material anisotropy. We present mode I SIFs for middle tension - M(T) - test specimens, using 3D finite element stress analysis (ANSYS) and the FRANC3D fracture analysis software developed by the Cornell Fracture Group. Mode I SIF values are presented for a range of foam material orientations. NASA has also recorded the failure load for various M(T) specimens; for a linear analysis, the mode I SIF scales with the far-field load, which allows us to numerically estimate the mode I fracture toughness of this material. The results represent a quantitative basis for evaluating the strength and
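
    For orientation, the mode I SIF of an isotropic M(T) specimen has a well-known closed form that makes the linear scaling with far-field load explicit; the anisotropic foam case requires the FE/FRANC3D analysis described above, so the sketch below is only the isotropic reference case with hypothetical dimensions.

```python
import numpy as np

def k1_center_crack(sigma, a, W):
    """Mode I SIF for a middle-tension M(T) specimen, isotropic closed form.

    K_I = sigma * sqrt(pi*a) * sqrt(sec(pi*a/W))  (finite-width correction)
    sigma : far-field stress (Pa), a : half crack length (m), W : width (m)
    """
    return sigma * np.sqrt(np.pi * a) / np.sqrt(np.cos(np.pi * a / W))

# Linear scaling with far-field load: doubling sigma doubles K_I, which is
# what lets a measured failure load calibrate the fracture toughness.
for sigma in (0.1e6, 0.2e6):
    k1 = k1_center_crack(sigma, a=0.01, W=0.1)
    print(f"{sigma/1e6:.1f} MPa -> K_I = {k1/1e3:.1f} kPa*sqrt(m)")
```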

  13. Evaluation of a numerical simulation model for a system coupling atmospheric gas, surface water and unsaturated or saturated porous medium.

    PubMed

    Hibi, Yoshihiko; Tomigashi, Akira; Hirose, Masafumi

    2015-12-01

    Numerical simulations that couple flow in a surface fluid with that in a porous medium are useful for examining problems of pollution that involve interactions among the atmosphere, surface water and groundwater, including, for example, saltwater intrusion along coasts. We previously developed a numerical simulation method for simulating a coupled atmospheric gas, surface water, and groundwater system (called the ASG method) that employs a saturation equation for flow in a porous medium; this equation allows both the void fraction of water in the surface system and water saturation in the porous medium to be solved simultaneously. It remained necessary, however, to evaluate how global pressure, including gas pressure, water pressure, and capillary pressure, should be specified at the boundary between the surface and the porous medium. Therefore, in this study, we derived a new equation for global pressure and integrated it into the ASG method. We then simulated water saturation in a porous medium and the void fraction of water in a surface system by the ASG method and reproduced fairly well the results of two column experiments. Next, we simulated water saturation in a porous medium (sand) with a bank, by using both the ASG method and a modified Picard (MP) method. We found only a slight difference in water saturation between the ASG and MP simulations. This result confirmed that the derived equation for global pressure was valid for a porous medium, and that the global pressure value could thus be used with the saturation equation for porous media. Finally, we used the ASG method to simulate a system coupling atmosphere, surface water, and a porous medium (110 m wide and 50 m high) with a trapezoidal bank. The ASG method was able to simulate the complex flow of fluids in this system and the interaction between the porous medium and the surface water or the atmosphere.

  15. Establishment of computerized numerical databases on thermophysical and other properties of molten as well as solid materials and data evaluation and validation for generating recommended reliable reference data

    NASA Technical Reports Server (NTRS)

    Ho, C. Y.

    1993-01-01

    The Center for Information and Numerical Data Analysis and Synthesis (CINDAS) measures and maintains databases on thermophysical, thermoradiative, mechanical, optical, electronic, ablation, and physical properties of materials. Emphasis is on aerospace structural materials, especially composites, and on infrared detector/sensor materials. Within CINDAS, the Department of Defense sponsors several centers at Purdue: the High Temperature Material Information Analysis Center (HTMIAC), the Ceramics Information Analysis Center (CIAC) and the Metals Information Analysis Center (MIAC). The responsibilities of CINDAS are extremely broad, encompassing basic and applied research; measurement of the properties of thin wires and thin foils as well as bulk materials; acquisition and search of the world-wide literature; critical evaluation of data; generation of estimated values to fill data voids; investigation of constitutive, structural, processing, environmental, and rapid heating and loading effects; and dissemination of data. Liquids, gases, molten materials and solids are all considered. The responsibility of maintaining widely used databases includes data evaluation, analysis, correlation, and synthesis. Material property data recorded in the literature are often conflicting, divergent, and subject to large uncertainties. It is admittedly difficult to measure material properties accurately, and both systematic and random errors enter. Some errors result from lack of characterization of the material itself (impurity effects); in some cases the boundary conditions assumed in a theoretical model are not obtained in the experiments; stray heat flows and losses must be accounted for; some experimental methods are inappropriate, and in other cases appropriate methods are carried out with poor technique. Conflicts in data may be resolved by fitting the data to theoretical or empirical models or by correlation in terms of various affecting parameters. Reasons (e.g. phase

  16. Seasonal variation of residence time in spring and groundwater evaluated by CFCs and numerical simulation in mountainous headwater catchment

    NASA Astrophysics Data System (ADS)

    Tsujimura, Maki; Watanabe, Yasuto; Ikeda, Koichi; Yano, Shinjiro; Abe, Yutaka

    2016-04-01

    Headwater catchments in mountainous regions are the most important recharge areas for surface and subsurface waters, and time information on the water is essential for understanding hydrological processes in these catchments. However, few studies have evaluated the spatial and temporal variation of subsurface water residence time in mountainous headwaters, especially those with steep slopes. We investigated the temporal variation of the residence time of spring and groundwater, tracing the hydrological flow processes in mountainous catchments underlain by granite in Yamanashi Prefecture, central Japan. We conducted intensive hydrological monitoring and water sampling of spring, stream and ground waters in high-flow and low-flow seasons from 2008 through 2013 in the River Jingu Watershed, underlain by granite, with an area of approximately 15 km2 and elevations ranging from 950 m to 2000 m. CFCs, stable isotopic ratios of oxygen-18 and deuterium, and inorganic solute concentrations were determined for all water samples. A numerical simulation was also conducted to reproduce the average residence times of the spring and groundwater. The residence time of the spring water estimated from the CFC concentrations ranged from 10 years to 60 years in space within the watershed, and it was higher (older) during the low-flow season and lower (younger) during the high-flow season. We reproduced the seasonal change of the residence time of the spring water by numerical simulation, and the calculated residence time of the spring water and discharge of the stream agreed well with the observed values. The groundwater level was higher during the high-flow season, when the groundwater flowed dominantly through the weathered granite with higher permeability, whereas it was lower during the low-flow season, when the flow was dominantly through the fresh granite with lower permeability. This caused the seasonal variation of the residence time of the spring

  17. Numerical evaluation of community-scale aquifer storage, transfer and recovery technology: A case study from coastal Bangladesh

    NASA Astrophysics Data System (ADS)

    Barker, Jessica L. B.; Hassan, Md. Mahadi; Sultana, Sarmin; Ahmed, Kazi Matin; Robinson, Clare E.

    2016-09-01

    Aquifer storage, transfer and recovery (ASTR) may be an efficient, low-cost water supply technology for rural coastal communities that experience seasonal freshwater scarcity. The feasibility of ASTR as a water supply alternative is being evaluated in communities in south-western Bangladesh where the shallow aquifers are naturally brackish and severe seasonal freshwater scarcity is compounded by frequent extreme weather events. A numerical variable-density groundwater model, first evaluated against data from an existing community-scale ASTR system, was applied to identify the influence of hydrogeological as well as design and operational parameters on system performance. For community-scale systems, it is a delicate balance to achieve acceptable water quality at the extraction well whilst maintaining a high recovery efficiency (RE), as dispersive mixing can dominate relative to the small size of the injected freshwater plume. For the existing ASTR system configuration used in Bangladesh, where the injection head is controlled and the extraction rate is set based on the community water demand, larger aquifer hydraulic conductivity, aquifer depth and injection head improve the water quality (lower total dissolved solids concentration) in the extracted water because of higher injection rates, but the RE is reduced. To support future ASTR system design in similar coastal settings, an improved system configuration was determined and relevant non-dimensional design criteria were identified. Analyses showed that four injection wells distributed around a central single extraction well lead to high RE provided the distance between the injection wells and the extraction well is less than half the theoretical radius of the injected freshwater plume. The theoretical plume radius relative to the aquifer dispersivity is also an important design consideration to ensure adequate system performance. The results presented provide valuable insights into the feasibility and design
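
    The theoretical plume radius referenced above is commonly estimated by idealizing the injected volume as a cylinder occupying the pore space of a confined aquifer; a minimal sketch with hypothetical community-scale numbers follows (this standard estimate is an assumption here, not a formula quoted from the paper).

```python
import numpy as np

def plume_radius(V, B, n):
    """Theoretical radius of an injected freshwater plume, idealized as a
    cylinder filling the pore space of a confined aquifer.

    V : injected volume (m^3), B : aquifer thickness (m), n : porosity (-)
    r = sqrt(V / (pi * B * n))
    """
    return np.sqrt(V / (np.pi * B * n))

# Hypothetical numbers: 2000 m^3 injected into a 10 m thick aquifer with
# porosity 0.3; the design rule suggested above would then place injection
# wells closer than r/2 to the extraction well.
r = plume_radius(2000.0, 10.0, 0.3)
print(f"plume radius ~ {r:.1f} m, well spacing < {r/2:.1f} m")
```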

  18. Small and efficient basis sets for the evaluation of accurate interaction-induced linear and non-linear electric properties in model hydrogen-bonded complexes

    NASA Astrophysics Data System (ADS)

    Baranowska-Łączkowska, Angelika; Fernández, Berta

    2015-11-01

    Interaction-induced electric dipole moments, polarisabilities and first hyperpolarisabilities are investigated in model hydrogen-bonded clusters built of hydrogen fluoride molecules organised in three linear chains parallel to each other. The properties are evaluated within the finite-field approach, using the second-order Møller-Plesset method and the LPol-m (m = ds, dl) and optical rotation prediction (ORP) basis sets. The basis sets and correlation method were selected after a systematic convergence study carried out on the smallest of the complexes, taking as reference properties obtained with Dunning's bases and with the coupled cluster singles and doubles (CCSD) and CCSD including connected triple corrections (CCSD(T)) methods. Results are analysed in terms of many-body and cooperative effects.
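
    The finite-field approach extracts such properties from numerical derivatives of the energy with respect to an applied field; a minimal sketch for a diagonal polarisability component, with a toy quadratic energy function standing in for the ab initio code, is:

```python
import numpy as np

def finite_field_alpha(energy, F=0.001):
    """Diagonal dipole polarizability by the finite-field approach.

    energy : callable returning the electronic energy (a.u.) at field f
    F      : field step (a.u.)
    Central differences of E(F) give alpha = -(d^2 E / dF^2) at F = 0.
    """
    return -(energy(F) - 2.0 * energy(0.0) + energy(-F)) / F**2

# Toy stand-in for an ab initio code: a quadratic E(F) with alpha = 5 a.u.
E = lambda f: -100.0 - 0.5 * 5.0 * f**2
print(finite_field_alpha(E))   # -> 5.0 (a.u.)
```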

  19. Accurate numerical solutions for elastic-plastic models. [LMFBR

    SciTech Connect

    Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.

    1980-03-01

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
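
    For readers unfamiliar with the algorithms compared above, a generic textbook radial-return update for a von Mises model with linear isotropic hardening can be sketched as follows; this is the standard 3D small-strain version, not the report's specific plane-stress implementation.

```python
import numpy as np

def radial_return(stress, dstrain, E=200e9, nu=0.3, sigma_y=250e6, H=1e9, ep=0.0):
    """One radial-return update for von Mises plasticity with linear
    isotropic hardening; stress and strain increment are 3x3 arrays.

    Returns (updated stress, updated equivalent plastic strain).
    """
    G = E / (2 * (1 + nu))                          # shear modulus
    K = E / (3 * (1 - 2 * nu))                      # bulk modulus
    I = np.eye(3)
    de_vol = np.trace(dstrain)
    de_dev = dstrain - de_vol / 3 * I
    s_trial = stress + 2 * G * de_dev + K * de_vol * I   # elastic trial stress
    s_dev = s_trial - np.trace(s_trial) / 3 * I
    q = np.sqrt(1.5 * np.sum(s_dev * s_dev))        # von Mises stress
    f = q - (sigma_y + H * ep)                      # yield function
    if f <= 0.0:
        return s_trial, ep                          # elastic step
    dgamma = f / (3 * G + H)                        # plastic multiplier
    n = 1.5 * s_dev / q                             # flow direction
    return s_trial - 2 * G * dgamma * n, ep + dgamma

stress = np.zeros((3, 3))
dstrain = np.diag([2e-3, -0.6e-3, -0.6e-3])         # a uniaxial-like increment
print(radial_return(stress, dstrain))
```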

  20. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  1. Tillandsia stricta Sol (Bromeliaceae) leaves as monitors of airborne particulate matter-A comparative SEM methods evaluation: Unveiling an accurate and odd HP-SEM method.

    PubMed

    de Oliveira, Martha Lima; de Melo, Edésio José Tenório; Miguens, Flávio Costa

    2016-09-01

    Airborne particulate matter (PM) has been included among the most important air pollutants by governmental environmental agencies and academic researchers. The use of terrestrial plants for monitoring PM has been widely accepted, particularly when coupled with SEM/EDS. Herein, Tillandsia stricta leaves were used as monitors of PM, focusing on a comparative evaluation of Environmental SEM (ESEM) and High-Pressure SEM (HPSEM). In addition, specimens air-dried in a formaldehyde atmosphere (AD/FA) were introduced as an SEM procedure. Hydrated specimen observation by ESEM was the best way to get information from T. stricta leaves. If any artifacts were introduced by AD/FA, they were indiscernible from those caused by critical point drying (CPD). Leaf anatomy was always well preserved. PM density was determined on the adaxial and abaxial leaf epidermis for each of the SEM procedures. Compared with ESEM, particle extraction varied from 0 to 20% in air-dried leaves, while 23-78% of the particles deposited on leaf surfaces were extracted by CPD procedures. ESEM was clearly the best choice among the methods, but morphological artifacts increased as a function of operation time, whereas HPSEM operation time was unlimited. AD/FA avoided the shrinkage observed in air-dried leaves, and particle extraction was low compared with CPD. Structural and particle density results suggest AD/FA as an important methodological approach to air pollution biomonitoring that can be widely used in all electron microscopy labs. This also suggests that previous PM assessments using terrestrial plants as biomonitors and performed by conventional SEM could have underestimated airborne particulate matter concentrations. PMID:27357408

  3. A Rapid and Accurate Method to Evaluate Helicobacter pylori Infection, Clarithromycin Resistance, and CYP2C19 Genotypes Simultaneously From Gastric Juice

    PubMed Central

    Kuo, Chao-Hung; Liu, Chung-Jung; Yang, Ching-Chia; Kuo, Fu-Chen; Hu, Huang-Ming; Shih, Hsiang-Yao; Wu, Meng-Chieh; Chen, Yen-Hsu; Wang, Hui-Min David; Ren, Jian-Lin; Wu, Deng-Chyang; Chang, Lin-Li

    2016-01-01

    Because Helicobacter pylori (H. pylori) can cause gastric carcinogenesis, sufficient information is needed to decide on an appropriate eradication strategy. Many factors affect the efficacy of eradication, including antimicrobial resistance (especially clarithromycin resistance) and CYP2C19 polymorphism. This study surveyed the efficiency of gastric juice for detecting H. pylori infection, clarithromycin resistance, and CYP2C19 polymorphism. Specimens of gastric juice were collected from all patients while they underwent gastroscopy. DNA was extracted from the gastric juice, and ureA and cagA were amplified by polymerase chain reaction (PCR) to detect the presence of H. pylori. By PCR-restriction fragment length polymorphism (PCR-RFLP), the 23S rRNA gene of H. pylori and the host CYP2C19 genotypes were examined. During endoscopy, biopsy specimens were also collected for the rapid urease test, culture, and histology, and blood samples were collected for analysis of CYP2C19 genotypes. We compared the results of the gastric juice tests with the results of the traditional clinical tests. Our results from gastric juice showed that the sensitivity (SEN), specificity (SPE), positive predictive value (PPV), negative predictive value (NPV), and accuracy in detecting H. pylori infection were 92.1% (105/114), 92.9% (143/154), 90.5% (105/116), 94.1% (143/152), and 92.5% (248/268), respectively. The SEN, SPE, PPV, and NPV in detecting clarithromycin resistance were 97.3% (36/37), 91.5% (43/47), 90.0% (36/40), and 97.7% (43/44), respectively. By PCR-RFLP, the consistency of human CYP2C19 gene polymorphism between blood samples and gastric juice was as high as 94.9% (149/157). Gastric juice is thus an effective diagnostic sample for evaluating H. pylori infection, clarithromycin resistance, and host CYP2C19 polymorphism. PMID:27227911
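
    The quoted performance figures follow directly from the 2x2 confusion-matrix counts given in parentheses (for H. pylori detection, 105/114 sensitivity and 143/154 specificity imply TP = 105, FN = 9, TN = 143, FP = 11); a minimal sketch reproducing them:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-performance measures from a 2x2 confusion matrix."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv":         tp / (tp + fp),
            "npv":         tn / (tn + fn),
            "accuracy":    (tp + tn) / (tp + fp + tn + fn)}

# Counts recovered from the H. pylori detection figures quoted above
print(diagnostic_metrics(tp=105, fp=11, tn=143, fn=9))
```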

  4. Evaluation of numerical models by FerryBox and fixed platform in situ data in the southern North Sea

    NASA Astrophysics Data System (ADS)

    Haller, M.; Janssen, F.; Siddorn, J.; Petersen, W.; Dick, S.

    2015-11-01

    For understanding and forecasting of hydrodynamics in coastal regions, numerical models have served as an important tool for many years. In order to assess the model performance, we compared simulations to observational data of water temperature and salinity. Observations were available from FerryBox transects in the southern North Sea and, additionally, from a fixed platform of the MARNET network. More detailed analyses have been made at three different stations, located off the English eastern coast, at the Oyster Ground and in the German Bight. FerryBoxes installed on ships of opportunity (SoO) provide high-frequency surface measurements along selected tracks on a regular basis. The results of two operational hydrodynamic models have been evaluated for two different time periods: BSHcmod v4 (January 2009 to April 2012) and FOAM AMM7 NEMO (April 2011 to April 2012). While they adequately simulate temperature, both models underestimate salinity, especially near the coast in the southern North Sea. Statistical errors differ between the two models and between the measured parameters. The root mean square error (RMSE) of water temperatures amounts to 0.72 °C (BSHcmod v4) and 0.44 °C (AMM7), while for salinity the performance of BSHcmod is slightly better (0.68 compared to 1.1). The study results reveal weaknesses in both models, in terms of variability, absolute levels and limited spatial resolution. Simulation of the transition zone between the coasts and the open sea is still a demanding task for operational modelling. Thus, FerryBox data, combined with other observations with differing temporal and spatial scales, can serve as an invaluable tool not only for model evaluation, but also for model optimization by assimilation of such high-frequency observations.
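
    The RMSE figures quoted above follow from the standard definition; a minimal sketch, with hypothetical FerryBox-style values and observational gaps skipped, is:

```python
import numpy as np

def rmse(model, obs):
    """Root mean square error between model output and observations,
    ignoring gaps (NaNs) in either record."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    ok = ~np.isnan(model) & ~np.isnan(obs)
    return float(np.sqrt(np.mean((model[ok] - obs[ok]) ** 2)))

# Hypothetical comparison: simulated vs measured sea surface temperature (deg C)
sim = [12.1, 12.4, 13.0, 13.6, np.nan, 14.2]
obs = [12.0, 12.9, 12.8, 13.1, 13.5, 14.0]
print(f"RMSE = {rmse(sim, obs):.2f} deg C")
```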

  5. Evaluation of Development of the Mitchell Creek Landslide, B.C., using Remote Sensing, Geomorphological Analysis and Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Stead, D.; Clayton, A.

    2014-12-01

    The Mitchell Creek Landslide in northwestern British Columbia is a large, structurally controlled bedrock instability with an estimated volume of 75 Mm3. It has developed over the last 60 years in response to rapid deglaciation, offering insight into changing conditions at juvenile rock landslides. The landslide is located in altered volcanic and volcaniclastic rocks of the Stikine Terrane at the intersection of two major regional faults. Multiple failure mechanisms have been identified over the 0.8 km2 landslide area, including toppling along steep foliation in the lower landslide and sliding along a well-defined rupture surface in the upper landslide. Geomorphological and engineering geological assessments have been completed at the site to characterize landslide properties and behaviour. Historic aerial photographs have been used to observe changes to the slope since 1956 and have captured the onset of surface deformation. Mapping of morphological and deformation features was undertaken on imagery from 1956, 1972, 1992, and 2010 to evaluate slope processes, failure mechanisms, and damage accumulation within the slope. Natural targets were also used to estimate landslide displacements over the past 60 years; annual movement rates are estimated to range from 0.1 to 0.8 m/yr over the landslide area. Displacement rates have been compared with historic glacial levels in the valley and with modern environmental monitoring. A reconstruction of the pre-failure geometry of the landslide was created from displacement estimates and structural geologic information. Rock mass properties and discontinuity orientations have been assessed from geotechnical investigations carried out between 2008 and 2013. A conceptual model of landslide behaviour and evolution has been developed and evaluated using both continuum and discontinuum numerical modeling techniques.

  6. Numerical and experimental evaluation of Nd:YAG laser welding efficiency in AZ31 magnesium alloy butt joints

    NASA Astrophysics Data System (ADS)

    Scintilla, Leonardo Daniele; Tricarico, Luigi

    2013-02-01

    In this paper, energy aspects related to the efficiency of the laser welding process using a 2 kW Nd:YAG laser were investigated and reported. AZ31B magnesium alloy sheets 3.3 mm thick were butt-welded without filler, using helium and argon as shielding gases. A three-dimensional, semi-stationary finite element model was developed to evaluate the effect of laser power and welding speed on the absorption coefficient and the melting and welding efficiencies. The modeled volumetric heat source took into account a scale factor and the shape factors given by the attenuation of the beam within the workpiece and the beam intensity distribution. The numerical model was calibrated against experimental data on the basis of morphological parameters of the weld bead. Results revealed a good correspondence between the experimental and simulated energy aspects of welding. Considering the results of the mechanical characterization of butt joints obtained previously, the welding conditions were optimized in terms of mechanical properties and energy parameters. The best condition is the lowest laser power combined with the highest welding speed, which corresponds to the lowest heat input given to the joint.
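
    The abstract does not reproduce the source formula; one common form for such a volumetric heat source, Gaussian in radius and exponentially attenuated with depth, with a scale factor for the absorbed power, can be sketched as follows (the functional form and all parameter values are assumptions, not the authors' calibrated model).

```python
import numpy as np

def volumetric_source(r, z, P, eta, r0, beta):
    """Axisymmetric volumetric heat source for keyhole laser welding:
    Gaussian in radius, exponentially attenuated with depth.

    P    : laser power (W), eta : scale factor (absorbed fraction)
    r0   : beam radius (m), beta : attenuation coefficient (1/m)
    Normalized so that the volume integral over a deep workpiece is eta*P.
    """
    q0 = eta * P * beta / (np.pi * r0**2)   # peak power density (W/m^3)
    return q0 * np.exp(-(r / r0) ** 2) * np.exp(-beta * z)

# Heat deposited at the surface centreline vs 1 mm deep, hypothetical values
print(volumetric_source(0.0, 0.0, 2000.0, 0.6, 3e-4, 800.0))
print(volumetric_source(0.0, 1e-3, 2000.0, 0.6, 3e-4, 800.0))
```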

  7. Numerical modeling of the releases of (90)Sr from Fukushima to the ocean: an evaluation of the source term.

    PubMed

    Periáñez, R; Suh, Kyung-Suk; Byung-Il, Min; Casacuberta, N; Masqué, P

    2013-01-01

    A numerical model consisting of a 3D advection/diffusion equation, including uptake/release reactions between water and sediments described in a dynamic way, has been applied to simulate the marine releases of (90)Sr from the Fukushima power plant after the March 2011 tsunami. This is a relevant issue since (90)Sr releases are still occurring. The model used here had previously been applied successfully to simulate (137)Cs releases. Assuming that the temporal trend of the (90)Sr releases was the same as for (137)Cs during the four months after the accident simulated here, the source term could be evaluated, resulting in a total release of 80 TBq of (90)Sr until the end of June, which is in the lower range of previous estimates. Computed vertical profiles of (90)Sr in the water column have been compared with measured ones. The (90)Sr inventories within the model domain have also been calculated for the water column and for bed sediments. The maximum dissolved inventory within the model domain (obtained for April 10th, 2011) amounts to about 58 TBq. Inventories in bed sediments are three orders of magnitude lower than in the water column due to the low reactivity of this radionuclide. (90)Sr/(137)Cs ratios in the ocean have also been calculated and compared with measured values, showing both spatial and temporal variations.
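
    The transport core of such a model is an advection/diffusion solver; a minimal 1D explicit sketch, omitting the dynamic water-sediment uptake/release terms of the full 3D model and using illustrative parameter values, is:

```python
import numpy as np

def advect_diffuse(c, u, D, dx, dt, steps):
    """Explicit 1D advection-diffusion update (upwind advection, central
    diffusion), periodic boundaries. Stable for u*dt/dx <= 1 and
    D*dt/dx**2 <= 0.5."""
    c = np.asarray(c, float).copy()
    for _ in range(steps):
        adv = -u * (c - np.roll(c, 1)) / dx          # upwind (u > 0)
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c += dt * (adv + dif)
    return c

# A pulse of tracer released near the coast, advected and spread alongshore
c0 = np.zeros(200); c0[5] = 1.0
print(advect_diffuse(c0, u=0.1, D=5.0, dx=100.0, dt=50.0, steps=400).max())
```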

  8. Numerical evaluation of the use of granulated coal ash to reduce an oxygen-deficient water mass.

    PubMed

    Yamamoto, Hironori; Yamamoto, Tamiji; Mito, Yugo; Asaoka, Satoshi

    2016-06-15

    Granulated coal ash (GCA), a by-product of coal-fired thermal electric power stations, effectively decreases phosphate and hydrogen sulfide (H2S) concentrations in the pore water of coastal marine sediments. In this study, we developed a coupled pelagic-benthic ecosystem model to evaluate the effectiveness of GCA in diminishing the oxygen-deficient water mass that forms in the coastal bottom water of Hiroshima Bay, Japan. Numerical experiments revealed that the application of GCA was effective in reducing oxygen-deficient water masses, alleviating the summer DO depletion by 0.4-3 mg l(-1). The effect of H2S adsorption onto the GCA lasted for 5.25 years in the case in which GCA was mixed with the sediment in a volume ratio of 1:1. The application of this new GCA-based environmental restoration technique could also make a substantial contribution to forming a recycling-oriented society. PMID:27143344

  9. Numerical Evaluation of P-Multigrid Method for the Solution of Discontinuous Galerkin Discretizations of Diffusive Equations

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Helenbrook, B. T.

    2005-01-01

    This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in implementations of the various combinations of relaxation schemes, discretizations and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods, such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, the approach that mimics the common geometric multigrid implementation is found to be less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
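
    The Jacobi/Gauss-Seidel comparison above is easy to reproduce on a model problem; the sketch below applies pointwise (not block) relaxation to a 1D Poisson matrix, so the rates differ from the DG setting in the paper, but the qualitative ordering is the same.

```python
import numpy as np

def relax(A, b, sweeps, method="jacobi"):
    """Pointwise Jacobi or Gauss-Seidel relaxation for A x = b."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel reads from x (updated in place); Jacobi from x_old
            src = x if method == "gauss-seidel" else x_old
            s = A[i] @ src - A[i, i] * src[i]
            x[i] = (b[i] - s) / A[i, i]
    return x

# 1D Poisson model problem; asymptotically the Gauss-Seidel rate is the
# square of the Jacobi rate here (the 40% figure above is for the DG blocks).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
exact = np.linalg.solve(A, b)
for m in ("jacobi", "gauss-seidel"):
    err = np.linalg.norm(relax(A, b, 200, m) - exact)
    print(f"{m:13s} error after 200 sweeps: {err:.3e}")
```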

  10. A robust and accurate formulation of molecular and colloidal electrostatics

    NASA Astrophysics Data System (ADS)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  12. Numerical simulation of heat exchanger

    SciTech Connect

    Sha, W.T.

    1985-01-01

    Accurate and detailed knowledge of the fluid flow field and thermal distribution inside a heat exchanger becomes invaluable as a large, efficient, and reliable unit is sought. This information is needed to provide proper evaluation of the thermal and structural performance characteristics of a heat exchanger. It is to be noted that an analytical prediction method, when properly validated, will greatly reduce the need for model testing, facilitate interpolating and extrapolating test data, aid in optimizing heat-exchanger design and performance, and provide scaling capability. Thus tremendous savings of cost and time are realized. With the advent of large digital computers and advances in the development of computational fluid mechanics, it has become possible to predict analytically, through numerical solution, the conservation equations of mass, momentum, and energy for both the shellside and tubeside fluids. The numerical modeling technique will be a valuable, cost-effective design tool for development of advanced heat exchangers.

  13. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed, with emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004, and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the approximately 60 million UCAC stars will be derived by combining UCAC astrometry with available early-epoch data, including as yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD), which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, and NPM+SPM plate scan data for astrometry, supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude at the 5 mas level.

  14. Evaluation and Reduction of Machine Difference in Press Working with Utilization of Dedicated Die Support Structure and Numerical Methodologies

    NASA Astrophysics Data System (ADS)

    Ohashi, Takahiro

    2011-05-01

    In this study, die support structures for press working are discussed as a way to address machine-difference problems among presses. The developed multi-point die support structures are used not only to adjust the elastic deformation of a die but also for in-process sensing of die behavior. The structures place multiple support cells between the die and the slide of a press machine. Each cell, known as a 'support unit,' has strain gauges attached to its side and works both as a kind of spring and as a load and displacement sensor. The cell contacts the die through a ball contact and therefore transmits only the vertical force at each support point. Isolating the moment and horizontal load at each support point allows a simple numerical model; it reveals the practical boundary conditions at the support points under actual production. In addition, the moment and horizontal forces at the support points are useless for press working, so isolating these forces helps reduce jolt and the related machine differences. The horizontal distribution of the support units is adjusted to reduce the elastic deformation of the die, which helps reduce jolt, die alignment errors and geometrical errors of the product. The validity of these adjustments is confirmed by evaluating the product shape of a deep drawing and by measuring the jolt between the upper and lower stamping dies. Furthermore, in-process die deformation is analyzed using elastic FE analysis with the actual bearing loads compiled from each support unit.

  15. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms derived and compared with classical analytical methods. In the method, asymptotic expansions are replaced by integrals evaluated numerically. Resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  16. Evaluation of approximate relations for Delta(Q) using a numerical solution of the Boltzmann equation. [collision integral]

    NASA Technical Reports Server (NTRS)

    Nathenson, M.; Baganoff, D.; Yen, S. M.

    1974-01-01

    Data obtained from a numerical solution of the Boltzmann equation for shock-wave structure are used to test the accuracy of accepted approximate expressions for the two moments of the collision integral Delta(Q) for general intermolecular potentials in systems with a large translational nonequilibrium. The accuracy of the numerical scheme is established by comparison of the numerical results with exact expressions in the case of Maxwell molecules. The results are then used in the case of hard-sphere molecules, the inverse-power potential furthest removed from the Maxwell molecule, to gauge the accuracy of the approximate expressions in this domain. A number of approximate solutions are judged in this manner, and the general advantages of the numerical approach in itself are considered.

  17. Numerical evaluation and optimization of depth-oriented temperature measurements for the investigation of thermal influences on groundwater

    NASA Astrophysics Data System (ADS)

    Köhler, Mandy; Haendel, Falk; Epting, Jannis; Binder, Martin; Müller, Matthias; Huggenberger, Peter; Liedl, Rudolf

    2015-04-01

    Increasing groundwater temperatures have been observed in many urban areas, such as London (UK), Tokyo (Japan) and Basel (Switzerland). Elevated groundwater temperatures are a result of different direct and indirect thermal impacts: groundwater heat pumps, building structures located within the groundwater and district heating pipes, among others, are direct impacts, whereas indirect impacts result from the changed climate of urban regions (i.e. reduced wind, diffuse heat sources). Decision makers urgently need a better understanding of the thermal processes within the subsurface as a basis for selecting appropriate measures to curb the ongoing increase of groundwater temperatures. Often, however, only limited temperature data are available, derived from measurements in conventional boreholes that differ in construction and instrumental setup, so the measurements are frequently biased and not comparable. For three locations in the City of Basel, models were implemented to study selected thermal processes and to investigate whether heat-transport models can reproduce the thermal measurements. To overcome the limitations of conventional borehole measurements, high-resolution depth-oriented temperature measurement systems have been introduced in the urban area of Basel. In total, seven devices were installed, with up to 16 sensors located in the unsaturated and saturated zones (0.5 to 1 m separation distance). Measurements were performed over a period of 4 years (ongoing) and provide sufficient data to set up and calibrate high-resolution local numerical heat-transport models that allow selected local thermal processes to be studied. In a first setup, two- and three-dimensional models were created to evaluate the impact of the atmospheric boundary on groundwater temperatures (see EGU Poster EGU2013-9230: Modelling Strategies for the Thermal Management of Shallow Rural and Urban Groundwater bodies). For Basel

  18. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles, but few quantitative studies evaluating the accuracy of these simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Two quantitative indices, the GVE (group velocity error) and MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are then proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. To apply the proposed method to selecting an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method was developed, and the proper element size for different element types and the proper time step for different time integration schemes were selected. These results show that the proposed method is feasible and effective and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
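
    A minimal sketch of the two indices follows, assuming a hypothetical reference group velocity of 3000 m/s and a known propagation distance; the paper's exact definitions may differ in detail.

```python
import numpy as np

def shape_and_position_indices(sim, ref, dt, distance, v_ref=3000.0):
    """Cross-correlation indices for a simulated vs reference waveform.

    MACCC: max absolute normalized cross-correlation coefficient (shape).
    The lag at the maximum gives an arrival-time shift, converted to a
    group-velocity error (GVE) over a known propagation distance (position).
    """
    sim = (sim - sim.mean()) / (sim.std() * len(sim))
    ref = (ref - ref.mean()) / ref.std()
    cc = np.correlate(sim, ref, mode="full")
    k = np.argmax(np.abs(cc))
    maccc = abs(cc[k])
    lag = (k - (len(ref) - 1)) * dt                  # arrival-time shift (s)
    t_ref = distance / v_ref                         # reference arrival time
    gve = abs(distance / (t_ref + lag) - v_ref) / v_ref
    return maccc, gve

t = np.arange(0, 1e-4, 1e-7)
ref = np.sin(2 * np.pi * 2e5 * t) * np.exp(-((t - 5e-5) / 1e-5) ** 2)
sim = np.roll(ref, 20)                               # simulated signal arriving late
print(shape_and_position_indices(sim, ref, dt=1e-7, distance=0.3))
```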

  19. Multiplatform Observations from DYNAMO and Deployment of a Comprehensive Dataset for Numerical Model Evaluation and other Applications

    NASA Astrophysics Data System (ADS)

    Guy, N.; Chen, S. S.; Zhang, C.

    2014-12-01

    A large number of observations were collected during the DYNAMO (Dynamics of the Madden-Julian Oscillation) field campaign in the tropical Indian Ocean during 2011. These data ranged from in-situ measurements of individual hydrometeors to regional precipitation distributions to large-scale precipitation and wind fields. Many scientific findings have been reported in the three years since project completion, leading to a better physical understanding of Madden-Julian Oscillation (MJO) initiation and providing insight into a roadmap toward better predictability. The NOAA P-3 instrumented aircraft was deployed from 11 November to 13 December 2011, embarking on 12 flights. This mobile platform provided high-resolution, high-quality in-situ and remotely sensed observations of the meso-γ- to meso-α-scale environment and offered coherent cloud dynamic and microphysical data in convective cloud systems that surface-based instruments were unable to reach. Measurements included cloud and precipitation microphysical observations via the Particle Measuring System 2D cloud and precipitation probes, aircraft altitude flux measurements, dropsonde vertical thermodynamic profiles, and 3D precipitation and wind field observations from the tail-mounted Doppler X-band weather radar. Existing satellite (infrared, visible, and water vapor) data allowed characterization of the large-scale environment. These comprehensive data have been combined into an easily accessible product, with special attention paid to comparing observations with future numerical simulations. The P-3 and French Falcon aircraft flew a coordinated mission, above and below the melting level, respectively, near Gan Island on 8 December 2011, acquiring coincident cloud microphysical and dynamical data. The Falcon aircraft is instrumented with a vertically pointing W-band radar, with a focus on ice microphysical properties. We present this case in greater detail to show the optimal coincident measurements. Additional

  20. Experimental and numerical evaluation of freely spacing-tunable multiwavelength fiber laser based on two seeding light signals

    SciTech Connect

    Yuan, Yijun; Yao, Yong; Guo, Bo; Yang, Yanfu; Tian, JiaJun; Yi, Miao

    2015-03-28

    A model of a multiwavelength erbium-doped fiber laser (MEFL), which takes into account the impact of fiber attenuation on four-wave mixing (FWM), is proposed. Using this model, we numerically study the output characteristics of the MEFL based on FWM in a dispersion-shifted fiber with two seeding light signals (TSLS) and experimentally verify these characteristics. The numerical and experimental results show that the number of output channels increases with erbium-doped fiber pump power. In addition, the number of output channels can be increased by decreasing the spacing of the TSLS and increasing their power. However, when the power of the TSLS exceeds a critical value, the number of output channels decreases. The numerical simulation results are consistent with experimental observations from the MEFL.

  1. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; da Jornada, Felipe H.; Deslippe, Jack; Yang, Chao; Neaton, Jeffrey B.; Louie, Steven G.

    2015-04-01

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero-broadening limit of the Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs the principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
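
    The core idea, a piecewise-linear numerator integrated analytically against the Cauchy kernel, fits in a few lines. A minimal sketch (a generic 1D illustration under the assumption that the pole x0 does not coincide with a grid node; not the authors' implementation):

    ```python
    import numpy as np

    def pv_integral(x, f, x0):
        """Principal value of int f(t)/(t - x0) dt over [x[0], x[-1]].

        f is replaced by its piecewise-linear interpolant on the grid x and
        each subinterval is integrated analytically, so the singularity is
        handled exactly instead of being hit by a quadrature rule.
        x0 must not coincide with a grid node.
        """
        total = 0.0
        for a, b, fa, fb in zip(x[:-1], x[1:], f[:-1], f[1:]):
            d = (fb - fa) / (b - a)            # slope of the linear piece
            c = fa - d * a                     # intercept: f ~ c + d*t
            # PV of (c + d*t)/(t - x0) over [a, b], valid also when a < x0 < b:
            total += d * (b - a) + (c + d * x0) * (np.log(abs(b - x0)) - np.log(abs(a - x0)))
        return total

    # Check against the exact PV of 1/(t - 0.3) on [0, 1]: log(0.7/0.3)
    x = np.linspace(0.0, 1.0, 1000)
    print(pv_integral(x, np.ones_like(x), 0.3), np.log(0.7 / 0.3))
    ```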

  2. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for the design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
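
    To make the dual-kernel counter-flow structure concrete, the sketch below numerically evaluates a matrix integral of the form M(T) = int_0^T exp(A1*(T-t)) B exp(A2*t) dt, in which one kernel runs forward and the other backward across the sample period. This is an illustrative constant-coefficient evaluation via composite Simpson quadrature, not one of the three algorithms derived in the paper.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def dual_kernel_convolution(A1, B, A2, T, n=200):
        """Evaluate M(T) = int_0^T expm(A1*(T-t)) @ B @ expm(A2*t) dt
        by composite Simpson quadrature (n subintervals, n even)."""
        ts = np.linspace(0.0, T, n + 1)
        vals = np.array([expm(A1 * (T - t)) @ B @ expm(A2 * t) for t in ts])
        w = np.ones(n + 1)
        w[1:-1:2], w[2:-1:2] = 4.0, 2.0                  # Simpson weights
        return (T / n) / 3.0 * np.einsum('k,kij->ij', w, vals)

    # Toy usage: two 2x2 subsystems in tandem over one sample period T = 0.01 s
    A1 = np.array([[0.0, 1.0], [-4.0, -0.4]])
    A2 = np.array([[-1.0, 0.0], [0.0, -2.0]])
    B = np.eye(2)
    print(dual_kernel_convolution(A1, B, A2, 0.01))
    ```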

  3. Evaluation of the Faraday angle by numerical methods and comparison with the Tore Supra and JET polarimeter electronics.

    PubMed

    Brault, C; Gil, C; Boboc, A; Spuig, P

    2011-04-01

    On the Tore Supra tokamak, a far-infrared polarimeter diagnostic has been routinely used for diagnosing the current density by measuring the Faraday rotation angle. High measurement precision is needed to correctly reconstruct the current profile. To reach this precision, the electronics used to compute the phase and the amplitude of the detected signals must be resilient to noise in the measurement. In this article, the analogue cards' response to noise coming from the detectors, and its impact on the Faraday angle measurements, is analyzed, and we present numerical methods to calculate the phase and the amplitude. These methods have been validated using real signals acquired in Tore Supra and JET experiments, and they have been developed for real-time use in the future numerical cards that will replace the present Tore Supra analogue ones. PMID:21678660
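
    Electronics of this kind must recover the phase and amplitude of a modulated carrier from noisy detector signals. The sketch below shows one standard numerical approach, single-bin (I/Q) demodulation at a known carrier frequency; it is a generic illustration with an assumed 100 kHz carrier, not the algorithm implemented on the Tore Supra cards.

    ```python
    import numpy as np

    def phase_amplitude(signal, fs, f_carrier):
        """Estimate carrier amplitude and phase by I/Q demodulation:
        project the signal onto in-phase and quadrature references."""
        t = np.arange(len(signal)) / fs
        i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_carrier * t))  # in-phase
        q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_carrier * t))  # quadrature
        return np.hypot(i, q), np.arctan2(-q, i)

    # Noisy test signal: amplitude 1.3, phase 0.5 rad, assumed 100 kHz carrier
    rng = np.random.default_rng(0)
    fs, fc = 1e6, 1e5
    t = np.arange(10000) / fs
    sig = 1.3 * np.cos(2 * np.pi * fc * t + 0.5) + 0.05 * rng.standard_normal(t.size)
    print(phase_amplitude(sig, fs, fc))   # ~ (1.3, 0.5)
    ```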

  4. Performance comparison of background-oriented schlieren and fringe deflection in temperature measurement: part I. Numerical evaluation

    NASA Astrophysics Data System (ADS)

    Blanco, Alan; Barrientos, Bernardino; Mares, Carlos

    2016-05-01

    Numerical comparisons of temperature measurement through background-oriented schlieren (BOS) and fringe deflection (FD) are presented. Both techniques are based on ray deflection and on the comparison of two different states of a region of observation. A background image displayed on a screen is used in both techniques: for BOS, randomly located spots, and for FD, sinusoidal straight fringes. When a phase object is introduced into the layout, these spatial structures undergo displacements that are proportional to the gradient of the change in refractive index. These displacement fields are calculated through digital correlation in BOS and by means of the Fourier phase-extraction method in FD. Numerical simulations that model a flame issuing from a gas nozzle are presented. The results show that FD achieves slightly higher accuracy for images that either contain relatively high temperature gradients or show low contrast.
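
    The Fourier phase-extraction step used in FD can be sketched compactly: isolate the fringe-carrier lobe of the spectrum, inverse-transform to a complex fringe signal, and unwrap the phase difference between the distorted and reference patterns; dividing by the carrier wavenumber converts phase shift to fringe displacement. A minimal 1D illustration (synthetic fringes with an assumed 16-pixel carrier period; a real FD analysis works on 2D images):

    ```python
    import numpy as np

    def fringe_displacement(ref, dist, carrier_freq):
        """1D Fourier phase extraction: displacement (in pixels) of the
        distorted fringe pattern relative to the reference pattern."""
        n = len(ref)
        freqs = np.fft.fftfreq(n)
        # Band-pass filter keeping only the positive carrier lobe
        keep = (freqs > 0.5 * carrier_freq) & (freqs < 1.5 * carrier_freq)

        def analytic(sig):
            return np.fft.ifft(np.fft.fft(sig) * keep)   # complex fringe signal

        dphi = np.unwrap(np.angle(analytic(dist) * np.conj(analytic(ref))))
        return dphi / (2 * np.pi * carrier_freq)          # phase -> displacement

    x = np.arange(1024)
    f0 = 1 / 16.0                                   # carrier: 16-pixel period
    shift = 0.8 * np.exp(-((x - 512) / 80.0) ** 2)  # smooth displacement field
    ref = np.cos(2 * np.pi * f0 * x)
    dist = np.cos(2 * np.pi * f0 * (x + shift))
    print(fringe_displacement(ref, dist, f0)[448:576:16])  # ~ local `shift` values
    ```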

  6. A numerical model for CO effect evaluation in HT-PEMFCs: Part 2 - Application to different membranes

    NASA Astrophysics Data System (ADS)

    Cozzolino, R.; Chiappini, D.; Tribioli, L.

    2016-06-01

    In this paper, an in-house numerical model of a high temperature polymer electrolyte membrane fuel cell is presented. In particular, we focus on the impact of CO poisoning on fuel cell performance and its influence on electrochemical modelling. More specifically, the aim of this work is to demonstrate the effectiveness of our zero-dimensional electrochemical model of HT-PEMFCs by comparing numerical and experimental results obtained from two different commercial membrane electrode assemblies: the first is based on polybenzimidazole (PBI) doped with phosphoric acid, while the second uses a PBI electrolyte with aromatic polyether polymers/copolymers bearing pyridine units, likewise doped with H3PO4. The analysis has been carried out considering both the effect of CO poisoning and the operating temperature for the two membranes mentioned above.

  7. Numerical evaluation of passive control of shock wave/boundary layer interaction on NACA0012 airfoil using jagged wall

    NASA Astrophysics Data System (ADS)

    Dehghan Manshadi, Mojtaba; Rabani, Ramin

    2016-09-01

    Shock formation due to flow compressibility and its interaction with boundary layers has adverse effects on aerodynamic characteristics, such as drag increase and flow separation. The objective of this paper is to appraise the practicability of weakening shock waves, and hence reducing the wave drag, in the transonic flight regime using a two-dimensional jagged wall, and thereby to identify an appropriate jagged wall shape for future experimental study. Different shapes of the jagged wall, including rectangular, circular, and triangular shapes, were employed. The numerical method was validated against experimental and numerical studies of transonic flow over the NACA0012 airfoil, and the results presented here closely match previous experimental and numerical results. The impact of parameters, including the shape and the length-to-spacing ratio of the jagged wall, on aerodynamic forces and the flow field was studied. The results revealed that applying the jagged wall method to the upper surface of an airfoil changes the shock structure significantly and disintegrates it, which in turn leads to a decrease in wave drag. It was also found that the maximum decrease in drag coefficient, around 17%, occurs with the triangular shape, while the maximum increase in aerodynamic efficiency (lift-to-drag ratio), around 10%, occurs with the rectangular shape at an angle of attack of 2.26°.

  8. Numerical simulation of tests for the evaluation of the performance of the reinforced concrete slabs strengthening by FRCM

    NASA Astrophysics Data System (ADS)

    Anania, Laura; Badalá, Antonio; D'Agata, Giuseppe

    2016-01-01

    In this work, attention is focused on the numerical simulation of experimental bending tests carried out on a total of six reinforced concrete (r.c.) plates, aimed at providing a basic understanding of their performance when strengthened by Fiber Reinforced Cementitious Matrix (FRCM) composites. Three of these were used as control specimens. The numerical simulation was carried out with the LUSAS software. A good correlation between the FE results and the test data was found, both in load-deformation behavior and in failure load. This demonstrates that the applied strengthening system provides an enhancement 2.5 times greater than the unreinforced case. Greater energy dissipation ability and residual load-bearing capacity make the proposed system very useful in retrofitting as well as in the strengthening of bridge structures. Based on the validation of the FE results in bending, the numerical analysis was also extended to characterize the behavior of this strengthening system in tension.

  9. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  10. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  11. Application of 2D numerical model to unsteady performance evaluation of vertical-axis tidal current turbine

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Qu, Hengliang; Shi, Hongda; Hu, Gexing; Hyun, Beom-Soo

    2016-09-01

    Tidal current energy is renewable and sustainable, a promising alternative energy resource for the future electricity supply. The straight-bladed vertical-axis turbine is regarded as a useful tool to capture tidal current energy, especially under low-speed conditions. A 2D unsteady numerical model based on Ansys-Fluent 12.0 is established to conduct the numerical simulation, which is validated by the corresponding experimental data. For the unsteady calculations, the SST turbulence model, a mesh of 2×10^5 elements, and a 0.01 s time step are selected. Detailed contours of the velocity distributions around the rotor blade foils are provided for flow field analysis. The tip speed ratio (TSR) determines the azimuth angle at which the torque peak appears, which occurs once per blade in a single revolution. It is also found that simply increasing the incident flow velocity does not improve the turbine performance accordingly. The peaks of the averaged power and torque coefficients appear at TSRs of 2.1 and 1.8, respectively. Furthermore, several shapes of duct augmentation are proposed to improve the turbine performance by contracting the flow path gradually from the open mouth of the duct to the rotor. The duct augmentation significantly enhances the power and torque output, and the elliptic shape enables the best performance of the turbine. The numerical results demonstrate the capability of the present 2D model for unsteady hydrodynamics and operating performance analysis of the vertical-axis tidal stream turbine.

  12. Numerical indices based on circulating tumor DNA for the evaluation of therapeutic response and disease progression in lung cancer patients

    PubMed Central

    Kato, Kikuya; Uchida, Junji; Kukita, Yoji; Kumagai, Toru; Nishino, Kazumi; Inoue, Takako; Kimura, Madoka; Oba, Shigeyuki; Imamura, Fumio

    2016-01-01

    Monitoring of disease/therapeutic conditions is an important application of circulating tumor DNA (ctDNA). We devised numerical indices, based on ctDNA dynamics, for therapeutic response and disease progression. 52 lung cancer patients subjected to EGFR-TKI treatment were prospectively enrolled, and ctDNA levels represented by the activating and T790M mutations were measured using deep sequencing. Typically, ctDNA levels decreased sharply upon initiation of EGFR-TKI; however, this did not occur in progressive disease (PD) cases. All 3 PD cases at initiation of EGFR-TKI were separated from the other 27 cases in a two-dimensional space generated by the ratio of the ctDNA levels before and after therapy initiation (mutation allele ratio in therapy, MART) and the average ctDNA level. For responses to various agents after disease progression, PD/stable disease cases were separated from partial response cases using MART (accuracy, 94.7%; 95% CI, 73.5–100). For disease progression, the initiation of ctDNA elevation (initial positive point) was compared with the onset of objective disease progression. In 11 out of 28 eligible patients, both occurred within a ±100-day range, suggesting detection of the same change in disease condition. Our numerical indices have potential applicability in clinical practice, pending confirmation in designed prospective studies. PMID:27381430
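
    The MART index itself is simple arithmetic: the ctDNA mutation-allele level after therapy initiation divided by the level before it. A toy sketch (the values and the single cutoff are invented for illustration; the paper separates cases in a two-dimensional space of MART and average ctDNA level, not by one threshold):

    ```python
    def mart(pre_level, post_level):
        """Mutation Allele Ratio in Therapy: ctDNA level after therapy
        initiation divided by the level before initiation."""
        return post_level / pre_level

    # Invented example values (mutant-allele abundance, arbitrary units)
    patients = {"pt01": (820.0, 12.0),    # sharp drop: response-like
                "pt02": (640.0, 590.0)}   # little change: progression-like
    for pid, (pre, post) in patients.items():
        m = mart(pre, post)
        label = "PD/SD-like" if m > 0.5 else "PR-like"   # illustrative cutoff only
        print(f"{pid}: MART = {m:.3f} ({label})")
    ```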

  13. Numerical evaluation of moiré pattern in touch sensor module with electrode mesh structure in oblique view

    NASA Astrophysics Data System (ADS)

    Pournoury, M.; Zamiri, A.; Kim, T. Y.; Yurlov, V.; Oh, K.

    2016-03-01

    Capacitive touch sensor screens with metal materials have recently become qualified as substitutes for ITO; however, several obstacles still have to be solved. One of the most important issues is the moiré phenomenon. The visibility problem of the metal mesh in a touch sensor module (TSM) is considered numerically in this paper. Based on the human-eye contrast sensitivity function (CSF), the moiré pattern of the TSM electrode mesh structure is simulated with MATLAB software for an 8 inch screen display in oblique view. The standard deviation of the moiré generated by the superposition of the electrode mesh and the screen image is calculated to find the optimal parameters that provide the minimum moiré visibility. To create the screen pixel array and mesh electrode, a rectangular function is used. The filtered image, in the frequency domain, is obtained by multiplying the Fourier transform of the finite mesh pattern (the product of screen pixels and mesh electrode) with the calculated CSF for three different observer distances (L=200, 300 and 400 mm). It is observed that the discrepancy between analytical and numerical results is less than 0.6% for a 400 mm viewer distance. Moreover, in the case of oblique view, because the thickness of the finite film between the mesh electrodes and the screen is taken into account, different points of minimum standard deviation of the moiré pattern are predicted compared to normal view.
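
    The visibility calculation reduces to: superpose the two periodic structures, low-pass filter with an eye-response function, and take the standard deviation of the result. A minimal 1D sketch (a Gaussian filter standing in for the CSF, with invented grating periods; the actual simulation uses 2D patterns and a proper CSF):

    ```python
    import numpy as np

    x = np.arange(16384) * 1e-3                 # position in mm (1 um sampling)
    screen = (np.sin(2 * np.pi * x / 0.110) > 0).astype(float)  # 110 um pixels
    mesh = (np.sin(2 * np.pi * x / 0.105) > 0).astype(float)    # 105 um mesh
    combined = screen * mesh                    # superposition -> moire beat

    # Stand-in for the CSF: Gaussian low-pass that removes the fine gratings
    # but keeps the low-frequency moire beat (period ~2.31 mm here).
    spec = np.fft.rfft(combined - combined.mean())
    f = np.fft.rfftfreq(x.size, d=1e-3)         # spatial frequency, cycles/mm
    spec *= np.exp(-(f / 2.0) ** 2)             # crude eye filter
    moire = np.fft.irfft(spec, n=x.size)

    print("moire visibility (std):", moire.std())
    ```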

  14. Numerical modelling of a bromide-polysulphide redox flow battery. Part 2: Evaluation of a utility-scale system

    NASA Astrophysics Data System (ADS)

    Scamman, Daniel P.; Reade, Gavin W.; Roberts, Edward P. L.

    Numerical modelling of redox flow battery (RFB) systems allows the technical and commercial performance of different designs to be predicted without costly lab, pilot and full-scale testing. A numerical model of a redox flow battery was used in conjunction with a simple cost model incorporating capital and operating costs to predict the technical and commercial performance of a 120 MWh/15 MW utility-scale polysulphide-bromine (PSB) storage plant for arbitrage applications. Based on 2006 prices, the system was predicted to make a net loss of 0.45 p kWh^-1 at an optimum current density of 500 A m^-2 and an energy efficiency of 64%. The system was predicted to become economic for arbitrage (assuming no further costs were incurred) if the rate constants of both electrolytes could be increased to 10^-5 m s^-1, for example by using a suitable (low cost) electrocatalyst. The economic viability was found to be strongly sensitive to the costs of the electrochemical cells and the electrical energy price differential.

  15. Evaluation of ground-penetrating radar to detect free-phase hydrocarbons in fractured rocks - Results of numerical modeling and physical experiments

    USGS Publications Warehouse

    Lane, J.W.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.

    2000-01-01

    The suitability of common-offset ground-penetrating radar (GPR) to detect free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and the polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity than those created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate that it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections; nevertheless, subsurface heterogeneity, antenna coupling changes, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties, as demonstrated by the numerical and experimental results, suggests the potential of using GPR methods as a monitoring tool. GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
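
    The amplitude and polarity contrasts described above follow directly from the normal-incidence reflection coefficient between two low-loss dielectrics, R = (sqrt(eps1) - sqrt(eps2)) / (sqrt(eps1) + sqrt(eps2)). A quick sketch with typical relative permittivities (the values for rock, air, hydrocarbon and water are assumed round numbers; thin-layer interference modifies amplitudes but not the polarity logic):

    ```python
    import numpy as np

    def reflection_coefficient(eps1, eps2):
        """Normal-incidence EM reflection coefficient between two
        low-loss dielectrics with relative permittivities eps1, eps2."""
        n1, n2 = np.sqrt(eps1), np.sqrt(eps2)
        return (n1 - n2) / (n1 + n2)

    eps_rock = 6.0   # assumed value for the host rock
    for fill, eps in [("air", 1.0), ("hydrocarbon", 2.0), ("water", 80.0)]:
        r = reflection_coefficient(eps_rock, eps)
        print(f"rock -> {fill:12s} R = {r:+.3f}")
    # Water-filled fractures give a large, polarity-reversed reflection,
    # while air- and hydrocarbon-filled fractures give weaker reflections
    # of the same polarity, consistent with the observations above.
    ```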

  16. Graphical arterial blood gas visualization tool supports rapid and accurate data interpretation.

    PubMed

    Doig, Alexa K; Albert, Robert W; Syroid, Noah D; Moon, Shaun; Agutter, Jim A

    2011-04-01

    A visualization tool that integrates numeric information from an arterial blood gas report with novel graphics was designed for the purpose of promoting rapid and accurate interpretation of acid-base data. A study compared data interpretation performance when arterial blood gas results were presented in a traditional numerical list versus the graphical visualization tool. Critical-care nurses (n = 15) and nursing students (n = 15) were significantly more accurate in identifying acid-base states and assessing trends in acid-base data when using the graphical visualization tool. Critical-care nurses and nursing students using traditional numerical data had an average accuracy of 69% and 74%, respectively. Using the visualization tool, average accuracy improved to 83% for critical-care nurses and 93% for nursing students. Analysis of response times demonstrated that the visualization tool might help nurses overcome the "speed/accuracy trade-off" during high-stress situations when rapid decisions must be rendered. Perceived mental workload was significantly reduced for nursing students when they used the graphical visualization tool. In this study, the effects of implementing the graphical visualization were greater for nursing students than for critical-care nurses, which may indicate that the experienced nurses needed more training and use of the new technology prior to testing to show similar gains. Results of the objective and subjective evaluations support the integration of this graphical visualization tool into clinical environments that require accurate and timely interpretation of arterial blood gas data.

  17. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory-measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects; however, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided the flowrate is sufficiently high and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  18. Scaling the fractional advective-dispersive equation for numerical evaluation of microbial dynamics in confined geometries with sticky boundaries

    SciTech Connect

    Parashar, R.; Cushman, J.H.

    2008-06-20

    Microbial motility is often characterized by 'run and tumble' behavior, which consists of bacteria making sequences of runs followed by tumbles (random changes in direction). As a superset of Brownian motion, Levy motion seems to describe such a motility pattern. The Eulerian (Fokker-Planck) equation describing these motions is similar to the classical advection-diffusion equation except that the order of the highest derivative is fractional, α ∈ (0, 2]. The Lagrangian equation, driven by a Levy measure with drift, is stochastic and employed to numerically explore the dynamics of microbes in a flow cell with sticky boundaries. The Eulerian equation is used to non-dimensionalize parameters. The amount of sorbed time on the boundaries is modeled as a random variable that can vary over a wide range of values. Salient features of the first passage time are studied with respect to the scaled parameters.
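
    A minimal Lagrangian sketch of this setup follows: alpha-stable (Levy) increments with drift drive a walker along a channel with sticky walls, and the first passage time to a downstream plane is recorded. The channel geometry, step scale and sorbed-time distribution are invented for illustration (SciPy's levy_stable provides the stable increments; alpha = 2 recovers Brownian motion):

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    def first_passage_time(alpha, L=10.0, W=1.0, drift=0.05, seed=0, max_steps=20000):
        """Time for a Levy walker to first cross x = L in a channel of width W
        whose sticky walls hold the walker for a random sorbed time."""
        rng = np.random.default_rng(seed)
        # Pre-draw symmetric alpha-stable increments (beta = 0) for speed
        dx = 0.02 * levy_stable.rvs(alpha, 0.0, size=max_steps, random_state=rng)
        dy = 0.02 * levy_stable.rvs(alpha, 0.0, size=max_steps, random_state=rng)
        x, y, t = 0.0, W / 2.0, 0.0
        for k in range(max_steps):
            x += drift + dx[k]
            y += dy[k]
            t += 1.0
            if y <= 0.0 or y >= W:            # sticky boundary: sorb, then release
                t += rng.exponential(5.0)     # sorbed time (assumed distribution)
                y = min(max(y, 0.0), W)
            if x >= L:
                return t                      # first passage past the outlet
        return np.inf                         # no exit within max_steps

    print([first_passage_time(a, seed=1) for a in (1.5, 2.0)])
    ```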

  19. Numerical and experimental evaluation of laser forming process for the shape correction in ultra high strength steels

    SciTech Connect

    Song, J. H.; Lee, J.; Lee, S.; Kim, E. Z.; Lee, N. K.; Lee, G. A.; Park, S. J.; Chu, A.

    2013-12-16

    In this paper, the laser forming characteristics of an ultra high strength steel with an ultimate strength of 1200 MPa are investigated numerically and experimentally. FE simulation is conducted to identify the deformation response and to characterize the effect of laser power, beam diameter and scanning speed on the bending angle of a square sheet part. The thermo-mechanical behavior during the straight-line heating process is presented in terms of temperature, stress and strain. An experimental setup including a fiber laser with a maximum mean power of 3.0 kW is used in the experiments. The results of this work show that the laser power and scanning speed can be readily adjusted, by controlling the line energy, for a bending operation on CP1180 steel sheets.

  1. Evaluation of Soft Tissue Sarcoma Tumors Electrical Conductivity Anisotropy Using Diffusion Tensor Imaging for Numerical Modeling on Electroporation

    PubMed Central

    Ghazikhanlou-sani, K.; Firoozabadi, S. M. P.; Agha-ghazvini, L.; Mahmoodzadeh, H.

    2016-01-01

    Introduction There are many ways to assess the electrical conductivity anisotropy of a tumor. Applying tissue electrical conductivity anisotropy values is crucial in numerical modeling of the electric and thermal field distribution in electroporation treatments. This study aims to calculate tissue electrical conductivity anisotropy in patients with sarcoma tumors using the diffusion tensor imaging technique. Materials and Method A total of 3 subjects were involved in this study. All of the patients had clinically apparent sarcoma tumors at the extremities. The T1, T2 and DTI images were acquired using a 3-Tesla multi-coil, multi-channel MRI system. The fractional anisotropy (FA) maps were computed from the DTI images using the FSL (FMRIB Software Library) software. The 3D matrix of the FA maps of each area (tumor, normal soft tissue and bone/s) was reconstructed, and the anisotropy matrix was calculated from the FA values. Result The mean FA values along the main axis of the sarcoma tumors ranged between 0.475 and 0.690. Under the assumption of isotropic electrical conductivity, the FA value of electrical conductivity along each of the X, Y and Z coordinate axes would equal 0.577. The results showed a mean error band of 20% in electrical conductivity if the electrical conductivity anisotropy is not included in the calculations. The comparison of FA values showed a statistically significant difference between the mean FA value of tumor and normal soft tissues (P<0.05). Conclusion DTI is a feasible technique for the assessment of electrical conductivity anisotropy of tissues. It is crucial to quantify the electrical conductivity anisotropy of tissues for numerical modeling of electroporation treatments. PMID:27672627
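
    Fractional anisotropy is computed voxel-wise from the eigenvalues λ1, λ2, λ3 of the diffusion tensor as FA = sqrt(3/2) · ||λ − mean(λ)|| / ||λ||. A minimal sketch (the generic formula, not the FSL pipeline); note that for a perfectly isotropic tensor each normalized principal component is 1/sqrt(3) ≈ 0.577, the reference value quoted above:

    ```python
    import numpy as np

    def fractional_anisotropy(eigenvalues):
        """FA from the three diffusion-tensor eigenvalues:
        FA = sqrt(3/2) * ||lam - mean(lam)|| / ||lam||, in [0, 1]."""
        lam = np.asarray(eigenvalues, dtype=float)
        return np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / np.linalg.norm(lam)

    print(fractional_anisotropy([1.0, 1.0, 1.0]))   # 0.0, isotropic
    print(fractional_anisotropy([1.7, 0.3, 0.2]))   # ~0.84, strongly anisotropic
    print(1 / np.sqrt(3))                           # 0.577..., isotropic reference
    ```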

  2. An optimal scheme for numerical evaluation of Eshelby tensors and its implementation in a MATLAB package for simulating the motion of viscous ellipsoids in slow flows

    NASA Astrophysics Data System (ADS)

    Qu, Mengmeng; Jiang, Dazhi; Lu, Lucy X.

    2016-11-01

    To address multiscale deformation and fabric development in Earth's ductile lithosphere, micromechanics-based self-consistent homogenization is commonly used to obtain macroscale rheological properties from the properties of constituent elements. The homogenization rests heavily on the solution for an Eshelby viscous inclusion in a linear viscous medium and the extension of that solution to nonlinear viscous materials. The homogenization requires repeated numerical evaluation of Eshelby tensors for the constituent elements and becomes ever more computationally challenging as the elements are deformed to more elongate or flattened shapes. In this paper, we develop an optimal scheme for evaluating Eshelby tensors, using a combination of a product Gaussian quadrature and the Lebedev quadrature. We first establish, through numerical experiments, an empirical relationship between the inclusion shape and the computational time it takes to evaluate its Eshelby tensors. We then use this relationship to develop an optimal scheme for selecting the most efficient quadrature to obtain the Eshelby tensors. The optimal scheme is applicable to general homogenizations. In this paper, it is implemented in a MATLAB package for investigating the evolution of solitary rigid or deformable inclusions and the development of shape preferred orientations in multi-inclusion systems during deformation. The MATLAB package, upgrading an earlier effort written in MathCad, can be downloaded online.
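
    Eshelby-tensor components reduce to integrals of direction-dependent kernels over the unit sphere, and the integrand develops sharp ridges as the inclusion becomes more elongate or flattened, which is why the required quadrature density grows with shape distortion. The sketch below shows a product-Gaussian spherical rule (Gauss-Legendre in cos θ times uniform points in φ) applied to a generic shape-sharpened kernel; the kernel and the refinement loop are illustrative, not the authors' MATLAB package:

    ```python
    import numpy as np

    def sphere_quadrature(kernel, n_theta, n_phi):
        """Integrate kernel(x, y, z) over the unit sphere with a product rule:
        Gauss-Legendre nodes in cos(theta) times uniform points in phi."""
        u, w = np.polynomial.legendre.leggauss(n_theta)   # u = cos(theta)
        phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
        s = np.sqrt(1.0 - u ** 2)
        X, Y = np.outer(s, np.cos(phi)), np.outer(s, np.sin(phi))
        Z = np.outer(u, np.ones_like(phi))
        return (2 * np.pi / n_phi) * np.einsum('i,ij->', w, kernel(X, Y, Z))

    # A kernel sharpened by an elongated inclusion with semi-axes (a, b, c);
    # for a = b = c = 1 the kernel is 1 and the integral is 4*pi ~ 12.566.
    a, b, c = 20.0, 1.0, 1.0                    # aspect ratio 20 (illustrative)
    kern = lambda x, y, z: 1.0 / ((x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2) ** 1.5

    # The grid must grow with the aspect ratio before the result converges
    for n in (16, 64, 256):
        print(n, sphere_quadrature(kern, n, 2 * n))
    ```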

  3. A high order accurate difference scheme for complex flow fields

    SciTech Connect

    Dexun Fu; Yanwen Ma

    1997-06-01

    A high order accurate finite difference method for direct numerical simulation of coherent structure in the mixing layers is presented. The reason for oscillation production in numerical solutions is analyzed. It is caused by a nonuniform group velocity of wavepackets. A method of group velocity control for the improvement of the shock resolution is presented. In numerical simulation the fifth-order accurate upwind compact difference relation is used to approximate the derivatives in the convection terms of the compressible N-S equations, a sixth-order accurate symmetric compact difference relation is used to approximate the viscous terms, and a three-stage R-K method is used to advance in time. In order to improve the shock resolution the scheme is reconstructed with the method of diffusion analogy which is used to control the group velocity of wavepackets. 18 refs., 12 figs., 1 tab.

  4. Subjective evaluation of the combined influence of satellite temperature sounding data and increased model resolution on numerical weather forecasting

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Halem, M.; Ghil, M.

    1979-01-01

    The present evaluation is concerned with (1) the significance of prognostic differences resulting from the inclusion of satellite-derived temperature soundings, (2) how specific differences between the SAT and NOSAT prognoses evolve, and (3) a comparison of two experiments using the Goddard Laboratory for Atmospheric Sciences general circulation model. The subjective evaluation indicates that the beneficial impact of sounding data is enhanced with increased resolution. It is suggested that satellite sounding data possess valuable information content which at times can correct gross analysis errors in data-sparse regions.

  5. The NHLBI-Sponsored Consortium for preclinicAl assESsment of cARdioprotective Therapies (CAESAR): A New Paradigm for Rigorous, Accurate, and Reproducible Evaluation of Putative Infarct-Sparing Interventions in Mice, Rabbits, and Pigs

    PubMed Central

    Jones, Steven P.; Tang, Xian-Liang; Guo, Yiru; Steenbergen, Charles; Lefer, David J.; Kukreja, Rakesh C.; Kong, Maiying; Li, Qianhong; Bhushan, Shashi; Zhu, Xiaoping; Du, Junjie; Nong, Yibing; Stowers, Heather L.; Kondo, Kazuhisa; Hunt, Gregory N.; Goodchild, Traci T.; Orr, Adam; Chang, Carlos C.; Ockaili, Ramzi; Salloum, Fadi N.; Bolli, Roberto

    2014-01-01

    Rationale Despite four decades of intense effort and substantial financial investment, the cardioprotection field has failed to deliver a single drug that effectively reduces myocardial infarct size in patients. A major reason is insufficient rigor and reproducibility in preclinical studies. Objective To develop a multicenter randomized controlled trial (RCT)-like infrastructure to conduct rigorous and reproducible preclinical evaluation of cardioprotective therapies. Methods and Results With NHLBI support, we established the Consortium for preclinicAl assESsment of cARdioprotective therapies (CAESAR), based on the principles of randomization, investigator blinding, a priori sample size determination and exclusion criteria, appropriate statistical analyses, and assessment of reproducibility. To validate CAESAR, we tested the ability of ischemic preconditioning (IPC) to reduce infarct size in three species (at two sites/species): mice (n=22-25/group), rabbits (n=11-12/group), and pigs (n=13/group). During this validation phase, i) we established protocols that gave similar results between Centers and confirmed that IPC significantly reduced infarct size in all species, and ii) we successfully established a multi-center structure to support CAESAR’s operations, including two surgical Centers for each species, a Pathology Core (to assess infarct size), a Biomarker Core (to measure plasma cardiac troponin levels), and a Data Coordinating Center – all with the oversight of an external Protocol Review and Monitoring Committee. Conclusions CAESAR is operational, generates reproducible results, can detect cardioprotection, and provides a mechanism for assessing potential infarct-sparing therapies with a level of rigor analogous to multicenter RCTs. This is a revolutionary new approach to cardioprotection. Importantly, we provide state-of-the-art, detailed protocols (“CAESAR protocols”) for measuring infarct size in mice, rabbits, and pigs in a manner that is

  6. Photon migration through fetal head in utero using continuous wave, near infrared spectroscopy: development and evaluation of experimental and numerical models

    NASA Astrophysics Data System (ADS)

    Vishnoi, Gargi; Hielscher, Andreas H.; Ramanujam, Nirmala; Chance, Britton

    2000-04-01

    In this work, experimental tissue phantoms and numerical models were developed to estimate photon migration through the fetal head in utero. The tissue phantoms incorporate a fetal head within an amniotic fluid sac surrounded by a maternal tissue layer. A continuous wave, dual-wavelength (λ = 760 and 850 nm) spectrometer was employed to make near-infrared measurements on the tissue phantoms for various source-detector separations, fetal-head positions, and fetal-head optical properties. In addition, numerical simulations of photon propagation were performed with finite-difference algorithms that provide solutions to the equation of radiative transfer as well as the diffusion equation. The simulations were compared with measurements on tissue phantoms to determine the best numerical model to describe photon migration through the fetal head in utero. Evaluation of the results indicates that tissue phantoms in which the contact between the fetal head and the uterine wall is uniform best simulate the fetal head in utero for near-term pregnancies. Furthermore, we found that maximum sensitivity to the head can be achieved if the source of the probe is positioned directly above the fetal head. By optimizing the source-detector separation, the signal originating from photons that have traveled through the fetal head can be drastically increased.

  7. A numerical algorithm to evaluate the transient response for a synchronous scanning streak camera using a time-domain Baum-Liu-Tesche equation

    NASA Astrophysics Data System (ADS)

    Pei, Chengquan; Tian, Jinshou; Wu, Shengli; He, Jiai; Liu, Zhen

    2016-10-01

    The transient response has a great influence on the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC, then derived the Baum-Liu-Tesche (BLT) equation in the frequency domain, and from it derived its transient counterpart. The circuit parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, and a novel numerical method was then used to enforce the continuity equation. Several numerical simulations were conducted to verify the proposed method. The computed results were compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the agreement was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used, with slight modifications, to calculate the transient response, and the error can be controlled by the computer program. The results showed that the transient voltage was up to 1000 V and the transient current approximately 10 A, so protective measures should be taken to improve the electromagnetic compatibility.

  8. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  9. Storm and fair-weather driven sediment-transport within Poverty Bay, New Zealand, evaluated using coupled numerical models

    NASA Astrophysics Data System (ADS)

    Bever, Aaron J.; Harris, Courtney K.

    2014-09-01

    The Waipaoa River Sedimentary System in New Zealand, a focus site of the MARGINS Source-to-Sink program, contains both a terrestrial and marine component. Poverty Bay serves as the interface between the fluvial and oceanic portions of this dispersal system. This study used a three-dimensional hydrodynamic and sediment-transport numerical model, the Regional Ocean Modeling System (ROMS), coupled to the Simulated WAves Nearshore (SWAN) wave model to investigate sediment-transport dynamics within Poverty Bay and the mechanisms by which sediment travels from the Waipaoa River to the continental shelf. Two sets of model calculations were analyzed; the first represented a winter storm season, January-September, 2006; and the second an approximately 40 year recurrence interval storm that occurred on 21-23 October 2005. Model results indicated that hydrodynamics and sediment-transport pathways within Poverty Bay differed during wet storms that included river runoff and locally generated waves, compared to dry storms driven by oceanic swell. During wet storms the model estimated significant deposition within Poverty Bay, although much of the discharged sediment was exported from the Bay during the discharge pulse. Later resuspension events generated by Southern Ocean swell reworked and modified the initial deposit, providing subsequent pulses of sediment from the Bay to the continental shelf. In this manner, transit through Poverty Bay modified the input fluvial signal, so that the sediment characteristics and timing of export to the continental shelf differed from the Waipaoa River discharge. Sensitivity studies showed that feedback mechanisms between sediment-transport, currents, and waves were important within the model calculations.

  10. Evaluation of cloud prediction and determination of critical relative humidity for a mesoscale numerical weather prediction model

    SciTech Connect

    Seaman, N.L.; Guo, Z.; Ackerman, T.P.

    1996-04-01

    Predictions of cloud occurrence and vertical location from the Pennsylvania State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International Satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA, but nested within the 36-km FDDA runs.

  11. The spectral element method on variable resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE PAGES

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.

    2014-06-25

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable resolution grids using the shallow water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution dependent coefficient. For the spectral element method with variable resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that for regions of uniform resolution it matches the traditional constant coefficient hyperviscosity. With the tensor hyperviscosity the large scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
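
    The scale selectivity that makes hyperviscosity attractive is easy to see in one dimension: a fourth-order (nabla^4) operator damps Fourier mode k at the rate ν k^4, so grid-scale modes decay orders of magnitude faster than well-resolved ones. A small spectral illustration (periodic 1D grid with an illustrative coefficient; not the CAM implementation):

    ```python
    import numpy as np

    n, L, nu, dt = 128, 2 * np.pi, 1e-4, 1e-2
    k = np.fft.rfftfreq(n, d=L / n) * 2 * np.pi      # angular wavenumbers 0..n/2

    # Per-step damping for du/dt = -nu * d4u/dx4, exact in Fourier space
    damp = np.exp(-nu * k ** 4 * dt)

    for mode in (2, 16, 64):                          # large, medium, grid scale
        print(f"k={mode:3d}: amplitude retained per step = {damp[mode]:.2e}")
    # k=2 is essentially untouched while k=64 is annihilated: dissipation is
    # confined to the smallest resolved scales, as the abstract describes.
    ```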

  12. The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE PAGES

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.

    2014-11-27

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.

  13. Evaluating the Impacts of NASA/SPoRT Daily Greenness Vegetation Fraction on Land Surface Model and Numerical Weather Forecasts

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Case, Jonathan L.; LaFontaine, Frank J.; Kumar, Sujay V.

    2012-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the SPoRT-MODIS GVF dataset on a land surface model (LSM) apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. In the West, higher latent heat fluxes prevailed, which enhanced the rates of evapotranspiration and soil moisture depletion in the LSM. By late Summer and Autumn, both the average sensible and latent heat fluxes increased in the West as a result of the more rapid soil drying and higher coverage of GVF. The impacts of the SPoRT GVF dataset on NWP was also examined for a single severe weather case study using the Weather Research and Forecasting (WRF) model. Two separate coupled LIS/WRF model simulations were made for the 17 July 2010 severe weather event in the Upper Midwest using the NCEP and SPoRT GVFs, with all other model parameters remaining the same. Based on the sensitivity results, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and

  14. Numerical evaluation of apparent transport parameters from forced-gradient tracer tests in statistically anisotropic heterogeneous formations

    NASA Astrophysics Data System (ADS)

    Pedretti, D.; Fernandez-Garcia, D.; Bolster, D.; Sanchez-Vila, X.; Benson, D.

    2012-04-01

    For risk assessment and adequate decision making regarding remediation strategies in contaminated aquifers, solute fate in the subsurface must be modeled correctly. In practical situations, hydrodynamic transport parameters are obtained by fitting procedures that aim to mathematically reproduce solute breakthrough curves (BTCs) observed in the field during tracer tests. In recent years, several methods have been proposed (type curves, moments, nonlocal formulations), but none of them combine the two main characteristic effects of convergent flow tracer tests (which are the tests most used in practice): the intrinsic non-stationarity of convergent flow to a well and the ubiquitous multiscale hydraulic heterogeneity of geological formations. These two effects have separately been accounted for by many methods that appear to work well. Here, we investigate both effects at the same time via numerical analysis. We focus on the influence that measurable statistical properties of aquifers (such as the variance and the statistical geometry of correlation scales) have on the shape of BTCs measured at the pumping well during convergent flow tracer tests. We built synthetic multi-Gaussian 3D heterogeneous hydraulic conductivity fields with variable statistics. A well is located in the center of the domain to reproduce a forced gradient towards it. Constant-head values are imposed on the boundaries of the domains, which have 251x251x100 cells. Injections of solutes take place by releasing particles at different distances from the well and using a random walk particle tracking scheme with a constant local dispersivity coefficient. The results show that the BTCs partially display the typical anomalous behavior that has commonly been attributed to heterogeneity and connectivity (early and late arrival times of solute differ from those predicted by local formulations). Among the most salient features, the behaviors of BTCs after the peak (the slope

  15. Evaluating the Impacts of NASA/SPoRT Daily Greenness Vegetation Fraction on Land Surface Model and Numerical Weather Forecasts

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Case, Jonathan L.; Molthan, Andrew L.

    2011-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center develops new products and techniques that can be used in operational meteorology. The majority of these products are derived from NASA polar-orbiting satellite imagery from the Earth Observing System (EOS) platforms. One such product is a Greenness Vegetation Fraction (GVF) dataset, which is produced from Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the new SPoRT-MODIS GVF dataset on land surface models apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. The second phase of the project is to examine the impacts of the SPoRT GVF dataset on NWP using the Weather Research and Forecasting (WRF) model. Two separate WRF model simulations were made for individual severe weather case days using the NCEP GVF (control) and SPoRT GVF (experimental), with all other model parameters remaining the same. Based on the sensitivity results in these case studies, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and lower direct surface heating, which typically resulted in lower (higher) predicted 2-m temperatures (2-m dewpoint temperatures). The opposite was true

  16. [A case of pentalogy of Fallot with numerous collaterals evaluated by magnetic resonance imaging: a report of an adult case].

    PubMed

    Fukuzawa, S; Kagaya, A; Kuramoto, M; Kudo, K; Katagiri, M; Ozawa, S; Momata, S; Watanabe, S; Masuda, Y; Inagaki, Y

    1985-12-01

    In this report, the diagnostic value of magnetic resonance imaging (MRI) was compared with that of two-dimensional echocardiography, computed tomography and cardiac catheterization in a patient with pentalogy of Fallot who survived to her fortieth year. The advantages and disadvantages of MRI in diagnosing the present case were as follows: the cardiovascular system, with the exception of the atrial septal defect, was evaluated precisely. Collateral vessels were detected using MRI but could not be detected with the other non-invasive methods. MRI was particularly suitable for imaging the cardiovascular system because of the high contrast between the lower-intensity signal of the blood and the higher-intensity signal of the myocardium and blood vessel walls. Using MRI, the data acquisition time was 1.5 min per section. Gated MRI required more time for data acquisition; however, various oblique tomographic projections and very clear static images could be obtained with it. MRI should be one of the best techniques for diagnosing congenital heart disease.

  17. The use of available potential energy to evaluate the impact of satellite data on numerical model analysis during FGGE

    NASA Technical Reports Server (NTRS)

    Horn, Lyle H.; Koehler, Thomas L.; Whittaker, Linda M.

    1988-01-01

    To evaluate the effect of the FGGE satellite observing system, the following two data sets were compared by examining the available potential energy (APE) and extratropical cyclone activity within the entire global domain during the first Special Observing Period: (1) the complete FGGE IIIb set, which incorporates satellite soundings, and (2) a NOSAT set which incorporates only conventional data. The time series of the daily total APEs indicate that NOSAT values are larger than the FGGE values, although in the Northern Hemisphere the differences are negligible. Analyses of cyclone scale features revealed only minor differences between the Northern Hemisphere FGGE and NOSAT analyses. On the other hand, substantial differences were revealed in the two Southern Hemisphere analyses, where the satellite soundings apparently add detail to the FGGE set.

  18. Evaluation of numerical models by FerryBox and Fixed Platform in-situ data in the southern North Sea

    NASA Astrophysics Data System (ADS)

    Haller, M.; Janssen, F.; Siddorn, J.; Petersen, W.; Dick, S.

    2015-02-01

    FerryBoxes installed on ships of opportunity (SoO) provide high-frequency surface biogeochemical measurements along selected tracks on a regular basis. Within the European FerryBox Community, several FerryBoxes are operated by different institutions. Here we present a comparison of model simulations applied to the North Sea with FerryBox temperature and salinity data from a transect along the southern North Sea, and a more detailed analysis at three positions located off the English east coast, at the Oyster Ground and in the German Bight. In addition to the FerryBox data, data from a Fixed Platform of the MARNET network are used. Two operational hydrodynamic models have been evaluated for different time periods: results of BSHcmod v4 are analysed for 2009-2012, while simulations of FOAM AMM7 NEMO have been available from the MyOcean data base for 2011 and 2012. The simulation of water temperatures is satisfactory; however, the models have limitations, especially near the coast in the southern North Sea, where both underestimate salinity. Statistical errors differ between the models and between the measured parameters: for water temperature, the root mean square error (RMSE) is 0.92 K for BSHcmod v4 but only 0.44 K for AMM7; for salinity, BSHcmod is slightly better than AMM7 (0.98 and 1.1 psu, respectively). The study results reveal weaknesses of both models in terms of variability, absolute levels and limited spatial resolution. In coastal areas, where the simulation of the transition zone between the coasts and the open ocean is still a demanding task for operational modelling, FerryBox data, combined with other observations of differing temporal and spatial scales, serve as an invaluable tool for model evaluation and optimization. The optimization of hydrodynamical models with high-frequency regional datasets, like the FerryBox data, is beneficial for their subsequent integration in ecosystem modelling.
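    The RMSE figures quoted above follow the standard definition; a minimal sketch for co-located model and FerryBox values (the numbers below are invented, not from the study):

    ```python
    import numpy as np

    def rmse(model, obs):
        """Root mean square error between co-located model and observed values."""
        model, obs = np.asarray(model), np.asarray(obs)
        return np.sqrt(np.mean((model - obs) ** 2))

    # Hypothetical co-located sea-surface temperatures (deg C) along a ferry track.
    obs = [10.2, 10.5, 11.0, 11.4, 12.0]
    model = [9.8, 10.1, 10.3, 11.0, 11.2]
    print(f"RMSE: {rmse(model, obs):.2f} K")
    ```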

  19. Evaluation of the occurrence and biodegradation of parabens and halogenated by-products in wastewater by accurate-mass liquid chromatography-quadrupole-time-of-flight-mass spectrometry (LC-QTOF-MS).

    PubMed

    González-Mariño, Iria; Quintana, José Benito; Rodríguez, Isaac; Cela, Rafael

    2011-12-15

    An assessment of the sewage occurrence and biodegradability of seven parabens and three halogenated derivatives of methyl paraben (MeP) is presented. Several wastewater samples were collected at three different wastewater treatment plants (WWTPs) during April and May 2010, concentrated by solid-phase extraction (SPE) and analysed by liquid chromatography-electrospray-quadrupole-time-of-flight mass spectrometry (LC-QTOF-MS). The performance of the QTOF system proved to be comparable to triple-quadrupole instruments in terms of quantitative capabilities, with good linearity (R² > 0.99 in the 5-500 ng mL⁻¹ range), repeatability (RSD < 5.6%) and LODs (0.3-4.0 ng L⁻¹ after SPE). MeP and n-propyl paraben (n-PrP) were the most frequently detected and the most abundant analytes in raw wastewater (0.3-10 μg L⁻¹), in accordance with data reported in the literature and reflecting their wider use in cosmetic formulations. Samples were also screened for potential halogenated by-products of parabens, formed as a result of their reaction with residual chlorine contained in tap water. Monochloro- and dichloro-methyl paraben (ClMeP and Cl₂MeP) were found and quantified in raw wastewater at levels between 0.01 and 0.1 μg L⁻¹. Halogenated derivatives of n-PrP could not be quantified due to the lack of standards; nevertheless, the monochlorinated species (ClPrP) was identified in several samples from the accurate mass/charge ratios (m/z) of its precursor and product ions. Removal efficiencies of parabens and MeP chlorinated by-products in WWTPs exceeded 90%, with the lowest percentages corresponding to the latter species. This trend was confirmed by an activated sludge biodegradation batch test, where non-halogenated parabens had half-lives lower than 4 days, whereas halogenated derivatives of MeP turned out to be more persistent, with half-lives of up to 10 days in the case of dihalogenated derivatives. A further stability test performed with raw wastewater
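    For orientation, the half-life figures above correspond to rate constants under the usual first-order decay model for biodegradation (an illustration; the abstract does not state the fitted model):

    ```latex
    C(t) = C_0\,e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}
    % e.g. t_{1/2} = 4\ \mathrm{d}  \Rightarrow k = \ln(2)/4  \approx 0.173\ \mathrm{d^{-1}}
    %      t_{1/2} = 10\ \mathrm{d} \Rightarrow k = \ln(2)/10 \approx 0.069\ \mathrm{d^{-1}}
    ```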

  20. Calibration and Evaluation of a Flood Forecasting System: Utility of Numerical Weather Prediction Model, Data Assimilation and Satellite-based Rainfall

    NASA Astrophysics Data System (ADS)

    Yucel, Ismail; Onen, Alper; Yilmaz, Koray; Gochis, David

    2015-04-01

    A fully-distributed, multi-physics, multi-scale hydrologic and hydraulic modeling system, WRF-Hydro, is used to assess the potential for skillful flood forecasting based on precipitation inputs derived from the Weather Research and Forecasting (WRF) model and the EUMETSAT Multi-sensor Precipitation Estimates (MPEs). Similar to past studies, it was found that WRF model precipitation forecast errors related to model initial conditions are reduced when the three-dimensional atmospheric data assimilation (3DVAR) scheme in the WRF model simulations is used. The study then undertook a comparative evaluation of the impact of MPE versus WRF precipitation estimates, both with and without data assimilation, in driving WRF-Hydro simulated streamflow. Several flood events that occurred in the Black Sea region were used for testing and evaluation. Following model calibration, the WRF-Hydro system was capable of skillfully reproducing observed flood hydrographs in terms of the volume of the runoff produced and the overall shape of the hydrograph. Streamflow simulation skill was significantly improved for those WRF model simulations where storm precipitation was accurately depicted with respect to timing, location and amount. Accurate streamflow simulations were more evident in WRF model simulations where the 3DVAR scheme was used compared to when it was not used. Because of the substantial dry bias of MPE, streamflow derived using this precipitation product is in general very poor. Overall, root mean squared errors for runoff were reduced by 22.2% when hydrological model calibration was performed with WRF precipitation. Errors were reduced by 36.9% (above uncalibrated model performance) when both WRF model data assimilation and hydrological model calibration were utilized. Our results also indicated that when assimilated precipitation and model calibration are used jointly, the calibrated parameters at the gauged sites could be transferred to ungauged neighboring basins

  1. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness, pitch is one of the most important parameters in the measurement evaluation of an involute gear. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
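    A minimal, idealized sketch of the closure (multi-position) idea: each reading is modelled as artifact deviation plus machine error, and rotating the artifact through all N positions lets the two contributions be separated (noise-free toy model, not either institute's actual procedure):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 18                                   # teeth on the pitch artifact
    artifact = rng.normal(0.0, 1.0, N)       # unknown gear pitch deviations (um)
    artifact -= artifact.mean()              # pitch deviations are zero-mean
    machine = rng.normal(0.0, 0.5, N)        # unknown systematic machine errors

    # Measure N times, rotating the gear by one tooth between runs:
    m = np.array([[machine[j] + artifact[(j + r) % N] for j in range(N)]
                  for r in range(N)])

    machine_est = m.mean(axis=0)             # rotation average cancels the artifact
    artifact_est = m[0] - machine_est        # run r=0 then yields the gear errors
    print(np.allclose(machine_est, machine), np.allclose(artifact_est, artifact))
    ```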

  2. Evaluation of a coupled model for numerical simulation of a multiphase flow system in a porous medium and a surface fluid.

    PubMed

    Hibi, Yoshihiko; Tomigashi, Akira

    2015-09-01

    Numerical simulations that couple flow in a surface fluid with that in a porous medium are useful for examining problems of pollution that involve interactions among atmosphere, water, and groundwater, including saltwater intrusion along coasts. Coupled numerical simulations of such problems must consider both vertical flow between the surface fluid and the porous medium and complicated boundary conditions at their interface. In this study, a numerical simulation method coupling the Navier-Stokes equations for surface fluid flow with the Darcy equations for flow in a porous medium was developed. The basic ability of the coupled model to reproduce (1) the drawdown of a surface fluid observed in square-pillar experiments, using pillars filled with only fluid or with fluid and a porous medium, and (2) the migration of saltwater (salt concentration 0.5%) in the porous medium using the pillar filled with fluid and a porous medium was then evaluated. Simulations that assumed slippery walls reproduced well the observed drawdowns of 10-30 cm when the pillars were filled with packed sand, gas, and water. Moreover, in the simulation of saltwater infiltration by the method developed in this study, velocity was precisely reproduced: the experimental salt concentration in the porous medium after saltwater infiltration was similar to that obtained in the simulation. Furthermore, conditions across the boundary between the porous medium and the surface fluid were satisfied in these numerical simulations of square-pillar experiments in which vertical flow predominated. Similarly, the velocity obtained by the simulation of a coupled surface-fluid and porous-medium system in which horizontal flow predominated satisfied the conditions across the boundary. Finally, it was confirmed that the present simulation method was able to simulate a practical-scale surface fluid and porous medium system. All of these numerical simulations, however, required a great deal of
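    For reference, the two governing systems being coupled, in their standard incompressible forms (notation assumed here, not taken from the paper):

    ```latex
    % Surface fluid (Navier-Stokes):
    \rho\left(\frac{\partial \mathbf{u}}{\partial t}
      + \mathbf{u}\cdot\nabla\mathbf{u}\right)
      = -\nabla p + \mu\nabla^2\mathbf{u} + \rho\mathbf{g},
    \qquad \nabla\cdot\mathbf{u} = 0
    % Porous medium (Darcy):
    \mathbf{q} = -\frac{k}{\mu}\left(\nabla p - \rho\mathbf{g}\right)
    % with continuity of normal flux and pressure imposed at the interface.
    ```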

  3. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.
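    For context, the standard plane-stress Airy-stress-function formulation that effects the reduction from three dimensions to two (symbols assumed here, not the paper's notation):

    ```latex
    \sigma_{xx} = \frac{\partial^2 \Phi}{\partial y^2}, \quad
    \sigma_{yy} = \frac{\partial^2 \Phi}{\partial x^2}, \quad
    \sigma_{xy} = -\frac{\partial^2 \Phi}{\partial x\,\partial y}
    % Compatibility then reduces the thermoelastic problem to a single
    % biharmonic equation with a thermal source term:
    \nabla^4 \Phi + E\,\alpha\,\nabla^2 T = 0
    ```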

  4. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  5. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, when more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages.
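    An illustrative contrast between aggregate and activity-based allocation for two hypothetical treatments (all numbers invented, sketching the kind of difference the article describes):

    ```python
    # Aggregate costing spreads overhead evenly; ABC allocates it in proportion
    # to the nursing time each treatment actually consumes.
    total_overhead = 100_000.0
    treatments = {"A": 800, "B": 200}            # annual volumes
    nursing_minutes = {"A": 20, "B": 90}         # time consumed per treatment

    per_treatment_avg = total_overhead / sum(treatments.values())
    total_minutes = sum(treatments[t] * nursing_minutes[t] for t in treatments)
    for t in treatments:
        abc_cost = total_overhead * nursing_minutes[t] / total_minutes
        print(f"{t}: aggregate {per_treatment_avg:.2f}, ABC {abc_cost:.2f}")
    ```

    Here treatment B consumes far more nursing time per case, so simple averaging understates its true cost, exactly the distortion that can make a capitation bid unprofitable.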

  6. Evaluation of wind-induced internal pressure in low-rise buildings: A multi scale experimental and numerical approach

    NASA Astrophysics Data System (ADS)

    Tecle, Amanuel Sebhatu

    Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular is beyond acceptable. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for wind-resistant design of low-rise buildings and wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e. full-scale at the Wall of Wind (WoW) and small-scale at a Boundary Layer Wind Tunnel (BLWT), together with a Computational Fluid Dynamics (CFD) approach, was adopted. This provided new capability to assess wind pressures realistically on internal volumes ranging from small spaces formed between roof tiles and the deck, to attics, to room partitions. Effects of sudden breaching, existing dominant openings on building envelopes, and compartmentalization of the building interior on the IP were systematically investigated. Results of this research indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction for low-wind-speed testing facilities was necessary; for example, a building without volume correction experienced a response four times faster and exhibited 30--40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated, whereas one-sided openings aggravated, the IP; (iii) larger dominant openings exhibited a higher IP on the building envelope, and an off-center opening on the wall exhibited (30--40%) higher IP than center-located openings; (iv) compartmentalization amplified the intensity of IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. The part of the study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for A_inlet/A_outlet > 1 compared to cases where A_inlet

  7. Hydrogeologic evaluation and numerical simulation of the Death Valley regional ground-water flow system, Nevada and California

    SciTech Connect

    D`Agnese, F.A.; Faunt, C.C.; Turner, A.K.; Hill, M.C.

    1997-12-31

    Yucca Mountain is being studied as a potential site for a high-level radioactive waste repository. In cooperation with the U.S. Department of Energy, the U.S. Geological Survey is evaluating the geologic and hydrologic characteristics of the ground-water system. The study area covers approximately 100,000 square kilometers between lat 35°N., long 115°W. and lat 38°N., long 118°W. and encompasses the Death Valley regional ground-water flow system. Hydrology in the region is a result of both the arid climatic conditions and the complex geology. Ground-water flow is described as dominated by interbasinal flow and may be conceptualized as having two main components: a series of relatively shallow and localized flow paths that are superimposed on deeper regional flow paths. A significant component of the regional ground-water flow is through a thick Paleozoic carbonate rock sequence. Throughout the regional flow system, ground-water flow is probably controlled by extensive and prevalent structural features that result from regional faulting and fracturing. Hydrogeologic investigations over a large and hydrogeologically complex area impose severe demands on data management. This study utilized geographic information systems and geoscientific information systems to develop, store, manipulate, and analyze regional hydrogeologic data sets describing various components of the ground-water flow system.

  8. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when applied to control-law synthesis for systems with densely packed modes, where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  9. Sheet Hydroforming Process Numerical Model Improvement Through Experimental Results Analysis

    NASA Astrophysics Data System (ADS)

    Gabriele, Papadia; Antonio, Del Prete; Alfredo, Anglani

    2010-06-01

    The increasing application of numerical simulation in the metal forming field has helped engineers solve problems one after another and manufacture qualified formed products in less time [1]. Accurate simulation results are fundamental for tooling and product design. The wide application of numerical simulation is encouraging the development of highly accurate simulation procedures to meet industrial requirements. Many factors can influence the final simulation results, and many studies have been carried out on materials [2], yield criteria [3] and plastic deformation [4,5], process parameters [6] and their optimization. In order to develop a reliable hydromechanical deep drawing (HDD) numerical model, the authors carried out specific activities based on the evaluation of the effective stiffness of the blankholder structure [7]. In this paper, after an appropriate tuning phase of the blankholder force distribution, the experimental activity was taken into account to improve the accuracy of the numerical model. In the first phase, the effective capability of the blankholder structure to transfer the applied load given by hydraulic actuators to the blank was explored. This phase ended with the definition of an appropriate subdivision of the blankholder active surface in order to take into account the effective pressure map obtained for the given load configuration. In the second phase, the numerical results obtained with the developed subdivision were compared with the experimental data of the studied model. The numerical model was then improved, finding the best solution for the blankholder force distribution.

  10. Accurate momentum transfer cross section for the attractive Yukawa potential

    SciTech Connect

    Khrapak, S. A.

    2014-04-15

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with numerical results to within ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  11. Numerical analysis of stress distribution in Cu-stabilized GdBCO CC tapes during anvil tests for the evaluation of transverse delamination strength

    NASA Astrophysics Data System (ADS)

    Dizon, John Ryan C.; Gorospe, Alking B.; Shin, Hyung-Seop

    2014-05-01

    Rare-earth-Ba-Cu-O (REBCO) based coated conductors (CCs) are now being used for electric device applications. For coil-based applications such as motors, generators and magnets, the CC tape needs to have robust mechanical strength along both the longitudinal and transverse directions. The CC tape in these coils is subjected to transverse tensile stresses during cool-down and operation, which can result in delamination within and between constituent layers. In this study, in order to explain the behaviour observed in the evaluation of c-axis delamination strength in Cu-stabilized GdBCO CC tapes by anvil tests, numerical analysis of the mechanical stress distribution within the CC tape has been performed. The upper anvil size was varied in the analysis to understand the effect of anvil size on stress distribution within the multilayered CC tape, which is closely related to the delamination strength, delamination mode and delamination sites that were experimentally observed. The numerical simulation results showed that, when an anvil size covering the whole tape width was used, the REBCO coating film was subjected to the largest stress, which could result in low mechanical delamination and electromechanical delamination strengths. Meanwhile, when smaller-sized anvils were used, the copper stabilizer layer would experience the largest stress among all the constituent layers of the CC tape, which could result in higher mechanical and electromechanical delamination strengths, as well as high scattering of both of these delamination strengths. As a whole, the numerical simulation results could explain the damage evolution observed in CC tapes tested under transverse tensile stress, as well as the transverse tensile stress response of the critical current, Ic.

  12. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
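    A generic sketch of a minmod-limited piecewise-linear (MUSCL-type) reconstruction, illustrating how the median function simplifies coding such constraints; this is an illustration of the general technique, not Huynh's exact constraint:

    ```python
    import numpy as np

    def minmod(*args):
        """Zero if the arguments disagree in sign, else the smallest magnitude."""
        if all(a > 0 for a in args):
            return min(args)
        if all(a < 0 for a in args):
            return max(args)
        return 0.0

    # The two-argument minmod is just a median with zero, the kind of
    # simplification the median function affords:
    assert minmod(1.0, 3.0) == np.median([1.0, 3.0, 0.0])

    def limited_slopes(u):
        """Monotonicity-limited slopes for a piecewise-linear reconstruction."""
        u = np.asarray(u, dtype=float)
        s = np.zeros_like(u)
        for i in range(1, len(u) - 1):
            left, right = u[i] - u[i - 1], u[i + 1] - u[i]
            s[i] = minmod(2.0 * left, 2.0 * right, 0.5 * (left + right))
        return s
    ```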

  13. Accurate near-field calculation in the rigorous coupled-wave analysis method

    NASA Astrophysics Data System (ADS)

    Weismann, Martin; Gallagher, Dominic F. G.; Panoiu, Nicolae C.

    2015-12-01

    The rigorous coupled-wave analysis (RCWA) is one of the most successful and widely used methods for modeling periodic optical structures. It yields fast convergence of the electromagnetic far-field and has been adapted to model various optical devices and wave configurations. In this article, we investigate the accuracy with which the electromagnetic near-field can be calculated by using RCWA and explain the observed slow convergence and numerical artifacts from which it suffers, namely unphysical oscillations at material boundaries due to the Gibbs phenomenon. In order to alleviate these shortcomings, we also introduce a mathematical formulation for accurate near-field calculation in RCWA, for one- and two-dimensional straight and slanted diffraction gratings. This accurate near-field computational approach is tested and evaluated for several representative test-structures and configurations in order to illustrate the advantages provided by the proposed modified formulation of the RCWA.
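    The Gibbs phenomenon mentioned above is easy to reproduce in isolation: a truncated Fourier series of a step function overshoots near the jump by roughly 9% of the jump height no matter how many terms are kept (a generic demonstration, unrelated to any specific RCWA code):

    ```python
    import numpy as np

    # Square wave of levels -1/+1 (jump height 2); its Fourier partial sums
    # overshoot the level 1 by ~0.18, i.e. ~9% of the jump, for any truncation.
    x = np.linspace(-np.pi, np.pi, 20001)
    for n_terms in (16, 64, 256):
        partial = sum(np.sin((2 * k + 1) * x) / (2 * k + 1)
                      for k in range(n_terms)) * 4 / np.pi
        print(n_terms, f"max overshoot above 1: {partial.max() - 1:.3f}")
    ```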

  14. Numerical predictions in acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1992-01-01

    Computational Aeroacoustics (CAA) involves the calculation of the sound produced by a flow as well as the underlying flowfield itself from first principles. This paper describes the numerical challenges of CAA and recent research efforts to overcome these challenges. In addition, it includes the benefits of CAA in removing restrictions of linearity, single frequency, constant parameters, low Mach numbers, etc. found in standard acoustic analyses as well as means for evaluating the validity of these numerical approaches. Finally, numerous applications of CAA to both classical as well as modern problems of concern to the aerospace industry are presented.

  15. Numerical predictions in acoustics

    NASA Astrophysics Data System (ADS)

    Hardin, Jay C.

    Computational Aeroacoustics (CAA) involves the calculation of the sound produced by a flow as well as the underlying flowfield itself from first principles. This paper describes the numerical challenges of CAA and recent research efforts to overcome these challenges. In addition, it includes the benefits of CAA in removing restrictions of linearity, single frequency, constant parameters, low Mach numbers, etc. found in standard acoustic analyses as well as means for evaluating the validity of these numerical approaches. Finally, numerous applications of CAA to both classical as well as modern problems of concern to the aerospace industry are presented.

  16. Using numerical analysis to develop and evaluate the method of high temperature sous-vide to soften carrot texture in different-sized packages.

    PubMed

    Hong, Yoon-Ki; Uhm, Joo-Tae; Yoon, Won Byong

    2014-04-01

    The high-temperature sous-vide (HTSV) method was developed to prepare carrots with a soft texture at the appropriate degree of pasteurization. The effect of heating conditions, such as temperature and time, was investigated for various package sizes. Heating temperatures of 70, 80, and 90 °C and heating times of 10 and 20 min were used to evaluate the HTSV method. A 3-dimensional conduction model and numerical simulations were used to estimate the temperature distribution and the rate of heat transfer to samples with various geometries. Four different-sized packages were prepared by stacking carrot sticks of identical size (9.6 × 9.6 × 90 mm) in a row. The sizes of the packages used were as follows: (1) 9.6 × 86.4 × 90, (2) 19.2 × 163.2 × 90, (3) 28.8 × 86.4 × 90, and (4) 38.4 × 86.4 × 90 mm. Although only a moderate change in color (L*, a*, and b*) was observed following HTSV cooking, there was a significant decrease in carrot hardness. The geometry of the package and the heating conditions significantly influenced the degree of pasteurization and the final texture of the carrots. Numerical simulations successfully described the effect of geometry on samples at different heating conditions.
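    A much-simplified 1-D analogue of the conduction modelling described above, for a single 9.6 mm carrot stick heated from both faces (the study itself used a 3-D model; the diffusivity below is an assumed textbook-order value, not the authors'):

    ```python
    import numpy as np

    alpha = 1.4e-7            # assumed thermal diffusivity of carrot, m^2/s
    L, nx = 9.6e-3, 49        # stick thickness and grid points
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha  # stable for the explicit scheme (Fo <= 0.5)

    T = np.full(nx, 20.0)     # initial temperature, deg C
    T[0] = T[-1] = 80.0       # water-bath boundary condition
    t = 0.0
    while t < 600.0:          # 10 min of heating
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[0] = T[-1] = 80.0
        t += dt
    print(f"centre temperature after 10 min: {T[nx // 2]:.1f} C")
    ```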

  17. Using numerical analysis to develop and evaluate the method of high temperature sous-vide to soften carrot texture in different-sized packages.

    PubMed

    Hong, Yoon-Ki; Uhm, Joo-Tae; Yoon, Won Byong

    2014-04-01

    The high-temperature sous-vide (HTSV) method was developed to prepare carrots with a soft texture at the appropriate degree of pasteurization. The effect of heating conditions, such as temperature and time, was investigated for various package sizes. Heating temperatures of 70, 80, and 90 °C and heating times of 10 and 20 min were used to evaluate the HTSV method. A 3-dimensional conduction model and numerical simulations were used to estimate the temperature distribution and the rate of heat transfer to samples with various geometries. Four different-sized packages were prepared by stacking carrot sticks of identical size (9.6 × 9.6 × 90 mm) in a row. The sizes of the packages used were as follows: (1) 9.6 × 86.4 × 90, (2) 19.2 × 163.2 × 90, (3) 28.8 × 86.4 × 90, and (4) 38.4 × 86.4 × 90 mm. Although only a moderate change in color (L*, a*, and b*) was observed following HTSV cooking, there was a significant decrease in carrot hardness. The geometry of the package and the heating conditions significantly influenced the degree of pasteurization and the final texture of the carrots. Numerical simulations successfully described the effect of geometry on samples at different heating conditions.

  18. Theoretical and numerical evaluation of polarimeter using counter-circularly-polarized-probing-laser under the coupling between Faraday and Cotton-Mouton effect

    NASA Astrophysics Data System (ADS)

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2016-04-01

    This study evaluated the effect of the coupling between the Faraday and Cotton-Mouton effects on the measurement signal of the Dodel-Kunz method, which uses counter-circularly-polarized probing lasers to measure the Faraday effect. When the coupling is small (the Faraday effect is dominant and the characteristic eigenmodes are approximately circularly polarized), the measurement signal can be expressed algebraically, and it is shown that the finite effect of the coupling is still significant. When the Faraday effect is not dominant, a numerical calculation is necessary. The numerical calculation under an ITER-like condition (Bt = 5.3 T, Ip = 15 MA, a = 2 m, ne = 10²⁰ m⁻³ and λ = 119 μm) showed that the difference between the pure Faraday rotation and the measurement signal of the Dodel-Kunz method was of the order of one degree, which exceeds the allowable error of the ITER poloidal polarimeter. In conclusion, like other polarimeter techniques, the Dodel-Kunz method is not free from the coupling between the Faraday and Cotton-Mouton effects.
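    For scale, the textbook far-infrared polarimetry expression for the uncoupled Faraday rotation angle (quoted as background; the paper's point is precisely that this simple formula is insufficient when the coupling matters):

    ```latex
    \alpha\,[\mathrm{rad}] \simeq 2.63\times 10^{-13}\,
      \lambda^2 \int n_e\, B_{\parallel}\, \mathrm{d}l
    % \lambda in m, n_e in m^{-3}, B_\parallel in T, l in m.
    % With \lambda = 119\,\mu\mathrm{m}, n_e = 10^{20}\,\mathrm{m^{-3}} and a few
    % metres of path, \alpha reaches tens of degrees, so a coupling error of
    % order one degree is a non-negligible fraction of the signal.
    ```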

  19. A novel fast and flexible technique of radical kinetic behaviour investigation based on pallet for plasma evaluation structure and numerical analysis

    NASA Astrophysics Data System (ADS)

    Malinowski, Arkadiusz; Takeuchi, Takuya; Chen, Shang; Suzuki, Toshiya; Ishikawa, Kenji; Sekine, Makoto; Hori, Masaru; Lukasiak, Lidia; Jakubowski, Andrzej

    2013-07-01

    This paper describes a new, fast, and case-independent technique for sticking coefficient (SC) estimation based on the pallet for plasma evaluation (PAPE) structure and numerical analysis. Our approach does not require a complicated structure, apparatus, or time-consuming measurements, but offers high reliability of data and high flexibility. Thermal analysis is also possible. This technique has been successfully applied to the estimation of the very low SC of hydrogen radicals on chemically amplified ArF 193 nm photoresist (the main goal of this study). The upper bound of our technique has been determined by investigating the SC of fluorine radicals on polysilicon (at elevated temperature). Sources of estimation error and ways to reduce it are also discussed. The results of this study give an insight into the process kinetics; they are not only helpful for better process understanding but may additionally serve as parameters in a phenomenological model development for predictive modelling of etching for ultimate CMOS topography simulation.

  20. Fracture toughness evaluation of 20MnMoNi55 pressure vessel steel in the ductile to brittle transition regime: Experiment & numerical simulations

    NASA Astrophysics Data System (ADS)

    Gopalan, Avinash; Samal, M. K.; Chakravartty, J. K.

    2015-10-01

    In this work, the fracture behaviour of 20MnMoNi55 reactor pressure vessel (RPV) steel in the ductile-to-brittle transition (DBT) regime is characterised. Compact tension (CT) and single-edge notched bend (SENB) specimens of two different sizes were tested in the DBT regime. The reference temperature T0 was evaluated according to the ASTM E1921 standard. The effect of size and geometry on T0 was studied, and T0 was found to be lower for the SENB geometry. In order to understand the fracture behaviour numerically, finite element (FE) simulations were performed using Beremin's model for cleavage and Rousselier's model for ductile failure mechanisms. The simulated fracture behaviour was found to be in good agreement with the experiment.

  1. 3D numerical test objects for the evaluation of a software used for an automatic analysis of a linear accelerator mechanical stability

    NASA Astrophysics Data System (ADS)

    Torfeh, Tarraf; Beaumont, Stéphane; Guédon, Jeanpierre; Benhdech, Yassine

    2010-04-01

    The mechanical stability of a medical LINear ACcelerator (LINAC), particularly the quality of the gantry, collimator and table rotations and the accuracy of the isocenter position, is crucial for the radiation therapy process, especially in stereotactic radiosurgery and in Image Guided Radiation Therapy (IGRT), where this mechanical stability is perturbed by the additional weight of the kV x-ray tube and detector. In this paper, we present a new method to evaluate software used to perform automatic measurement of the "size" (flex map) and the location of the kV and MV isocenters of the linear accelerator. The method consists of developing a complete numerical 3D simulation of a LINAC and physical phantoms in order to produce Electronic Portal Imaging Device (EPID) images that include calibrated distortions of the mechanical movement of the gantry and isocenter misalignments.

  2. Numerical and theoretical evaluations of AC losses for single and infinite numbers of superconductor strips with direct and alternating transport currents in external AC magnetic field

    NASA Astrophysics Data System (ADS)

    Kajikawa, K.; Funaki, K.; Shikimachi, K.; Hirano, N.; Nagaya, S.

    2010-11-01

    AC losses in a superconductor strip are numerically evaluated by means of a finite element method formulated with a current vector potential. The expressions of AC losses in an infinite slab that corresponds to a simple model of infinitely stacked strips are also derived theoretically. It is assumed that the voltage-current characteristics of the superconductors are represented by Bean’s critical state model. The typical operation pattern of a Superconducting Magnetic Energy Storage (SMES) coil with direct and alternating transport currents in an external AC magnetic field is taken into account as the electromagnetic environment for both the single strip and the infinite slab. By using the obtained results of AC losses, the influences of the transport currents on the total losses are discussed quantitatively.
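    For orientation, a standard low-amplitude result from the critical-state literature for exactly this slab geometry, quoted as background rather than taken from the paper's derivations:

    ```latex
    % Bean-model hysteresis loss per cycle and unit volume for an infinite slab
    % in a parallel AC field of amplitude H_m below the penetration field:
    Q = \frac{2\mu_0 H_m^3}{3 H_p}, \qquad H_m \le H_p = J_c\,a
    % a: slab half-thickness; J_c: critical current density.
    ```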

  3. Calibration and evaluation of a flood forecasting system: Utility of numerical weather prediction model, data assimilation and satellite-based rainfall

    NASA Astrophysics Data System (ADS)

    Yucel, I.; Onen, A.; Yilmaz, K. K.; Gochis, D. J.

    2015-04-01

    A fully-distributed, multi-physics, multi-scale hydrologic and hydraulic modeling system, WRF-Hydro, is used to assess the potential for skillful flood forecasting based on precipitation inputs derived from the Weather Research and Forecasting (WRF) model and the EUMETSAT Multi-sensor Precipitation Estimates (MPEs). Similar to past studies, it was found that WRF model precipitation forecast errors related to model initial conditions are reduced when the three-dimensional atmospheric data assimilation (3DVAR) scheme in the WRF model simulations is used. A comparative evaluation of the impact of MPE versus WRF precipitation estimates, both with and without data assimilation, in driving WRF-Hydro simulated streamflow is then made. Ten rainfall-runoff events that occurred in the Black Sea region were used for testing and evaluation. Given the availability of streamflow data across rainfall-runoff events, calibration was performed only on the Bartin sub-basin using two events, and the calibrated parameters were then transferred to the three neighboring ungauged sub-basins in the study area. The remaining events from all sub-basins were then used to evaluate the performance of the WRF-Hydro system with the calibrated parameters. Following model calibration, the WRF-Hydro system was capable of skillfully reproducing observed flood hydrographs in terms of the volume of the runoff produced and the overall shape of the hydrograph. Streamflow simulation skill was significantly improved for those WRF model simulations where storm precipitation was accurately depicted with respect to timing, location and amount. Accurate streamflow simulations were more evident in WRF model simulations where the 3DVAR scheme was used compared to when it was not used. Because of the substantial dry bias of MPE as compared with surface rain gauges, streamflow derived using this precipitation product is in general very poor. Overall, root mean squared errors for runoff were reduced by

  4. Numerical nebulae

    NASA Astrophysics Data System (ADS)

    Rijkhorst, Erik-Jan

    2005-12-01

    The late stages of evolution of stars like our Sun are dominated by several episodes of violent mass loss. Space-based observations of the resulting objects, known as Planetary Nebulae, show a bewildering array of highly symmetric shapes. The interplay between gasdynamics and radiative processes determines the morphological outcome of these objects, and numerical models for astrophysical gasdynamics have to incorporate these effects. This thesis presents new numerical techniques for carrying out high-resolution three-dimensional radiation hydrodynamical simulations. Such calculations require parallelization of computer codes and the use of state-of-the-art supercomputer technology. Numerical models in the context of the shaping of Planetary Nebulae are presented, providing insight into their origin and fate.

  5. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH_f^0 (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal mol⁻¹).
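    An example of how an isodesmic bond-separation reaction yields a heat of formation (a generic illustration with ethanol; this molecule is not necessarily in the paper's test set):

    ```latex
    % The parent is separated into the simplest two-heavy-atom fragments,
    % padded with CH4 so that bond types are conserved:
    \mathrm{CH_3CH_2OH + CH_4 \longrightarrow CH_3CH_3 + CH_3OH}
    % The unknown heat of formation follows from the computed reaction
    % enthalpy and experimental values for the small reference species:
    \Delta H_f^0(\mathrm{C_2H_5OH}) =
      \Delta H_f^0(\mathrm{C_2H_6}) + \Delta H_f^0(\mathrm{CH_3OH})
      - \Delta H_f^0(\mathrm{CH_4}) - \Delta H_{rxn}
    ```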

  6. Numerical Prediction of Cold Season Fog Events over Complex Terrain: the Performance of the WRF Model During MATERHORN-Fog and Early Evaluation

    NASA Astrophysics Data System (ADS)

    Pu, Zhaoxia; Chachere, Catherine N.; Hoch, Sebastian W.; Pardyjak, Eric; Gultepe, Ismail

    2016-08-01

    A field campaign to study cold season fog in complex terrain was conducted as a component of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program from 07 January to 01 February 2015 in Salt Lake City and Heber City, Utah, United States. To support the field campaign, an advanced research version of the Weather Research and Forecasting (WRF) model was used to produce real-time forecasts and model evaluation. This paper summarizes the model performance and preliminary evaluation of the model against the observations. Results indicate that accurately forecasting fog is challenging for the WRF model, which produces large errors in the near-surface variables, such as relative humidity, temperature, and wind fields in the model forecasts. Specifically, compared with observations, the WRF model overpredicted fog events with extended duration in Salt Lake City because it produced higher moisture, lower wind speeds, and colder temperatures near the surface. In contrast, the WRF model missed all fog events in Heber City, as it reproduced lower moisture, higher wind speeds, and warmer temperatures against observations at the near-surface level. The inability of the model to produce proper levels of near-surface atmospheric conditions under fog conditions reflects uncertainties in model physical parameterizations, such as the surface layer, boundary layer, and microphysical schemes.

  7. Numerical Prediction of Cold Season Fog Events over Complex Terrain: the Performance of the WRF Model During MATERHORN-Fog and Early Evaluation

    NASA Astrophysics Data System (ADS)

    Pu, Zhaoxia; Chachere, Catherine N.; Hoch, Sebastian W.; Pardyjak, Eric; Gultepe, Ismail

    2016-09-01

    A field campaign to study cold season fog in complex terrain was conducted as a component of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program from 07 January to 01 February 2015 in Salt Lake City and Heber City, Utah, United States. To support the field campaign, an advanced research version of the Weather Research and Forecasting (WRF) model was used to produce real-time forecasts and model evaluation. This paper summarizes the model performance and preliminary evaluation of the model against the observations. Results indicate that accurately forecasting fog is challenging for the WRF model, which produces large errors in the near-surface variables, such as relative humidity, temperature, and wind fields in the model forecasts. Specifically, compared with observations, the WRF model overpredicted fog events with extended duration in Salt Lake City because it produced higher moisture, lower wind speeds, and colder temperatures near the surface. In contrast, the WRF model missed all fog events in Heber City, as it reproduced lower moisture, higher wind speeds, and warmer temperatures against observations at the near-surface level. The inability of the model to produce proper levels of near-surface atmospheric conditions under fog conditions reflects uncertainties in model physical parameterizations, such as the surface layer, boundary layer, and microphysical schemes.

  8. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  9. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  10. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
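    The three rules named above are easy to compare numerically; a short classroom-style sketch on the test integral ∫₀¹ eˣ dx = e − 1 (a generic illustration, not taken from the article):

    ```python
    import numpy as np

    f, a, b, n = np.exp, 0.0, 1.0, 8          # n panels (even, for Simpson)
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n

    midpoint = h * f((x[:-1] + x[1:]) / 2).sum()
    trapezoid = h * (f(x[:-1]) + f(x[1:])).sum() / 2
    simpson = h / 3 * (f(x[0]) + f(x[-1])
                       + 4 * f(x[1:-1:2]).sum() + 2 * f(x[2:-1:2]).sum())

    exact = np.e - 1
    for name, val in [("midpoint", midpoint), ("trapezoid", trapezoid),
                      ("Simpson", simpson)]:
        print(f"{name:9s} {val:.8f}  error {abs(val - exact):.2e}")
    ```

    Running this shows the expected ordering: Simpson's rule is far more accurate than the midpoint and trapezoidal rules for the same number of panels.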

  11. WAIS-IV reliable digit span is no more accurate than age corrected scaled score as an indicator of invalid performance in a veteran sample undergoing evaluation for mTBI.

    PubMed

    Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A

    2013-01-01

    Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤ 7), revised RDS (≤ 11), and Digit Span age-corrected scaled score (≤ 6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of these indices, the revised RDS and the Digit Span age-corrected scaled score provide the most accurate measures of performance validity among the three.
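    A sketch of the usual RDS scoring rule: sum, across the forward and backward conditions, the longest string length at which both trials were recalled correctly (illustrative data; not the study's dataset):

    ```python
    def longest_reliable(trials):
        """trials: {span_length: (trial1_correct, trial2_correct)}"""
        reliable = [n for n, (t1, t2) in trials.items() if t1 and t2]
        return max(reliable, default=0)

    forward = {3: (True, True), 4: (True, True), 5: (True, False)}
    backward = {2: (True, True), 3: (True, False), 4: (False, False)}
    rds = longest_reliable(forward) + longest_reliable(backward)
    print(f"RDS = {rds}")   # 4 + 2 = 6, at or below the traditional cutoff of 7
    ```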

  12. How accurate are the weather forecasts for Bierun (southern Poland)?

    NASA Astrophysics Data System (ADS)

    Gawor, J.

    2012-04-01

    Weather forecast accuracy has increased in recent times, mainly thanks to significant development of numerical weather prediction models. Despite the improvements, forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students: it combines natural curiosity about everyday weather with scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been undertaken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research an automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models: COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System), developed by the Naval Research Laboratory, Monterey, USA, which runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland, and COSMO (Consortium for Small-scale Modelling), used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. Prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. Verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data, and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for the two considered time ranges? 3) Which precipitation threshold is the most predictable? 4) Why
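    A sketch of the verification measures named above, with invented numbers: bias for a continuous variable, and hit rate / false alarm ratio from a 2x2 contingency table at one precipitation threshold:

    ```python
    # Bias (mean error) for continuous forecasts, e.g. air temperature:
    forecast = [2.1, 0.0, 5.3, 1.0, 0.0]
    observed = [1.8, 0.4, 4.9, 1.6, -0.2]
    bias = sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

    # 2x2 contingency table for events at/above a rain threshold:
    hits, misses, false_alarms = 18, 7, 5
    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    print(f"bias {bias:+.2f}, hit rate {hit_rate:.2f}, FAR {false_alarm_ratio:.2f}")
    ```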

  13. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock-containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock-containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  14. Evaluation of coal-mining impacts using numerical classification of benthic invertebrate data from streams draining a heavily mined basin in eastern Tennessee

    USGS Publications Warehouse

    Bradfield, A.D.

    1986-01-01

    Coal-mining impacts on Smoky Creek, eastern Tennessee, were evaluated using water quality and benthic invertebrate data. Data from mined sites were also compared with the water quality and invertebrate fauna found at Crabapple Branch, an undisturbed stream in a nearby basin. Although differences in water quality constituent concentrations and physical habitat conditions at sampling sites were apparent, commonly used measures of benthic invertebrate sample data such as number of taxa, sample diversity, number of organisms, and biomass were inadequate for determining differences in stream environments. Clustering algorithms were more useful in determining differences in benthic invertebrate community structure and composition. Normal (collections) and inverse (species) analyses based on presence-absence data of species of Ephemeroptera, Plecoptera, and Trichoptera were compared using constancy, fidelity, and relative abundance of species found at stations with similar fauna. These analyses identified differences in benthic community composition due to seasonal variations in invertebrate life histories. When data from a single season were examined, sites on tributary streams generally clustered separately from sites on Smoky Creek. These analyses, together with differences in water quality, stream size, and substrate characteristics between tributary sites and the more degraded main-stem sites, indicated that numerical classification of invertebrate data can provide discharge-independent information useful in rapid evaluations of in-stream environmental conditions.
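    A minimal sketch of numerical classification of presence-absence data of this kind: hierarchical clustering of sites by Jaccard dissimilarity (the 0/1 matrix below is invented; the study's exact algorithms are not specified in the abstract):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Rows are sites, columns are (hypothetical) EPT taxa: 1 = present.
    sites = np.array([[1, 1, 0, 1, 0],
                      [1, 1, 0, 1, 1],
                      [0, 0, 1, 0, 1],
                      [0, 1, 1, 0, 1]], dtype=bool)
    dist = pdist(sites, metric="jaccard")       # pairwise site dissimilarities
    tree = linkage(dist, method="average")      # UPGMA, common in ecology
    print(fcluster(tree, t=2, criterion="maxclust"))  # two-group partition
    ```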

  15. Experimental demonstration and numerical model of a point concentration solar receiver evaluation system using a 30 kWth sun simulator

    NASA Astrophysics Data System (ADS)

    Nakakura, M.; Ohtake, M.; Matsubara, K.; Yoshida, K.; Cho, H. S.; Kodama, T.; Gokon, N.

    2016-05-01

    In 2014, Niigata University and the Institute of Applied Energy developed a point-concentration receiver-evaluation system using a silicon carbide (SiC) honeycomb and a three-dimensional simulation method. The system includes several improvements over its forerunner, including the ability to increase or decrease the power on aperture (POA) and air-mass flow-rate (AMF), and to change the height of the focal surface. This paper focuses on the results of tests using the improved receiver-evaluation system at a focal height of 1600 mm. The maximum outlet air temperature reached 800 K without untoward effects on the system, and receiver efficiency ranged between 45% and 70%, depending on POA. A numerical 3D method of analysis was created at the same time in order to analyze temperature and flow distribution in detail. This simulation, based on the dual-cell approach, reproduced the thermal non-equilibrium between the solid and fluid domains of the receiver material. The most interesting feature of this simulation is that, because it includes upper and lower computational domains, it can be used to analyze the influence of both inward- and outward-flowing receiver materials.

  16. Evaluating the hemodynamical response of a cardiovascular system under support of a continuous flow left ventricular assist device via numerical modeling and simulations.

    PubMed

    Bozkurt, Selim; Safak, Koray K

    2013-01-01

    Dilated cardiomyopathy is the most common type of heart failure and can be characterized by impaired ventricular contractility. Mechanical circulatory support devices were introduced into practice for heart failure patients to bridge the time between the decision to transplant and the actual transplantation, which can be long given the state of donor organ supply. In this study, the hemodynamic response of a cardiovascular system that includes a dilated cardiomyopathic heart under support of a newly developed continuous flow left ventricular assist device, Heart Turcica Axial, was evaluated employing computer simulations. For the evaluation, a numerical model describing the pressure-flow rate relations of Heart Turcica Axial, a cardiovascular system model describing the healthy and pathological hemodynamics, and a baroreflex model regulating the heart rate were used. Heart Turcica Axial was operated at speeds between 8000 rpm and 11,000 rpm, in 1000 rpm increments, to assess the pump performance and the response of the cardiovascular system. The results also give insight into the range of possible operating speeds of Heart Turcica Axial in a clinical application. Based on the findings, the operating speed of Heart Turcica Axial should be between 10,000 rpm and 11,000 rpm.

  17. Evaluation of coal-mining impacts using numerical classification of benthic invertebrate data from streams draining a heavily mined basin in eastern Tennessee

    SciTech Connect

    Bradfield, A.D.

    1986-01-01

    Coal-mining impacts on Smoky Creek, eastern Tennessee, were evaluated using water quality and benthic invertebrate data. Data from mined sites were also compared with the water quality and invertebrate fauna found at Crabapple Branch, an undisturbed stream in a nearby basin. Although differences in water quality constituent concentrations and physical habitat conditions at sampling sites were apparent, commonly used measures of benthic invertebrate sample data such as number of taxa, sample diversity, number of organisms, and biomass were inadequate for determining differences in stream environments. Clustering algorithms were more useful in determining differences in benthic invertebrate community structure and composition. When data from a single season were examined, sites on tributary streams generally clustered separately from sites on Smoky Creek. These analyses, together with differences in water quality, stream size, and substrate characteristics between tributary sites and the more degraded main-stem sites, indicated that numerical classification of invertebrate data can provide discharge-independent information useful in rapid evaluations of in-stream environmental conditions. 25 refs., 14 figs., 22 tabs.

  18. Deterministic evaluation of collapse risk for a decommissioned flooded mine system: 3D numerical modelling of subsidence, roof collapse and impulse water flow.

    NASA Astrophysics Data System (ADS)

    Castellanza, Riccardo; Fernandez Merodo, Josè Antonio; di Prisco, Claudio; Frigerio, Gabriele; Crosta, Giovanni B.; Orlandi, Gianmarco

    2013-04-01

    Aim of the study is the assessment of stability conditions for an abandoned gypsum mine (Bologna, Italy). Mining was carried out until the end of the 1970s by the room-and-pillar method. During mining a karst cave was crossed and karstic waters flowed into the mine. As a consequence, the lower level of the mine is completely flooded and portions of the mining levels show critical conditions and are structurally prone to instability. Buildings and infrastructures are located above the first and second levels, and a large portion of the area below the mine area, just above the Savena river, is urbanised. Gypsum geomechanical properties change over time; water, or even air humidity, dissolves or weakens gypsum pillars, leading progressively to collapse. The mine is located in macro-crystalline gypsum beds belonging to the Messinian Gessoso Solfifera Formation. Selenitic gypsum beds are interlayered with centimetre- to metre-thick shale layers. In order to evaluate the risk related to the collapse of the flooded level (level 3), a deterministic approach based on 3D numerical analyses has been considered. The entire abandoned mine system up to the ground surface has been generated in 3D. The considered critical scenario implies the collapse of the pillars and roof of the flooded level 3. In a first step, a sequential collapse starting from the most critical pillar has been simulated by means of a 3D Finite Element code. This allowed the definition of the subsidence basin at the ground surface and of the interaction with the buildings in terms of ground displacements. The 3D numerical analyses have been performed with an elasto-perfectly-plastic constitutive model. In a second step, the effect of a simultaneous collapse of the entire level 3 has been considered in order to evaluate the risk of flooding due to water outflow from the mine system. Using a 3D CFD (Continuum Fluid Dynamics) finite element code the collapse of level 3 has been simulated and the volume of

  19. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45, and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  20. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
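
    For readers unfamiliar with the measurement itself, a minimal sketch of how a layer thickness is commonly extracted from an AFM height profile follows: average the substrate and flake plateaus on either side of an edge and take the difference. The profile below is synthetic, not data from the paper.

        import numpy as np

        # Synthetic AFM line profile (nm) across a graphene edge.
        x = np.linspace(0.0, 2.0, 200)                  # lateral position (um)
        profile = np.where(x < 1.0, 0.0, 0.8)           # nominal 0.8 nm step
        profile += np.random.normal(0.0, 0.05, x.size)  # instrument noise

        substrate = profile[x < 0.8].mean()   # plateau away from the edge
        flake = profile[x > 1.2].mean()
        print("thickness = %.2f nm" % (flake - substrate))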

  1. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  2. Evaluating the Effect of Rainfall Infiltration on the Slope Stability of T16 tower of Taipei Mao-kong Gondola by Numerical Methods

    NASA Astrophysics Data System (ADS)

    RUNG, J.

    2013-12-01

    In this study, a series of rainfall-stability analyses was performed to simulate the failure mechanism and the function of remediation works on the down slope of the T-16 tower pier of the Mao-Kong gondola (T-16 Slope) on the hillside of Taipei City, using the two-dimensional finite element method. The failure mechanism of T-16 Slope was simulated using the rainfall hyetograph of typhoon Jang-Mi in 2008, based on the field investigation data, monitoring data, soil/rock mechanical testing data and detailed design plots of the remediation works. The numerical procedures and the various input parameters in the analysis were then verified by comparing the numerical results with the field observations. In addition, 48-hour design rainfalls corresponding to 5, 10, 25 and 50 year return periods were prepared using 20 years of rainfall data from the Mu-Zha rainfall observation station, Central Weather Bureau, for rainfall-stability analyses of T-16 Slope to examine the effect of the compound stabilization works on the overall stability of the slope. At T-16 Slope, not counting the longitudinal and transverse drainages on the ground surface, a total of four types of stabilization works were installed to stabilize the slope. From the slope top to the slope toe, the stabilization works of T-16 Slope consist of an RC retaining wall with micro-pile foundation at the upper segment, earth anchors at the upper-middle segment, soil nailing at the middle segment and retaining piles at the lower segment of the slope. The effect of each individual stabilization work on the slope stability under rainfall conditions was examined and evaluated by raising the field groundwater level.

  3. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
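
    The two quantities at the heart of this setup reduce to simple arithmetic, sketched below with hypothetical numbers: mu is the ratio of tangential to normal force from the triaxial load cell, and a Preston-type relation converts pressure and belt speed into a removal rate.

        import numpy as np

        # Hypothetical triaxial load-cell samples (N) during a removal spot.
        f_normal = np.array([5.0, 5.1, 4.9, 5.0])
        f_tangential = np.array([2.1, 2.2, 2.0, 2.1])   # along the belt direction

        mu = f_tangential / f_normal          # dynamic coefficient of friction
        print("mean mu = %.3f" % mu.mean())

        # Preston-type removal rate dh/dt = k_p * P * V, with k_p an empirical
        # Preston coefficient (value below is a placeholder, not UFF data).
        k_p, pressure, velocity = 1.0e-13, 2.0e4, 5.0   # m^2/N, Pa, m/s
        print("removal rate = %.2e m/s" % (k_p * pressure * velocity))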

  4. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

    This paper derives accurate equations for the normal and shear force as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
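
    For context, a minimal LaTeX sketch of the integration the abstract refers to: on a deep shell the element of area varies through the thickness with the curvature, so the force and moment resultants acquire a (1 + z/R) factor that plate theory omits. The notation follows common shell-theory usage and is not necessarily Qatu's exact symbols.

        \begin{equation}
          N_\alpha = \int_{-h/2}^{h/2} \sigma_\alpha \left(1 + \frac{z}{R_\beta}\right) \mathrm{d}z,
          \qquad
          M_\alpha = \int_{-h/2}^{h/2} \sigma_\alpha \left(1 + \frac{z}{R_\beta}\right) z \,\mathrm{d}z,
        \end{equation}

    whereas the corresponding plate equations drop the $z/R_\beta$ term, which is why the two sets of resultants differ for deep, thick shells.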

  5. Why Breast Cancer Risk by the Numbers Is Not Enough: Evaluation of a Decision Aid in Multi-Ethnic, Low-Numerate Women

    PubMed Central

    Yi, Haeseung; Xiao, Tong; Thomas, Parijatham; Aguirre, Alejandra; Smalletz, Cindy; David, Raven; Crew, Katherine

    2015-01-01

    Background: Breast cancer risk assessment including genetic testing can be used to classify people into different risk groups with screening and preventive interventions tailored to the needs of each group, yet the implementation of risk-stratified breast cancer prevention in primary care settings is complex. Objective: To address barriers to breast cancer risk assessment, risk communication, and prevention strategies in primary care settings, we developed a Web-based decision aid, RealRisks, that aims to improve preference-based decision-making for breast cancer prevention, particularly in low-numerate women. Methods: RealRisks incorporates experience-based dynamic interfaces to communicate risk, aimed at reducing inaccurate risk perceptions, with tailored modules on breast cancer risk, genetic testing, and chemoprevention. To begin, participants learn about risk by interacting with two games of experience-based risk interfaces, demonstrating average 5-year and lifetime breast cancer risk. We conducted four focus groups in English-speaking women (age ≥18 years), with a questionnaire completed before and after interacting with the decision aid and a semistructured group discussion. We employed a mixed-methods approach to assess the accuracy of perceived breast cancer risk and the acceptability of RealRisks. The qualitative analysis of the semistructured discussions assessed understanding of risk, risk models, and risk-appropriate prevention strategies. Results: Among 34 participants, mean age was 53.4 years, 62% (21/34) were Hispanic, and 41% (14/34) demonstrated low numeracy. According to the Gail breast cancer risk assessment tool (BCRAT), the mean 5-year and lifetime breast cancer risks were 1.11% (SD 0.77) and 7.46% (SD 2.87), respectively. After interacting with RealRisks, the difference in perceived and estimated breast cancer risk according to BCRAT improved for 5-year risk (P=.008). In the qualitative analysis, we identified potential barriers to adopting risk

  6. Evaluation on double-wall-tube residual stress distribution of sodium-heated steam generator by neutron diffraction and numerical analysis

    SciTech Connect

    Kisohara, N.; Suzuki, H.; Akita, K.; Kasahara, N.

    2012-07-01

    A double-wall-tube is a candidate for the steam generator heat transfer tubes of future sodium fast reactors (SFRs) in Japan, to decrease the possibility of sodium/water reaction. The double-wall-tube consists of an inner tube and an outer tube, which are in mechanical contact; their residual stress maintains the heat transfer across the interface between the inner and outer tubes. During long-term SG operation, the contact stress at the interface gradually decreases due to stress relaxation. This phenomenon might increase the thermal resistance of the interface and degrade the tube heat transfer performance. The contact stress relaxation can be predicted by numerical analysis, and the analysis requires data on the initial residual stress distributions in the tubes. However, unknown initial residual stress distributions prevent precise relaxation evaluation. In order to resolve this issue, a neutron diffraction method was employed to reveal the tri-axial (radial, hoop and longitudinal) initial residual stress distributions in the double-wall-tube. Strain gauges were also used to evaluate the contact stress. The measurement results were analyzed using a JAEA structural computer code to determine the initial residual stress distributions. Based on the stress distributions, the structural computer code predicted the transition of the relaxation and the decrease of the contact stress. The radial and longitudinal temperature distributions in the tubes were input to the structural analysis model. Since the radial thermal expansion difference between the inner (colder) and outer (hotter) tubes reduces the contact stress while the steam pressure inside the tube increases it, the analytical model also took these effects into consideration. It has been concluded that the inner and outer tubes remain in contact with sufficient stress during the plant lifetime, and that effective heat transfer degradation does not occur in the double-wall-tube SG. (authors)

  7. Highly Unstable Double-Diffusive Finger Convection in a Hele-Shaw Cell: Baseline Experimental Data for Evaluation of Numerical Models

    SciTech Connect

    Pringle, Scott E.; Cooper, Clay A.; Glass Jr., Robert J.

    2000-12-21

    An experimental investigation was conducted to study double-diffusive finger convection in a Hele-Shaw cell by layering a sucrose solution over a more-dense sodium chloride (NaCl) solution. The solutal Rayleigh numbers were on the order of 60,000, based upon the height of the cell (25 cm), and the buoyancy ratio was 1.2. A full-field light transmission technique was used to measure a dye tracer dissolved in the NaCl solution. The concentration fields were analyzed to yield the temporal evolution of length scales associated with the vertical and horizontal finger structure as well as the mass flux. These measures show a rapid progression through two early stages to a mature stage and finally a rundown period where mass flux decays rapidly. The data are useful for the development and evaluation of numerical simulators designed to model diffusion and convection of multiple components in porous media. The results are useful for correct formulation at both the process scale (the scale of the experiment) and the effective scale (where the lab-scale processes are averaged up to produce averaged parameters). A fundamental understanding of the fine-scale dynamics of double-diffusive finger convection is necessary in order to successfully parameterize large-scale systems.
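
    To make the two governing dimensionless numbers concrete, the snippet below evaluates one common form of the solutal Rayleigh number and the buoyancy ratio; conventions vary between authors, and all property values here are illustrative rather than taken from the experiment.

        # One common definition of a solutal Rayleigh number (conventions vary).
        g = 9.81            # gravity (m/s^2)
        H = 0.25            # cell height (m), as in the experiment
        nu = 1.0e-6         # kinematic viscosity (m^2/s)
        D = 1.5e-9          # solute diffusivity (m^2/s)
        drho_rho = 3.6e-5   # fractional density difference (hypothetical)

        Ra = g * drho_rho * H**3 / (nu * D)
        print("Ra ~ %.1e" % Ra)

        # Buoyancy ratio: stabilizing (sucrose) over destabilizing (NaCl)
        # density contributions; values slightly above 1 permit fingering.
        drho_sucrose, drho_nacl = 4.3, 3.6   # kg/m^3, hypothetical
        print("buoyancy ratio = %.2f" % (drho_sucrose / drho_nacl))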

  8. Numerical evaluation of subsoil diffusion of 15N labelled denitrification products during employment of the 15N gas flux method in the field

    NASA Astrophysics Data System (ADS)

    Well, Reinhard; Buchen, Caroline; Lewicka-Szczebak, Dominika; Ruoss, Nicolas

    2016-04-01

    Common methods for measuring soil denitrification in situ include monitoring the accumulation of 15N labelled N2 and N2O evolved from the 15N labelled soil nitrate pool in soil surface chambers. Gas diffusion is considered to be the main accumulation process. Because accumulation of the gases decreases concentration gradients between soil and chamber over time, gas production rates are underestimated if calculated from chamber concentrations. Moreover, concentration gradients to the non-labelled subsoil exist, inevitably causing downward diffusion of 15N labelled denitrification products. A numerical model for simulating gas diffusion in soil was used in order to determine the significance of this source of error. Results show that subsoil diffusion of 15N labelled N2 and N2O - and thus potential underestimation of denitrification derived from chamber fluxes - increases with chamber closure time as well as with increasing diffusivity. Simulations based on the range of typical gas diffusivities of unsaturated soils show that the fraction of subsoil diffusion after chamber closure for 1 hour is always significant, with values up to >30% of the total production of 15N labelled N2 and N2O. Field experiments for measuring denitrification with the 15N gas flux method were conducted. The ability of the model to predict the time pattern of gas accumulation was evaluated by comparing measured 15N2 concentrations with simulated values.
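
    A highly simplified version of such a diffusion calculation can be written in a few lines: labelled gas produced in the top cell of a 1D column diffuses upward into a closed chamber and downward toward an unlabelled, effectively absorbing subsoil. Geometry, diffusivity and production rate below are hypothetical.

        import numpy as np

        nz, dz, dt, D = 50, 0.01, 1.0, 5.0e-6   # 0.5 m column; D in m^2/s
        assert D * dt / dz**2 <= 0.5            # explicit-scheme stability

        c = np.zeros(nz)        # 15N-labelled gas concentration profile
        chamber = produced = 0.0
        rate = 1.0e-6           # production rate in the top cell (units/s)

        for _ in range(3600):   # one hour of chamber closure
            c[0] += rate * dt
            produced += rate * dt * dz
            flux_up = D * c[0] / dz              # chamber kept dilute
            chamber += flux_up * dt
            new = c.copy()
            new[1:-1] += D * dt * (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2
            new[0] += D * dt * (c[1] - c[0]) / dz**2 - flux_up * dt / dz
            new[-1] = 0.0                        # unlabelled subsoil sink
            c = new

        lost = 1.0 - (chamber + c.sum() * dz) / produced
        print("fraction lost to the subsoil = %.2f" % lost)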

  9. Evaluation of double-moment representation of ice hydrometeors in bulk microphysical parameterization: comparison between WRF numerical simulations and UND-Citation data during MC3E

    NASA Astrophysics Data System (ADS)

    Pu, Zhaoxia; Lin, Chao

    2015-12-01

    The influence of double-moment representation of warm-rain and ice hydrometeors on numerical simulations of a mesoscale convective system (MCS) over the US Southern Great Plains has been evaluated. The Weather Research and Forecasting (WRF) model is used to simulate the MCS with three different microphysical schemes: the WRF single-moment 6-class (WSM6), WRF double-moment 6-class (WDM6), and Morrison double-moment (MORR) schemes. It is found that the double-moment schemes outperform the single-moment scheme in terms of the simulated structure, life cycle, cloud coverage, precipitation, and microphysical properties of the MCS. However, compared with UND-Citation observations collected during the Midlatitude Continental Convective Clouds Experiment (MC3E), the WRF-simulated ice hydrometeors with all three schemes do not agree well with the observations. Overall results from this study suggest that uncertainty in microphysical schemes could still be a productive area of future research from the perspective of both model improvements and observations.

  10. Differentiated control of web traffic: a numerical analysis

    NASA Astrophysics Data System (ADS)

    Guo, Liang; Matta, Ibrahim

    2002-07-01

    Internet measurements show that the size distribution of Web-based transactions is usually very skewed; a few large requests constitute most of the total traffic. Motivated by the advantages of scheduling algorithms which favor short jobs, we propose to perform differentiated control over Web-based transactions to give preferential service to short web requests. The control is realized through service semantics provided by Internet Traffic Managers, a Diffserv-like architecture. To evaluate the performance of such a control system, it is necessary to have a fast but accurate analytical method. To this end, we model the Internet as a time-shared system and propose a numerical approach which utilizes Kleinrock's conservation law to solve the model. The numerical results are shown to match well those obtained by packet-level simulation, which runs orders of magnitude slower than our numerical method.
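
    The conservation law invoked here has a compact form: for any work-conserving, non-preemptive M/G/1 discipline, the utilization-weighted sum of per-class waiting times is invariant, so favoring short requests necessarily taxes the long ones. A two-class illustration with hypothetical numbers:

        # Kleinrock's conservation law: sum_i rho_i * W_i is fixed across
        # work-conserving, non-preemptive disciplines. Numbers are hypothetical.
        rho_short, rho_long = 0.2, 0.5   # per-class utilizations
        W_fcfs = 4.0                     # common FCFS waiting time (s)

        invariant = (rho_short + rho_long) * W_fcfs

        W_short = 1.0                    # target delay for short web requests
        W_long = (invariant - rho_short * W_short) / rho_long
        print("long-request delay rises to %.2f s" % W_long)  # 5.20 s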

  11. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
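
    The pseudo-free-energy term described in the abstract has a simple closed form, sketched below; the slope and intercept are the values commonly cited for this approach (m ~ 2.6 kcal/mol, b ~ -0.8 kcal/mol), but they should be checked against the paper before use.

        import math

        # Pseudo-free-energy change per nucleotide from SHAPE reactivity:
        # dG_SHAPE(i) = m * ln(reactivity_i + 1) + b  (kcal/mol).
        m, b = 2.6, -0.8   # commonly cited values; verify against the source

        def dg_shape(reactivity):
            return m * math.log(reactivity + 1.0) + b

        for r in (0.0, 0.5, 2.0):   # unreactive -> highly reactive nucleotide
            print("reactivity %.1f -> %+.2f kcal/mol" % (r, dg_shape(r)))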

  12. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  13. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
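
    For orientation, here is the direct O(S)-per-pixel filter that the paper's O(1) algorithm approximates; the shiftable expansion of the Gaussian range kernel itself is beyond a short sketch. The baseline is standard, but the parameter choices below are arbitrary.

        import numpy as np

        def bilateral(img, sigma_s, sigma_r, radius):
            """Direct bilateral filter: O(S) per pixel, S = (2*radius+1)**2."""
            h, w = img.shape
            out = np.zeros((h, w))
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
            pad = np.pad(img, radius, mode="edge").astype(float)
            for i in range(h):
                for j in range(w):
                    patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
                    rng = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
                    wgt = spatial * rng           # combined edge-preserving weight
                    out[i, j] = (wgt * patch).sum() / wgt.sum()
            return out

        print(bilateral(np.eye(8) * 255.0, 2.0, 30.0, 2).round(1))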

  14. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^−12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^−7 cm^−1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  15. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  16. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  17. Numerical solution of boundary-integral equations for molecular electrostatics.

    SciTech Connect

    Bardhan, J.; Mathematics and Computer Science; Rush Univ.

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    38 Pensions, Bonuses, and Veterans' Relief; Rating Disabilities, The Musculoskeletal System; § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    38 Pensions, Bonuses, and Veterans' Relief; Rating Disabilities, The Musculoskeletal System; § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    38 Pensions, Bonuses, and Veterans' Relief; Rating Disabilities, The Musculoskeletal System; § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    38 Pensions, Bonuses, and Veterans' Relief; Rating Disabilities, The Musculoskeletal System; § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    38 Pensions, Bonuses, and Veterans' Relief; Rating Disabilities, The Musculoskeletal System; § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. Evaluation of a landscape evolution model to simulate stream piracies: Insights from multivariable numerical tests using the example of the Meuse basin, France

    NASA Astrophysics Data System (ADS)

    Benaïchouche, Abed; Stab, Olivier; Tessier, Bruno; Cojan, Isabelle

    2016-01-01

    In landscapes dominated by fluvial erosion, the landscape morphology is closely related to the hydrographic network system. In this paper, we investigate the hydrographic network reorganization caused by a headward piracy mechanism between two drainage basins in France, the Meuse and the Moselle. Several piracies occurred in the Meuse basin during the past one million years, and the basin's current characteristics are favorable to new piracies by the Moselle river network. This study evaluates the consequences over the next several million years of a relative lowering of the Moselle River (and thus of its basin) with respect to the Meuse River. The problem is addressed with a numerical modeling approach (landscape evolution model, hereafter LEM) that requires empirical determinations of parameters and threshold values. Classically, fitting of the parameters is based on analysis of the relationship between the slope and the drainage area and is conducted under the hypothesis of equilibrium. Application of this conventional approach to the capture issue yields incomplete results that have been consolidated by a parametric sensitivity analysis. The LEM equations give a six-dimensional parameter space that was explored with over 15,000 simulations using the landscape evolution model GOLEM. The results demonstrate that stream piracies occur in only four locations in the studied reach near the city of Toul. The locations are mainly controlled by the local topography and are model-independent. Nevertheless, the chronology of the captures depends on two parameters: the river concavity (given by the fluvial advection equation) and the hillslope erosion factor. Thus, the simulations lead to three different scenarios that are explained by a phenomenon of exclusion or a string of events.

  4. Evaluation of the Effect of the CO2 Ocean Sequestration on Marine Life in the Sea near Japan Using a Numerical Model

    NASA Astrophysics Data System (ADS)

    Nakamura, Tomoaki; Wada, Akira; Hasegawa, Kazuyuki; Ochiai, Minoru

    CO2 oceanic sequestration is one of the technologies for reducing the discharge of CO2 into the atmosphere, which is considered to cause global warming; it consists of isolating industry-made CO2 gas in the depths of the ocean. This method is expected to enable industry-made CO2 to be separated from the atmosphere for a considerably long period of time. On the other hand, it is also feared that the CO2 injected in the ocean may lower the pH of seawater surrounding the sequestration site and thus adversely affect marine organisms. For evaluating the biological influences, we have studied how to precisely predict the CO2 distribution around the injection site by a numerical simulation method. In previous studies, in which a 2 degree by 2 degree mesh was employed in the simulation, CO2 concentrations tended to be evenly dispersed within the grid, giving lower concentration values. Thus, the calculation accuracy within the area several hundred kilometers from the CO2 injection site was not satisfactory for the biological effect assessment. In the present study, we improved the accuracy of the concentration distribution by refining the computational mesh resolution to 0.2 by 0.2 degrees. With the improved method we obtained a detailed CO2 distribution in waters within several hundred kilometers of the injection site, and clarified that the moving-ship procedure may have a smaller lowered-pH effect on marine organisms than the fixed-point release procedure of CO2 sequestration.

  5. The historical pathway towards more accurate homogenisation

    NASA Astrophysics Data System (ADS)

    Domonkos, P.; Venema, V.; Auer, I.; Mestre, O.; Brunetti, M.

    2012-03-01

    In recent years increasing effort has been devoted to objectively evaluating the efficiency of homogenisation methods for climate data; an important effort was the blind benchmarking performed in COST Action HOME (ES0601). The statistical characteristics of the examined series have a significant impact on the measured efficiencies, thus it is difficult to obtain an unambiguous picture of the efficiencies relying only on numerical tests. In this study the historical methodological development, with a focus on the homogenisation of surface temperature observations, is presented in order to view the progress from the side of the development of statistical tools. The main stages of this methodological progress, such as the fitting of optimal step functions when the number of change-points is known (1972), the cutting algorithm (1995), and the Caussinus-Lyazrhi criterion (1997), are recalled and their effects on the quality improvement of homogenisation are briefly discussed. This analysis of the theoretical properties, together with the recently published numerical results, jointly indicates that MASH, PRODIGE, ACMANT and USHCN are the best statistical tools for homogenising climatic time series, since they provide the reconstruction and preservation of true climatic variability in observational time series with the highest reliability. On the other hand, skilled homogenisers may achieve outstanding reliability also with combinations of simple statistical methods such as the Craddock test and visual expert decisions. A few efficiency results of the COST HOME experiments are presented to demonstrate the performance of the best homogenisation methods.

  6. Evaluation of comprehensive two-dimensional gas chromatography with accurate mass time-of-flight mass spectrometry for the metabolic profiling of plant-fungus interaction in Aquilaria malaccensis.

    PubMed

    Wong, Yong Foo; Chin, Sung-Tong; Perlmutter, Patrick; Marriott, Philip J

    2015-03-27

    To explore the possible obligate interactions between the phytopathogenic fungus and Aquilaria malaccensis, which result in the generation of a complex array of secondary metabolites, we describe a comprehensive two-dimensional gas chromatography (GC × GC) method coupled to accurate mass time-of-flight mass spectrometry (TOFMS) for the untargeted and comprehensive metabolic profiling of essential oils from naturally infected A. malaccensis trees. A polar/non-polar column configuration was employed, offering an improved separation pattern of components when compared to other column sets. Four different grades of the oils displayed quite different metabolic patterns, suggesting the evolution of a signalling relationship between the host tree (emergence of various phytoalexins) and fungi (activation of biotransformation). In total, ca. 550 peaks/metabolites were detected, of which 155 were tentatively identified, representing between 20.1% and 53.0% of the total ion count. These are distributed over the chemical families of monoterpenic and sesquiterpenic hydrocarbons, oxygenated monoterpenes and sesquiterpenes (comprising ketones, aldehydes, oxides, alcohols, lactones, keto-alcohols and diols), norterpenoids, diterpenoids, short-chain glycols, carboxylic acids and others. The large number of metabolites detected, combined with the ease with which they are located in the 2D separation space, emphasises the importance of a comprehensive analytical approach for the phytochemical analysis of plant metabolomes. Furthermore, the potential of this methodology in grading agarwood oils by comparing the obtained metabolic profiles (pattern recognition for unique metabolite chemical families) is discussed. The phytocomplexity of the agarwood oils signified the production of a multitude of plant-fungus mediated secondary metabolites as chemical signals for natural ecological communication. To the best of our knowledge, this is the most complete

  7. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  8. Learning numerical progressions.

    PubMed

    Vitz, P C; Hazan, D N

    1974-01-01

    Learning of simple numerical progressions and of compound progressions formed by combining two or three simple progressions is investigated. In two experiments, time to solution was greater for compound than for simple progressions; greater the higher the progression's solution level; and greater if the progression consisted of large rather than small numbers. A set of strategies is proposed to account for progression learning, based on the assumption that S computes differences between integers, differences between differences, etc., in a hierarchical fashion. Two measures of progression difficulty, each a summary of the strategies, are proposed: C1 is a count of the number of differences needed to solve a progression; C2 is the same count with higher-level differences given more weight. The measures accurately predict in both experiments the mean time to solve 16 different progressions, with C2 being somewhat superior. The measures also predict the learning difficulty of 10 other progressions reported by Bjork (1968).
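
    The C1 measure lends itself to a compact sketch; the function below implements one plausible reading of it (count every difference computed until some difference level becomes constant), while the level weighting that distinguishes C2 is the authors' and is not reproduced here.

        def c1_difficulty(seq):
            """Count the differences computed until the progression is solved,
            i.e. until a difference level becomes constant."""
            count, level = 0, list(seq)
            while len(level) > 1 and len(set(level)) > 1:
                level = [b - a for a, b in zip(level, level[1:])]
                count += len(level)
            return count

        print(c1_difficulty([2, 4, 6, 8]))       # arithmetic: 3 differences
        print(c1_difficulty([1, 4, 9, 16, 25]))  # quadratic: 7 differences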

  9. Accurate reservoir evaluation from borehole imaging techniques and thin bed log analysis: Case studies in shaly sands and complex lithologies in Lower Eocene Sands, Block III, Lake Maracaibo, Venezuela

    SciTech Connect

    Coll, C.; Rondon, L.

    1996-08-01

    Computer-aided signal processing in combination with different types of quantitative log evaluation techniques is very useful for predicting reservoir quality in complex lithologies and will help to increase the confidence level to complete and produce a reservoir. The Lower Eocene Sands are one of the largest reservoirs in Block III and have produced light oil since 1960. Analysis of borehole images shows the reservoir heterogeneity through the presence of massive sands with very few shale laminations and thinly bedded sands with many laminations. The effect of these shales is a low resistivity that has in most cases been interpreted as water-bearing sands. A reduction of the porosity due to diagenetic processes has produced a high-resistivity behaviour. The presence of bed boundaries and shales is detected by the microconductivity curves of the borehole imaging tools, allowing the estimation of the percentage of shale in these sands. Interactive computer-aided analysis and various image processing techniques are used to aid log interpretation for estimating formation properties. Integration of these results with core information and production data was used to evaluate the producibility of the reservoirs and to predict reservoir quality. A new estimation of the net pay thickness using this technique is presented, with the consequent improvement in the expectation of additional recovery. This methodology was successfully applied in a case-by-case study showing consistency in the area.

  10. Detection of recent faulting and evaluation of the vertical offsets from numerical analysis of SAR-ERS-1 images: the example of the Atacama fault zone in northern Chile

    NASA Astrophysics Data System (ADS)

    Mering, Catherine; Chorowicz, Jean; Vicente, Jean-Claude; Chalah, Cherif; Rafalli, Gaelle

    1995-11-01

    Usually the analysis of high resolution satellite images such as radar SAR ERS-1 images is undertaken by photo-interpretation techniques in order to reveal geological features. The numerical image processing here is based on a filtering method designed for better identification of geological structures on SAR images. The method leads to a mapping of recent faults for which the vertical offset is quantified. As examples, steeply dipping active faults with abrupt scarps are extracted from SAR ERS-1 images of the Central Andes (Atacama fault zone, northern Chile). The fault throws are then evaluated with a specific numerical image processing procedure.

  11. A Numerical Testbed for Remote Sensing of Aerosols, and its Demonstration for Evaluating Retrieval Synergy from a Geostationary Satellite Constellation of GEO-CAPE and GOES-R

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Ding, Shouguo; Zeng, Jing; Spurr, Robert; Liu, Xiong; Chance, Kelly; Mishchenko, Michael I.

    2014-01-01

    We present a numerical testbed for remote sensing of aerosols, together with a demonstration for evaluating retrieval synergy from a geostationary satellite constellation. The testbed combines inverse (optimal-estimation) software with a forward model containing linearized code for computing particle scattering (for both spherical and non-spherical particles), a kernel-based (land and ocean) surface bi-directional reflectance facility, and a linearized radiative transfer model for polarized radiance. Calculation of gas absorption spectra uses the HITRAN (HIgh-resolution TRANsmission molecular absorption) database of spectroscopic line parameters and other trace species cross-sections. The outputs of the testbed include not only the Stokes 4-vector elements and their sensitivities (Jacobians) with respect to the aerosol single scattering and physical parameters (such as size and shape parameters, refractive index, and plume height), but also DFS (Degree of Freedom for Signal) values for retrieval of these parameters. This testbed can be used as a tool to provide an objective assessment of aerosol information content that can be retrieved for any constellation of (planned or real) satellite sensors and for any combination of algorithm design factors (in terms of wavelengths, viewing angles, radiance and/or polarization to be measured or used). We summarize the components of the testbed, including the derivation and validation of analytical formulae for Jacobian calculations. Benchmark calculations from the forward model are documented. In the context of NASA's Decadal Survey Mission GEOCAPE (GEOstationary Coastal and Air Pollution Events), we demonstrate the use of the testbed to conduct a feasibility study of using polarization measurements in and around the O2 A band for the retrieval of aerosol height information from space, as well as to assess potential improvement in the retrieval of fine- and coarse-mode aerosol optical depth (AOD) through the
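
    The DFS values mentioned above come from the averaging kernel of standard optimal-estimation theory; a minimal numerical sketch follows, with a random Jacobian and diagonal covariances standing in for real forward-model output.

        import numpy as np

        # DFS = trace of the averaging kernel A = G K, with the gain matrix
        # G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (standard optimal estimation).
        rng = np.random.default_rng(0)
        K = rng.normal(size=(20, 5))       # Jacobian: 20 measurements, 5 parameters
        Se_inv = np.eye(20) / 0.01**2      # inverse measurement-noise covariance
        Sa_inv = np.eye(5) / 1.0**2        # inverse prior covariance

        G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
        A = G @ K                          # averaging kernel
        print("DFS = %.2f" % np.trace(A))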

  12. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  13. Numerical and experimental evaluation of road infrastructure perception in fog and/or night conditions using infrared and photometric vision systems

    NASA Astrophysics Data System (ADS)

    Dumoulin, Jean; Boucher, Vincent; Greffier, Florian

    2009-08-01

    Use of infrared vision in the automotive industry has mainly focused on the detection of pedestrians or animals at night or under poor weather conditions. In those approaches, the behaviour of the road infrastructure in the infrared range has not been investigated. Research work was therefore carried out using numerical simulations associated with specific experiments in a fog tunnel. The present paper deals with numerical simulations developed for both the visible spectrum (visibility in fog) and infrared vision applied to road infrastructure perception in foggy night conditions. Results obtained as a function of fog nature (radiation or advection) are presented and discussed.

  14. Numerical observer for cardiac motion assessment using machine learning

    NASA Astrophysics Data System (ADS)

    Marin, Thibault; Kalayeh, Mahdi M.; Pretorius, P. H.; Wernick, Miles N.; Yang, Yongyi; Brankov, Jovan G.

    2011-03-01

    In medical imaging, image quality is commonly assessed by measuring the performance of a human observer performing a specific diagnostic task. However, in practice studies involving human observers are time consuming and difficult to implement. Therefore, numerical observers have been developed, aiming to predict human diagnostic performance to facilitate image quality assessment. In this paper, we present a numerical observer for assessment of cardiac motion in cardiac-gated SPECT images. Cardiac-gated SPECT is a nuclear medicine modality used routinely in the evaluation of coronary artery disease. Numerical observers have been developed for image quality assessment via analysis of detectability of myocardial perfusion defects (e.g., the channelized Hotelling observer), but no numerical observer for cardiac motion assessment has been reported. In this work, we present a method to design a numerical observer aiming to predict human performance in detection of cardiac motion defects. Cardiac motion is estimated from reconstructed gated images using a deformable mesh model. Motion features are then extracted from the estimated motion field and used to train a support vector machine regression model predicting human scores (human observers' confidence in the presence of the defect). Results show that the proposed method could accurately predict human detection performance and achieve good generalization properties when tested on data with different levels of post-reconstruction filtering.
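
    A bare-bones version of the regression step can be sketched with scikit-learn; the motion features and observer scores below are synthetic placeholders for the mesh-model features and human ratings used in the paper.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        features = rng.normal(size=(100, 6))   # e.g. regional wall-motion statistics
        scores = 0.8 * features[:, 0] + rng.normal(0.0, 0.1, 100)  # surrogate ratings

        # Train on 80 studies, predict observer confidence for the rest.
        model = SVR(kernel="rbf", C=1.0, epsilon=0.05).fit(features[:80], scores[:80])
        print(model.predict(features[80:85]).round(2))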

  15. Benchmarking accurate spectral phase retrieval of single attosecond pulses

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Le, Anh-Thu; Morishita, Toru; Yu, Chao; Lin, C. D.

    2015-02-01

    A single extreme-ultraviolet (XUV) attosecond pulse or pulse train in the time domain is fully characterized if its spectral amplitude and phase are both determined. The spectral amplitude can be easily obtained from photoionization of simple atoms where accurate photoionization cross sections have been measured from, e.g., synchrotron radiations. To determine the spectral phase, at present the standard method is to carry out XUV photoionization in the presence of a dressing infrared (IR) laser. In this work, we examine the accuracy of current phase retrieval methods (PROOF and iPROOF) where the dressing IR is relatively weak such that photoelectron spectra can be accurately calculated by second-order perturbation theory. We suggest a modified method named swPROOF (scattering wave phase retrieval by omega oscillation filtering) which utilizes accurate one-photon and two-photon dipole transition matrix elements and removes the approximations made in PROOF and iPROOF. We show that the swPROOF method can in general retrieve accurate spectral phase compared to other simpler models that have been suggested. We benchmark the accuracy of these phase retrieval methods through simulating the spectrogram by solving the time-dependent Schrödinger equation numerically using several known single attosecond pulses with a fixed spectral amplitude but different spectral phases.

  16. Assessing Probabilistic Reasoning in Verbal-Numerical and Graphical-Pictorial Formats: An Evaluation of the Psychometric Properties of an Instrument

    ERIC Educational Resources Information Center

    Agus, Mirian; Penna, Maria Pietronilla; Peró-Cebollero, Maribel; Guàrdia-Olmos, Joan

    2016-01-01

    Research on the graphical facilitation of probabilistic reasoning has been characterised by the effort expended to identify valid assessment tools. The authors developed an assessment instrument to compare reasoning performances when problems were presented in verbal-numerical and graphical-pictorial formats. A sample of undergraduate psychology…

  17. Evaluation of Model Complexity and Parameter Estimation: Indirect Inversion of a Numerical Model of Heat Conduction and Convection Using Subsurface Temperatures in Peat

    NASA Astrophysics Data System (ADS)

    Christensen, W.; Kamai, T.; Fogg, G. E.

    2012-12-01

    The presence of metal piezometers (thermal conductivity 16.0 W m^-1 K^-1) in peat (thermal conductivity 0.5 W m^-1 K^-1) can significantly influence temperatures recorded in the subsurface. Radially symmetrical 2D numerical models of heat conduction and convection that use a transient specified-temperature (Dirichlet) boundary condition and explicitly account for the difference in thermal properties differ from the commonly used 1D analytical solution by as much as 2°C at 0.15 m below ground surface. Field data from temperature loggers located inside and outside piezometers show similar differences, supporting the use of the more complex numerical model. In order to better simulate field data, an energy balance approach is used to calculate the temperature along the upper boundary using hourly radiation and air temperature data, along with daily average wind velocity and cloud cover data. Normally distributed random noise is added to recorded field data to address potential natural variation between conditions at the instrument site and the field site (piezometer). Five influential parameters are considered: albedo, crop coefficient, hydraulic conductivity, thermal diffusivity, and surface water depth. Ten sets of these five parameters are generated from a uniform random distribution and constrained by values reported in the literature or measured in the field. The ten parameter sets and noise are used to generate synthetic subsurface data in the numerical model. The synthetic temperature data are offset by a constant value determined from a uniform random distribution to represent potential offset in instrument accuracy (+/- 0.1 °C). The original parameter values are satisfactorily recovered by indirect inversion of the noise-free model using UCODE. Parameter estimates from the homogeneous numerical model (equivalent to the analytical model) are then compared with those from the numerical model that explicitly represents the metal piezometer. The same inversion scheme is

  18. Detection and accurate localization of harmonic chipless tags

    NASA Astrophysics Data System (ADS)

    Dardari, Davide

    2015-12-01

    We investigate the detection and localization properties of harmonic tags working at microwave frequencies. A two-tone interrogation signal and a dedicated signal processing scheme at the receiver are proposed to eliminate phase ambiguities caused by the short signal wavelength and to provide accurate distance/position estimation even in the presence of clutter and multipath. The theoretical limits on tag detection and localization accuracy are investigated starting from a concise characterization of harmonic backscattered signals. Numerical results show that accuracies on the order of centimeters are feasible within an operational range of a few meters in the RFID UHF band.
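
    While the paper's dedicated processing scheme for harmonic backscatter is not reproduced here, the basic reason a two-tone signal removes phase ambiguity can be shown with the generic two-frequency phase-ranging relation: the phase difference between two closely spaced tones grows slowly with distance, so it remains unambiguous out to c/(2Δf). A minimal sketch with illustrative values:

```python
import numpy as np

c = 3e8                          # speed of light (m/s)
f1, f2 = 5.80e9, 5.81e9          # illustrative microwave tones (Hz)
df = f2 - f1
d_true = 2.7                     # tag distance (m)

# Round-trip phase of each tone, wrapped to [0, 2*pi): each is individually
# ambiguous because the wavelength is only a few centimetres.
phi1 = (2 * np.pi * f1 * 2 * d_true / c) % (2 * np.pi)
phi2 = (2 * np.pi * f2 * 2 * d_true / c) % (2 * np.pi)

# The phase *difference* is unambiguous out to c/(2*df) = 15 m here,
# comfortably beyond the few-metre operating range quoted above.
dphi = (phi2 - phi1) % (2 * np.pi)
d_est = c * dphi / (4 * np.pi * df)
print(f"estimated distance: {d_est:.3f} m")   # ~2.700
```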

  19. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  1. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  2. Accurately measuring MPI broadcasts in a computational grid

    SciTech Connect

    Karonis N T; de Supinski, B R

    1999-05-06

    An MPI library's implementation of broadcast communication can significantly affect the performance of applications built with that library. In order to choose between similar implementations or to evaluate available libraries, accurate measurements of broadcast performance are required. As we demonstrate, existing methods for measuring broadcast performance are either inaccurate or inadequate. Fortunately, we have designed an accurate method for measuring broadcast performance, even in a challenging grid environment. Measuring broadcast performance is not easy. Simply sending one broadcast after another allows them to proceed through the network concurrently, thus resulting in inaccurate per-broadcast timings. Existing methods either fail to eliminate this pipelining effect or eliminate it by introducing overheads that are as difficult to measure as the performance of the broadcast itself. This problem becomes even more challenging in grid environments. Latencies along different links can vary significantly. Thus, an algorithm's performance is difficult to predict from its communication pattern. Even when accurate prediction is possible, the pattern is often unknown. Our method introduces a measurable overhead to eliminate the pipelining effect, regardless of variations in link latencies. Accurate measurements allow users to choose between different available implementations; accurate and complete measurements can also guide the use of a given implementation to improve application performance. These choices will become even more important as grid-enabled MPI libraries [6, 7] become more common, since bad choices are likely to cost significantly more in grid environments. In short, the distributed processing community needs accurate, succinct and complete measurements of collective communications performance. Since successive collective communications can often proceed concurrently, accurately measuring them is difficult. Some benchmarks use knowledge of the communication algorithm to predict the
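
    The pipelining pitfall described above is easy to demonstrate. The mpi4py sketch below (an illustration of the problem, not the paper's measurement method) times back-to-back broadcasts, where messages overlap in the network, against barrier-separated broadcasts, where the barrier serializes them at the cost of its own overhead:

```python
# Run with e.g.: mpiexec -n 4 python bcast_timing.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
buf = np.zeros(1 << 20, dtype=np.uint8)    # 1 MiB broadcast payload
reps = 50

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):                      # naive: broadcasts may pipeline
    comm.Bcast(buf, root=0)
naive = (MPI.Wtime() - t0) / reps

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):                      # serialized, but barrier cost included
    comm.Bcast(buf, root=0)
    comm.Barrier()
serial = (MPI.Wtime() - t0) / reps

if comm.Get_rank() == 0:
    print(f"naive: {naive * 1e6:.1f} us/bcast, "
          f"barrier-separated: {serial * 1e6:.1f} us/bcast")
```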

  3. Evaluating Large-Scale Studies to Accurately Appraise Children's Performance

    ERIC Educational Resources Information Center

    Ernest, James M.

    2012-01-01

    Educational policy is often developed using a top-down approach. Recently, there has been a concerted shift in policy for educators to develop programs and research proposals that evolve from "scientific" studies and focus less on their intuition, aided by professional wisdom. This article analyzes several national and international educational…

  4. Analytical modeling and numerical simulations of the thermal behavior of trench-isolated bipolar transistors

    NASA Astrophysics Data System (ADS)

    Marano, I.; d'Alessandro, V.; Rinaldi, N.

    2009-03-01

    The thermal behavior of trench-isolated bipolar transistors is thoroughly investigated. Fully 3D numerical simulations are performed to analyze the impact of all technological parameters of interest. Based on numerical results, a novel strategy to analytically evaluate the temperature field is proposed, which accounts for the heat propagation through the trench and the nonuniform heat flux distribution over the interface between the silicon box surrounded by the trench and the underlying substrate. The resulting model is shown to agree with numerical simulations more closely than other approaches available in the literature. As a consequence, it can be employed for an accurate, yet fast, evaluation of the thermal resistance of a trench-isolated device as well as of the temperature gradients within the silicon box.

  5. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  6. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.

  7. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  8. An articulated statistical shape model for accurate hip joint segmentation.

    PubMed

    Kainmueller, Dagmar; Lamecker, Hans; Zachow, Stefan; Hege, Hans-Christian

    2009-01-01

    In this paper we propose a framework for fully automatic, robust and accurate segmentation of the human pelvis and proximal femur in CT data. We propose a composite statistical shape model of femur and pelvis with a flexible hip joint, for which we extend the common definition of statistical shape models as well as the common strategy for their adaptation. We do not analyze the joint flexibility statistically, but model it explicitly by rotational parameters describing the bend in a ball-and-socket joint. A leave-one-out evaluation on 50 CT volumes shows that image-driven adaptation of our composite shape model robustly produces accurate segmentations of both proximal femur and pelvis. As a second contribution, we evaluate a fine-grained multi-object segmentation method based on graph optimization. It relies on accurate initializations of femur and pelvis, which our composite shape model can generate. Simultaneous optimization of both femur and pelvis yields more accurate results than separate optimizations of each structure. Shape model adaptation and graph-based optimization are embedded in a fully automatic framework. PMID:19964159

  9. Paleoglaciological reconstructions for the Tibetan Plateau during the last glacial cycle: evaluating numerical ice sheet simulations driven by GCM-ensembles

    NASA Astrophysics Data System (ADS)

    Kirchner, Nina; Greve, Ralf; Stroeven, Arjen P.; Heyman, Jakob

    2011-01-01

    The Tibetan Plateau is a topographic feature of extraordinary dimension and has an important impact on regional and global climate. However, the glacial history of the Tibetan Plateau is more poorly constrained than that of most other formerly glaciated regions, such as North America and Eurasia. On the basis of some field evidence it has been hypothesized that the Tibetan Plateau was covered by an ice sheet during the Last Glacial Maximum (LGM). Abundant field and chronological evidence for a predominance of local valley glaciation during the past 300,000 calendar years (that is, 300 ka), coupled with an absence of glacial landforms and sediments in extensive areas of the plateau, now refutes this concept. This, furthermore, calls into question previous ice sheet modeling attempts, which generally arrive at ice volumes considerably larger than allowed for by field evidence. Surprisingly, the robustness of such numerical ice sheet model results has not been widely queried, despite potentially important climate ramifications. We simulated the growth and decay of ice on the Tibetan Plateau during the last 125 ka in response to a large ensemble of climate forcings (90 members) derived from Global Circulation Models (GCMs), using a 3D thermomechanical ice sheet model similar to those employed in previous studies. The numerical results include end members as extreme as an ice-free Tibetan Plateau and a plateau-scale ice sheet comparable in volume to the contemporary Greenland ice sheet. We further demonstrate that numerical simulations that acceptably conform to published reconstructions of Quaternary ice extent on the Tibetan Plateau cannot be achieved with the employed stand-alone ice sheet model when merely forced by paleoclimates derived from currently available GCMs. Progress is, however, expected if future investigations employ ice sheet models with higher resolution, bidirectional ice sheet-atmosphere feedbacks, improved treatment of the surface mass balance, and

  10. Gaussian short-time propagators and electron kinetics: Numerical evaluation of path-sum solutions to Fokker–Planck equations for rf heating and current drive

    SciTech Connect

    Bizarro, J.P.; Belo, J.H.; Figueiredo, A.C.

    1997-06-01

    Knowing that short-time propagators for Fokker–Planck equations are Gaussian, and based on a path-sum formulation, an efficient and simple numerical method is presented to solve the initial-value problem for electron kinetics during rf heating and current drive. The formulation is thoroughly presented and discussed, its advantages are stressed, and general, practical criteria for its implementation are derived regarding the time step and grid spacing. The new approach is illustrated and validated by solving the one-dimensional model for lower-hybrid current drive, which has a well-known steady-state analytical solution. © 1997 American Institute of Physics.
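
    The idea is concrete: for a drift-diffusion process, the transition density over a short step Δt is Gaussian, with mean x + A(x)Δt and variance 2DΔt, so the distribution can be advanced by repeated quadrature against that kernel. The sketch below applies this to a toy 1D Ornstein-Uhlenbeck problem (not the paper's lower-hybrid current-drive model) and checks the known steady state:

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 401)
dx = x[1] - x[0]
dt = 1e-3
D = 0.5
A = -x                                    # linear restoring drift

# Gaussian short-time propagator: K[i, j] ~ p(x_i, t+dt | x_j, t), with mean
# x_j + A(x_j)*dt and variance 2*D*dt (accurate for small dt).
mean = x + A * dt
var = 2.0 * D * dt
K = np.exp(-(x[:, None] - mean[None, :])**2 / (2.0 * var))
K /= np.sqrt(2.0 * np.pi * var)

f = np.exp(-(x - 3.0)**2)                 # initial condition
f /= f.sum() * dx
for _ in range(5000):                     # path sum: repeated quadrature
    f = K @ f * dx

f_exact = np.exp(-x**2 / (2.0 * D))       # known steady state for this drift
f_exact /= f_exact.sum() * dx
print(np.abs(f - f_exact).max())          # small, O(dt) residual
```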

  11. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC), as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
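
    Since the models above are logistic regressions on meteorological predictors, the fitting step has a compact standard form. The sketch below (synthetic data with made-up predictor names; nothing here reproduces the study's ARPS/RUC features) shows the pattern with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-ins for analysis-derived predictors (illustrative only).
n = 5000
X = np.column_stack([
    rng.normal(-55.0, 5.0, n),    # upper-tropospheric temperature (degC)
    rng.uniform(20.0, 130.0, n),  # relative humidity over ice (%)
    rng.normal(0.0, 5.0, n),      # vertical velocity (cm/s)
])
# Toy rule: persistent contrails favored by cold, ice-supersaturated air.
p = 1.0 / (1.0 + np.exp(-(0.2 * (X[:, 1] - 100.0) - 0.3 * (X[:, 0] + 50.0))))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")  # high on toy data
```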

  12. A fast and accurate method for echocardiography strain rate imaging

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Strain and strain rate imaging have recently proved superior to classical motion estimation methods for myocardial evaluation, as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique which is more rapid and accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. Moreover, the cardiac central point is obtained using a combination of the center of mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences, and proves that this technique is more accurate and faster than previous methods.
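
    Whatever technique estimates the velocity field, the strain rate itself is just its spatial derivative. A minimal sketch of that final step (the optical-flow stage is omitted; the velocity profile is a made-up example):

```python
import numpy as np

x = np.linspace(0.0, 0.08, 81)           # position along the wall (m)
v = 0.05 * np.sin(np.pi * x / 0.08)      # toy myocardial velocity profile (m/s)

strain_rate = np.gradient(v, x)          # SR(x) = dv/dx, units 1/s
print(strain_rate[:3])
```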

  13. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components, to which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  14. Numerical evaluation of longitudinal motions of Wigley hulls advancing in waves by using Bessho form translating-pulsating source Green'S function

    NASA Astrophysics Data System (ADS)

    Xiao, Wenbin; Dong, Wencai

    2016-06-01

    In the framework of 3D potential flow theory, Bessho form translating-pulsating source Green's function in frequency domain is chosen as the integral kernel in this study and hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for advancing ship in regular waves. Numerical characteristics of the Green function show that the contribution of local-flow components to velocity potential is concentrated at the nearby source point area and the wave component dominates the magnitude of velocity potential in the far field. Two kinds of mathematical models, with or without local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of translating-pulsating source Green's function method for various ship forms. In addition, the mesh analysis of discrete surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results by the simplified model are somewhat greater than the experimental data in the resonant zone, and the model can be used as an effective tool to predict ship seakeeping properties. However, translating-pulsating source Green function method is only appropriate for the qualitative analysis of motion response in waves if the ship geometrical shape fails to satisfy the slender-body assumption.

  15. Triclinic Transpression in brittle shear zones evaluated via combined numerical and analogue modeling: the case of The Torcal de Antequera Massif, SE Spain.

    NASA Astrophysics Data System (ADS)

    Barcos, Leticia; Díaz-Azpiroz, Manuel; Faccenna, Claudio; Balanyá, Juan Carlos; Expósito, Inmaculada; Giménez-Bonilla, Alejandro

    2013-04-01

    Numerical kinematic models have been widely used to understand the parameters controlling the generation and evolution of ductile transpression zones. However, these models are based on continuum mechanics and are therefore less useful for analysing deformation partitioning and strain within brittle-ductile transpression zones. The combination of numerical and analogue models potentially provides an effective approach for a better understanding of these processes and, to a broader extent, of high strain zones in general. In the present work, we follow a combined numerical and analogue approach to analyse a brittle dextral transpressive shear zone. The Torcal de Antequera Massif (TAM) is part of a roughly E-W oriented shear zone at the NE end of the Western Gibraltar Arc (Betic Cordillera). According to its structural and kinematic features, this shear zone presents two types of domains: (i) domain type 1 is located at both TAM margins and is characterized by strike-slip structures subparallel to the main TAM boundaries (E-W); (ii) domain type 2 corresponds to the TAM inner part and presents SE-vergent open folds and reverse shear zones, as well as normal faults accommodating fold-axis-parallel extension. Both domains have been studied separately by applying a model of triclinic transpression with inclined extrusion. The kinematic parameters obtained in this study (two angular parameters and the kinematic vorticity number Wk) allow us to constrain geometrical transpression parameters. As such, the angle of oblique convergence (α, the horizontal angle between the displacement vector and the strike of the shear zone) ranges between 10-17° (simple-shear dominated) for domain type 1 and between 31-35° (coaxial dominated) for domain type 2. According to the results obtained from the numerical model, and in order to validate its possible utility in brittle shear zones, we develop two analogue models with α values representative of both domains defined in the TAM: 15° for type 1 and 30° for type 2. In the

  16. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  17. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  18. Numerical Relativity and Astrophysics

    NASA Astrophysics Data System (ADS)

    Lehner, Luis; Pretorius, Frans

    2014-08-01

    Throughout the Universe many powerful events are driven by strong gravitational effects that require general relativity to fully describe them. These include compact binary mergers, black hole accretion, and stellar collapse, where velocities can approach the speed of light and extreme gravitational fields (Φ_Newt/c² ≃ 1) mediate the interactions. Many of these processes trigger emission across a broad range of the electromagnetic spectrum. Compact binaries further source strong gravitational wave emission that could directly be detected in the near future. This feat will open up a gravitational wave window into our Universe and revolutionize our understanding of it. Describing these phenomena requires general relativity, and—where dynamical effects strongly modify gravitational fields—the full Einstein equations coupled to matter sources. Numerical relativity is a field within general relativity concerned with studying such scenarios that cannot be accurately modeled via perturbative or analytical calculations. In this review, we examine results obtained within this discipline, with a focus on its impact in astrophysics.

  19. Numerical methods for molecular dynamics

    SciTech Connect

    Skeel, R.D.

    1991-01-01

    This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.
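
    To make the optimal-order claim concrete, a textbook V-cycle for the 1D Poisson problem is sketched below; it is only an illustration of the multigrid principle, not the report's 3D Poisson-Boltzmann solver. Each cycle costs O(n) work and reduces the algebraic error by a grid-independent factor, so a fixed number of cycles reaches the discretization-error level.

```python
import numpy as np

def smooth(u, f, h, iters=3, w=2.0 / 3.0):
    """Weighted-Jacobi relaxation for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def vcycle(u, f, h):
    n = u.size - 1
    if n == 2:                               # coarsest grid: one interior point
        u[1] = h * h * f[1] / 2.0
        return u
    u = smooth(u, f, h)                      # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)                # restrict by full weighting
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = vcycle(np.zeros_like(rc), rc, 2 * h)
    e = np.zeros_like(u)                     # prolong by linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)               # post-smoothing

n = 256
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)             # exact solution: u = sin(pi*x)
u = np.zeros(n + 1)
for _ in range(10):                          # a few V-cycles suffice
    u = vcycle(u, f, h)
print(np.abs(u - np.sin(np.pi * x)).max())   # ~1e-5: at the discretization error
```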

  20. Pre-Stall Behavior of a Transonic Axial Compressor Stage via Time-Accurate Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Chen, Jen-Ping; Hathaway, Michael D.; Herrick, Gregory P.

    2008-01-01

    CFD calculations using high-performance parallel computing were conducted to simulate the pre-stall flow of a transonic compressor stage, NASA compressor Stage 35. The simulations were run with a full-annulus grid that models the 3D, viscous, unsteady blade row interaction without the need for an artificial inlet distortion to induce stall. The simulation demonstrates the development of the rotating stall from the growth of instabilities. Pressure-rise performance and pressure traces are compared with published experimental data before the study of flow evolution prior to the rotating stall. Spatial FFT analysis of the flow indicates a rotating long-length disturbance of one rotor circumference, which is followed by a spike-type breakdown. The analysis also links the long-length wave disturbance with the initiation of the spike inception. The spike instabilities occur when the trajectory of the tip clearance flow becomes perpendicular to the axial direction. When approaching stall, the passage shock changes from a single oblique shock to a dual-shock, which distorts the perpendicular trajectory of the tip clearance vortex but shows no evidence of flow separation that may contribute to stall.

  1. Numerical Simulation of Cocontinuous Blends

    NASA Astrophysics Data System (ADS)

    Kim, Junseok; Lowengrub, John

    2004-11-01

    In strongly sheared emulsions, experiments (Galloway and Macosko 2002) have shown that systems consisting of one continuous (matrix) and one dispersed (drops) phase may undergo a coalescence cascade leading to a system in which both phases are continuous (sponge-like). Such configurations may have desirable mechanical and electrical properties and thus have wide-ranging applications. Using a new and improved diffuse-interface method (accurate surface tension force formulation, volume preservation, and an efficient nonlinear multigrid solver) developed by Kim and Lowengrub 2004, we perform numerical simulations of cocontinuous blends and determine the conditions for their formation. We also characterize their rheology.

  2. Numerical evaluation of the effects of planform geometry and inflow conditions on flow, turbulence structure, and bed shear velocity at a stream confluence with a concordant bed

    NASA Astrophysics Data System (ADS)

    Constantinescu, George; Miyawaki, Shinjiro; Rhoads, Bruce; Sukhodolov, Alexander

    2014-10-01

    This study numerically investigates the effects of variations in inflow conditions and planform geometry on large-scale coherent flow structures and bed friction velocities at a stream confluence with natural bathymetry and concordant bed morphology. Several numerical experiments are conducted in which either the Kelvin-Helmholtz mode or the wake mode dominates within the mixing interface (MI) between the two confluent streams as the junction angle and alignments of the tributaries are altered. In the Kelvin-Helmholtz mode, the MI contains mostly corotating vortices driven by the mean transverse shear across the MI, while in the wake mode the MI contains counterrotating vortices formed by the interaction of the separated shear layers on the two sides of a zone of stagnant fluid near the junction corner. A large angle between the two incoming streams is not necessary for the development of strongly coherent streamwise-oriented vortical (SOV) cells in the immediate vicinity of the MI. Results show that such SOV cells can develop and produce high bed friction velocities even for cases with a low angle between the two tributaries and for cases where the downstream channel is approximately aligned with the axes of the two tributaries (low-curvature cases). SOV cells tend not to develop only when the incoming streams are parallel and aligned with the downstream channel (junction angle of zero), and the incoming flows produce a strong Kelvin-Helmholtz mode. Under such conditions, quasi 2-D MI vortices play the primary role in mixing and the production of high bed shear velocities. Simulations with and without natural bed morphology/local bank line irregularities indicate that planform geometry and inflow conditions primarily govern the development of coherent flow structures, but that bathymetric and bank line effects can locally modify details of these structures.

  3. Evaluation of Analytical and Numerical Techniques for Defining the Radius of Influence for an Open-Loop Ground Source Heat Pump System

    SciTech Connect

    Freedman, Vicky L.; Mackley, Rob D.; Waichler, Scott R.; Horner, Jacob A.

    2013-09-26

    In an open-loop groundwater heat pump (GHP) system, groundwater is extracted, run through a heat exchanger, and injected back into the ground, resulting in no mass balance changes to the flow system. Although the groundwater use is non-consumptive, the withdrawal and injection of groundwater may cause negative hydraulic and thermal impacts to the flow system. Because GHP is a relatively new technology and regulatory guidelines for determining environmental impacts for GHPs may not exist, consumptive use metrics may need to be used for permit applications. For consumptive use permits, a radius of influence is often used, defined as the radius beyond which hydraulic impacts to the system are considered negligible. In this paper, the hydraulic radius of influence concept was examined using analytical and numerical methods for a non-consumptive GHP system in southeastern Washington State. At this location, the primary hydraulic concerns were impacts to nearby contaminant plumes and a water supply well field. The results of this study showed that the analytical techniques with idealized radial flow were generally unsuited because they overpredicted the influence of the well system. The numerical techniques yielded more reasonable results because they could account for aquifer heterogeneities and flow boundaries. In particular, the use of a capture zone analysis was identified as the best method for determining potential changes in current contaminant plume trajectories. The capture zone analysis is a more quantitative and reliable tool for determining the radius of influence, with greater accuracy and better insight for a non-consumptive GHP assessment.
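
    As a point of reference for the idealized radial-flow analyses the study found to overpredict influence, the classical Theis solution gives drawdown around a single pumping well, and a "radius of influence" follows from a drawdown threshold. The sketch below uses illustrative values, not the site's:

```python
import numpy as np
from scipy.special import exp1

Q = 0.01          # pumping rate (m^3/s)
T = 5e-3          # transmissivity (m^2/s)
S = 0.2           # storage coefficient (illustrative, unconfined-like)
t = 30 * 86400    # 30 days of pumping (s)

def drawdown(r):
    """Theis drawdown s(r, t); the well function W(u) equals E1(u)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

r = np.logspace(0, 4, 400)        # 1 m to 10 km
s = drawdown(r)
threshold = 0.01                  # drawdown deemed "negligible" (m)
roi = r[s > threshold].max()      # largest radius with non-negligible drawdown
print(f"radius of influence ~ {roi:.0f} m")
```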

  4. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of a molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in a chemiluminescent gas analyzer, together with an air purge, allows accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using a conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In the modified analyzer, molybdenum has a high tolerance to CO, and the air purge substantially quenches NOx destruction. In tests, the modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  5. Evaluation.

    ERIC Educational Resources Information Center

    McAnany, Emile G.; And Others

    1980-01-01

    Two lead articles set the theme for this issue devoted to evaluation as Emile G. McAnany examines the usefulness of evaluation and Robert C. Hornik addresses four widely accepted myths about evaluation. Additional articles include a report of a field evaluation done by the Accion Cultural Popular (ACPO); a study of the impact of that evaluation by…

  6. The use of FLO2D numerical code in lahar hazard evaluation at Popocatépetl volcano: a 2001-lahar scenario

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Capra, L.

    2014-07-01

    Lahar modelling represents an excellent tool to design hazard maps. It allows the definition of potential inundation zones for different lahar magnitude scenarios and sediment concentrations. Here we present the results obtained for the 2001 syneruptive lahar at Popocatépetl volcano, based on simulations performed with FLO2D software. An accurate delineation of this event is needed since it is one of the possible scenarios considered during a volcanic crisis. One of the main issues for lahar simulation using FLO2D is the calibration of the input hydrograph and rheologic flow properties. Here we verified that geophone data can be properly calibrated by means of peak discharge calculations obtained by the superelevation method. Simulation results clearly show the influence of concentration and rheologic properties on lahar depth and distribution. Modifying rheologic properties during lahar simulation strongly affects lahar distribution. More viscous lahars have a more restricted areal distribution, thicker depths, and noticeably smaller resulting velocities. FLO2D proved to be a very successful tool to delineate lahar inundation zones as well as to generate different lahar scenarios related not only to lahar volume or magnitude but also to different sediment concentrations and rheologies widely documented to influence lahar-prone areas.

  7. The use of FLO2D numerical code in lahar hazard evaluation at Popocatépetl volcano: a 2001 lahar scenario

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Capra, L.

    2014-12-01

    Lahar modeling represents an excellent tool for designing hazard maps. It allows the definition of potential inundation zones for different lahar magnitude scenarios and sediment concentrations. Here, we present the results obtained for the 2001 syneruptive lahar at Popocatépetl volcano, based on simulations performed with FLO2D software. An accurate delineation of this event is needed, since it is one of the possible scenarios considered if magmatic activity increases in magnitude. One of the main issues for lahar simulation using FLO2D is the calibration of the input hydrograph and rheological flow properties. Here, we verified that geophone data can be properly calibrated by means of peak discharge calculations obtained by the superelevation method. Digital elevation model resolution also proved to be an important factor in defining the reliability of the simulated flows. Simulation results clearly show the influence of sediment concentrations and rheological properties on lahar depth and distribution. Modifying rheological properties during lahar simulation strongly affects lahar distribution. More viscous lahars have a more restricted areal distribution and thicker depths, and the resulting velocities are noticeably smaller. FLO2D proved to be a very successful tool for delineating lahar inundation zones as well as generating different lahar scenarios related not only to lahar volume or magnitude, but also taking into account different sediment concentrations and rheologies widely documented as influencing lahar-prone areas.
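
    The superelevation method used above for calibrating peak discharge rests on the standard forced-vortex relation for flow through a channel bend, v = sqrt(g·Rc·Δh/W), with discharge Q = v·A. A sketch with illustrative numbers (not measurements from Popocatépetl):

```python
import numpy as np

g = 9.81      # gravity (m/s^2)
Rc = 60.0     # radius of curvature of the bend (m)
W = 15.0      # channel width (m)
dh = 1.2      # superelevation across the bend (m)
A = 30.0      # flow cross-sectional area (m^2)

v = np.sqrt(g * Rc * dh / W)     # mean flow velocity (m/s)
Q = v * A                        # peak discharge (m^3/s)
print(f"v = {v:.1f} m/s, Q = {Q:.0f} m^3/s")
```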

  8. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  9. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.

  10. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  11. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  12. Design and numerical evaluation of full-authority flight control systems for conventional and thruster-augmented helicopters employed in NOE operations

    NASA Technical Reports Server (NTRS)

    Perri, Todd A.; Mckillip, R. M., Jr.; Curtiss, H. C., Jr.

    1987-01-01

    The development and methodology are presented for full-authority implicit model-following and explicit model-following optimal controllers for use on helicopters operating in the Nap-of-the-Earth (NOE) environment. Pole placement, input-output frequency response, and step input response were used to evaluate handling-qualities performance. The pilot was equipped with velocity-command inputs. A mathematical/computational trajectory optimization method was employed to evaluate the ability of each controller to fly NOE maneuvers. The method determines the optimal swashplate and thruster input histories from the helicopter's dynamics and the prescribed geometry and desired flying qualities of the maneuver. Three maneuvers were investigated for both the implicit and explicit controllers, with and without auxiliary propulsion installed: pop-up/dash/descent, bob-up at 40 knots, and glideslope. The explicit controller proved superior to the implicit controller in performance and ease of design.
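
    Optimal full-state feedback of the kind underlying such model-following designs is commonly obtained from the continuous-time algebraic Riccati equation. A minimal sketch with a placeholder two-state system (not the helicopter model from the study):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -0.5]])         # toy dynamics, placeholder only
B = np.array([[0.0],
              [1.0]])               # single control input
Q = np.diag([10.0, 1.0])            # state weighting
R = np.array([[1.0]])               # control-effort weighting

P = solve_continuous_are(A, B, Q, R)       # Riccati solution
K = np.linalg.solve(R, B.T @ P)            # optimal gain, u = -K x

print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))  # stable
```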

  13. Numerical simulations of cryogenic cavitating flows

    NASA Astrophysics Data System (ADS)

    Kim, Hyunji; Kim, Hyeongjun; Min, Daeho; Kim, Chongam

    2015-12-01

    The present study deals with a numerical method for cryogenic cavitating flows. Recently, we have developed an accurate and efficient baseline numerical scheme for all-speed water-gas two-phase flows. Extending that work, we modify the numerical dissipation to be properly scaled so that it does not show any deficiencies in low Mach number regions. For dealing with cryogenic two-phase flows, the previous EOS-dependent shock discontinuity sensing term is replaced with a newly designed EOS-free one. To validate the proposed numerical method, cryogenic cavitating flows around a hydrofoil are computed, and the pressure and temperature depression effects in cryogenic cavitation are demonstrated. Compared with Hord's experimental data, the computed results turn out to be satisfactory. Afterwards, numerical simulations of the flow around the KARI turbopump inducer of a liquid rocket are carried out under various flow conditions with water and cryogenic fluids, and the differences in inducer flow physics depending on the working fluid are examined.

  14. Numerically Controlled Machining Of Wind-Tunnel Models

    NASA Technical Reports Server (NTRS)

    Kovtun, John B.

    1990-01-01

    A new procedure was developed for constructing dynamic models and parts for wind-tunnel tests or radio-controlled flight tests. It involves the use of a single-phase numerical control (NC) technique to produce highly accurate, symmetrical models in less time.

  15. Accurate deterministic solutions for the classic Boltzmann shock profile

    NASA Astrophysics Data System (ADS)

    Yue, Yubei

    The Boltzmann equation, or Boltzmann transport equation, is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far beyond the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  16. Numerical evaluation of the PERTH (PERiodic Tracer Hierarchy) method for estimating time-variable travel time distribution in variably saturated soils

    NASA Astrophysics Data System (ADS)

    Kim, M.; Harman, C. J.

    2013-12-01

    The distribution of water travel times is one of the crucial hydrologic characteristics of a catchment. Recently, it has been argued that a rigorous treatment of travel time distributions should allow for their variability in time because of the variable fluxes and partitioning of water in the water balance, and the consequent variable storage of a catchment. We would like to be able to observe the structure of the temporal variations in travel time distributions under controlled conditions, such as in a soil column or under irrigation experiments. However, time-variable travel time distributions are difficult to observe using typical active and passive tracer approaches. Time-variability implies that tracers introduced at different times will have different travel time distributions. The distribution may also vary during injection periods. Moreover, repeated application of a single tracer in a system with significant memory leads to overprinting of breakthrough curves, which makes it difficult to extract the original breakthrough curves, and the number of ideal tracers that can be applied is usually limited. Recognizing these difficulties, the PERTH (PERiodic Tracer Hierarchy) method has been developed. The method provides a way to estimate time-variable travel time distributions through tracer experiments under controlled conditions by employing a multi-tracer hierarchy under periodic hydrologic forcing. The key assumption of the PERTH method is that, as time becomes sufficiently large relative to the injection time, the average travel time distributions of two distinct ideal tracers injected during overlapping periods become approximately equal. Thus one can be used as a proxy for the other, and the breakthrough curves of tracers applied at different times under periodic forcing conditions can be separated from one another. In this study, we tested the PERTH method numerically for the case of infiltration at the plot scale using HYDRUS-1D and a particle

  17. Basin evaluation in deltaic series using 2-D numerical modeling a comparison of Mahakam delta and south Louisiana/Gulf of Mexico case histories

    SciTech Connect

    Burrus, J. ); De Choppin, J.G.; Grosjean, J.L.; Oudin, J.L. ); Schwarzer, T. ); Schroeder, F.; Lander, R. )

    1993-09-01

    Integrated numerical modeling of petroleum generation and migration is difficult to apply in deltaic series. Using Institut Francais du Petrole's two-dimensional model TEMISPACK, we tried to simulate the petroleum history along a 70 km long east-west regional section in the Mahakam delta (Indonesia) and an 800 km long north-south section in south Louisiana/Gulf of Mexico. The two basins contain thick (>10 km) accumulations of post-middle Miocene sediments. The principal results are as follows. (1) Both basins have similar overpressure profiles caused by thick shales with nano-darcy permeabilities. Compaction, not oil or gas generation, controls the overpressure histories. (2) In both basins, the thermal history is dominated by burial rate, thermal blanketing, and undercompaction. Basinward increases in thermal gradients are probably due to basinward increases in shale content and undercompaction, rather than geodynamic processes. (3) We used an upscaling procedure to define sedimentary facies and properties for each cell in the models. In both cases, we found a huge permeability anisotropy of interbedded facies was necessary to match observed pressure profiles and hydrocarbon distributions. This anisotropy results in a dominant "parallel-to-bedding" migration pattern, with only a moderate (<0.5 km) vertical migration component. (4) A fundamental difference between the Mahakam and the Gulf Coast petroleum systems is the role of growth faults. In the Gulf Coast, huge growth faults connect deep overpressured, overmature Tertiary source facies with shallow, interbedded sandy reservoirs. Enhanced vertical permeability in the vicinity of these fault zones allows for several kilometers of vertical migration. In the Mahakam delta, where growth faults are less prevalent, deep overpressured shales have very poor expulsion efficiency; gas and oil in shallow reservoirs are shown to be fed mostly by coals located above, and not within, the overpressured zone.

  18. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  19. Light Field Imaging Based Accurate Image Specular Highlight Removal.

    PubMed

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083

  20. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  1. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083

  2. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On Day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  3. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (besides KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
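
    For orientation, the simplest member of this model family is a fluctuating-charge (electronegativity equalization) solve; a minimal sketch follows, with placeholder hardness and electronegativity parameters rather than ACKS2 expectation values.

    ```python
    import numpy as np

    def fluctuating_charges(eta, chi, total_charge=0.0):
        """Minimize 0.5 q^T eta q + chi^T q subject to sum(q) = total_charge."""
        eta, chi = np.asarray(eta, float), np.asarray(chi, float)
        n = len(chi)
        A = np.block([[eta, np.ones((n, 1))],
                      [np.ones((1, n)), np.zeros((1, 1))]])
        b = np.concatenate([-chi, [total_charge]])
        return np.linalg.solve(A, b)[:n]    # last unknown is the Lagrange multiplier

    # Placeholder parameters for a diatomic with unequal electronegativities:
    print(fluctuating_charges(eta=[[1.0, 0.2], [0.2, 1.0]], chi=[0.0, 0.3]))
    ```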

  4. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  5. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  6. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  7. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  8. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
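
    A toy version of such a model, predicting the execution time of a 1D partition as the slowest processor's work plus a communication term, might look as follows; the cost constants are invented for illustration.

    ```python
    def predict_time(partition_sizes, cost_per_cell=1e-6, cost_per_exchange=5e-5):
        compute = max(partition_sizes) * cost_per_cell   # slowest processor dominates
        comm = 2 * cost_per_exchange                     # halo exchange per step
        return compute + comm

    # Compare two partitions of a 10,000-cell grid over 4 processors:
    print(predict_time([2500, 2500, 2500, 2500]))   # balanced
    print(predict_time([4000, 2000, 2000, 2000]))   # imbalanced -> slower
    ```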

  9. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

    Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. For accurate analysis, the composition of the sampled gas must be representative of the whole and related to flow. When it is, relative measurement and sampling techniques are married, gas volumes are accurately accounted for, and adjustments to composition can be made.

  10. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision, on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model, as well as the experiments required to model them.

  11. Numerical Algorithms for Acoustic Integrals - The Devil is in the Details

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1996-01-01

    The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
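
    The retarded-time formulations mentioned above hinge on solving the implicit retarded-time equation for each source point; a minimal fixed-point solve, assuming a known source trajectory and subsonic motion, is sketched below.

    ```python
    import math

    def retarded_time(t, x_obs, source_pos, c=340.0, tol=1e-12):
        """Find tau with  t - tau = |x_obs - source_pos(tau)| / c  by fixed-point iteration."""
        tau = t
        for _ in range(100):
            r = math.dist(x_obs, source_pos(tau))
            tau_new = t - r / c
            if abs(tau_new - tau) < tol:
                break
            tau = tau_new
        return tau

    # Example: point source moving along x at Mach 0.5.
    tau = retarded_time(1.0, (100.0, 0.0, 0.0), lambda s: (170.0 * s, 0.0, 0.0))
    ```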

  12. A numerical study of three-dimensional surface tension driven convection with free surface deformation

    NASA Technical Reports Server (NTRS)

    Hsieh, Kwang-Chung

    1992-01-01

    The steady three-dimensional thermocapillary motion with a deformable free surface is studied numerically in both normal and zero gravity environments. Flow configurations consist of a square cavity heated from the side. In the analysis, the free surface is allowed to deform and the grid distribution is adapted to the surface deformation. The divergence-free condition is satisfied by using a dual time-stepping approach in the numerical scheme. Convective flux derivatives are evaluated using a third-order accurate upwind-biased flux-split differencing technique. The numerical solutions at the midplane of the square cavity are compared with the results from two-dimensional calculations. In addition, numerical results for cases under zero and normal gravity conditions are compared. Significantly different flow structures and surface deformations have been observed. The calculated results will be compared with experimental data in an updated version of this paper.
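
    For reference, a generic third-order upwind-biased difference of the kind cited, shown here in one dimension for positive wave speed (not the paper's exact flux-split form):

    ```python
    import numpy as np

    def ddx_upwind3(u, dx):
        """du/dx via (2u[i+1] + 3u[i] - 6u[i-1] + u[i-2]) / (6 dx), for wave speed > 0."""
        d = np.empty_like(u)
        d[2:-1] = (2*u[3:] + 3*u[2:-1] - 6*u[1:-2] + u[:-3]) / (6*dx)
        d[:2], d[-1] = d[2], d[-2]    # crude boundary fill for illustration
        return d

    x = np.linspace(0.0, 2*np.pi, 200)
    err = np.abs(ddx_upwind3(np.sin(x), x[1] - x[0]) - np.cos(x))[3:-3].max()
    # err shrinks by ~8x when dx is halved, confirming third-order accuracy
    ```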

  13. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  14. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-04-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  15. Accurate Assessment--Compelling Evidence for Practice

    ERIC Educational Resources Information Center

    Flynn, Regina T.; Anderson, Ludmila; Martin, Nancy R.

    2010-01-01

    Childhood overweight and obesity is a public health concern not just because of its growing prevalence but also for its serious and lasting health consequences. Though height and weight measures are easy to obtain and New Hampshire Head Start sites measure height and weight of their enrollees, there are numerous challenges related to accurate…

  16. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy relative to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
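
    The target quantity here, the polarizability tensor alpha_ij = d(mu_i)/d(E_j), can be estimated by finite fields as sketched below; `dipole` is a stand-in for any electronic-structure (or EPIC) dipole evaluation.

    ```python
    import numpy as np

    def polarizability(dipole, dE=1e-3):
        """Central-difference estimate of alpha_ij = d(mu_i)/d(E_j)."""
        alpha = np.zeros((3, 3))
        for j in range(3):
            E = np.zeros(3)
            E[j] = dE
            alpha[:, j] = (dipole(E) - dipole(-E)) / (2 * dE)
        return alpha

    # Check with a fake linear response mu = alpha0 @ E:
    alpha0 = np.diag([10.0, 12.0, 8.0])
    print(polarizability(lambda E: alpha0 @ E))   # recovers alpha0
    ```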

  17. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  18. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  19. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  20. Numerical integration of subtraction terms

    NASA Astrophysics Data System (ADS)

    Seth, Satyajit; Weinzierl, Stefan

    2016-06-01

    Numerical approaches to higher-order calculations often employ subtraction terms, both for the real emission and the virtual corrections. These subtraction terms have to be added back. In this paper we show that at NLO the real subtraction terms, the virtual subtraction terms, the integral representations of the field renormalization constants and—in the case of initial-state partons—the integral representation for the collinear counterterm can be grouped together to give finite integrals, which can be evaluated numerically. This is useful for an extension towards next-to-next-to-leading order.
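
    A toy example of the subtraction idea for an endpoint-singular integral: the subtracted integrand is finite and can be integrated numerically, while the singular piece is handled analytically (at NLO it cancels against the virtual counterpart). Pedagogy only, not the paper's construction.

    ```python
    from scipy.integrate import quad

    f = lambda x: (1.0 + x) ** 2

    # int_0^1 f(x)/x dx is singular; int_0^1 (f(x) - f(0))/x dx is finite.
    subtracted, _ = quad(lambda x: (f(x) - f(0.0)) / x, 0.0, 1.0)
    print(subtracted)   # 2.5 for this f, since (f(x) - f(0))/x = 2 + x
    ```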

  1. Numerical 3D flow simulation of attached cavitation structures at ultrasonic horn tips and statistical evaluation of flow aggressiveness via load collectives

    NASA Astrophysics Data System (ADS)

    Mottyll, S.; Skoda, R.

    2015-12-01

    A compressible inviscid flow solver with a barotropic cavitation model is applied to two different ultrasonic horn set-ups and compared to hydrophone, shadowgraphy, and erosion test data. The statistical analysis of single collapse events in wall-adjacent flow regions allows the determination of the flow aggressiveness via load collectives (cumulative event rate vs. collapse pressure), which show an exponential decrease, in agreement with studies on hydrodynamic cavitation [1]. A post-processing projection of event rate and collapse pressure onto a reference grid reduces the grid dependency significantly. In order to evaluate the erosion-sensitive areas, a statistical analysis of transient wall loads is utilised. Predicted erosion-sensitive areas as well as the temporal pressure and vapour volume evolution are in good agreement with the experimental data.
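
    A load collective of the kind described, cumulative event rate versus collapse pressure, can be assembled as below; the collapse data are synthetic placeholders, and an exponential collective appears as a straight line in semilog coordinates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    collapse_pressure = rng.exponential(scale=50.0, size=5000)   # synthetic peaks
    duration = 10.0                                              # observation time, s

    thresholds = np.linspace(0.0, 300.0, 60)
    rate = np.array([(collapse_pressure >= p).sum() / duration for p in thresholds])

    # Exponential decrease -> straight line of log(rate) against threshold:
    slope, intercept = np.polyfit(thresholds, np.log(np.maximum(rate, 1e-9)), 1)
    ```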

  2. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under the ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously, with cut-offs of ≥140 points and ≥445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing…

  3. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
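
    The two-temperature model can be read as interpolating the calibration slope linearly between the two calibration temperatures, as in the sketch below; the slope values and units are illustrative assumptions, not the paper's calibration data.

    ```python
    def corrected_slope(T, T1=15.0, s1=0.45, T2=38.0, s2=0.62):
        """Calibration slope (uV per MPa, hypothetical) interpolated to temperature T (C)."""
        return s1 + (s2 - s1) * (T - T1) / (T2 - T1)

    def water_potential(voltage_uV, T):
        return voltage_uV / corrected_slope(T)   # MPa
    ```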

  4. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.

  5. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge in collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users tend to be larger than those in the opposite direction, the large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the direction of user similarity and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm to address the challenge of accuracy and diversity in CF algorithms. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the direction of user similarity is an important factor in improving personalized recommendation performance.
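
    A minimal directed similarity in this spirit, with the overlap normalized by the source user's degree so that s(u→v) ≠ s(v→u), is sketched below; it is a generic illustration, not the paper's exact HDCF definition.

    ```python
    def directed_similarity(items_u, items_v):
        """Similarity from u toward v: shared items over u's degree."""
        return len(items_u & items_v) / len(items_u) if items_u else 0.0

    u = {"m1", "m2", "m3"}                        # small-degree user
    v = {"m1", "m2", "m3", "m4", "m5", "m6"}      # large-degree user
    print(directed_similarity(u, v))   # 1.0: small -> large is larger
    print(directed_similarity(v, u))   # 0.5
    ```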

  6. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature. PMID:16662241

  7. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  8. Numerical Capture of Wing-tip Vortex Using Vorticity Confinement

    NASA Astrophysics Data System (ADS)

    Zhang, Baili; Lou, Jing; Kang, Chang Wei; Wilson, Alexander; Lundberg, Johan; Bensow, Rickard

    2012-11-01

    Tracking vortices accurately over large distances is very important in many areas of engineering, for instance flow over rotating helicopter blades, ship propeller blades and aircraft wings. However, due to the inherent numerical dissipation in the advection step of flow simulation, current Euler and RANS field solvers tend to damp these vortices too fast. One possible solution to reduce the unphysical decay of these vortices is the application of vorticity confinement methods. In this study, a vorticity confinement term is added to the momentum conservation equations which is a function of the local element size, the vorticity and the gradient of the absolute value of vorticity. The approach has been evaluated by a systematic numerical study on the tip vortex trailing from a rectangular NACA0012 half-wing. The simulated structure and development of the wing-tip vortex agree well with experiments both qualitatively and quantitatively without any adverse effects on the global flow field. It is shown that vorticity confinement can negate the effect of numerical dissipation, leading to a more or less constant vortex strength. This is an approximate method in that genuine viscous diffusion of the vortex is not modeled, but it can be appropriate for vortex dominant flows over short to medium length scales where viscous diffusion can be neglected.
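
    In two dimensions, a confinement source term of the classic Steinhoff form f = -eps (n x omega), with n = grad|omega| / |grad|omega||, can be sketched as follows; the constant eps stands in for the element-size-dependent coefficient used in the study.

    ```python
    import numpy as np

    def confinement_force(omega, dx, eps=0.1):
        """2D vorticity confinement: omega is scalar vorticity on a square grid."""
        mag = np.abs(omega)
        gy, gx = np.gradient(mag, dx)              # gradient of |omega| (rows = y)
        norm = np.sqrt(gx**2 + gy**2) + 1e-12
        nx, ny = gx / norm, gy / norm              # unit vector toward the vortex core
        fx = -eps * ny * omega                     # components of -eps * (n x omega k)
        fy = eps * nx * omega
        return fx, fy
    ```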

  9. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS (and WFPC2 parallel) observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical (or "Einstein") timescale of each microlensing event, rather than an effective ("FWHM") timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the (unknown) lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo (for the same number of microlensing events) due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database (about 350 nights). For the whole survey (and a delta-function mass distribution) the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity…

  10. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature, and then they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter, equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer obtains the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperatures are possible thanks to the low-inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem.
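
    The first-order inertia correction mentioned above amounts to T_fluid = T_indicated + tau * dT_indicated/dt; a minimal sketch, with a placeholder time constant tau that would be identified experimentally:

    ```python
    import numpy as np

    def correct_first_order(t, T_indicated, tau=3.5):
        """Recover fluid temperature from a first-order thermometer reading."""
        return T_indicated + tau * np.gradient(T_indicated, t)

    t = np.linspace(0.0, 10.0, 201)
    T_ind = 100.0 * (1.0 - np.exp(-t / 3.5))   # synthetic step response
    print(correct_first_order(t, T_ind)[50])   # ~100, the true fluid temperature
    ```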

  11. Accurate and occlusion-robust multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.

  12. Accurate thermochemistry for medium-sized and large molecules

    SciTech Connect

    Raghavachari, K.; Stefanov, B.B.; Curtiss, L.A.

    1997-12-31

    Accurate techniques such as Gaussian-2 (G2) theory have been proposed in recent years to evaluate the thermochemistry of small molecules from first principles. However, as the molecules get larger, the errors in G2 theory and similar approaches tend to accumulate. For example, the computed heats of formation of benzene and naphthalene with G2 and G2(MP2) theories, respectively, have errors of 3.9 and 7.2 kcal/mol. In this work, we explore strategies for computing accurate heats of formation for medium-sized and large molecules. In our first scheme, G2 theory is combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. For a test set of 40 molecules composed of H, C, O, and N, our method yields enthalpies of formation, ΔHf°(298 K), with a mean absolute deviation from experiment of only 0.5 kcal/mol. This is an improvement of a factor of three over the deviation of 1.5 kcal/mol seen in standard G2 theory.
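
    A worked example of the isodesmic scheme for benzene, using the bond separation reaction C6H6 + 6 CH4 -> 3 C2H6 + 3 C2H4. The G2 reaction enthalpy below is a placeholder, and the reference heats of formation are rounded experimental values (kcal/mol, 298 K).

    ```python
    dHf = {"CH4": -17.9, "C2H6": -20.0, "C2H4": 12.5}   # experimental references
    dH_rxn_G2 = 65.1   # placeholder for the G2-computed reaction enthalpy

    # dH_rxn = sum(products) - sum(reactants)  =>  solve for dHf(C6H6):
    dHf_benzene = 3*dHf["C2H6"] + 3*dHf["C2H4"] - 6*dHf["CH4"] - dH_rxn_G2
    print(dHf_benzene)   # 19.8 with these inputs, near the experimental value
    ```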

  13. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  14. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  15. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  16. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
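
    The exponent estimates quoted here presuppose the standard critical scaling form of the susceptibility (generic notation; not necessarily the talk's exact fit function):

    ```latex
    \chi(\beta) \simeq A\,(\beta_c - \beta)^{-\gamma}
        \left[\, 1 + B\,(\beta_c - \beta)^{\Delta} + \cdots \right]
    ```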

  17. Multisensory Information Boosts Numerical Matching Abilities in Young Children

    ERIC Educational Resources Information Center

    Jordan, Kerry E.; Baker, Joseph

    2011-01-01

    This study presents the first evidence that preschool children perform more accurately in a numerical matching task when given multisensory rather than unisensory information about number. Three- to 5-year-old children learned to play a numerical matching game on a touchscreen computer, which asked them to match a sample numerosity with a…

  18. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: 1) representing increasingly precisely the magnitudes of non-symbolic…

  19. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic…

  20. Accurate vessel segmentation with constrained B-snake.

    PubMed

    Yuanzhi Cheng; Xin Hu; Ji Wang; Yadong Wang; Tamura, Shinichi

    2015-08-01

    We describe an active contour framework with accurate shape and size constraints on the vessel cross-sectional planes to produce the vessel segmentation. It starts with a multiscale vessel axis tracing in 3D computed tomography (CT) data, followed by vessel boundary delineation on the cross-sectional planes derived from the extracted axis. The vessel boundary surface is deformed under constrained movements on the cross sections and is voxelized to produce the final vascular segmentation. The novelty of this paper lies in the accurate contour point detection of thin vessels based on the CT scanning model, in the efficient implementation of missing contour points in the problematic regions, and in the active contour model with accurate shape and size constraints. The main advantage of our framework is that it avoids disconnected and incomplete segmentation of the vessels in the problematic regions that contain touching vessels (vessels in close proximity to each other), diseased portions (pathologic structure attached to a vessel), and thin vessels. It is particularly suitable for accurate segmentation of thin and low-contrast vessels. Our method is evaluated and demonstrated on CT data sets from our partner site, and its results are compared with those of three related methods. Our method is also tested on two publicly available databases and its results are compared with those of a recently published method. The applicability of the proposed method to some challenging clinical problems, such as the segmentation of vessels in the problematic regions, is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate vessel boundaries with a level of variability similar to that obtained manually.