Meek, Garrett A; Levine, Benjamin G
2014-07-01
Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558
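As a point of reference for simulations like these, the exact asymptotic transfer probability of the Landau-Zener model has a closed form. A minimal sketch (in Python; the parameter names and unit conventions are ours, not the paper's notation):

```python
import math

def landau_zener_probability(coupling, sweep_rate, hbar=1.0):
    """Exact asymptotic Landau-Zener probability of a nonadiabatic (diabatic)
    transition: P = exp(-2*pi*Gamma), with Gamma = V12^2 / (hbar * |d(E1-E2)/dt|).

    coupling   : off-diagonal diabatic coupling V12
    sweep_rate : rate |d(E1 - E2)/dt| at which the diabatic energies cross
    """
    gamma = coupling**2 / (hbar * sweep_rate)
    return math.exp(-2.0 * math.pi * gamma)
```

Weak coupling gives a sharply peaked time-derivative coupling near the crossing, which is the regime where direct integration is hardest.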
Bangalore, Sai Santosh; Wang, Jelai; Allison, David B.
2009-01-01
In the fields of genomics and high dimensional biology (HDB), massive multiple testing prompts the use of extremely small significance levels. Because tail areas of statistical distributions are needed for hypothesis testing, the accuracy of these areas is important to confidently make scientific judgments. Previous work on accuracy was primarily focused on evaluating professionally written statistical software, like SAS, on the Statistical Reference Datasets (StRD) provided by National Institute of Standards and Technology (NIST) and on the accuracy of tail areas in statistical distributions. The goal of this paper is to provide guidance to investigators, who are developing their own custom scientific software built upon numerical libraries written by others. Specifically, we evaluate the accuracy of small tail areas from cumulative distribution functions (CDF) of the Chi-square and t-distribution by comparing several open-source, free, or commercially licensed numerical libraries in Java, C, and R to widely accepted standards of comparison like ELV and DCDFLIB. In our evaluation, the C libraries and R functions are consistently accurate up to six significant digits. Amongst the evaluated Java libraries, Colt is most accurate. These languages and libraries are popular choices among programmers developing scientific software, so the results herein can be useful to programmers in choosing libraries for CDF accuracy. PMID:20161126
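The numerical hazard the authors evaluate is easy to reproduce: computing a small upper tail area as 1 − CDF loses all significant digits to cancellation, while a dedicated tail (survival) function does not. A sketch using the standard normal distribution purely for illustration (the paper itself evaluates Chi-square and t libraries):

```python
import math

def upper_tail_naive(x):
    """P(Z > x) for a standard normal computed as 1 - CDF:
    all significant digits are lost to cancellation once x is large."""
    return 1.0 - 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def upper_tail_stable(x):
    """The same tail area via the complementary error function,
    which remains accurate far into the tail."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

At x = 10 the naive form returns essentially zero, while the stable form still carries the full ~7.6e-24 tail area; this is exactly the kind of difference the comparisons against ELV and DCDFLIB probe.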
NASA Astrophysics Data System (ADS)
Hedrick, A. R.; Marks, D. G.; Winstral, A. H.; Marshall, H. P.
2014-12-01
The ability to forecast snow water equivalent, or SWE, in mountain catchments would benefit many different communities ranging from avalanche hazard mitigation to water resource management. Historical model runs of Isnobal, the physically based energy balance snow model, have been produced over the 2150 km² Boise River Basin for water years 2012-2014 at 100-meter resolution. Spatially distributed forcing parameters such as precipitation, wind, and relative humidity are generated from automated weather stations located throughout the watershed, and are supplied to Isnobal at hourly timesteps. Similarly, the Weather Research & Forecasting (WRF) Model provides hourly predictions of the same forcing parameters from an atmospheric physics perspective. This work aims to quantitatively compare WRF model output to the spatial meteorologic fields developed to force Isnobal, with the hopes of eventually using WRF predictions to create accurate hourly forecasts of SWE over a large mountainous basin.
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
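The mechanism the abstract relies on, Richardson extrapolation, combines two approximations of known order so the leading error term cancels. A toy sketch applied to a central-difference derivative (our own illustration, not the authors' finite-difference Schrödinger solver):

```python
def central_diff(f, x, h):
    """Second-order central-difference approximation to f'(x); error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_step(a_h, a_h2, p=2, k=2.0):
    """One Richardson extrapolation step: combine an order-p approximation A(h)
    with A(h/k) so that the leading O(h^p) error term cancels exactly."""
    return (k**p * a_h2 - a_h) / (k**p - 1.0)
```

For f(x) = x³ at x = 2, A(0.1) = 12.01 and A(0.05) = 12.0025; one extrapolation step recovers the exact derivative 12, mirroring how the paper extrapolates expectation values computed on a crude mesh.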
Accurate numerical solutions of conservative nonlinear oscillators
NASA Astrophysics Data System (ADS)
Khan, Najeeb Alam; Khan, Nasir Uddin; Khan, Nadeem Alam
2014-12-01
The objective of this paper is to present an investigation of the vibration of a conservative nonlinear oscillator of the form u'' + lambda u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation into sets of algebraic equations, which are then solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
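The abstract does not give enough detail to reproduce the algebraic-collocation method itself, but its benchmark problems are easy to integrate directly. A hedged sketch for the simplest related case, an undamped Duffing-type oscillator u'' + u + u³ = 0, checked via its conserved energy (classic RK4; all names are ours):

```python
def duffing_rk4(u0, v0, h, steps):
    """Integrate the undamped Duffing oscillator u'' + u + u^3 = 0 with classic
    RK4, starting from (u0, v0) = (u(0), u'(0)). Returns the final (u, u')."""
    def acc(u):
        return -u - u**3  # restoring force per unit mass

    u, v = u0, v0
    for _ in range(steps):
        k1u, k1v = v, acc(u)
        k2u, k2v = v + 0.5 * h * k1v, acc(u + 0.5 * h * k1u)
        k3u, k3v = v + 0.5 * h * k2v, acc(u + 0.5 * h * k2u)
        k4u, k4v = v + h * k3v, acc(u + h * k3u)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return u, v

def duffing_energy(u, v):
    """Conserved energy E = v^2/2 + u^2/2 + u^4/4 of the oscillator."""
    return 0.5 * v * v + 0.5 * u * u + 0.25 * u**4
```

Energy drift over the integration gives a direct, solution-independent accuracy check of the kind used when validating such schemes against literature results.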
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. A von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
Accurate complex scaling of three dimensional numerical potentials
Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry
2013-05-28
The complex scaling method, which consists of continuing the spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schroedinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
Accurate derivative evaluation for any Grad-Shafranov solver
NASA Astrophysics Data System (ADS)
Ricketson, L. F.; Cerfon, A. J.; Rachh, M.; Freidberg, J. P.
2016-01-01
We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad-Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
The development of accurate and efficient methods of numerical quadrature
NASA Technical Reports Server (NTRS)
Feagin, T.
1973-01-01
Some new methods for performing numerical quadrature of an integrable function over a finite interval are described. Each method provides a sequence of approximations of increasing order to the value of the integral. Each approximation makes use of all previously computed values of the integrand. The points at which new values of the integrand are computed are selected in such a way that the order of the approximation is maximized. The methods are compared with the quadrature methods of Clenshaw and Curtis, Gauss, Patterson, and Romberg using several examples.
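Of the comparison methods named, Romberg's is the closest in spirit to the property highlighted here: every previously computed integrand value is reused as the order of the approximation is raised. A compact sketch (our own implementation, not the paper's methods):

```python
def romberg(f, a, b, levels=6):
    """Romberg integration: successive trapezoid refinements combined with
    repeated Richardson extrapolation. Each refinement evaluates f only at the
    new midpoints, so all previously computed integrand values are reused."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]  # 1-panel trapezoid rule
    h = b - a
    for i in range(1, levels):
        h *= 0.5
        # new points only: the midpoints of the previous level's subintervals
        total = sum(f(a + (2 * j - 1) * h) for j in range(1, 2 ** (i - 1) + 1))
        R.append([0.5 * R[i - 1][0] + h * total])
        for m in range(1, i + 1):
            R[i].append(R[i][m - 1] + (R[i][m - 1] - R[i - 1][m - 1]) / (4 ** m - 1))
    return R[-1][-1]
```

Each column of the Romberg tableau is a higher-order approximation built entirely from earlier function evaluations, which is the design goal the abstract describes.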
Accurate numerical solution of compressible, linear stability equations
NASA Technical Reports Server (NTRS)
Malik, M. R.; Chuang, S.; Hussaini, M. Y.
1982-01-01
The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
Numerical evaluation of uniform beam modes.
Tang, Y.; Reactor Analysis and Engineering
2003-12-01
The equation for calculating the normal modes of a uniform beam under transverse free vibration involves the hyperbolic sine and cosine functions, which grow exponentially without bound. Tables of the natural frequencies and the corresponding normal modes are available for numerical evaluation up to the 16th mode. For modes higher than the 16th, the accuracy of the numerical evaluation is lost to round-off errors in the floating-point arithmetic of digital computers. It is also found that the beam-mode functions commonly presented in structural dynamics books are not suitable for numerical evaluation. In this paper, these functions are rearranged and expressed in a different form. With these new equations, one can calculate the normal modes accurately up to at least the 100th mode. Mike's Arbitrary Precision Math, an arbitrary precision math library, is used in the paper to verify the accuracy.
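The instability described here is easy to demonstrate: beam-mode expressions pair hyperbolic and trigonometric terms, and for high modes the hyperbolic parts are huge numbers whose small difference carries the answer. A minimal illustration of the rearrangement idea, using the identity cosh t − sinh t = e^(−t) as a stand-in for the paper's rearranged mode functions:

```python
import math

def tail_naive(t):
    """cosh(t) - sinh(t) evaluated directly: for large t this subtracts two huge,
    nearly equal numbers and suffers catastrophic cancellation."""
    return math.cosh(t) - math.sinh(t)

def tail_stable(t):
    """The same quantity rearranged analytically before evaluation:
    cosh(t) - sinh(t) = exp(-t), which stays accurate for any t."""
    return math.exp(-t)
```

At t = 40 the direct form has lost every significant digit (the true value is ~4e-18, below the double-precision spacing between adjacent values of cosh(40)), while the rearranged form is exact to rounding; the paper's reformulated mode functions exploit the same kind of analytic cancellation.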
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.
Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique
2013-06-01
The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. In consequence, this leads to concrete recommendations, whereby the obtained results are not discussed for their medical relevance but for the evaluation of their quality. This investigation might hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530
Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques
Petersen, Richard C.
2014-01-01
Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for the strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. Polymer-matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing to accentuate toughness differences, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms
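For orientation, the classical relations behind the quantities discussed are the mode-I stress intensity factor K_I = Yσ√(πa) and, in plane stress, the Griffith energy release rate 𝒢 = K²/E. A sketch in SI units with illustrative values of our choosing, not the paper's composite data:

```python
import math

def stress_intensity(stress, crack_length, Y=1.0):
    """Mode-I stress intensity factor K_I = Y * sigma * sqrt(pi * a)
    (stress sigma in Pa, crack length a in m, geometry factor Y dimensionless)."""
    return Y * stress * math.sqrt(math.pi * crack_length)

def energy_release_rate(K, E):
    """Griffith relation in plane stress: energy release rate G = K^2 / E,
    in J/m^2 for K in Pa*sqrt(m) and modulus E in Pa."""
    return K * K / E

# Illustrative values: 100 MPa applied stress, 1 mm crack, E = 70 GPa
K = stress_intensity(100e6, 0.001)   # ~5.6 MPa*sqrt(m)
G = energy_release_rate(K, 70e9)     # ~449 J/m^2
```

The paper's point is that G (its 𝒢Ic/SIc) can be obtained by numerical integration of load/deflection data, making the K-based route with its large Y corrections unnecessary.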
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because such measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.
Efficient numerical evaluation of Feynman integrals
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran
2016-03-01
Feynman loop integrals are a key ingredient in the calculation of higher order radiative effects, and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations
Bao, Weizhu (bao@math.nus.edu.sg); Yang, Li (yangli@nus.edu.sg)
2007-08-10
In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are: (i) the application of a time-splitting spectral discretization for the Schroedinger-type equation in KGS; (ii) the utilization of a Fourier pseudospectral discretization for the spatial derivatives in the Klein-Gordon equation in KGS; and (iii) the adoption of solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for the linear/nonlinear terms in the time derivatives. The numerical methods are either explicit or implicit but solvable explicitly, unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. Necessary resources are defined by a dialogue method with a generally used personal computer for both tools. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment in comparison with a past criticality accident and a hypothesized exposure. PMID:17510203
A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation
NASA Astrophysics Data System (ADS)
Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin
2016-07-01
In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), the problem is converted into a sequence of linear ordinary differential equations whose solutions are obtained. For the first time, the rational Euler (RE) and FRE functions have been constructed from Euler polynomials. In addition, the equation is solved on a semi-infinite domain without truncating it to a finite domain, by taking the FRE as basis functions for the collocation method. This reduces the solution of the problem to the solution of a system of algebraic equations. We demonstrate that the newly proposed algorithm is efficient for obtaining the values of y'(0), y(x), and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.
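The paper's QLM/FRE collocation scheme is beyond a short sketch, but the benchmark quantity y'(0) ≈ −1.588071 can be approached with naive shooting: integrate y'' = y^(3/2)/√x from near the origin, stepping off the singularity with the known series y ≈ 1 + Bx + (4/3)x^(3/2) + (2B/5)x^(5/2), and bisect on the trial slope B. All numerical parameters below are our assumptions:

```python
import math

def thomas_fermi_shoot(slope, x0=0.01, x_max=8.0, h=2e-3):
    """Integrate y'' = y**1.5 / sqrt(x), y(0) = 1, trial slope B = y'(0), with
    classic RK4. The start uses the series expansion about the origin to avoid
    the singular point. Returns -1 if y crosses zero (slope too negative),
    +1 if y grows past a bound (slope not negative enough), 0 otherwise."""
    y = 1.0 + slope * x0 + (4.0 / 3.0) * x0**1.5 + (2.0 * slope / 5.0) * x0**2.5
    yp = slope + 2.0 * math.sqrt(x0) + slope * x0**1.5
    x = x0

    def f(x, y):
        return max(y, 0.0) ** 1.5 / math.sqrt(x)

    while x < x_max:
        k1y, k1p = yp, f(x, y)
        k2y, k2p = yp + 0.5 * h * k1p, f(x + 0.5 * h, y + 0.5 * h * k1y)
        k3y, k3p = yp + 0.5 * h * k2p, f(x + 0.5 * h, y + 0.5 * h * k2y)
        k4y, k4p = yp + h * k3p, f(x + h, y + h * k3y)
        y += h * (k1y + 2.0 * k2y + 2.0 * k3y + k4y) / 6.0
        yp += h * (k1p + 2.0 * k2p + 2.0 * k3p + k4p) / 6.0
        x += h
        if y < 0.0:
            return -1
        if y > 2.0:
            return +1
    return 0

def thomas_fermi_slope(lo=-2.0, hi=-1.0, iters=40):
    """Bisect on the initial slope (the bracket is an assumption);
    the literature value is y'(0) ~ -1.588071."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if thomas_fermi_shoot(mid) < 0:
            lo = mid  # y crossed zero: slope too negative
        else:
            hi = mid  # y grew or survived: slope not negative enough
    return 0.5 * (lo + hi)
```

This crude shooting recovers y'(0) to a few decimal places; the point of the paper's spectral collocation is that it reaches far higher accuracy on the true semi-infinite domain.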
Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency
NASA Astrophysics Data System (ADS)
Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao
2008-05-01
Exact estimation of the molecular binding affinity is significantly important for drug discovery. The energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate a slight difference in binding affinity when distinguishing a prospective substance from dozens of candidates for medicine. Hence more accurate estimation of drug efficacy in a computer is currently demanded. Previously we proposed a concept of estimating molecular binding affinity, focusing on the fluctuation at an interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples for computer simulations of an association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example for a drug-enzyme binding), a complexation of an antigen and its antibody (an example for a protein-protein binding), and a combination of estrogen receptor and its ligand chemicals (an example for a ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique in the advanced stage of the discovery and the design of drugs.
PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release
NASA Astrophysics Data System (ADS)
Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.
2016-09-01
The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
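For context, the constant-conditions building block that PolyPole-1's modal solution generalizes is the textbook diffusion-in-a-sphere eigenfunction series. A sketch of the retained gas fraction for a uniform initial concentration and a perfectly absorbing grain boundary (an illustration of the modal form, not the PolyPole-1 algorithm itself):

```python
import math

def fraction_retained(dt_over_a2, n_terms=200):
    """Fraction of gas still inside a spherical grain of radius a at time t, for
    a uniform initial concentration and an absorbing grain boundary:

        mu(t) = (6 / pi^2) * sum_{n>=1} exp(-n^2 * pi^2 * D*t/a^2) / n^2

    dt_over_a2 is the dimensionless group D*t/a^2."""
    tau = math.pi**2 * dt_over_a2
    return (6.0 / math.pi**2) * sum(
        math.exp(-(n * n) * tau) / (n * n) for n in range(1, n_terms + 1)
    )
```

Under time-varying conditions the mode amplitudes no longer decay as simple exponentials, which is the deviation PolyPole-1's polynomial corrective terms are designed to capture.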
Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Wilcox, L.
2013-12-01
Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite volume based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library, and temporal adaptivity will be accomplished through local time stepping. In this presentation we present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.
Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and Drying in Banda Aceh
NASA Astrophysics Data System (ADS)
Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven
2010-05-01
The Indian Ocean earthquake of December 26, 2004 caused one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst hit regions was Banda Aceh, which is the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development. The work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI has been developed by the Alfred Wegener Institute; it is an explicit finite element model. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum. This is not possible in the current version of TsunAWI. The P1NC - P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC - P1 element, and is shown to be mass and momentum conserving. Then a number of simulations are presented, including dam break problems with flooding over both wet and dry beds. Excellent agreement is found. Then we present simulations for Banda Aceh, and compare the results to on-site survey data, as well as to results from the original TsunAWI code.
Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows
Johnson, B M; Guan, X; Gammie, F
2008-04-11
In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
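The core idea, splitting the orbital advection into an exact integer-cell shift plus an interpolation for the fractional remainder, can be sketched in a few lines of Python. This is our illustration with linear interpolation; the paper's scheme uses higher-order interpolation and also handles the staggered magnetic field.

```python
import numpy as np

def orbital_advect(q, v_mean, dt, dy):
    """Shift each radial row of q azimuthally by its mean orbital motion:
    an integer-cell roll (exact, no diffusion) plus linear interpolation
    for the fractional-cell remainder. Azimuth is axis 1, periodic."""
    out = np.empty_like(q)
    for i, row in enumerate(q):
        s = v_mean[i] * dt / dy               # shift measured in cells
        k = int(np.floor(s))                  # integer part
        f = s - k                             # fractional part in [0, 1)
        rolled = np.roll(row, k)              # exact shift by k cells
        out[i] = (1.0 - f) * rolled + f * np.roll(rolled, 1)
    return out
```

Because only the fractional part is interpolated, the time step is limited by the peculiar velocity rather than the full orbital speed, which is the source of the speed-up described above.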
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
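The parameterized model referred to above has the general form of the Stockdon et al. (2006) runup formula; the coefficients below are quoted from that published parameterization, and the sketch is illustrative rather than the operational code used in the study.

```python
import math

def runup_2pct(H0, T, beta_f):
    """2%-exceedance wave runup from deep-water wave height H0 [m], peak
    period T [s], and foreshore beach slope beta_f [-], following the
    Stockdon et al. (2006) parameterization: setup plus half the swash,
    scaled by 1.1."""
    L0 = 9.81 * T ** 2 / (2.0 * math.pi)          # deep-water wavelength
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)
    # swash combines incident-band and infragravity-band contributions
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f ** 2 + 0.004))
    return 1.1 * (setup + swash / 2.0)
```

Runup grows with offshore wave height, period, and beach slope, which is why storm conditions outside the observational range (the focus of the numerical extension above) stress the parameterization.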
Numerical Evaluation of 2D Ground States
NASA Astrophysics Data System (ADS)
Kolkovska, Natalia
2016-02-01
A ground state is defined as the positive radial solution of the multidimensional nonlinear problem
A Novel Method for the Accurate Evaluation of Poisson's Ratio of Soft Polymer Materials
Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S.; Kang, Dong-Joong; Park, Sungchan
2013-01-01
A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has an advantage in the accurate evaluation of Poisson's ratio despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6–47.0% elongations, with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio with a simple measurement system can be incorporated into a real-time automated vision-tracking system to accurately evaluate the material properties of various soft materials. PMID:23737733
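The three steps of the method (fit the quadrilateral deformation, extract principal strains, take their ratio) can be sketched as follows. This is our simplified small-strain illustration using a least-squares affine fit, not the authors' finite-element implementation; all names are ours.

```python
import numpy as np

def poissons_ratio(X, x):
    """Poisson's ratio from a deforming quadrilateral.
    X : (4, 2) initial vertex positions, x : (4, 2) deformed positions.
    A least-squares affine fit x ~ F @ X + t gives the deformation
    gradient F; the small-strain tensor yields principal strains, and
    nu = -eps_min / eps_max, insensitive to specimen alignment."""
    G = np.hstack([X, np.ones((4, 1))])
    A, *_ = np.linalg.lstsq(G, x, rcond=None)    # affine fit coefficients
    F = A[:2].T                                  # deformation gradient
    eps = 0.5 * (F + F.T) - np.eye(2)            # small-strain tensor
    e_min, e_max = np.sort(np.linalg.eigvalsh(eps))
    return -e_min / e_max
```

Because principal strains are eigenvalues of the strain tensor, the result does not depend on how the quadrilateral is oriented relative to the loading axis, which mirrors the misalignment robustness claimed above.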
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today (a common model is the actuator disk concept) are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
Accurate evaluation of homogenous and nonhomogeneous gas emissivities
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Lee, K. P.
1984-01-01
Spectral transmittance and total band absorptance of selected infrared bands of carbon dioxide and water vapor are calculated by using the line-by-line and quasi-random band models, and these are compared with available experimental results to establish the validity of the quasi-random band model. Various wide-band model correlations are employed to calculate the total band absorptance and total emissivity of these two gases under homogeneous and nonhomogeneous conditions. These results are compared with available experimental results under identical conditions. From these comparisons, it is found that the quasi-random band model can provide quite accurate results and is quite suitable for most atmospheric applications.
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
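As a concrete instance of the third-order Runge-Kutta family discussed here, the following is a minimal Python sketch of one well-known member, the Shu-Osher strong-stability-preserving scheme (chosen for illustration; the report's own five examples may differ):

```python
def rk3_step(f, t, y, h):
    """One step of the Shu-Osher third-order SSP Runge-Kutta method for
    dy/dt = f(t, y): forward-Euler substeps combined convexly, which
    preserves the stability properties of the Euler step."""
    k1 = y + h * f(t, y)
    k2 = 0.75 * y + 0.25 * (k1 + h * f(t + h, k1))
    return y / 3.0 + (2.0 / 3.0) * (k2 + h * f(t + 0.5 * h, k2))
```

The convex-combination structure is one example of how a choice within the RK3 family trades generality for a desired property (here, strong stability) at no extra cost in function evaluations.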
AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)
A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...
Danshita, Ippei; Polkovnikov, Anatoli
2010-09-01
We study the quantum dynamics of supercurrents of one-dimensional Bose gases in a ring optical lattice to verify instanton methods applied to coherent macroscopic quantum tunneling (MQT). We directly simulate the real-time quantum dynamics of supercurrents, where a coherent oscillation between two macroscopically distinct current states occurs due to MQT. The tunneling rate extracted from the coherent oscillation is compared with that given by the instanton method. We find that the instanton method is quantitatively accurate when the effective Planck's constant is sufficiently small. We also find phase slips associated with the oscillations.
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model that might include perturbing forces, such as the gravitational effect of multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
Accurate polarimeter with multicapture fitting for plastic lens evaluation
NASA Astrophysics Data System (ADS)
Domínguez, Noemí; Mayershofer, Daniel; Garcia, Cristina; Arasa, Josep
2016-02-01
Due to their manufacturing process, plastic injection molded lenses do not achieve a constant density throughout their volume. This change of density introduces tensions in the material, inducing local birefringence, which in turn is translated into a variation of the ordinary and extraordinary refractive indices that can be expressed as a retardation phase plane using the Jones matrix notation. The detection and measurement of the value of the retardation of the phase plane are therefore very useful ways to evaluate the quality of plastic lenses. We introduce a polariscopic device to obtain two-dimensional maps of the tension distribution in the bulk of a lens, based on detection of the local birefringence. In addition to a description of the device and the mathematical approach used, a set of initial measurements is presented that confirms the validity of the developed system for the testing of the uniformity of plastic lenses.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
2006-01-01
Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.
Differential-equation-based representation of truncation errors for accurate numerical simulation
NASA Astrophysics Data System (ADS)
MacKinnon, Robert J.; Johnson, Richard W.
1991-09-01
High-order compact finite difference schemes for 2D convection-diffusion-type differential equations with constant and variable convection coefficients are derived. The governing equations are employed to represent leading truncation terms, including cross-derivatives, making the overall O(h^4) schemes conform to a 3 x 3 stencil. It is shown that the two-dimensional constant coefficient scheme collapses to the optimal scheme for the one-dimensional case wherein the finite difference equation yields nodally exact results. The two-dimensional schemes are tested against standard model problems, including a Navier-Stokes application. Results show that the two schemes are generally more accurate, on comparable grids, than O(h^2) centered differencing and commonly used O(h) and O(h^3) upwinding schemes.
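The nodally exact one-dimensional limit mentioned above is, for constant coefficients, the exponentially fitted (Il'in/Allen-Southwell) scheme: central differencing with an effective diffusivity chosen so the discrete solution matches the exact exponential at the nodes. A brief Python sketch under our own discretization choices (dense tridiagonal assembly for clarity):

```python
import numpy as np

def exact_1d_scheme(a, nu, n):
    """Solve steady convection-diffusion a*u' = nu*u'' on [0, 1] with
    u(0) = 0, u(1) = 1 on n cells, using the exponentially fitted
    ('nodally exact') scheme: central differences with effective
    diffusivity nu_eff = (a*h/2) * coth(a*h/(2*nu))."""
    h = 1.0 / n
    nu_eff = 0.5 * a * h / np.tanh(0.5 * a * h / nu)
    lo = -nu_eff / h**2 - a / (2 * h)      # coefficient of u_{i-1}
    di = 2 * nu_eff / h**2                 # coefficient of u_i
    up = -nu_eff / h**2 + a / (2 * h)      # coefficient of u_{i+1}
    A = (np.diag(np.full(n - 1, di)) + np.diag(np.full(n - 2, lo), -1)
         + np.diag(np.full(n - 2, up), 1))
    rhs = np.zeros(n - 1)
    rhs[-1] = -up * 1.0                    # u(1) = 1 boundary contribution
    return np.linalg.solve(A, rhs)         # interior nodal values
```

With the fitted diffusivity, the discrete solution reproduces the exact boundary-layer profile at the nodes even on coarse grids, which plain central or upwind differencing cannot do.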
Towards more accurate numerical modeling of impedance based high frequency harmonic vibration
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2014-03-01
The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.
TOPICA: an accurate and efficient numerical tool for analysis and design of ICRF antennas
NASA Astrophysics Data System (ADS)
Lancellotti, V.; Milanesio, D.; Maggiora, R.; Vecchi, G.; Kyrytsya, V.
2006-07-01
The demand for a predictive tool to help in designing ion-cyclotron radio frequency (ICRF) antenna systems for today's fusion experiments has driven the development of codes such as ICANT, RANT3D, and the early development of TOPICA (TOrino Polytechnic Ion Cyclotron Antenna) code. This paper describes the substantive evolution of TOPICA formulation and implementation that presently allow it to handle the actual geometry of ICRF antennas (with curved, solid straps, a general-shape housing, Faraday screen, etc) as well as an accurate plasma description, accounting for density and temperature profiles and finite Larmor radius effects. The antenna is assumed to be housed in a recess-like enclosure. Both goals have been attained by formally separating the problem into two parts: the vacuum region around the antenna and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow the formulation of a set of two coupled integral equations for the unknown equivalent (current) sources; then the equations are reduced to a linear system by a method of moments solution scheme employing 2D finite elements defined over a 3D non-planar surface triangular-cell mesh. In the vacuum region calculations are done in the spatial (configuration) domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus permitting a description of the plasma by a surface impedance matrix. Owing to this approach, any plasma model can be used in principle, and at present the FELICE code has been employed. The natural outcomes of TOPICA are the induced currents on the conductors (antenna, housing, etc) and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. The theoretical model and its TOPICA
TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas
NASA Astrophysics Data System (ADS)
Milanesio, D.; Lancellotti, V.; Meneghini, O.; Maggiora, R.; Vecchi, G.; Bilato, R.
2007-09-01
Auxiliary ICRF heating systems in tokamaks often involve large complex antennas, made up of several conducting straps hosted in distinct cavities that open towards the plasma. The same holds especially true in the LH regime, wherein the antennas are comprised of arrays of many phased waveguides. Upon observing that the various cavities or waveguides couple to each other only through the EM fields existing over the plasma-facing apertures, we self-consistently formulated the EM problem by a convenient set of multiple coupled integral equations. Subsequent application of the Method of Moments yields a highly sparse algebraic system; therefore formal inversion of the system matrix is not very memory demanding, even though the number of unknowns may be quite large (typically 10^5 or so). The overall strategy has been implemented in an enhanced version of TOPICA (Torino Polytechnic Ion Cyclotron Antenna) and in a newly developed code named TOPLHA (Torino Polytechnic Lower Hybrid Antenna). Both are simulation and prediction tools for plasma facing antennas that incorporate commercial-grade 3D graphic interfaces along with an accurate description of the plasma. In this work we present the new proposed formulation along with examples of application to real life large LH antenna systems.
Kottmann, Jakob S; Höfener, Sebastian; Bischoff, Florian A
2015-12-21
In the present work, we report an efficient implementation of configuration interaction singles (CIS) excitation energies and oscillator strengths using the multi-resolution analysis (MRA) framework to address the basis-set convergence of excited state computations. In MRA (ground-state) orbitals, excited states are constructed adaptively guaranteeing an overall precision. Thus not only valence but also, in particular, low-lying Rydberg states can be computed with consistent quality at the basis set limit a priori, or without special treatments, which is demonstrated using a small test set of organic molecules, basis sets, and states. We find that the new implementation of MRA-CIS excitation energy calculations is competitive with conventional LCAO calculations when the basis-set limit of medium-sized molecules is sought, which requires large, diffuse basis sets. This becomes particularly important if accurate calculations of molecular electronic absorption spectra with respect to basis-set incompleteness are required, in which both valence as well as Rydberg excitations can contribute to the molecule's UV/VIS fingerprint. PMID:25913482
The use of experimental bending tests to more accurate numerical description of TBC damage process
NASA Astrophysics Data System (ADS)
Sadowski, T.; Golewski, P.
2016-04-01
Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads. These loads are created by the high rotational speed of the rotor (30 000 rot/min), causing tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of 150 μm thick bond coat (NiCoCrAlY) and 300 μm thick top coat (YSZ) made by APS (air plasma spray) process. Samples were tested by three-point bending test with various loads. After bending tests, the samples were subjected to microscopic observation to determine the quantity of cracks and their depth. The above-mentioned results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied to the TBC layer, which allows elements to be removed once the failure criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between bond coat and top coat.
NASA Astrophysics Data System (ADS)
Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid
2016-07-01
We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
NASA Astrophysics Data System (ADS)
Jiang, Shidong; Luo, Li-Shi
2016-07-01
The integral equation for the flow velocity u(x; k) in the steady Couette flow derived from the linearized Bhatnagar-Gross-Krook-Welander kinetic equation is studied in detail both theoretically and numerically in a wide range of the Knudsen number k between 0.003 and 100.0. First, it is shown that the integral equation is a Fredholm equation of the second kind in which the norm of the compact integral operator is less than 1 on L^p for any 1 ≤ p ≤ ∞, and thus there exists a unique solution to the integral equation via the Neumann series. Second, it is shown that the solution is logarithmically singular at the endpoints. More precisely, if x = 0 is an endpoint, then the solution can be expanded as a double power series of the form ∑_{n=0}^∞ ∑_{m=0}^∞ c_{n,m} x^n (x ln x)^m about x = 0 on a small interval x ∈ (0, a) for some a > 0. And third, a high-order adaptive numerical algorithm is designed to compute the solution numerically to high precision. The solutions for the flow velocity u(x; k), the stress P_{xy}(k), and the half-channel mass flow rate Q(k) are obtained in a wide range of the Knudsen number 0.003 ≤ k ≤ 100.0; these solutions are accurate to at least twelve significant digits, and can thus be used as benchmark solutions.
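The Neumann-series argument translates directly into an iterative solver once the operator is discretized. Below is a toy Python sketch using a dense matrix as a stand-in for the discretized integral operator; it illustrates the convergence mechanism, not the paper's high-order adaptive algorithm.

```python
import numpy as np

def neumann_solve(K, g, tol=1e-12, max_iter=1000):
    """Solve the discretized Fredholm equation of the second kind
    u = g + K @ u by iterating the Neumann series u = sum_n K^n g,
    which converges whenever ||K|| < 1 (the operator-norm property
    the paper establishes for its integral operator)."""
    u = g.copy()
    for _ in range(max_iter):
        u_new = g + K @ u          # one more term of the series
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```

The iteration error shrinks geometrically with ratio ||K||, so a norm close to 1 (as happens at small Knudsen number) makes plain iteration slow; this is one reason a purpose-built high-order solver is needed for benchmark-quality accuracy.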
Shen, Yan; Lou, Shuqin; Wang, Xin
2014-03-20
The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters. PMID:24663461
NASA Technical Reports Server (NTRS)
Levi, Keith
1989-01-01
Two procedures for the evaluation of the performance of expert systems are illustrated: one procedure evaluates predictive accuracy; the other procedure is complementary in that it uncovers the factors that contribute to predictive accuracy. Using these procedures, it is argued that expert systems should be more accurate than human experts in two senses. One sense is that expert systems must be more accurate to be cost-effective. Previous research is reviewed and original results are presented which show that simple statistical models typically perform better than human experts for the task of combining evidence from a given set of information sources. The results also suggest the second sense in which expert systems should be more accurate than human experts. They reveal that expert systems should share factors that contribute to human accuracy, but not factors that detract from human accuracy. Thus the thesis is that one should both require and expect systems to be more accurate than humans.
Smalarz, Laura; Wells, Gary L
2014-04-01
Giving confirming feedback to mistaken eyewitnesses has robust distorting effects on their retrospective judgments (e.g., how certain they were, their view, etc.). Does feedback harm evaluators' abilities to discriminate between accurate and mistaken identification testimony? Participant-witnesses to a simulated crime made accurate or mistaken identifications from a lineup and then received confirming feedback or no feedback. Each then gave videotaped testimony about their identification, and a new sample of participant-evaluators judged the accuracy and credibility of the testimonies. Among witnesses who were not given feedback, evaluators were significantly more likely to believe the testimony of accurate eyewitnesses than they were to believe the testimony of mistaken eyewitnesses, indicating significant discrimination. Among witnesses who were given confirming feedback, however, evaluators believed accurate and mistaken witnesses at nearly identical rates, indicating no ability to discriminate. Moreover, there was no evidence of overbelief in the absence of feedback whereas there was significant overbelief in the confirming feedback conditions. Results demonstrate that a simple comment following a witness' identification decision ("Good job, you got the suspect") can undermine fact-finders' abilities to discern whether the witness made an accurate or a mistaken identification. PMID:24341835
Accurate Histological Techniques to Evaluate Critical Temperature Thresholds for Prostate In Vivo
NASA Astrophysics Data System (ADS)
Bronskill, Michael; Chopra, Rajiv; Boyes, Aaron; Tang, Kee; Sugar, Linda
2007-05-01
Various histological techniques have been compared to evaluate the boundaries of thermal damage produced by ultrasound in vivo in a canine model. When all images are accurately co-registered, H&E stained micrographs provide the best assessment of acute cellular damage. Estimates of the boundaries of 100% and 0% cell killing correspond to maximum temperature thresholds of 54.6 ± 1.7°C and 51.5 ± 1.9°C, respectively.
New On-Chip De-Embedding for Accurate Evaluation of Symmetric Devices
NASA Astrophysics Data System (ADS)
Goto, Yosuke; Natsukari, Youhei; Fujishima, Minoru
2008-04-01
For millimeter-wave wireless transceivers, miniaturized on-chip passive devices are employed to increase wireless communication speed. Before using such miniaturized devices, it is necessary to evaluate test vehicles in advance, in which de-embedding is applied to on-chip evaluation. Although open-short de-embedding is currently the most popular method, accurate de-embedding is difficult because the ground plane in a short dummy pattern is not ideal in practice. To overcome this problem, we have proposed a new de-embedding method that uses only a through dummy pattern, called the through-only de-embedding method. With this through-only de-embedding method, we show that a small on-chip inductor of more than 100 picohenries can be evaluated within 1.18% error.
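The conventional open-short procedure that the through-only method improves upon is easy to sketch numerically. The fragment below is illustrative only: the parasitic model (ideal shunt pads plus series interconnects) and all component values are assumptions, not taken from the paper; real short dummies violate this model, which is the paper's motivation.

```python
import numpy as np

def open_short_deembed(y_meas, y_open, y_short):
    """Conventional open-short de-embedding on 2x2 Y-parameter matrices.
    Model: shunt pad parasitics (captured by the open dummy) around the
    DUT, plus series interconnect parasitics (captured by the short)."""
    z1 = np.linalg.inv(y_meas - y_open)      # strip the shunt pads
    zs = np.linalg.inv(y_short - y_open)     # series parasitics alone
    return np.linalg.inv(z1 - zs)            # strip the series parasitics

# Synthetic check at 60 GHz with hypothetical parasitic values.
w = 2 * np.pi * 60e9
y_pad = 1j * w * 20e-15 * np.eye(2)                   # 20 fF pad capacitance
z_ser = np.array([[2.0 + 1j * w * 10e-12, 0.5],
                  [0.5, 2.0 + 1j * w * 10e-12]])      # series parasitics [ohm]
z_dut = np.array([[5.0 + 1j * w * 50e-12, 1.0],
                  [1.0, 5.0 + 1j * w * 50e-12]])      # "true" device [ohm]

y_meas = y_pad + np.linalg.inv(z_dut + z_ser)         # emulated raw measurement
y_dut = open_short_deembed(y_meas, y_pad, y_pad + np.linalg.inv(z_ser))
print(np.allclose(np.linalg.inv(y_dut), z_dut))       # True
```

In this idealized model the recovery is exact; the non-ideal ground plane of a fabricated short dummy is precisely what breaks this exactness in practice.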
NASA Astrophysics Data System (ADS)
Zhang, Na; Yao, Jun; Huang, Zhaoqin; Wang, Yueying
2013-06-01
Numerical simulation in naturally fractured media is challenging because porous media and fractures coexist on multiple scales that must be coupled. We present a new approach to reservoir simulation that accurately resolves both large-scale and fine-scale flow patterns. Multiscale methods are well suited to this type of modeling because they capture the large-scale behavior of the solution without resolving all of the small-scale features. Owing to their strength and simplicity, dual-porosity models are mainly used for sugar-cube representations of fractured media. In such a representation, the transfer function between the fracture and the matrix block can be readily calculated for water-wet media. For a mixed-wet system, evaluating the transfer function becomes complicated due to the effect of gravity. In this work, we use a multiscale finite element method (MsFEM) for two-phase flow in fractured media using the discrete-fracture model. By combining MsFEM with the discrete-fracture model, we aim for a numerical scheme that facilitates fractured reservoir simulation without upscaling. MsFEM uses a standard Darcy model to approximate the pressure and saturation on a coarse grid, whereas fine-scale effects are captured through basis functions constructed by solving local flow problems with the discrete-fracture model. The accuracy and robustness of MsFEM are shown through several examples. In the first example, we consider several small fractures in a matrix and compare the results with those obtained by the finite element method. We then apply the MsFEM to more complex models. The results indicate that the MsFEM is a promising path toward direct simulation of highly resolved geomodels.
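The local-problem construction at the heart of MsFEM can be illustrated in one dimension. The sketch below is a generic MsFEM ingredient, not the authors' discrete-fracture implementation, and the coefficient field is hypothetical: it builds a multiscale basis function on a coarse element by solving a fine-grid flow problem, so the basis inherits the fine-scale structure of the permeability.

```python
import numpy as np

def msfem_basis(k, a, b, n_fine=200):
    """Multiscale basis function on coarse element [a, b]: solve the
    local problem -(k(x) u')' = 0 with u(a) = 1, u(b) = 0 on a fine
    finite-difference grid with k evaluated at cell midpoints."""
    x = np.linspace(a, b, n_fine + 1)
    km = k(0.5 * (x[:-1] + x[1:]))           # coefficient per fine cell
    n = n_fine - 1                           # number of interior nodes
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for i in range(n):
        A[i, i] = km[i] + km[i + 1]
        if i > 0:
            A[i, i - 1] = -km[i]
        if i < n - 1:
            A[i, i + 1] = -km[i + 1]
    rhs[0] = km[0] * 1.0                     # Dirichlet value u(a) = 1
    u = np.linalg.solve(A, rhs)
    return x, np.concatenate(([1.0], u, [0.0]))

# For constant k the basis reduces to the linear hat function; for an
# oscillatory k it bends to capture the fine-scale variation.
x, u_const = msfem_basis(lambda s: np.ones_like(s), 0.0, 1.0)
x, u_osc = msfem_basis(lambda s: 1.0 / (2.0 + np.sin(40 * np.pi * s)), 0.0, 1.0)
```

The coarse-grid system is then assembled from such basis functions, which is how fine-scale effects enter the coarse solution without global fine-grid solves.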
SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments
Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin
2016-01-01
Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. SPECT signal was in excellent linear correlation with OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529
Numerical models for the evaluation of geothermal systems
Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.
1986-08-01
We have carried out detailed simulations of various fields in the USA (Baca, New Mexico; Heber, California), Mexico (Cerro Prieto), Iceland (Krafla), and Kenya (Olkaria). These simulation studies have illustrated the usefulness of numerical models for the overall evaluation of geothermal systems. The methodology for modeling the behavior of geothermal systems, different approaches to geothermal reservoir modeling, and how they can be applied in comprehensive evaluation work are discussed.
Factors Influencing Undergraduates' Self-Evaluation of Numerical Competence
ERIC Educational Resources Information Center
Tariq, Vicki N.; Durrani, Naureen
2012-01-01
This empirical study explores factors influencing undergraduates' self-evaluation of their numerical competence, using data from an online survey completed by 566 undergraduates from a diversity of academic disciplines, across all four faculties at a post-1992 UK university. Analysis of the data, which included correlation and multiple regression…
Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam
NASA Astrophysics Data System (ADS)
Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad
2015-05-01
Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate through early time arc initiation to late time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of an electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
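The reported resolution constraints translate directly into timestep and mesh-size limits once the gas parameters are known. The sketch below uses hypothetical numbers (not the paper's exact conditions) to show the arithmetic: mean free path from density and cross section, mean collision time from a characteristic electron speed, then the ~1/100 and ~1/25 factors.

```python
def pic_dsmc_resolution(n_neutral, sigma, v_char,
                        dt_frac=1.0 / 100.0, dx_frac=1.0 / 25.0):
    """Timestep and mesh-size limits from the constraints reported
    above: dt ~ tau/100 and dx ~ lambda/25, where lambda is the
    electron mean free path and tau the mean time between collisions.

    n_neutral : neutral number density [m^-3]
    sigma     : characteristic collision cross section [m^2]
    v_char    : characteristic electron speed [m/s]
    """
    mfp = 1.0 / (n_neutral * sigma)          # mean free path [m]
    tau = mfp / v_char                       # mean collision time [s]
    return dt_frac * tau, dx_frac * mfp

# Hypothetical numbers: Torr-scale neutral density, ~1e-20 m^2 cross
# section, electrons moving at ~1e6 m/s under a strong reduced field.
dt_max, dx_max = pic_dsmc_resolution(3.3e22, 1e-20, 1e6)
```

These limits are far stricter than the usual PIC-DSMC practice of resolving the mean free path and collision time by only a few cells and steps, which is the paper's central point.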
Uncertainty evaluation in numerical modeling of complex devices
NASA Astrophysics Data System (ADS)
Cheng, X.; Monebhurrun, V.
2014-10-01
Numerical simulation is an efficient tool for exploring and understanding the physics of complex devices, e.g. mobile phones. For meaningful results, it is important to evaluate the uncertainty of the numerical simulation. Uncertainty quantification in specific absorption rate (SAR) calculation using a full computer-aided design (CAD) mobile phone model is a challenging task. Since a typical SAR numerical simulation is computationally expensive, the traditional Monte Carlo (MC) simulation method proves inadequate. The unscented transformation (UT) is an alternative and numerically efficient method herein investigated to evaluate the uncertainty in the SAR calculation using the realistic models of two commercially available mobile phones. The electromagnetic simulation process is modeled as a nonlinear mapping with the uncertainty in the inputs, e.g. the relative permittivity values of the mobile phone material properties, inducing an uncertainty in the output, e.g. the peak spatial-average SAR value. The numerical simulation results demonstrate that UT may be a potential candidate for the uncertainty quantification in SAR calculations since only a few simulations are necessary to obtain results similar to those obtained after hundreds or thousands of MC simulations.
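The unscented transformation itself is compact enough to sketch. The code below is a generic UT implementation with illustrative numbers, not the authors' SAR setup: it propagates a two-parameter Gaussian uncertainty through a nonlinear map using only 2n+1 = 5 evaluations of the (stand-in) expensive model.

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate Gaussian uncertainty (mean, cov) through a nonlinear
    map f using 2n+1 deterministically chosen sigma points, instead of
    hundreds or thousands of Monte Carlo samples."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in pts])
    y_mean = w @ ys
    y_cov = sum(wi * np.outer(y - y_mean, y - y_mean)
                for wi, y in zip(w, ys))
    return y_mean, y_cov

# Toy stand-in for an expensive SAR solver: a nonlinear scalar map of
# two uncertain material parameters (all values illustrative).
f = lambda p: np.array([p[0] ** 2 + np.sin(p[1])])
mu = np.array([2.0, 0.5])
P = np.diag([0.01, 0.04])
m_ut, c_ut = unscented_transform(f, mu, P)
```

For a linear map the UT mean and covariance are exact, which makes a convenient correctness check; for mildly nonlinear maps it matches moments to second order.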
Numerical evaluation of the performance of active noise control systems
NASA Technical Reports Server (NTRS)
Mollo, C. G.; Bernhard, R. J.
1990-01-01
This paper presents a generalized numerical technique for evaluating the optimal performance of active noise controllers. In this technique, the indirect BEM numerical procedures are used to derive the active noise controllers for optimal control of enclosed harmonic sound fields where the strength of the noise sources or the description of the enclosure boundary may not be known. The performance prediction for a single-input single-output system is presented, together with the analysis of the stability and observability of an active noise-control system employing detectors. The numerical procedures presented can be used for the design of both the physical configuration and the electronic components of the optimal active noise controller.
Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.
2007-01-01
Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links basis function, Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leaves remaining terms that are analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handles both.
Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study
NASA Astrophysics Data System (ADS)
Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans
2015-03-01
Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high frequency stimulations to a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
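The Normalized Gradient Fields measure used above can be sketched in a few lines. This is a minimal sketch assuming the common squared-dot-product form with an edge parameter eta; the synthetic images below stand in for CT/MR data, and the implementation is not the authors' multilevel pipeline.

```python
import numpy as np

def ngf_distance(r, t, eta=1e-2):
    """Normalized Gradient Fields distance between two 2-D images:
    penalizes misalignment of gradient *directions*, which is what
    makes the measure usable across modalities (here: CT vs. MR).
    eta damps the contribution of near-flat, noise-dominated regions."""
    gr = np.gradient(r.astype(float))
    gt = np.gradient(t.astype(float))
    nr = np.sqrt(gr[0] ** 2 + gr[1] ** 2 + eta ** 2)
    nt = np.sqrt(gt[0] ** 2 + gt[1] ** 2 + eta ** 2)
    dot = (gr[0] * gt[0] + gr[1] * gt[1]) / (nr * nt)
    return np.mean(1.0 - dot ** 2)

# A misaligned copy of an image scores worse than the image itself,
# which is the property a registration optimizer exploits.
img = np.add.outer(np.sin(np.linspace(0, 6, 64)),
                   np.cos(np.linspace(0, 6, 64)))
d_same = ngf_distance(img, img)
d_shift = ngf_distance(img, np.roll(img, 16, axis=0))
```

Because only gradient directions enter, the measure is insensitive to the very different intensity mappings of CT and MR, unlike sum-of-squared-differences.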
Li, Dandan; Hu, Bo; Wang, Qing; Liu, Hongchang; Pan, Feng; Wu, Wei
2015-01-01
Safflower (Carthamus tinctorius L.) has received a significant amount of attention as a medicinal plant and oilseed crop. Gene expression studies provide a theoretical molecular biology foundation for improving new traits and developing new cultivars. Real-time quantitative PCR (RT-qPCR) has become a crucial approach for gene expression analysis. In addition, appropriate reference genes (RGs) are essential for accurate and rapid relative quantification analysis of gene expression. In this study, fifteen candidate RGs involved in multiple metabolic pathways of plants were selected and validated under different experimental treatments, at different seed development stages, and in different cultivars and tissues for real-time PCR experiments. These genes were ABCS, 60SRPL10, RANBP1, UBCL, MFC, UBCE2, EIF5A, COA, EF1-β, EF1, GAPDH, ATPS, MBF1, GTPB and GST. The suitability evaluation was performed with the geNorm and NormFinder programs. Overall, EF1, UBCE2, EIF5A, ATPS and 60SRPL10 were the most stable genes, and MBF1, as well as MFC, were the most unstable genes according to geNorm and NormFinder software in all experimental samples. To verify the RGs selected by the two programs, the expression of 7 CtFAD2 genes in safflower seeds at different developmental stages under cold stress was analyzed using different RGs for normalization in RT-qPCR experiments. The results showed similar expression patterns when the most stable RGs selected by geNorm or NormFinder software were used, whereas differences were detected when the most unstable reference genes were used. The most stable combination of genes selected in this study will help to achieve more accurate and reliable results in a wide variety of samples in safflower. PMID:26457898
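The geNorm stability measure underlying this kind of ranking has a simple definition that can be sketched directly. This is a minimal re-implementation assuming the published M-value definition (mean pairwise variation of log-ratios); the expression data below are synthetic, not the safflower measurements.

```python
import numpy as np

def genorm_m(expr):
    """geNorm expression-stability measure M for each candidate gene.

    expr: (n_samples, n_genes) array of relative expression quantities.
    For each gene j, M_j is the mean, over all other genes k, of the
    standard deviation across samples of log2(expr_j / expr_k).
    Lower M = more stable reference gene."""
    logs = np.log2(expr)
    n_genes = logs.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(logs[:, j] - logs[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m

# Toy data: genes 0 and 1 co-vary tightly (a stable pair), gene 2 is
# independent noise and should receive the worst (highest) M value.
rng = np.random.default_rng(0)
base = rng.normal(10, 1, size=50)
expr = np.column_stack([2 ** base,
                        2 ** (base + rng.normal(0, 0.05, 50)),
                        2 ** rng.normal(10, 1, 50)])
m = genorm_m(expr)
```

geNorm then iteratively discards the gene with the highest M, which is how the stable combinations reported above are obtained.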
Numerical evaluation of gas core length in free surface vortices
NASA Astrophysics Data System (ADS)
Cristofano, L.; Nobili, M.; Caruso, G.
2014-11-01
The formation and evolution of free-surface vortices represent an important topic for many hydraulic intakes, since strong whirlpools introduce swirl flow at the intake and can cause entrainment of floating matter and gas. In particular, gas entrainment is an important safety issue for sodium-cooled fast reactors, because the introduction of gas bubbles into the core causes dangerous reactivity fluctuations. In this paper, a numerical evaluation of the gas core length in free-surface vortices is presented, following two different approaches. In the first, a prediction method developed by the Japanese researcher Sakai and his team is applied. This method is based on the Burgers vortex model and estimates the gas core length of a free-surface vortex from two parameters calculated with single-phase CFD simulations: the circulation and the downward velocity gradient. The second approach consists in performing a two-phase CFD simulation of a free-surface vortex, in order to numerically reproduce the deformation of the gas-liquid interface. A mapped, convergent mesh was used to reduce numerical error, and a volume-of-fluid (VOF) method was selected to track the gas-liquid interface. Two different turbulence models were tested and analyzed. Experimental measurements of the gas core length of free-surface vortices were performed using optical methods, and the numerical results were compared with these measurements. The computational domain and the boundary conditions of the CFD simulations were set consistently with the experimental test conditions.
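For the Burgers vortex profile underlying the first approach, the free-surface depression can be evaluated directly. The sketch below uses illustrative parameters, not the experimental conditions of the paper: it integrates the radial momentum balance on the free surface and checks the result against the closed form available for this particular velocity profile.

```python
import numpy as np

def gas_core_depth(gamma, r_c, g=9.81, n=20000, r_max=None):
    """Free-surface depression at the axis of a Burgers vortex.

    Tangential velocity: v(r) = gamma/(2*pi*r) * (1 - exp(-(r/r_c)**2)).
    The radial momentum balance at the free surface, dh/dr = v**2/(g*r),
    integrated from the axis outward, gives the depth of the core."""
    if r_max is None:
        r_max = 200.0 * r_c                  # effectively infinity
    r = np.linspace(1e-9, r_max, n)
    v = gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-(r / r_c) ** 2))
    f = v ** 2 / (g * r)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule

# Illustrative parameters: circulation 0.01 m^2/s, core radius 5 mm.
# Closed form for this profile: gamma^2 * ln(2) / (4 * pi^2 * g * r_c^2).
depth = gas_core_depth(0.01, 5e-3)
```

This is only the axisymmetric surface-dent estimate; the paper's method additionally ties gamma and the core radius to quantities extracted from single-phase CFD.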
Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui
2014-01-01
The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of the total organic matter, water soluble organic nitrogen, pH, and germination index; less accurate results for the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water soluble organic carbon. Our results suggest the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers. PMID:24586313
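The PLS step of such NIR-PLS calibrations can be sketched with a minimal PLS1 (NIPALS) implementation. The synthetic data below stand in for NIR spectra and a measured property; this is a generic algorithm sketch, not the authors' calibration model.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) sketch: regress a single property y on
    "spectra" X through a few latent components. Returns regression
    coefficients for mean-centered data."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)            # weight vector
        t = Xc @ w                        # scores
        tt = t @ t
        p = Xc.T @ t / tt                 # loadings
        q = (yc @ t) / tt
        Xc = Xc - np.outer(t, p)          # deflate spectra
        yc = yc - q * t                   # deflate property
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q

# Synthetic "spectra": 60 samples x 30 wavelengths; the property depends
# on two underlying absorbers plus noise (all values illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 30))
beta_true = np.zeros(30); beta_true[3] = 2.0; beta_true[17] = -1.0
y = X @ beta_true + rng.normal(scale=0.05, size=60)
beta = pls1_fit(X, y, n_components=5)
y_hat = (X - X.mean(axis=0)) @ beta + y.mean()
```

PLS handles the many collinear wavelengths of NIR spectra by compressing them into a few latent components before regression, which is why it pairs naturally with NIR data.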
Study on Applicability of Numerical Simulation to Evaluation of Gas Entrainment From Free Surface
Kei Ito; Takaaki Sakai; Hiroyuki Ohshima
2006-07-01
The onset condition of gas entrainment (GE) due to a free-surface vortex has been studied to establish fast breeder reactor (FBR) designs with higher coolant velocity than conventional designs, because GE might destabilize reactor operation and should therefore be avoided. The onset condition has been investigated experimentally and theoretically; however, the dependence of vortex-type GE on the local geometric configuration and local velocity distribution of each experimental system has prevented researchers from formulating a universal onset condition. A real-scale test is considered an accurate way to evaluate the occurrence of vortex-type GE, but such tests are generally expensive and not practical in the design study of large and complicated FBR systems, because frequently relocating internal equipment in response to design changes is difficult in a real-scale test. Numerical simulation is a promising alternative to the real-scale test. In this research, to evaluate the applicability of numerical simulation to design work, simulations were conducted of a basic experimental system for vortex-type GE. This basic experiment consisted of a rectangular flow channel containing two pieces of equipment important for vortex-type GE: vortex generation and suction equipment. The generated vortex grew rapidly while interacting with the suction flow, and the grown vortex formed a free-surface dent (gas core). When the tip of the gas core, or bubbles detached from it, reached the suction mouth, gas was entrained into the suction tube. The results of numerical simulations under the experimental conditions were compared to the experiment in terms of velocity distributions and free-surface shape. As a result, the numerical simulation showed qualitatively good agreement with experimental data. The numerical simulation results were similar to the experimental
Cartwright, Michael S; Dupuis, Janae E; Bargoil, Jessica M; Foster, Dana C
2015-09-01
Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is not currently an objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return-to-play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data in animal and human testing indicates that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the
Cycle-accurate evaluation of reconfigurable photonic networks-on-chip
NASA Astrophysics Data System (ADS)
Debaes, Christof; Artundo, Iñigo; Heirman, Wim; Van Campenhout, Jan; Thienpont, Hugo
2010-05-01
There is little doubt that the most important limiting factors of the performance of next-generation Chip Multiprocessors (CMPs) will be the power efficiency and the available communication speed between cores. Photonic Networks-on-Chip (NoCs) have been suggested as a viable route to relieve the off- and on-chip interconnection bottleneck. Low-loss integrated optical waveguides can transport very high-speed data signals over longer distances as compared to on-chip electrical signaling. In addition, with the development of silicon microrings, photonic switches can be integrated to route signals in a data-transparent way. Although several photonic NoC proposals exist, their use is often limited to the communication of large data messages due to a relatively long set-up time of the photonic channels. In this work, we evaluate a reconfigurable photonic NoC in which the topology is adapted automatically (on a microsecond scale) to the evolving traffic situation by use of silicon microrings. To evaluate this system's performance, the proposed architecture has been implemented in a detailed full-system cycle-accurate simulator which is capable of generating realistic workloads and traffic patterns. In addition, a model was developed to estimate the power consumption of the full interconnection network which was compared with other photonic and electrical NoC solutions. We find that our proposed network architecture significantly lowers the average memory access latency (35% reduction) while only generating a modest increase in power consumption (20%), compared to a conventional concentrated mesh electrical signaling approach. When comparing our solution to high-speed circuit-switched photonic NoCs, long photonic channel set-up times can be tolerated which makes our approach directly applicable to current shared-memory CMPs.
The Good, the Strong, and the Accurate: Preschoolers' Evaluations of Informant Attributes
ERIC Educational Resources Information Center
Fusaro, Maria; Corriveau, Kathleen H.; Harris, Paul L.
2011-01-01
Much recent evidence shows that preschoolers are sensitive to the accuracy of an informant. Faced with two informants, one of whom names familiar objects accurately and the other inaccurately, preschoolers subsequently prefer to learn the names and functions of unfamiliar objects from the more accurate informant. This study examined the inference…
Xu, Jing; Ding, Yunhong; Peucheret, Christophe; Xue, Weiqi; Seoane, Jorge; Zsigri, Beáta; Jeppesen, Palle; Mørk, Jesper
2011-01-01
Although patterning effects (PEs) are known to be a limiting factor of ultrafast photonic switches based on semiconductor optical amplifiers (SOAs), a simple approach for their evaluation in numerical simulations and experiments is missing. In this work, we experimentally investigate and verify a theoretical prediction of the pseudo random binary sequence (PRBS) length needed to capture the full impact of PEs. A wide range of SOAs and operation conditions are investigated. The very simple form of the PRBS length condition highlights the role of two parameters, i.e. the recovery time of the SOAs as well as the operation bit rate. Furthermore, a simple and effective method for probing the maximum PEs is demonstrated, which may relieve the computational effort or the experimental difficulties associated with the use of long PRBSs for the simulation or characterization of SOA-based switches. Good agreement with conventional PRBS characterization is obtained. The method is suitable for quick and systematic estimation and optimization of the switching performance. PMID:21263552
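The role of PRBS length can be made concrete with a short generator. A PRBS of order n contains runs of up to n identical bits, so the pattern "memory" spans n bit slots; intuitively, n divided by the bit rate must exceed the SOA recovery time for the worst-case patterning penalty to appear in the sequence. Both this reading of the condition and the feedback taps below are illustrative assumptions, not quoted from the paper.

```python
def prbs(order, fb_taps):
    """One period (2**order - 1 bits) of a PRBS from the recurrence
    a[n] = a[n - fb_taps[0]] XOR a[n - fb_taps[1]] (Fibonacci LFSR).
    fb_taps = (1, 7) realizes x^7 + x^6 + 1, a standard PRBS-7."""
    bits = [1] * order                     # any nonzero seed state
    while len(bits) < 2 ** order - 1:
        bits.append(bits[-fb_taps[0]] ^ bits[-fb_taps[1]])
    return bits

def longest_run(bits):
    """Length of the longest run of identical consecutive bits."""
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

seq = prbs(7, (1, 7))   # period 127, balanced, runs of up to 7 bits
```

An m-sequence of order 7 has period 127, contains 64 ones, and its longest identical-bit run is exactly 7, which is why simulating with too short a PRBS order misses the slowest patterning dynamics.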
Thermal numerical simulator for laboratory evaluation of steamflood oil recovery
Sarathi, P.
1991-04-01
A thermal numerical simulator running on an IBM AT compatible personal computer is described. The simulator was designed to assist laboratory design and evaluation of steamflood oil recovery. An overview of the historical evolution of numerical thermal simulation, NIPER's approach to solving these problems with a desktop computer, the derivation of equations and a description of approaches used to solve these equations, and verification of the simulator using published data sets and sensitivity analysis are presented. The developed model is a three-phase, two-dimensional multicomponent simulator capable of being run in one or two dimensions. Mass transfer among the phases and components is dictated by pressure- and temperature-dependent vapor-liquid equilibria. Gravity and capillary pressure phenomena were included. Energy is transferred by conduction, convection, vaporization and condensation. The model employs a block centered grid system with a five-point discretization scheme. Both areal and vertical cross-sectional simulations are possible. A sequential solution technique is employed to solve the finite difference equations. The study clearly indicated the importance of heat loss, injected steam quality, and injection rate to the process. Dependence of overall recovery on oil volatility and viscosity is emphasized. The process is very sensitive to relative permeability values. Time-step sensitivity runs indicated that the current version is time-step sensitive and exhibits conditional stability. 75 refs., 19 figs., 19 tabs.
Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models
NASA Astrophysics Data System (ADS)
Ramli, Huda Mohd.; Esler, J. Gavin
2016-07-01
A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10^6) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
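As a sketch of what such an evaluation involves, the following compares a plain Euler-Maruyama integration of a one-dimensional Langevin LPDM against the exact Taylor (1921) dispersion for stationary homogeneous turbulence. The parameters are illustrative; this is not one of the paper's test problems, and the Honeycutt small-noise scheme is not shown:

```python
import numpy as np

def langevin_dispersion(n, dt, nsteps, sigma2=1.0, tau=1.0, seed=0):
    """Ensemble variance of particle position after nsteps of a 1-D
    Langevin LPDM for stationary homogeneous turbulence,
        du = -(u/tau) dt + sqrt(2 sigma2 / tau) dW,   dx = u dt,
    integrated with plain Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, np.sqrt(sigma2), n)   # well-mixed initial velocities
    x = np.zeros(n)
    for _ in range(nsteps):
        x = x + u * dt
        u = u - (u / tau) * dt + np.sqrt(2.0 * sigma2 / tau * dt) * rng.normal(size=n)
    return x.var()

# Taylor (1921): <x^2> = 2 sigma2 tau (t - tau (1 - exp(-t/tau)))
t_end = 5.0
var_num = langevin_dispersion(n=20000, dt=0.01, nsteps=500)
var_exact = 2.0 * (t_end - (1.0 - np.exp(-t_end)))
```

With a 20 000-particle ensemble the numerical variance agrees with the analytical value to within the combined statistical and time-discretisation error of a few percent, which is exactly the kind of comparison a scheme evaluation has to control for.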
Evaluating the Impact of Aerosols on Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Freitas, Saulo; Silva, Arlindo; Benedetti, Angela; Grell, Georg; Members, Wgne; Zarzur, Mauricio
2015-04-01
The Working Group on Numerical Experimentation (WMO, http://www.wmo.int/pages/about/sec/rescrosscut/resdept_wgne.html) has organized an exercise to evaluate the impact of aerosols on NWP. This exercise will involve regional and global models currently used for weather forecasting by operational centers worldwide and aims at addressing the following questions: a) How important are aerosols for predicting the physical system (NWP, seasonal, climate) as distinct from predicting the aerosols themselves? b) How important is atmospheric model quality for air quality forecasting? c) What are the current capabilities of NWP models to simulate aerosol impacts on weather prediction? Toward this goal we have selected 3 strong or persistent events of aerosol pollution worldwide that could be fairly represented in current NWP models and that allowed for an evaluation of the aerosol impact on weather prediction. The selected events include a strong dust storm that blew off the coast of Libya and over the Mediterranean, an extremely severe episode of air pollution in Beijing and surrounding areas, and an extreme case of biomass burning smoke in Brazil. The experimental design calls for simulations with and without explicitly accounting for aerosol feedbacks in the cloud and radiation parameterizations. In this presentation we will summarize the results of this study, focusing on the evaluation of model performance in terms of its ability to faithfully simulate aerosol optical depth, and the assessment of the aerosol impact on the predictions of near-surface wind, temperature, humidity, rainfall and the surface energy budget.
Numerical Analysis for Structural Safety Evaluation of Butterfly Valves
NASA Astrophysics Data System (ADS)
Shin, Myung-Seob; Yoon, Joon-Yong; Park, Han-Yung
2010-06-01
Butterfly valves are widely used in industry to control fluid flow. They are used for both on-off and throttling applications involving large flows at relatively low operating pressures, especially in large-size pipelines. For the industrial application of butterfly valves, it must be ensured that the valve can be used safely with respect to its fatigue life and the deformations produced by the pressure of the fluid. In this study, we carried out a structural analysis of the body and the valve disc of the butterfly valve, and the numerical simulation was performed using ANSYS v11.0. The reliability of the valve is evaluated by investigating the deformation, the leak test and the durability of the valve.
Factors influencing undergraduates' self-evaluation of numerical competence
NASA Astrophysics Data System (ADS)
Tariq, Vicki N.; Durrani, Naureen
2012-04-01
This empirical study explores factors influencing undergraduates' self-evaluation of their numerical competence, using data from an online survey completed by 566 undergraduates from a diversity of academic disciplines, across all four faculties at a post-1992 UK university. Analysis of the data, which included correlation and multiple regression analyses, revealed that undergraduates exhibiting greater confidence in their mathematical and numeracy skills, as evidenced by their higher self-evaluation scores and their higher scores on the confidence sub-scale contributing to the measurement of attitude, possess more cohesive, rather than fragmented, conceptions of mathematics, and display more positive attitudes towards mathematics/numeracy. They also exhibit lower levels of mathematics anxiety. Students exhibiting greater confidence also tended to be those who were relatively young (i.e. 18-29 years), whose degree programmes provided them with opportunities to practise and further develop their numeracy skills, and who possessed higher pre-university mathematics qualifications. The multiple regression analysis revealed that two positive predictors (overall attitude towards mathematics/numeracy and possession of a higher pre-university mathematics qualification) and five negative predictors (mathematics anxiety, lack of opportunity to practise/develop numeracy skills, being a more mature student, being enrolled in Health and Social Care compared with Science and Technology, and possessing no formal mathematics/numeracy qualification compared with a General Certificate of Secondary Education or equivalent qualification) together accounted for approximately 64% of the variation in students' perceptions of their numerical competence. Although the results initially suggested that male students were significantly more confident than females, one confounding variable was almost certainly the students' highest pre-university mathematics or numeracy qualification, since a higher
An accurate method of extracting fat droplets in liver images for quantitative evaluation
NASA Astrophysics Data System (ADS)
Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie
2015-03-01
The steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using the feature values of colors, shapes and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.
Casimir problem of spherical dielectrics: numerical evaluation for general permittivities.
Brevik, I; Aarseth, J B; Høye, J S
2002-08-01
The Casimir mutual free energy F for a system of two dielectric concentric nonmagnetic spherical bodies is calculated, at arbitrary temperatures. The present paper is a continuation of an earlier investigation [Phys. Rev. E 63, 051101 (2001)], in which F was evaluated in full only for the case of ideal metals (refractive index n = ∞). Here, analogous results are presented for dielectrics, for some chosen values of n. Our basic calculational method stems from quantum statistical mechanics. The Debye expansions for the Riccati-Bessel functions, when carried out to a high order, are found to be very useful in practice (thereby overflow/underflow problems are easily avoided), and also to give accurate results even for the lowest values of l, down to l=1. Another virtue of the Debye expansions is that the limiting case of metals becomes quite amenable to an analytical treatment in spherical geometry. We first discuss the zero-frequency TE mode problem from a mathematical viewpoint and then, as a physical input, invoke the actual dispersion relations. The result of our analysis, based upon the adoption of the Drude dispersion relation at low frequencies, is that the zero-frequency TE mode does not contribute for a real metal. Accordingly, F turns out in this case to be only one-half of the conventional value at high temperatures. The applicability of the Drude model in this context has, however, been questioned recently, and we do not aim at a complete discussion of this issue here. Existing experiments are low-temperature experiments, and are so far not accurate enough to distinguish between the different predictions. We also calculate explicitly the contribution from the zero-frequency mode for a dielectric. For a dielectric, this zero-frequency problem is absent. PMID:12241249
Technology Transfer Automated Retrieval System (TEKTRAN)
The three evapotranspiration (ET) measurement/retrieval techniques used in this study, lysimeter, scintillometer and remote sensing vary in their level of complexity, accuracy, resolution and applicability. The lysimeter with its point measurement is the most accurate and direct method to measure ET...
NASA Astrophysics Data System (ADS)
Che, Xiao-Hua; Qiao, Wen-Xiao; Ju, Xiao-Dong; Wang, Rui-Jia
2016-03-01
We developed a novel cement evaluation logging tool, named the azimuthally acoustic bond tool (AABT), which uses a phased-arc array transmitter with azimuthal detection capability. We combined numerical simulations and field tests to verify the AABT tool. The numerical simulation results showed that the radiation direction of the subarray corresponding to the maximum amplitude of the first arrival matches the azimuth of the channeling when it is behind the casing. With larger channeling size in the circumferential direction, the amplitude difference of the casing wave at different azimuths becomes more evident. The test results showed that the AABT can accurately locate the casing collars and evaluate the cement bond quality with azimuthal resolution at the casing-cement interface, and can visualize the size, depth, and azimuth of channeling. In the case of good casing-cement bonding, the AABT can further evaluate the cement bond quality at the cement-formation interface with azimuthal resolution by using the amplitude map and the velocity of the formation wave.
In the field of bioassessment, tolerance has traditionally referred to the degree to which organisms can withstand environmental degradation. This concept has been around for many years and its use is widespread. In numerous cases, tolerance values (TVs) have been assigned to i...
Numerical Weather Predictions Evaluation Using Spatial Verification Methods
NASA Astrophysics Data System (ADS)
Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.
2014-12-01
During the last years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid-spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
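The abstract does not name the specific spatial verification metrics used; as an illustration of the neighbourhood-based approach such studies rely on, the fractions skill score (FSS) of Roberts and Lean (2008) can be sketched as:

```python
import numpy as np

def neighbourhood_fraction(binary, n):
    """Fraction of above-threshold cells in an n-by-n window around each
    grid point (zero padding at the domain edges)."""
    h = n // 2
    p = np.pad(binary, h, mode="constant")
    out = np.zeros(binary.shape)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = p[i:i + n, j:j + n].mean()
    return out

def fss(obs, fcst, thresh, n):
    """Fractions skill score: 1 = perfect, 0 = no skill at this scale."""
    po = neighbourhood_fraction(obs >= thresh, n)
    pf = neighbourhood_fraction(fcst >= thresh, n)
    mse = np.mean((po - pf) ** 2)
    ref = np.mean(po ** 2) + np.mean(pf ** 2)
    return 1.0 - mse / ref if ref > 0 else 1.0

obs = np.zeros((8, 8)); obs[2, 2] = 10.0    # observed convective cell
fcst = np.zeros((8, 8)); fcst[2, 3] = 10.0  # forecast displaced one cell
score_grid = fss(obs, fcst, thresh=5.0, n=1)   # zero skill at grid scale
score_n3 = fss(obs, fcst, thresh=5.0, n=3)     # partial skill at 3x3 scale
```

A forecast feature displaced by one grid cell scores zero at grid scale but recovers skill as the neighbourhood grows, which is exactly the behaviour that motivates spatial methods over point-wise verification of high-resolution runs.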
Evaluation of kinetic uncertainty in numerical models of petroleum generation
Peters, K.E.; Walters, C.C.; Mankiewicz, P.J.
2006-01-01
Oil-prone marine petroleum source rocks contain type I or type II kerogen having Rock-Eval pyrolysis hydrogen indices greater than 600 or 300-600 mg hydrocarbon/g total organic carbon (HI, mg HC/g TOC), respectively. Samples from 29 marine source rocks worldwide that contain mainly type II kerogen (HI = 230-786 mg HC/g TOC) were subjected to open-system programmed pyrolysis to determine the activation energy distributions for petroleum generation. Assuming a burial heating rate of 1°C/m.y. for each measured activation energy distribution, the calculated average temperature for 50% fractional conversion of the kerogen in the samples to petroleum is approximately 136 ± 7°C, but the range spans about 30°C (~121-151°C). Fifty-two outcrop samples of thermally immature Jurassic Oxford Clay Formation were collected from five locations in the United Kingdom to determine the variations of kinetic response for one source rock unit. The samples contain mainly type I or type II kerogens (HI = 230-774 mg HC/g TOC). At a heating rate of 1°C/m.y., the calculated temperatures for 50% fractional conversion of the Oxford Clay kerogens to petroleum differ by as much as 23°C (127-150°C). The data indicate that kerogen type, as defined by hydrogen index, is not systematically linked to kinetic response, and that default kinetics for the thermal decomposition of type I or type II kerogen can introduce unacceptable errors into numerical simulations. Furthermore, custom kinetics based on one or a few samples may be inadequate to account for variations in organofacies within a source rock. We propose three methods to evaluate the uncertainty contributed by kerogen kinetics to numerical simulations: (1) use the average kinetic distribution for multiple samples of source rock and the standard deviation for each activation energy in that distribution; (2) use source rock kinetics determined at several locations to describe different parts of the study area; and (3) use a weighted
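The underlying calculation (fractional conversion of kerogen at a geologic heating rate, given a distribution of activation energies) can be sketched with a parallel first-order reaction model. The frequency factor and activation energies below are assumed for illustration and are not taken from the paper:

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal/(mol K)

def fractional_conversion(T_kelvin, fractions, Ea_kcal, A=1.0e14,
                          heat_rate_C_per_my=1.0):
    """Fractional conversion of kerogen for a parallel first-order
    reaction model with a discrete activation-energy distribution,
    heated at a constant geologic rate (A is an assumed frequency
    factor in 1/s; values here are illustrative)."""
    beta = heat_rate_C_per_my / (1.0e6 * 365.25 * 24 * 3600.0)  # K/s
    Tg = np.linspace(300.0, T_kelvin, 4000)     # integrate from a cold start
    conv = 0.0
    for f, Ea in zip(fractions, Ea_kcal):
        k = A * np.exp(-Ea / (R * Tg))          # Arrhenius rate along the ramp
        integral = np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(Tg)) / beta
        conv += f * (1.0 - np.exp(-integral))
    return conv

def t50_celsius(fractions, Ea_kcal):
    """Temperature (deg C) of 50% conversion, located by bisection."""
    lo, hi = 80.0, 220.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if fractional_conversion(mid + 273.15, fractions, Ea_kcal) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# single-component kerogen, Ea = 52 kcal/mol, heating at 1 deg C/m.y.
T50 = t50_celsius([1.0], [52.0])
```

For these assumed values the 50% conversion temperature lands within the ~121-151°C span quoted above, but the result is sensitive to the assumed frequency factor, which is one reason default kinetics can introduce the errors the authors describe.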
NASA Astrophysics Data System (ADS)
van den Heever, S. C.; Tao, W. K.; Skofronick Jackson, G.; Tanelli, S.; L'Ecuyer, T. S.; Petersen, W. A.; Kummerow, C. D.
2015-12-01
Cloud, aerosol and precipitation processes play a fundamental role in the water and energy cycle. It is critical to accurately represent these microphysical processes in numerical models if we are to better predict cloud and precipitation properties on weather through climate timescales. Much has been learned about cloud properties and precipitation characteristics from NASA satellite missions such as TRMM, CloudSat, and more recently GPM. Furthermore, data from these missions have been successfully utilized in evaluating the microphysical schemes in cloud-resolving models (CRMs) and global models. However, there are still many uncertainties associated with these microphysics schemes. These uncertainties can be attributed, at least in part, to the fact that microphysical processes cannot be directly observed or measured, but instead have to be inferred from those cloud properties that can be measured. Evaluation of microphysical parameterizations are becoming increasingly important as enhanced computational capabilities are facilitating the use of more sophisticated schemes in CRMs, and as future global models are being run on what has traditionally been regarded as cloud-resolving scales using CRM microphysical schemes. In this talk we will demonstrate how TRMM, CloudSat and GPM data have been used to evaluate different aspects of current CRM microphysical schemes, providing examples of where these approaches have been successful. We will also highlight CRM microphysical processes that have not been well evaluated and suggest approaches for addressing such issues. Finally, we will introduce a potential NASA satellite mission, the Cloud and Precipitation Processes Mission (CAPPM), which would facilitate the development and evaluation of different microphysical-dynamical feedbacks in numerical models.
EEMD based pitch evaluation method for accurate grating measurement by AFM
NASA Astrophysics Data System (ADS)
Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde
2016-09-01
The pitch measurement and AFM calibration precision are significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method to relieve the accuracy deterioration caused by high- and low-frequency components of the scanning profile during pitch evaluation. The simulation analysis shows that the application of EEMD can improve the pitch accuracy of the FFT-FT algorithm. The pitch error was small when the iteration number of the FFT-FT algorithm was 8. The AFM measurement of the 500 nm-pitch one-dimensional grating shows that the EEMD-based pitch evaluation method could improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity center algorithm when particles and impression marks were distributed on the sample surface. The measurement indicates that the nonlinearity was stable, and the nonlinearity of the x axis and forward scanning was much smaller than that of their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs was demonstrated and a pitch uncertainty in the sub-nanometer range was achieved. The pitch uncertainty was reduced by about 10% by EEMD.
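The EEMD step itself requires a sifting implementation, but the spectral pitch evaluation it feeds can be sketched with a generic estimator: Hann windowing, FFT, and parabolic interpolation of the magnitude peak. This is an illustrative stand-in, not the paper's FFT-FT algorithm, which the abstract does not specify:

```python
import numpy as np

def fft_pitch(profile, dx):
    """Estimate grating pitch from the dominant spatial-frequency peak
    of an AFM line profile: Hann window, FFT, and parabolic
    interpolation of the magnitude spectrum around the peak bin."""
    n = len(profile)
    w = np.hanning(n)
    spec = np.abs(np.fft.rfft((profile - np.mean(profile)) * w))
    k = int(np.argmax(spec[1:])) + 1              # skip the DC bin
    a, b, c = spec[k - 1], spec[k], spec[k + 1]
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)     # sub-bin peak offset
    return n * dx / (k + delta)                   # pitch = 1 / spatial freq

x = np.arange(4096) * 2.0                         # 2 nm sampling interval
profile = 10.0 * np.sin(2.0 * np.pi * x / 500.0)  # ideal 500 nm-pitch grating
pitch = fft_pitch(profile, dx=2.0)
```

On this clean synthetic profile the estimate recovers the 500 nm pitch to within a few nanometres; the point of the EEMD preprocessing in the paper is to keep that accuracy when high- and low-frequency disturbances contaminate the measured profile.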
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît.; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as a Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
Variable impedance cardiography waveforms: how to evaluate the preejection period more accurately
NASA Astrophysics Data System (ADS)
Ermishkin, V. V.; Kolesnikov, V. A.; Lukoshkova, E. V.; Mokh, V. P.; Sonina, R. S.; Dupik, N. V.; Boitsov, S. A.
2012-12-01
Impedance methods have been successfully applied for left ventricular function assessment during functional tests. The preejection period (PEP), the interval between the Q peak in the ECG and a specific mark on the impedance cardiogram (ICG) which corresponds to aortic valve opening, is an important indicator of the contractility state and its neurogenic control. Accurate identification of ejection onset by ICG is often problematic, especially in cardiologic patients, due to peculiar waveforms. An essential obstacle is the variability of the shape of the ICG waveform during exercise and subsequent recovery. A promising solution is the introduction of an additional pulse sensor placed in a nearby region. We tested this idea in 28 healthy subjects and 6 cardiologic patients using a dual-channel impedance cardiograph for simultaneous recording from the aortic and neck regions, and an earlobe photoplethysmograph. Our findings suggest that the incidence of abnormal, complicated ICG waveforms increases with age. The combination of standard ICG with ear photoplethysmography and/or an additional impedance channel significantly improves the efficacy and accuracy of PEP estimation.
NASA Astrophysics Data System (ADS)
Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.
2013-12-01
The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and the unstable wave regime, where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Hu, Weigang
2016-01-01
Purpose 4DCT-delineated internal target volume (ITV) was applied to determine the tumor motion and used as the planning target in treatment planning in lung cancer stereotactic body radiotherapy (SBRT). This work studies the accuracy of using the ITV to predict the real target dose in lung cancer SBRT. Materials and methods For both phantom and patient cases, the ITV and gross tumor volumes (GTVs) were contoured on the maximum intensity projection (MIP) CT and ten CT phases, respectively. A SBRT plan was designed using the ITV as the planning target on the average projection (AVG) CT. This plan was copied to each CT phase and the dose distribution was recalculated. The GTV_4D dose was acquired by accumulating the GTV doses over all ten phases and regarded as the real target dose. To analyze the ITV dose error, the ITV dose was compared to the real target dose by the endpoints D99, D95 and D1 (doses received by 99%, 95% and 1% of the target volume) and the dose coverage endpoint V100 (relative volume receiving at least the prescription dose). Results The phantom study shows that the ITV underestimates the real target dose by 9.47%∼19.8% in D99, 4.43%∼15.99% in D95, and underestimates the dose coverage by 5% in V100. The patient cases show that the ITV underestimates the real target dose and dose coverage by 3.8%∼10.7% in D99, 4.7%∼7.2% in D95, and 3.96%∼6.59% in V100 in moving-target cases. Conclusions Caution should be taken, as the ITV is not accurate enough to predict the real target dose in lung cancer SBRT with large tumor motions. Restricting the target motion or reducing the target dose heterogeneity could reduce the ITV dose underestimation effect in lung SBRT. PMID:26968812
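The dose endpoints used in the comparison can be computed directly from a voxel dose array. A minimal sketch follows; the endpoint definitions match the abstract, while the voxel values are made up:

```python
import numpy as np

def dose_endpoints(dose, prescription):
    """D99/D95/D1: minimum dose received by the hottest 99%, 95% and 1%
    of the target volume; V100: relative volume (%) receiving at least
    the prescription dose. Equal-volume voxels are assumed."""
    d = np.sort(np.asarray(dose, dtype=float))[::-1]   # hottest voxels first
    n = len(d)
    def D(p):   # dose covering p percent of the volume
        return d[int(np.ceil(p / 100.0 * n)) - 1]
    v100 = 100.0 * float(np.mean(d >= prescription))
    return {"D99": D(99.0), "D95": D(95.0), "D1": D(1.0), "V100": v100}

# ten equal voxels with doses 46..55 Gy, prescription 50 Gy
ep = dose_endpoints([46, 47, 48, 49, 50, 51, 52, 53, 54, 55], 50.0)
```

Accumulating such endpoints per phase and comparing them with the ITV-based plan values is the kind of bookkeeping behind the reported D99/D95/V100 differences.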
Direct and indirect ophthalmoscopy for a more accurate baseline evaluation in aircrew members.
Blount, W C
1977-03-01
The currently required Federal Aviation Agency visual evaluation for commercial and airline pilots often does not detect quiescent retinal disease, unless there is a specific history or a current change in visual acuity which dictates the need for a dilated ophthalmoscopic evaluation. Statistics indicate that there may be a significant number of undetected retinal changes which can cause sudden and irreversible alterations in visual acuity during an airman's career. The requirements for an ophthalmoscopic examination should include, at the time of entry as an aircrew member into the aviation industry, a dilated fundus examination by the binocular indirect and direct ophthalmoscopic methods. In addition, documentary photography, visual fields, and other specific studies as indicated for these patients would be accomplished. These studies should be required by both the Federal Aviation Agency and the military services, just as baseline ECGs, chest films, SMA-12, and other laboratory studies are utilized. PMID:857802
Arakawa, Mototaka; Kushibiki, Jun-ichi; Aoki, Naoya
2004-05-01
The effective radius of a bulk-wave ultrasonic transducer as a circular piston source, fabricated on one end of a synthetic silica (SiO2) glass buffer rod, was evaluated for accurate velocity measurements of dispersive specimens over a wide frequency range. The effective radius was determined by comparing measured and calculated phase variations due to diffraction in an ultrasonic transmission line of the SiO2 buffer rod/water-couplant/SiO2 standard specimen, using radio-frequency (RF) tone burst ultrasonic waves. Fourteen devices with different device parameters were evaluated. The velocities of the nondispersive standard specimen (C-7940) were found to be 5934.10 ± 0.35 m/s at 70 to 290 MHz, after diffraction correction using the nominal radius (0.75 mm) for an ultrasonic device with an operating center frequency of about 400 MHz. Corrected velocities were more accurately found to be 5934.15 ± 0.03 m/s by using the effective radius (0.780 mm) for the diffraction correction. Bulk-wave ultrasonic devices calibrated by this experimental procedure enable conducting extremely accurate velocity dispersion measurements. PMID:15217227
Asthma control cost-utility randomized trial evaluation (ACCURATE): the goals of asthma treatment
2011-01-01
Background Despite the availability of effective therapies, asthma remains a source of significant morbidity and use of health care resources. The central research question of the ACCURATE trial is whether maximal doses of (combination) therapy should be used for long periods in an attempt to achieve complete control of all features of asthma. An additional question is whether patients and society value the potential incremental benefit, if any, sufficiently to concur with such a treatment approach. We assessed patient preferences and cost-effectiveness of three treatment strategies aimed at achieving different levels of clinical control: 1. sufficiently controlled asthma 2. strictly controlled asthma 3. strictly controlled asthma based on exhaled nitric oxide as an additional disease marker Design 720 patients with mild to moderate persistent asthma from general practices with a practice nurse, age 18-50 yr, on daily treatment with inhaled corticosteroids (more than 3 months' usage of inhaled corticosteroids in the previous year), will be identified via patient registries of general practices in the Leiden, Nijmegen, and Amsterdam areas in The Netherlands. The design is a 12-month cluster-randomised parallel trial with 40 general practices in each of the three arms. The patients will visit the general practice at baseline, 3, 6, 9, and 12 months. At each planned and unplanned visit to the general practice, treatment will be adjusted with support of an internet-based asthma monitoring system supervised by a central coordinating specialist nurse. Patient preferences and utilities will be assessed by questionnaire and interview. Data on asthma control, treatment step, adherence to treatment, utilities and costs will be obtained every 3 months and at each unplanned visit. Differences in societal costs (medication, other (health) care and productivity) will be compared to differences in the number of limited activity days and in quality adjusted life years (Dutch EQ5D, SF6D
NASA Astrophysics Data System (ADS)
Oh, K.; Han, M.; Kim, K.; Heo, Y.; Moon, C.; Park, S.; Nam, S.
2016-02-01
For quality assurance in radiation therapy, several types of dosimeters are used, such as ionization chambers, radiographic films, thermo-luminescent dosimeters (TLD), and semiconductor dosimeters. Among them, semiconductor dosimeters are particularly useful as in vivo dosimeters or in high dose-gradient areas such as the penumbra region, because they are more sensitive and smaller in size than typical dosimeters. In this study, we developed and evaluated cadmium telluride (CdTe) dosimeters, one of the most promising semiconductor dosimeters due to their high quantum efficiency and charge collection efficiency. Such CdTe dosimeters come in single-crystal and polycrystalline forms depending upon the fabrication process. Both types of CdTe dosimeters are commercially available, but only the polycrystalline form is suitable for radiation dosimeters, since it is less affected by the volumetric effect and energy dependence. To develop and evaluate polycrystalline CdTe dosimeters, polycrystalline CdTe films were prepared by thermal evaporation. After that, a CdTeO3 layer, a thin oxide layer, was deposited on top of the CdTe film by RF sputtering to improve charge-carrier transport properties and to reduce leakage current. The CdTeO3 layer, which acts as a passivation layer, also helps the dosimeter reduce sensitivity changes with repeated use due to radiation damage. Finally, top and bottom electrodes, In/Ti and Pt, were used to form a Schottky contact. Subsequently, the electrical properties under high-energy photon beams from a linear accelerator (LINAC), such as response coincidence, dose linearity, dose-rate dependence, reproducibility, and percentage depth dose, were measured to evaluate the polycrystalline CdTe dosimeters. In addition, we compared the experimental data of the dosimeter fabricated in this study with those of a silicon diode dosimeter and a thimble ionization chamber, which are widely used in routine dosimetry systems and dose measurements for radiation
NASA Technical Reports Server (NTRS)
Canright, R. B., Jr.; Semler, T. T.
1972-01-01
Several approximations to the Doppler broadening functions psi(x, theta) and chi(x, theta) are compared with respect to accuracy and speed of evaluation. A technique, due to A. M. Turing (1943), is shown to be at least as accurate as direct numerical quadrature and somewhat faster than Gaussian quadrature. FORTRAN 4 listings are included.
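As an illustration of the quantities being approximated, psi(x, theta) is (in one common convention) a Voigt-type integral that can be written in closed form using the Faddeeva function w(z). The sketch below assumes that convention and uses SciPy; the normalization and the symbol used for the broadening parameter vary between references, so treat the exact prefactors as an assumption to be checked against your own definition.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import wofz  # Faddeeva function w(z) = exp(-z^2) erfc(-iz)

def psi(x, theta):
    """Doppler broadening function psi(x, theta) via the Faddeeva function.

    Assumes the convention
      psi(x, theta) = theta / (2 sqrt(pi)) *
                      Int_{-inf}^{inf} exp(-theta^2 (x - y)^2 / 4) / (1 + y^2) dy,
    for which psi = (sqrt(pi) theta / 2) * Re[w(theta (x + i) / 2)].
    """
    z = 0.5 * theta * (x + 1j)
    return 0.5 * np.sqrt(np.pi) * theta * wofz(z).real

def psi_quad(x, theta):
    """Reference value by direct numerical quadrature of the defining integral."""
    f = lambda y: np.exp(-0.25 * theta**2 * (x - y)**2) / (1.0 + y**2)
    val, _ = quad(f, -np.inf, np.inf)
    return theta / (2.0 * np.sqrt(np.pi)) * val
```

Cross-checking the closed form against the quadrature over a range of (x, theta) is the same kind of accuracy comparison the abstract describes.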
NASA Astrophysics Data System (ADS)
Sakai, Yasumasa; Taki, Hirofumi; Kanai, Hiroshi
2016-07-01
In our previous study, the viscoelasticity of the radial artery wall was estimated to diagnose endothelial dysfunction using a high-frequency (22 MHz) ultrasound device. In the present study, we employed a commercial ultrasound device (7.5 MHz) and estimated the viscoelasticity using arterial pressure and diameter, both measured at the same position. In a phantom experiment, the proposed method estimated the elasticity and viscosity of the phantom with errors of 1.8% and 30.3%, respectively. In an in vivo measurement, the transient change in viscoelasticity was measured in three healthy subjects during flow-mediated dilation (FMD). The proposed method revealed softening of the arterial wall originating from the FMD reaction within 100 s after avascularization. These results indicate the high performance of the proposed method in evaluating vascular endothelial function just after avascularization, where the function is difficult to estimate with a conventional FMD measurement.
Evaluation of a low-cost and accurate ocean temperature logger on subsurface mooring systems
Tian, Chuan; Deng, Zhiqun; Lu, Jun; Xu, Xiaoyang; Zhao, Wei; Xu, Ming
2014-06-23
Monitoring seawater temperature is important to understanding evolving ocean processes. To monitor internal waves or ocean mixing, a large number of temperature loggers are typically mounted on subsurface mooring systems to obtain high-resolution temperature data at different water depths. In this study, we redesigned and evaluated a compact, low-cost, self-contained, high-resolution and high-accuracy ocean temperature logger, TC-1121. The newly designed TC-1121 loggers are smaller and more robust, and their sampling intervals can be changed automatically by indicated events. They have been widely used in many mooring systems to study internal waves and ocean mixing. The logger's fundamental design, noise analysis, calibration, drift test, and a long-term sea trial are discussed in this paper.
Evaluation of the EURO-CORDEX RCMs to accurately simulate the Etesian wind system
NASA Astrophysics Data System (ADS)
Dafka, Stella; Xoplaki, Elena; Toreti, Andrea; Zanis, Prodromos; Tyrlis, Evangelos; Luterbacher, Jürg
2016-04-01
The Etesians are among the most persistent regional-scale wind systems in the lower troposphere, blowing over the Aegean Sea during the extended summer season. An evaluation of the high-spatial-resolution EURO-CORDEX Regional Climate Models (RCMs) is presented here. The study documents the performance of the individual models in representing the basic spatiotemporal pattern of the Etesian wind system for the period 1989-2004. The analysis focuses on evaluating the abilities of the RCMs in simulating the surface wind over the Aegean Sea and the associated large-scale atmospheric circulation. Mean sea level pressure (SLP), wind speed, and geopotential height at 500 hPa are used. The simulated results are validated against reanalysis datasets (20CR-v2c and ERA20-C) and daily observational measurements (12:00 UTC) from mainland Greece and the Aegean Sea. The analysis highlights the general ability of the RCMs to capture the basic features of the Etesians, but also indicates considerable deficiencies for selected metrics, regions and subperiods. These deficiencies include the significant underestimation (overestimation) of the mean SLP in the northeastern part of the analysis domain in all subperiods (for May and June) when compared to 20CR-v2c (ERA20-C), the significant overestimation of the anomalous ridge over the Balkans and central Europe, and the underestimation of the wind speed over the Aegean Sea. Future work will include an assessment of the Etesians for the coming decades using EURO-CORDEX projections under different RCP scenarios and an estimate of the future potential for wind energy production.
Congenital spinal dermal tract: how accurate is clinical and radiological evaluation?
Tisdall, Martin M; Hayward, Richard D; Thompson, Dominic N P
2015-06-01
OBJECT A dermal sinus tract is a common form of occult spinal dysraphism. The presumed etiology relates to a focal failure of disjunction resulting in a persistent adhesion between the neural and cutaneous ectoderm. Clinical and radiological features can appear innocuous, leading to delayed diagnosis and failure to appreciate the implications or extent of the abnormality. If it is left untreated, complications can include meningitis, spinal abscess, and inclusion cyst formation. The authors present their experience in 74 pediatric cases of spinal dermal tract in an attempt to identify which clinical and radiological factors are associated with an infective presentation and to assess the reliability of MRI in evaluating this entity. METHODS Consecutive cases of spinal dermal tract treated with resection between 1998 and 2010 were identified from the departmental surgical database. Demographics, clinical history, and radiological and operative findings were collected from the patient records. The presence or absence of active infection (abscess, meningitis) at the time of neurosurgical presentation and any history of local sinus discharge or infection was assessed. Magnetic resonance images were reviewed to evaluate the extent of the sinus tract and determine the presence of an inclusion cyst. Radiological and operative findings were compared. RESULTS The surgical course was uncomplicated in 90% of 74 cases eligible for analysis. Magnetic resonance imaging underreported the presence of both an intradural tract (MRI 46%, operative finding 86%) and an intraspinal inclusion cyst (MRI 15%, operative finding 24%). A history of sinus discharge (OR 12.8, p = 0.0003) and the intraoperative identification of intraspinal inclusion cysts (OR 5.6, p = 0.023) were associated with an infective presentation. There was no significant association between the presence of an intradural tract discovered at surgery and an infective presentation. CONCLUSIONS Surgery for the treatment of
NASA Astrophysics Data System (ADS)
Prykäri, Tuukka; Czajkowski, Jakub; Alarousu, Erkki; Myllylä, Risto
2010-05-01
Optical coherence tomography (OCT), a noninvasive technique for imaging turbid media based on low-coherence interferometry, was originally developed for imaging biological tissues, and most of its applications have remained in biomedicine. From its early stages, however, the vertical resolution of the technique was improved to a submicron scale, opening new possibilities and applications. This article presents possible applications of OCT in the paper industry, where submicron resolution, or at least a resolution close to one micron, is required. This requirement arises from the layered structure of paper products, whose layer thicknesses may vary from single microns to tens of micrometers. This is especially true for high-quality paper products, where several different coating layers are used to obtain a smooth surface structure and high gloss. In this study, we demonstrate that optical coherence tomography can be used to measure and evaluate the quality of the coating layer of a premium glossy photopaper. In addition, we show that for some paper products it is possible to measure across the entire thickness of a paper sheet. Furthermore, we suggest that in addition to topography and tomography images of objects, gloss-like information can be obtained by tracking the magnitude of individual interference signals in optical coherence tomography.
Semi-numerical evaluation of one-loop corrections
Ellis, R.K.; Giele, W.T.; Zanderighi, G.; /Fermilab
2005-08-01
We present a semi-numerical algorithm to calculate one-loop virtual corrections to scattering amplitudes. The divergences of the loop amplitudes are regulated using dimensional regularization. We treat in detail the case of amplitudes with up to five external legs and massless internal lines, although the method is more generally applicable. Tensor integrals are reduced to generalized scalar integrals, which in turn are reduced to a set of known basis integrals using recursion relations. The reduction algorithm is modified near exceptional configurations to ensure numerical stability. To test the procedure we apply these techniques to one-loop corrections to the Higgs to four quark process for which analytic results have recently become available.
Flocke, N
2009-08-14
In this paper it is shown that shifted Jacobi polynomials G_n(p,q,x) can be used in connection with the Gaussian quadrature modified-moment technique to greatly enhance the accuracy of the evaluation of Rys roots and weights used in Gaussian integral evaluation in quantum chemistry. A general four-term inhomogeneous recurrence relation is derived for the shifted Jacobi polynomial modified moments over the Rys weight function e^(-Tx)/√x. It is shown that for q = 1/2 this general four-term inhomogeneous recurrence relation reduces to a three-term p-dependent inhomogeneous recurrence relation. By adjusting p to proper values depending on the Rys exponential parameter T, the method is capable of delivering highly accurate results for a large number of roots and weights in the most difficult to treat intermediate T range. Examples are shown, and detailed formulas together with practical suggestions for their efficient implementation are also provided. PMID:19691378
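The final step of any modified-moment scheme like the one above is turning three-term recurrence coefficients into quadrature nodes and weights via the Jacobi-matrix eigenproblem (the Golub-Welsch algorithm). The sketch below illustrates that step for the Legendre weight, whose recurrence coefficients are known in closed form; it is not the Rys weight e^(-Tx)/√x, whose coefficients are precisely what the paper's recurrences deliver.

```python
import numpy as np

def golub_welsch(alpha, beta, mu0):
    """Gauss quadrature nodes/weights from three-term recurrence coefficients.

    Monic recurrence: p_{k+1}(x) = (x - alpha[k]) p_k(x) - beta[k] p_{k-1}(x),
    with mu0 the zeroth moment (integral of the weight function).
    """
    n = len(alpha)
    off = np.sqrt(beta[1:n])
    J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)  # symmetric Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0, :] ** 2  # squared first components of the eigenvectors
    return nodes, weights

# Legendre weight w(x) = 1 on [-1, 1]: alpha_k = 0, beta_k = k^2/(4k^2 - 1), mu0 = 2.
n = 5
alpha = np.zeros(n)
beta = np.array([0.0] + [k**2 / (4.0 * k**2 - 1.0) for k in range(1, n)])
x, w = golub_welsch(alpha, beta, mu0=2.0)
```

For the Legendre case the result can be checked directly against `numpy.polynomial.legendre.leggauss`; for the Rys weight the same eigenproblem is fed with the recurrence coefficients obtained from the modified moments.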
EVALUATION OF NUMERICAL SCHEMES FOR SOLVING A CONSERVATION OF SPECIES EQUATION WITH CHEMICAL TERMS
Numerical methods are investigated for solving a system of continuity equations that contain linear and nonlinear chemistry as source and sink terms. It is shown that implicit, finite-difference approximations, when applied to the chemical kinetic terms, yield accurate results wh...
A Numerical Simulation Approach for Reliability Evaluation of CFRP Composite
NASA Astrophysics Data System (ADS)
Liu, D. S.-C.; Jenab, K.
2013-02-01
Due to the superior mechanical properties of carbon fiber reinforced plastic (CFRP) materials, they are widely used in industries such as aircraft manufacturing. Aircraft manufacturers are switching from metal to composite structures, which motivates the study of the reliability (R-value) of CFRP. In this study, a numerical simulation method is proposed to determine the reliability of Multiaxial Warp Knitted (MWK) textiles used to make CFRP composites. This method analyzes the distribution of carbon fiber angle misalignments, from a chosen 0° direction, caused by the sewing process of the textile, and finds the R-value, a value between 0 and 1. The application of this method is demonstrated by an illustrative example.
Schultz, Zachery D.; Warrick, Jay W.; Guckenberger, David J.; Pezzi, Hannah M.; Sperger, Jamie M.; Heninger, Erika; Saeed, Anwaar; Leal, Ticiana; Mattox, Kara; Traynor, Anne M.; Campbell, Toby C.; Berry, Scott M.; Beebe, David J.; Lang, Joshua M.
2016-01-01
Background: Expression of programmed death-ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may offer a less invasive method to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs from the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM)-positive and EpCAM-negative CTCs (as in the epithelial-mesenchymal transition (EMT)). Methods: To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Results: Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies, including cytokeratin. The number of CD11b+ cells misidentified as CTCs varied among patients, accounting for 33-100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells, at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples. Conclusions: Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria
Lift capability prediction for helicopter rotor blade-numerical evaluation
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Cîrciu, Ionicǎ; Luculescu, Doru
2016-06-01
The main objective of this paper is to describe the key physical features for modelling the unsteady aerodynamic effects found on a helicopter rotor blade operating under nominally attached flow conditions away from stall. The unsteady effects were considered as phase differences between the forcing function and the aerodynamic response, being functions of the reduced frequency, the Mach number and the mode of forcing. For a helicopter rotor, the reduced frequency at any blade element cannot be calculated exactly, but a first-order approximation for the reduced frequency gives useful information about the degree of unsteadiness. The sources of unsteady effects were decomposed into perturbations to the local angle of attack and velocity field. The numerical calculations and graphics were performed in the FLUENT and MAPLE software environments. This mathematical model is applicable to the aerodynamic design of wind turbine rotor blades, hybrid energy system optimization and aeroelastic analysis.
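A minimal sketch of the first-order reduced-frequency estimate the abstract refers to, k = ωc/(2V). All numbers (rotor speed, radial station, chord) are hypothetical, and the section speed V is taken as the hover approximation Ωy, ignoring the forward-flight velocity component.

```python
import numpy as np

def reduced_frequency(omega, chord, V):
    """First-order reduced frequency k = omega * c / (2 V).
    Roughly: k < 0.05 is often treated as quasi-steady; larger k means
    increasingly unsteady aerodynamics."""
    return omega * chord / (2.0 * V)

# Hypothetical blade element (all numbers assumed for illustration):
Omega = 30.0            # rotor angular speed, rad/s
y = 4.0                 # radial station, m
chord = 0.5             # local chord, m
V = Omega * y           # local section speed in the hover approximation, m/s
k = reduced_frequency(Omega, chord, V)  # 1/rev forcing -> k = c / (2 y)
```

For 1/rev forcing the rotor speed cancels, leaving k = c/(2y), which is why outboard stations see lower reduced frequencies than inboard ones.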
Analytical solutions of moisture flow equations and their numerical evaluation
Gibbs, A.G.
1981-04-01
The role of analytical solutions of idealized moisture flow problems is discussed. Some different formulations of the moisture flow problem are reviewed. A number of different analytical solutions are summarized, including the case of idealized coupled moisture and heat flow. The evaluation of special functions which commonly arise in analytical solutions is discussed, including some pitfalls in the evaluation of expressions involving combinations of special functions. Finally, perturbation theory methods are summarized which can be used to obtain good approximate analytical solutions to problems which are too complicated to solve exactly, but which are close to an analytically solvable problem.
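One concrete pitfall of the kind the abstract alludes to: analytical diffusion solutions often contain the combination exp(a²)·erfc(a), which evaluates to inf·0 = NaN in floating point for moderately large a even though the product itself is well-behaved. The scaled complementary error function avoids this; the sketch assumes SciPy is available.

```python
import numpy as np
from scipy.special import erfc, erfcx

a = 30.0

# Naive evaluation: exp(900) overflows to inf while erfc(30) underflows to 0,
# so the product is inf * 0 = NaN.
with np.errstate(over="ignore", invalid="ignore"):
    naive = np.exp(a**2) * erfc(a)

# Stable evaluation via the scaled complementary error function
# erfcx(a) = exp(a^2) * erfc(a), computed without intermediate overflow.
stable = erfcx(a)
```

Asymptotically erfcx(a) ~ 1/(a√π), so the stable value stays finite and accurate where the naive product has already degenerated to NaN.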
Evaluation and purchase of confocal microscopes: Numerous factors to consider
The purchase of a confocal microscope can be a complex and difficult decision for an individual scientist, group or evaluation committee. This is true even for scientists that have used confocal technology for many years. The task of reaching the optimal decision becomes almost i...
NASA Astrophysics Data System (ADS)
Shi, W. D.; Zhang, G. J.; Zhang, D. S.
2013-12-01
The objective of this paper is to evaluate the predictive capability of three turbulence models for the simulation of unsteady cavitating flows around a 2D Clark-Y hydrofoil. The three turbulence models were the standard k-ε model, a hybrid of the density correction model (DCM) and the filter-based model (FBM), and an improved partially-averaged Navier-Stokes (PANS) model based on the k-ε model. Using these turbulence models and a homogeneous cavitation model, the unsteady cloud cavitation flows around the hydrofoil were numerically simulated, and the time evolutions of cavity shape and lift were obtained. Comparison with tunnel experiment data shows that the hybrid and PANS models can accurately capture unsteady cavity shedding details as well as the fluctuation frequency and amplitude of lift and drag. The k-ε model shows poor agreement with the experimental visualizations, mainly because of an overprediction of the turbulent viscosity in the rear part of the cavity, which prevents the reentrant jet from fully reaching the leading edge. The adverse pressure gradient plays an important role in the progression of the reentrant jet. Both the shock wave generated by the collapse of the cloud cavity and the growth of the attached sheet cavity contribute to the increase of the adverse pressure gradient.
Numerical Evaluation of Lateral Diffusion Inside Diffusive Gradients in Thin Films Samplers
2015-01-01
Using numerical simulation of diffusion inside diffusive gradients in thin films (DGT) samplers, we show that the effect of lateral diffusion inside the sampler on the solute flux into the sampler is a nonlinear function of the diffusion layer thickness and the physical sampling window size. In contrast, earlier work concluded that this effect was constant irrespective of parameters of the sampler geometry. The flux increase caused by lateral diffusion inside the sampler was determined to be ∼8.8% for standard samplers, which is considerably lower than the previous estimate of ∼20%. Lateral diffusion is also propagated to the diffusive boundary layer (DBL), where it leads to a slightly stronger decrease in the mass uptake than suggested by the common 1D diffusion model that is applied for evaluating DGT results. We introduce a simple correction procedure for lateral diffusion and demonstrate how the effect of lateral diffusion on diffusion in the DBL can be accounted for. These corrections often result in better estimates of the DBL thickness (δ) and the DGT-measured concentration than earlier approaches and will contribute to more accurate concentration measurements in solute monitoring in waters. PMID:25877251
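A sketch of the common 1D DGT evaluation model with the lateral-diffusion enhancement folded in as a multiplicative correction. The ~1.088 factor corresponds to the study's ~8.8% flux increase for standard samplers; the example deployment numbers (mass, gel thickness, diffusion coefficient, time, window area) are hypothetical.

```python
def dgt_concentration(M, dg, D, t, A, lateral_factor=1.088):
    """Bulk solute concentration from DGT mass uptake.

    1D model: C = M * dg / (D * t * A), where M is the accumulated mass (mol),
    dg the diffusion layer thickness (m), D the diffusion coefficient (m^2/s),
    t the deployment time (s), and A the sampling window area (m^2).
    lateral_factor divides out the flux increase caused by lateral diffusion
    inside the sampler (~8.8% for standard samplers; set 1.0 to disable).
    """
    return M * dg / (D * t * A) / lateral_factor

# Hypothetical deployment: standard-like sampler geometry, 1-day exposure.
C = dgt_concentration(M=5e-9, dg=9.4e-4, D=6e-10, t=86400.0, A=3.14e-4)
```

Without the correction, the uncorrected 1D model overestimates the concentration by exactly the lateral enhancement factor.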
Borring, J.; Gundtoft, H.E.; Borum, K.K.; Toft, P.
1997-08-01
In an effort to improve their ultrasonic scanning technique for accurate determination of the cladding thickness in LEU fuel plates, new equipment and modifications to the existing hardware and software have been tested and evaluated. The authors are now able to measure an aluminium thickness down to 0.25 mm instead of the previous 0.35 mm. Furthermore, they have shown how the measuring sensitivity can be improved from 0.03 mm to 0.01 mm. It has now become possible to check their standard fuel plates for DR3 against the minimum cladding thickness requirements non-destructively. Such measurements open the possibility for the acceptance of a thinner nominal cladding than normally used today.
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Singer, Bart A.
2003-01-01
We evaluate the applicability of a production computational fluid dynamics code for conducting detached eddy simulations of unsteady flows. A second-order accurate Navier-Stokes code developed at NASA Langley Research Center, known as TLNS3D, is used for these simulations. We focus our attention on high Reynolds number flow (Re = 5×10^4 - 1.4×10^5) past a circular cylinder to simulate flows with large-scale separations. We consider two types of flow situations: one in which the flow at the separation point is laminar, and the other in which the flow is already turbulent when it detaches from the surface of the cylinder. Solutions are presented for two- and three-dimensional calculations using both the unsteady Reynolds-averaged Navier-Stokes paradigm and the detached eddy simulation treatment. All calculations use the standard Spalart-Allmaras turbulence model as the base model.
3-D numerical evaluation of density effects on tracer tests.
Beinhorn, M; Dietrich, P; Kolditz, O
2005-12-01
In this paper we present numerical simulations carried out to assess the importance of density-dependent flow on tracer plume development. The scenario considered in the study is characterized by a short-term tracer injection phase into a fully penetrating well and a natural hydraulic gradient. The scenario is thought to be typical for tracer tests conducted in the field. Using a reference case as a starting point, different model parameters were changed in order to determine their importance to density effects. The study is based on a three-dimensional model domain. Results were interpreted using concentration contours and a first moment analysis. Tracer injections of 0.036 kg per meter of saturated aquifer thickness do not cause significant density effects assuming hydraulic gradients of at least 0.1%. Higher tracer input masses, as used for geoelectrical investigations, may lead to buoyancy-induced flow in the early phase of a tracer test which in turn impacts further plume development. This also holds true for shallow aquifers. Results of simulations with different tracer injection rates and durations imply that the tracer input scenario has a negligible effect on density flow. Employing model cases with different realizations of a log conductivity random field, it could be shown that small variations of hydraulic conductivity in the vicinity of the tracer injection well have a major control on the local tracer distribution but do not mask effects of buoyancy-induced flow. PMID:16183165
Numerical evaluation of one-loop diagrams near exceptional momentum configurations
Walter T Giele; Giulia Zanderighi; E.W.N. Glover
2004-07-06
One problem that plagues the numerical evaluation of one-loop Feynman diagrams using recursive integration-by-parts relations is numerical instability near exceptional momentum configurations. In this contribution we discuss a generic solution to this problem. As an example we consider the case of forward light-by-light scattering.
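The instability referred to above is usually traced to inverse Gram determinants appearing in the reduction, which vanish at exceptional (e.g., degenerate or collinear) momentum configurations. A minimal sketch, assuming the metric signature (+,-,-,-) and light-like momenta chosen purely for illustration:

```python
import numpy as np

def minkowski_dot(p, q):
    """Minkowski product p.q with signature (+,-,-,-); p = (E, px, py, pz)."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

def gram_det(momenta):
    """Gram determinant det(2 p_i . p_j) of a list of external momenta."""
    G = [[2.0 * minkowski_dot(pi, pj) for pj in momenta] for pi in momenta]
    return np.linalg.det(np.array(G))

# Two light-like momenta separated by angle theta: the Gram determinant is
# -4 (1 - cos theta)^2, which vanishes in the collinear limit theta -> 0.
p1 = np.array([1.0, 0.0, 0.0, 1.0])
def p2(theta):
    return np.array([1.0, np.sin(theta), 0.0, np.cos(theta)])
```

Dividing by such a determinant during reduction amplifies round-off error without bound as the configuration approaches the exceptional point, which is exactly the regime the modified reduction schemes are designed to handle.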
Numerical evaluation of single central jet for turbine disk cooling
NASA Astrophysics Data System (ADS)
Subbaraman, M. R.; Hadid, A. H.; McConnaughey, P. K.
The cooling arrangement of the Space Shuttle Main Engine High Pressure Oxidizer Turbopump (HPOTP) incorporates two jet rings, each of which produces 19 high-velocity coolant jets. At some operating conditions, the frequency of excitation associated with the 19 jets coincides with the natural frequency of the turbine blades, contributing to fatigue cracking of blade shanks. In this paper, an alternate turbine disk cooling arrangement, applicable to disk faces of zero hub radius, is evaluated, which consists of a single coolant jet impinging at the center of the turbine disk. Results of the CFD analysis show that replacing the jet ring with a single central coolant jet in the HPOTP leads to an acceptable thermal environment at the disk rim. Based on the predictions of flow and temperature fields for operating conditions, the single central jet cooling system was recommended for implementation into the development program of the Technology Test Bed Engine at NASA Marshall Space Flight Center.
[Numerical evaluation of soil quality under different conservation tillage patterns].
Wu, Yu-Hong; Tian, Xiao-Hong; Chi, Wen-Bo; Nan, Xiong-Xiong; Yan, Xiao-Li; Zhu, Rui-Xiang; Tong, Yan-An
2010-06-01
A 9-year field experiment was conducted on the Guanzhong Plain of Shaanxi Province to study the effects of subsoiling, rotary tillage, straw return, no-till seeding, and traditional tillage on soil physical and chemical properties and grain yield in a winter wheat-summer maize rotation system, and a comprehensive evaluation was made of the soil quality under these tillage patterns by the method of principal components analysis (PCA). Compared with traditional tillage, all the conservation tillage patterns improved soil fertility quality and soil physical properties. Under conservation tillage, the activities of soil urease and alkaline phosphatase increased significantly, the soil quality index increased by 19.8%-44.0%, and the grain yields of winter wheat and summer maize (except under no-till seeding with straw covering) increased by 13%-28% and 3%-12%, respectively. Subsoiling every other year, straw chopping combined with rotary tillage, and straw mulching combined with subsoiling not only increased crop yield, but also improved soil quality. Based on the economic and ecological benefits, the practices of subsoiling and straw return should be promoted. PMID:20873622
Johnson, B M; Guan, X; Gammie, C F
2008-06-24
The descriptions of some of the numerical tests in our original paper are incomplete, making reproduction of the results difficult. We provide the missing details here. The relevant tests are described in section 4 of the original paper (Figures 8-11).
Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan
2016-05-01
An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To
Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard
2005-08-01
MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Given their widespread use, it is now time to perform a systematic analysis of the various algorithms currently available. Using blood specimens from the HUPO Plasma Proteome Project, we evaluated five search algorithms with respect to their sensitivity and specificity, and also benchmarked them against specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm Peptide Prophet enhanced the overall performance of the SEQUEST algorithm and provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
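A minimal sketch of the reversed-sequence (decoy) thresholding idea advocated above: the false-positive ratio at a given score threshold is estimated as the number of decoy identifications passing the threshold divided by the number of target identifications passing it. The score lists here are synthetic; real search engines assign their own score scales.

```python
import numpy as np

def decoy_fp_ratio(target_scores, decoy_scores, threshold):
    """Estimated false-positive ratio at a score threshold:
    decoy identifications passing / target identifications passing."""
    t = int(np.sum(np.asarray(target_scores) >= threshold))
    d = int(np.sum(np.asarray(decoy_scores) >= threshold))
    return d / t if t else 0.0

def threshold_for_fp(target_scores, decoy_scores, max_fp=0.01):
    """Lowest score threshold whose estimated FP ratio stays within max_fp."""
    for thr in sorted(set(target_scores)):
        if decoy_fp_ratio(target_scores, decoy_scores, thr) <= max_fp:
            return thr
    return float("inf")

# Synthetic example: ten target hits, three low-scoring decoy hits.
targets = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
decoys = [3, 2, 1]
```

Because the decoy database has no true matches, every decoy hit above threshold is by construction a false positive, which is what makes the derived thresholds predictable in the sense the abstract describes.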
What the Numbers Mean: Providing a Context for Numerical Student Evaluations of Courses.
ERIC Educational Resources Information Center
Trout, Paul A.
1997-01-01
Analysis of the content of and student responses to college course evaluations suggests that, in general, students are seeking entertainment, comfort, high grades, and less work and are hostile to the necessary routines and rigors of higher education. The commonly used numerical evaluation form is not only unreliable and invalid, it is an…
Evaluating the use of high-resolution numerical weather forecast for debris flow prediction.
NASA Astrophysics Data System (ADS)
Nikolopoulos, Efthymios I.; Bartsotas, Nikolaos S.; Borga, Marco; Kallos, George
2015-04-01
The sudden occurrence and high destructive power of debris flows pose a significant threat to human life and infrastructure. Therefore, developing early warning procedures for the mitigation of debris flow risk is of great economic and societal importance. Given that rainfall is the predominant factor controlling debris flow triggering, it is indisputable that the development of effective debris flow warning procedures requires accurate knowledge of the properties (e.g. duration, intensity) of the triggering rainfall. Moreover, efficient and timely response of emergency operations depends highly on the lead-time provided by the warning systems. Currently, the majority of early warning systems for debris flows are based on nowcasting procedures. While the latter may be successful in predicting the hazard, they provide warnings with a relatively short lead-time (~6 h). Increasing the lead-time is necessary in order to improve pre-incident operations and communication of the emergency; thus coupling warning systems with weather forecasting is essential for advancing early warning procedures. In this work we evaluate the potential of using high-resolution (1 km) rainfall fields forecasted with a state-of-the-art numerical weather prediction model (RAMS/ICLAMS) in order to predict the occurrence of debris flows. The analysis is focused on the Upper Adige region, Northeast Italy, an area where debris flows are frequent. Seven storm events that generated a large number (>80) of debris flows during the period 2007-2012 are analyzed. Radar-based rainfall estimates, available from the operational C-band radar located at Mt Macaion, are used as the reference to evaluate the forecasted rainfall fields. The evaluation is mainly focused on assessing the error in forecasted rainfall properties (magnitude, duration) and the correlation in space and time with the reference field. Results show that the forecasted rainfall fields captured very well the magnitude and
vom Saal, Frederick S.; Welshons, Wade V.
2016-01-01
There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273
NASA Astrophysics Data System (ADS)
Hrubý, Jan
2012-04-01
Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both in terms of the physical concepts and the required computational power. Available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
Lindgren, Richard J.; Taylor, Charles J.; Houston, Natalie A.
2009-01-01
A substantial number of public water system wells in south-central Texas withdraw groundwater from the karstic, highly productive Edwards aquifer. However, the use of numerical groundwater flow models to aid in the delineation of contributing areas for public water system wells in the Edwards aquifer is problematic because of the complex hydrogeologic framework and the presence of conduit-dominated flow paths in the aquifer. The U.S. Geological Survey, in cooperation with the Texas Commission on Environmental Quality, evaluated six published numerical groundwater flow models (all deterministic) that have been developed for the Edwards aquifer San Antonio segment or Barton Springs segment, or both. This report describes the models developed and evaluates each with respect to accessibility and ease of use, range of conditions simulated, accuracy of simulations, agreement with dye-tracer tests, and limitations of the models. These models are (1) GWSIM model of the San Antonio segment, a FORTRAN computer-model code that pre-dates the development of MODFLOW; (2) MODFLOW conduit-flow model of San Antonio and Barton Springs segments; (3) MODFLOW diffuse-flow model of San Antonio and Barton Springs segments; (4) MODFLOW Groundwater Availability Modeling [GAM] model of the Barton Springs segment; (5) MODFLOW recalibrated GAM model of the Barton Springs segment; and (6) MODFLOW-DCM (dual conductivity model) conduit model of the Barton Springs segment. The GWSIM model code is not commercially available, is limited in its application to the San Antonio segment of the Edwards aquifer, and lacks the ability of MODFLOW to easily incorporate newly developed processes and packages to better simulate hydrologic processes. MODFLOW is a widely used and tested code for numerical modeling of groundwater flow, is well documented, and is in the public domain. These attributes make MODFLOW a preferred code with regard to accessibility and ease of use. The MODFLOW conduit-flow model
Numeric and symbolic evaluation of the pfaffian of general skew-symmetric matrices
NASA Astrophysics Data System (ADS)
González-Ballestero, C.; Robledo, L. M.; Bertsch, G. F.
2011-10-01
Evaluation of pfaffians arises in a number of physics applications, and for some of them a direct method is preferable to using the determinantal formula. We discuss two methods for the numerical evaluation of pfaffians. The first is tridiagonalization based on Householder transformations. The main advantage of this method is its numerical stability, which makes the implementation of a pivoting strategy unnecessary. The second method is based on Aitken's block diagonalization formula. It yields a kind of LU decomposition (under congruence, similar to a Cholesky factorization) of arbitrary skew-symmetric matrices that is well suited to both numeric and symbolic evaluation of the pfaffian. Fortran subroutines (FORTRAN 77 and 90) implementing both methods are given. We also provide simple implementations in Python and Mathematica for purposes of testing, or for exploratory studies of methods that make use of pfaffians.
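The abstract above does not reproduce its algorithms; as a minimal illustration of what a pfaffian is (not the paper's Householder or Aitken routines), the following sketch evaluates it by direct Laplace-style expansion along the first row, which is only practical for small matrices but lets one verify the defining identity Pf(A)² = det(A). The matrix values are hypothetical.

```python
def pfaffian(A):
    """Pfaffian of an even-dimensional skew-symmetric matrix by
    expansion along the first row (factorial cost; small n only)."""
    n = len(A)
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0  # odd-dimensional skew-symmetric matrices have Pf = 0
    total = 0.0
    for j in range(1, n):
        # minor with rows and columns 0 and j removed
        keep = [r for r in range(n) if r not in (0, j)]
        minor = [[A[r][c] for c in keep] for r in keep]
        total += (-1) ** (j - 1) * A[0][j] * pfaffian(minor)
    return total

# 4x4 check against the closed form Pf = a01*a23 - a02*a13 + a03*a12
A = [[0, 1, 2, 3],
     [-1, 0, 4, 5],
     [-2, -4, 0, 6],
     [-3, -5, -6, 0]]
print(pfaffian(A))  # prints 8.0, i.e. 1*6 - 2*5 + 3*4
```

The stable methods the paper describes avoid this combinatorial expansion entirely; the sketch is only a reference oracle against which such implementations can be tested.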
NASA Astrophysics Data System (ADS)
Ahmed, Mahmoud; Eslamian, Morteza
2015-07-01
Laminar natural convection in differentially heated ( β = 0°, where β is the inclination angle), inclined ( β = 30° and 60°), and bottom-heated ( β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.
Ryabinkin, Ilya G; Nagesh, Jayashree; Izmaylov, Artur F
2015-11-01
We have developed a numerical differentiation scheme that eliminates evaluation of overlap determinants in calculating the time-derivative nonadiabatic couplings (TDNACs). Evaluation of these determinants was the bottleneck in previous implementations of mixed quantum-classical methods using numerical differentiation of electronic wave functions in the Slater determinant representation. The central idea of our approach is, first, to reduce the analytic time derivatives of Slater determinants to time derivatives of molecular orbitals and then to apply a finite-difference formula. Benchmark calculations prove the efficiency of the proposed scheme showing impressive several-order-of-magnitude speedups of the TDNAC calculation step for midsize molecules. PMID:26538034
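To make the finite-difference idea concrete, here is a toy sketch (not the authors' molecular-orbital scheme) of the classic overlap-based formula for the time-derivative coupling, applied to hypothetical two-component "states" rotating at a known angular speed, for which the exact coupling is -omega:

```python
import math

def fd_coupling(phi_i_t, phi_j_t, phi_i_dt, phi_j_dt, dt):
    """Finite-difference time-derivative coupling <phi_i | d/dt phi_j>
    from overlaps at t and t + dt (midpoint overlap formula)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(phi_i_t, phi_j_dt) - dot(phi_i_dt, phi_j_t)) / (2.0 * dt)

# Toy adiabatic states: orthonormal 2-vectors rotating at angular speed
# omega, for which analytically <phi1 | d/dt phi2> = -omega.
omega, t, dt = 0.3, 1.7, 1e-4
phi1 = lambda s: (math.cos(omega * s), math.sin(omega * s))
phi2 = lambda s: (-math.sin(omega * s), math.cos(omega * s))

tau = fd_coupling(phi1(t), phi2(t), phi1(t + dt), phi2(t + dt), dt)
print(tau)  # ≈ -0.3, i.e. -omega
```

The paper's contribution is to push this differentiation down from Slater determinants to molecular orbitals before applying such a formula; the toy example only shows why the overlap difference converges to the analytic coupling as dt shrinks.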
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1992-01-01
The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals of such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. Here, a convergent series solution form for the incomplete Airy functions is derived. Asymptotic expansions involving several terms were also developed and serve as large argument approximations. The combination of the series solution form with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
Zradziński, Patryk
2015-01-01
Due to the various physical mechanisms of interaction between a worker's body and the electromagnetic field at various frequencies, the principles of numerical simulations have been discussed for three areas of worker exposure: to low frequency magnetic field, to low and intermediate frequency electric field and to radiofrequency electromagnetic field. This paper presents the identified difficulties in applying numerical simulations to evaluate physical estimators of direct and indirect effects of exposure to electromagnetic fields at various frequencies. The exposure of workers operating a plastic sealer has been taken as an example scenario of electromagnetic field exposure at the workplace for discussion of those difficulties in applying numerical simulations. The following difficulties in reliable numerical simulations of workers' exposure to the electromagnetic field have been considered: workers' body models (posture, dimensions, shape and grounding conditions), working environment models (objects most influencing electromagnetic field distribution) and an analysis of parameters for which exposure limitations are specified in international guidelines and standards. PMID:26323781
A Framework for Evaluating Regional-Scale Numerical Photochemical Modeling Systems
This paper discusses the need for critically evaluating regional-scale (~ 200-2000 km) three dimensional numerical photochemical air quality modeling systems to establish a model's credibility in simulating the spatio-temporal features embedded in the observations. Because of li...
Asuero, A G; Navas, M J; Jiminez-Trillo, J L
1986-02-01
The spectrophotometric methods applicable to the numerical evaluation of acidity constants of monobasic acids are briefly reviewed. The equations are presented in a form suitable for easy calculation with a programmable pocket calculator. The aim of this paper is to fill a gap in the analytical education literature. PMID:18964064
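The abstract does not reproduce its equations; a standard single-wavelength relation for a monobasic acid HA, with limiting absorbances A_HA (pure acid form) and A_A (pure base form), is Ka = [H+](A - A_HA)/(A_A - A). The sketch below uses hypothetical values and checks that it recovers a known pKa from synthetic data:

```python
import math

def ka_from_absorbance(pH, A, A_HA, A_A):
    """Acidity constant of a monobasic acid from one absorbance reading,
    Ka = [H+] * (A - A_HA) / (A_A - A), assuming Beer's law holds and
    A_HA, A_A are the limiting absorbances of the two pure forms."""
    H = 10.0 ** (-pH)
    return H * (A - A_HA) / (A_A - A)

# Round-trip check with synthetic data (true pKa = 4.00, hypothetical
# limiting absorbances 0.20 and 0.80, measurement at pH 4.50).
pKa, A_HA, A_A, pH = 4.00, 0.20, 0.80, 4.50
f = 1.0 / (1.0 + 10.0 ** (pKa - pH))   # ionized fraction at this pH
A = A_HA + f * (A_A - A_HA)            # simulated absorbance
print(-math.log10(ka_from_absorbance(pH, A, A_HA, A_A)))  # ≈ 4.0
```

This is exactly the kind of arithmetic the paper targets for a programmable pocket calculator; the real methods it reviews also handle multi-wavelength and graphical variants.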
Numerical evaluation of the three-dimensional searchlight problem in a half-space
Kornreich, D.E.; Ganapol, B.D.
1997-11-01
The linear Boltzmann equation for the transport of neutral particles is investigated with the objective of generating a benchmark-quality calculation for the three-dimensional searchlight problem in a semi-infinite medium. The derivation assumes stationarity, one energy group, and isotropic scattering. The scalar flux (both surface and interior) and the current at the surface are the quantities of interest. The source considered is a pencil-beam incident at a point on the surface of a semi-infinite medium. The scalar flux will have two-dimensional variation only if the beam is normal; otherwise, it is three-dimensional. The solutions are obtained by using Fourier and Laplace transform models. The transformed transport equation is formulated so that it can be related to a one-dimensional pseudo problem, thus providing some analytical leverage for the inversions. The numerical inversions use standard numerical techniques such as Gauss-Legendre quadrature, summation of infinite series, H-function iteration and evaluation, and Euler-Knopp acceleration. The numerical evaluations of the scalar flux and current at the surface are relatively simple, and the interior scalar flux is relatively difficult to calculate because of the embedded two-dimensional Fourier transform inversion, Laplace transform inversion, and H-function evaluation. Comparisons of these numerical solutions to results from the MCNP probabilistic code and the THREE-DANT discrete ordinates code are provided and help confirm proper operation of the analytical code.
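Among the standard techniques listed above, Gauss-Legendre quadrature is the easiest to illustrate. The following minimal sketch (generic, not the searchlight-problem code) wraps NumPy's node/weight generator with the affine map to an arbitrary interval:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule,
    which is exact for polynomials of degree <= 2n - 1."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)      # affine map to [a, b]
    return xr * np.sum(w * f(xm + xr * x))

# 3 points already integrate x^4 on [0, 1] exactly: 1/5
print(gauss_legendre(lambda x: x**4, 0.0, 1.0, 3))  # ≈ 0.2
```

In the benchmark calculation described above, rules like this are applied inside the Fourier and Laplace inversions, where the integrands are far less smooth and must be paired with series summation and Euler-Knopp acceleration.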
Generalization Evaluation of Machine Learning Numerical Observers for Image Quality Assessment.
Kalayeh, Mahdi M; Marin, Thibault; Brankov, Jovan G
2013-06-01
In this paper, we present two new numerical observers (NO) based on machine learning for image quality assessment. The proposed NOs aim to predict human observer performance in a cardiac perfusion-defect detection task for single-photon emission computed tomography (SPECT) images. Human observer (HumO) studies are now considered to be the gold standard for task-based evaluation of medical images. However such studies are impractical for use in early stages of development for imaging devices and algorithms, because they require extensive involvement of trained human observers who must evaluate a large number of images. To address this problem, numerical observers (also called model observers) have been developed as a surrogate for human observers. The channelized Hotelling observer (CHO), with or without internal noise model, is currently the most widely used NO of this kind. In our previous work we argued that development of a NO model to predict human observers' performance can be viewed as a machine learning (or system identification) problem. This consideration led us to develop a channelized support vector machine (CSVM) observer, a kernel-based regression model that greatly outperformed the popular and widely used CHO. This was especially evident when the numerical observers were evaluated in terms of generalization performance. To evaluate generalization we used a typical situation for the practical use of a numerical observer: after optimizing the NO (which for a CHO might consist of adjusting the internal noise model) based upon a broad set of reconstructed images, we tested it on a broad (but different) set of images obtained by a different reconstruction method. In this manuscript we aim to evaluate two new regression models that achieve accuracy higher than the CHO and comparable to our earlier CSVM method, while dramatically reducing model complexity and computation time. The new models are defined in a Bayesian machine-learning framework: a channelized
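As background for the observers discussed above, a plain (unchannelized) Hotelling observer can be sketched in a few lines on synthetic Gaussian data; this is an illustration of the general template formula w = S⁻¹(mean_present - mean_absent), not the authors' CHO or CSVM implementations, and all dimensions and the "defect profile" are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 16                        # images per class, feature dimension
signal = np.full(d, 0.5)               # hypothetical defect profile

g_absent = rng.normal(size=(n, d))             # signal-absent images
g_present = signal + rng.normal(size=(n, d))   # signal-present images

# Hotelling template: inverse average covariance times the mean difference
S = 0.5 * (np.cov(g_absent.T) + np.cov(g_present.T))
w = np.linalg.solve(S, g_present.mean(0) - g_absent.mean(0))

# Scalar test statistics and the resulting detectability index (SNR)
t0, t1 = g_absent @ w, g_present @ w
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
print(snr)  # ≈ 2 for this synthetic setup
```

The machine-learning observers in the paper replace the fixed linear template with learned regression models, precisely because a template fitted on one reconstruction method may generalize poorly to another.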
Numerical criteria for the evaluation of ab initio predictions of protein structure.
Zemla, A; Venclovas, C; Reinhardt, A; Fidelis, K; Hubbard, T J
1997-01-01
As part of the CASP2 protein structure prediction experiment, a set of numerical criteria were defined for the evaluation of "ab initio" predictions. The evaluation package comprises a series of electronic submission formats, a submission validator, evaluation software, and a series of scripts to summarize the results for the CASP2 meeting and for presentation via the World Wide Web (WWW). The evaluation package is accessible for use on new predictions via WWW so that results can be compared to those submitted to CASP2. With further input from the community, the evaluation criteria are expected to evolve into a comprehensive set of measures capturing the overall quality of a prediction as well as critical detail essential for further development of prediction methods. We discuss present measures, limitations of the current criteria, and possible improvements. PMID:9485506
NASA Technical Reports Server (NTRS)
Weston, K. C.; Reynolds, A. C., Jr.; Alikhan, A.; Drago, D. W.
1974-01-01
Numerical solutions for radiative transport in a class of anisotropically scattering materials are presented. Conditions for convergence and divergence of the iterative method are given and supported by computed results. The relation of two flux theories to the equation of radiative transfer for isotropic scattering is discussed. The adequacy of the two flux approach for the reflectance, radiative flux and radiative flux divergence of highly scattering media is evaluated with respect to solutions of the radiative transfer equation.
Selection of a numerical unsaturated flow code for tilted capillary barrier performance evaluation
Webb, S.W.
1996-09-01
Capillary barriers consisting of tilted fine-over-coarse layers have been suggested as landfill covers as a means to divert water infiltration away from sensitive underground regions under unsaturated flow conditions, especially for arid and semi-arid regions. Typically, the HELP code is used to evaluate landfill cover performance and design. Unfortunately, due to its simplified treatment of unsaturated flow and its essentially one-dimensional nature, HELP is not adequate to treat the complex multidimensional unsaturated flow processes occurring in a tilted capillary barrier. In order to develop the necessary mechanistic code for the performance evaluation of tilted capillary barriers, an efficient and comprehensive unsaturated flow code needs to be selected for further use and modification. The present study evaluates a number of candidate mechanistic unsaturated flow codes for application to tilted capillary barriers. Factors considered included unsaturated flow modeling, inclusion of evapotranspiration, nodalization flexibility, ease of modification, and numerical efficiency. A number of unsaturated flow codes are available for use with different features and assumptions. The codes chosen for this evaluation are TOUGH2, FEHM, and SWMS_2D. All three codes successfully simulated the capillary barrier problem chosen for the code comparison, although FEHM used a reduced grid. The numerical results are a strong function of the numerical weighting scheme. For the same weighting scheme, similar results were obtained from the various codes. Based on the CPU time of the various codes and the code capabilities, the TOUGH2 code has been selected as the appropriate code for tilted capillary barrier performance evaluation, possibly in conjunction with the infiltration, runoff, and evapotranspiration models of HELP.
ERIC Educational Resources Information Center
Au, Wayne
2011-01-01
Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…
Giannaros, Theodore M; Melas, Dimitrios; Matzarakis, Andreas
2015-02-01
The evaluation of thermal bioclimate can be conducted employing either observational or modeling techniques. The advantage of the numerical modeling approach lies in that it can be applied in areas where there is lack of observational data, providing a detailed insight on the prevailing thermal bioclimatic conditions. However, this approach should be exploited carefully since model simulations can be frequently biased. The aim of this paper is to examine the suitability of a mesoscale atmospheric model in terms of evaluating thermal bioclimate. For this, the numerical weather prediction Weather Research and Forecasting (WRF) model and the radiation RayMan model are employed for simulating thermal bioclimatic conditions in Greece during a 1-year time period. The physiologically equivalent temperature (PET) is selected as an index for evaluating thermal bioclimate, while synoptic weather station data are exploited for verifying model performance. The results of the present study shed light on the strengths and weaknesses of the numerical modeling approach. Overall, it is shown that model simulations can provide a useful alternative tool for studying thermal bioclimate. Specifically for Greece, the WRF/RayMan modeling system was found to perform adequately well in reproducing the spatial and temporal variations of PET. PMID:24771280
Zhang, Jing; Tian, Jiabin; Ta, Na; Huang, Xinsheng; Rao, Zhushi
2016-08-01
The finite element method was employed in this study to analyze the change in performance of implantable hearing devices when the viscoelasticity of soft tissues is taken into account. An integrated finite element model of the human ear, including the external ear, middle ear and inner ear, was first developed via reverse engineering and analyzed by acoustic-structure-fluid coupling. Viscoelastic properties of soft tissues in the middle ear were taken into consideration in this model. The model-derived dynamic responses, including middle ear and cochlea functions, showed better agreement with experimental data at high frequencies above 3000 Hz than did Rayleigh-type damping. On this basis, a coupled finite element model consisting of the human ear and a piezoelectric actuator attached to the long process of the incus was further constructed. Based on the electromechanical coupling analysis, the equivalent sound pressure and power consumption of the actuator corresponding to viscoelastic and Rayleigh damping were calculated using this model. The analytical results showed that the implant performance of the actuator evaluated using a finite element model considering viscoelastic properties gives a lower output above about 3 kHz than does the Rayleigh damping model. A finite element model considering viscoelastic properties is therefore more accurate for numerically evaluating implantable hearing devices. PMID:27276992
Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui
2016-06-14
Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, due to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K(+) and Na(+) passing through the gA channel are much closer to the experimental results than those from any classical MD simulations, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823
Numerical evaluation of welded tube wall profiles from scanned X-ray line source data
NASA Astrophysics Data System (ADS)
Lunin, V.; Podobedov, D.; Ewert, U.; Redmer, B.
2001-04-01
This investigation presents an iterative algorithm for the inversion of X-ray line scanning data from a multi-angle inspection. The main focus is the development of a robust algorithm that can successfully evaluate the influence of local surface geometry in welding regions. The idea is to repeatedly solve the forward problem with iterated profile parameters until the solution agrees with the measurement. For accurate parameterization of a particular inner crack, this procedure can be combined with an analysis of the residual image obtained by subtracting the projection image caused by the reconstructed surface wall profiles from the original data.
Ridouane, E. H.; Bianchi, M.
2011-11-01
This study describes detailed three-dimensional computational fluid dynamics modeling to evaluate the thermal performance of uninsulated wall assemblies, accounting for conduction through framing, convection, and radiation. The model allows for material property variations with temperature. Parameters varied in the study include ambient outdoor temperature and cavity surface emissivity. Understanding the thermal performance of uninsulated wall cavities is essential for accurate prediction of energy use in residential buildings. The results can serve as input to building energy simulation tools for modeling the temperature-dependent energy performance of homes with uninsulated walls.
Combined experimental and numerical evaluation of a prototype nano-PCM enhanced wallboard
Biswas, Kaushik; Lu, Jue; Soroushian, Parviz; Shrestha, Som S
2014-01-01
In the United States, forty-eight (48) percent of the residential end-use energy consumption is spent on space heating and air conditioning. Reducing envelope-generated heating and cooling loads through application of phase change material (PCM)-enhanced building envelopes can facilitate maximizing the energy efficiency of buildings. Combined experimental testing and numerical modeling of PCM-enhanced envelope components are two important aspects of the evaluation of their energy benefits. An innovative phase change material (nano-PCM) was developed with PCM encapsulated with expanded graphite (interconnected) nanosheets, which is highly conductive for enhanced thermal storage and energy distribution, and is shape-stable for convenient incorporation into lightweight building components. A wall with cellulose cavity insulation and prototype PCM-enhanced interior wallboards was built and tested in a natural exposure test (NET) facility in a hot-humid climate location. The test wall contained PCM wallboards and regular gypsum wallboard, for a side-by-side annual comparison study. Further, numerical modeling of the walls containing the nano-PCM wallboard was performed to determine its actual impact on wall-generated heating and cooling loads. The model was first validated using experimental data, and then used for annual simulations using Typical Meteorological Year (TMY3) weather data. This article presents the measured performance and numerical analysis evaluating the energy-saving potential of the nano-PCM-enhanced wallboard.
Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H
2016-09-01
Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope labeling, and liquid chromatography (LC) with electrospray ionization (ESI)-tandem mass spectrometry to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R² = 0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ∼10%, respectively. The derivative was stable for >36 h at 5 °C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618
Evaluation of the Performance of the Hybrid Lattice Boltzmann Based Numerical Flux
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.
2016-06-01
It is well known that the numerical scheme is a key factor in the stability and accuracy of a Navier-Stokes solver. Recently, a new hybrid lattice Boltzmann flux scheme (HLBFS) was developed by Shu's group. It combines two different LBFS schemes through a switch function, and it solves the Boltzmann equation instead of the Euler equation. In this article, the main objective is to evaluate the capability of this HLBFS scheme using our in-house cell-centered, hybrid-mesh-based Navier-Stokes code. Its performance is examined on several widely used benchmark test cases. Comparisons between computed and experimental results show that the scheme can capture shock waves as well as resolve boundary layers.
Numerical evaluation of two-center integrals over Slater type orbitals
NASA Astrophysics Data System (ADS)
Kurt, S. A.; Yükçü, N.
2016-03-01
Slater Type Orbitals (STOs), which are one of the types of exponential type orbitals (ETOs), are commonly used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of the translation method for STOs and some auxiliary functions developed by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in tables. Finally, we compare our numerical results with those known from the literature, and other details of the evaluation method are discussed.
Yonetani, Yusuke; Nitta, Kouichi; Matoba, Osamu
2010-02-01
We numerically evaluate the effect of photopolymer shrinkage on the bit error rate and signal-to-noise ratio in a reflection-type holographic data storage system with angular multiplexing. In the evaluation, we use a simple model in which the material is divided into layered structures and the shrinkage rate is proportional to the intensity in each layer. We demonstrate the effectiveness of the proposed model using experimental results from the recording of plane waves in both transmission-type and reflection-type holograms. Several shrinkage rates are used to evaluate the characteristics of angular multiplexing in the reflection-type holographic memory. PMID:20119021
NASA Astrophysics Data System (ADS)
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original oil in place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied: estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, recovering an estimated 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
Numerical Study and Performance Evaluation for Pulse Detonation Engine with Exhaust Nozzle
NASA Astrophysics Data System (ADS)
Kimura, Yuichiro; Tsuboi, Nobuyuki; Hayashi, A. Koichi; Yamada, Eisuke
This paper presents the propulsive performance evaluation of an H2/air Pulse Detonation Engine (PDE) with a converging-diverging exhaust nozzle by system-level modeling and multi-cycle numerical simulations. The study solves the two-dimensional, axisymmetric compressible Euler equations with a detailed chemical reaction model. First, the single-shot propulsive performance of a simplified PDE without an exhaust nozzle is evaluated to show the validity of the numerical and performance evaluation methods. The influences of the initial conditions, ignition energy, grid resolution, and scale effects on the propulsive performance are studied with multi-cycle simulations. The present results are compared with those calculated by Ma et al. and Harris et al.; the difference between their results and the present simulations is approximately 2-3%, because their calculations use a one-step chemical reaction model with a single specific heat ratio (one-γ). The effects of the specific heat ratio should be estimated for various nozzle configurations and flight conditions.
The numerical evaluation of the maximum-likelihood estimate of a subset of mixture proportions
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
Necessary and sufficient conditions are given for a maximum-likelihood estimate of a subset of mixture proportions. From these conditions, likelihood equations satisfied by the maximum-likelihood estimate are derived, and a successive-approximations procedure suggested by these equations is discussed for numerically evaluating the maximum-likelihood estimate. It is shown that, with probability one for large samples, this procedure converges locally to the maximum-likelihood estimate whenever a certain step size lies between zero and two. Furthermore, optimal rates of local convergence are obtained for a step size which is bounded below by a number between one and two.
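The successive-approximations procedure can be sketched as a fixed-point iteration on the proportion vector. The sketch below is a minimal illustration, not the authors' exact algorithm: the component densities, sample size, and step size omega are assumptions chosen for the example, which estimates the proportions of a two-component Gaussian mixture whose component densities are known.

```python
import numpy as np

def estimate_proportions(densities, omega=1.0, tol=1e-9, max_iter=500):
    """Successive approximations for mixture proportions.

    densities[i, j] = p_j(x_i), the j-th (known) component density at
    sample i; omega is the step size, taken in (0, 2)."""
    n, m = densities.shape
    alpha = np.full(m, 1.0 / m)              # start from equal proportions
    for _ in range(max_iter):
        mix = densities @ alpha              # mixture density at each sample
        # EM-style fixed point: alpha_j * mean_i[ p_j(x_i) / mixture(x_i) ]
        fp = alpha * (densities / mix[:, None]).mean(axis=0)
        new_alpha = alpha + omega * (fp - alpha)
        if np.abs(new_alpha - alpha).max() < tol:
            return new_alpha
        alpha = new_alpha
    return alpha

# Two-component Gaussian mixture with true proportions (0.3, 0.7).
rng = np.random.default_rng(0)
n = 100_000
x = np.where(rng.random(n) < 0.3,
             rng.normal(-2.0, 1.0, n),
             rng.normal(2.0, 1.0, n))
pdf = lambda x, mu: np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)
alpha_hat = estimate_proportions(np.column_stack([pdf(x, -2.0), pdf(x, 2.0)]))
```

With omega = 1 the update is the familiar EM step for mixture proportions, and the abstract's result indicates local convergence for any step size between zero and two.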
NASA Astrophysics Data System (ADS)
Ishikawa, Atsushi; Nakai, Hiromi
2016-04-01
The Gibbs free energy of hydration of a proton and the standard hydrogen electrode potential were evaluated using high-level quantum chemical calculations. The solvent effect was included using the cluster-continuum model, which treats short-range effects by quantum chemical calculations of proton-water complexes and long-range effects by a conductor-like polarizable continuum model. The harmonic solvation model (HSM) was employed to estimate the enthalpy and entropy contributions due to nuclear motions of the clusters by including the cavity-cluster interactions. Compared to the commonly used ideal gas model, the HSM treatment significantly improved the entropy contribution, showing a systematic convergence toward the experimental data.
NASA Astrophysics Data System (ADS)
Lin, C.; Gillespie, J.; Schuder, M. D.; Duberstein, W.; Beverland, I. J.; Heal, M. R.
2015-01-01
Low-power, and relatively low-cost, gas sensors have potential to improve understanding of intra-urban air pollution variation by enabling data capture over wider networks than is possible with 'traditional' reference analysers. We evaluated an Aeroqual Ltd. Series 500 semiconducting metal oxide O3 and an electrochemical NO2 sensor against UK national network reference analysers for more than 2 months at an urban background site in central Edinburgh. Hourly-average Aeroqual O3 sensor observations were highly correlated (R2 = 0.91) and of similar magnitude to observations from the UV-absorption reference O3 analyser. The Aeroqual NO2 sensor observations correlated poorly with the reference chemiluminescence NO2 analyser (R2 = 0.02), but the deviations between Aeroqual and reference analyser values ([NO2]Aeroq - [NO2]ref) were highly significantly correlated with concurrent Aeroqual O3 sensor observations [O3]Aeroq. This permitted effective linear calibration of the [NO2]Aeroq data, evaluated using 'hold out' subsets of the data (R2 ≥ 0.85). These field observations under temperate environmental conditions suggest that the Aeroqual Series 500 NO2 and O3 monitors have good potential to be useful ambient air monitoring instruments in urban environments provided that the O3 and NO2 gas sensors are calibrated against reference analysers and deployed in parallel.
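The hold-out calibration described above can be illustrated with a toy linear model. The sketch below uses synthetic data (the coefficients, noise levels, and concentration distributions are invented for illustration, not taken from the study) to fit a linear correction of the raw NO2 signal that includes the concurrent O3 reading, then scores it on a hold-out subset:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1500                                  # ~2 months of hourly averages

# Synthetic "truth" and sensor behaviour (all coefficients invented):
no2_ref = rng.gamma(4.0, 8.0, n)          # reference analyser NO2, ppb
o3_aeroq = rng.gamma(5.0, 6.0, n)         # Aeroqual O3 reading, ppb
# Raw NO2 sensor = truth + O3-dependent bias + noise, mimicking the
# reported correlation of ([NO2]Aeroq - [NO2]ref) with [O3]Aeroq.
no2_aeroq = no2_ref + 0.8 * o3_aeroq - 5.0 + rng.normal(0.0, 3.0, n)

# Fit the linear correction on the first half, score on the hold-out half.
train = np.arange(n) < n // 2
X = np.column_stack([np.ones(n), no2_aeroq, o3_aeroq])
beta, *_ = np.linalg.lstsq(X[train], no2_ref[train], rcond=None)
pred = X[~train] @ beta

ss_res = np.sum((no2_ref[~train] - pred) ** 2)
ss_tot = np.sum((no2_ref[~train] - no2_ref[~train].mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                # hold-out R^2
```

Because the invented bias is linear in O3, the corrected hold-out R² is high here, consistent in spirit with the R² ≥ 0.85 reported for the field data.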
NASA Astrophysics Data System (ADS)
Jahanshahi, Mohammad R.; Masri, Sami F.
2013-03-01
In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment. This approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is to use a robotic system that could perform autonomous crack detection and quantification. To reach this goal, several image-based crack detection approaches have been developed; however, the crack thickness quantification, which is an essential element for a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach which was previously developed by the authors. The proposed approach in this study utilizes depth perception to quantify crack thickness and, as opposed to most previous studies, needs no scale attachment to the region under inspection, which makes this approach ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that the new proposed approach outperforms the previously developed one.
NASA Astrophysics Data System (ADS)
Wen, Xiulan; Zhao, Yibing; Wang, Dongxia; Zhu, Xiaochu; Xue, Xiaoqiang
2013-03-01
Although significant progress has been made recently in the precision machining of free-form surfaces, inspection of such surfaces remains a difficult problem. To address the absence of specific standards for the verification of free-form surface profile, profile parameters for free-form surfaces are proposed by reference to ISO standards on form tolerances, taking into account their complexity and non-rotational symmetry. A non-uniform rational basis spline (NURBS) formulation is used to describe the free-form surface. The crucial issues in surface inspection and profile error verification are localization between the design coordinate system (DCS) and the measurement coordinate system (MCS), and searching for the closest points on the design model corresponding to the measured points. A quasi particle swarm optimization (QPSO) is proposed to search for the transformation parameters that implement localization between the DCS and MCS. A surface subdivision method, which searches within a recursively reduced range of the parameters u and v of the NURBS design model, is developed to find the closest points. To verify the effectiveness of the proposed methods, a design model is generated by NURBS, and the measurement data for a simulation example are generated by transforming the design model to an arbitrary position and orientation; parts are also machined from the design model and measured on a CMM. The profile errors of the simulation example and the actual parts are calculated by the proposed method. The results show that the evaluation precision of the free-form surface profile error is 10%-22% higher with the proposed method than with the CMM software, addressing the low precision of existing software in evaluating free-form surface profile error.
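The localization step can be illustrated with a standard particle swarm optimizer (a simplified stand-in for the paper's QPSO) on a 2D toy problem: a densely sampled design curve plays the role of the NURBS surface, the nearest sampled point approximates the closest-point search, and the swarm searches for the rigid transform that best re-aligns noisy "measured" points with the design model. All geometry and PSO settings here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Densely sampled "design model" (a planar curve standing in for the
# NURBS surface); the nearest sampled point approximates the
# closest-point search.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
design = np.column_stack([t, np.sin(t)])

def transform(pts, p):
    """Rigid 2D transform: rotate by p[2], then translate by (p[0], p[1])."""
    tx, ty, th = p
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return pts @ R.T + np.array([tx, ty])

# "Measured" points: design points moved to an arbitrary pose, plus noise.
idx = rng.choice(len(t), 50, replace=False)
measured = transform(design[idx], (0.4, -0.3, 0.15))
measured += rng.normal(0.0, 0.005, measured.shape)

def cost(p):
    """Mean squared distance from re-localized points to the design model."""
    back = transform(measured, p)
    d2 = ((back[:, None, :] - design[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

# Standard PSO over the three pose parameters (tx, ty, theta).
n_particles, n_iters = 40, 100
pos = rng.uniform([-1.0, -1.0, -0.5], [1.0, 1.0, 0.5], (n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

rms_error = float(np.sqrt(pbest_f.min()))   # residual after localization
```

After localization, the residual should be on the order of the measurement noise plus the curve-sampling error, rather than the initial pose offset.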
Analytical expression for gas-particle equilibration time scale and its numerical evaluation
NASA Astrophysics Data System (ADS)
Anttila, Tatu; Lehtinen, Kari E. J.; Dal Maso, Miikka
2016-05-01
We have derived a time scale τeq that describes the characteristic time for a single compound i with a saturation vapour concentration Ceff,i to reach thermodynamic equilibrium between the gas and particle phases. The equilibration process was assumed to take place via gas-phase diffusion and absorption into a liquid-like phase present in the particles. It was further shown that τeq combines two previously derived and often applied time scales τa and τs, which account for the changes in the gas and particle phase concentrations of i resulting from the equilibration, respectively. The validity of τeq was tested by comparing its predictions against results from a numerical model that explicitly simulates the transfer of i between the gas and particle phases. By conducting a large number of simulations in which the values of the key input parameters were varied randomly, it was found that τeq yields highly accurate results when i is a semi-volatile compound in the sense that the ratio μ of the total (gas and particle phase) concentration of i to the saturation vapour concentration of i is below unity. On the other hand, the comparison of analytical and numerical time scales revealed that using τa or τs alone to calculate the equilibration time scale may lead to considerable errors. It was further shown that τeq tends to overpredict the equilibration time when i behaves as a non-volatile compound in the sense that μ > 1. Despite its simplicity, the time scale derived here has useful applications. First, it can be used to assess whether semi-volatile compounds reach thermodynamic equilibrium during dynamic experiments that involve changes in compound volatility. Second, the time scale can be used in modeling of secondary organic aerosol (SOA) to check whether SOA-forming compounds equilibrate over a certain time interval.
An Experimental-Numerical Evaluation of Thermal Contact Conductance in Fin-Tube Heat Exchangers
NASA Astrophysics Data System (ADS)
Kim, Chang Nyung; Jeong, Jin; Youn, Baek; Kil, Seong Ho
The contact between the fin collar and the tube surface of a fin-tube heat exchanger is secured through mechanical expansion of the tubes. However, the characteristics of heat transfer through the interfaces between the tubes and fins have not been clearly understood, because the interfaces consist partially of metal-to-metal contact and partially of air. The objective of the present study is to develop a new experimental-numerical method for estimating the thermal contact resistance between the fin collar and the tube surface, and to evaluate the factors affecting the thermal contact resistance in a fin-tube heat exchanger. In this study, the heat transfer characteristics of actual heat exchanger assemblies were tested in a vacuum chamber using water as the internal fluid, and a finite difference numerical scheme was employed to reduce the experimental data for the evaluation of the thermal contact conductance. The study was conducted for fin-tube heat exchangers with a tube diameter of 7 mm and different tube expansion ratios, fin spacings, and fin types. The results show, with an appropriate error analysis, that these parameters as well as hydrophilic fin coating notably affect the thermal contact conductance. The thermal contact resistance was found to account for a fairly large portion of the total thermal resistance in a fin-tube heat exchanger, so careful consideration is needed in the manufacturing process of heat exchangers to reduce it.
Song, Kwang Hyun; Snyder, Karen Chin; Kim, Jinkoo; Li, Haisen; Ning, Wen; Rusnac, Robert; Jackson, Paul; Gordon, James; Siddiqui, Salim M; Chetty, Indrin J
2016-01-01
2.5 MV electronic portal imaging, available on Varian TrueBeam machines, was characterized using various phantoms in this study. Its low-contrast detectability, spatial resolution, and contrast-to-noise ratio (CNR) were compared with those of conventional 6 MV and kV planar imaging. Scatter in large patient bodies was simulated by adding solid water slabs along the beam path. The 2.5 MV imaging mode was also evaluated using clinically acquired images from 24 patients for the sites of brain, head and neck, lung, and abdomen. With respect to 6 MV, the 2.5 MV mode achieved higher contrast and preserved sharpness of bony structures with only half the imaging dose. The quality of 2.5 MV imaging was comparable to that of kV imaging when the lateral separation of the patient was greater than 38 cm, whereas kV image quality degraded rapidly as patient separation increased. Based on the results of the patient images, 2.5 MV imaging was better suited for cranial and extracranial SRS than 6 MV imaging. PMID:27455505
Ratcliff, Laura E; Grisanti, Luca; Genovese, Luigi; Deutsch, Thierry; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang; Beljonne, David; Cornil, Jérôme
2015-05-12
A fast and accurate scheme has been developed to evaluate two key molecular parameters (on-site energies and transfer integrals) that govern charge transport in organic supramolecular architecture devices. The scheme is based on a constrained density functional theory (CDFT) approach implemented in the linear-scaling BigDFT code that exploits a wavelet basis set. The method has been applied to model disordered structures generated by force-field simulations. The role of the environment on the transport parameters has been taken into account by building large clusters around the active molecules involved in the charge transfer. PMID:26574411
Numerical evaluation of the Feynman integral-over-paths in real and imaginary-time
NASA Astrophysics Data System (ADS)
Register, L. F.; Stroscio, M. A.; Littlejohn, M. A.
New techniques are described for Monte Carlo evaluation of the propagation of quantum mechanical systems in both real and imaginary time using the Feynman integral-over-paths formulation of quantum mechanics. For imaginary-time calculations, path translation is used to augment the technique of Lawande et al. This simple yet powerful technique allows the equilibrium probability density to be accurately evaluated in the presence of multiple potential wells. It is shown that path translation permits the calculation of the unknown ground-state energy of one confining potential by comparison with the known ground-state energy of another. A double finite-square-well potential and a finite-square-well/parabolic-well pair are presented as examples. For real-time calculations, a weighted analytical averaging of the exponential of the classical action is performed over a region of paths. This "windowed action" has both real and imaginary components. The imaginary component yields an exponentially decaying probability for selecting paths, thereby providing a basis for the Monte Carlo evaluation of the real-time integral-over-paths. Examples of a wave packet in a parabolic well and a wave packet impinging upon a potential barrier are considered.
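As background, a minimal imaginary-time path-integral Monte Carlo loop (generic Metropolis sampling, not the path-translation or windowed-action techniques described above) illustrates how equilibrium properties are estimated from a discretized Euclidean action; here it recovers the ground-state ⟨x²⟩ of a harmonic well. All numerical settings are assumptions for the example.

```python
import numpy as np

# Discretized Euclidean action for V(x) = x^2/2 (hbar = m = omega = 1):
#   S = sum_k [ (x_{k+1} - x_k)^2 / (2*dt) + dt * V(x_k) ],  periodic in k.
rng = np.random.default_rng(3)
beta, n_slices = 10.0, 100        # inverse temperature, time slices
dt = beta / n_slices
path = np.zeros(n_slices)

def local_action(k, x):
    """Terms of the action that involve slice k if path[k] were x."""
    xm = path[(k - 1) % n_slices]
    xp = path[(k + 1) % n_slices]
    return ((x - xm) ** 2 + (xp - x) ** 2) / (2.0 * dt) + dt * 0.5 * x * x

samples = []
for sweep in range(3000):
    for k in range(n_slices):
        x_new = path[k] + rng.uniform(-0.5, 0.5)
        dS = local_action(k, x_new) - local_action(k, path[k])
        if dS < 0.0 or rng.random() < np.exp(-dS):   # Metropolis test
            path[k] = x_new
    if sweep >= 500:                                 # discard burn-in
        samples.append(np.mean(path ** 2))

x2 = float(np.mean(samples))   # estimate of <x^2>; 1/2 for the ground state
```

At beta = 10 the thermal state is essentially the ground state, so the sampled ⟨x²⟩ should land near 0.5 up to discretization and statistical error.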
Evaluation of numerical sediment quality targets for the St. Louis River Area of Concern
Crane, J.L.; MacDonald, D.D.; Ingersoll, C.G.; Smorong, D.E.; Lindskoog, R.A.; Severn, C.G.; Berger, T.A.; Field, L.J.
2002-01-01
Numerical sediment quality targets (SQTs) for the protection of sediment-dwelling organisms have been established for the St. Louis River Area of Concern (AOC), 1 of 42 current AOCs in the Great Lakes basin. The two types of SQTs were established primarily from consensus-based sediment quality guidelines. Level I SQTs are intended to identify contaminant concentrations below which harmful effects on sediment-dwelling organisms are unlikely to be observed. Level II SQTs are intended to identify contaminant concentrations above which harmful effects on sediment-dwelling organisms are likely to be observed. The predictive ability of the numerical SQTs was evaluated using the matching sediment chemistry and toxicity data set for the St. Louis River AOC. This evaluation involved determination of the incidence of toxicity to amphipods (Hyalella azteca) and midges (Chironomus tentans) within five ranges of Level II SQT quotients (i.e., mean probable effect concentration quotients [PEC-Qs]). The incidence of toxicity was determined based on the results of 10-day toxicity tests with amphipods (endpoints: survival and growth) and 10-day toxicity tests with midges (endpoints: survival and growth). For both toxicity tests, the incidence of toxicity increased as the mean PEC-Q ranges increased. The incidence of toxicity observed in these tests was also compared to that for other geographic areas in the Great Lakes region and in North America for 10- to 14-day amphipod (H. azteca) and 10- to 14-day midge (C. tentans or C. riparius) toxicity tests. In general, the predictive ability of the mean PEC-Qs was similar across geographic areas. The results of these predictive ability evaluations indicate that collectively the mean PEC-Qs provide a reliable basis for classifying sediments as toxic or not toxic in the St. Louis River AOC, in the larger geographic areas of the Great Lakes, and elsewhere in North America.
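A mean PEC-Q is simply the average of the ratios of measured contaminant concentrations to their probable effect concentrations. A minimal sketch (the measured concentrations are invented, and the PEC values shown are illustrative placeholders rather than the study's numbers):

```python
# Probable effect concentrations (mg/kg dry weight) and measured
# concentrations; both are illustrative placeholders, not study data.
pec = {"cadmium": 4.98, "lead": 128.0, "zinc": 459.0}
measured = {"cadmium": 2.5, "lead": 210.0, "zinc": 380.0}

# Mean PEC quotient: average of concentration / PEC over the analytes.
quotients = [measured[m] / pec[m] for m in pec]
mean_pec_q = sum(quotients) / len(quotients)
# Higher mean PEC-Q ranges correspond to a higher incidence of toxicity
# in the amphipod and midge tests described above.
```

Samples are then binned by mean PEC-Q range, and the incidence of toxicity is tabulated within each bin.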
Numerical evaluation of the groundwater drainage system for underground storage caverns
NASA Astrophysics Data System (ADS)
Park, Eui Seob; Chae, Byung Gon
2015-04-01
A novel concept for storing cryogenic liquefied natural gas in a lined hard-rock cavern has been developed and tested for several years as an alternative. In this concept, groundwater in the rock mass around the cavern has to be fully drained during the early stages of construction and operation to avoid possible adverse effects of groundwater near the cavern. The rock mass is then re-saturated to form an ice ring, the zone around the cavern in which ice replaces water in joints within the frozen rock mass. The drainage system is composed of a drainage tunnel excavated beneath the cavern and drain holes drilled from the rock surface of the drainage tunnel. To de-saturate the rock mass around the cavern sufficiently, the position and horizontal spacing of the drain holes should be designed efficiently. In this paper, a series of numerical study results related to the drainage system of the full-scale cavern are presented. The rock in the study area consists mainly of banded gneiss and mica schist. The gneiss is slightly weathered and contains few joints and fractures. The schist contains several well-developed, mainly vertical schistosities, so that vertical joints are better developed than horizontal ones in the area. Lugeon tests revealed that the upper aquifer and bedrock are divided at a depth of 40-50 m below the surface. The groundwater level was observed in twenty monitoring wells and interpolated over the whole area. A numerical study using Visual Modflow and Seep/W was performed to evaluate the efficiency of the drainage system for an underground liquefied natural gas storage cavern in two hypothetically designed layouts and to determine the design parameters. In the Modflow analysis, the groundwater flow change in an unconfined aquifer was simulated during excavation of the cavern and operation of the drainage system. In the Seep/W analysis, the amount of seepage and drainage was also estimated in a representative vertical section of each cavern. From the results
SEQUESTRATION OF METALS IN ACTIVE CAP MATERIALS: A LABORATORY AND NUMERICAL EVALUATION
Dixon, K.; Knox, A.
2012-02-13
Active capping involves the use of capping materials that react with sediment contaminants to reduce their toxicity or bioavailability. Although several amendments have been proposed for use in active capping systems, little is known about their long-term ability to sequester metals. Recent research has shown that the active amendment apatite has potential application to metal-contaminated sediments. The focus of this study was to evaluate the effectiveness of apatite in sequestering metal contaminants through short-term laboratory column studies in conjunction with predictive numerical modeling. A breakthrough column study was conducted using North Carolina apatite as the active amendment. Under saturated conditions, a spike solution containing elemental As, Cd, Co, Se, Pb, Zn, and a non-reactive tracer was injected into the column. A sand column was tested under similar conditions as a control. Effluent water samples were periodically collected from each column for chemical analysis. In the apatite column, breakthrough of each metal was substantially delayed relative to the non-reactive tracer, and also relative to the sand column. Finally, a simple 1-D numerical model was created to qualitatively predict the long-term performance of apatite based on the findings from the column study. The modeling showed that apatite could delay the breakthrough of some metals for hundreds of years under typical groundwater flow velocities.
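The delayed breakthrough seen in the column study is what a 1-D advection-dispersion model with linear retardation predicts. The sketch below is not the authors' model, and all parameter values are invented for illustration; it compares breakthrough times for a conservative tracer (R = 1) and a strongly sorbed metal (R >> 1):

```python
import numpy as np

# Explicit finite-difference sketch of 1-D advection-dispersion with
# linear retardation:  R dC/dt = -v dC/dx + D d2C/dx2.
# Sorption (e.g. onto apatite) gives R > 1 and delays breakthrough
# relative to a conservative tracer. All values are illustrative.
L, nx = 0.3, 150                 # column length (m), grid cells
dx = L / nx
v, D = 1e-5, 2e-8                # pore velocity (m/s), dispersion (m2/s)
dt = 0.2 * dx / v                # CFL-limited time step

def breakthrough_time(R, c_in=1.0, threshold=0.5):
    """Time for the outlet concentration to reach threshold * c_in."""
    c = np.zeros(nx)
    t = 0.0
    while c[-1] < threshold * c_in:
        # Upwind advection with fixed inlet concentration c_in.
        adv = -v * np.diff(np.concatenate(([c_in], c))) / dx
        # Central second difference; zero-gradient outlet boundary.
        disp = D * np.diff(np.concatenate(([c_in], c, [c[-1]])), 2) / dx**2
        c += dt * (adv + disp) / R
        t += dt
    return t

t_tracer = breakthrough_time(R=1.0)    # conservative tracer
t_metal = breakthrough_time(R=50.0)    # strongly sorbed metal
```

With linear retardation, the breakthrough curve is simply stretched in time by the factor R, so the sorbed metal arrives roughly R times later than the tracer.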
Evaluation and Numerical Simulation of Tsunami for Coastal Nuclear Power Plants of India
Sharma, Pavan K.; Singh, R.K.; Ghosh, A.K.; Kushwaha, H.S.
2006-07-01
The tsunami generated on December 26, 2004 by the magnitude-9.3 Sumatra earthquake resulted in inundation at various coastal sites of India. The site selection and design of Indian nuclear power plants demand the evaluation of run-up and of structural barriers for the coastal plants. It is also desirable to evaluate early warning systems for tsunamigenic earthquakes. Tsunamis originate from submarine faults, underwater volcanic activity, sub-aerial landslides impinging on the sea, and submarine landslides. In the case of a submarine earthquake-induced tsunami, the wave is generated in the fluid domain by displacement of the seabed. A tsunami has three phases: generation, propagation, and run-up. The Reactor Safety Division (RSD) of Bhabha Atomic Research Centre (BARC), Trombay has initiated computational simulation of all three phases (tsunami source generation, propagation, and run-up evaluation) for the protection of public life, property, and the various industrial infrastructures located in the coastal regions of India. These studies could be effectively utilized for the design and implementation of an early warning system for the coastal region of the country, apart from catering to the needs of Indian nuclear installations. This paper presents some results for tsunami waves based on different analytical/numerical approaches with shallow water wave theory. (authors)
A Numerical Evaluation on the Viability of Heap Thermophilic Bioleaching of Chalcopyrite
NASA Astrophysics Data System (ADS)
Vilcaez, J.; Suto, K.; Inoue, C.
2007-03-01
The present numerical evaluation explores the interactions among the many variables governing the mass and heat transport processes that take place in a heap thermophilic bioleaching system. The necessity of using mesophiles together with thermophiles is demonstrated by tracing the activity of both types of microorganisms individually at each point throughout the heap. The role of key variables, such as the fraction of FeS2 leached per CuFeS2, was quantified and its importance highlighted. In this evaluation, the heat transfer process plays the main role because of the heat accumulation required to maintain the heap temperature within the range of 60 °C to 80 °C, where thermophilic microorganisms are capable of completing the dissolution of copper left unfinished by mesophilic microorganisms at 30 °C. The evaluation took into consideration biological activity as a function of the temperature in the heap, heat loss due to conduction and advection from the top and bottom of the heap, and mass transfer between the gas and liquid phases as a function of temperature. The exothermic nature of the leaching reactions of CuFeS2 and FeS2 makes the system autothermal.
Numerical simulation and fracture evaluation method of dual laterolog in organic shale
NASA Astrophysics Data System (ADS)
Tan, Maojin; Wang, Peng; Li, Jun; Liu, Qiong; Yang, Qinshan
2014-01-01
Fracture identification and parameter evaluation are important in log interpretation of organic shale, especially fracture evaluation from conventional logs when imaging logs are not available. It is therefore helpful to study the dual laterolog responses of fractured shale reservoirs. First, a physical model is set up according to the properties of organic shale, and a three-dimensional finite element method (FEM) based on the principle of dual laterolog is introduced and applied to simulate dual laterolog responses for various shale models, which can help identify fractures in shale formations. Then, through a number of numerical simulations of dual laterolog for shale models with different base rock resistivities and fracture openings, the corresponding equations for the various cases are constructed, from which the fracture porosity can be calculated. Finally, we apply the proposed methodology to a case study of organic shale, calculating the fracture porosity and fracture opening. The results are consistent with the fracture parameters derived from full borehole micro-resistivity imaging (FMI). This indicates that the method is applicable to fracture evaluation of organic shale.
Numerical evaluation of the radiation from unbaffled, finite plates using the FFT
NASA Technical Reports Server (NTRS)
Williams, E. G.
1983-01-01
An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.
1996-01-01
A numerical model of combined conductive, radiative and convective heat transfer in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot- and cold-zone set-point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reversed usage of the hot and cold zones was simulated to aid the choice of a proper orientation of the crystal/melt interface with respect to the residual acceleration vector, without actually changing the furnace location on board the orbiter. It appears that an additional booster heater would be extremely helpful to ensure the desired temperature gradient when the hot and cold zones are reversed. Further efforts are required to investigate the advantages and disadvantages of a symmetrical furnace design (i.e., with hot and cold zones of similar length).
Numerical evaluation of the Bose-ghost propagator in minimal Landau gauge on the lattice
NASA Astrophysics Data System (ADS)
Cucchieri, Attilio; Mendes, Tereza
2016-07-01
We present numerical details of the evaluation of the so-called Bose-ghost propagator in lattice minimal Landau gauge, for the SU(2) case in four Euclidean dimensions. This quantity has been proposed as a carrier of the confining force in the Gribov-Zwanziger approach and, as such, its infrared behavior could be relevant for the understanding of color confinement in Yang-Mills theories. Also, its nonzero value can be interpreted as direct evidence of Becchi-Rouet-Stora-Tyutin symmetry breaking, which is induced when restricting the functional measure to the first Gribov region Ω. Our simulations are done for lattice volumes up to 120^4 and for physical lattice extents up to 13.5 fm. We investigate the infinite-volume and continuum limits.
NASA Astrophysics Data System (ADS)
Chillara, Vamshi Krishna; Lissenden, Cliff J.
2016-01-01
Interest in using higher harmonic generation of ultrasonic guided wave modes for nondestructive evaluation continues to grow as advances in the understanding of nonlinear guided wave propagation enable further analysis. The combination of the attractive properties of guided waves with those of higher harmonic generation provides unique potential for the characterization of incipient damage, particularly in plate and shell structures. Guided waves can propagate relatively long distances, provide access to hidden structural components, have various displacement polarizations, and provide many opportunities for mode conversion due to their multimode character. Moreover, higher harmonic generation is sensitive to microstructural features such as dislocation density, precipitates, inclusions, and voids. We review recent advances in the theory of nonlinear guided waves, as well as numerical simulations and experiments that demonstrate their utility.
Numerical evaluation of a 13.5-nm high-brightness microplasma extreme ultraviolet source
Hara, Hiroyuki; Arai, Goki; Dinh, Thanh-Hung; Higashiguchi, Takeshi; Jiang, Weihua; Miura, Taisuke; Endo, Akira; Ejima, Takeo; Li, Bowen; Dunne, Padraig; O'Sullivan, Gerry; Sunahara, Atsushi
2015-11-21
The extreme ultraviolet (EUV) emission and its spatial distribution as well as plasma parameters in a microplasma high-brightness light source are characterized by the use of a two-dimensional radiation hydrodynamic simulation. The expected EUV source size, which is determined by the expansion of the microplasma due to hydrodynamic motion, was evaluated to be 16 μm (full width) and was almost reproduced by the experimental result which showed an emission source diameter of 18–20 μm at a laser pulse duration of 150 ps [full width at half-maximum]. The numerical simulation suggests that high brightness EUV sources should be produced by use of a dot target based microplasma with a source diameter of about 20 μm.
Design and numerical evaluation of a volume coil array for parallel MR imaging at ultrahigh fields
Pang, Yong; Wong, Ernest W.H.; Yu, Baiying
2014-01-01
In this work, we propose and investigate a volume coil array design method using different types of birdcage coils for MR imaging. Unlike conventional radiofrequency (RF) coil arrays, whose array elements are surface coils, the proposed volume coil array consists of a set of independent volume coils: a conventional birdcage coil, a transverse birdcage coil, and a helix birdcage coil. The magnetic fluxes of these three birdcage coils are intrinsically cancelled, yielding a highly decoupled volume coil array. In contrast to conventional non-array volume coils, the volume coil array should improve the MR signal-to-noise ratio (SNR) while also gaining the capability of parallel imaging. The volume coil array was evaluated at the ultrahigh field of 7T using FDTD numerical simulations, and g-factor maps at different acceleration rates were calculated to investigate its parallel imaging performance. PMID:24649435
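The g-factor maps mentioned above are conventionally computed from the coil sensitivity matrix via the standard SENSE formula; this sketch assumes that formulation, and the sensitivity values below are made up for illustration:

```python
import numpy as np

def g_factor(S, psi=None):
    """SENSE g-factor for one set of aliased voxels.

    S   : (n_coils, n_aliased) complex coil-sensitivity matrix
    psi : (n_coils, n_coils) noise covariance (identity if None)
    Uses g_i = sqrt([(S^H psi^-1 S)^-1]_ii * [S^H psi^-1 S]_ii).
    """
    n_coils, _ = S.shape
    if psi is None:
        psi = np.eye(n_coils)
    A = S.conj().T @ np.linalg.inv(psi) @ S
    Ainv = np.linalg.inv(A)
    return np.sqrt(np.real(np.diag(Ainv) * np.diag(A)))

# Orthogonal sensitivities -> no noise amplification (g = 1 everywhere).
S = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(g_factor(S))  # [1. 1.]
```

Higher g-factor values flag voxels where the unfolding amplifies noise, which is why the maps are reported per acceleration rate.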
Numerical surrogates for human observers in myocardial motion evaluation from SPECT image
Marin, Thibault; Kalayeh, Mahdi M.; Parages, Felipe M.; Brankov, Jovan G.
2014-01-01
In medical imaging, the gold standard for image-quality assessment is a task-based approach in which one evaluates human observer performance for a given diagnostic task (e.g., detection of a myocardial perfusion or motion defect). To facilitate practical task-based image-quality assessment, model observers are needed as approximate surrogates for human observers. In cardiac-gated SPECT imaging, diagnosis relies on evaluation of myocardial motion as well as perfusion. Model observers for the perfusion-defect detection task have been studied previously, but little effort has been devoted to the development of a model observer for cardiac-motion defect detection. In this work we describe two model observers for predicting human observer performance in the detection of cardiac-motion defects. Both proposed methods rely on motion features extracted using a previously reported deformable mesh model for myocardial motion estimation. The first method is based on a Hotelling linear discriminant, similar in concept to that commonly used for perfusion-defect detection. In the second method, based on relevance vector machine (RVM) regression, we compute average human observer performance by first directly predicting individual human observer scores and then using multireader receiver operating characteristic (ROC) analysis. Our results suggest that the proposed RVM model observer can predict human observer performance accurately, while the new Hotelling motion-defect detector is somewhat less effective. PMID:23981533
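A Hotelling linear discriminant of the kind used in the first method can be sketched as follows, assuming generic motion-feature vectors; the feature extraction itself (the deformable mesh model) is not reproduced here:

```python
import numpy as np

def hotelling_template(x_absent, x_present):
    """Hotelling template w = S_w^{-1} (mean_present - mean_absent),
    estimated from feature samples (rows = cases, columns = features)."""
    s0 = np.cov(x_absent, rowvar=False)
    s1 = np.cov(x_present, rowvar=False)
    s_w = 0.5 * (s0 + s1)                     # pooled intraclass scatter
    dmean = x_present.mean(axis=0) - x_absent.mean(axis=0)
    return np.linalg.solve(s_w, dmean)

def hotelling_score(w, x):
    """Linear test statistic for each case (row) in x."""
    return x @ w

# Synthetic features: defect-present cases have a shifted first feature.
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, (200, 3))
x1 = rng.normal(0.0, 1.0, (200, 3)) + np.array([1.0, 0.0, 0.0])
w = hotelling_template(x0, x1)
```

The resulting scores feed directly into an ROC analysis, as in the paper's performance comparison.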
Evaluation of Site Effects Using Numerical and Experimental Analyses in Città di Castello (Italy)
NASA Astrophysics Data System (ADS)
Pergalani, F.; de Franco, R.; Compagnoni, M.; Caielli, G.
In this paper the results of numerical and experimental analyses at a site in the Umbria Region (Città di Castello, PG), aimed at the evaluation of site effects, are shown. The aim of the work was to compare the two types of analysis and to provide methodologies that may be used at the level of urban planning to take these aspects into account. Therefore a series of geological, geomorphological (1:5,000 scale), geotechnical and seismic analyses was carried out to identify the areas affected by local effects and to characterize the lithotechnical units. The expected seismic inputs were identified, and 2D numerical analyses (Quad4M; Hudson et al., 1993) were performed. An experimental analysis, using recordings of small events, was also carried out. The results of the two approaches were expressed in terms of elastic pseudo-acceleration spectra and amplification factors, defined as the ratio between the spectral intensities (Housner, 1952) of output and input, calculated from the pseudo-velocity spectra over the periods 0.1-0.5 s and 0.1-2.5 s. The results were analyzed and compared in order to provide a methodology that is both exhaustive and precise. The conclusions can be summarized as follows: the results of the two approaches are coherent; the approaches differ in that the numerical analysis is easy and quick to apply but, in this case, the 2D analysis simplifies the real geometry, while the experimental analysis allows the 3D conditions to be considered but, because the recorded events have low energy, cannot capture the nonlinear behavior of the materials, and recordings must be collected over a period that depends on the seismicity of the region (one month to two years); integrating the two methodologies allows a complete analysis that exploits the advantages of both. Housner G.W., Spectrum Intensities of strong
NASA Astrophysics Data System (ADS)
Shrestha, D. L.; Robertson, D. E.; Wang, Q. J.; Pagano, T. C.; Hapuarachchi, H. A. P.
2013-05-01
The quality of precipitation forecasts from four Numerical Weather Prediction (NWP) models is evaluated over the Ovens catchment in Southeast Australia. Precipitation forecasts are compared with observed precipitation at point and catchment scales and at different temporal resolutions. The four models evaluated are from the Australian Community Climate Earth-System Simulator (ACCESS): ACCESS-G with an 80 km resolution, ACCESS-R at 37.5 km, ACCESS-A at 12 km, and ACCESS-VT at 5 km. The skill of the NWP precipitation forecasts varies considerably between rain gauging stations. In general, the high spatial resolution (ACCESS-A and ACCESS-VT) and regional (ACCESS-R) NWP models overestimate precipitation in dry, low-elevation areas and underestimate it in wet, high-elevation areas. The global model (ACCESS-G) consistently underestimates precipitation at all stations, and the bias increases with station elevation. The skill varies with forecast lead time and, in general, decreases with increasing lead time. When evaluated at finer spatial and temporal resolution (e.g. 5 km, hourly), the precipitation forecasts appear to have very little skill. There is moderate skill at short lead times when the forecasts are averaged up to daily and/or catchment scale. The precipitation forecasts fail to reproduce the diurnal cycle seen in observed precipitation. Significant sampling uncertainty in the skill scores suggests that more data are required to obtain a reliable evaluation of the forecasts. The non-smooth decay of skill with forecast lead time can be attributed to the diurnal cycle in the observations and to sampling uncertainty. Future work is planned to assess the benefits of using the NWP rainfall forecasts for short-term streamflow forecasting. Our findings suggest that it is necessary to remove the systematic biases in rainfall forecasts, particularly those from low resolution models, before they can be used for streamflow forecasting.
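Forecast skill against station observations is often summarized with categorical scores such as the equitable threat score; the abstract does not name its exact metrics, so the following is a generic illustration with made-up rainfall values:

```python
import numpy as np

def equitable_threat_score(fcst, obs, threshold=1.0):
    """Gilbert / equitable threat score for precipitation exceeding a
    threshold (mm). ETS = (hits - hits_random) / (hits + misses +
    false_alarms - hits_random); 1 is perfect, 0 is no skill over chance."""
    f = fcst >= threshold
    o = obs >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    hits_random = (hits + misses) * (hits + false_alarms) / f.size
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom != 0 else np.nan

obs  = np.array([0.0, 2.0, 5.0, 0.0, 1.5, 0.0])   # illustrative gauge data
fcst = np.array([0.2, 2.5, 4.0, 1.2, 0.1, 0.0])   # illustrative forecast
```

Averaging forecasts to daily or catchment scale before scoring, as done in the study, typically raises such scores because timing and placement errors partially cancel.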
NASA Astrophysics Data System (ADS)
Shrestha, D. L.; Robertson, D. E.; Wang, Q. J.; Pagano, T. C.; Hapuarachchi, P.
2012-11-01
The quality of precipitation forecasts from four Numerical Weather Prediction (NWP) models is evaluated over the Ovens catchment in southeast Australia. Precipitation forecasts are compared with observed precipitation at point and catchment scales and at different temporal resolutions. The four models evaluated are from the Australian Community Climate Earth-System Simulator (ACCESS): ACCESS-G with an 80 km resolution, ACCESS-R at 37.5 km, ACCESS-A at 12 km, and ACCESS-VT at 5 km. The high spatial resolution NWP models (ACCESS-A and ACCESS-VT) appear to be relatively free of bias (i.e. <30%) for 24 h total precipitation forecasts. The low resolution models (ACCESS-R and ACCESS-G) have widespread systematic biases as large as 70%. When evaluated at finer spatial and temporal resolution (e.g. 5 km, hourly) against station observations, the precipitation forecasts appear to have very little skill. There is moderate skill at short lead times when the forecasts are averaged up to daily and/or catchment scale. The skill decreases with increasing lead time, and the global model ACCESS-G has no significant skill beyond 7 days. The precipitation forecasts fail to reproduce the diurnal cycle seen in observed precipitation. Significant sampling uncertainty in the skill scores suggests that more data are required to obtain a reliable evaluation of the forecasts. Future work is planned to assess the benefits of using the NWP rainfall forecasts for short-term streamflow forecasting. Our findings suggest that it is necessary to remove the systematic biases in rainfall forecasts, particularly those from low resolution models, before they can be used for streamflow forecasting.
A numerical model for the analysis and evaluation of global 137Cs fallout.
Shimada, Y; Morisawa, S; Inoue, Y
1996-02-01
Fallout 137Cs from atmospheric nuclear detonation tests has been monitored worldwide since the late 1950s. The deviation and the correlation among these monitoring data were analyzed, and their surface deposition characteristics were estimated with the compartment model developed in this research. In the analysis, the scale of space (i.e., size of each compartment) and the degree of detail (i.e., number of compartments) were statistically determined using the global distribution data of 137Cs. The mathematical model was evaluated by comparing the numerically simulated results with the fallout monitoring data, including the 137Cs concentration in sea water. The major findings of this research are that the deposition pattern of 137Cs depends on the latitude zone but not on the longitude; that the mathematical model is promising for evaluating the dynamic behavior of 137Cs in the global atmospheric environment and its surface deposition; that 137Cs has accumulated more in both the surface and deep ocean water of the North Pacific and North Atlantic Oceans than in other oceans; that the 137Cs inventory has been decreasing since its peak in 1965; and that the 137Cs inventory in the deep ocean water is decreasing more slowly than that in the surface ocean water. PMID:8567283
Numerical evaluation of seismic response of shallow foundation on loose silt and silty sand
NASA Astrophysics Data System (ADS)
Asgari, Ali; Golshani, Aliakbar; Bagheri, Mohsen
2014-03-01
This study presents the results of a set of numerical simulations, carried out with FLAC 2D, of sands containing plastic/non-plastic fines and of silts with relative densities of approximately 30-40% under different surcharges on a shallow foundation. Each model was subjected to three ground motion events, obtained by scaling the amplitudes of the El Centro (1940), Kobe (1995) and Kocaeli (1999) earthquakes. The dynamic behaviour of loose deposits underlying shallow foundations is evaluated through fully coupled nonlinear effective-stress dynamic analyses. Effects of nonlinear soil-structure interaction (SSI) were also considered by using interface elements. This parametric study evaluates the effects of soil type, structure weight, liquefiable soil layer thickness, and event parameters (e.g., moment magnitude (Mw), peak ground acceleration (PGA), PGV/PGA ratio, and duration of strong motion (D5-95)), as well as their interactions, on the seismic responses. Investigation of these parameters and their complex interactions can be a valuable tool to gain new insights for improved seismic design and construction.
NASA Astrophysics Data System (ADS)
Baierl, M.; Kordilla, J.; Reimann, T.; Dörfliger, N.; Sauter, M.; Geyer, T.
2012-04-01
This work deals with the analysis of pumping tests in strongly heterogeneous media. Pumping tests were performed in the catchment area of the Lez spring (South of France), which is composed of carbonate rocks. Pumping rates for the different tests varied between 0.04 l/s and 0.7 l/s, i.e. the radius of influence of the cone of depression is small. The investigated boreholes are characterised by tight rock, moderate fracturing and karstified zones. The observed drawdown curves are clearly influenced by the rock characteristics; individual drawdown curves show an S-shaped character. Data evaluation was performed with the solution approaches of Theis (1935) and Gringarten-Ramey (1974), which are implemented in the employed software AQTESOLV (Pro 4.0). Parameters were varied over reliable data ranges, taking into account values reported in the literature. The Theis method analyses unsteady flow in homogeneous confined aquifers. The Gringarten-Ramey solution describes the drawdown in a well connected to a single horizontal fracture. The Theis curve fails to represent the characteristics of nearly all of the measured drawdown curves, while the Gringarten-Ramey method shows moderate graphical fits with a small residual sum of squares between fitted and observed drawdown curves. This highlights the importance of heterogeneities in the hydraulic parameter field at the local scale. The determined hydraulic conductivities of the rock are in reasonable ranges, varying between 1E-04 m/s and 1E-08 m/s. Wellbore skin effects need to be discussed further in detail. While the analytical solutions are only valid for specific geometrical and hydraulic configurations, numerical models can be applied to simulate pumping tests in complex heterogeneous media with different boundary conditions. For that reason, a two-dimensional, axisymmetric numerical model, using COMSOL (Multiphysics 4.1), is set up. In a first step, the model is validated with the simulated curves from the analytical solutions under
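The Theis (1935) solution used in this analysis has the closed form s = Q/(4πT)·W(u) with u = r²S/(4Tt), where W is the well function. A minimal sketch, with illustrative parameter values that are not the Lez-site data:

```python
import math

def well_function(u, nmax=40):
    """Theis well function W(u) via its convergent series,
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!),
    adequate for the small u values typical of pumping tests."""
    w = -0.5772156649015329 - math.log(u)
    un, fact, sign = 1.0, 1.0, 1.0
    for n in range(1, nmax + 1):
        un *= u
        fact *= n
        w += sign * un / (n * fact)
        sign = -sign
    return w

def theis_drawdown(r, t, Q, T, S):
    """Drawdown s = Q / (4*pi*T) * W(u), u = r^2 * S / (4*T*t)."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Illustrative values only: Q = 0.5 l/s, T = 1e-4 m^2/s, S = 1e-4,
# observed at r = 10 m after t = 1 hour.
s = theis_drawdown(10.0, 3600.0, 5.0e-4, 1.0e-4, 1.0e-4)
```

Fitting T and S so that this curve matches observed drawdown is exactly what packages like AQTESOLV automate; its failure to fit here is what points to heterogeneity.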
A New Look at Stratospheric Sudden Warmings. Part II: Evaluation of Numerical Model Simulations
NASA Technical Reports Server (NTRS)
Charlton, Andrew J.; Polvani, Lorenza M.; Perlwitz, Judith; Sassi, Fabrizio; Manzini, Elisa; Shibata, Kiyotaka; Pawson, Steven; Nielsen, J. Eric; Rind, David
2007-01-01
The simulation of major midwinter stratospheric sudden warmings (SSWs) in six stratosphere-resolving general circulation models (GCMs) is examined. The GCMs are compared to a new climatology of SSWs, based on the dynamical characteristics of the events. First, the number, type, and temporal distribution of SSW events are evaluated. Most of the models show a lower frequency of SSW events than the climatology, which has a mean frequency of 6.0 SSWs per decade. Statistical tests show that three of the six models produce significantly fewer SSWs than the climatology, between 1.0 and 2.6 SSWs per decade. Second, four process-based diagnostics are calculated for all of the SSW events in each model. It is found that SSWs in the GCMs compare favorably with the dynamical benchmarks for SSWs established in the first part of the study. These results indicate that GCMs are capable of quite accurately simulating the dynamics required to produce SSWs, but at a lower frequency than the climatology. Further dynamical diagnostics hint that, in at least one case, this is due to a lack of meridional heat flux in the lower stratosphere. Even though the SSWs simulated by most GCMs are dynamically realistic when compared to the NCEP-NCAR reanalysis, the reasons for the relative paucity of SSWs in GCMs remain an important and open question.
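The SSW climatology referenced here rests on the wind-reversal criterion of Part I of this series (the zonal-mean zonal wind at 10 hPa, 60°N turning easterly in winter). A simplified sketch of such a detector follows; the separation rule is reduced to a single parameter and the input series is synthetic:

```python
import numpy as np

def find_ssw_onsets(u_6010, min_separation=20):
    """Flag onset days where the daily zonal-mean zonal wind at 10 hPa,
    60N (m/s) first turns easterly (u < 0), requiring westerlies on the
    previous day and a minimum gap between events. This is a simplified
    version of the wind-reversal criterion; the full definition also
    handles final warmings and the return to westerlies."""
    onsets = []
    last = -min_separation
    for day, u in enumerate(u_6010):
        if u < 0 and day - last >= min_separation:
            if day > 0 and u_6010[day - 1] >= 0:
                onsets.append(day)
                last = day
    return onsets

# Synthetic winter: 30 westerly days, a 5-day reversal, then recovery.
u = np.concatenate([np.full(30, 20.0), np.full(5, -5.0), np.full(30, 15.0)])
print(find_ssw_onsets(u))  # [30]
```

Running such a detector over each model's winters gives the per-decade SSW frequencies that the statistical tests in the abstract compare.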
Jin, J.-Y.; Ryu, Samuel; Faber, Kathleen; Mikkelsen, Tom; Chen Qing; Li Shidong; Movsas, Benjamin
2006-12-15
The purpose of this study was to evaluate the accuracy of a two-dimensional (2D) to three-dimensional (3D) image-fusion-guided target localization system and a mask-based stereotactic system for fractionated stereotactic radiotherapy (FSRT) of cranial lesions. A commercial x-ray image guidance system originally developed for extracranial radiosurgery was used for FSRT of cranial lesions. The localization accuracy was quantitatively evaluated with an anthropomorphic head phantom implanted with eight small radiopaque markers (BBs) in different locations. The accuracy and its clinical reliability were also qualitatively evaluated for a total of 127 fractions in 12 patients with both kV x-ray images and MV portal films. The image-guided system was then used as a standard to evaluate the overall uncertainty and reproducibility of the head-mask-based stereotactic system in these patients. The phantom study demonstrated that the maximal random error of the image-guided target localization was ±0.6 mm in each direction in terms of the 95% confidence interval (CI). The systematic error varied with measurement methods. It was approximately 0.4 mm, mainly in the longitudinal direction, for the kV x-ray method. There was a 0.5 mm systematic difference, primarily in the lateral direction, between the kV x-ray and the MV portal methods. The patient study suggested that the accuracy of the image-guided system in patients was comparable to that in the phantom. The overall uncertainty of the mask system was ±4 mm, and the reproducibility was ±2.9 mm in terms of 95% CI. The study demonstrated that the image guidance system provides accurate and precise target positioning.
Numerical simulation of small perturbation transonic flows
NASA Technical Reports Server (NTRS)
Seebass, A. R.; Yu, N. J.
1976-01-01
The results of a systematic study of small perturbation transonic flows are presented. Both the flow over thin airfoils and the flow over wedges were investigated. Various numerical schemes were employed in the study. The prime goal of the research was to determine the efficiency of various numerical procedures by accurately evaluating the wave drag, both by computing the pressure integral around the body and by integrating the momentum loss across the shock. Numerical errors involved in the computations that affect the accuracy of drag evaluations were analyzed. The factors that affect numerical stability and the rate of convergence of the iterative schemes were also systematically studied.
NASA Astrophysics Data System (ADS)
Charles, Winsbert Curt
Seismic protection techniques utilizing specialized energy dissipation devices within lateral resisting frames have been used successfully to limit inelastic deformation in reinforced concrete buildings by increasing damping and/or altering the stiffness of these structures. However, there is a need to investigate and develop systems with self-centering capabilities: systems that are able to assist in returning a structure to its original position after an earthquake. In this project, the efficacy of a shape memory alloy (SMA) based device as a structural recentering device is evaluated through numerical analysis using the OpenSees framework. OpenSees is a software framework for simulating the seismic response of structural and geotechnical systems; it has been developed as the computational platform for research in performance-based earthquake engineering at the Pacific Earthquake Engineering Research Center (PEER). A non-ductile reinforced concrete building, modelled in OpenSees and verified against available experimental data, is used for the analysis in this study. The model is fitted with tension/compression (TC) SMA devices. The performance of the SMA recentering device is evaluated for a set of near-field and far-field ground motions. Critical performance measures of the analysis include residual displacements, interstory drift and acceleration (horizontal and vertical) for the different types of ground motions. The results show that the TC device's performance is unaffected by the type of ground motion. The analysis also shows that including the device in the lateral force resisting system of the building resulted in a 50% decrease in peak horizontal displacement and inter-story drift, the elimination of residual deformations, and an increase in acceleration of up to 110%.
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Takase, Kazuyuki
Thermal-hydraulic design of current boiling water reactors (BWRs) is performed with subchannel analysis codes that incorporate correlations based on empirical results, including actual-size tests. For the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) core, an actual-size test of an embodiment of its design would therefore be required to confirm or modify such correlations. In this situation, a method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is desirable, because such tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code using an advanced interface tracking method, TPFIT, was developed to calculate detailed information on the two-phase flow. In this paper, we first verify the TPFIT code by comparing it with existing two-channel air-water mixing experimental results. Second, the TPFIT code is applied to the simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing is evaluated using the detailed numerical simulation data. The data indicate that the pressure difference between fluid channels is responsible for the fluid mixing, and thus the effects of the time-averaged pressure difference and its fluctuations must be incorporated in the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the evaluation precision of the existing two-phase flow correlations for fluid mixing is relatively low.
Critical evaluation of three hemodynamic models for the numerical simulation of intra-stent flows.
Chabi, Fatiha; Champmartin, Stéphane; Sarraf, Christophe; Noguera, Ricardo
2015-07-16
We evaluate here three hemodynamic models used for the numerical simulation of bare and stented artery flows. We focus on two flow features responsible for intra-stent restenosis: the wall shear stress and the re-circulation lengths around a stent. The studied models are the Poiseuille profile, the simplified pulsatile profile and the complete pulsatile profile based on the analysis of Womersley. The flow rate of blood in a human left coronary artery is used to compute the velocity profiles. "Ansys Fluent 14.5" is used to solve the Navier-Stokes and continuity equations. As expected, our results show that the Poiseuille profile is questionable for simulating the complex flow dynamics involved in intra-stent restenosis. Both pulsatile models give similar results close to the strut but diverge far from it. However, the computational time for the complete pulsatile model is five times that of the simplified pulsatile model. Considering the additional "cost" of the complete model, we recommend using the simplified pulsatile model for future intra-stent flow simulations. PMID:26044195
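The "simplified pulsatile" inlet condition can be understood as a quasi-steady Poiseuille shape scaled by the instantaneous flow rate, in contrast to the full Womersley solution. A sketch under that interpretation; the radius and flow-rate waveform below are assumed placeholders, not the coronary data of the paper:

```python
import numpy as np

def simplified_pulsatile_profile(r, R, q_t, t):
    """Quasi-steady pulsatile axial velocity (m/s): a parabolic Poiseuille
    shape scaled so its integral equals the instantaneous flow rate q_t(t).

    r : radial position (m), R : vessel radius (m), q_t : flow rate (m^3/s)
    """
    q = q_t(t)
    return 2.0 * q / (np.pi * R**2) * (1.0 - (r / R) ** 2)

R = 1.5e-3                                                    # assumed radius
q_t = lambda t: 1.0e-6 * (1.0 + 0.5 * np.sin(2 * np.pi * t))  # assumed waveform
u_center = simplified_pulsatile_profile(0.0, R, q_t, 0.25)    # centerline speed
```

The complete Womersley model replaces the fixed parabolic shape with a frequency-dependent profile, which is why the two models diverge away from the strut while agreeing near it.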
Development and Evaluation of a Remedial Numerical Skills Workbook for Navy Training. Final Report.
ERIC Educational Resources Information Center
Bowman, Harry L.; And Others
A remedial Navy-relevant numerical skills workbook was developed and field tested for use in Navy recruit training commands and as part of the Navy Junior Reserve Officers Training curriculum. Research and curriculum specialists from the Department of the Navy and Memphis State University identified Navy-relevant topics requiring numerical skill…
Kim, M. K.; Kim, J. H.; Choi, I. K.
2012-07-01
In this study, a seismic fragility evaluation of the piping system in a nuclear power plant was performed. The evaluation of the seismic fragility of the piping system progressed in three steps. First, several piping element capacity tests were performed: monotonic and cyclic loading tests were conducted under the same internal pressure level as actual nuclear power plants to evaluate the performance. Cracks and wall thinning were considered as degradation factors of the piping system. Second, a shaking table test was performed to evaluate the seismic capacity of a selected piping system. Multi-support seismic excitation was applied to account for differences in support elevation. Finally, a numerical analysis was performed for the assessment of the seismic fragility of the piping system. As a result, the seismic fragility of a piping system of an NPP in Korea was evaluated by means of a shaking table test and numerical analysis. (authors)
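Seismic fragility results of this kind are conventionally expressed as a lognormal fragility curve, P(failure | a) = Φ(ln(a/Am)/β), where Am is the median capacity and β the logarithmic standard deviation. A minimal sketch assuming that standard form; the parameter values are placeholders, not the study's results:

```python
import math

def fragility(pga, a_m, beta):
    """Lognormal fragility: probability of failure at ground-motion level
    pga (g), given median capacity a_m (g) and log-standard deviation beta.
    Phi is evaluated via the error function."""
    z = math.log(pga / a_m) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At the median capacity the failure probability is 0.5 by construction.
print(round(fragility(0.6, 0.6, 0.4), 2))  # 0.5
```

The shaking-table and capacity tests in the abstract serve to anchor Am and β, after which the curve summarizes fragility at any demand level.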
Numerical Evaluation of Love's Solution for Tidal Amplitude: Extreme tides possible
NASA Astrophysics Data System (ADS)
Hurford, T. A.; Greenberg, R.; Frey, S.
2002-09-01
Numerical evaluation of Love's 1911 solution [1] for the tidal amplitude of a uniform, compressible, self-gravitating body reveals portions of parameter space where extremely large (or even large negative) tides are possible. Love's solution depends only on (a) the ratio of gravity to rigidity, ρgR/μ, and (b) the ratio of rigidity to the Lamé constant, μ/λ. The solution is not continuous; it includes singularities, around which values approach plus-or-minus infinity, even for parameters in a range plausible for planetary bodies. The effect involves runaway self-gravity. For rocky bodies up to Earth-sized, the solution is well behaved and the tidal amplitude is within ~20% of that given by the standard Love number for an incompressible body. For a moderately larger or less rigid planet, the Love number could be enhanced greatly, possibly to the point of disruption. A thermally evolving planet could hit such singularities as it evolves through elastic-parameter space. Similarly, a growing planet could hit these conditions as ρgR increases, possibly placing constraints on planet formation. For example, a large rocky planet not much larger than the Earth or Venus could hit conditions of extreme tides and be susceptible to possible disruption, conceivably placing an upper limit on growth. The growing core of a giant planet might also be affected. Depending on elastic parameters, planetary satellites may also experience more extreme tides than usually assumed, with potentially important effects on their thermal, geophysical, and orbital evolution. [1] Love, A.E.H., Some Problems of Geodynamics, New York: Dover Publications, 1967.
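The incompressible benchmark against which the abstract compares Love's compressible solution is the standard uniform-body Love number, which depends only on the same ρgR/μ combination. A sketch with rough Earth-like values (illustrative only, not the paper's computation):

```python
def k2_incompressible(rho, g, R, mu):
    """Degree-2 potential Love number for a uniform incompressible body,
    k2 = (3/2) / (1 + 19*mu / (2*rho*g*R)). Note the smooth dependence on
    rho*g*R/mu: the singularities discussed above arise only in the
    compressible solution."""
    return 1.5 / (1.0 + 19.0 * mu / (2.0 * rho * g * R))

# Rough Earth-like inputs: rho = 5500 kg/m^3, g = 9.8 m/s^2,
# R = 6.37e6 m, rigidity mu = 1e11 Pa (all illustrative).
k2 = k2_incompressible(5500.0, 9.8, 6.37e6, 1.0e11)
```

Because this expression is monotonic and bounded, any tidal amplitude far outside its range is a signature of the compressible, self-gravity-driven effects the abstract describes.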
Evaluation of numerical weather predictions performed in the context of the project DAPHNE
NASA Astrophysics Data System (ADS)
Tegoulias, Ioannis; Pytharoulis, Ioannis; Bampzelis, Dimitris; Karacostas, Theodore
2014-05-01
The region of Thessaly in central Greece is one of the main areas of agricultural production in the country. Severe weather phenomena affect the agricultural production in this region, with adverse effects for farmers and the national economy. For this reason, the project DAPHNE aims at tackling the problem of drought by means of weather modification, through the development of the necessary tools to support the application of a rainfall enhancement program. In the present study the numerical weather prediction system WRF-ARW is used in order to assess its ability to represent extreme weather phenomena in the region of Thessaly. WRF is integrated in three domains covering Europe, the Eastern Mediterranean and Central-Northern Greece (Thessaly and a large part of Macedonia) using telescoping nesting with grid spacings of 15 km, 5 km and 1.667 km, respectively. The cases examined span the transitional and warm period (April to September) of the years 2008 to 2013, including days with thunderstorm activity. Model results are evaluated against all available surface observations and radar products, taking into account the spatial characteristics and intensity of the storms. Preliminary results indicate a good level of agreement between the simulated and observed fields as far as the standard parameters (such as temperature, humidity and precipitation) are concerned. Moreover, the model generally exhibits a potential to represent the occurrence of the convective activity, but not its exact spatiotemporal characteristics. Acknowledgements: This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
2010-01-01
Background: Normalization against reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, selecting suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants; qPCR studies on important crops such as cotton have therefore been hampered by the lack of suitable reference genes. Results: Using two distinct algorithms, implemented in geNorm and NormFinder, we assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development, and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented the most stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested, as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of the cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion: We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene expression in cotton.
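The geNorm algorithm mentioned above ranks candidates by a pairwise stability measure M (lower is more stable); a minimal sketch of that measure, assuming expression values are already on a relative linear scale:

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability measure M for each candidate gene.
    `expr` is an (n_samples, n_genes) array of relative expression
    levels. M_j is the mean, over all other genes k, of the standard
    deviation across samples of log2(expr_j / expr_k); a lower M
    indicates more stable expression. Sketch of the published
    pairwise-variation idea, not the full iterative exclusion step."""
    logs = np.log2(expr)
    n_genes = logs.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(logs[:, j] - logs[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m
```

Two genes whose expression levels stay perfectly proportional across all samples have a pairwise variation of zero, so they receive the lowest M, while a gene that fluctuates independently is penalized.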
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
Liu, Mengge; Chen, Guang; Guo, Hailong; Fan, Baolei; Liu, Jianjun; Fu, Qiang; Li, Xiu; Lu, Xiaomin; Zhao, Xianen; Li, Guoliang; Sun, Zhiwei; Xia, Lian; Zhu, Shuyun; Yang, Daoshan; Cao, Ziping; Wang, Hua; Suo, Yourui; You, Jinmao
2015-09-16
Determination of plant growth regulators (PGRs) in a signal transduction system (STS) is significant for transgenic food safety, but may be challenged by poor accuracy and analyte instability. In this work, a microwave-assisted extraction-derivatization (MAED) method is developed for six acidic PGRs in oil samples, allowing an efficient (<1.5 h) and facile (one-step) pretreatment. Accuracies are greatly improved, particularly for gibberellin A3 (-2.72 to -0.65%) as compared with those reported previously (-22 to -2%). Excellent selectivity and quite low detection limits (0.37-1.36 ng mL(-1)) are enabled by fluorescence detection with mass spectrometric monitoring. Results show significant differences in acidic PGRs between transgenic and nontransgenic oils, particularly for 1-naphthaleneacetic acid (1-NAA), implying PGR-induced variations in components and genes. This study provides, for the first time, an accurate and efficient determination of labile PGRs involved in STS and a promising concept for objectively evaluating the safety of transgenic foods. PMID:26309068
Fast and accurate propagation of coherent light
Lewis, R. D.; Beylkin, G.; Monzón, L.
2013-01-01
We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
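The Fourier-domain evaluation that the abstract notes is accurate for small plane separations can be sketched as a standard angular-spectrum propagator; the grid size, wavelength and sample spacing below are illustrative, and this is the textbook small-z method, not the authors' Gaussian-sum algorithm:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled scalar field between two parallel planes by
    evaluating the Rayleigh-Sommerfeld integral in the Fourier domain:
    multiply the spectrum by exp(i*kz*z), with kz the longitudinal
    wavenumber. Evanescent components (kz^2 <= 0) are discarded."""
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    kz2 = k**2 - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
    kz = np.sqrt(np.maximum(kz2, 0.0).astype(complex))
    transfer = np.exp(1j * kz * z) * (kz2 > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Because the transfer function has unit modulus on the propagating band, propagating a band-limited field forward by z and then back by -z recovers it to machine precision, a useful self-check.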
NASA Astrophysics Data System (ADS)
Andersson, A.
2005-08-01
The ability to predict surface defects in outer panels is of vital importance in the automotive industry, especially for brands in the premium car segment. Today, measures to prevent these defects cannot be taken until a test part has been manufactured, which requires a great deal of time and expense. The decision as to whether a certain surface is of acceptable quality or not is based on subjective evaluation. It is quite possible to detect a defect by measurement, but it is not possible to correlate measured defects with the subjective evaluation. If all results could be based on the same criteria, it would be possible to assess a surface by FE simulations, experiments and subjective evaluation with consistent results. In order to find a solution concerning the prediction of surface defects, a laboratory tool was manufactured and analysed both experimentally and numerically. The tool represents the area around a fuel filler lid, and the aim was to recreate surface defects, so-called "teddy bear ears". A major problem with the evaluation of such defects is that the panels are evaluated manually, and to a great extent subjectivity is involved in the classification and judgement of the defects. In this study the same computer software was used for the evaluation of both the experimental and the numerical results. In this software the surface defects were indicated by a change in the curvature of the panel. The results showed good agreement between numerical and experimental results. Furthermore, the evaluation software gave a good indication of the appearance of the surface defects compared with an analysis done in existing tools for surface quality measurements. Since the agreement between numerical and experimental results was good, these tools can be used for an early verification of surface defects in outer panels.
Numerical Evaluation and Comparison of Kalantari's Zero Bounds for Complex Polynomials
Dehmer, Matthias; Tsoy, Yury Robertovich
2014-01-01
In this paper, we investigate the performance of zero bounds due to Kalantari and Dehmer by using special classes of polynomials. Our findings are evidenced by numerical as well as analytical results. PMID:25350861
Numerical evaluation of the scale problem on the wind flow of a windbreak
Liu, Benli; Qu, Jianjun; Zhang, Weimin; Tan, Lihai; Gao, Yanhong
2014-01-01
The airflow field around wind fences with different porosities, which are important in determining the efficiency of fences as a windbreak, is typically studied via scaled wind tunnel experiments and numerical simulations. However, the scale problem in wind tunnels or numerical models is rarely researched. In this study, we perform a numerical comparison between a scaled wind-fence experimental model and an actual-sized fence via computational fluid dynamics simulations. The results show that although the general field pattern can be captured in a reduced-scale wind tunnel or numerical model, several flow characteristics near obstacles are not proportional to the size of the model and thus cannot be extrapolated directly. For example, the small vortex behind a low-porosity fence with a scale of 1:50 is approximately 4 times larger than that behind a full-scale fence. PMID:25311174
A numerical model for CO effect evaluation in HT-PEMFCs: Part 1 - Experimental validation
NASA Astrophysics Data System (ADS)
Cozzolino, R.; Chiappini, D.; Tribioli, L.
2016-06-01
In this paper, an in-house numerical model of a high-temperature polymer electrolyte membrane fuel cell is presented. The experimental activity addressed the impact of the CO content in the anode feed gas on cell performance over the whole operating range, and a numerical code was implemented and validated against these experimental results. The proposed numerical model employs a zero-dimensional framework coupled with a semi-empirical approach, which aims at providing a smart and flexible tool for investigating the membrane behavior under different working conditions. Results show an acceptable agreement between numerical and experimental data, confirming the potential and reliability of the developed tool, despite its simplicity.
NASA Astrophysics Data System (ADS)
Zaniboni, Filippo; Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano
2013-04-01
Small landslides are very common along submarine margins, where steep slopes and continuous material deposition increase mass instability and promote collapse, even without earthquake triggering. Such events can have significant consequences when they occur close to the coast, because the slides accelerate suddenly and reach high speeds, which translates into high tsunamigenic potential. This is the case, for example, of the slide off Rhodes Island (Greece), named the Northern Rhodes Slide (NRS): unusual 3-4 m waves were registered on 24 March 2002, causing some damage along the coastal stretch of the city of Rhodes (Papadopoulos et al., 2007). The event was not associated with an earthquake, and eyewitnesses supported the hypothesis of a non-seismic source for the tsunami, placed 1 km offshore. Subsequent marine geophysical surveys (Sakellariou et al., 2002) revealed several detachment niches at about 300-400 m depth along the northern steep slope, one of which can be considered responsible for the observed tsunami, consistent with the aforementioned hypothesis. In this work, carried out in the framework of the European funded project NearToWarn, we evaluated the tsunami effects of the NRS by means of numerical modelling: after reconstructing the sliding body on the basis of morphological assumptions (obtaining an estimated volume of 33 million m3), we simulated the sliding motion with the in-house code UBO-BLOCK1, which adopts a Lagrangian approach and splits the sliding mass into a "chain" of interacting blocks. This provides the complete dynamics of the landslide, including the shape changes that strongly influence tsunami generation. After the application of an intermediate code, accounting for the filtering of the slide impulse through the water depth, the tsunami propagation in the sea around the island of Rhodes and up to the nearby coasts of Turkey was simulated via the
NASA Astrophysics Data System (ADS)
Jung, Minseok; Kihara, Hisashi; Abe, Ken-ichi; Takahashi, Yusuke
2016-06-01
A three-dimensional numerical simulation model that considers the effect of the angle of attack was developed to evaluate plasma flows around reentry vehicles. In this simulation model, thermochemical nonequilibrium of flowfields is considered by using a four-temperature model for high-accuracy simulations. Numerical simulations were performed for the orbital reentry experiment of the Japan Aerospace Exploration Agency, and the results were compared with experimental data to validate the simulation model. A comparison of measured and predicted results showed good agreement. Moreover, to evaluate the effect of the angle of attack, we performed numerical simulations around the Atmospheric Reentry Demonstrator of the European Space Agency by using an axisymmetric model and a three-dimensional model. Although there were no differences in the flowfields in the shock layer between the results of the axisymmetric and the three-dimensional models, the formation of the electron number density, which is an important parameter in evaluating radio-frequency blackout, was greatly changed in the wake region when a non-zero angle of attack was considered. Additionally, the number of altitudes at which radio-frequency blackout was predicted in the numerical simulations declined when using the three-dimensional model for considering the angle of attack.
NASA Astrophysics Data System (ADS)
Nobukawa, Teruyoshi; Nomura, Takanori
2015-08-01
A multilayer recording using a varifocal lens generated with a phase-only spatial light modulator (SLM) is proposed. The phase-only SLM is used not only to improve the interference efficiency between the signal and reference beams but also to shift the focal plane along the optical axis. The focal plane can be shifted by adding a spherical phase to the phase modulation pattern displayed on the SLM. The focal shift produced by adding a spherical phase was numerically confirmed. In addition, the shift selectivity and recording performance of the proposed multilayer recording method were numerically evaluated in coaxial holographic data storage.
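A spherical phase of the kind described can be sketched as the standard thin-lens quadratic phase wrapped to the SLM's modulation range; the pixel pitch, wavelength and focal length below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def lens_phase(n, pitch, wavelength, f):
    """Quadratic (thin-lens) phase pattern for an n x n phase-only SLM
    with pixel `pitch`: phi(x, y) = -pi * (x^2 + y^2) / (wavelength * f),
    wrapped to [0, 2*pi). Adding this pattern to the displayed hologram
    shifts the focal plane along the optical axis by an amount set by
    the focal length f (thin-lens approximation, illustrative only)."""
    c = (np.arange(n) - n / 2.0) * pitch
    xx, yy = np.meshgrid(c, c, indexing="ij")
    phi = -np.pi * (xx**2 + yy**2) / (wavelength * f)
    return np.mod(phi, 2.0 * np.pi)
```

Choosing a different f changes the curvature of the wrapped Fresnel-zone pattern and hence the axial position of the recording layer addressed.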
Pohlmann, Karl; Ye, Ming; Pohll, Greg; Chapman, Jenny
2007-01-19
Numerical groundwater models are based on conceptualizations of hydrogeologic systems that are by necessity developed from limited information and therefore are simplifications of real conditions. Each aspect (e.g. recharge, hydrostratigraphy, boundary conditions) of the groundwater model is often based on a single conceptual model that is considered to be the best representation given the available data. However, the very nature of their construction means that each conceptual model is inherently uncertain and the available information may be insufficient to refute plausible alternatives, thereby raising the possibility that the flow model is underestimating overall uncertainty. In this study we use the Death Valley Regional Flow System model developed by the U.S. Geological Survey as a framework to predict regional groundwater flow southward into Yucca Flat on the Nevada Test Site. An important aspect of our work is to evaluate the uncertainty associated with multiple conceptual models of groundwater recharge and subsurface hydrostratigraphy and quantify the impacts of this uncertainty on model predictions. In our study, conceptual model uncertainty arises from two sources: (1) alternative interpretations of the hydrostratigraphy in the northern portion of Yucca Flat where, owing to sparse data, the hydrogeologic system can be conceptualized in different ways, and (2) uncertainty in groundwater recharge in the region as evidenced by the existence of several independent approaches for estimating this aspect of the hydrologic system. The composite prediction of groundwater flow is derived from the regional model that formally incorporates the uncertainty in these alternative input models using the maximum likelihood Bayesian model averaging method. An assessment of the joint predictive uncertainty of the input conceptual models is also produced. During this process, predictions of the alternative models are weighted by model probability, which is the degree of
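The averaging step above combines the alternative conceptual models' predictions weighted by model probability; a sketch using the standard Bayesian model averaging identities for the composite mean and variance (the weights and predictions below are illustrative, not the Death Valley model's):

```python
import numpy as np

def bma_combine(means, variances, weights):
    """Combine predictions of alternative conceptual models by model
    averaging. The composite mean is the probability-weighted mean, and
    the composite variance is the weighted within-model variance plus a
    between-model term that captures conceptual-model uncertainty."""
    w = np.asarray(weights, float)
    w = w / w.sum()                      # normalize model probabilities
    mu = np.dot(w, means)                # composite prediction
    within = np.dot(w, variances)        # average within-model variance
    between = np.dot(w, (np.asarray(means) - mu) ** 2)
    return mu, within + between

# Two equally probable models that disagree: the composite variance
# exceeds either model's own variance because of the between-model term.
mu, var = bma_combine([10.0, 14.0], [1.0, 1.0], [0.5, 0.5])
print(mu, var)  # → 12.0 5.0
```

This is why a single "best" conceptual model can understate total predictive uncertainty: the between-model term vanishes only when all plausible models agree.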
Technology Transfer Automated Retrieval System (TEKTRAN)
In-situ determination of ice formation and thawing in soils is difficult despite its importance for many environmental processes. A sensible heat balance (SHB) method using a sequence of heat pulse probes has been shown to accurately measure water evaporation in subsurface soil, and it has the poten...
Yoshimi, Satoshi; Ochi, Hidenori; Murakami, Eisuke; Uchida, Takuro; Kan, Hiromi; Akamatsu, Sakura; Hayes, C Nelson; Abe, Hiromi; Miki, Daiki; Hiraga, Nobuhiko; Imamura, Michio; Aikata, Hiroshi; Chayama, Kazuaki
2015-01-01
Daclatasvir and asunaprevir dual oral therapy is expected to achieve high sustained virological response (SVR) rates in patients with HCV genotype 1b infection. However, presence of the NS5A-Y93H substitution at baseline has been shown to be an independent predictor of treatment failure for this regimen. By using the Invader assay, we developed a system to rapidly and accurately detect the presence of mutant strains and evaluate the proportion of patients harboring a pre-treatment Y93H mutation. This assay system, consisting of nested PCR followed by Invader reaction with well-designed primers and probes, attained a high overall assay success rate of 98.9% among a total of 702 Japanese HCV genotype 1b patients. Even in serum samples with low HCV titers, more than half of the samples could be successfully assayed. Our assay system showed a better lower detection limit of Y93H proportion than direct sequencing, and Y93H frequencies obtained by this method correlated well with those of deep-sequencing analysis (r = 0.85, P < 0.001). The proportion of the patients with the mutant strain estimated by this assay was 23.6% (164/694). Interestingly, patients with the Y93H mutant strain showed significantly lower ALT levels (p = 8.8 x 10(-4)), higher serum HCV RNA levels (p = 4.3 x 10(-7)), and lower HCC risk (p = 6.9 x 10(-3)) than those with the wild-type strain. Because the method is both sensitive and rapid, the NS5A-Y93H mutant strain detection system established in this study may provide important pre-treatment information valuable not only for treatment decisions but also for prediction of disease progression in HCV genotype 1b patients. PMID:26083687
Xiao, Meng; Pang, Lu; Chen, Sharon C-A; Fan, Xin; Zhang, Li; Li, Hai-Xia; Hou, Xin; Cheng, Jing-Wei; Kong, Fanrong; Zhao, Yu-Pei; Xu, Ying-Chun
2016-01-01
Species identification of Nocardia is not straightforward due to rapidly evolving taxonomy, insufficient discriminatory power of conventional phenotypic methods and also of single gene locus analysis including 16S rRNA gene sequencing. Here we evaluated the ability of a 5-locus (16S rRNA, gyrB, secA1, hsp65 and rpoB) multilocus sequence analysis (MLSA) approach as well as that of matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) in comparison with sequencing of the 5'-end 606 bp partial 16S rRNA gene to provide identification of 25 clinical isolates of Nocardia. The 5'-end 606 bp 16S rRNA gene sequencing successfully assigned 24 of 25 (96%) clinical isolates to species level, namely Nocardia cyriacigeorgica (n = 12, 48%), N. farcinica (n = 9, 36%), N. abscessus (n = 2, 8%) and N. otitidiscaviarum (n = 1, 4%). MLSA showed concordance with 16S rRNA gene sequencing results for the same 24 isolates. However, MLSA was able to identify the remaining isolate as N. wallacei, and clustered N. cyriacigeorgica into three subgroups. None of the clinical isolates were correctly identified to the species level by MALDI-TOF MS analysis using the manufacturer-provided database. A small "in-house" spectral database was established incorporating spectra of five clinical isolates representing the five species identified in this study. After complementation with the "in-house" database, of the remaining 20 isolates, 19 (95%) were correctly identified to species level (score ≥ 2.00) and one (an N. abscessus strain) to genus level (score ≥ 1.70 and < 2.00). In summary, MLSA showed superior discriminatory power compared with the 5'-end 606 bp partial 16S rRNA gene sequencing for species identification of Nocardia. MALDI-TOF MS can provide rapid and accurate identification but is reliant on a robust mass spectra database. PMID:26808813
NASA Astrophysics Data System (ADS)
Lea, James M.; Mair, Douglas WF; Rea, Brice R.
2014-05-01
Several different methodologies have previously been employed to track glacier terminus change, though a systematic comparison of these has not been undertaken. Similarly, the suitability of the resulting data for the calibration/validation of numerical models has not been evaluated. This could be especially significant for flowline modelling of tidewater glaciers, where discrepancies between the different terminus tracking methods could potentially introduce bias into model calibrations. The choice of method for quantifying terminus change of tidewater glaciers is therefore significant from both glacier monitoring and numerical modelling viewpoints. In this study we evaluate three existing methodologies that have been widely used to track terminus change (the centreline, bow and box methods) against a full range of idealised glaciological scenarios and examples from six real glaciers in Greenland. We also evaluate two new methodologies that aim to reduce measurement error compared with the existing methodologies and to allow direct comparison of results with those of flowline models. These are (1) a modification of the box method that can account for termini retreating through fjords that change orientation (termed the curvilinear box method [CBM]), and (2) a method that determines the average terminus position relative to the glacier centreline using an inverse-distance-weighting extrapolation (termed the extrapolated centreline method [ECM]). No single method achieved complete accuracy for all scenarios, though the ECM performed best, being able to successfully account for variable fjord orientation, width and terminus geometry. Only results from the centreline, CBM and ECM methods will be directly comparable to flowline model output, though the CBM and ECM are likely to be the most accurate when applied to real-world scenarios.
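The box method referred to above reduces a digitized terminus to a width-averaged position: the area enclosed between the terminus and a fixed reference line, divided by the box width. A minimal sketch, assuming a straight rectangular reference box:

```python
def shoelace_area(pts):
    """Signed area of a simple polygon via the shoelace formula;
    `pts` is a list of (x, y) vertices in order."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return 0.5 * s

def box_method_position(terminus_area, box_width):
    """Box-method terminus position: the polygon area between the
    digitized terminus and the reference line, divided by the box
    width, gives a width-averaged terminus position (sketch of the
    classic method; the paper's CBM generalizes the box to follow a
    curving fjord)."""
    return terminus_area / box_width
```

For a straight terminus 5 km from the reference line in a 2 km-wide box, the enclosed area is 10 km² and the recovered position is 5 km, independent of small-scale terminus geometry.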
NASA Technical Reports Server (NTRS)
Lummerzheim, D.; Lilensten, J.
1994-01-01
Auroral electron transport calculations are a critical part of auroral models. We evaluate a numerical solution to the transport and energy degradation problem. The numerical solution is verified by reproducing simplified problems to which analytic solutions exist, internal self-consistency tests, comparison with laboratory experiments of electron beams penetrating a collision chamber, and by comparison with auroral observations, particularly the emission ratio of the N2 second positive to N2(+) first negative emissions. Our numerical solutions agree with range measurements in collision chambers. The calculated N(2)2P to N2(+)1N emission ratio is independent of the spectral characteristics of the incident electrons, and agrees with the value observed in aurora. Using different sets of energy loss cross sections and different functions to describe the energy distribution of secondary electrons that emerge from ionization collisions, we discuss the uncertainties of the solutions to the electron transport equation resulting from the uncertainties of these input parameters.
Toyoda, Masayuki; Ozaki, Taisuke
2009-03-28
A numerical method to calculate the four-center electron-repulsion integrals for strictly localized pseudoatomic orbital basis sets has been developed. Compared to the conventional Gaussian expansion method, this method has an advantage in the ease of combination with O(N) density functional calculations. Additional mathematical derivations are also presented including the analytic derivatives of the integrals with respect to atomic positions and spatial damping of the Coulomb interaction due to the screening effect. In the numerical test for a simple molecule, the convergence up to 10(-5) hartree in energy is successfully obtained with a feasible cost of computation. PMID:19334815
Numerical models to evaluate the temperature increase induced by ex vivo microwave thermal ablation
NASA Astrophysics Data System (ADS)
Cavagnaro, M.; Pinto, R.; Lopresto, V.
2015-04-01
Microwave thermal ablation (MTA) therapies exploit the local absorption of an electromagnetic field at microwave (MW) frequencies to destroy unhealthy tissue, by way of a very high temperature increase (about 60 °C or higher). To develop reliable interventional protocols, numerical tools able to correctly foresee the temperature increase obtained in the tissue would be very useful. In this work, different numerical models of the dielectric and thermal property changes with temperature were investigated, looking at the simulated temperature increments and at the size of the achievable zone of ablation. To assess the numerical data, measurement of the temperature increases close to a MTA antenna were performed in correspondence with the antenna feed-point and the antenna cooling system, for increasing values of the radiated power. Results show that models not including the changes of the dielectric and thermal properties can be used only for very low values of the power radiated by the antenna, whereas a good agreement with the experimental values can be obtained up to 20 W if water vaporization is included in the numerical model. Finally, for higher power values, a simulation that dynamically includes the tissue’s dielectric and thermal property changes with the temperature should be performed.
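The role of a vaporization-dependent property change can be illustrated with a crude 1-D explicit finite-difference heating sketch: once a node passes 100 °C, microwave power deposition there is halted, capping the temperature. All coefficients and the cutoff rule are illustrative simplifications, not the paper's model.

```python
import numpy as np

def mta_1d_temperature(n=100, dx=1e-3, dt=0.05, steps=2000):
    """Crude 1-D explicit finite-difference sketch of tissue heating by
    localized MW power deposition. Illustrative values: volumetric heat
    capacity 3.6e6 J/(m^3 K), conductivity 0.5 W/(m K), deposition
    2e7 W/m^3 over the first 5 mm. Above 100 C (water vaporization)
    deposition is switched off, which caps the local temperature."""
    rho_c = 3.6e6
    k = 0.5
    alpha = k / rho_c
    T = np.full(n, 37.0)              # start at body temperature, deg C
    q = np.zeros(n)
    q[:5] = 2e7                       # absorbed MW power near the antenna
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        q_eff = np.where(T < 100.0, q, 0.0)   # vaporization halts deposition
        T = T + dt * (alpha * lap + q_eff / rho_c)
        T[0] = T[1]                   # insulated boundary at the antenna
        T[-1] = 37.0                  # far-field boundary
    return T
```

With the cutoff active the peak temperature plateaus near 100 °C; removing it lets the peak grow without bound, mirroring the paper's finding that constant-property models are only valid at low radiated power.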
NASA Astrophysics Data System (ADS)
Subhra Mukherji, Suchi; Banerjee, Arindam
2010-11-01
We will discuss findings from our numerical investigation of the hydrodynamic performance of horizontal-axis hydrokinetic turbines (HAHkT) under different turbine geometries and flow conditions. Hydrokinetic turbines are a class of zero-head hydropower systems that utilize the kinetic energy of flowing water to drive a generator. However, such turbines often suffer from low efficiency, which is primarily controlled by tip-speed ratio, solidity, angle of attack and number of blades. A detailed CFD study was performed using two-dimensional and three-dimensional numerical models to examine the effect of each of these parameters on the performance of small HAHkTs having power capacities <= 10 kW. The two-dimensional numerical results provide an optimum angle of attack that maximizes the lift as well as the lift-to-drag ratio, yielding maximum power output. The three-dimensional numerical studies estimate the optimum turbine solidity and blade number that produce the maximum power coefficient at a given tip-speed ratio. In addition, simulations were performed to observe the axial velocity deficit downstream of the turbine rotor for different tip-speed ratios, to obtain both qualitative and quantitative details of the stall-delay phenomenon and the energy loss suffered by the turbine under ambient flow conditions.
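The two governing performance parameters named above have standard definitions that can be sketched directly; the water density and the rotor figures below are illustrative:

```python
from math import pi

def tip_speed_ratio(omega, radius, v):
    """Tip-speed ratio lambda = omega * R / V: blade-tip speed over
    free-stream flow speed (omega in rad/s)."""
    return omega * radius / v

def power_coefficient(power, radius, v, rho=1000.0):
    """Power coefficient Cp = P / (0.5 * rho * A * V^3): the fraction
    of the kinetic-energy flux through the rotor disk converted to
    shaft power. rho defaults to water; inputs are illustrative."""
    area = pi * radius**2
    return power / (0.5 * rho * area * v**3)

# A 1 m-radius rotor extracting 5 kW from a 2 m/s current:
cp = power_coefficient(5000.0, 1.0, 2.0)
print(round(cp, 3))  # → 0.398
```

The example sits below the Betz limit of 16/27 ≈ 0.593, the theoretical ceiling for any open-rotor kinetic-energy converter, which is the benchmark such efficiency studies optimize toward.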
NASA Astrophysics Data System (ADS)
Versluis, Louis; Ziegler, Tom
1988-01-01
An algorithm, based on numerical integration, has been proposed for the evaluation of analytical energy gradients within the Hartree-Fock-Slater (HFS) method. The utility of this algorithm in connection with molecular structure optimization is demonstrated by calculations on organics, main group molecules, and transition metal complexes. The structural parameters obtained from HFS calculations are in at least as good agreement with experiment as structures obtained from ab initio HF calculations. The time required to evaluate the energy gradient by numerical integration constitutes only a fraction (40%-25%) of the elapsed time in a full HFS-SCF calculation. The algorithm is also suitable for density functional methods with exchange-correlation potential different from that employed in the HFS method.
NASA Astrophysics Data System (ADS)
Volkov, K. N.
2007-09-01
The total-pressure loss in gas turbines is evaluated. Reynolds-averaged Navier-Stokes equations are used for numerical calculations. The Spalart-Allmaras model, the k-ɛ model, and the two-layer model and their different modifications allowing for the rotation of the flow and the curvature of streamlines are used to close these equations. The role of different corrections to the turbulence models for the accuracy of calculated estimates is elucidated.
NASA Technical Reports Server (NTRS)
George, William K.; Rae, William J.; Woodward, Scott H.
1991-01-01
The importance of frequency response considerations in the use of thin-film gages for unsteady heat transfer measurements in transient facilities is considered, and methods for evaluating it are proposed. A departure frequency response function is introduced and illustrated by an existing analog circuit. A Fresnel integral temperature which possesses the essential features of the film temperature in transient facilities is introduced and is used to evaluate two numerical algorithms. Finally, criteria are proposed for the use of finite-difference algorithms for the calculation of the unsteady heat flux from a sampled temperature signal.
Chen, Sharon C-A.; Fan, Xin; Zhang, Li; Li, Hai-Xia; Hou, Xin; Cheng, Jing-Wei; Kong, Fanrong; Zhao, Yu-Pei; Xu, Ying-Chun
2016-01-01
Species identification of Nocardia is not straightforward due to rapidly evolving taxonomy, insufficient discriminatory power of conventional phenotypic methods and also of single gene locus analysis including 16S rRNA gene sequencing. Here we evaluated the ability of a 5-locus (16S rRNA, gyrB, secA1, hsp65 and rpoB) multilocus sequence analysis (MLSA) approach as well as that of matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) in comparison with sequencing of the 5’-end 606 bp partial 16S rRNA gene to provide identification of 25 clinical isolates of Nocardia. The 5’-end 606 bp 16S rRNA gene sequencing successfully assigned 24 of 25 (96%) clinical isolates to species level, namely Nocardia cyriacigeorgica (n = 12, 48%), N. farcinica (n = 9, 36%), N. abscessus (n = 2, 8%) and N. otitidiscaviarum (n = 1, 4%). MLSA showed concordance with 16S rRNA gene sequencing results for the same 24 isolates. However, MLSA was able to identify the remaining isolate as N. wallacei, and clustered N. cyriacigeorgica into three subgroups. None of the clinical isolates were correctly identified to the species level by MALDI-TOF MS analysis using the manufacturer-provided database. A small “in-house” spectral database was established incorporating spectra of five clinical isolates representing the five species identified in this study. After complementation with the “in-house” database, of the remaining 20 isolates, 19 (95%) were correctly identified to species level (score ≥ 2.00) and one (an N. abscessus strain) to genus level (score ≥ 1.70 and < 2.00). In summary, MLSA showed superior discriminatory power compared with the 5’-end 606 bp partial 16S rRNA gene sequencing for species identification of Nocardia. MALDI-TOF MS can provide rapid and accurate identification but is reliant on a robust mass spectra database. PMID:26808813
Numerical evaluation of voltage gradient constraints on electrokinetic injection of amendments
NASA Astrophysics Data System (ADS)
Wu, Ming Zhi; Reynolds, David A.; Prommer, Henning; Fourie, Andy; Thomas, David G.
2012-03-01
A new numerical model is presented that simulates groundwater flow and multi-species reactive transport under hydraulic and electrical gradients. Coupled into the existing reactive transport model PHT3D, the model was verified against published analytical and experimental studies, and has applications in remediation cases where the geochemistry plays an important role. A promising method for remediation of low-permeability aquifers is the electrokinetic transport of amendments for in situ chemical oxidation. Numerical modelling showed that amendment injection resulted in the voltage gradient adjacent to the cathode decreasing below a linear gradient, producing a lower achievable concentration of the amendment in the medium. An analytical method is derived to estimate the achievable amendment concentration based on the inlet concentration. Even with low achievable concentrations, analysis showed that electrokinetic remediation is feasible due to its ability to deliver a significantly higher mass flux in low-permeability media than under a hydraulic gradient.
Numerical evaluation of the jet noise source distribution from far-field cross correlations
NASA Technical Reports Server (NTRS)
Maestrello, L.; Liu, C.-H.
1976-01-01
This paper contains the development of techniques to determine the relationship between the unknown source correlation function and the correlation of scattered amplitudes in a jet. This study has application to the determination of forward-motion effects. The technique has been developed and tested on a model jet with high subsonic flow. A numerical solution was obtained by solving the Fredholm integral equation of the first kind. Interpretation of the apparent source distribution and its application to flight testing are provided.
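First-kind Fredholm problems of the sort solved above become ill-posed linear systems once discretized, so in practice some regularization is needed. The sketch below is a generic illustration, not the paper's method: the Gaussian kernel, the smooth source term, and the regularization weight are all assumed. It discretizes b = A f on a grid and solves it by Tikhonov regularization, f = argmin ||A f - b||^2 + lam ||f||^2.

```python
import numpy as np

# Discretize a first-kind Fredholm equation b(x) = \int K(x, s) f(s) ds
# with an (assumed) Gaussian kernel; quadrature weight h folded into A.
n = 100
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05 ** 2)) * h

# Illustrative smooth "true" source, and the data it generates.
f_true = np.exp(-((s - 0.5) ** 2) / (2 * 0.1 ** 2))
b = A @ f_true

# Tikhonov-regularized normal equations: (A^T A + lam I) f = A^T b.
lam = 1e-8
f_est = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(np.max(np.abs(f_est - f_true)))   # small reconstruction error
```

With noiseless data and a smooth source, a tiny `lam` suffices; noisy data would push `lam` up, trading resolution of the apparent source distribution against stability.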
Numerical evaluation of a novel high-temperature superconductor-based quasi-diamagnetic motor
NASA Astrophysics Data System (ADS)
Racz, Arpad; Vajda, Istvan
2014-05-01
An investigation is being pursued at the Budapest University of Technology and Economics, Department of Electric Power Engineering, into the application of high-temperature superconductors (HTS) in electrical power systems. In this paper we propose a novel electrical machine construction based on the quasi-diamagnetic behaviour of HTS materials. The basic operation principle of this machine is introduced with detailed numerical simulations, and a possible geometric layout is presented.
Large deviations in boundary-driven systems: Numerical evaluation and effective large-scale behavior
NASA Astrophysics Data System (ADS)
Bunin, Guy; Kafri, Yariv; Podolsky, Daniel
2012-07-01
We study rare events in systems of diffusive fields driven out of equilibrium by the boundaries. We present a numerical technique and use it to calculate the probabilities of rare events in one and two dimensions. Using this technique, we show that the probability density of a slowly varying configuration can be captured with a small number of long-wavelength modes. For a configuration which varies rapidly in space this description can be complemented by a local-equilibrium assumption.
Stress analysis and damage evaluation of flawed composite laminates by hybrid-numerical methods
NASA Technical Reports Server (NTRS)
Yang, Yii-Ching
1992-01-01
Structural components in flight vehicles often contain inherent flaws, such as microcracks, voids, holes, and delaminations. These defects degrade structures in the same way as damage incurred in service, such as impact, corrosion, and erosion. It is very important to know how useful a structural component remains, and whether it can survive, given these flaws and damages. To understand the behavior and limitations of such structural components, researchers usually perform experimental tests or theoretical analyses on structures with simulated flaws. However, neither approach alone has been completely successful. As Durelli states, 'Seldom does one method give a complete solution, with the most efficiency.' An example of this principle is seen in photomechanics, in which additional strain-gage testing can only average stresses at locations of high concentration. On the other hand, theoretical analyses, including numerical analyses, are implemented with simplified assumptions which may not reflect actual boundary conditions. Hybrid-numerical methods, which combine photomechanics and numerical analysis, have been used to correct this inefficiency since the 1950s, but their application was limited until the 1970s, when modern computer codes became available. In recent years, researchers have enhanced the data obtained from photoelasticity, laser speckle, holography and moiré interferometry as input for finite element analysis of metals. Nevertheless, little of this literature addresses composite laminates. Therefore, this research is dedicated to this highly anisotropic material.
NASA Astrophysics Data System (ADS)
Prodanovic, M.; Soares, F.; Huh, C.
2014-12-01
Ferrofluid is a stable dispersion of paramagnetic nanosize particles in a liquid carrier which are magnetized in the presence of a magnetic field. The functionalized coating and small size of the nanoparticles allow them to flow through porous media without significantly compromising permeability and with little retention. We numerically and experimentally investigate the potential of ferrofluid in mobilizing trapped non-wetting phase. The numerical method is based on a coupled level set model for two-phase flow and an immersed interface method for finding magnetic field strength, and provides the equilibrium configuration of an oleic (non-wetting) phase inside some pore geometry in the presence of dispersed excitable nanoparticles in the surrounding water phase. The magnetic pressures near the fluid-fluid interface depend locally on the magnetic field intensity and direction, which in turn depend on the fluid configuration. Interfaces represent magnetic permeability discontinuities and hence cause disturbances in the spatial distribution of the magnetic field. Experiments are conducted in micromodels with high pore-to-throat aspect size ratio. Both numerical and experimental results show that stresses produced by the magnetization of ferrofluids can help overcome strong capillary pressures and displace trapped ganglia in the presence of an additional mobilizing force such as increased fluid flux or surfactant injection.
An experimental evaluation of a helicopter rotor section designed by numerical optimization
NASA Technical Reports Server (NTRS)
Hicks, R. M.; Mccroskey, W. J.
1980-01-01
The wind tunnel performance of a 10-percent-thick helicopter rotor section designed by numerical optimization is presented. The model was tested at Mach numbers from 0.2 to 0.84, with Reynolds numbers ranging from 1,900,000 at Mach 0.2 to 4,000,000 at Mach numbers above 0.5. The airfoil section exhibited maximum lift coefficients greater than 1.3 at Mach numbers below 0.45 and a drag divergence Mach number of 0.82 for lift coefficients near 0. A moderate 'drag creep' is observed at low lift coefficients for Mach numbers greater than 0.6.
NASA Astrophysics Data System (ADS)
Kassanos, Ioannis; Chrysovergis, Marios; Anagnostopoulos, John; Papantonis, Dimitris; Charalampopoulos, George
2016-06-01
In this paper the effect of impeller design variations on the performance of a centrifugal pump running as a turbine is presented. Numerical simulations were performed after introducing various modifications to the design for various operating conditions. Specifically, the effects of the inlet edge shape, the meridional channel width, the number of blades and the addition of splitter blades on impeller performance were investigated. The results showed that an increase in efficiency can be achieved by increasing the number of blades and by introducing splitter blades.
NASA Astrophysics Data System (ADS)
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-01
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
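The path-sampling identity behind thermodynamic integration is log Z = \int_0^1 E_beta[log L] d(beta), where the expectation is taken over the power posterior p_beta(theta) proportional to p(theta) L(theta)^beta. A minimal sketch on a conjugate toy model (standard-normal prior, one Gaussian datum), where each power posterior happens to be Gaussian and can be sampled exactly rather than by Markov chain Monte Carlo as in the study, is:

```python
import numpy as np

# Toy model: theta ~ N(0, 1) prior, one datum y ~ N(theta, 1).
# The power posterior p_b ∝ N(theta; 0, 1) * N(y; theta, 1)^b is Gaussian
# with variance 1/(1+b) and mean b*y/(1+b), so we can sample it exactly.
rng = np.random.default_rng(0)
y = 0.0
betas = np.linspace(0.0, 1.0, 11)   # the "temperature" ladder
n = 20000                           # samples per rung

means = []
for b in betas:
    var = 1.0 / (1.0 + b)
    theta = rng.normal(b * y / (1.0 + b), np.sqrt(var), size=n)
    log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2
    means.append(log_lik.mean())    # E_beta[log L] at this rung

# Trapezoidal rule over the ladder gives the TI estimate of log Z.
m = np.array(means)
log_z_ti = np.sum(np.diff(betas) * (m[:-1] + m[1:]) / 2)
log_z_exact = -0.5 * np.log(2 * np.pi * 2.0)   # y ~ N(0, 2) marginally
print(log_z_ti, log_z_exact)
```

The trapezoidal sum over the beta ladder recovers the analytic log marginal likelihood, log N(y; 0, 2), to within Monte Carlo error; in a real application each rung would be sampled with MCMC, as the abstract describes.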
NASA Technical Reports Server (NTRS)
Yaros, S. F.; Carlson, J. R.; Chandrasekaran, B.
1986-01-01
An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equations; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small-disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.
EVALUATION OF U10MO FUEL PLATE IRRADIATION BEHAVIOR VIA NUMERICAL AND EXPERIMENTAL BENCHMARKING
Samuel J. Miller; Hakan Ozaltun
2012-11-01
This article analyzes dimensional changes due to irradiation of monolithic plate-type nuclear fuel and compares results with finite element analysis of the plates during fabrication and irradiation. Monolithic fuel plates tested in the Advanced Test Reactor (ATR) at Idaho National Lab (INL) are being used to benchmark proposed fuel performance for several high power research reactors. Post-irradiation metallographic images of plates sectioned at the midpoint were analyzed to determine dimensional changes of the fuel and the cladding response. A constitutive model of the fabrication process and irradiation behavior of the tested plates was developed using the general purpose commercial finite element analysis package, Abaqus. Using calculated burn-up profiles of irradiated plates to model the power distribution, and including irradiation behaviors such as swelling and irradiation-enhanced creep, model simulations allow analysis of plate parameters that are either impossible or infeasible to study in an experimental setting. The development and progression of fabrication-induced stress concentrations at the plate edges was of primary interest, as these locations have a unique stress profile during irradiation. Additionally, a comparison between 2D and 3D models was performed to optimize the analysis methodology, in particular the ability of 2D and 3D models to account for the out-of-plane stresses that produce 3-dimensional creep behavior. Results show that assumptions made in 2D models for the out-of-plane stresses and strains cannot capture the 3-dimensional physics accurately, and thus 2D approximations are not computationally accurate. Stress-strain fields are dependent on plate geometry and irradiation conditions; thus, if stress-based criteria are used to predict plate behavior (as opposed to material impurities, fine micro-structural defects, or sharp power gradients), a unique 3D finite element formulation for each plate is required.
Evaluation of Sulfur Flow Emplacement on Io from Galileo Data and Numerical Modeling
NASA Technical Reports Server (NTRS)
Williams, David A.; Greeley, Ronald; Lopes, Rosaly M. C.; Davies, Ashley G.
2001-01-01
Galileo images of bright lava flows surrounding Emakong Patera have been analyzed and numerical modeling has been performed to assess whether these flows could have resulted from the emplacement of sulfur lavas on Io. Images from the solid-state imaging (SSI) camera show that these bright, white to yellow Emakong flows are up to 370 km long and contain dark, sinuous features that are interpreted to be lava conduits, ~300-500 m wide and >100 km long. Near-Infrared Mapping Spectrometer data yield a temperature estimate of 344 K ± 60 K within the Emakong caldera. We suggest that these bright flows likely resulted from either sulfur lavas or silicate lavas that have undergone extensive cooling, pyroclastic mantling, and/or alteration with bright sulfurous materials. The Emakong bright flows have an estimated volume of ~250-350 km³, similar to some of the smaller Columbia River Basalt flows. If the Emakong flows did result from effusive sulfur eruptions, then they are orders of magnitude greater in volume than any terrestrial sulfur flows. Our numerical modeling indicates that such flows are capable of traveling tens to hundreds of kilometers, consistent with the predictions of Sagan. Our modeled flow distances are also consistent with the measured lengths of the Emakong channels and bright flows.
NASA Technical Reports Server (NTRS)
Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; Prior, D. L.; Scalia, G. M.; Thomas, J. D.; Garcia, M. J.
2000-01-01
The simplified Bernoulli equation relates fluid convective energy derived from flow velocities to a pressure gradient and is commonly used in clinical echocardiography to determine pressure differences across stenotic orifices. Its application to pulmonary venous flow has not been described in humans. Twelve patients undergoing cardiac surgery had simultaneous high-fidelity pulmonary venous and left atrial pressure measurements and pulmonary venous pulsed Doppler echocardiography performed. Convective gradients for the systolic (S), diastolic (D), and atrial reversal (AR) phases of pulmonary venous flow were determined using the simplified Bernoulli equation and correlated with measured actual pressure differences. A linear relationship was observed between the convective (y) and actual (x) pressure differences for the S (y = 0.23x + 0.0074, r = 0.82) and D (y = 0.22x + 0.092, r = 0.81) waves, but not for the AR wave (y = 0.030x + 0.13, r = 0.10). Numerical modeling resulted in similar slopes for the S (y = 0.200x - 0.127, r = 0.97), D (y = 0.247x - 0.354, r = 0.99), and AR (y = 0.087x - 0.083, r = 0.96) waves. Consistent with numerical modeling, the convective term strongly correlates with but significantly underestimates actual gradient because of large inertial forces.
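The convective term referred to above is the standard clinical reduction of the simplified Bernoulli equation, delta-P [mmHg] ≈ 4 v^2 with v in m/s. A minimal sketch, applied to illustrative pulmonary venous velocities (assumed values, not data from the study):

```python
# Simplified Bernoulli convective pressure gradient as used in clinical
# echocardiography: delta-P [mmHg] ~= 4 * v^2 for v in m/s.

def convective_gradient(v_m_per_s: float) -> float:
    """Convective pressure difference in mmHg from a Doppler velocity."""
    return 4.0 * v_m_per_s ** 2

# Illustrative (made-up) pulmonary venous phase velocities:
for phase, v in [("S", 0.6), ("D", 0.5), ("AR", 0.3)]:
    print(f"{phase}-wave: v = {v} m/s -> {convective_gradient(v):.2f} mmHg")
```

The study's point is that this convective estimate correlates with, but underestimates, the actual gradient in pulmonary venous flow because the inertial term of the full Bernoulli equation is neglected.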
NASA Astrophysics Data System (ADS)
Troppová, Eva; Tippner, Jan; Hrčka, Richard
2016-04-01
This paper presents an experimental measurement of thermal properties of medium density fiberboards with different thicknesses (12, 18 and 25 mm) and sample sizes (50 × 50 mm and 100 × 100 mm) by quasi-stationary method. The quasi-stationary method is a transient method which allows measurement of three thermal parameters (thermal conductivity, thermal diffusivity and heat capacity). The experimentally gained values were used to verify a numerical model and furthermore served as input parameters for the numerical probabilistic analysis. The sensitivity of measured outputs (time course of temperature) to influential factors (density, heat transfer coefficient and thermal conductivities) was established and described by the Spearman's rank correlation coefficients. The dependence of thermal properties on density was confirmed by the data measured. Density was also proved to be an important factor for sensitivity analyses as it highly correlated with all output parameters. The accuracy of the measurement method can be improved based on the results of the probabilistic analysis. The relevancy of the experiment is mainly influenced by the choice of a proper ratio between thickness and width of samples.
Le Cann, Sophie; Galland, Alexandre; Rosa, Benoît; Le Corroller, Thomas; Pithioux, Martine; Argenson, Jean-Noël; Chabrand, Patrick; Parratte, Sébastien
2014-09-01
Most acetabular cups implanted today are press-fit impacted cementless. Anchorage begins with the primary stability given by insertion of a slightly oversized cup. This primary stability is key to obtaining bone ingrowth and secondary stability. We tested the hypothesis that primary stability of the cup is related to surface roughness of the implant, using both experimental and numerical models to analyze how three levels of surface roughness (micro, macro and combined) affect the primary stability of the cup. We also investigated the effect of differences in diameter between the cup and its substrate, and of insertion force, on the cups' primary stability. The results of our study show that primary stability depends on the surface roughness of the cup. The presence of macro-roughness on the peripheral ring is found to decrease primary stability; there was excessive abrasion of the substrate, damaging it and leading to poor primary stability. Numerical modeling indicates that oversizing the cup compared to its substrate has an impact on primary stability, as has insertion force. PMID:25080896
Experimental and numerical evaluation of the heat fluxes in a basic two-dimensional motor
NASA Astrophysics Data System (ADS)
Nicoud, F.
In the framework of a study assessing the ablation of Internal Thermal Insulation (ITI) of the Ariane 5 P230 Solid Rocket Booster (SRB), a 2D basic motor has been designed and manufactured at ONERA. During the first phase of the study, emphasis has been put on the heat flux measurements on an inert wall facing a propellant grain. In order to numerically reproduce the increase of the heat transfer exchange coefficient which is experimentally observed when one proceeds from the head-end to the aft-end of the port, a 2D explicit code with a two-equation turbulence model has been used. It is found that the computed heat transfer coefficient is closer to the experimental one when a wall law accounting for the mean density variations due to the large temperature gradient near the ITI is used. For this, the ITI is assumed to be completely inert and the wall temperature is imposed. The experimental data for two other tests, not numerically simulated, are also presented.
Charalampous, Georgios; Hardalupas, Yannis
2011-03-20
The dependence of fluorescent and scattered light intensities from spherical droplets on droplet diameter was evaluated using Mie theory. The emphasis is on the evaluation of droplet sizing based on the ratio of laser-induced fluorescence and scattered light intensities (LIF/Mie technique). A parametric study is presented, which includes the effects of scattering angle, the real part of the refractive index and the dye concentration in the liquid (which determines the imaginary part of the refractive index). The assumption that the fluorescent and scattered light intensities are proportional to the volume and surface area of the droplets, as required for accurate sizing measurements, is not generally valid. More accurate sizing measurements can be performed with minimal dye concentration in the liquid and by collecting light at a scattering angle of 60 deg. rather than the commonly used angle of 90 deg. Oscillations of the scattered light intensity with droplet diameter, which are pronounced in the sidescatter direction (90 deg.) and for droplets with refractive indices around 1.4, are unfavorable to sizing accuracy.
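In the idealized limit the LIF/Mie technique assumes fluorescence scales with droplet volume (∝ d^3) and scattering with surface area (∝ d^2), so their ratio is linear in diameter. A minimal sketch of that inversion follows; the calibration constant and intensities are invented for illustration, and the paper's point is precisely that this proportionality is not generally valid:

```python
# Idealized LIF/Mie sizing: if I_lif ∝ d^3 and I_mie ∝ d^2, then
# d = K * (I_lif / I_mie) for some calibration constant K.

def diameter_from_ratio(i_lif: float, i_mie: float, K: float) -> float:
    """Droplet diameter from the LIF/Mie intensity ratio (idealized)."""
    return K * i_lif / i_mie

# Synthetic check: generate intensities for a known diameter, then invert.
K = 2.0
d_true = 50.0                 # micrometres (illustrative)
i_lif = d_true ** 3           # volume-proportional fluorescence
i_mie = K * d_true ** 2       # surface-proportional scattering
print(diameter_from_ratio(i_lif, i_mie, K))   # recovers 50.0
```

Real intensity-vs-diameter curves oscillate (especially at 90 deg. sidescatter), which is why the parametric Mie-theory study above recommends small dye concentrations and a 60 deg. collection angle.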
A critical evaluation of numerical algorithms and flow physics in complex supersonic flows
NASA Astrophysics Data System (ADS)
Aradag, Selin
In this research, two complex supersonic flows are selected for Navier-Stokes-based CFD simulation. The first test case is "Supersonic Flow over an Open Rectangular Cavity". Open-cavity flow fields are remarkably complicated, with internal and external regions that are coupled via self-sustained shear-layer oscillations. Supersonic flow past a cavity has numerous applications in store carriage and release: internal carriage of stores, which can be modeled using a cavity configuration, is used on supersonic aircraft to reduce radar cross section, aerodynamic drag and aerodynamic heating. Supersonic, turbulent, three-dimensional unsteady flow past an open rectangular cavity is simulated to understand the physics and three-dimensional nature of the cavity flow oscillations. The influences of numerical parameters such as the numerical flux scheme, computation time and flux limiter on the computed flow are determined, and two-dimensional simulations are also performed for comparison purposes. The next test case is "The Computational Design of the Boeing/AFOSR Mach 6 Wind Tunnel". Due to huge differences between geometrical scales, this problem is both challenging and computationally intensive. It is believed that most of the experimental data obtained from conventional ground testing facilities are not reliable, due to the high levels of noise associated with acoustic fluctuations from the turbulent boundary layers on the wind tunnel walls. Therefore, it is very important to have quiet testing facilities for hypersonic flow research. The Boeing/AFOSR Mach 6 wind tunnel at Purdue University has been designed as a quiet tunnel, for which the noise level is an order of magnitude lower than that in conventional wind tunnels. However, quiet flow is achieved in the Purdue Mach 6 tunnel only at low Reynolds numbers. Early transition of the nozzle-wall boundary layer has been identified as the cause of the test section noise. Separation bubbles on the bleed lip and associated
Numerical evaluation of Auger recombination coefficients in relaxed and strained germanium
NASA Astrophysics Data System (ADS)
Dominici, Stefano; Wen, Hanqing; Bertazzi, Francesco; Goano, Michele; Bellotti, Enrico
2016-05-01
The potential applications of germanium and its alloys in infrared silicon-based photonics have led to a renewed interest in their optical properties. In this letter, we report on the numerical determination of Auger coefficients at T = 300 K for relaxed and biaxially strained germanium. We use a Green's function based model that takes into account all relevant direct and phonon-assisted processes and perform calculations up to a strain level corresponding to the transition from indirect to direct energy gap. We have considered excess carrier concentrations ranging from 1016 cm-3 to 5 × 1019 cm-3. For use in device level simulations, we also provide fitting formulas for the calculated electron and hole Auger coefficients as functions of carrier density.
Copps, Kevin D.; Carnes, Brian R.
2008-04-01
We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.
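The physics behind a resistance-type thermal contact algorithm can be illustrated with a steady 1D series-resistance calculation: heat flux is continuous across the contact while the temperature jumps by q·R_c at the interface, and tied contact (the MPC algorithm) corresponds to the limit R_c → 0. A sketch with assumed material values, not taken from the report:

```python
# Steady 1D heat flow through two contacting bars with an interface
# contact resistance R_c. Flux q is continuous; temperature jumps by
# q * R_c at the contact. All values below are illustrative.

L1, k1 = 0.1, 50.0     # bar 1: length [m], conductivity [W/m-K]
L2, k2 = 0.1, 200.0    # bar 2
R_c = 1e-3             # contact resistance [m^2-K/W]; R_c -> 0 is tied contact
T_hot, T_cold = 400.0, 300.0

R_total = L1 / k1 + R_c + L2 / k2   # series thermal resistance per unit area
q = (T_hot - T_cold) / R_total      # heat flux [W/m^2]
dT_jump = q * R_c                   # temperature jump at the contact [K]
print(q, dT_jump)
```

In a finite element setting, the MPC algorithm enforces temperature continuity (dT_jump = 0) as a constraint, while the resistance algorithm adds the interface conductance 1/R_c to the system, which is why the two behave identically as R_c vanishes.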
Numerical evaluation of an innovative cup layout for open volumetric solar air receivers
NASA Astrophysics Data System (ADS)
Cagnoli, Mattia; Savoldi, Laura; Zanino, Roberto; Zaversky, Fritz
2016-05-01
This paper proposes an innovative volumetric solar absorber design to be used in high-temperature air receivers of solar power tower plants. The innovative absorber, a so-called CPC-stacked-plate configuration, applies the well-known principle of a compound parabolic concentrator (CPC) for the first time in a volumetric solar receiver, heating air to high temperatures. The proposed absorber configuration is analyzed numerically, applying first the open-source ray-tracing software Tonatiuh in order to obtain the solar flux distribution on the absorber's surfaces. Next, a Computational Fluid Dynamic (CFD) analysis of a representative single channel of the innovative receiver is performed, using the commercial CFD software ANSYS Fluent. The solution of the conjugate heat transfer problem shows that the behavior of the new absorber concept is promising, however further optimization of the geometry will be necessary in order to exceed the performance of the classical absorber designs.
A Numerical Evaluation Of A Facial Pattern In Children With Isolated Pulmonary Stenosis
NASA Astrophysics Data System (ADS)
Ainsworth, Howard; Hunt, James; Joseph, Michael
1980-07-01
A facial contouring technique using light sectioning, due to Coob, was modified by Ainsworth and Joseph and used in a numerical study of children with isolated pulmonary stenosis (PS) to test the hypothesis that the facial pattern in this condition differs from the normal. Measurements were compared between a group of 20 normal children and a group of 20 children with PS between the ages of 6 and 10.5 years. A distinctive facial pattern has emerged. Many anteroposterior measurements were significantly greater in the PS group, indicating that the tissues are more prominent in the maxillary region. Twenty-nine of the measurements showed significant differences between the two groups (P < .05). Discriminant analyses were carried out to discover which, if any, might be used to predict the group to which an individual should belong. Depending on the variables chosen, between 34 and 37 individuals from the total of 40 were assigned to their correct group, PS or control.
Numerical and experimental evaluation of a compact sensor antenna for healthcare devices.
Alomainy, A; Yang Hao; Pasveer, F
2007-12-01
The paper presents a compact planar antenna designed for wireless sensors intended for healthcare applications. Antenna performance is investigated with regard to various parameters governing the overall sensor operation. The study illustrates the importance of including full sensor details in determining and analysing the antenna performance. A globally optimized sensor antenna shows an increase in antenna gain by 2.8 dB and 29% higher radiation efficiency in comparison to a conventional printed strip antenna. The wearable sensor performance is demonstrated, and effects on antenna radiated power, efficiency and front-to-back ratio of radiated energy are investigated both numerically and experimentally. Propagation characteristics of the body-worn sensor to on-body and off-body base units are also studied. It is demonstrated that the improved sensor antenna has an increase in transmitted and received power; consequently, the sensor coverage range is extended by approximately 25%. PMID:23852005
Carbon capture and storage reservoir properties from poroelastic inversion: A numerical evaluation
NASA Astrophysics Data System (ADS)
Lepore, Simone; Ghose, Ranajit
2015-11-01
We investigate the prospect of estimating carbon capture and storage (CCS) reservoir properties from P-wave intrinsic attenuation and velocity dispersion. Numerical analogues for two CCS reservoirs are examined: the Utsira saline formation at Sleipner (Norway) and the coal-bed methane basin at Atzbach-Schwanestadt (Austria). P-wave intrinsic dispersion curves in the field-seismic frequency band, obtained from theoretical studies based on simulation of oscillatory compressibility and shear tests upon representative rock samples, are considered as observed data. We carry out forward modelling using poroelasticity theories, making use of previously established empirical relations, pertinent to CCS reservoirs, to link pressure, temperature and CO2 saturation to other properties. To derive the reservoir properties, poroelastic inversions are performed through a global multiparameter optimization using simulated annealing. We find that the combination of attenuation and velocity dispersion in the error function helps significantly in eliminating the local minima and obtaining a stable result in inversion. This is because of the presence of convexity in the solution space when an integrated error function is minimized, which is governed by the underlying physics. The results show that, even in the presence of fairly large model discrepancies, the inversion provides reliable values for the reservoir properties, with the error being less than 10% for most of them. The estimated values of velocity and attenuation and their sensitivity to effective stress and CO2 saturation generally agree with the earlier experimental observation. Although developed and tested for numerical analogues of CCS reservoirs, the approach presented here can be adapted in order to predict key properties in a fluid-bearing porous reservoir, in general.
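A bare-bones version of the global optimization step, a simulated-annealing loop minimizing an error function that combines two observables (stand-ins for attenuation and velocity dispersion), can be sketched as follows. The toy forward model, parameter names, bounds and cooling schedule are all invented for illustration and carry none of the poroelastic physics:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(p):
    """Toy two-parameter forward model: returns (velocity, attenuation)."""
    stress, sat = p
    vel = 2.0 + 0.5 * stress - 0.3 * sat
    att = 0.1 + 0.2 * sat * (1 - sat) + 0.05 * stress
    return vel, att

p_true = np.array([0.6, 0.4])
vel_obs, att_obs = forward(p_true)          # synthetic "observed" data

def misfit(p):
    """Combined error function over both observables."""
    vel, att = forward(p)
    return (vel - vel_obs) ** 2 + (att - att_obs) ** 2

# Metropolis-style simulated annealing with geometric cooling.
p = np.array([0.1, 0.9])
e = misfit(p)
best_p, best_e = p.copy(), e
T = 1.0
for step in range(20000):
    cand = np.clip(p + rng.normal(0.0, 0.05, 2), 0.0, 1.0)
    ec = misfit(cand)
    if ec < e or rng.random() < np.exp(-(ec - e) / T):
        p, e = cand, ec
        if e < best_e:
            best_p, best_e = p.copy(), e
    T *= 0.9995
print(best_p, best_e)
```

The loop drives the combined misfit toward zero; as the abstract notes, it is the joint use of both observables in the error function that tends to convexify the solution space and suppress spurious local minima in the real inversion.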
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-01-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570
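The point of the age-averaged model is that integrating over particle ages analytically removes the Monte Carlo sampling step. A toy illustration of that trade, assuming a uniform age distribution and a closed-form first-order uptake law standing in for the paper's microscale diffusion equations:

```python
import math
import random

def uptake(age, k=0.5, q_eq=1.0):
    """Stand-in microscale result: uptake of a bead of a given age as a
    first-order approach to equilibrium. The paper solves a diffusion PDE
    per bead; this closed form is an illustrative assumption."""
    return q_eq * (1.0 - math.exp(-k * age))

def monte_carlo_mean_uptake(n, max_age=10.0, seed=0):
    """Original approach: sample bead ages at random and average uptake."""
    rng = random.Random(seed)
    return sum(uptake(rng.uniform(0.0, max_age)) for _ in range(n)) / n

def age_averaged_uptake(max_age=10.0, k=0.5, q_eq=1.0):
    """AAM idea: integrate uptake over the (uniform) age distribution in
    closed form, (1/T) * int_0^T q_eq (1 - e^{-k a}) da, so no sampling
    is needed at all."""
    return q_eq * (1.0 - (1.0 - math.exp(-k * max_age)) / (k * max_age))
```

The analytic average matches the Monte Carlo estimate to within sampling noise while costing a single expression rather than thousands of samples, which is the efficiency gain the abstract reports in miniature.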
NASA Astrophysics Data System (ADS)
Huijssen, Jacobus; Hallez, Raphael; Pluymers, Bert; Desmet, Wim
2013-07-01
A synthesis procedure is presented for the prediction of the sound pressure level (SPL) of passenger vehicles in a pass-by noise test. The proposed synthesis procedure translates the noise from the sources in the moving vehicle to the receivers in two steps. Firstly, the steady-state receiver contributions of the sources are computed as they would arise from a number of static vehicle positions along the drive path. Secondly, these contributions are then combined into a single transient signal from a moving vehicle for each source-receiver pair by means of a travel time correction. The multiple source-receiver transfer functions are numerically evaluated by employing the Fast Multipole Boundary Element Method (FMBEM), which allows for pass-by noise SPL estimation on the basis of the CAD/CAE computer models that are available early in the design stage. Results are presented that show the accuracy of the synthesis procedure and that show the ability of the combination of the synthesis procedure and numerically evaluated transfer functions to predict pass-by noise SPL for a realistic case in an evaluation time of less than a day.
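The two-step synthesis above (static contributions first, then a travel-time correction) can be sketched for a single source-receiver pair, with a free-field 1/r monopole decay standing in for the numerically evaluated FMBEM transfer functions; the drive-path geometry and source level are illustrative assumptions:

```python
import math

C_SOUND = 343.0  # speed of sound in air [m/s]

def synthesize_passby(speed, receiver, t_emit, source_level=1.0):
    """Step 1: steady-state contributions at successive static vehicle
    positions along a straight drive path (1/r monopole decay stands in
    for the FMBEM transfer functions).
    Step 2: travel-time correction shifting each emission time to its
    arrival time at the receiver, yielding one transient signal."""
    arrivals = []
    for t in t_emit:
        x = (-25.0 + speed * t, 0.0)           # vehicle position (assumed path)
        r = math.dist(x, receiver)
        p = source_level / r                    # static contribution at this position
        arrivals.append((t + r / C_SOUND, p))   # retarded-time (travel-time) shift
    return arrivals
```

For a receiver 7.5 m from the path, as in a standard pass-by test, the synthesized signal peaks at the closest point of approach, as expected.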
A Numerical model to evaluate proposed ground-water allocations in southwest Kansas
Jorgensen, D.G.; Grubb, H.F.; Baker, C.H.; Hilmes, G.E.; Jenkins, E.D.
1982-01-01
A computer model was developed to assist the Southwest Kansas Groundwater Management District No. 3 in evaluating applications to appropriate ground water. The model calculates the drawdown due to a proposed well at all existing wells in the section of the proposed well and at all wells in the adjacent eight sections. The depletion expected in the 9-square-mile area due to all existing wells and the proposed well is computed and compared with allowable limits defined by the management district. An optional program permits the evaluation of allowable depletion for one or more townships. All options are designed to run interactively, allowing immediate evaluation of proposed ground-water withdrawals. (USGS)
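Drawdown at neighboring wells of the kind this model computes is classically estimated with the Theis solution; the abstract does not state which analytic solution the USGS model uses, so the following is a generic sketch under that assumption, with the well function evaluated by its convergent series:

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^{n+1} u^n/(n*n!),
    accurate for the small u typical of regional drawdown estimates."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    total = -gamma - math.log(u)
    term = 1.0   # running value of u^n / n!
    sign = 1.0
    for n in range(1, terms + 1):
        term *= u / n
        total += sign * term / n
        sign = -sign
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown [m] at radius r [m] after pumping time t [days], for
    pumping rate Q [m^3/day], transmissivity T [m^2/day], storativity S."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

As expected, drawdown decreases with distance from the pumped well, which is why the district's review extends over the section of the proposed well and the eight adjacent sections.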
Three-Dimensional Numerical Evaluation of Thermal Performance of Uninsulated Wall Assemblies
Ridouane, El Hassan; Bianchi, Marcus V.A.
2011-11-01
This study describes a detailed 3D computational fluid dynamics model that evaluates the thermal performance of uninsulated wall assemblies. It accounts for conduction through framing, convection, and radiation and allows for material property variations with temperature. This research was presented at the ASME 2011 International Mechanical Engineering Congress and Exhibition; Denver, Colorado; November 11-17, 2011
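For comparison with such a 3D CFD model, the standard 1D hand calculation is the parallel-path method, which area-weights the conductances of the framing and cavity paths. A minimal sketch, in which all layer thicknesses, conductivities, film resistances, and the framing fraction are illustrative assumptions rather than values from the study:

```python
def series_r(layers):
    """Total conduction resistance [m^2*K/W] of layers in series,
    where each layer is (thickness [m], conductivity [W/(m*K)])."""
    return sum(thickness / k for thickness, k in layers)

def parallel_path_u(paths):
    """Area-weighted parallel-path U-factor [W/(m^2*K)]:
    paths = [(area_fraction, total_path_R)]."""
    return sum(f / r for f, r in paths)
```

For an uninsulated stud wall one might take gypsum and sheathing layers common to both paths, add a wood-stud resistance on the framing path and a small still-air cavity resistance on the cavity path, plus interior/exterior film resistances; the hand method then gives a single U-factor, whereas the 3D model resolves the convection and radiation inside the cavity that this 1D average cannot.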
Evaluation of Sulfur Flow Emplacement on Io from Galileo Data and Numerical Modeling
NASA Technical Reports Server (NTRS)
Williams, David A.; Greeley, Ronald; Lopes, Rosaly M. C.; Davies, Ashley G.
2001-01-01
Galileo images of bright lava flows surrounding Emakong Patera have been analyzed and numerical modeling has been performed to assess whether these flows could have resulted from the emplacement of sulfur lavas on Io. Images from the solid-state imaging (SSI) camera show that these bright, white to yellow Emakong flows are up to 370 km long and contain dark, sinuous features that are interpreted to be lava conduits, approx. 300-500 m wide and > 100 km long. Near-Infrared Mapping Spectrometer (NIMS) thermal emission data yield a color temperature estimate of 344 K +/- 60 K (less than or equal to 131 C) within the Emakong caldera. We suggest that these bright flows likely resulted from either sulfur lavas or silicate lavas that have undergone extensive cooling, pyroclastic mantling, and/or alteration with bright sulfurous materials. The Emakong bright flows have estimated volumes of approx. 250-350 cu km, similar to some of the smaller Columbia River Basalt flows. If the Emakong flows did result from effusive sulfur eruptions, then they are orders of magnitude greater in volume than any terrestrial sulfur flows. Our numerical modeling results show that sulfur lavas on Io could have been emplaced as turbulent flows, which were capable of traveling tens to hundreds of kilometers, consistent with the predictions of Sagan [1979] and Fink et al. [1983]. Our modeled flow distances are also consistent with the measured lengths of the Emakong channels and bright flows. Modeled thermal erosion rates are approx. 1-4 m/d for flows erupted at approx. 140-180 C, which are consistent with the melting rates of Kieffer et al. [2000]. The Emakong channels could be the product of thermal erosion; however, the morphologic signatures of thermal erosion channels cannot be discerned from available images. Galileo flybys of Io planned for 2001 provide excellent opportunities to obtain high-resolution morphologic and color data of Emakong Patera. Such observations could, along
Wang, Haiqiang; Zhuang, Zhuokai; Sun, Chenglang; Zhao, Nan; Liu, Yue; Wu, Zhongbiao
2016-03-01
Wet scrubbing combined with ozone oxidation has become a promising technology for the simultaneous removal of SO2 and NOx from exhaust gas. In this paper, a new 20-species, 76-step detailed kinetic mechanism was proposed for the reactions between O3 and NOx. The concentration of N2O5 was measured using an in-situ IR spectrometer. The numerical evaluation results agreed well with both published experimental results and our own experimental results. Key reaction parameters for the generation of NO2 and N2O5 during the NO ozonation process were investigated by numerical simulation. The effect of temperature on producing NO2 was found to be negligible. To produce NO2, the optimal residence time was 1.25 sec and the optimal molar ratio of O3/NO was about 1. For the generation of N2O5, the residence time should be about 8 sec, the temperature of the exhaust gas should be strictly controlled, and the molar ratio of O3/NO should be about 1.75. This study provided detailed investigations of the reaction parameters for the ozonation of NOx by numerical simulation, and the results should be helpful for the design and optimization of ozone oxidation combined with the wet flue gas desulfurization (WFGD) method for the removal of NOx. PMID:26969050
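The qualitative sequence NO → NO2 → N2O5 that the simulations track can be reproduced with a three-reaction subset of the chemistry, integrated here by explicit Euler. The rate constants are representative room-temperature literature values and the subset is a drastic simplification of the paper's 20-species, 76-step mechanism; both are assumptions for illustration only:

```python
def ozonation(no0, o3_ratio=1.75, t_end=2.0, dt=1e-5):
    """Explicit-Euler integration of a 3-reaction subset of NO ozonation.
    Concentrations in molecules/cm^3; rate constants are representative
    298 K values (assumption), not the paper's fitted mechanism."""
    k1 = 1.8e-14   # NO  + O3  -> NO2 + O2        [cm^3/(molecule*s)]
    k2 = 3.2e-17   # NO2 + O3  -> NO3 + O2
    k3 = 1.2e-12   # NO2 + NO3 -> N2O5 (effective second-order, ~1 atm)
    no, o3, no2, no3, n2o5 = no0, o3_ratio * no0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        r1 = k1 * no * o3
        r2 = k2 * no2 * o3
        r3 = k3 * no2 * no3
        no   -= dt * r1
        o3   -= dt * (r1 + r2)
        no2  += dt * (r1 - r2 - r3)
        no3  += dt * (r2 - r3)
        n2o5 += dt * r3
    return {"NO": no, "NO2": no2, "NO3": no3, "N2O5": n2o5, "O3": o3}
```

Even this reduced scheme shows the behavior behind the paper's residence-time findings: NO is consumed almost immediately, while N2O5 accumulates on a much slower timescale governed by the NO2 + O3 step, which is why a longer residence time and excess ozone are needed for N2O5.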