Meek, Garrett A; Levine, Benjamin G
2014-07-01
Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558
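The Landau-Zener comparison above can be reproduced in miniature by integrating the two-level time-dependent Schrödinger equation directly and checking the asymptotic transfer probability against the closed-form Landau-Zener result. This is an illustrative sketch, not the authors' NPI implementation; the diabatic Hamiltonian, coupling `a`, and sweep rate `k` are hypothetical choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-level Landau-Zener sweep: H(t) = [[k t, a], [a, -k t]], hbar = 1.
# Exact asymptotic probability of remaining diabatic: exp(-pi a^2 / k).
a, k, T = 0.2, 1.0, 40.0

def rhs(t, c):
    H = np.array([[k * t, a], [a, -k * t]], dtype=complex)
    return -1j * (H @ c)

c0 = np.array([1.0 + 0j, 0.0 + 0j])        # start in diabatic state 1
sol = solve_ivp(rhs, (-T, T), c0, rtol=1e-8, atol=1e-10)
p_num = abs(sol.y[0, -1])**2               # probability of no transition
p_exact = np.exp(-np.pi * a**2 / k)
```

For a finite sweep window the numerical probability oscillates around the asymptotic value, which is why a loose comparison tolerance is appropriate.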
Bangalore, Sai Santosh; Wang, Jelai; Allison, David B.
2009-01-01
In the fields of genomics and high dimensional biology (HDB), massive multiple testing prompts the use of extremely small significance levels. Because tail areas of statistical distributions are needed for hypothesis testing, the accuracy of these areas is important to confidently make scientific judgments. Previous work on accuracy was primarily focused on evaluating professionally written statistical software, like SAS, on the Statistical Reference Datasets (StRD) provided by the National Institute of Standards and Technology (NIST), and on the accuracy of tail areas in statistical distributions. The goal of this paper is to provide guidance to investigators who are developing their own custom scientific software built upon numerical libraries written by others. Specifically, we evaluate the accuracy of small tail areas from cumulative distribution functions (CDF) of the Chi-square and t-distribution by comparing several open-source, free, or commercially licensed numerical libraries in Java, C, and R to widely accepted standards of comparison like ELV and DCDFLIB. In our evaluation, the C libraries and R functions are consistently accurate up to six significant digits. Amongst the evaluated Java libraries, Colt is the most accurate. These languages and libraries are popular choices among programmers developing scientific software, so the results herein can be useful to programmers in choosing libraries for CDF accuracy. PMID:20161126
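The cancellation problem that motivates dedicated tail-area routines is easy to demonstrate. The sketch below is an illustration, not the paper's benchmark suite: it contrasts computing an upper tail of the Chi-square distribution as 1 − CDF, which collapses to zero in double precision, with a dedicated survival function.

```python
from scipy.stats import chi2

# Upper tail P(X > 100) for Chi-square with 1 df: true value ~1.5e-23.
x, df = 100.0, 1
naive = 1.0 - chi2.cdf(x, df)   # cdf rounds to 1.0 -> catastrophic cancellation
direct = chi2.sf(x, df)         # dedicated survival (upper-tail) function
```

The naive form returns exactly 0.0, while the survival function retains a meaningful, tiny tail area, which is precisely the kind of difference the paper's comparisons probe.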
NASA Astrophysics Data System (ADS)
Hedrick, A. R.; Marks, D. G.; Winstral, A. H.; Marshall, H. P.
2014-12-01
The ability to forecast snow water equivalent, or SWE, in mountain catchments would benefit many different communities ranging from avalanche hazard mitigation to water resource management. Historical model runs of Isnobal, a physically based energy balance snow model, have been produced over the 2150 km² Boise River Basin for water years 2012-2014 at 100-meter resolution. Spatially distributed forcing parameters such as precipitation, wind, and relative humidity are generated from automated weather stations located throughout the watershed, and are supplied to Isnobal at hourly timesteps. Similarly, the Weather Research & Forecasting (WRF) Model provides hourly predictions of the same forcing parameters from an atmospheric physics perspective. This work aims to quantitatively compare WRF model output to the spatial meteorological fields developed to force Isnobal, with the hope of eventually using WRF predictions to create accurate hourly forecasts of SWE over a large mountainous basin.
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
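The idea of combining a low-order finite difference with Richardson's extrapolation can be illustrated on a simple derivative rather than an eigenvalue problem; the sketch below is a generic example of the technique, not the authors' scheme:

```python
import numpy as np

def central_diff(f, x, h):
    # second-order central difference, error ~ O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h):
    # one Richardson step cancels the h^2 error term, leaving O(h^4)
    return (4.0 * central_diff(f, x, h / 2.0) - central_diff(f, x, h)) / 3.0

h = 0.1
err_plain = abs(central_diff(np.sin, 1.0, h) - np.cos(1.0))
err_extrap = abs(richardson(np.sin, 1.0, h) - np.cos(1.0))
```

One extrapolation step on a crude mesh (h = 0.1) already reduces the error by several orders of magnitude, which is the mechanism the abstract exploits for expectation values.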
Accurate numerical solutions of conservative nonlinear oscillators
NASA Astrophysics Data System (ADS)
Khan, Najeeb Alam; Khan, Nasir Uddin; Khan, Nadeem Alam
2014-12-01
The objective of this paper is to present an investigation of the vibration of a conservative nonlinear oscillator of the form u'' + lambda u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation into sets of algebraic equations that are solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
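As a minimal stand-in for the oscillators treated above, the sketch below integrates the cubic Duffing case u'' + u + u³ = 0 with a standard ODE solver and monitors the conserved energy; it illustrates the conservative character of such oscillators, not the paper's algebraic method:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Cubic Duffing oscillator u'' + u + u^3 = 0 (illustrative special case)
def rhs(t, y):
    u, v = y
    return [v, -(u + u**3)]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
energy = lambda u, v: 0.5 * v**2 + 0.5 * u**2 + 0.25 * u**4   # conserved
E0 = energy(1.0, 0.0)                                          # = 0.75
drift = abs(energy(*sol.y[:, -1]) - E0)
```

A tight-tolerance integration keeps the energy drift tiny over many periods, which is the natural accuracy check for any numerical solution of a conservative oscillator.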
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
Accurate complex scaling of three dimensional numerical potentials
Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry
2013-05-28
The complex scaling method, which consists of continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can thus be performed efficiently and accurately. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
Accurate derivative evaluation for any Grad-Shafranov solver
NASA Astrophysics Data System (ADS)
Ricketson, L. F.; Cerfon, A. J.; Rachh, M.; Freidberg, J. P.
2016-01-01
We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad-Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
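The two decision functions compared above, drawing a sample from the posterior versus taking its maximum, can be sketched for a discrete numerical estimate. All distributions and parameters below are hypothetical illustrations, not the paper's experimental stimuli:

```python
import numpy as np

rng = np.random.default_rng(1)
support = np.arange(1, 11)                                  # discrete responses
prior = np.array([1, 4, 8, 4, 1, 1, 4, 8, 4, 1], float)    # bimodal, hypothetical
prior /= prior.sum()

def posterior(obs, sigma=1.5):
    # Gaussian likelihood of a noisy observation of the true count
    like = np.exp(-0.5 * ((obs - support) / sigma)**2)
    p = prior * like
    return p / p.sum()

post = posterior(obs=6.0)
map_response = support[np.argmax(post)]            # 'maximum' decision function
samples = rng.choice(support, size=5000, p=post)   # 'sampling' decision function
```

With a bimodal prior, the two rules can diverge noticeably: the maximum rule always returns the single best mode, while sampled responses scatter across the whole posterior.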
The development of accurate and efficient methods of numerical quadrature
NASA Technical Reports Server (NTRS)
Feagin, T.
1973-01-01
Some new methods for performing numerical quadrature of an integrable function over a finite interval are described. Each method provides a sequence of approximations of increasing order to the value of the integral. Each approximation makes use of all previously computed values of the integrand. The points at which new values of the integrand are computed are selected in such a way that the order of the approximation is maximized. The methods are compared with the quadrature methods of Clenshaw and Curtis, Gauss, Patterson, and Romberg using several examples.
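Romberg's method, one of the comparison schemes named above, is a classic example of a quadrature sequence in which every new approximation reuses all previously computed integrand values; the following is a minimal generic sketch, not the paper's new methods:

```python
import numpy as np

def romberg(f, a, b, levels=6):
    # Each trapezoid refinement reuses all previously computed integrand
    # values; the triangular table extrapolates them to increasing order.
    R = np.zeros((levels, levels))
    R[0, 0] = 0.5 * (b - a) * (f(a) + f(b))
    for k in range(1, levels):
        h = (b - a) / 2**k
        new_pts = a + h * np.arange(1, 2**k, 2)        # only the new midpoints
        R[k, 0] = 0.5 * R[k - 1, 0] + h * np.sum(f(new_pts))
        for j in range(1, k + 1):                      # Richardson extrapolation
            R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4**j - 1)
    return R[levels - 1, levels - 1]

val = romberg(np.sin, 0.0, np.pi)   # exact value is 2
```

Only the new midpoints are evaluated at each level, so the cost grows linearly in the number of points while the order of the approximation increases with each row of the table.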
Accurate numerical solution of compressible, linear stability equations
NASA Technical Reports Server (NTRS)
Malik, M. R.; Chuang, S.; Hussaini, M. Y.
1982-01-01
The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
Numerical evaluation of uniform beam modes.
Tang, Y.; Reactor Analysis and Engineering
2003-12-01
The equation for calculating the normal modes of a uniform beam under transverse free vibration involves the hyperbolic sine and cosine functions. These functions grow exponentially without bound. Tables of the natural frequencies and the corresponding normal modes are available for numerical evaluation up to the 16th mode. For modes higher than the 16th, the accuracy of the numerical evaluation is lost due to round-off errors in the floating-point arithmetic of digital computers. It is also found that the beam-mode functions commonly presented in structural dynamics books are not suitable for numerical evaluation. In this paper, these functions are rearranged and expressed in a different form. With these new equations, one can calculate the normal modes accurately up to at least the 100th mode. Mike's Arbitrary Precision Math, an arbitrary-precision math library, is used in the paper to verify the accuracy.
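The rearrangement idea can be illustrated on the clamped-free frequency equation cosh(x)·cos(x) + 1 = 0: evaluated literally, cosh overflows double precision for very high modes, but dividing through by cosh gives cos(x) + sech(x) = 0, which stays finite. This is a sketch of the general principle, not the paper's rearranged mode-shape equations:

```python
import numpy as np
from scipy.optimize import brentq

def sech(x):
    # 1/cosh(x) computed as 2 e^{-x} / (1 + e^{-2x}): no overflow for large x
    e = np.exp(-x)
    return 2.0 * e / (1.0 + e * e)

def freq_eq(x):
    # clamped-free frequency equation cosh(x) cos(x) + 1 = 0,
    # divided by cosh(x) so every term stays finite: cos(x) + sech(x) = 0
    return np.cos(x) + sech(x)

def beam_root(n):
    guess = (2 * n - 1) * np.pi / 2.0       # asymptotic root location
    return brentq(freq_eq, guess - 0.5, guess + 0.5)

r1 = beam_root(1)       # 1.8751..., the classic first clamped-free root
r300 = beam_root(300)   # near x ~ 941, where cosh(x) overflows in literal form
```

For high modes the sech term underflows harmlessly to zero, so the root finder works at mode numbers where the textbook form would return infinities.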
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.
Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique
2013-06-01
The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a well-founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. In consequence, this leads to concrete recommendations, whereby the obtained results are discussed not for their medical relevance but for the evaluation of their quality. This investigation will hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530
Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques
Petersen, Richard C.
2014-01-01
Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith crack theory [6] to include SIc as a more accurate term for the strain energy release rate (GIc), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. Polymer-matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing to accentuate toughness differences, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical-integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because measurements of the necessary detail are rare in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.
Efficient numerical evaluation of Feynman integrals
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran
2016-03-01
Feynman loop integrals are a key ingredient for the calculation of higher-order radiative effects, and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions, in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading-order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), the Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), the State Key Lab for Electronics and Particle Detectors, the Open Project Program of the State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), and the Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098).
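The quasi-Monte Carlo ingredient can be illustrated with a scrambled Sobol sequence on a smooth toy integrand (not a Feynman integral); the integrand and sample sizes below are arbitrary choices:

```python
import numpy as np
from math import erf, pi, sqrt
from scipy.stats import qmc

f = lambda p: np.exp(-np.sum(p**2, axis=1))    # smooth test integrand on [0,1]^2
exact = (sqrt(pi) / 2.0 * erf(1.0))**2         # closed-form reference value

n = 2**12                                      # power of two keeps Sobol balanced
plain = f(np.random.default_rng(0).random((n, 2))).mean()   # plain Monte Carlo
quasi = f(qmc.Sobol(d=2, scramble=True, seed=0).random(n)).mean()
```

For smooth integrands the low-discrepancy points converge much faster than the ~n^(-1/2) rate of plain Monte Carlo, which is what makes the half-minute evaluations quoted above plausible on a GPU.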
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator and, further, to provide accurate models of heavy-ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Efficient and accurate numerical methods for the Klein-Gordon-Schrödinger equations
Bao, Weizhu (E-mail: bao@math.nus.edu.sg); Yang, Li (E-mail: yangli@nus.edu.sg)
2007-08-10
In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schrödinger (KGS) equations with/without damping terms. The key features of our methods are: (i) the application of a time-splitting spectral discretization for the Schrödinger-type equation in KGS; (ii) the utilization of a Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for the linear/nonlinear terms in the time derivatives. The numerical methods are either explicit, or implicit but explicitly solvable; they are unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.
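The first ingredient, a time-splitting spectral (split-step Fourier) discretization for a Schrödinger-type equation, can be sketched in a few lines; the harmonic potential and grid parameters are illustrative choices, and the unitary substeps conserve the wave-function norm to machine precision:

```python
import numpy as np

# Strang-split spectral stepping for i psi_t = -(1/2) psi_xx + V(x) psi
# (hbar = m = 1, periodic grid); V and the grid are illustrative choices.
L, N, dt = 40.0, 256, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                                  # harmonic potential stand-in
psi = (2.0 / np.pi)**0.25 * np.exp(-x**2)       # normalized Gaussian state

half_V = np.exp(-0.5j * dt * V)                 # half potential step (Strang)
kin = np.exp(-0.5j * dt * k**2)                 # full kinetic step in k-space

norm = lambda p: np.sum(np.abs(p)**2) * (L / N)
norm0 = norm(psi)
for _ in range(500):
    psi = half_V * np.fft.ifft(kin * np.fft.fft(half_V * psi))
norm1 = norm(psi)
```

Each substep is a multiplication by a unit-modulus phase in the appropriate basis, so the scheme is unconditionally stable and norm-conserving by construction, mirroring the properties claimed for the KGS methods.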
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining the radiation source and the exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue-based procedure on an ordinary personal computer. The tools prepare human-body and source models in the input-file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure. PMID:17510203
A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation
NASA Astrophysics Data System (ADS)
Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin
2016-07-01
In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), the problem is converted into a sequence of linear ordinary differential equations whose solutions converge to that of the original problem. For the first time, the rational Euler (RE) and FRE functions are constructed from the Euler polynomials. In addition, the equation is solved on the semi-infinite domain, without truncation to a finite domain, by taking the FRE as basis functions for the collocation method. This reduces the problem to the solution of a system of algebraic equations. We demonstrate that the new algorithm is efficient for obtaining the values of y'(0), y(x), and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.
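For comparison, a generic approach to the same Thomas-Fermi problem is to truncate the domain and hand it to a standard boundary-value solver, precisely the truncation the FRE basis avoids. This sketch is not the paper's method; the truncation point, mesh, and initial guess are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Thomas-Fermi equation: y'' = y^(3/2) / sqrt(x), y(0) = 1, y(inf) = 0.
def rhs(x, y):
    # clip guards against tiny negative iterates during Newton iterations
    return np.vstack([y[1], np.clip(y[0], 0.0, None)**1.5 / np.sqrt(x)])

def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[0]])

a, b = 1e-6, 30.0                       # truncated stand-in for [0, inf)
x = np.linspace(a, b, 400)
y_guess = np.vstack([np.exp(-x), -np.exp(-x)])
sol = solve_bvp(rhs, bc, x, y_guess, tol=1e-6, max_nodes=30000)
slope0 = sol.sol(a)[1]                  # approximates y'(0), known ~ -1.588
```

The truncation forces y(30) = 0 even though the true solution decays only algebraically, which is exactly the kind of error a semi-infinite rational basis is designed to eliminate.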
Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency
NASA Astrophysics Data System (ADS)
Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao
2008-05-01
Exact estimation of molecular binding affinity is critically important for drug discovery. The energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate slight differences in binding affinity when distinguishing a prospective substance from dozens of drug candidates. Hence, more accurate computational estimation of drug efficacy is currently in demand. Previously we proposed a concept for estimating molecular binding affinity that focuses on the fluctuation at the interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples of computer simulations: an association of human immunodeficiency virus type-1 (HIV-1) protease with its inhibitor (an example of drug-enzyme binding), a complexation of an antigen with its antibody (an example of protein-protein binding), and a combination of the estrogen receptor with its ligand chemicals (an example of ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique for the advanced stages of drug discovery and design.
PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release
NASA Astrophysics Data System (ADS)
Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.
2016-09-01
The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
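Under constant conditions, the regime in which an analytic modal basis is exact, the spherical diffusion problem has a classical eigenfunction-series solution that can be cross-checked against a finite-difference solve. The sketch below illustrates that modal solution for the simplest case (uniform initial concentration, perfect-sink boundary, no source); it is not the PolyPole-1 algorithm itself:

```python
import numpy as np

# Uniform initial gas in a sphere of radius a with a perfect-sink boundary:
# fraction retained m(t) = (6/pi^2) * sum_n exp(-n^2 pi^2 D t / a^2) / n^2
def retained_modal(t, D=1.0, a=1.0, nmax=200):
    n = np.arange(1, nmax + 1)
    return 6.0 / np.pi**2 * np.sum(np.exp(-(n * np.pi / a)**2 * D * t) / n**2)

# Cross-check: explicit finite differences for c_t = D (c_rr + 2 c_r / r)
def retained_fd(t, D=1.0, a=1.0, nr=200):
    dr = a / nr
    r = np.linspace(0.0, a, nr + 1)
    c = np.ones(nr + 1); c[-1] = 0.0
    dt = 0.2 * dr**2 / D                          # explicit stability limit
    for _ in range(int(t / dt)):
        lap = np.zeros_like(c)
        lap[1:-1] = ((c[2:] - 2.0 * c[1:-1] + c[:-2]) / dr**2
                     + (c[2:] - c[:-2]) / (r[1:-1] * dr))
        lap[0] = 6.0 * (c[1] - c[0]) / dr**2      # symmetry condition at r = 0
        c = c + dt * D * lap
        c[-1] = 0.0
    return 3.0 * np.sum(c * r**2) * dr / a**3     # volume-averaged fraction

m_modal = retained_modal(0.05)
m_fd = retained_fd(0.05)
```

The finite-difference march agrees with the modal series, but needs thousands of tiny explicit steps; the appeal of a modal representation with corrective terms is precisely that it avoids such stepping under time-varying conditions.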
Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Wilcox, L.
2013-12-01
Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of off-fault resolution required in the earlier low-order finite-volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library, and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.
Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and drying in Banda Aceh
NASA Astrophysics Data System (ADS)
Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven
2010-05-01
The Indian Ocean earthquake of December 26, 2004 caused one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst hit regions was Banda Aceh, the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development. The work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI has been developed by the Alfred Wegener Institute; it is an explicit finite element model. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum. This is not possible in the current version of TsunAWI. The P1NC-P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC-P1 element, and is shown to be mass and momentum conserving. Then a number of simulations are presented, including dam break problems with flooding over both a wet and a dry bed. Excellent agreement is found. Then we present simulations for Banda Aceh, and compare the results to on-site survey data, as well as to results from the original TsunAWI code.
Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows
Johnson, B M; Guan, X; Gammie, F
2008-04-11
In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
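The core splitting, an exact integer-cell shift plus interpolation of only the fractional remainder, can be sketched in one dimension. This is an illustrative sketch using linear interpolation on a periodic domain; actual FARGO-type schemes use higher-order interpolation and, for MHD, handle the electric fields on the staggered mesh:

```python
import numpy as np

def orbital_advect(q, v_mean, dt, dx):
    """Advect the periodic profile q by v_mean*dt: the integer number of
    cells is handled by an exact roll (no truncation error), and only the
    fractional remainder is interpolated (here linearly, upstream-weighted)."""
    shift = v_mean * dt / dx
    n_int = int(np.floor(shift))
    frac = shift - n_int            # fractional cell shift in [0, 1)
    q = np.roll(q, n_int)           # exact integer-cell shift
    return (1.0 - frac) * q + frac * np.roll(q, 1)
```

Because the integer part of the shift is exact, the effective Courant constraint involves only the peculiar velocity, and linear interpolation of the fractional part conserves the total of q on a periodic grid.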
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
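A parameterization of the type described, runup estimated from offshore wave height, wave period, and beach slope, can be sketched as follows. The coefficients below follow the commonly cited Stockdon et al. (2006) form as recalled from memory and should be verified against the original paper before any quantitative use:

```python
import math

def runup_r2(H0, T, beta, g=9.81):
    """Illustrative Stockdon-type 2% exceedance runup parameterization:
    R2 = 1.1 * (setup + swash/2), with setup and swash built from the
    offshore wave height H0 (m), period T (s), and foreshore slope beta.
    Coefficients quoted from memory; verify before use."""
    L0 = g * T**2 / (2.0 * math.pi)          # deep-water wavelength
    hl = math.sqrt(H0 * L0)
    setup = 0.35 * beta * hl                  # wave setup term
    swash = math.sqrt(H0 * L0 * (0.563 * beta**2 + 0.004))
    return 1.1 * (setup + swash / 2.0)
```

The swash term combines incident-band (slope-dependent) and infragravity (slope-independent) contributions, which is why the abstract can discuss correcting each component separately.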
Numerical Evaluation of 2D Ground States
NASA Astrophysics Data System (ADS)
Kolkovska, Natalia
2016-02-01
A ground state is defined as the positive radial solution of the multidimensional nonlinear problem
A Novel Method for the Accurate Evaluation of Poisson's Ratio of Soft Polymer Materials
Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S.; Kang, Dong-Joong; Park, Sungchan
2013-01-01
A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has an advantage in the accurate evaluation of Poisson's ratio despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6–47.0% elongations, with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio with a simple measurement system can be employed in a real-time automated vision-tracking system, which is used to accurately evaluate the material properties of various soft materials. PMID:23737733
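The strain-ratio computation at the heart of the method can be sketched as below, assuming a homogeneous deformation fitted to the four marker positions. This is a simplification of the paper's 4-node finite element treatment, and the function name is illustrative:

```python
import numpy as np

def poisson_ratio_from_markers(X, x):
    """Estimate Poisson's ratio from the 2D positions of four surface
    markers before (X) and after (x) uniaxial stretching, assuming a
    homogeneous deformation x = F @ X + c (least-squares fit of F).
    Returns -eps_min / eps_max from the principal small strains."""
    X = np.asarray(X, float)
    x = np.asarray(x, float)
    A = np.hstack([X, np.ones((len(X), 1))])   # fit an affine map
    M, *_ = np.linalg.lstsq(A, x, rcond=None)
    F = M[:2].T                                 # deformation gradient
    eps = 0.5 * (F + F.T) - np.eye(2)           # small-strain tensor
    e_min, e_max = np.sort(np.linalg.eigvalsh(eps))
    return -e_min / e_max
```

Because the ratio is taken between principal strains rather than axis-aligned strains, a slight misalignment of the quadrilateral with the loading axis does not bias the result, which mirrors the robustness claim in the abstract.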
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
Accurate evaluation of homogenous and nonhomogeneous gas emissivities
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Lee, K. P.
1984-01-01
Spectral transmittance and total band absorptance of selected infrared bands of carbon dioxide and water vapor are calculated by using the line-by-line and quasi-random band models, and these are compared with available experimental results to establish the validity of the quasi-random band model. Various wide-band model correlations are employed to calculate the total band absorptance and total emissivity of these two gases under homogeneous and nonhomogeneous conditions. These results are compared with available experimental results under identical conditions. From these comparisons, it is found that the quasi-random band model can provide quite accurate results and is quite suitable for most atmospheric applications.
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
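As a concrete instance of the class of methods studied, one widely used third-order Runge-Kutta scheme (the strong-stability-preserving Shu-Osher form) can be written as a single step routine; this is a standard textbook scheme, shown for illustration rather than as one of the five examples derived in the work:

```python
def rk3_step(f, t, y, h):
    """One step of the classic third-order strong-stability-preserving
    Runge-Kutta scheme (Shu-Osher form) for y' = f(t, y)."""
    k1 = f(t, y)
    y1 = y + h * k1                          # first Euler stage
    k2 = f(t + h, y1)
    y2 = 0.75 * y + 0.25 * (y1 + h * k2)     # convex combination
    k3 = f(t + 0.5 * h, y2)
    return (y + 2.0 * (y2 + h * k3)) / 3.0
```

Halving the step size should reduce the global error by roughly a factor of eight, which is a quick empirical check of the third-order accuracy discussed in the linear numerical analysis.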
AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)
A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...
Danshita, Ippei; Polkovnikov, Anatoli
2010-09-01
We study the quantum dynamics of supercurrents of one-dimensional Bose gases in a ring optical lattice to verify instanton methods applied to coherent macroscopic quantum tunneling (MQT). We directly simulate the real-time quantum dynamics of supercurrents, where a coherent oscillation between two macroscopically distinct current states occurs due to MQT. The tunneling rate extracted from the coherent oscillation is compared with that given by the instanton method. We find that the instanton method is quantitatively accurate when the effective Planck's constant is sufficiently small. We also find phase slips associated with the oscillations.
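Extracting a tunneling rate from a coherent oscillation between two macroscopically distinct states can be illustrated with a minimal two-level model (ħ = 1). This is a schematic analogue of the procedure, not the Bose-gas lattice simulation itself:

```python
import numpy as np

def population_trace(delta, times):
    """Exact two-level evolution under H = -(delta/2) * sigma_x, starting
    in state |0>; the survival probability oscillates as cos^2(delta*t/2)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = -0.5 * delta * sx
    E, V = np.linalg.eigh(H)
    c = V.conj().T @ np.array([1.0, 0.0])      # expand |0> in eigenbasis
    probs = []
    for t in times:
        psi_t = V @ (np.exp(-1j * E * t) * c)  # exact time evolution
        probs.append(abs(psi_t[0])**2)
    return np.array(probs)

def extract_rate(times, p0):
    """Estimate the tunneling amplitude from the oscillation period:
    the first minimum of P0 occurs at t = pi/delta."""
    t_min = times[np.argmin(p0)]
    return np.pi / t_min
```

In the paper the analogous frequency, read off from the simulated current oscillation, is compared with the instanton prediction in the small effective-Planck's-constant regime.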
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model that might include perturbing forces, such as the gravitational effect from multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
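Numerical STM computation can be sketched by integrating the variational equations dΦ/dt = A(t)Φ alongside the state. The sketch below uses simple two-body gravity in canonical units and SciPy's DOP853 integrator (an eighth-order Dormand-Prince method, as in the abstract); the full-ephemeris force model is not reproduced here:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # gravitational parameter (canonical units)

def two_body_with_stm(t, z):
    """RHS for the two-body state (6 elements) plus flattened STM (36):
    the STM obeys dPhi/dt = A(t) @ Phi, with A the dynamics Jacobian."""
    r, v = z[:3], z[3:6]
    Phi = z[6:].reshape(6, 6)
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3
    # Jacobian of the acceleration with respect to position
    G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)
    A[3:, :3] = G
    return np.concatenate([v, a, (A @ Phi).ravel()])

def propagate(state0, tf):
    """Propagate the state and STM from t = 0 to tf."""
    z0 = np.concatenate([state0, np.eye(6).ravel()])
    sol = solve_ivp(two_body_with_stm, (0.0, tf), z0, method="DOP853",
                    rtol=1e-12, atol=1e-12)
    return sol.y[:6, -1], sol.y[6:, -1].reshape(6, 6)
```

Because the two-body flow is Hamiltonian, the returned STM should be symplectic (determinant 1), which is a convenient sanity check on the integration accuracy.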
Accurate polarimeter with multicapture fitting for plastic lens evaluation
NASA Astrophysics Data System (ADS)
Domínguez, Noemí; Mayershofer, Daniel; Garcia, Cristina; Arasa, Josep
2016-02-01
Due to their manufacturing process, plastic injection molded lenses do not achieve a constant density throughout their volume. This change of density introduces tensions in the material, inducing local birefringence, which in turn is translated into a variation of the ordinary and extraordinary refractive indices that can be expressed as a retardation phase plane using the Jones matrix notation. The detection and measurement of the value of the retardation of the phase plane are therefore very useful ways to evaluate the quality of plastic lenses. We introduce a polariscopic device to obtain two-dimensional maps of the tension distribution in the bulk of a lens, based on detection of the local birefringence. In addition to a description of the device and the mathematical approach used, a set of initial measurements is presented that confirms the validity of the developed system for the testing of the uniformity of plastic lenses.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
2006-01-01
Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.
Differential-equation-based representation of truncation errors for accurate numerical simulation
NASA Astrophysics Data System (ADS)
MacKinnon, Robert J.; Johnson, Richard W.
1991-09-01
High-order compact finite difference schemes for 2D convection-diffusion-type differential equations with constant and variable convection coefficients are derived. The governing equations are employed to represent leading truncation terms, including cross-derivatives, making the overall O(h^4) schemes conform to a 3 × 3 stencil. It is shown that the two-dimensional constant coefficient scheme collapses to the optimal scheme for the one-dimensional case wherein the finite difference equation yields nodally exact results. The two-dimensional schemes are tested against standard model problems, including a Navier-Stokes application. Results show that the two schemes are generally more accurate, on comparable grids, than O(h^2) centered differencing and commonly used O(h) and O(h^3) upwinding schemes.
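The one-dimensional "optimal" behavior mentioned, a finite difference equation whose solution is nodally exact, can be illustrated with the classic exponentially fitted scheme for the constant-coefficient problem a u' = ν u''. This is a standard textbook construction shown as a sketch, not the paper's 2D compact scheme:

```python
import numpy as np

def solve_convection_diffusion(a, nu, n):
    """Solve a*u' = nu*u'' on [0,1] with u(0)=0, u(1)=1 using central
    differences and an exponentially fitted diffusion coefficient
    nu_eff = (a*h/2) * coth(a*h/(2*nu)); for constant coefficients the
    resulting difference equation is nodally exact."""
    h = 1.0 / n
    p = a * h / (2.0 * nu)                  # cell Peclet number
    nu_eff = (a * h / 2.0) / np.tanh(p)
    # interior equation: a*(u[i+1]-u[i-1])/(2h) = nu_eff*(u[i+1]-2u[i]+u[i-1])/h^2
    lo = nu_eff / h**2 + a / (2.0 * h)      # coefficient of u[i-1]
    di = -2.0 * nu_eff / h**2               # coefficient of u[i]
    up = nu_eff / h**2 - a / (2.0 * h)      # coefficient of u[i+1]
    A = (np.diag([di] * (n - 1)) + np.diag([lo] * (n - 2), -1)
         + np.diag([up] * (n - 2), 1))
    b = np.zeros(n - 1)
    b[-1] = -up                             # boundary term from u(1) = 1
    u = np.linalg.solve(A, b)
    return np.concatenate([[0.0], u, [1.0]])
```

Even on a coarse grid with cell Peclet number near one, the nodal values match the exact solution (e^{ax/ν} − 1)/(e^{a/ν} − 1) to machine-level accuracy, whereas plain central differencing would be merely second-order accurate.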
Towards more accurate numerical modeling of impedance based high frequency harmonic vibration
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2014-03-01
The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.
TOPICA: an accurate and efficient numerical tool for analysis and design of ICRF antennas
NASA Astrophysics Data System (ADS)
Lancellotti, V.; Milanesio, D.; Maggiora, R.; Vecchi, G.; Kyrytsya, V.
2006-07-01
The demand for a predictive tool to help in designing ion-cyclotron radio frequency (ICRF) antenna systems for today's fusion experiments has driven the development of codes such as ICANT, RANT3D, and the early development of TOPICA (TOrino Polytechnic Ion Cyclotron Antenna) code. This paper describes the substantive evolution of TOPICA formulation and implementation that presently allow it to handle the actual geometry of ICRF antennas (with curved, solid straps, a general-shape housing, Faraday screen, etc) as well as an accurate plasma description, accounting for density and temperature profiles and finite Larmor radius effects. The antenna is assumed to be housed in a recess-like enclosure. Both goals have been attained by formally separating the problem into two parts: the vacuum region around the antenna and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow the formulation of a set of two coupled integral equations for the unknown equivalent (current) sources; the equations are then reduced to a linear system by a method of moments solution scheme employing 2D finite elements defined over a 3D non-planar surface triangular-cell mesh. In the vacuum region calculations are done in the spatial (configuration) domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus permitting a description of the plasma by a surface impedance matrix. Owing to this approach, any plasma model can be used in principle, and at present the FELICE code has been employed. The natural outcomes of TOPICA are the induced currents on the conductors (antenna, housing, etc) and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. The theoretical model and its TOPICA
TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas
NASA Astrophysics Data System (ADS)
Milanesio, D.; Lancellotti, V.; Meneghini, O.; Maggiora, R.; Vecchi, G.; Bilato, R.
2007-09-01
Auxiliary ICRF heating systems in tokamaks often involve large complex antennas, made up of several conducting straps hosted in distinct cavities that open towards the plasma. The same holds especially true in the LH regime, wherein the antennas are comprised of arrays of many phased waveguides. Upon observing that the various cavities or waveguides couple to each other only through the EM fields existing over the plasma-facing apertures, we self-consistently formulated the EM problem by a convenient set of multiple coupled integral equations. Subsequent application of the Method of Moments yields a highly sparse algebraic system; therefore formal inversion of the system matrix turns out not to be very memory demanding, even though the number of unknowns may be quite large (typically 10^5 or so). The overall strategy has been implemented in an enhanced version of TOPICA (Torino Polytechnic Ion Cyclotron Antenna) and in a newly developed code named TOPLHA (Torino Polytechnic Lower Hybrid Antenna). Both are simulation and prediction tools for plasma facing antennas that incorporate commercial-grade 3D graphic interfaces along with an accurate description of the plasma. In this work we present the new proposed formulation along with examples of application to real life large LH antenna systems.
Kottmann, Jakob S; Höfener, Sebastian; Bischoff, Florian A
2015-12-21
In the present work, we report an efficient implementation of configuration interaction singles (CIS) excitation energies and oscillator strengths using the multi-resolution analysis (MRA) framework to address the basis-set convergence of excited state computations. In MRA (ground-state) orbitals, excited states are constructed adaptively guaranteeing an overall precision. Thus not only valence but also, in particular, low-lying Rydberg states can be computed with consistent quality at the basis set limit a priori, or without special treatments, which is demonstrated using a small test set of organic molecules, basis sets, and states. We find that the new implementation of MRA-CIS excitation energy calculations is competitive with conventional LCAO calculations when the basis-set limit of medium-sized molecules is sought, which requires large, diffuse basis sets. This becomes particularly important if accurate calculations of molecular electronic absorption spectra with respect to basis-set incompleteness are required, in which both valence as well as Rydberg excitations can contribute to the molecule's UV/VIS fingerprint. PMID:25913482
The use of experimental bending tests to more accurate numerical description of TBC damage process
NASA Astrophysics Data System (ADS)
Sadowski, T.; Golewski, P.
2016-04-01
Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads. These loads are created by the high rotational speed of the rotor (30 000 rot/min), causing tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. These results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied for the TBC layer, which allows elements to be removed once the damage criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.
NASA Astrophysics Data System (ADS)
Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid
2016-07-01
We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
NASA Astrophysics Data System (ADS)
Jiang, Shidong; Luo, Li-Shi
2016-07-01
The integral equation for the flow velocity u(x; k) in the steady Couette flow derived from the linearized Bhatnagar-Gross-Krook-Welander kinetic equation is studied in detail both theoretically and numerically in a wide range of the Knudsen number k between 0.003 and 100.0. First, it is shown that the integral equation is a Fredholm equation of the second kind in which the norm of the compact integral operator is less than 1 on Lp for any 1 ≤ p ≤ ∞, and thus there exists a unique solution to the integral equation via the Neumann series. Second, it is shown that the solution is logarithmically singular at the endpoints. More precisely, if x = 0 is an endpoint, then the solution can be expanded as a double power series of the form ∑_{n=0}^∞ ∑_{m=0}^∞ c_{n,m} x^n (x ln x)^m about x = 0 on a small interval x ∈ (0, a) for some a > 0. And third, a high-order adaptive numerical algorithm is designed to compute the solution numerically to high precision. The solutions for the flow velocity u(x; k), the stress P_{xy}(k), and the half-channel mass flow rate Q(k) are obtained in a wide range of the Knudsen number 0.003 ≤ k ≤ 100.0; these solutions are accurate to at least twelve significant digits, and thus they can be used as benchmark solutions.
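The Neumann-series argument can be illustrated numerically on a model Fredholm equation of the second kind with a contractive kernel. The kernel below is illustrative, not the Couette-flow kernel of the paper:

```python
import numpy as np

def neumann_solve(kernel, f, x, w, lam, tol=1e-12, max_iter=1000):
    """Solve the Fredholm equation of the second kind
    u(x) = f(x) + lam * int k(x, y) u(y) dy by Neumann-series iteration
    on a quadrature grid (nodes x, weights w); the iteration converges
    when the norm of the integral operator is below 1."""
    K = kernel(x[:, None], x[None, :]) * w[None, :]
    u = f.copy()
    for _ in range(max_iter):
        u_new = f + lam * K @ u
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    raise RuntimeError("Neumann series did not converge")
```

For the separable kernel k(x, y) = x y on [0, 1] with f(x) = x and lam = 0.5, the exact solution is u(x) = 1.2 x, which the iteration recovers to the iteration tolerance when a high-order quadrature is used.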
Shen, Yan; Lou, Shuqin; Wang, Xin
2014-03-20
The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters. PMID:24663461
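The Kalman-filter machinery underlying the proposed PSF estimator can be sketched in its standard linear predict/update form. This is the generic recursion, not the paper's specific PSF state model:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of the standard linear Kalman filter:
    x, P   = state estimate and covariance,
    z      = new measurement,
    F, Q   = state transition matrix and process-noise covariance,
    H, R   = measurement matrix and measurement-noise covariance."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Repeated updates shrink the estimate covariance, which is the same mechanism by which the paper's estimator achieves lower PSF variance than a one-shot Bayesian estimate.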
NASA Technical Reports Server (NTRS)
Levi, Keith
1989-01-01
Two procedures for the evaluation of the performance of expert systems are illustrated: one procedure evaluates predictive accuracy; the other procedure is complementary in that it uncovers the factors that contribute to predictive accuracy. Using these procedures, it is argued that expert systems should be more accurate than human experts in two senses. One sense is that expert systems must be more accurate to be cost-effective. Previous research is reviewed and original results are presented which show that simple statistical models typically perform better than human experts for the task of combining evidence from a given set of information sources. The results also suggest the second sense in which expert systems should be more accurate than human experts. They reveal that expert systems should share factors that contribute to human accuracy, but not factors that detract from human accuracy. Thus the thesis is that one should both require and expect systems to be more accurate than humans.
Smalarz, Laura; Wells, Gary L
2014-04-01
Giving confirming feedback to mistaken eyewitnesses has robust distorting effects on their retrospective judgments (e.g., how certain they were, their view, etc.). Does feedback harm evaluators' abilities to discriminate between accurate and mistaken identification testimony? Participant-witnesses to a simulated crime made accurate or mistaken identifications from a lineup and then received confirming feedback or no feedback. Each then gave videotaped testimony about their identification, and a new sample of participant-evaluators judged the accuracy and credibility of the testimonies. Among witnesses who were not given feedback, evaluators were significantly more likely to believe the testimony of accurate eyewitnesses than they were to believe the testimony of mistaken eyewitnesses, indicating significant discrimination. Among witnesses who were given confirming feedback, however, evaluators believed accurate and mistaken witnesses at nearly identical rates, indicating no ability to discriminate. Moreover, there was no evidence of overbelief in the absence of feedback whereas there was significant overbelief in the confirming feedback conditions. Results demonstrate that a simple comment following a witness' identification decision ("Good job, you got the suspect") can undermine fact-finders' abilities to discern whether the witness made an accurate or a mistaken identification. PMID:24341835
Accurate Histological Techniques to Evaluate Critical Temperature Thresholds for Prostate In Vivo
NASA Astrophysics Data System (ADS)
Bronskill, Michael; Chopra, Rajiv; Boyes, Aaron; Tang, Kee; Sugar, Linda
2007-05-01
Various histological techniques have been compared to evaluate the boundaries of thermal damage produced by ultrasound in vivo in a canine model. When all images are accurately co-registered, H&E stained micrographs provide the best assessment of acute cellular damage. Estimates of the boundaries of 100% and 0% cell killing correspond to maximum temperature thresholds of 54.6 ± 1.7°C and 51.5 ± 1.9°C, respectively.
New On-Chip De-Embedding for Accurate Evaluation of Symmetric Devices
NASA Astrophysics Data System (ADS)
Goto, Yosuke; Natsukari, Youhei; Fujishima, Minoru
2008-04-01
For millimeter-wave wireless transceivers, miniaturized on-chip passive devices are employed to increase wireless communication speed. Before using miniaturized devices, it is necessary to evaluate test vehicles in advance, in which de-embedding is applied to the on-chip evaluation. Although open-short de-embedding is currently the most popular method, accurate de-embedding is difficult because the ground plane in a short dummy pattern is not ideal in practice. To overcome this problem, we have proposed a new de-embedding method using only a through dummy pattern, called the through-only de-embedding method. With this method, we show that a small on-chip inductor of more than 100 picohenries can be evaluated within 1.18% error.
NASA Astrophysics Data System (ADS)
Zhang, Na; Yao, Jun; Huang, Zhaoqin; Wang, Yueying
2013-06-01
Numerical simulation in naturally fractured media is challenging because of the coexistence of porous media and fractures on multiple scales that need to be coupled. We present a new approach to reservoir simulation that gives accurate resolution of both large-scale and fine-scale flow patterns. Multiscale methods are suitable for this type of modeling because they capture the large-scale behavior of the solution without resolving all the small-scale features. Dual-porosity models, owing to their strength and simplicity, are mainly used with a sugar-cube representation of fractured media. In such a representation, the transfer function between the fracture and the matrix block can be readily calculated for water-wet media. For a mixed-wet system, the evaluation of the transfer function becomes complicated due to the effect of gravity. In this work, we use a multiscale finite element method (MsFEM) for two-phase flow in fractured media using the discrete-fracture model. By combining MsFEM with the discrete-fracture model, we aim towards a numerical scheme that facilitates fractured reservoir simulation without upscaling. MsFEM uses a standard Darcy model to approximate the pressure and saturation on a coarse grid, whereas fine-scale effects are captured through basis functions constructed by solving local flow problems with the discrete-fracture model. The accuracy and robustness of MsFEM are shown through several examples. In the first example, we consider several small fractures in a matrix and compare the results with those obtained by the standard finite element method. We then apply the MsFEM to more complex models. The results indicate that the MsFEM is a promising path toward direct simulation of highly resolved geomodels.
SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments
Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin
2016-01-01
Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with the OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529
Numerical models for the evaluation of geothermal systems
Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.
1986-08-01
We have carried out detailed simulations of various fields in the USA (Baca, New Mexico; Heber, California), Mexico (Cerro Prieto), Iceland (Krafla), and Kenya (Olkaria). These simulation studies have illustrated the usefulness of numerical models for the overall evaluation of geothermal systems. The methodology for modeling the behavior of geothermal systems, different approaches to geothermal reservoir modeling, and how they can be applied in comprehensive evaluation work are discussed.
Factors Influencing Undergraduates' Self-Evaluation of Numerical Competence
ERIC Educational Resources Information Center
Tariq, Vicki N.; Durrani, Naureen
2012-01-01
This empirical study explores factors influencing undergraduates' self-evaluation of their numerical competence, using data from an online survey completed by 566 undergraduates from a diversity of academic disciplines, across all four faculties at a post-1992 UK university. Analysis of the data, which included correlation and multiple regression…
Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam
NASA Astrophysics Data System (ADS)
Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad
2015-05-01
Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of an electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 of the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
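The quoted resolution constraints translate directly into numbers once the mean free path and mean collision time are estimated. A minimal back-of-envelope sketch; the input values are assumptions for illustration, not data from the abstract:

```python
# Back-of-envelope check of the PIC-DSMC resolution constraints quoted
# above: dt <~ (mean collision time)/100 and dx <~ (mean free path)/25.
# Input values are illustrative assumptions, not data from the paper.

mean_free_path = 1.0e-4   # m, assumed electron mean free path
mean_speed = 1.0e6        # m/s, assumed mean electron speed

mean_collision_time = mean_free_path / mean_speed   # s

dt_max = mean_collision_time / 100.0   # timestep constraint
dx_max = mean_free_path / 25.0         # mesh-size constraint

print(f"required timestep : <= {dt_max:.1e} s")
print(f"required cell size: <= {dx_max:.1e} m")
```

Both limits are far tighter than the usual PIC-DSMC rules of thumb (dt below the collision time, dx below the mean free path), which is the paper's point.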
Uncertainty evaluation in numerical modeling of complex devices
NASA Astrophysics Data System (ADS)
Cheng, X.; Monebhurrun, V.
2014-10-01
Numerical simulation is an efficient tool for exploring and understanding the physics of complex devices, e.g. mobile phones. For meaningful results, it is important to evaluate the uncertainty of the numerical simulation. Uncertainty quantification in specific absorption rate (SAR) calculations using a full computer-aided design (CAD) mobile phone model is a challenging task. Since a typical SAR numerical simulation is computationally expensive, the traditional Monte Carlo (MC) simulation method proves inadequate. The unscented transformation (UT) is an alternative and numerically efficient method, herein investigated to evaluate the uncertainty in the SAR calculation using realistic models of two commercially available mobile phones. The electromagnetic simulation process is modeled as a nonlinear mapping, with the uncertainty in the inputs, e.g. the relative permittivity values of the mobile phone materials, inducing an uncertainty in the output, e.g. the peak spatial-average SAR value. The numerical simulation results demonstrate that UT may be a potential candidate for uncertainty quantification in SAR calculations, since only a few simulations are necessary to obtain results similar to those obtained after hundreds or thousands of MC simulations.
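The unscented transformation itself is compact enough to sketch. Below is a minimal generic UT with a made-up quadratic stand-in for the SAR model; the sigma-point scaling `kappa` and the toy input statistics are assumptions, not details from the paper:

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate (mean, cov) through f using 2n+1 sigma points."""
    n = len(mean)
    root = np.linalg.cholesky((n + kappa) * cov)   # columns set the spread
    pts = [mean] + [mean + c for c in root.T] + [mean - c for c in root.T]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in pts])              # one model run per point
    y_mean = float(np.dot(w, y))
    y_var = float(np.dot(w, (y - y_mean) ** 2))
    return y_mean, y_var

# Hypothetical nonlinear mapping from two material parameters to one
# scalar output (a stand-in for the peak spatial-average SAR).
model = lambda x: x[0] ** 2 + 0.5 * x[1]

mean = np.array([2.0, 1.0])
cov = np.diag([0.01, 0.04])                        # assumed input uncertainty
m, v = unscented_transform(model, mean, cov)
print(m, v)  # 5 model evaluations instead of thousands of MC runs
```

For a 2-D input, only 5 deterministic model runs are needed, which is why UT is attractive when each run is an expensive electromagnetic simulation.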
Numerical evaluation of the performance of active noise control systems
NASA Technical Reports Server (NTRS)
Mollo, C. G.; Bernhard, R. J.
1990-01-01
This paper presents a generalized numerical technique for evaluating the optimal performance of active noise controllers. In this technique, the indirect BEM numerical procedures are used to derive the active noise controllers for optimal control of enclosed harmonic sound fields where the strength of the noise sources or the description of the enclosure boundary may not be known. The performance prediction for a single-input single-output system is presented, together with the analysis of the stability and observability of an active noise-control system employing detectors. The numerical procedures presented can be used for the design of both the physical configuration and the electronic components of the optimal active noise controller.
Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.
2007-01-01
Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, the Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leave the remaining terms analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handle both.
Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study
NASA Astrophysics Data System (ADS)
Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans
2015-03-01
Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high-frequency stimulation of a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
Li, Dandan; Hu, Bo; Wang, Qing; Liu, Hongchang; Pan, Feng; Wu, Wei
2015-01-01
Safflower (Carthamus tinctorius L.) has received a significant amount of attention as a medicinal plant and oilseed crop. Gene expression studies provide a theoretical molecular biology foundation for improving new traits and developing new cultivars. Real-time quantitative PCR (RT-qPCR) has become a crucial approach for gene expression analysis. In addition, appropriate reference genes (RGs) are essential for accurate and rapid relative quantification analysis of gene expression. In this study, fifteen candidate RGs involved in multiple metabolic pathways of plants were finally selected and validated under different experimental treatments, at different seed development stages and in different cultivars and tissues for real-time PCR experiments. These genes were ABCS, 60SRPL10, RANBP1, UBCL, MFC, UBCE2, EIF5A, COA, EF1-β, EF1, GAPDH, ATPS, MBF1, GTPB and GST. The suitability evaluation was executed by the geNorm and NormFinder programs. Overall, EF1, UBCE2, EIF5A, ATPS and 60SRPL10 were the most stable genes, and MBF1, as well as MFC, were the most unstable genes by geNorm and NormFinder software in all experimental samples. To verify the validation of RGs selected by the two programs, the expression analysis of 7 CtFAD2 genes in safflower seeds at different developmental stages under cold stress was executed using different RGs in RT-qPCR experiments for normalization. The results showed similar expression patterns when the most stable RGs selected by geNorm or NormFinder software were used. However, the differences were detected using the most unstable reference genes. The most stable combination of genes selected in this study will help to achieve more accurate and reliable results in a wide variety of samples in safflower. PMID:26457898
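The geNorm stability measure used above to rank reference genes has a simple definition that can be sketched directly: a gene's M value is the mean standard deviation of its pairwise log2 expression ratios against every other candidate, taken across samples; lower M means more stable. The expression matrix below is synthetic, not safflower data:

```python
import numpy as np

def genorm_m(expr):
    """geNorm stability: expr is an (n_samples, n_genes) array of
    relative expression levels; returns one M value per gene."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        # Std dev of the log2 ratio of gene j against each other gene
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m

# Synthetic data: genes 0 and 1 co-vary perfectly (a stable pair),
# gene 2 varies independently (unstable).
rng = np.random.default_rng(0)
base = rng.uniform(0.5, 2.0, size=20)
expr = np.column_stack([base, base * 1.05, rng.uniform(0.5, 2.0, size=20)])
m_values = genorm_m(expr)
print(m_values)  # gene 2 gets the largest (worst) M value
```

The full geNorm procedure iteratively removes the gene with the highest M and recomputes, which is how rankings like "EF1, UBCE2, EIF5A most stable" are produced.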
Numerical evaluation of gas core length in free surface vortices
NASA Astrophysics Data System (ADS)
Cristofano, L.; Nobili, M.; Caruso, G.
2014-11-01
The formation and evolution of free surface vortices represent an important topic in many hydraulic intakes, since strong whirlpools introduce swirl flow at the intake and could cause entrainment of floating matter and gas. In particular, gas entrainment phenomena are an important safety issue for sodium-cooled fast reactors, because the introduction of gas bubbles within the core causes dangerous reactivity fluctuations. In this paper, a numerical evaluation of the gas core length in free surface vortices is presented, according to two different approaches. In the first one, a prediction method developed by the Japanese researcher Sakai and his team has been applied. This method is based on the Burgers vortex model, and it is able to estimate the gas core length of a free surface vortex starting from two parameters calculated with single-phase CFD simulations. The two parameters are the circulation and the downward velocity gradient. The other approach consists in performing a two-phase CFD simulation of a free surface vortex, in order to numerically reproduce the gas-liquid interface deformation. A mapped convergent mesh was used to reduce numerical error, and a VOF (Volume of Fluid) method was selected to track the gas-liquid interface. Two different turbulence models have been tested and analyzed. Experimental measurements of free surface vortex gas core length have been performed using optical methods, and the numerical results have been compared with the experimental measurements. The computational domain and the boundary conditions of the CFD simulations were set consistently with the experimental test conditions.
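A rough, hedged sketch of how a Burgers-vortex-based estimate of this kind can work: given the circulation and the downward velocity gradient from a single-phase CFD result, the tangential velocity profile is fixed, and the free-surface dip follows from the radial momentum balance. The actual coefficients and criteria of Sakai's method are not reproduced here, and all input values are illustrative:

```python
import numpy as np

g = 9.81        # m/s^2, gravity
nu = 1.0e-6     # m^2/s, kinematic viscosity of water
gamma = 0.01    # m^2/s, circulation (assumed single-phase CFD output)
alpha = 1.0     # 1/s, downward velocity gradient (assumed CFD output)

def v_theta(r):
    """Burgers vortex tangential velocity profile."""
    return gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-alpha * r**2 / (2.0 * nu)))

# Free-surface depression from the radial momentum balance
# dh/dr = v_theta^2 / (g r), integrated over the radius
# (surface tension neglected); trapezoidal rule on a fine grid.
r = np.linspace(1e-5, 0.5, 200_000)
integrand = v_theta(r) ** 2 / (g * r)
dip = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))
print(f"estimated gas-core depth: {dip * 1e3:.1f} mm")
```

The appeal of this class of methods is visible in the sketch: only two scalars need to come from CFD, and the rest is a cheap one-dimensional integral.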
Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui
2014-01-01
The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of the total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers. PMID:24586313
Study on Applicability of Numerical Simulation to Evaluation of Gas Entrainment From Free Surface
Kei Ito; Takaaki Sakai; Hiroyuki Ohshima
2006-07-01
An onset condition for gas entrainment (GE) due to free-surface vortices has been studied to support the design of fast breeder reactors with higher coolant velocities than conventional designs, because GE might cause reactor operation instability and therefore should be avoided. The onset condition of the GE has been investigated experimentally and theoretically; however, the dependency of the vortex-type GE on the local geometry of each experimental system and on the local velocity distribution has prevented researchers from formulating a universal onset condition. A real-scale test is considered an accurate method to evaluate the occurrence of the vortex-type GE, but such a test is generally expensive and not useful in the design study of large and complicated FBR systems, because frequent rearrangement of internal equipment to follow design changes is difficult in a real-scale test. Numerical simulation seems to be a promising alternative to the real-scale test. In this research, to evaluate the applicability of numerical simulation to design work, numerical simulations were conducted on a basic experimental system for the vortex-type GE. This basic experiment consisted of a rectangular flow channel containing two components important for the vortex-type GE: vortex generation and suction equipment. The generated vortex grew rapidly by interacting with the suction flow, and the grown vortex formed a free-surface dent (gas core). When the tip of the gas core, or bubbles detached from it, reached the suction mouth, gas was entrained into the suction tube. The results of numerical simulation under the experimental conditions were compared to the experiment in terms of velocity distributions and free-surface shape. As a result, the numerical simulation showed qualitatively good agreement with the experimental data. The numerical simulation results were similar to the experimental results.
Cartwright, Michael S; Dupuis, Janae E; Bargoil, Jessica M; Foster, Dana C
2015-09-01
Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is not currently an objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return-to-play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data in animal and human testing indicates that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the
Cycle-accurate evaluation of reconfigurable photonic networks-on-chip
NASA Astrophysics Data System (ADS)
Debaes, Christof; Artundo, Iñigo; Heirman, Wim; Van Campenhout, Jan; Thienpont, Hugo
2010-05-01
There is little doubt that the most important limiting factors of the performance of next-generation Chip Multiprocessors (CMPs) will be the power efficiency and the available communication speed between cores. Photonic Networks-on-Chip (NoCs) have been suggested as a viable route to relieve the off- and on-chip interconnection bottleneck. Low-loss integrated optical waveguides can transport very high-speed data signals over longer distances as compared to on-chip electrical signaling. In addition, with the development of silicon microrings, photonic switches can be integrated to route signals in a data-transparent way. Although several photonic NoC proposals exist, their use is often limited to the communication of large data messages due to a relatively long set-up time of the photonic channels. In this work, we evaluate a reconfigurable photonic NoC in which the topology is adapted automatically (on a microsecond scale) to the evolving traffic situation by use of silicon microrings. To evaluate this system's performance, the proposed architecture has been implemented in a detailed full-system cycle-accurate simulator which is capable of generating realistic workloads and traffic patterns. In addition, a model was developed to estimate the power consumption of the full interconnection network which was compared with other photonic and electrical NoC solutions. We find that our proposed network architecture significantly lowers the average memory access latency (35% reduction) while only generating a modest increase in power consumption (20%), compared to a conventional concentrated mesh electrical signaling approach. When comparing our solution to high-speed circuit-switched photonic NoCs, long photonic channel set-up times can be tolerated which makes our approach directly applicable to current shared-memory CMPs.
The Good, the Strong, and the Accurate: Preschoolers' Evaluations of Informant Attributes
ERIC Educational Resources Information Center
Fusaro, Maria; Corriveau, Kathleen H.; Harris, Paul L.
2011-01-01
Much recent evidence shows that preschoolers are sensitive to the accuracy of an informant. Faced with two informants, one of whom names familiar objects accurately and the other inaccurately, preschoolers subsequently prefer to learn the names and functions of unfamiliar objects from the more accurate informant. This study examined the inference…
Xu, Jing; Ding, Yunhong; Peucheret, Christophe; Xue, Weiqi; Seoane, Jorge; Zsigri, Beáta; Jeppesen, Palle; Mørk, Jesper
2011-01-01
Although patterning effects (PEs) are known to be a limiting factor of ultrafast photonic switches based on semiconductor optical amplifiers (SOAs), a simple approach for their evaluation in numerical simulations and experiments is missing. In this work, we experimentally investigate and verify a theoretical prediction of the pseudo-random binary sequence (PRBS) length needed to capture the full impact of PEs. A wide range of SOAs and operation conditions are investigated. The very simple form of the PRBS length condition highlights the role of two parameters, i.e. the recovery time of the SOAs and the operation bit rate. Furthermore, a simple and effective method for probing the maximum PEs is demonstrated, which may relieve the computational effort or the experimental difficulties associated with the use of long PRBSs for the simulation or characterization of SOA-based switches. Good agreement with conventional PRBS characterization is obtained. The method is suitable for quick and systematic estimation and optimization of the switching performance. PMID:21263552
Thermal numerical simulator for laboratory evaluation of steamflood oil recovery
Sarathi, P.
1991-04-01
A thermal numerical simulator running on an IBM AT-compatible personal computer is described. The simulator was designed to assist in the laboratory design and evaluation of steamflood oil recovery. An overview of the historical evolution of numerical thermal simulation, NIPER's approach to solving these problems with a desktop computer, the derivation of equations, a description of the approaches used to solve these equations, and verification of the simulator using published data sets and sensitivity analysis are presented. The developed model is a three-phase, two-dimensional, multicomponent simulator capable of being run in one or two dimensions. Mass transfer among the phases and components is dictated by pressure- and temperature-dependent vapor-liquid equilibria. Gravity and capillary pressure phenomena were included. Energy is transferred by conduction, convection, vaporization, and condensation. The model employs a block-centered grid system with a five-point discretization scheme. Both areal and vertical cross-sectional simulations are possible. A sequential solution technique is employed to solve the finite difference equations. The study clearly indicated the importance of heat loss, injected steam quality, and injection rate to the process. Dependence of overall recovery on oil volatility and viscosity is emphasized. The process is very sensitive to relative permeability values. Time-step sensitivity runs indicated that the current version is time-step sensitive and exhibits conditional stability. 75 refs., 19 figs., 19 tabs.
Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models
NASA Astrophysics Data System (ADS)
Ramli, Huda Mohd.; Esler, J. Gavin
2016-07-01
A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10^6) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
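The trade-off discussed above can be sketched in 1-D (this is an illustration, not the paper's actual test problems or schemes): an Euler-Maruyama step of the homogeneous-turbulence Langevin model, alongside the random displacement model, which is the diffusion limit appropriate for time steps much longer than the velocity decorrelation time. Parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(x, u, dt, tau=1.0, sigma_u=1.0):
    """Euler-Maruyama step of du = -(u/tau) dt + sqrt(2 sigma_u^2 / tau) dW,
    dx = u dt (well-mixed model for homogeneous stationary turbulence)."""
    u_new = u - u / tau * dt + np.sqrt(2.0 * sigma_u**2 * dt / tau) * rng.standard_normal(u.size)
    return x + u * dt, u_new

def rdm_step(x, dt, tau=1.0, sigma_u=1.0):
    """Random displacement model: dx = sqrt(2 K dt) dW with eddy diffusivity
    K = sigma_u^2 * tau; accurate when dt >> tau."""
    return x + np.sqrt(2.0 * sigma_u**2 * tau * dt) * rng.standard_normal(x.size)

n = 20000
x, u = np.zeros(n), rng.standard_normal(n)   # velocities start from the stationary law
for _ in range(200):                         # t = 10 turbulence time scales, dt << tau
    x, u = langevin_step(x, u, dt=0.05)

x_rdm = np.zeros(n)
for _ in range(10):                          # same total time, long steps, dt >> tau not required here
    x_rdm = rdm_step(x_rdm, dt=1.0)
```

At large times both give the same diffusive spread (variance growing like 2·sigma_u²·tau·t), which is why the random displacement model is the recommended fallback when long time steps are unavoidable.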
Evaluating the Impact of Aerosols on Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Freitas, Saulo; Silva, Arlindo; Benedetti, Angela; Grell, Georg; Members, Wgne; Zarzur, Mauricio
2015-04-01
The Working Group on Numerical Experimentation (WMO, http://www.wmo.int/pages/about/sec/rescrosscut/resdept_wgne.html) has organized an exercise to evaluate the impact of aerosols on NWP. This exercise will involve regional and global models currently used for weather forecasting by operational centers worldwide and aims at addressing the following questions: a) How important are aerosols for predicting the physical system (NWP, seasonal, climate) as distinct from predicting the aerosols themselves? b) How important is atmospheric model quality for air quality forecasting? c) What are the current capabilities of NWP models to simulate aerosol impacts on weather prediction? Toward this goal we have selected three strong or persistent events of aerosol pollution worldwide that could be fairly represented in current NWP models and that allowed for an evaluation of the aerosol impact on weather prediction. The selected events include a strong dust storm that blew off the coast of Libya and over the Mediterranean, an extremely severe episode of air pollution in Beijing and surrounding areas, and an extreme case of biomass burning smoke in Brazil. The experimental design calls for simulations with and without explicitly accounting for aerosol feedbacks in the cloud and radiation parameterizations. In this presentation we will summarize the results of this study, focusing on the evaluation of model performance in terms of its ability to faithfully simulate aerosol optical depth, and on the assessment of the aerosol impact on the predictions of near-surface wind, temperature, humidity, rainfall and the surface energy budget.
Numerical Analysis for Structural Safety Evaluation of Butterfly Valves
NASA Astrophysics Data System (ADS)
Shin, Myung-Seob; Yoon, Joon-Yong; Park, Han-Yung
2010-06-01
Butterfly valves are widely used in industry to control fluid flow. They are used for both on-off and throttling applications involving large flows at relatively low operating pressures, especially in large-size pipelines. For the industrial application of butterfly valves, it must be ensured that the valve can be used safely with respect to fatigue life and the deformations produced by the fluid pressure. In this study, we carried out structural analyses of the body and the valve disc of the butterfly valve; the numerical simulation was performed using ANSYS v11.0. The reliability of the valve is evaluated through investigation of the deformation, the leak test and the durability of the valve.
Factors influencing undergraduates' self-evaluation of numerical competence
NASA Astrophysics Data System (ADS)
Tariq, Vicki N.; Durrani, Naureen
2012-04-01
This empirical study explores factors influencing undergraduates' self-evaluation of their numerical competence, using data from an online survey completed by 566 undergraduates from a diversity of academic disciplines, across all four faculties at a post-1992 UK university. Analysis of the data, which included correlation and multiple regression analyses, revealed that undergraduates exhibiting greater confidence in their mathematical and numeracy skills, as evidenced by their higher self-evaluation scores and their higher scores on the confidence sub-scale contributing to the measurement of attitude, possess more cohesive, rather than fragmented, conceptions of mathematics, and display more positive attitudes towards mathematics/numeracy. They also exhibit lower levels of mathematics anxiety. Students exhibiting greater confidence also tended to be those who were relatively young (i.e. 18-29 years), whose degree programmes provided them with opportunities to practise and further develop their numeracy skills, and who possessed higher pre-university mathematics qualifications. The multiple regression analysis revealed that two positive predictors (overall attitude towards mathematics/numeracy and possession of a higher pre-university mathematics qualification) and five negative predictors (mathematics anxiety, lack of opportunity to practise/develop numeracy skills, being a more mature student, being enrolled in Health and Social Care compared with Science and Technology, and possessing no formal mathematics/numeracy qualification compared with a General Certificate of Secondary Education or equivalent qualification) together accounted for approximately 64% of the variation in students' perceptions of their numerical competence. Although the results initially suggested that male students were significantly more confident than females, one confounding variable was almost certainly the students' highest pre-university mathematics or numeracy qualification, since a higher
An accurate method of extracting fat droplets in liver images for quantitative evaluation
NASA Astrophysics Data System (ADS)
Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie
2015-03-01
The steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using the feature values of colors, shapes and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.
Casimir problem of spherical dielectrics: numerical evaluation for general permittivities.
Brevik, I; Aarseth, J B; Høye, J S
2002-08-01
The Casimir mutual free energy F for a system of two dielectric concentric nonmagnetic spherical bodies is calculated, at arbitrary temperatures. The present paper is a continuation of an earlier investigation [Phys. Rev. E 63, 051101 (2001)], in which F was evaluated in full only for the case of ideal metals (refractive index n = infinity). Here, analogous results are presented for dielectrics, for some chosen values of n. Our basic calculational method stems from quantum statistical mechanics. The Debye expansions for the Riccati-Bessel functions, when carried out to a high order, are found to be very useful in practice (overflow/underflow problems are thereby easily avoided), and also to give accurate results even for the lowest values of l, down to l=1. Another virtue of the Debye expansions is that the limiting case of metals becomes quite amenable to an analytical treatment in spherical geometry. We first discuss the zero-frequency TE mode problem from a mathematical viewpoint and then, as a physical input, invoke the actual dispersion relations. The result of our analysis, based upon the adoption of the Drude dispersion relation at low frequencies, is that the zero-frequency TE mode does not contribute for a real metal. Accordingly, F turns out in this case to be only one-half of the conventional value at high temperatures. The applicability of the Drude model in this context has, however, been questioned recently, and we do not aim at a complete discussion of this issue here. Existing experiments are low-temperature experiments, and are so far not accurate enough to distinguish between the different predictions. We also calculate explicitly the contribution from the zero-frequency mode for a dielectric; for a dielectric, this zero-frequency problem is absent. PMID:12241249
Technology Transfer Automated Retrieval System (TEKTRAN)
The three evapotranspiration (ET) measurement/retrieval techniques used in this study, lysimeter, scintillometer and remote sensing vary in their level of complexity, accuracy, resolution and applicability. The lysimeter with its point measurement is the most accurate and direct method to measure ET...
NASA Astrophysics Data System (ADS)
Che, Xiao-Hua; Qiao, Wen-Xiao; Ju, Xiao-Dong; Wang, Rui-Jia
2016-03-01
We developed a novel cement evaluation logging tool, named the azimuthally acoustic bond tool (AABT), which uses a phased-arc array transmitter with azimuthal detection capability. We combined numerical simulations and field tests to verify the AABT tool. The numerical simulation results showed that the radiation direction of the subarray corresponding to the maximum amplitude of the first arrival matches the azimuth of the channeling when it is behind the casing. With larger channeling size in the circumferential direction, the amplitude difference of the casing wave at different azimuths becomes more evident. The test results showed that the AABT can accurately locate the casing collars and evaluate the cement bond quality with azimuthal resolution at the casing-cement interface, and can visualize the size, depth, and azimuth of channeling. In the case of good casing-cement bonding, the AABT can further evaluate the cement bond quality at the cement-formation interface with azimuthal resolution by using the amplitude map and the velocity of the formation wave.
In the field of bioassessment, tolerance has traditionally referred to the degree to which organisms can withstand environmental degradation. This concept has been around for many years and its use is widespread. In numerous cases, tolerance values (TVs) have been assigned to i...
Numerical Weather Predictions Evaluation Using Spatial Verification Methods
NASA Astrophysics Data System (ADS)
Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.
2014-12-01
During the last years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
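One widely used spatial verification metric of the kind referred to above is the fractions skill score (FSS) of Roberts and Lean (2008), which rewards a forecast for placing precipitation near the observed location rather than exactly on it. A minimal sketch follows; the fields are illustrative toy grids, not the study's radar data.

```python
import numpy as np

def neighbourhood_fraction(binary, n):
    """Fraction of 'wet' pixels in an n x n window around each grid point
    (windows are clipped at the domain edges)."""
    out = np.empty(binary.shape)
    r, (h, w) = n // 2, binary.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = binary[max(0, i - r):i + r + 1,
                               max(0, j - r):j + r + 1].mean()
    return out

def fss(fcst, obs, threshold, n):
    """Fractions skill score at neighbourhood size n: 1 = perfect match of
    neighbourhood exceedance fractions, 0 = no skill at this scale."""
    pf = neighbourhood_fraction(fcst >= threshold, n)
    po = neighbourhood_fraction(obs >= threshold, n)
    return 1.0 - ((pf - po) ** 2).sum() / ((pf ** 2).sum() + (po ** 2).sum())

# a 4 x 4 rain feature forecast two grid lengths east of where it was observed
obs = np.zeros((20, 20)); obs[5:9, 5:9] = 1.0
fcst = np.zeros((20, 20)); fcst[5:9, 7:11] = 1.0
score_point = fss(fcst, obs, threshold=0.5, n=1)   # strict pixel-by-pixel match
score_broad = fss(fcst, obs, threshold=0.5, n=5)   # 5-pixel neighbourhood
```

The score rises with neighbourhood size for a displaced-but-realistic feature, which is exactly the behaviour that makes feature-based scores more informative than traditional point verification at convection-resolving resolution.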
Evaluation of kinetic uncertainty in numerical models of petroleum generation
Peters, K.E.; Walters, C.C.; Mankiewicz, P.J.
2006-01-01
Oil-prone marine petroleum source rocks contain type I or type II kerogen having Rock-Eval pyrolysis hydrogen indices greater than 600 or 300-600 mg hydrocarbon/g total organic carbon (HI, mg HC/g TOC), respectively. Samples from 29 marine source rocks worldwide that contain mainly type II kerogen (HI = 230-786 mg HC/g TOC) were subjected to open-system programmed pyrolysis to determine the activation energy distributions for petroleum generation. Assuming a burial heating rate of 1°C/m.y. for each measured activation energy distribution, the calculated average temperature for 50% fractional conversion of the kerogen in the samples to petroleum is approximately 136 ± 7°C, but the range spans about 30°C (~121-151°C). Fifty-two outcrop samples of thermally immature Jurassic Oxford Clay Formation were collected from five locations in the United Kingdom to determine the variations of kinetic response for one source rock unit. The samples contain mainly type I or type II kerogens (HI = 230-774 mg HC/g TOC). At a heating rate of 1°C/m.y., the calculated temperatures for 50% fractional conversion of the Oxford Clay kerogens to petroleum differ by as much as 23°C (127-150°C). The data indicate that kerogen type, as defined by hydrogen index, is not systematically linked to kinetic response, and that default kinetics for the thermal decomposition of type I or type II kerogen can introduce unacceptable errors into numerical simulations. Furthermore, custom kinetics based on one or a few samples may be inadequate to account for variations in organofacies within a source rock. We propose three methods to evaluate the uncertainty contributed by kerogen kinetics to numerical simulations: (1) use the average kinetic distribution for multiple samples of source rock and the standard deviation for each activation energy in that distribution; (2) use source rock kinetics determined at several locations to describe different parts of the study area; and (3) use a weighted
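The 50%-conversion temperatures quoted above come from integrating parallel first-order Arrhenius reactions over a discrete activation-energy distribution at a constant geological heating rate. A minimal sketch of that calculation follows; the frequency factor, energy distribution and grid are illustrative assumptions, not the paper's measured kinetics.

```python
import numpy as np

R = 0.008314            # gas constant, kJ/(mol K)
SEC_PER_MY = 3.156e13   # seconds per million years

def conversion_curve(energies_kj, weights, A=1e14, beta_C_per_my=1.0,
                     T0=373.15, T1=523.15, n=2000):
    """Fractional conversion of kerogen to petroleum vs temperature for a
    discrete activation-energy distribution: parallel first-order reactions,
    constant heating rate beta, so X_E(T) = 1 - exp(-(1/beta) int k(T') dT')."""
    T = np.linspace(T0, T1, n)                  # K
    beta = beta_C_per_my / SEC_PER_MY           # K/s
    X = np.zeros_like(T)
    for E, w in zip(energies_kj, weights):
        k = A * np.exp(-E / (R * T))            # rate constant, 1/s
        # cumulative trapezoidal integral of k dT, divided by heating rate
        g = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(T)))) / beta
        X += w * (1.0 - np.exp(-g))
    return T, X / sum(weights)

# illustrative Gaussian-like distribution centred at 218 kJ/mol
E = np.arange(206.0, 231.0, 4.0)
w = np.exp(-0.5 * ((E - 218.0) / 6.0) ** 2)
T, X = conversion_curve(E, w)
T50_C = T[np.searchsorted(X, 0.5)] - 273.15     # temperature of 50% conversion
```

With these assumed kinetics the 50%-conversion temperature lands in the 120-150°C window the abstract reports, and shifting the energy distribution by a few kJ/mol moves it by several degrees, which is the uncertainty the proposed methods are designed to quantify.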
NASA Astrophysics Data System (ADS)
van den Heever, S. C.; Tao, W. K.; Skofronick Jackson, G.; Tanelli, S.; L'Ecuyer, T. S.; Petersen, W. A.; Kummerow, C. D.
2015-12-01
Cloud, aerosol and precipitation processes play a fundamental role in the water and energy cycle. It is critical to accurately represent these microphysical processes in numerical models if we are to better predict cloud and precipitation properties on weather through climate timescales. Much has been learned about cloud properties and precipitation characteristics from NASA satellite missions such as TRMM, CloudSat, and more recently GPM. Furthermore, data from these missions have been successfully utilized in evaluating the microphysical schemes in cloud-resolving models (CRMs) and global models. However, there are still many uncertainties associated with these microphysics schemes. These uncertainties can be attributed, at least in part, to the fact that microphysical processes cannot be directly observed or measured, but instead have to be inferred from those cloud properties that can be measured. Evaluation of microphysical parameterizations is becoming increasingly important as enhanced computational capabilities are facilitating the use of more sophisticated schemes in CRMs, and as future global models are being run on what has traditionally been regarded as cloud-resolving scales using CRM microphysical schemes. In this talk we will demonstrate how TRMM, CloudSat and GPM data have been used to evaluate different aspects of current CRM microphysical schemes, providing examples of where these approaches have been successful. We will also highlight CRM microphysical processes that have not been well evaluated and suggest approaches for addressing such issues. Finally, we will introduce a potential NASA satellite mission, the Cloud and Precipitation Processes Mission (CAPPM), which would facilitate the development and evaluation of different microphysical-dynamical feedbacks in numerical models.
EEMD based pitch evaluation method for accurate grating measurement by AFM
NASA Astrophysics Data System (ADS)
Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde
2016-09-01
The pitch measurement and AFM calibration precision are significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method to relieve the accuracy deterioration caused by high- and low-frequency components of the scanning profile during pitch evaluation. The simulation analysis shows that the application of EEMD can improve the pitch accuracy of the FFT-FT algorithm. The pitch error was small when the iteration number of the FFT-FT algorithm was 8. The AFM measurement of the 500 nm-pitch one-dimensional grating shows that the EEMD-based pitch evaluation method could improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity center algorithm when particles and impression marks were distributed on the sample surface. The measurement indicates that the nonlinearity was stable, and the nonlinearity of the x axis and of forward scanning was much smaller than their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs was demonstrated and a pitch uncertainty in the sub-nanometer range was achieved. The pitch uncertainty was reduced by about 10% by EEMD.
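The FFT stage underlying the methods compared above can be sketched simply: the pitch is the reciprocal of the dominant non-zero spatial frequency of the scan line. This is a baseline illustration only (the paper's FFT-FT and gravity-center algorithms, and the EEMD pre-processing, refine it considerably); the synthetic profile and noise level are assumptions.

```python
import numpy as np

def fft_pitch(profile, dx):
    """Estimate grating pitch as the reciprocal of the strongest non-DC
    spatial frequency in the scan-line amplitude spectrum."""
    spec = np.abs(np.fft.rfft(profile - profile.mean()))
    freqs = np.fft.rfftfreq(profile.size, d=dx)
    k = 1 + np.argmax(spec[1:])     # skip the DC bin
    return 1.0 / freqs[k]

# synthetic 500 nm-pitch profile sampled every 10 nm, with additive noise
x = np.arange(0.0, 20000.0, 10.0)   # nm, 40 grating periods
rng = np.random.default_rng(1)
profile = np.sin(2 * np.pi * x / 500.0) + 0.1 * rng.standard_normal(x.size)
pitch = fft_pitch(profile, dx=10.0)
```

A raw FFT estimate like this degrades when low-frequency drift or sharp contamination peaks leak power into neighbouring bins, which is precisely the deterioration EEMD decomposition is used to suppress.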
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as a Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
Variable impedance cardiography waveforms: how to evaluate the preejection period more accurately
NASA Astrophysics Data System (ADS)
Ermishkin, V. V.; Kolesnikov, V. A.; Lukoshkova, E. V.; Mokh, V. P.; Sonina, R. S.; Dupik, N. V.; Boitsov, S. A.
2012-12-01
The impedance method has been successfully applied for left ventricular function assessment during functional tests. The preejection period (PEP), the interval between the Q peak in the ECG and a specific mark on the impedance cardiogram (ICG) which corresponds to aortic valve opening, is an important indicator of the contractility state and its neurogenic control. Accurate identification of ejection onset by ICG is often problematic, especially in cardiologic patients, due to peculiar waveforms. An essential obstacle is variability of the shape of the ICG waveform during exercise and subsequent recovery. A promising solution is the introduction of an additional pulse sensor placed in a nearby region. We tested this idea in 28 healthy subjects and 6 cardiologic patients using a dual-channel impedance cardiograph for simultaneous recording from the aortic and neck regions, and an earlobe photoplethysmograph. Our findings suggest that the incidence of abnormal, complicated ICG waveforms increases with age. The combination of standard ICG with ear photoplethysmography and/or an additional impedance channel significantly improves the efficacy and accuracy of PEP estimation.
NASA Astrophysics Data System (ADS)
Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.
2013-12-01
The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except that we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and in the unstable wave regime, in which small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well-organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Hu, Weigang
2016-01-01
Purpose 4DCT delineated internal target volume (ITV) was applied to determine the tumor motion and used as planning target in treatment planning in lung cancer stereotactic body radiotherapy (SBRT). This work is to study the accuracy of using ITV to predict the real target dose in lung cancer SBRT. Materials and methods Both for phantom and patient cases, the ITV and gross tumor volumes (GTVs) were contoured on the maximum intensity projection (MIP) CT and ten CT phases, respectively. A SBRT plan was designed using ITV as the planning target on average projection (AVG) CT. This plan was copied to each CT phase and the dose distribution was recalculated. The GTV_4D dose was acquired through accumulating the GTV doses over all ten phases and regarded as the real target dose. To analyze the ITV dose error, the ITV dose was compared to the real target dose by endpoints of D99, D95, D1 (doses received by the 99%, 95% and 1% of the target volume), and dose coverage endpoint of V100(relative volume receiving at least the prescription dose). Results The phantom study shows that the ITV underestimates the real target dose by 9.47%∼19.8% in D99, 4.43%∼15.99% in D95, and underestimates the dose coverage by 5% in V100. The patient cases show that the ITV underestimates the real target dose and dose coverage by 3.8%∼10.7% in D99, 4.7%∼7.2% in D95, and 3.96%∼6.59% in V100 in motion target cases. Conclusions Cautions should be taken that ITV is not accurate enough to predict the real target dose in lung cancer SBRT with large tumor motions. Restricting the target motion or reducing the target dose heterogeneity could reduce the ITV dose underestimation effect in lung SBRT. PMID:26968812
Direct and indirect ophthalmoscopy for a more accurate baseline evaluation in aircrew members.
Blount, W C
1977-03-01
The currently required Federal Aviation Agency visual evaluation for commercial and airline pilots often does not detect quiescent retinal disease, unless there is a specific history or a current change in visual acuity which dictates the need for a dilated ophthalmoscopic evaluation. Statistics indicate that there may be a significant number of undetected retinal changes which can cause sudden and irreversible alterations in visual acuity during an airman's career. The requirements for an ophthalmoscopic examination should include, at the time of entry as an aircrew member into the aviation industry, a dilated fundus examination by the binocular indirect and direct ophthalmoscopic methods. In addition, documentary photography, visual fields, and other specific studies as indicated for these patients would be accomplished. These studies should be required by both the Federal Aviation Agency and the military services, just as baseline ECGs, chest films, SMA-12, and other laboratory studies are utilized. PMID:857802
Arakawa, Mototaka; Kushibiki, Jun-ichi; Aoki, Naoya
2004-05-01
The effective radius of a bulk-wave ultrasonic transducer as a circular piston source, fabricated on one end of a synthetic silica (SiO2) glass buffer rod, was evaluated for accurate velocity measurements of dispersive specimens over a wide frequency range. The effective radius was determined by comparing measured and calculated phase variations due to diffraction in an ultrasonic transmission line of the SiO2 buffer rod/water couplant/SiO2 standard specimen, using radio-frequency (RF) tone burst ultrasonic waves. Fourteen devices with different device parameters were evaluated. The velocities of the nondispersive standard specimen (C-7940) were found to be 5934.10 ± 0.35 m/s at 70 to 290 MHz, after diffraction correction using the nominal radius (0.75 mm) for an ultrasonic device with an operating center frequency of about 400 MHz. Corrected velocities were more accurately found to be 5934.15 ± 0.03 m/s by using the effective radius (0.780 mm) for the diffraction correction. Bulk-wave ultrasonic devices calibrated by this experimental procedure enable conducting extremely accurate velocity dispersion measurements. PMID:15217227
Asthma control cost-utility randomized trial evaluation (ACCURATE): the goals of asthma treatment
2011-01-01
Background Despite the availability of effective therapies, asthma remains a source of significant morbidity and use of health care resources. The central research question of the ACCURATE trial is whether maximal doses of (combination) therapy should be used for long periods in an attempt to achieve complete control of all features of asthma. An additional question is whether patients and society value the potential incremental benefit, if any, sufficiently to concur with such a treatment approach. We assessed patient preferences and cost-effectiveness of three treatment strategies aimed at achieving different levels of clinical control: 1. sufficiently controlled asthma 2. strictly controlled asthma 3. strictly controlled asthma based on exhaled nitric oxide as an additional disease marker Design 720 patients with mild to moderate persistent asthma from general practices with a practice nurse, age 18-50 yr, on daily treatment with inhaled corticosteroids (more than 3 months' usage of inhaled corticosteroids in the previous year), will be identified via patient registries of general practices in the Leiden, Nijmegen, and Amsterdam areas in The Netherlands. The design is a 12-month cluster-randomised parallel trial with 40 general practices in each of the three arms. The patients will visit the general practice at baseline, 3, 6, 9, and 12 months. At each planned and unplanned visit to the general practice, treatment will be adjusted with support of an internet-based asthma monitoring system supervised by a central coordinating specialist nurse. Patient preferences and utilities will be assessed by questionnaire and interview. Data on asthma control, treatment step, adherence to treatment, utilities and costs will be obtained every 3 months and at each unplanned visit. Differences in societal costs (medication, other (health) care and productivity) will be compared to differences in the number of limited activity days and in quality adjusted life years (Dutch EQ5D, SF6D
NASA Astrophysics Data System (ADS)
Oh, K.; Han, M.; Kim, K.; Heo, Y.; Moon, C.; Park, S.; Nam, S.
2016-02-01
For quality assurance in radiation therapy, several types of dosimeters are used, such as ionization chambers, radiographic films, thermo-luminescent dosimeters (TLDs), and semiconductor dosimeters. Among them, semiconductor dosimeters are particularly useful as in vivo dosimeters or in high dose-gradient areas such as the penumbra region, because they are more sensitive and smaller in size than typical dosimeters. In this study, we developed and evaluated Cadmium Telluride (CdTe) dosimeters, one of the most promising semiconductor dosimeters due to their high quantum efficiency and charge collection efficiency. Such CdTe dosimeters come in single crystal and polycrystalline forms depending upon the fabrication process. Both types of CdTe dosimeters are commercially available, but only the polycrystalline form is suitable for radiation dosimeters, since it is less affected by the volumetric effect and energy dependence. To develop and evaluate polycrystalline CdTe dosimeters, polycrystalline CdTe films were prepared by thermal evaporation. After that, a CdTeO3 layer, a thin oxide layer, was deposited on top of the CdTe film by RF sputtering to improve charge carrier transport properties and to reduce leakage current. The CdTeO3 layer, which acts as a passivation layer, also helps the dosimeter reduce its sensitivity changes with repeated use due to radiation damage. Finally, the top and bottom electrodes, In/Ti and Pt, were used to form a Schottky contact. Subsequently, the electrical properties under high-energy photon beams from a linear accelerator (LINAC), such as response coincidence, dose linearity, dose rate dependence, reproducibility, and percentage depth dose, were measured to evaluate the polycrystalline CdTe dosimeters. In addition, we compared the experimental data of the dosimeter fabricated in this study with those of the silicon diode dosimeter and thimble ionization chamber, which are widely used in routine dosimetry systems and dose measurements for radiation
NASA Technical Reports Server (NTRS)
Canright, R. B., Jr.; Semler, T. T.
1972-01-01
Several approximations to the Doppler broadening functions psi(x, theta) and chi(x, theta) are compared with respect to accuracy and speed of evaluation. A technique due to A. M. Turing (1943) is shown to be at least as accurate as direct numerical quadrature and somewhat faster than Gaussian quadrature. FORTRAN 4 listings are included.
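As an illustration of the direct-quadrature baseline mentioned above, the sketch below evaluates psi(x, theta) numerically. Note the convention for psi varies between references; this is one common form, and it is not taken from the report itself:

```python
import numpy as np
from scipy.integrate import quad

def psi(x, theta):
    """Doppler broadening function psi(x, theta) by direct numerical
    quadrature (one common convention; definitions vary by reference)."""
    integrand = lambda y: np.exp(-theta**2 * (x - y)**2 / 4.0) / (1.0 + y**2)
    val, _ = quad(integrand, -np.inf, np.inf)
    return theta / (2.0 * np.sqrt(np.pi)) * val

# As theta grows, the Gaussian kernel sharpens and psi(x, theta)
# approaches the natural line shape 1/(1+x^2).
print(psi(0.0, 1.0), psi(0.0, 10.0))
```

For moderate theta this direct quadrature is accurate but slow, which is exactly the trade-off the report's comparison addresses.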
NASA Astrophysics Data System (ADS)
Sakai, Yasumasa; Taki, Hirofumi; Kanai, Hiroshi
2016-07-01
In our previous study, the viscoelasticity of the radial artery wall was estimated to diagnose endothelial dysfunction using a high-frequency (22 MHz) ultrasound device. In the present study, we employed a commercial ultrasound device (7.5 MHz) and estimated the viscoelasticity using arterial pressure and diameter, both of which were measured at the same position. In a phantom experiment, the proposed method successfully estimated the elasticity and viscosity of the phantom with errors of 1.8 and 30.3%, respectively. In an in vivo measurement, the transient change in the viscoelasticity was measured for three healthy subjects during flow-mediated dilation (FMD). The proposed method revealed the softening of the arterial wall originating from the FMD reaction within 100 s after avascularization. These results indicate the high performance of the proposed method in evaluating vascular endothelial function just after avascularization, where the function is difficult to estimate with a conventional FMD measurement.
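Separating elasticity and viscosity from simultaneous pressure and strain measurements can be sketched with a Voigt-type model, p(t) = E·ε(t) + η·dε/dt. This is a hedged illustration with synthetic data, not the authors' estimation procedure; the waveform and the values E_true, eta_true are arbitrary assumptions:

```python
import numpy as np

# Synthetic strain waveform over one "cardiac cycle" (hypothetical;
# real data would come from the measured arterial diameter).
t = np.linspace(0.0, 1.0, 500)                 # time [s]
eps = 0.05 * (1.0 - np.cos(2*np.pi*t)) / 2.0   # strain epsilon(t)
deps = np.gradient(eps, t)                      # strain rate d(eps)/dt
E_true, eta_true = 120.0, 4.0                   # assumed elasticity, viscosity
p = E_true*eps + eta_true*deps                  # Voigt model pressure

# Least-squares recovery of (E, eta) from p, eps, and deps
A = np.column_stack([eps, deps])
(E_est, eta_est), *_ = np.linalg.lstsq(A, p, rcond=None)
print(E_est, eta_est)
```

With noise-free synthetic data the fit recovers the assumed parameters exactly; with real measurements, the residual would reflect measurement noise and model mismatch.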
Evaluation of a low-cost and accurate ocean temperature logger on subsurface mooring systems
Tian, Chuan; Deng, Zhiqun; Lu, Jun; Xu, Xiaoyang; Zhao, Wei; Xu, Ming
2014-06-23
Monitoring seawater temperature is important to understanding evolving ocean processes. To monitor internal waves or ocean mixing, a large number of temperature loggers are typically mounted on subsurface mooring systems to obtain high-resolution temperature data at different water depths. In this study, we redesigned and evaluated a compact, low-cost, self-contained, high-resolution, and high-accuracy ocean temperature logger, TC-1121. The newly designed TC-1121 loggers are smaller and more robust, and their sampling intervals can be changed automatically by indicated events. They have been widely used in many mooring systems to study internal waves and ocean mixing. The logger's fundamental design, noise analysis, calibration, drift test, and a long-term sea trial are discussed in this paper.
Evaluation of the EURO-CORDEX RCMs to accurately simulate the Etesian wind system
NASA Astrophysics Data System (ADS)
Dafka, Stella; Xoplaki, Elena; Toreti, Andrea; Zanis, Prodromos; Tyrlis, Evangelos; Luterbacher, Jürg
2016-04-01
The Etesians are among the most persistent regional-scale wind systems in the lower troposphere, blowing over the Aegean Sea during the extended summer season. An evaluation of the high-spatial-resolution EURO-CORDEX Regional Climate Models (RCMs) is presented here. The study documents the performance of the individual models in representing the basic spatiotemporal pattern of the Etesian wind system for the period 1989-2004. The analysis focuses on evaluating the abilities of the RCMs in simulating the surface wind over the Aegean Sea and the associated large-scale atmospheric circulation. Mean sea level pressure (SLP), wind speed, and geopotential height at 500 hPa are used. The simulated results are validated against reanalysis datasets (20CR-v2c and ERA20-C) and daily observational measurements (12:00 UTC) from mainland Greece and the Aegean Sea. The analysis highlights the general ability of the RCMs to capture the basic features of the Etesians, but also indicates considerable deficiencies for selected metrics, regions, and subperiods. These deficiencies include the significant underestimation (overestimation) of the mean SLP in the northeastern part of the analysis domain in all subperiods (for May and June) when compared to 20CR-v2c (ERA20-C), the significant overestimation of the anomalous ridge over the Balkans and central Europe, and the underestimation of the wind speed over the Aegean Sea. Future work will include an assessment of the Etesians for the coming decades using EURO-CORDEX projections under different RCP scenarios and an estimate of the future potential for wind energy production.
Congenital spinal dermal tract: how accurate is clinical and radiological evaluation?
Tisdall, Martin M; Hayward, Richard D; Thompson, Dominic N P
2015-06-01
OBJECT A dermal sinus tract is a common form of occult spinal dysraphism. The presumed etiology relates to a focal failure of disjunction resulting in a persistent adhesion between the neural and cutaneous ectoderm. Clinical and radiological features can appear innocuous, leading to delayed diagnosis and failure to appreciate the implications or extent of the abnormality. If it is left untreated, complications can include meningitis, spinal abscess, and inclusion cyst formation. The authors present their experience in 74 pediatric cases of spinal dermal tract in an attempt to identify which clinical and radiological factors are associated with an infective presentation and to assess the reliability of MRI in evaluating this entity. METHODS Consecutive cases of spinal dermal tract treated with resection between 1998 and 2010 were identified from the departmental surgical database. Demographics, clinical history, and radiological and operative findings were collected from the patient records. The presence or absence of active infection (abscess, meningitis) at the time of neurosurgical presentation and any history of local sinus discharge or infection was assessed. Magnetic resonance images were reviewed to evaluate the extent of the sinus tract and determine the presence of an inclusion cyst. Radiological and operative findings were compared. RESULTS The surgical course was uncomplicated in 90% of 74 cases eligible for analysis. Magnetic resonance imaging underreported the presence of both an intradural tract (MRI 46%, operative finding 86%) and an intraspinal inclusion cyst (MRI 15%, operative finding 24%). A history of sinus discharge (OR 12.8, p = 0.0003) and the intraoperative identification of intraspinal inclusion cysts (OR 5.6, p = 0.023) were associated with an infective presentation. There was no significant association between the presence of an intradural tract discovered at surgery and an infective presentation. CONCLUSIONS Surgery for the treatment of
NASA Astrophysics Data System (ADS)
Prykäri, Tuukka; Czajkowski, Jakub; Alarousu, Erkki; Myllylä, Risto
2010-05-01
Optical coherence tomography (OCT), a technique for the noninvasive imaging of turbid media based on low-coherence interferometry, was originally developed for the imaging of biological tissues. Since the development of the technique, most of its applications have been related to the area of biomedicine. However, the vertical resolution of the technique was improved to a submicron scale early on, which enables new possibilities and applications. This article presents possible applications of OCT in the paper industry, where submicron resolution, or at least a resolution close to one micron, is required. This requirement comes from the layered structure of paper products, where layer thickness may vary from single microns to tens of micrometers. This is especially true for high-quality paper products, where several different coating layers are used to obtain a smooth surface structure and a high gloss. In this study, we demonstrate that optical coherence tomography can be used to measure and evaluate the quality of the coating layer of a premium glossy photopaper. In addition, we show that for some paper products, it is possible to measure across the entire thickness range of a paper sheet. Furthermore, we suggest that in addition to topography and tomography images of objects, it is possible to obtain gloss-like information by tracking the magnitude of individual interference signals in optical coherence tomography.
Semi-numerical evaluation of one-loop corrections
Ellis, R.K.; Giele, W.T.; Zanderighi, G.; /Fermilab
2005-08-01
We present a semi-numerical algorithm to calculate one-loop virtual corrections to scattering amplitudes. The divergences of the loop amplitudes are regulated using dimensional regularization. We treat in detail the case of amplitudes with up to five external legs and massless internal lines, although the method is more generally applicable. Tensor integrals are reduced to generalized scalar integrals, which in turn are reduced to a set of known basis integrals using recursion relations. The reduction algorithm is modified near exceptional configurations to ensure numerical stability. To test the procedure we apply these techniques to one-loop corrections to the Higgs to four quark process for which analytic results have recently become available.
Flocke, N
2009-08-14
In this paper it is shown that shifted Jacobi polynomials Gn(p,q,x) can be used in connection with the Gaussian quadrature modified-moment technique to greatly enhance the accuracy of evaluation of the Rys roots and weights used in Gaussian integral evaluation in quantum chemistry. A general four-term inhomogeneous recurrence relation is derived for the shifted Jacobi polynomial modified moments over the Rys weight function e^(-Tx)/√x. It is shown that for q = 1/2 this general four-term inhomogeneous recurrence relation reduces to a three-term p-dependent inhomogeneous recurrence relation. Adjusting p to proper values depending on the Rys exponential parameter T, the method is capable of delivering highly accurate results for a large number of roots and weights in the most difficult to treat intermediate T range. Examples are shown, and detailed formulas together with practical suggestions for their efficient implementation are also provided. PMID:19691378
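To make the moment-based route to quadrature rules concrete, the toy sketch below builds a two-point Gaussian rule for the Rys weight e^(-Tx)/√x on [0,1] from its raw moments, which are incomplete-gamma integrals. Raw moments become severely ill-conditioned at higher orders, which is exactly why the paper works with shifted-Jacobi modified moments instead; T = 5 is an arbitrary example value:

```python
import numpy as np
from scipy.special import gamma, gammainc

T = 5.0  # Rys exponential parameter (example value)

def moment(k, T):
    # m_k = int_0^1 x^k e^(-Tx)/sqrt(x) dx = Gamma(k+1/2) P(k+1/2, T) / T^(k+1/2),
    # where P is the regularized lower incomplete gamma (scipy's gammainc).
    a = k + 0.5
    return gamma(a) * gammainc(a, T) / T**a

m = [moment(k, T) for k in range(4)]

# Monic p(x) = x^2 + b*x + c orthogonal to 1 and x w.r.t. the Rys weight:
#   m2 + b*m1 + c*m0 = 0  and  m3 + b*m2 + c*m1 = 0
b, c = np.linalg.solve([[m[1], m[0]], [m[2], m[1]]], [-m[2], -m[3]])
x1, x2 = np.roots([1.0, b, c]).real        # quadrature nodes (real for a
                                           # positive weight function)
w2 = (m[1] - x1*m[0]) / (x2 - x1)          # weights from m0 and m1
w1 = m[0] - w2

# A 2-point Gaussian rule is exact for polynomials up to degree 3
print(w1*x1**3 + w2*x2**3, m[3])
```

For two points this is well-conditioned; pushing the same raw-moment construction to many points fails numerically, motivating the modified-moment recurrences of the paper.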
EVALUATION OF NUMERICAL SCHEMES FOR SOLVING A CONSERVATION OF SPECIES EQUATION WITH CHEMICAL TERMS
Numerical methods are investigated for solving a system of continuity equations that contain linear and nonlinear chemistry as source and sink terms. It is shown that implicit, finite-difference approximations, when applied to the chemical kinetic terms, yield accurate results wh...
A Numerical Simulation Approach for Reliability Evaluation of CFRP Composite
NASA Astrophysics Data System (ADS)
Liu, D. S.-C.; Jenab, K.
2013-02-01
Due to the superior mechanical properties of carbon fiber reinforced plastic (CFRP) materials, they are widely used in industries such as aircraft manufacturing. Aircraft manufacturers are switching from metal to composite structures, motivating the study of the reliability (R-value) of CFRP. In this study, a numerical simulation method is proposed to determine the reliability of Multiaxial Warp Knitted (MWK) textiles used to make CFRP composites. This method analyzes the distribution of carbon fiber angle misalignments, from a chosen 0° direction, caused by the sewing process of the textile, and finds the R-value, a value between 0 and 1. The application of this method is demonstrated by an illustrative example.
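A Monte Carlo sketch of the angle-misalignment idea: sample misalignment angles from an assumed distribution and take the reliability as the probability that the deviation from the 0° direction stays within a tolerance. The normal distribution, its standard deviation, and the tolerance are illustrative assumptions, not the paper's actual model:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

sigma_deg = 1.5   # assumed std. dev. of fiber misalignment (hypothetical)
tol_deg = 3.0     # assumed allowable deviation from the 0-degree axis

# Simulated reliability: fraction of fibers within tolerance
angles = rng.normal(0.0, sigma_deg, size=1_000_000)
R_sim = np.mean(np.abs(angles) <= tol_deg)

# Analytic check for a zero-mean normal: P(|X| <= tol) = erf(tol/(sigma*sqrt(2)))
R_exact = erf(tol_deg / (sigma_deg * sqrt(2.0)))
print(R_sim, R_exact)
```

Any empirically fitted misalignment distribution could be substituted for the normal draw without changing the structure of the estimate.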
Schultz, Zachery D.; Warrick, Jay W.; Guckenberger, David J.; Pezzi, Hannah M.; Sperger, Jamie M.; Heninger, Erika; Saeed, Anwaar; Leal, Ticiana; Mattox, Kara; Traynor, Anne M.; Campbell, Toby C.; Berry, Scott M.; Beebe, David J.; Lang, Joshua M.
2016-01-01
Background Expression of programmed-death ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may offer a less invasive method to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs from the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM) positive and EpCAM negative CTCs (as in epithelial-mesenchymal transition (EMT)). Methods To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or Vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Results Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies including cytokeratin. The number of CD11b+ cells misidentified as CTCs varied among patients, accounting for 33–100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples. Conclusions Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria
Lift capability prediction for helicopter rotor blade-numerical evaluation
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Cîrciu, Ionicǎ; Luculescu, Doru
2016-06-01
The main objective of this paper is to describe the key physical features for modelling the unsteady aerodynamic effects found on a helicopter rotor blade operating under nominally attached flow conditions away from stall. The unsteady effects were considered as phase differences between the forcing function and the aerodynamic response, being functions of the reduced frequency, the Mach number, and the mode of forcing. For a helicopter rotor, the reduced frequency at any blade element cannot be calculated exactly, but a first-order approximation for the reduced frequency gives useful information about the degree of unsteadiness. The sources of unsteady effects were decomposed into perturbations to the local angle of attack and velocity field. The numerical calculations and graphics were performed in the FLUENT and MAPLE software environments. This mathematical model is applicable to the aerodynamic design of wind turbine rotor blades, hybrid energy system optimization, and aeroelastic analysis.
Analytical solutions of moisture flow equations and their numerical evaluation
Gibbs, A.G.
1981-04-01
The role of analytical solutions of idealized moisture flow problems is discussed. Some different formulations of the moisture flow problem are reviewed. A number of different analytical solutions are summarized, including the case of idealized coupled moisture and heat flow. The evaluation of special functions which commonly arise in analytical solutions is discussed, including some pitfalls in the evaluation of expressions involving combinations of special functions. Finally, perturbation theory methods are summarized which can be used to obtain good approximate analytical solutions to problems which are too complicated to solve exactly, but which are close to an analytically solvable problem.
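One classic pitfall with combinations of special functions, of the kind alluded to above, is catastrophic overflow/underflow in a product whose value is perfectly modest. A minimal sketch (the specific functions are illustrative, not those in the report):

```python
import numpy as np
from scipy.special import erfc, erfcx

# exp(x)*erfc(sqrt(x)) is ~ 1/sqrt(pi*x) for large x, a modest number.
# Evaluated naively, exp(x) overflows to inf while erfc(sqrt(x))
# underflows to 0, and the product degenerates to nan.
x = 800.0
naive = np.exp(x) * erfc(np.sqrt(x))   # inf * 0 -> nan
stable = erfcx(np.sqrt(x))             # scaled function, well-behaved
print(naive, stable)
```

Scaled variants (erfcx, gammaincc with log options, exponentially scaled Bessel functions) exist precisely because analytical solutions often produce such products.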
Evaluation and purchase of confocal microscopes: Numerous factors to consider
The purchase of a confocal microscope can be a complex and difficult decision for an individual scientist, group or evaluation committee. This is true even for scientists that have used confocal technology for many years. The task of reaching the optimal decision becomes almost i...
NASA Astrophysics Data System (ADS)
Shi, W. D.; Zhang, G. J.; Zhang, D. S.
2013-12-01
The objective of this paper is to evaluate the predictive capability of three turbulence models for the simulation of unsteady cavitating flows around a 2D Clark-Y hydrofoil. The three turbulence models were the standard k-ε model, a hybrid model of the density correction model (DCM) and the filter-based model (FBM), and an improved partially-averaged Navier-Stokes (PANS) model based on the k-ε model. Using the above-mentioned turbulence models and a homogeneous cavitation model, the unsteady cloud cavitation flows around the hydrofoil were numerically simulated, and the time evolutions of cavity shape and lift were obtained. The results, compared with tunnel experiment data, show that the hybrid model and the PANS model can accurately capture unsteady cavity shedding details as well as the fluctuation frequency and amplitude of lift and drag. The k-ε model agrees poorly with the experimental visualizations, mainly because of an overprediction of the turbulent viscosity in the rear part of the cavity, which prevents the re-entrant jet from fully reaching the leading edge. The adverse pressure gradient plays an important role in the progression of the re-entrant jet. Both the shock wave generated by the collapse of the cloud cavity and the growth of the attached sheet cavity contribute to the increase of the adverse pressure gradient.
Numerical Evaluation of Lateral Diffusion Inside Diffusive Gradients in Thin Films Samplers
2015-01-01
Using numerical simulation of diffusion inside diffusive gradients in thin films (DGT) samplers, we show that the effect of lateral diffusion inside the sampler on the solute flux into the sampler is a nonlinear function of the diffusion layer thickness and the physical sampling window size. In contrast, earlier work concluded that this effect was constant irrespective of parameters of the sampler geometry. The flux increase caused by lateral diffusion inside the sampler was determined to be ∼8.8% for standard samplers, which is considerably lower than the previous estimate of ∼20%. Lateral diffusion is also propagated to the diffusive boundary layer (DBL), where it leads to a slightly stronger decrease in the mass uptake than suggested by the common 1D diffusion model that is applied for evaluating DGT results. We introduce a simple correction procedure for lateral diffusion and demonstrate how the effect of lateral diffusion on diffusion in the DBL can be accounted for. These corrections often result in better estimates of the DBL thickness (δ) and the DGT-measured concentration than earlier approaches and will contribute to more accurate concentration measurements in solute monitoring in waters. PMID:25877251
Borring, J.; Gundtoft, H.E.; Borum, K.K.; Toft, P.
1997-08-01
In an effort to improve their ultrasonic scanning technique for accurate determination of the cladding thickness in LEU fuel plates, new equipment and modifications to the existing hardware and software have been tested and evaluated. The authors are now able to measure an aluminium thickness down to 0.25 mm instead of the previous 0.35 mm. Furthermore, they have shown how the measuring sensitivity can be improved from 0.03 mm to 0.01 mm. It has now become possible to check their standard fuel plates for DR3 against the minimum cladding thickness requirements non-destructively. Such measurements open the possibility for the acceptance of a thinner nominal cladding than normally used today.
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Singer, Bart A.
2003-01-01
We evaluate the applicability of a production computational fluid dynamics code for conducting detached eddy simulation for unsteady flows. A second-order accurate Navier-Stokes code developed at NASA Langley Research Center, known as TLNS3D, is used for these simulations. We focus our attention on high Reynolds number flow (Re = 5 × 10^4 - 1.4 × 10^5) past a circular cylinder to simulate flows with large-scale separations. We consider two types of flow situations: one in which the flow at the separation point is laminar, and the other in which the flow is already turbulent when it detaches from the surface of the cylinder. Solutions are presented for two- and three-dimensional calculations using both the unsteady Reynolds-averaged Navier-Stokes paradigm and the detached eddy simulation treatment. All calculations use the standard Spalart-Allmaras turbulence model as the base model.
3-D numerical evaluation of density effects on tracer tests.
Beinhorn, M; Dietrich, P; Kolditz, O
2005-12-01
In this paper we present numerical simulations carried out to assess the importance of density-dependent flow on tracer plume development. The scenario considered in the study is characterized by a short-term tracer injection phase into a fully penetrating well and a natural hydraulic gradient. The scenario is thought to be typical for tracer tests conducted in the field. Using a reference case as a starting point, different model parameters were changed in order to determine their importance to density effects. The study is based on a three-dimensional model domain. Results were interpreted using concentration contours and a first moment analysis. Tracer injections of 0.036 kg per meter of saturated aquifer thickness do not cause significant density effects assuming hydraulic gradients of at least 0.1%. Higher tracer input masses, as used for geoelectrical investigations, may lead to buoyancy-induced flow in the early phase of a tracer test which in turn impacts further plume development. This also holds true for shallow aquifers. Results of simulations with different tracer injection rates and durations imply that the tracer input scenario has a negligible effect on density flow. Employing model cases with different realizations of a log conductivity random field, it could be shown that small variations of hydraulic conductivity in the vicinity of the tracer injection well have a major control on the local tracer distribution but do not mask effects of buoyancy-induced flow. PMID:16183165
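The first-moment analysis used to interpret the simulations can be sketched in one dimension (the study itself uses a 3-D domain; the Gaussian profile here is synthetic):

```python
import numpy as np

# Synthetic 1D concentration profile: a Gaussian plume (illustrative only)
x, dx = np.linspace(0.0, 100.0, 1001, retstep=True)  # distance [m]
c = np.exp(-(x - 40.0)**2 / (2.0 * 5.0**2))          # concentration [a.u.]

m0 = np.sum(c) * dx                        # zeroth moment: total mass
xc = np.sum(x * c) * dx / m0               # first moment: plume centroid
var = np.sum((x - xc)**2 * c) * dx / m0    # second central moment: spread
print(xc, var)
```

Buoyancy-induced flow of the kind studied would show up as a drift of the centroid xc and an asymmetric growth of the spread relative to the purely advective-dispersive case.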
Numerical evaluation of one-loop diagrams near exceptional momentum configurations
Walter T Giele; Giulia Zanderighi; E.W.N. Glover
2004-07-06
One problem that plagues the numerical evaluation of one-loop Feynman diagrams using recursive integration-by-parts relations is numerical instability near exceptional momentum configurations. In this contribution we discuss a generic solution to this problem. As an example we consider the case of forward light-by-light scattering.
Numerical evaluation of single central jet for turbine disk cooling
NASA Astrophysics Data System (ADS)
Subbaraman, M. R.; Hadid, A. H.; McConnaughey, P. K.
The cooling arrangement of the Space Shuttle Main Engine High Pressure Oxidizer Turbopump (HPOTP) incorporates two jet rings, each of which produces 19 high-velocity coolant jets. At some operating conditions, the frequency of excitation associated with the 19 jets coincides with the natural frequency of the turbine blades, contributing to fatigue cracking of blade shanks. In this paper, an alternate turbine disk cooling arrangement, applicable to disk faces of zero hub radius, is evaluated, which consists of a single coolant jet impinging at the center of the turbine disk. Results of the CFD analysis show that replacing the jet ring with a single central coolant jet in the HPOTP leads to an acceptable thermal environment at the disk rim. Based on the predictions of flow and temperature fields for operating conditions, the single central jet cooling system was recommended for implementation into the development program of the Technology Test Bed Engine at NASA Marshall Space Flight Center.
[Numerical evaluation of soil quality under different conservation tillage patterns].
Wu, Yu-Hong; Tian, Xiao-Hong; Chi, Wen-Bo; Nan, Xiong-Xiong; Yan, Xiao-Li; Zhu, Rui-Xiang; Tong, Yan-An
2010-06-01
A 9-year field experiment was conducted on the Guanzhong Plain of Shaanxi Province to study the effects of subsoiling, rotary tillage, straw return, no-till seeding, and traditional tillage on soil physical and chemical properties and grain yield in a winter wheat-summer maize rotation system, and a comprehensive evaluation was made of the soil quality under these tillage patterns by the method of principal component analysis (PCA). Compared with traditional tillage, all the conservation tillage patterns improved soil fertility and soil physical properties. Under conservation tillage, the activities of soil urease and alkaline phosphatase increased significantly, the soil quality index increased by 19.8%-44.0%, and the grain yields of winter wheat and summer maize (except under no-till seeding with straw covering) increased by 13%-28% and 3%-12%, respectively. Subsoiling every other year, straw chopping combined with rotary tillage, and straw mulching combined with subsoiling not only increased crop yield but also improved soil quality. Based on the economic and ecological benefits, the practices of subsoiling and straw return should be promoted. PMID:20873622
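A PCA-based soil quality index of the kind described can be sketched as follows. The indicator matrix is random placeholder data, and weighting PC scores by explained variance is one common convention, not necessarily the exact procedure used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical indicator matrix: rows = tillage plots, columns = soil
# indicators (e.g. urease activity, phosphatase activity, bulk density, ...)
X = rng.normal(size=(8, 5))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each indicator

# PCA via SVD of the standardized data
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
expl = s**2 / np.sum(s**2)                   # explained variance ratios
scores = Xs @ Vt.T                           # principal component scores

# Soil quality index: variance-weighted sum of the PC scores per plot
sqi = scores @ expl
print(sqi)
```

Plots would then be ranked by sqi; because the indicators are standardized, the index is relative within the experiment rather than an absolute quality measure.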
Johnson, B M; Guan, X; Gammie, C F
2008-06-24
The descriptions of some of the numerical tests in our original paper are incomplete, making reproduction of the results difficult. We provide the missing details here. The relevant tests are described in section 4 of the original paper (Figures 8-11).
Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan
2016-05-01
An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To
Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard
2005-08-01
MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of ''consensus scoring'', i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
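The ''consensus scoring'' idea can be sketched as a simple voting rule: accept a peptide only when at least two independent engines identify it above their own thresholds. Engine names, peptide sequences, scores, and thresholds below are illustrative only:

```python
# Per-engine score thresholds (illustrative values, not benchmarked ones)
THRESHOLDS = {"mascot": 30.0, "xtandem": 25.0, "sonar": 40.0}

def consensus(ids_per_engine, min_engines=2):
    """ids_per_engine: {engine: {peptide: score}} -> accepted peptides."""
    votes = {}
    for engine, ids in ids_per_engine.items():
        for peptide, score in ids.items():
            if score >= THRESHOLDS[engine]:          # engine-level filter
                votes[peptide] = votes.get(peptide, 0) + 1
    return {p for p, n in votes.items() if n >= min_engines}

ids = {
    "mascot":  {"LVNELTEFAK": 45.0, "AEFVEVTK": 28.0},
    "xtandem": {"LVNELTEFAK": 33.0, "AEFVEVTK": 31.0},
    "sonar":   {"AEFVEVTK": 35.0},
}
print(consensus(ids))  # only the peptide confirmed by >= 2 engines survives
```

Here AEFVEVTK clears only one engine's threshold and is rejected, illustrating how the voting rule trims likely false positives at the cost of some sensitivity.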
What the Numbers Mean: Providing a Context for Numerical Student Evaluations of Courses.
ERIC Educational Resources Information Center
Trout, Paul A.
1997-01-01
Analysis of the content of and student responses to college course evaluations suggests that, in general, students are seeking entertainment, comfort, high grades, and less work and are hostile to the necessary routines and rigors of higher education. The commonly used numerical evaluation form is not only unreliable and invalid, it is an…
Evaluating the use of high-resolution numerical weather forecast for debris flow prediction.
NASA Astrophysics Data System (ADS)
Nikolopoulos, Efthymios I.; Bartsotas, Nikolaos S.; Borga, Marco; Kallos, George
2015-04-01
The sudden occurrence combined with the high destructive power of debris flows poses a significant threat to human life and infrastructures. Therefore, developing early warning procedures for the mitigation of debris flow risk is of great economic and societal importance. Given that rainfall is the predominant factor controlling debris flow triggering, it is indisputable that development of effective debris flow warning procedures requires accurate knowledge of the properties (e.g. duration, intensity) of the triggering rainfall. Moreover, efficient and timely response of emergency operations depends highly on the lead-time provided by the warning systems. Currently, the majority of early warning systems for debris flows are based on nowcasting procedures. While the latter may be successful in predicting the hazard, they provide warnings with a relatively short lead-time (~6 h). Increasing the lead-time is necessary in order to improve the pre-incident operations and communication of the emergency, thus coupling warning systems with weather forecasting is essential for advancing early warning procedures. In this work we evaluate the potential of using high-resolution (1 km) rainfall fields forecasted with a state-of-the-art numerical weather prediction model (RAMS/ICLAMS), in order to predict the occurrence of debris flows. Analysis is focused over the Upper Adige region, Northeast Italy, an area where debris flows are frequent. Seven storm events that generated a large number (>80) of debris flows during the period 2007-2012 are analyzed. Radar-based rainfall estimates, available from the operational C-band radar located at Mt Macaion, are used as the reference to evaluate the forecasted rainfall fields. Evaluation is mainly focused on assessing the error in forecasted rainfall properties (magnitude, duration) and the correlation in space and time with the reference field. Results show that the forecasted rainfall fields captured very well the magnitude and
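Forecast-versus-radar evaluation of the kind described above typically reduces to a few standard verification measures. A minimal sketch with made-up numbers, not the study's data:

```python
import numpy as np

def evaluate_forecast(forecast, radar):
    """Compare forecast rainfall against a radar reference: magnitude error
    (bias ratio of event totals), RMSE, and linear correlation. A sketch of
    standard verification measures, not the study's exact procedure."""
    forecast = np.asarray(forecast, dtype=float)
    radar = np.asarray(radar, dtype=float)
    return {
        "bias_ratio": forecast.sum() / radar.sum(),   # > 1 means overforecast
        "rmse": float(np.sqrt(np.mean((forecast - radar) ** 2))),
        "correlation": float(np.corrcoef(forecast.ravel(), radar.ravel())[0, 1]),
    }

metrics = evaluate_forecast([1.1, 2.2, 3.3, 4.4], [1.0, 2.0, 3.0, 4.0])
```

The same three numbers can be computed per event (duration, totals) or per grid cell (spatial correlation).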
vom Saal, Frederick S.; Welshons, Wade V.
2016-01-01
There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273
NASA Astrophysics Data System (ADS)
Hrubý, Jan
2012-04-01
Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both in the physical concepts and in the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of the mass, energy, and momentum conservation for both phases.
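The benefit of choosing density and internal energy as independent variables is that the flow solver's conserved variables map directly to a thermodynamic state without iteration, and a table that interpolates shared nodal values is automatically continuous across patch boundaries. A toy illustration, with an ideal-gas surface standing in for the IAPWS-95 fit:

```python
import numpy as np

# Toy property table p(rho, u) on a rectangular grid. Bilinear interpolation
# is continuous across cell boundaries because adjacent cells share nodal
# values, unlike truncated per-cell Taylor expansions. (Illustrative only:
# an ideal-gas surface stands in for the piecewise IAPWS-95 representation.)
R_GAS = 461.5        # J/(kg K), specific gas constant of water vapor
CV = 1400.0          # J/(kg K), assumed-constant heat capacity (toy model)

rho_nodes = np.linspace(0.1, 2.0, 40)       # density grid, kg/m^3
u_nodes = np.linspace(2.0e6, 3.0e6, 40)     # internal energy grid, J/kg
p_table = np.outer(rho_nodes, R_GAS * u_nodes / CV)   # p = rho*R*T, T = u/cv

def p_lookup(rho, u):
    """Pressure from the table by bilinear interpolation, no iteration."""
    i = int(np.clip(np.searchsorted(rho_nodes, rho) - 1, 0, len(rho_nodes) - 2))
    j = int(np.clip(np.searchsorted(u_nodes, u) - 1, 0, len(u_nodes) - 2))
    fr = (rho - rho_nodes[i]) / (rho_nodes[i + 1] - rho_nodes[i])
    fu = (u - u_nodes[j]) / (u_nodes[j + 1] - u_nodes[j])
    return ((1 - fr) * (1 - fu) * p_table[i, j]
            + fr * (1 - fu) * p_table[i + 1, j]
            + (1 - fr) * fu * p_table[i, j + 1]
            + fr * fu * p_table[i + 1, j + 1])
```

A real implementation would store higher-order patch coefficients for accuracy, but the continuity argument is the same: neighboring patches must share boundary values.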
Lindgren, Richard J.; Taylor, Charles J.; Houston, Natalie A.
2009-01-01
A substantial number of public water system wells in south-central Texas withdraw groundwater from the karstic, highly productive Edwards aquifer. However, the use of numerical groundwater flow models to aid in the delineation of contributing areas for public water system wells in the Edwards aquifer is problematic because of the complex hydrogeologic framework and the presence of conduit-dominated flow paths in the aquifer. The U.S. Geological Survey, in cooperation with the Texas Commission on Environmental Quality, evaluated six published numerical groundwater flow models (all deterministic) that have been developed for the Edwards aquifer San Antonio segment or Barton Springs segment, or both. This report describes the models developed and evaluates each with respect to accessibility and ease of use, range of conditions simulated, accuracy of simulations, agreement with dye-tracer tests, and limitations of the models. These models are (1) GWSIM model of the San Antonio segment, a FORTRAN computer-model code that pre-dates the development of MODFLOW; (2) MODFLOW conduit-flow model of San Antonio and Barton Springs segments; (3) MODFLOW diffuse-flow model of San Antonio and Barton Springs segments; (4) MODFLOW Groundwater Availability Modeling [GAM] model of the Barton Springs segment; (5) MODFLOW recalibrated GAM model of the Barton Springs segment; and (6) MODFLOW-DCM (dual conductivity model) conduit model of the Barton Springs segment. The GWSIM model code is not commercially available, is limited in its application to the San Antonio segment of the Edwards aquifer, and lacks the ability of MODFLOW to easily incorporate newly developed processes and packages to better simulate hydrologic processes. MODFLOW is a widely used and tested code for numerical modeling of groundwater flow, is well documented, and is in the public domain. These attributes make MODFLOW a preferred code with regard to accessibility and ease of use. The MODFLOW conduit-flow model
Numeric and symbolic evaluation of the pfaffian of general skew-symmetric matrices
NASA Astrophysics Data System (ADS)
González-Ballestero, C.; Robledo, L. M.; Bertsch, G. F.
2011-10-01
Evaluation of pfaffians arises in a number of physics applications, and for some of them a direct method is preferable to using the determinantal formula. We discuss two methods for the numerical evaluation of pfaffians. The first is tridiagonalization based on Householder transformations. The main advantage of this method is its numerical stability, which makes the implementation of a pivoting strategy unnecessary. The second method is based on Aitken's block diagonalization formula. It yields an LU-like decomposition under congruence (similar to a Cholesky factorization) of arbitrary skew-symmetric matrices that is well suited to both numeric and symbolic evaluation of the pfaffian. Fortran subroutines (FORTRAN 77 and 90) implementing both methods are given. We also provide simple implementations in Python and Mathematica for testing purposes, or for exploratory studies of methods that make use of pfaffians.
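The Aitken block-diagonalization route can be sketched in NumPy (an illustrative translation of the idea, not the paper's subroutines): each step pivots a 2x2 block to the top, multiplies its off-diagonal entry into the pfaffian, and updates the trailing block by a Schur complement under congruence.

```python
import numpy as np

def pfaffian(A, tol=1e-12):
    """Pfaffian of a real skew-symmetric matrix via Aitken block
    diagonalization under congruence, with pivoting for stability."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    if n % 2:
        return 0.0                       # odd dimension: pfaffian vanishes
    pf = 1.0
    for k in range(0, n, 2):
        # bring the largest entry of column k (below the diagonal) to row k+1
        p = k + 1 + int(np.argmax(np.abs(A[k + 1:, k])))
        if p != k + 1:
            A[[k + 1, p], :] = A[[p, k + 1], :]
            A[:, [k + 1, p]] = A[:, [p, k + 1]]
            pf = -pf                     # a row-and-column swap flips the sign
        a = A[k, k + 1]
        if abs(a) < tol:
            return 0.0                   # numerically singular matrix
        pf *= a
        if k + 2 < n:
            b1, b2 = A[k, k + 2:].copy(), A[k + 1, k + 2:].copy()
            # Schur complement of the leading 2x2 block (stays skew-symmetric)
            A[k + 2:, k + 2:] += (np.outer(b2, b1) - np.outer(b1, b2)) / a
    return pf

# 4x4 check: pf = a12*a34 - a13*a24 + a14*a23 = 1*6 - 2*5 + 3*4 = 8
A4 = np.array([[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]])
```

The identity pf(A)^2 = det(A) gives a convenient independent check of any implementation.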
NASA Astrophysics Data System (ADS)
Ahmed, Mahmoud; Eslamian, Morteza
2015-07-01
Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.
Ryabinkin, Ilya G; Nagesh, Jayashree; Izmaylov, Artur F
2015-11-01
We have developed a numerical differentiation scheme that eliminates evaluation of overlap determinants in calculating the time-derivative nonadiabatic couplings (TDNACs). Evaluation of these determinants was the bottleneck in previous implementations of mixed quantum-classical methods using numerical differentiation of electronic wave functions in the Slater determinant representation. The central idea of our approach is, first, to reduce the analytic time derivatives of Slater determinants to time derivatives of molecular orbitals and then to apply a finite-difference formula. Benchmark calculations demonstrate the efficiency of the proposed scheme, showing impressive several-order-of-magnitude speedups of the TDNAC calculation step for midsize molecules. PMID:26538034
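The finite-difference idea can be illustrated on a two-state model with analytically known adiabatic states (a Landau-Zener-type Hamiltonian, not the paper's Slater-determinant machinery); the Hammes-Schiffer/Tully-style antisymmetrized difference of overlaps recovers the analytic coupling to second order in the time step:

```python
import numpy as np

def adiabatic_states(t, a, c):
    """Adiabatic eigenvectors of H(t) = [[a*t, c], [c, -a*t]], with a sign
    convention kept continuous in t via the mixing angle."""
    theta = 0.5 * np.arctan2(c, a * t)        # tan(2*theta) = c / (a*t)
    phi1 = np.array([np.cos(theta), np.sin(theta)])
    phi2 = np.array([-np.sin(theta), np.cos(theta)])
    return phi1, phi2

def tdc_hst(t, dt, a, c):
    """Finite-difference (Hammes-Schiffer/Tully-style) estimate of the
    time-derivative coupling <phi1|d/dt phi2> at the midpoint t + dt/2."""
    p1, p2 = adiabatic_states(t, a, c)
    q1, q2 = adiabatic_states(t + dt, a, c)
    return (p1 @ q2 - q1 @ p2) / (2.0 * dt)

def tdc_exact(t, a, c):
    """Analytic coupling for this model: <phi1|d/dt phi2> = -d(theta)/dt."""
    return 0.5 * a * c / ((a * t) ** 2 + c ** 2)
```

In an ab initio setting the eigenvector signs are not handed to you analytically, which is exactly why overlap bookkeeping (or norm-preserving interpolation) is needed.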
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1992-01-01
The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals of such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. Here, a convergent series solution form for the incomplete Airy functions is derived. Asymptotic expansions involving several terms were also developed and serve as large argument approximations. The combination of the series solution form with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
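The series-plus-asymptotics strategy is the same one used for the ordinary Airy function Ai(x), which makes a compact stand-in example (the incomplete Airy integrals themselves need the two-parameter expansions derived in the paper):

```python
import math

def ai_series(x, nterms=60):
    """Maclaurin series of the Airy function Ai(x): accurate near the origin
    but increasingly cancellation-prone as x grows."""
    c1 = 3.0 ** (-2.0 / 3.0) / math.gamma(2.0 / 3.0)   # Ai(0)
    c2 = 3.0 ** (-1.0 / 3.0) / math.gamma(1.0 / 3.0)   # -Ai'(0)
    a, b = 1.0, x                  # current terms of the two component series
    f, g = a, b
    for k in range(nterms):
        a *= x ** 3 / ((3 * k + 2) * (3 * k + 3))
        b *= x ** 3 / ((3 * k + 3) * (3 * k + 4))
        f += a
        g += b
    return c1 * f - c2 * g

def ai_asymptotic(x, nterms=4):
    """Large-x expansion Ai(x) ~ exp(-z) / (2 sqrt(pi) x^(1/4)) with
    z = (2/3) x^(3/2) and the standard u_k correction coefficients."""
    z = (2.0 / 3.0) * x ** 1.5
    u, total, sign = 1.0, 1.0, 1.0
    for k in range(1, nterms):
        u *= (6 * k - 5) * (6 * k - 1) / (72.0 * k)
        sign = -sign
        total += sign * u / z ** k
    return math.exp(-z) / (2.0 * math.sqrt(math.pi) * x ** 0.25) * total
```

Switching from the convergent series to the divergent-but-accurate asymptotic form near their overlap region is exactly the "efficient and accurate computation" the abstract describes.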
Zradziński, Patryk
2015-01-01
Due to the various physical mechanisms of interaction between a worker's body and the electromagnetic field at various frequencies, the principles of numerical simulations have been discussed for three areas of worker exposure: to low frequency magnetic field, to low and intermediate frequency electric field and to radiofrequency electromagnetic field. This paper presents the identified difficulties in applying numerical simulations to evaluate physical estimators of direct and indirect effects of exposure to electromagnetic fields at various frequencies. Exposure of workers operating a plastic sealer has been taken as an example scenario of electromagnetic field exposure at the workplace for discussion of those difficulties in applying numerical simulations. The following difficulties in reliable numerical simulations of workers' exposure to the electromagnetic field have been considered: workers' body models (posture, dimensions, shape and grounding conditions), working environment models (objects most influencing electromagnetic field distribution) and an analysis of parameters for which exposure limitations are specified in international guidelines and standards. PMID:26323781
A Framework for Evaluating Regional-Scale Numerical Photochemical Modeling Systems
This paper discusses the need for critically evaluating regional-scale (~ 200-2000 km) three dimensional numerical photochemical air quality modeling systems to establish a model's credibility in simulating the spatio-temporal features embedded in the observations. Because of li...
Asuero, A G; Navas, M J; Jiminez-Trillo, J L
1986-02-01
The spectrophotometric methods applicable to the numerical evaluation of acidity constants of monobasic acids are briefly reviewed. The equations are presented in a form suitable for easy calculation with a programmable pocket calculator. The aim of this paper is to fill a gap in the analytical education literature. PMID:18964064
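For a monobasic acid, the pocket-calculator form of the spectrophotometric evaluation is one line: with limiting absorbances measured for the fully protonated and fully deprotonated species, pKa = pH - log10((A - A_HA)/(A_A - A)). A sketch with generic symbol names:

```python
import math

def pka_from_absorbance(pH, A, A_HA, A_A):
    """Monobasic acid HA <-> A-: the observed absorbance A at a given pH is
    a mole-fraction-weighted mix of the limiting absorbances A_HA (fully
    protonated) and A_A (fully deprotonated), giving
        pKa = pH - log10((A - A_HA) / (A_A - A)).
    Generic symbols; any wavelength with A_HA != A_A works."""
    return pH - math.log10((A - A_HA) / (A_A - A))
```

Averaging the result over several pH points near the expected pKa is the usual way to damp measurement noise.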
Numerical evaluation of the three-dimensional searchlight problem in a half-space
Kornreich, D.E.; Ganapol, B.D.
1997-11-01
The linear Boltzmann equation for the transport of neutral particles is investigated with the objective of generating a benchmark-quality calculation for the three-dimensional searchlight problem in a semi-infinite medium. The derivation assumes stationarity, one energy group, and isotropic scattering. The scalar flux (both surface and interior) and the current at the surface are the quantities of interest. The source considered is a pencil-beam incident at a point on the surface of a semi-infinite medium. The scalar flux will have two-dimensional variation only if the beam is normal; otherwise, it is three-dimensional. The solutions are obtained by using Fourier and Laplace transform models. The transformed transport equation is formulated so that it can be related to a one-dimensional pseudo problem, thus providing some analytical leverage for the inversions. The numerical inversions use standard numerical techniques such as Gauss-Legendre quadrature, summation of infinite series, H-function iteration and evaluation, and Euler-Knopp acceleration. The numerical evaluations of the scalar flux and current at the surface are relatively simple, and the interior scalar flux is relatively difficult to calculate because of the embedded two-dimensional Fourier transform inversion, Laplace transform inversion, and H-function evaluation. Comparisons of these numerical solutions to results from the MCNP probabilistic code and the THREE-DANT discrete ordinates code are provided and help confirm proper operation of the analytical code.
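Of the numerical tools listed, Euler-Knopp acceleration is the least standard; its classic special case, the Euler transformation of an alternating series, is easy to sketch (applied here to a toy series, not to the transform inversions of the paper):

```python
def euler_transform(terms, nterms):
    """Euler transformation of the alternating series sum_n (-1)^n a_n:
    S = sum_k (-1)^k (Delta^k a)_0 / 2^(k+1), built from forward differences
    of the positive term sequence a_0, a_1, ..."""
    diffs = list(terms)              # row 0 of the forward-difference table
    s, sign = 0.0, 1.0
    for k in range(nterms):
        s += sign * diffs[0] / 2.0 ** (k + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        sign = -sign
    return s

# toy example: sum (-1)^n / (n+1) = ln 2, slowly convergent term by term
terms = [1.0 / (n + 1) for n in range(30)]
est = euler_transform(terms, 25)
```

The Euler-Knopp family generalizes this with a tunable parameter, but the mechanism, trading slowly decaying terms for rapidly decaying differences, is the same.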
Generalization Evaluation of Machine Learning Numerical Observers for Image Quality Assessment.
Kalayeh, Mahdi M; Marin, Thibault; Brankov, Jovan G
2013-06-01
In this paper, we present two new numerical observers (NO) based on machine learning for image quality assessment. The proposed NOs aim to predict human observer performance in a cardiac perfusion-defect detection task for single-photon emission computed tomography (SPECT) images. Human observer (HumO) studies are now considered to be the gold standard for task-based evaluation of medical images. However such studies are impractical for use in early stages of development for imaging devices and algorithms, because they require extensive involvement of trained human observers who must evaluate a large number of images. To address this problem, numerical observers (also called model observers) have been developed as a surrogate for human observers. The channelized Hotelling observer (CHO), with or without internal noise model, is currently the most widely used NO of this kind. In our previous work we argued that development of a NO model to predict human observers' performance can be viewed as a machine learning (or system identification) problem. This consideration led us to develop a channelized support vector machine (CSVM) observer, a kernel-based regression model that greatly outperformed the popular and widely used CHO. This was especially evident when the numerical observers were evaluated in terms of generalization performance. To evaluate generalization we used a typical situation for the practical use of a numerical observer: after optimizing the NO (which for a CHO might consist of adjusting the internal noise model) based upon a broad set of reconstructed images, we tested it on a broad (but different) set of images obtained by a different reconstruction method. In this manuscript we aim to evaluate two new regression models that achieve accuracy higher than the CHO and comparable to our earlier CSVM method, while dramatically reducing model complexity and computation time. The new models are defined in a Bayesian machine-learning framework: a channelized
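A channelized Hotelling observer of the kind used as the baseline above fits in a few lines: project images onto a small set of channels, estimate the channel-space mean difference and pooled covariance, and form the Hotelling template. The 1-D "images", Gaussian channels, and dimensions below are illustrative, not those of the SPECT study:

```python
import numpy as np

rng = np.random.default_rng(0)

npix, ntrain = 64, 500
x = np.arange(npix)
signal = np.exp(-0.5 * ((x - 32) / 3.0) ** 2)          # the "defect" profile
widths = [2.0, 4.0, 8.0, 16.0]                          # channel scales
channels = np.stack([np.exp(-0.5 * ((x - 32) / w) ** 2) for w in widths])

def channelize(images):
    """Project (n, npix) images onto the channel responses -> (n, nchan)."""
    return images @ channels.T

# training data: noise-only and signal-plus-noise classes
noise = rng.standard_normal((2 * ntrain, npix))
absent = channelize(noise[:ntrain])
present = channelize(noise[ntrain:] + signal)

s_mean = present.mean(axis=0) - absent.mean(axis=0)
cov = 0.5 * (np.cov(absent.T) + np.cov(present.T))      # pooled covariance
w = np.linalg.solve(cov, s_mean)                        # Hotelling template

t_absent, t_present = absent @ w, present @ w
snr = (t_present.mean() - t_absent.mean()) / np.sqrt(
    0.5 * (t_present.var() + t_absent.var()))
auc = float((t_present[:, None] > t_absent[None, :]).mean())
```

The machine-learning observers in the paper replace the linear template w with a learned (e.g. kernel-regression) mapping, which is where the generalization question arises.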
Numerical criteria for the evaluation of ab initio predictions of protein structure.
Zemla, A; Venclovas, C; Reinhardt, A; Fidelis, K; Hubbard, T J
1997-01-01
As part of the CASP2 protein structure prediction experiment, a set of numerical criteria were defined for the evaluation of "ab initio" predictions. The evaluation package comprises a series of electronic submission formats, a submission validator, evaluation software, and a series of scripts to summarize the results for the CASP2 meeting and for presentation via the World Wide Web (WWW). The evaluation package is accessible for use on new predictions via WWW so that results can be compared to those submitted to CASP2. With further input from the community, the evaluation criteria are expected to evolve into a comprehensive set of measures capturing the overall quality of a prediction as well as critical detail essential for further development of prediction methods. We discuss present measures, limitations of the current criteria, and possible improvements. PMID:9485506
NASA Technical Reports Server (NTRS)
Weston, K. C.; Reynolds, A. C., Jr.; Alikhan, A.; Drago, D. W.
1974-01-01
Numerical solutions for radiative transport in a class of anisotropically scattering materials are presented. Conditions for convergence and divergence of the iterative method are given and supported by computed results. The relation of two flux theories to the equation of radiative transfer for isotropic scattering is discussed. The adequacy of the two flux approach for the reflectance, radiative flux and radiative flux divergence of highly scattering media is evaluated with respect to solutions of the radiative transfer equation.
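For a semi-infinite, highly scattering medium, two-flux theory has the familiar Kubelka-Munk closed form for diffuse reflectance, which is the kind of quantity benchmarked against full transfer solutions above. A minimal sketch of the standard relations (not necessarily the authors' exact two-flux variant):

```python
import math

def km_reflectance(K, S):
    """Diffuse reflectance of a semi-infinite layer in two-flux
    (Kubelka-Munk) theory, with absorption K and scattering S."""
    r = K / S
    return 1.0 + r - math.sqrt(r * r + 2.0 * r)

def km_ratio(R_inf):
    """Inverse relation: K/S = (1 - R_inf)^2 / (2 R_inf)."""
    return (1.0 - R_inf) ** 2 / (2.0 * R_inf)

R = km_reflectance(0.1, 1.0)    # weak absorber, strong scatterer
```

Comparing this closed form against the full radiative transfer solution over a range of K/S is precisely how the adequacy of the two-flux approach is judged.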
Selection of a numerical unsaturated flow code for tilted capillary barrier performance evaluation
Webb, S.W.
1996-09-01
Capillary barriers consisting of tilted fine-over-coarse layers have been suggested as landfill covers as a means to divert water infiltration away from sensitive underground regions under unsaturated flow conditions, especially for arid and semi-arid regions. Typically, the HELP code is used to evaluate landfill cover performance and design. Unfortunately, due to its simplified treatment of unsaturated flow and its essentially one-dimensional nature, HELP is not adequate to treat the complex multidimensional unsaturated flow processes occurring in a tilted capillary barrier. In order to develop the necessary mechanistic code for the performance evaluation of tilted capillary barriers, an efficient and comprehensive unsaturated flow code needs to be selected for further use and modification. The present study evaluates a number of candidate mechanistic unsaturated flow codes for application to tilted capillary barriers. Factors considered included unsaturated flow modeling, inclusion of evapotranspiration, nodalization flexibility, ease of modification, and numerical efficiency. A number of unsaturated flow codes are available for use with different features and assumptions. The codes chosen for this evaluation are TOUGH2, FEHM, and SWMS_2D. All three codes chosen for this evaluation successfully simulated the capillary barrier problem chosen for the code comparison, although FEHM used a reduced grid. The numerical results are a strong function of the numerical weighting scheme. For the same weighting scheme, similar results were obtained from the various codes. Based on the CPU time of the various codes and the code capabilities, the TOUGH2 code has been selected as the appropriate code for tilted capillary barrier performance evaluation, possibly in conjunction with the infiltration, runoff, and evapotranspiration models of HELP. 44 refs.
ERIC Educational Resources Information Center
Au, Wayne
2011-01-01
Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…
Giannaros, Theodore M; Melas, Dimitrios; Matzarakis, Andreas
2015-02-01
The evaluation of thermal bioclimate can be conducted employing either observational or modeling techniques. The advantage of the numerical modeling approach lies in that it can be applied in areas where there is lack of observational data, providing a detailed insight on the prevailing thermal bioclimatic conditions. However, this approach should be exploited carefully since model simulations can be frequently biased. The aim of this paper is to examine the suitability of a mesoscale atmospheric model in terms of evaluating thermal bioclimate. For this, the numerical weather prediction Weather Research and Forecasting (WRF) model and the radiation RayMan model are employed for simulating thermal bioclimatic conditions in Greece during a 1-year time period. The physiologically equivalent temperature (PET) is selected as an index for evaluating thermal bioclimate, while synoptic weather station data are exploited for verifying model performance. The results of the present study shed light on the strengths and weaknesses of the numerical modeling approach. Overall, it is shown that model simulations can provide a useful alternative tool for studying thermal bioclimate. Specifically for Greece, the WRF/RayMan modeling system was found to perform adequately well in reproducing the spatial and temporal variations of PET. PMID:24771280
Zhang, Jing; Tian, Jiabin; Ta, Na; Huang, Xinsheng; Rao, Zhushi
2016-08-01
The finite element method was employed in this study to analyze the change in performance of implantable hearing devices when the viscoelasticity of soft tissues is taken into account. An integrated finite element model of the human ear, including the external ear, middle ear, and inner ear, was first developed via reverse engineering and analyzed by acoustic-structure-fluid coupling. Viscoelastic properties of soft tissues in the middle ear were taken into consideration in this model. The model-derived dynamic responses, including middle ear and cochlear functions, showed better agreement with experimental data at frequencies above 3000 Hz than a Rayleigh-type damping model. On this basis, a coupled finite element model consisting of the human ear and a piezoelectric actuator attached to the long process of the incus was further constructed. Based on electromechanical coupling analysis, the equivalent sound pressure and power consumption of the actuator corresponding to viscoelasticity and Rayleigh damping were calculated using this model. The analytical results showed that the implant performance of the actuator evaluated using a finite element model with viscoelastic properties gives a lower output above about 3 kHz than does the Rayleigh damping model. A finite element model considering viscoelastic properties is thus more accurate for numerically evaluating implantable hearing devices. PMID:27276992
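One plausible way to see why the two damping treatments diverge at high frequency is the modal damping ratio implied by Rayleigh damping C = alpha*M + beta*K, namely zeta(omega) = alpha/(2*omega) + beta*omega/2: fitted at two anchor frequencies, it cannot follow a viscoelastic material's behavior elsewhere. Illustrative numbers below, not the paper's coefficients:

```python
import math

def rayleigh_coeffs(f1, f2, zeta):
    """Mass/stiffness coefficients alpha, beta chosen so the modal damping
    ratio equals zeta exactly at the two anchor frequencies f1, f2 (Hz)."""
    w1, w2 = 2.0 * math.pi * f1, 2.0 * math.pi * f2
    alpha = 2.0 * zeta * w1 * w2 / (w1 + w2)
    beta = 2.0 * zeta / (w1 + w2)
    return alpha, beta

def damping_ratio(f, alpha, beta):
    """zeta(omega) = alpha/(2*omega) + beta*omega/2 for C = alpha*M + beta*K."""
    w = 2.0 * math.pi * f
    return alpha / (2.0 * w) + beta * w / 2.0

# anchor the fit at 500 Hz and 3 kHz (illustrative choices)
alpha, beta = rayleigh_coeffs(500.0, 3000.0, 0.05)
```

Beyond the upper anchor the stiffness-proportional term grows linearly with frequency, so a Rayleigh model progressively overdamps the response there.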
Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui
2016-06-14
Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, because it cannot precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K+ and Na+ permeating through the gramicidin A channel are characterized using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K+ through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K+ and Na+ passing through the gA channel are much closer to the experimental results than those from any classical MD simulations, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823
Numerical evaluation of welded tube wall profiles from scanned X-ray line source data
NASA Astrophysics Data System (ADS)
Lunin, V.; Podobedov, D.; Ewert, U.; Redmer, B.
2001-04-01
This investigation presents an iterative algorithm for inversion of X-ray line scanning data from a multi-angle inspection. The main focus is the development of a robust algorithm that can successfully evaluate the influence of local surface geometry in welding regions. The idea is to repeatedly solve the forward problem with iterated profile parameters until the solution agrees with the measurement. For accurate parameterization of a particular inner crack, this procedure can be combined with an analysis of the residual image obtained by subtracting the projection image caused by the reconstructed surface wall profiles from the original data.
Ridouane, E. H.; Bianchi, M.
2011-11-01
This study describes detailed three-dimensional computational fluid dynamics modeling to evaluate the thermal performance of uninsulated wall assemblies, accounting for conduction through framing, convection, and radiation. The model allows material properties to vary with temperature. Parameters varied in the study include ambient outdoor temperature and cavity surface emissivity. Understanding the thermal performance of uninsulated wall cavities is essential for accurate prediction of energy use in residential buildings. The results can serve as input to building energy simulation tools for modeling the temperature-dependent energy performance of homes with uninsulated walls.
Combined experimental and numerical evaluation of a prototype nano-PCM enhanced wallboard
Biswas, Kaushik; Lu, Jue; Soroushian, Parviz; Shrestha, Som S
2014-01-01
In the United States, forty-eight (48) percent of the residential end-use energy consumption is spent on space heating and air conditioning. Reducing envelope-generated heating and cooling loads through application of phase change material (PCM)-enhanced building envelopes can facilitate maximizing the energy efficiency of buildings. Combined experimental testing and numerical modeling of PCM-enhanced envelope components are two important aspects of the evaluation of their energy benefits. An innovative phase change material (nano-PCM) was developed with PCM encapsulated with expanded graphite (interconnected) nanosheets, which is highly conductive for enhanced thermal storage and energy distribution, and is shape-stable for convenient incorporation into lightweight building components. A wall with cellulose cavity insulation and prototype PCM-enhanced interior wallboards was built and tested in a natural exposure test (NET) facility in a hot-humid climate location. The test wall contained PCM wallboards and regular gypsum wallboard, for a side-by-side annual comparison study. Further, numerical modeling of the walls containing the nano-PCM wallboard was performed to determine its actual impact on wall-generated heating and cooling loads. The model was first validated using experimental data, and then used for annual simulations using Typical Meteorological Year (TMY3) weather data. This article presents the measured performance and numerical analysis evaluating the energy-saving potential of the nano-PCM-enhanced wallboard.
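A common way to fold a PCM into a finite-difference or whole-building wall model is the apparent-heat-capacity method: the latent heat is spread over the melting range as an effective cp(T). A hedged sketch with invented property values, not the nano-PCM's measured data:

```python
import numpy as np

def apparent_cp(T, cp_solid=1900.0, cp_liquid=2100.0,
                latent=180e3, T_melt=25.0, half_width=1.0):
    """Effective heat capacity [J/(kg K)]: sensible part plus the latent
    heat distributed as a normalized Gaussian pulse over the melting range.
    All property values here are invented placeholders, not DSC data."""
    T = np.asarray(T, dtype=float)
    sensible = np.where(T < T_melt, cp_solid, cp_liquid)
    pulse = latent * np.exp(-0.5 * ((T - T_melt) / half_width) ** 2) / (
        half_width * np.sqrt(2.0 * np.pi))
    return sensible + pulse

# sanity check: integrating cp(T) across the melting range recovers the
# sensible heat plus (nearly all of) the latent heat
T = np.linspace(15.0, 35.0, 20001)
cp = apparent_cp(T)
enthalpy = float(np.sum(0.5 * (cp[1:] + cp[:-1]) * np.diff(T)))
```

In a validated model the cp(T) curve would come from DSC measurements of the actual PCM, and hysteresis between melting and freezing may require separate curves.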
Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H
2016-09-01
Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope labeling, and liquid chromatography (LC) with electrospray ionization (ESI)-tandem mass spectrometry to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R² = 0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ∼10%, respectively. The derivative was stable for >36 h at 5 °C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n = 26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p < 0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618
Evaluation of the Performance of the Hybrid Lattice Boltzmann Based Numerical Flux
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.
2016-06-01
It is well known that the numerical scheme is a key factor in the stability and accuracy of a Navier-Stokes solver. Recently, a new hybrid lattice Boltzmann flux scheme (HLBFS) was developed by Shu's group. It combines two different LBFS schemes through a switch function, and it solves the Boltzmann equation instead of the Euler equation. In this article, the main objective is to evaluate the ability of this HLBFS scheme using our in-house cell-centered, hybrid-mesh-based Navier-Stokes code. Its performance is examined on several widely used benchmark test cases. Comparisons between calculated and experimental results show that the scheme can capture shock waves as well as resolve the boundary layer.
Numerical evaluation of two-center integrals over Slater type orbitals
NASA Astrophysics Data System (ADS)
Kurt, S. A.; Yükçü, N.
2016-03-01
Slater-type orbitals (STOs), which are one type of exponential-type orbitals (ETOs), are commonly used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of the translation method for STOs and some auxiliary functions developed by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in tables. Finally, we compare our numerical results with other known literature results, and further details of the evaluation method are discussed.
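The two-center overlap integral described above lends itself to a quick numerical cross-check. The sketch below is an illustrative aside, not the authors' Mathematica implementation: it integrates the overlap of two 1s STOs with equal exponents on a cylindrical grid and compares the result with the well-known closed-form expression. The exponent ζ and internuclear separation R are arbitrary illustrative choices.

```python
import numpy as np

def sto_1s(zeta, r):
    """Normalized 1s Slater-type orbital: sqrt(zeta^3/pi) * exp(-zeta*r)."""
    return np.sqrt(zeta**3 / np.pi) * np.exp(-zeta * r)

def overlap_numeric(zeta, R, rho_max=12.0, z_pad=10.0, h=0.02):
    """Two-center overlap <1s_A|1s_B> by midpoint quadrature in cylindrical
    coordinates (rho, z); the centers sit on the z-axis at 0 and R."""
    rho = np.arange(h / 2, rho_max, h)          # midpoints avoid rho = 0
    z = np.arange(-z_pad, R + z_pad, h)
    P, Z = np.meshgrid(rho, z, indexing="ij")
    r_a = np.sqrt(P**2 + Z**2)
    r_b = np.sqrt(P**2 + (Z - R)**2)
    integrand = sto_1s(zeta, r_a) * sto_1s(zeta, r_b) * P  # P is the Jacobian
    return 2.0 * np.pi * integrand.sum() * h * h

def overlap_exact(zeta, R):
    """Closed form for equal exponents: e^(-t) (1 + t + t^2/3), t = zeta*R."""
    t = zeta * R
    return np.exp(-t) * (1.0 + t + t**2 / 3.0)
```

For ζ = 1 and R = 2 bohr the quadrature agrees with the closed form to well within a percent, which is a useful smoke test before tackling the harder hybrid and Coulomb integrals.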
Yonetani, Yusuke; Nitta, Kouichi; Matoba, Osamu
2010-02-01
We numerically evaluate the effect of photopolymer shrinkage on the bit error rate and signal-to-noise ratio in a reflection-type holographic data storage system with angular multiplexing. In the evaluation, we use a simple model in which the material is divided into layered structures and the shrinkage rate is proportional to the intensity in each layer. We demonstrate the effectiveness of the proposed model using experimental results from plane-wave recordings in both a transmission-type hologram and a reflection-type one. Several shrinkage rates are used to evaluate the characteristics of angular multiplexing in the reflection-type holographic memory. PMID:20119021
NASA Astrophysics Data System (ADS)
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of, and production from, turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximizing field value and optimizing production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, an indirect measure of thickness, is critical for accurate estimation of original oil in place (OOIP). By using a combination of conventional and unconventional log analysis techniques, we obtain three different results for the reservoir intervals studied: estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, recovering an estimated 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
Numerical Study and Performance Evaluation for Pulse Detonation Engine with Exhaust Nozzle
NASA Astrophysics Data System (ADS)
Kimura, Yuichiro; Tsuboi, Nobuyuki; Hayashi, A. Koichi; Yamada, Eisuke
This paper presents a propulsive performance evaluation of the H2/air Pulse Detonation Engine (PDE) with a converging-diverging exhaust nozzle, using system-level modeling and multi-cycle numerical simulations. The study solves the two-dimensional, axisymmetric compressible Euler equations with a detailed chemical reaction model. First, the single-shot propulsive performance of a simplified PDE without an exhaust nozzle is evaluated to show the validity of the numerical and performance-evaluation methods. The influences of the initial conditions, ignition energy, grid resolution, and scale effects on the propulsive performance are studied with the multi-cycle simulations. The present results are compared with those calculated by Ma et al. and Harris et al.; the difference between their results and the present simulations is approximately 2-3%, because their chemical reactions use a one-step model with a single γ. The effects of the specific heat ratio should be estimated for various nozzle configurations and flight conditions.
The numerical evaluation of the maximum-likelihood estimate of a subset of mixture proportions
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
Necessary and sufficient conditions are given for a maximum-likelihood estimate of a subset of mixture proportions. From these conditions, likelihood equations satisfied by the maximum-likelihood estimate are derived, and a successive-approximations procedure suggested by these equations is discussed for numerically evaluating the maximum-likelihood estimate. It is shown that, with probability one for large samples, this procedure converges locally to the maximum-likelihood estimate whenever a certain step size lies between zero and two. Furthermore, optimal rates of local convergence are obtained for a step size which is bounded below by a number between one and two.
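A successive-approximations iteration of this kind can be sketched as a relaxed EM-type fixed-point update. The code below is a hedged illustration, not the paper's exact scheme: the component densities are assumed to be known Gaussians, `omega` plays the role of the step size in (0, 2), and each update moves the proportions toward the average of the normalized responsibilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def estimate_proportions(x, components, omega=1.0, n_iter=200):
    """Relaxed successive-approximations iteration for mixture proportions
    with known component densities; omega is the step size in (0, 2)."""
    m = len(components)
    pi = np.full(m, 1.0 / m)
    F = np.column_stack([normal_pdf(x, mu, s) for mu, s in components])  # n x m
    for _ in range(n_iter):
        resp = F * pi                               # unnormalized responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        pi = (1 - omega) * pi + omega * resp.mean(axis=0)
        pi = np.clip(pi, 1e-12, None)
        pi /= pi.sum()
    return pi

# Synthetic data: a 70/30 mixture of N(0,1) and N(4,1)
x = np.concatenate([rng.normal(0, 1, 7000), rng.normal(4, 1, 3000)])
pi_hat = estimate_proportions(x, [(0.0, 1.0), (4.0, 1.0)])
```

With `omega = 1` this reduces to the familiar EM update for proportions; values above 1 over-relax the step, which is the regime in which the paper's optimal local convergence rates arise.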
NASA Astrophysics Data System (ADS)
Ishikawa, Atsushi; Nakai, Hiromi
2016-04-01
The Gibbs free energy of hydration of a proton and the standard hydrogen electrode potential were evaluated using high-level quantum chemical calculations. The solvent effect was included using the cluster-continuum model, which treats short-range effects by quantum chemical calculations of proton-water complexes and long-range effects by a conductor-like polarizable continuum model. The harmonic solvation model (HSM) was employed to estimate the enthalpy and entropy contributions due to nuclear motions of the clusters by including the cavity-cluster interactions. Compared to the commonly used ideal gas model, the HSM treatment significantly improved the entropy contribution, showing systematic convergence toward the experimental data.
NASA Astrophysics Data System (ADS)
Lin, C.; Gillespie, J.; Schuder, M. D.; Duberstein, W.; Beverland, I. J.; Heal, M. R.
2015-01-01
Low-power, and relatively low-cost, gas sensors have potential to improve understanding of intra-urban air pollution variation by enabling data capture over wider networks than is possible with 'traditional' reference analysers. We evaluated an Aeroqual Ltd. Series 500 semiconducting metal oxide O3 and an electrochemical NO2 sensor against UK national network reference analysers for more than 2 months at an urban background site in central Edinburgh. Hourly-average Aeroqual O3 sensor observations were highly correlated (R2 = 0.91) and of similar magnitude to observations from the UV-absorption reference O3 analyser. The Aeroqual NO2 sensor observations correlated poorly with the reference chemiluminescence NO2 analyser (R2 = 0.02), but the deviations between Aeroqual and reference analyser values ([NO2]Aeroq - [NO2]ref) were highly significantly correlated with concurrent Aeroqual O3 sensor observations [O3]Aeroq. This permitted effective linear calibration of the [NO2]Aeroq data, evaluated using 'hold out' subsets of the data (R2 ≥ 0.85). These field observations under temperate environmental conditions suggest that the Aeroqual Series 500 NO2 and O3 monitors have good potential to be useful ambient air monitoring instruments in urban environments provided that the O3 and NO2 gas sensors are calibrated against reference analysers and deployed in parallel.
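The linear calibration of the NO2 sensor against concurrent O3 sensor readings amounts to an ordinary least-squares fit evaluated on a held-out subset. The sketch below uses synthetic data that mimic the reported cross-sensitivity; the coefficients, noise levels, and sample sizes are assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly data mimicking the reported behaviour: the raw NO2 sensor
# responds to both NO2 and O3 (cross-sensitivity), plus noise. All numbers
# below are illustrative assumptions, not values from the study.
n = 1500
no2_ref = rng.uniform(5, 60, n)          # stand-in for the reference analyser
o3_true = rng.uniform(10, 80, n)
no2_raw = 0.6 * no2_ref + 0.8 * o3_true + rng.normal(0, 2, n)
o3_sensor = o3_true + rng.normal(0, 2, n)

def fit_calibration(no2_raw, o3_sensor, no2_ref):
    """Least-squares fit of NO2_ref ~ a + b*NO2_raw + c*O3_sensor."""
    X = np.column_stack([np.ones_like(no2_raw), no2_raw, o3_sensor])
    coef, *_ = np.linalg.lstsq(X, no2_ref, rcond=None)
    return coef

# Train on the first two-thirds, evaluate on the held-out third
split = n * 2 // 3
coef = fit_calibration(no2_raw[:split], o3_sensor[:split], no2_ref[:split])
pred = coef[0] + coef[1] * no2_raw[split:] + coef[2] * o3_sensor[split:]
ss_res = np.sum((no2_ref[split:] - pred) ** 2)
ss_tot = np.sum((no2_ref[split:] - no2_ref[split:].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

The fitted O3 coefficient comes out negative, as expected when the correction removes an additive O3 cross-response from the raw NO2 signal.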
NASA Astrophysics Data System (ADS)
Jahanshahi, Mohammad R.; Masri, Sami F.
2013-03-01
In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment. This approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is to use a robotic system that could perform autonomous crack detection and quantification. To reach this goal, several image-based crack detection approaches have been developed; however, the crack thickness quantification, which is an essential element for a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach which was previously developed by the authors. The proposed approach in this study utilizes depth perception to quantify crack thickness and, as opposed to most previous studies, needs no scale attachment to the region under inspection, which makes this approach ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that the new proposed approach outperforms the previously developed one.
NASA Astrophysics Data System (ADS)
Wen, Xiulan; Zhao, Yibing; Wang, Dongxia; Zhu, Xiaochu; Xue, Xiaoqiang
2013-03-01
Although significant progress has been made recently in the precision machining of free-form surfaces, inspection of such surfaces remains a difficult problem. To address the lack of specific standards for the verification of free-form surface profiles, profile parameters for free-form surfaces are proposed by reference to ISO standards on form tolerances, taking into account their complexity and non-rotational symmetry. A non-uniform rational basis spline (NURBS) formulation for describing free-form surfaces is presented. Crucial issues in surface inspection and profile error verification are localization between the design coordinate system (DCS) and measurement coordinate system (MCS), and searching for the closest points on the design model corresponding to the measured points. A quasi-particle swarm optimization (QPSO) is proposed to search for the transformation parameters that implement localization between the DCS and MCS. A surface subdivision method, which searches in a recursively reduced range of the parameters u and v of the NURBS design model, is developed to find the closest points. To verify the effectiveness of the proposed methods, a design model is generated by NURBS and the measurement data of a simulation example are generated by transforming the design model to an arbitrary position and orientation; parts are then machined based on the design model and measured on a CMM. The profile errors of the simulation example and the actual parts are calculated by the proposed method. The results verify that the evaluation precision of the free-form surface profile error obtained by the proposed method is 10%-22% higher than that of the CMM software, addressing the lower precision of such software in profile error evaluation of free-form surfaces.
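The closest-point search by recursive reduction of the (u, v) parameter range can be sketched as follows. A simple analytic patch stands in for the NURBS design model, and the grid size and number of refinement levels are illustrative assumptions.

```python
import numpy as np

def surface(u, v):
    """Stand-in parametric patch (a NURBS evaluator would be used in practice)."""
    return np.stack([u, v, 0.3 * np.sin(2 * np.pi * u) * np.cos(np.pi * v)], axis=-1)

def closest_point(p, n_grid=16, n_levels=8):
    """Find the (u, v) of the surface point closest to p by recursively
    subdividing a shrinking (u, v) search window."""
    lo = np.array([0.0, 0.0])
    hi = np.array([1.0, 1.0])
    for _ in range(n_levels):
        u = np.linspace(lo[0], hi[0], n_grid)
        v = np.linspace(lo[1], hi[1], n_grid)
        U, V = np.meshgrid(u, v, indexing="ij")
        d2 = np.sum((surface(U, V) - p) ** 2, axis=-1)
        i, j = np.unravel_index(np.argmin(d2), d2.shape)
        span = (hi - lo) / (n_grid - 1)
        center = np.array([u[i], v[j]])
        lo = np.clip(center - span, 0.0, 1.0)   # shrink window around best cell
        hi = np.clip(center + span, 0.0, 1.0)
    return center, np.sqrt(d2[i, j])

p = surface(0.37, 0.62)          # a known on-surface point for checking
uv, dist = closest_point(p)
```

Each refinement keeps one grid cell on either side of the current best node, so the parameter window shrinks geometrically and the distance to the design surface converges without needing derivatives of the patch.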
Analytical expression for gas-particle equilibration time scale and its numerical evaluation
NASA Astrophysics Data System (ADS)
Anttila, Tatu; Lehtinen, Kari E. J.; Dal Maso, Miikka
2016-05-01
We have derived a time scale τeq that describes the characteristic time for a single compound i with a saturation vapour concentration Ceff,i to reach thermodynamic equilibrium between the gas and particle phases. The equilibration process was assumed to take place via gas-phase diffusion and absorption into a liquid-like phase present in the particles. It was further shown that τeq combines two previously derived and often-applied time scales, τa and τs, that account for the changes in the gas- and particle-phase concentrations of i resulting from the equilibration, respectively. The validity of τeq was tested by comparing its predictions against results from a numerical model that explicitly simulates the transfer of i between the gas and particle phases. By conducting a large number of simulations in which the values of the key input parameters were varied randomly, it was found that τeq yields highly accurate results when i is a semi-volatile compound, in the sense that the ratio μ of the total (gas- and particle-phase) concentration of i to its saturation vapour concentration is below unity. On the other hand, the comparison of analytical and numerical time scales revealed that using τa or τs alone to calculate the equilibration time scale may lead to considerable errors. It was further shown that τeq tends to overpredict the equilibration time when i behaves as a non-volatile compound, in the sense that μ > 1. Despite its simplicity, the time scale derived here has useful applications. First, it can be used to assess whether semi-volatile compounds reach thermodynamic equilibrium during dynamic experiments that involve changes in compound volatility. Second, the time scale can be used in modeling of secondary organic aerosol (SOA) to check whether SOA-forming compounds equilibrate over a certain time interval.
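The kind of explicit simulation used to test τeq can be illustrated with a toy absorptive-partitioning model. This is a hedged sketch, not the paper's derivation: the Raoult-type equilibrium expression, the first-order transfer coefficient `k`, and the absorbing mass `m_abs` are illustrative assumptions. It integrates gas-particle exchange for one compound and reads off an e-folding equilibration time, showing the slower equilibration when μ = c_tot/c_eff exceeds unity.

```python
import numpy as np

def equilibration_efold_time(c_tot, c_eff, m_abs=1.0, k=1.0, dt=1e-3, t_max=30.0):
    """Integrate absorptive gas-particle partitioning of one compound and
    return the e-folding time of the gas-phase approach to equilibrium.
    The equilibrium vapour over the particle is modelled Raoult-like as
    c_eff * c_p / (c_p + m_abs); k is the gas-to-particle transfer rate.
    All quantities are in arbitrary consistent units."""
    n = int(t_max / dt)
    c_g, c_p = c_tot, 0.0
    traj = np.empty(n)
    for i in range(n):
        c_eq = c_eff * c_p / (c_p + m_abs)      # equilibrium vapour concentration
        flux = k * (c_g - c_eq) * dt            # explicit Euler step
        c_g -= flux
        c_p += flux
        traj[i] = c_g
    gap = np.abs(traj - traj[-1])               # distance from final (equilibrium) state
    return dt * np.argmax(gap < gap[0] / np.e)  # first time the gap has e-folded

tau_semi = equilibration_efold_time(c_tot=0.1, c_eff=1.0)   # mu = 0.1 (semi-volatile)
tau_low = equilibration_efold_time(c_tot=10.0, c_eff=1.0)   # mu = 10 (low volatility)
```

In this toy setup the low-volatility case (μ > 1) equilibrates more slowly than the semi-volatile case, consistent with the qualitative distinction drawn in the abstract.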
An Experimental-Numerical Evaluation of Thermal Contact Conductance in Fin-Tube Heat Exchangers
NASA Astrophysics Data System (ADS)
Kim, Chang Nyung; Jeong, Jin; Youn, Baek; Kil, Seong Ho
The contact between the fin collar and tube surface of a fin-tube heat exchanger is secured through mechanical expansion of the tubes. However, the characteristics of heat transfer through the interfaces between the tubes and fins have not been clearly understood, because the interfaces consist partly of metal-to-metal contact and partly of air. The objective of the present study is to develop a new experimental-numerical method for estimating the thermal contact resistance between the fin collar and tube surface, and to evaluate the factors affecting the thermal contact resistance in a fin-tube heat exchanger. In this study, the heat transfer characteristics of actual heat exchanger assemblies were tested in a vacuum chamber using water as the internal fluid, and a finite-difference numerical scheme was employed to reduce the experimental data for the evaluation of the thermal contact conductance. The study considers fin-tube heat exchangers with a tube diameter of 7 mm and different tube expansion ratios, fin spacings, and fin types. The results show, with an appropriate error analysis, that these parameters, as well as hydrophilic fin coating, notably affect the thermal contact conductance. It was found that the thermal contact resistance accounts for a fairly large portion of the total thermal resistance in a fin-tube heat exchanger, so careful consideration is needed in the manufacturing process of heat exchangers to reduce it.
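The data-reduction idea, isolating the contact resistance from a measured overall conductance, can be sketched as a series resistance-network subtraction. All numerical values below are hypothetical placeholders, not measurements from the study, and the real reduction used a finite-difference scheme rather than this lumped subtraction.

```python
import math

def tube_wall_resistance(d_i, d_o, length, k_tube):
    """Conduction resistance of a cylindrical tube wall, ln(d_o/d_i)/(2*pi*k*L)."""
    return math.log(d_o / d_i) / (2 * math.pi * k_tube * length)

# Hypothetical illustrative values (K/W unless noted):
UA_measured = 45.0            # W/K, overall water-to-air conductance (assumed)
R_water = 1.0e-3              # internal (water-side) convection (assumed)
R_wall = tube_wall_resistance(0.0066, 0.007, 10.0, 385.0)  # 7 mm copper tube
R_air = 18.0e-3               # fin-and-air-side resistance (assumed)

# Whatever the computed resistances cannot explain is attributed to the contact
R_total = 1.0 / UA_measured
R_contact = R_total - R_water - R_wall - R_air
contact_fraction = R_contact / R_total
```

Even with these placeholder numbers the contact term is a double-digit percentage of the total, which mirrors the paper's conclusion that contact resistance is a substantial contributor.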
Song, Kwang Hyun; Snyder, Karen Chin; Kim, Jinkoo; Li, Haisen; Ning, Wen; Rusnac, Robert; Jackson, Paul; Gordon, James; Siddiqui, Salim M; Chetty, Indrin J
2016-01-01
2.5 MV electronic portal imaging, available on Varian TrueBeam machines, was characterized using various phantoms in this study. Its low-contrast detectability, spatial resolution, and contrast-to-noise ratio (CNR) were compared with those of conventional 6 MV and kV planar imaging. Scatter effect in large patient body was simulated by adding solid water slabs along the beam path. The 2.5 MV imaging mode was also evaluated using clinically acquired images from 24 patients for the sites of brain, head and neck, lung, and abdomen. With respect to 6 MV, the 2.5 MV achieved higher contrast and preserved sharpness on bony structures with only half of the imaging dose. The quality of 2.5 MV imaging was comparable to that of kV imaging when the lateral separation of patient was greater than 38 cm, while the kV image quality degraded rapidly as patient separation increased. Based on the results of patient images, 2.5 MV imaging was better for cranial and extracranial SRS than the 6 MV imaging. PMID:27455505
Ratcliff, Laura E; Grisanti, Luca; Genovese, Luigi; Deutsch, Thierry; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang; Beljonne, David; Cornil, Jérôme
2015-05-12
A fast and accurate scheme has been developed to evaluate two key molecular parameters (on-site energies and transfer integrals) that govern charge transport in organic supramolecular architecture devices. The scheme is based on a constrained density functional theory (CDFT) approach implemented in the linear-scaling BigDFT code that exploits a wavelet basis set. The method has been applied to model disordered structures generated by force-field simulations. The role of the environment on the transport parameters has been taken into account by building large clusters around the active molecules involved in the charge transfer. PMID:26574411
Numerical evaluation of the Feynman integral-over-paths in real and imaginary-time
NASA Astrophysics Data System (ADS)
Register, L. F.; Stroscio, M. A.; Littlejohn, M. A.
New techniques are described for Monte Carlo evaluation of the propagation of quantum mechanical systems in both real and imaginary time using the Feynman integral-over-paths formulation of quantum mechanics. For imaginary-time calculations, path translation is used to augment the technique of Lawande et al. This simple yet powerful technique allows the equilibrium probability density to be accurately evaluated in the presence of multiple potential wells. It is shown that path translation permits the calculation of the unknown ground-state energy of one confining potential by comparison with the known ground-state energy of another. A double finite-square-well potential and a finite-square-well/parabolic-well pair are presented as examples. For real-time calculations, a weighted analytical averaging of the exponential of the classical action is performed over a region of paths. This "windowed action" has both real and imaginary components. The imaginary component yields an exponentially decaying probability for selecting paths, thereby providing a basis for the Monte Carlo evaluation of the real-time integral-over-paths. Examples of a wave packet in a parabolic well and a wave packet impinging upon a potential barrier are considered.
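As background for the imaginary-time side, a generic Metropolis path-integral Monte Carlo (not the paper's path-translation or windowed-action techniques) can be sketched for a harmonic oscillator. At large β the sampled closed paths reproduce the ground-state value ⟨x²⟩ = 1/2 (with m = ħ = ω = 1); the slice count, sweep counts, and step size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def pimc_x2(beta=10.0, n_slices=64, n_sweeps=6000, n_burn=1000, step=0.5):
    """Metropolis path-integral Monte Carlo for a 1D harmonic oscillator
    (m = hbar = omega = 1) with the primitive discretized action. Samples
    closed imaginary-time paths and returns the estimate of <x^2>."""
    dtau = beta / n_slices
    x = np.zeros(n_slices)
    idx = np.arange(n_slices)
    samples = []
    for sweep in range(n_sweeps):
        for parity in (0, 1):                        # checkerboard sweep: even
            sel = idx[idx % 2 == parity]             # slices depend only on odd
            xp = x[(sel + 1) % n_slices]             # neighbours and vice versa
            xm = x[(sel - 1) % n_slices]
            xold = x[sel]
            xnew = xold + step * rng.uniform(-1, 1, sel.size)

            def action(y):                           # single-slice action terms
                return ((xp - y) ** 2 + (y - xm) ** 2) / (2 * dtau) \
                       + dtau * 0.5 * y ** 2
            d_s = action(xnew) - action(xold)
            accept = rng.random(sel.size) < np.exp(np.minimum(-d_s, 0.0))
            x[sel] = np.where(accept, xnew, xold)
        if sweep >= n_burn:
            samples.append(np.mean(x ** 2))
    return float(np.mean(samples))

x2 = pimc_x2()
```

The paper's path-translation trick would sit on top of a sampler like this, relating the equilibrium densities of two different confining potentials instead of sampling each in isolation.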
Evaluation of numerical sediment quality targets for the St. Louis River Area of Concern
Crane, J.L.; MacDonald, D.D.; Ingersoll, C.G.; Smorong, D.E.; Lindskoog, R.A.; Severn, C.G.; Berger, T.A.; Field, L.J.
2002-01-01
Numerical sediment quality targets (SQTs) for the protection of sediment-dwelling organisms have been established for the St. Louis River Area of Concern (AOC), 1 of 42 current AOCs in the Great Lakes basin. The two types of SQTs were established primarily from consensus-based sediment quality guidelines. Level I SQTs are intended to identify contaminant concentrations below which harmful effects on sediment-dwelling organisms are unlikely to be observed. Level II SQTs are intended to identify contaminant concentrations above which harmful effects on sediment-dwelling organisms are likely to be observed. The predictive ability of the numerical SQTs was evaluated using the matching sediment chemistry and toxicity data set for the St. Louis River AOC. This evaluation involved determination of the incidence of toxicity to amphipods (Hyalella azteca) and midges (Chironomus tentans) within five ranges of Level II SQT quotients (i.e., mean probable effect concentration quotients [PEC-Qs]). The incidence of toxicity was determined based on the results of 10-day toxicity tests with amphipods (endpoints: survival and growth) and 10-day toxicity tests with midges (endpoints: survival and growth). For both toxicity tests, the incidence of toxicity increased as the mean PEC-Q ranges increased. The incidence of toxicity observed in these tests was also compared to that for other geographic areas in the Great Lakes region and in North America for 10- to 14-day amphipod (H. azteca) and 10- to 14-day midge (C. tentans or C. riparius) toxicity tests. In general, the predictive ability of the mean PEC-Qs was similar across geographic areas. The results of these predictive ability evaluations indicate that collectively the mean PEC-Qs provide a reliable basis for classifying sediments as toxic or not toxic in the St. Louis River AOC, in the larger geographic areas of the Great Lakes, and elsewhere in North America.
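The mean PEC quotient used to bin the samples is simply the average of each contaminant concentration divided by its probable effect concentration. A minimal sketch, with consensus-based PEC values for a few metals (illustrative; verify against the published consensus tables before use):

```python
# Consensus-based probable effect concentrations (PECs) in mg/kg dry weight
# (illustrative values after the published consensus guidelines; verify before use)
PEC = {"As": 33.0, "Cd": 4.98, "Cr": 111.0, "Cu": 149.0, "Pb": 128.0, "Zn": 459.0}

def mean_pec_q(sample):
    """Mean PEC quotient: average of concentration/PEC over the measured analytes."""
    qs = [conc / PEC[a] for a, conc in sample.items() if a in PEC]
    return sum(qs) / len(qs)

# Hypothetical sediment sample with each metal at half its PEC
sample = {"Cd": 2.49, "Pb": 64.0, "Zn": 229.5}
q = mean_pec_q(sample)
```

Samples are then assigned to mean PEC-Q ranges, and the incidence of toxicity in the paired amphipod and midge tests is tabulated within each range.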
Numerical evaluation of the groundwater drainage system for underground storage caverns
NASA Astrophysics Data System (ADS)
Park, Eui Seob; Chae, Byung Gon
2015-04-01
A novel concept for storing cryogenic liquefied natural gas in a lined hard-rock cavern has been developed and tested for several years as an alternative. In this concept, groundwater in the rock mass around the cavern has to be fully drained during the early stages of construction and operation to avoid possible adverse effects of groundwater near the cavern. The rock mass is then re-saturated to form an ice ring, the zone around the cavern in which several joints within the frozen rock mass contain ice instead of water. The drainage system is composed of a drainage tunnel excavated beneath the cavern and drain holes drilled into the rock surface of the drainage tunnel. In order to de-saturate the rock mass around the cavern sufficiently, the position and horizontal spacing of the drain holes should be designed efficiently. In this paper, a series of numerical study results related to the drainage system of the full-scale cavern is presented. The rock in the study area consists mainly of banded gneiss and mica schist. The gneiss is slightly weathered and contains few joints and fractures. The schist contains several well-developed, mainly vertical schistosities, so that vertical joints are better developed than horizontal ones in the area. Lugeon tests revealed that the upper aquifer and bedrock are divided at a depth of 40-50 m below the surface. Groundwater levels were observed in twenty monitoring wells and interpolated over the whole area. A numerical study using Visual Modflow and Seep/W was performed to evaluate the efficiency of the drainage system for an underground liquefied natural gas storage cavern in two hypothetically designed layouts and to determine the design parameters. In the Modflow analysis, groundwater flow change in an unconfined aquifer was simulated during excavation of the cavern and operation of the drainage system. In the Seep/W analysis, the amounts of seepage and drainage were also estimated in a representative vertical section of each cavern.
Sequestration of Metals in Active Cap Materials: A Laboratory and Numerical Evaluation
Dixon, K.; Knox, A.
2012-02-13
Active capping involves the use of capping materials that react with sediment contaminants to reduce their toxicity or bioavailability. Although several amendments have been proposed for use in active capping systems, little is known about their long-term ability to sequester metals. Recent research has shown that the active amendment apatite has potential application to metal-contaminated sediments. The focus of this study was to evaluate the effectiveness of apatite in sequestering metal contaminants through short-term laboratory column studies in conjunction with predictive numerical modeling. A breakthrough column study was conducted using North Carolina apatite as the active amendment. Under saturated conditions, a spike solution containing elemental As, Cd, Co, Se, Pb, Zn, and a non-reactive tracer was injected into the column; a sand column was tested under similar conditions as a control. Effluent water samples were periodically collected from each column for chemical analysis. Relative to the non-reactive tracer, the breakthrough of each metal was substantially delayed by the apatite; breakthrough was also substantially delayed relative to the sand column. Finally, a simple 1-D numerical model was created to qualitatively predict the long-term performance of apatite based on the findings from the column study. The modeling results showed that apatite could delay the breakthrough of some metals for hundreds of years under typical groundwater flow velocities.
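The qualitative long-term prediction rests on classical 1-D reactive transport: linear equilibrium sorption delays breakthrough by the retardation factor R = 1 + (ρb/θ)Kd. A hedged sketch using the standard Ogata-Banks solution for a continuous inlet (not necessarily the exact model used in the study):

```python
import math

def ogata_banks(x, t, v, D, R=1.0):
    """Relative concentration C/C0 for 1-D advection-dispersion with linear
    equilibrium sorption (retardation factor R) and a continuous inlet at C0."""
    if t <= 0:
        return 0.0
    denom = 2.0 * math.sqrt(D * R * t)
    term1 = math.erfc((R * x - v * t) / denom)
    arg = v * x / D
    # The second term is exp(vx/D)*erfc(...); skip it when exp would overflow
    # (the erfc factor underflows to zero faster in that regime).
    term2 = math.exp(arg) * math.erfc((R * x + v * t) / denom) if arg < 700 else 0.0
    return 0.5 * (term1 + term2)

def retardation(rho_b, theta, kd):
    """R = 1 + (rho_b / theta) * Kd for linear sorption."""
    return 1.0 + (rho_b / theta) * kd
```

A non-reactive tracer (R = 1) breaks through at roughly one pore volume, while a strongly sorbed metal with a large apatite Kd arrives R times later, which is the mechanism behind the "hundreds of years" delay estimate.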
Evaluation and Numerical Simulation of Tsunami for Coastal Nuclear Power Plants of India
Sharma, Pavan K.; Singh, R.K.; Ghosh, A.K.; Kushwaha, H.S.
2006-07-01
The recent tsunami generated on December 26, 2004 by the Sumatra earthquake of magnitude 9.3 resulted in inundation at various coastal sites of India. The site selection and design of Indian nuclear power plants demand evaluation of the run-up and of structural barriers for the coastal plants. It is also desirable to evaluate early warning systems for tsunamigenic earthquakes. Tsunamis originate from submarine faults, underwater volcanic activity, sub-aerial landslides impinging on the sea, and submarine landslides. In the case of a submarine earthquake-induced tsunami, the wave is generated in the fluid domain by displacement of the seabed. There are three phases of a tsunami: generation, propagation, and run-up. The Reactor Safety Division (RSD) of Bhabha Atomic Research Centre (BARC), Trombay has initiated computational simulation of all three phases, source generation, propagation, and finally run-up evaluation, for the protection of public life, property, and the various industrial infrastructures located in the coastal regions of India. These studies could be effectively utilized for the design and implementation of an early warning system for the coastal regions of the country, apart from catering to the needs of Indian nuclear installations. This paper presents some results for tsunami waves based on different analytical/numerical approaches with shallow water wave theory. (authors)
A Numerical Evaluation on the Viability of Heap Thermophilic Bioleaching of Chalcopyrite
NASA Astrophysics Data System (ADS)
Vilcaez, J.; Suto, K.; Inoue, C.
2007-03-01
The present numerical evaluation explores the interactions among the many variables governing the mass and heat transport processes that take place in a heap thermophilic bioleaching system. The necessity of using mesophiles together with thermophiles is demonstrated by tracing the activity of both types of microorganisms individually at each point throughout the heap. The role of key variables, such as the fraction of FeS2 leached per CuFeS2, was quantified and its importance highlighted. In this evaluation, the heat transfer process plays the main role because of the heat accumulation required to maintain the heap temperature within the range of 60 °C to 80 °C, where thermophilic microorganisms are capable of completing the unfinished dissolution of copper started by mesophilic microorganisms at 30 °C. The evaluation took into consideration biological activity as a function of the temperature in the heap, heat loss due to conduction and advection from the top and bottom of the heap, and mass transfer between the gas and liquid phases as a function of temperature. The exothermic nature of the leaching reactions of CuFeS2 and FeS2 makes the system auto-thermal.
Numerical simulation and fracture evaluation method of dual laterolog in organic shale
NASA Astrophysics Data System (ADS)
Tan, Maojin; Wang, Peng; Li, Jun; Liu, Qiong; Yang, Qinshan
2014-01-01
Fracture identification and parameter evaluation are important in the logging interpretation of organic shale, especially fracture evaluation from conventional logs when imaging logs are not available. It is therefore helpful to study the dual laterolog responses of fractured shale reservoirs. First, a physical model is set up according to the properties of organic shale, and a three-dimensional finite element method (FEM) based on the principle of dual laterolog is introduced and applied to simulate dual laterolog responses for various shale models, which can help identify fractures in shale formations. Then, through a number of numerical simulations of dual laterolog for various shale models with different base-rock resistivities and fracture openings, the corresponding equations for the various cases are constructed, and the fracture porosity can be calculated consequently. Finally, we apply the proposed methodology to a case study of organic shale, and the fracture porosity and fracture opening are calculated. The results are consistent with the fracture parameters obtained from Full-borehole Micro-resistivity Imaging (FMI), indicating that the method is applicable to fracture evaluation of organic shale.
Numerical evaluation of the radiation from unbaffled, finite plates using the FFT
NASA Technical Reports Server (NTRS)
Williams, E. G.
1983-01-01
An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.
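The angular-spectrum building block of the technique, propagating a sampled field between parallel planes with FFTs, can be sketched as follows. This is an illustrative fragment, not the paper's full iteration: in practice the plate problem alternates such propagations with boundary-condition enforcement on the plate plane, and zero-padding is needed to suppress FFT wraparound.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, k, z):
    """Propagate a complex pressure field p0, sampled on an N x N grid with
    spacing dx, a distance z using the angular-spectrum form of the Rayleigh
    integral evaluated with FFTs. Evanescent components (kx^2 + ky^2 > k^2)
    decay exponentially for z > 0."""
    n = p0.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz2 = k**2 - KX**2 - KY**2
    kz = np.where(kz2 >= 0, np.sqrt(np.abs(kz2)), 1j * np.sqrt(np.abs(kz2)))
    return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))

# Sanity check: a normally incident plane wave just picks up the phase e^{ikz}
k = 2 * np.pi / 0.05                       # wavenumber for a 5 cm wavelength
p0 = np.ones((32, 32), dtype=complex)
p1 = angular_spectrum_propagate(p0, dx=0.01, k=k, z=0.1)
```

Because both transforms are FFTs, one plane-to-plane propagation costs O(N² log N) rather than the O(N⁴) of direct quadrature of Rayleigh's integral, which is the computational saving the abstract highlights.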
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.
1996-01-01
A numerical model of heat transfer combining conduction, radiation, and convection in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot- and cold-zone set-point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reversed usage of the hot and cold zones was simulated to aid the choice of the proper orientation of the crystal/melt interface with respect to the residual acceleration vector without actually changing the furnace location on board the orbiter. It appears that an additional booster heater would be extremely helpful for ensuring the desired temperature gradient when the hot and cold zones are reversed. Further efforts are required to investigate the advantages and disadvantages of a symmetrical furnace design (i.e., one with similar lengths of the hot and cold zones).
Numerical evaluation of the Bose-ghost propagator in minimal Landau gauge on the lattice
NASA Astrophysics Data System (ADS)
Cucchieri, Attilio; Mendes, Tereza
2016-07-01
We present numerical details of the evaluation of the so-called Bose-ghost propagator in lattice minimal Landau gauge, for the SU(2) case in four Euclidean dimensions. This quantity has been proposed as a carrier of the confining force in the Gribov-Zwanziger approach and, as such, its infrared behavior could be relevant for the understanding of color confinement in Yang-Mills theories. Also, its nonzero value can be interpreted as direct evidence of Becchi-Rouet-Stora-Tyutin-symmetry breaking, which is induced when restricting the functional measure to the first Gribov region Ω. Our simulations are done for lattice volumes up to 120^4 and for physical lattice extents up to 13.5 fm. We investigate the infinite-volume and continuum limits.
NASA Astrophysics Data System (ADS)
Chillara, Vamshi Krishna; Lissenden, Cliff J.
2016-01-01
Interest in using the higher harmonic generation of ultrasonic guided wave modes for nondestructive evaluation continues to grow tremendously as the understanding of nonlinear guided wave propagation has enabled further analysis. The combination of the attractive properties of guided waves with those of higher harmonic generation provides a unique potential for the characterization of incipient damage, particularly in plate and shell structures. Guided waves can propagate relatively long distances, provide access to hidden structural components, have various displacement polarizations, and provide many opportunities for mode conversion due to their multimode character. Moreover, higher harmonic generation is sensitive to changing aspects of the microstructure such as dislocation density, precipitates, inclusions, and voids. We review the recent advances in the theory of nonlinear guided waves, as well as the numerical simulations and experiments that demonstrate their utility.
Numerical evaluation of a 13.5-nm high-brightness microplasma extreme ultraviolet source
Hara, Hiroyuki; Arai, Goki; Dinh, Thanh-Hung; Higashiguchi, Takeshi; Jiang, Weihua; Miura, Taisuke; Endo, Akira; Ejima, Takeo; Li, Bowen; Dunne, Padraig; O'Sullivan, Gerry; Sunahara, Atsushi
2015-11-21
The extreme ultraviolet (EUV) emission and its spatial distribution as well as plasma parameters in a microplasma high-brightness light source are characterized by the use of a two-dimensional radiation hydrodynamic simulation. The expected EUV source size, which is determined by the expansion of the microplasma due to hydrodynamic motion, was evaluated to be 16 μm (full width) and was almost reproduced by the experimental result which showed an emission source diameter of 18–20 μm at a laser pulse duration of 150 ps [full width at half-maximum]. The numerical simulation suggests that high brightness EUV sources should be produced by use of a dot target based microplasma with a source diameter of about 20 μm.
Design and numerical evaluation of a volume coil array for parallel MR imaging at ultrahigh fields
Pang, Yong; Wong, Ernest W.H.; Yu, Baiying
2014-01-01
In this work, we propose and investigate a volume coil array design method using different types of birdcage coils for MR imaging. Unlike conventional radiofrequency (RF) coil arrays, whose array elements are surface coils, the proposed volume coil array consists of a set of independent volume coils: a conventional birdcage coil, a transverse birdcage coil, and a helix birdcage coil. The magnetic fluxes of these three birdcage coils intrinsically cancel, yielding a highly decoupled volume coil array. In contrast to conventional non-array volume coils, the volume coil array should be beneficial in improving the MR signal-to-noise ratio (SNR) while also gaining the capability of parallel imaging. The volume coil array is evaluated at the ultrahigh field of 7T using FDTD numerical simulations, and the g-factor map at different acceleration rates is also calculated to investigate its parallel imaging performance. PMID:24649435
Numerical surrogates for human observers in myocardial motion evaluation from SPECT image
Marin, Thibault; Kalayeh, Mahdi M.; Parages, Felipe M.; Brankov, Jovan G.
2014-01-01
In medical imaging, the gold standard for image-quality assessment is a task-based approach in which one evaluates human observer performance for a given diagnostic task (e.g., detection of a myocardial perfusion or motion defect). To facilitate practical task-based image-quality assessment, model observers are needed as approximate surrogates for human observers. In cardiac-gated SPECT imaging, diagnosis relies on evaluation of myocardial motion as well as perfusion. Model observers for the perfusion-defect detection task have been studied previously, but little effort has been devoted to the development of a model observer for cardiac-motion defect detection. In this work we describe two model observers for predicting human observer performance in the detection of cardiac-motion defects. Both proposed methods rely on motion features extracted using a previously reported deformable mesh model for myocardial motion estimation. The first method is based on a Hotelling linear discriminant that is similar in concept to that used commonly for perfusion-defect detection. In the second method, based on relevance vector machines (RVM) for regression, we compute average human observer performance by first directly predicting individual human observer scores and then using multi-reader receiver operating characteristic (ROC) analysis. Our results suggest that the proposed RVM model observer can predict human observer performance accurately, while the new Hotelling motion-defect detector is somewhat less effective. PMID:23981533
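A Hotelling linear discriminant of the kind mentioned above can be sketched as follows, using synthetic feature vectors as stand-ins for the mesh-derived motion features (all data, dimensions, and the defect offset below are assumed for illustration):

```python
import numpy as np

# Sketch of a Hotelling model observer: the template is the inverse of
# the pooled feature covariance applied to the class-mean difference.

rng = np.random.default_rng(0)
n, d = 500, 6
normal = rng.normal(0.0, 1.0, size=(n, d))   # motion defect absent
defect = rng.normal(0.5, 1.0, size=(n, d))   # motion defect present

mu0, mu1 = normal.mean(axis=0), defect.mean(axis=0)
s = 0.5 * (np.cov(normal, rowvar=False) + np.cov(defect, rowvar=False))
w = np.linalg.solve(s, mu1 - mu0)            # Hotelling template

t0, t1 = normal @ w, defect @ w              # test statistics per class
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
print(f"observer SNR: {snr:.2f}")            # detectability index
```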
Evaluation of Site Effects Using Numerical and Experimental Analyses in Città di Castello (Italy)
NASA Astrophysics Data System (ADS)
Pergalani, F.; de Franco, R.; Compagnoni, M.; Caielli, G.
In this paper the results of numerical and experimental analyses at a site in the Umbria Region (Città di Castello, PG), aimed at the evaluation of site effects, are shown. The aim of the work was to compare the two types of analysis and to provide methodologies that may be used at the level of urban planning to take these aspects into account. Therefore a series of geologic, geomorphologic (1:5,000 scale), geotechnical and seismic analyses was carried out to identify the areas affected by local effects and to characterize the lithotechnical units. The expected seismic inputs were identified, and 2D numerical analyses (Quad4M; Hudson et al., 1993) were performed. An experimental analysis, using recordings of small events, was also carried out. The results of the two approaches were expressed in terms of elastic pseudo-acceleration spectra and amplification factors, computed as the ratio of the spectral intensities (Housner, 1952) of output and input, calculated from the pseudo-velocity spectra over the period ranges 0.1-0.5 s and 0.1-2.5 s. The results were analyzed and compared in order to arrive at a methodology that is both exhaustive and precise. The conclusions can be summarized in the following points: (i) the results of the two approaches are coherent; (ii) the approaches differ in that the numerical analysis is easy and quick to use, although in this case the 2D analysis simplifies the real geometry, whereas the experimental analysis allows 3D conditions to be considered, but the recordings of low-energy events do not capture the nonlinear behavior of the materials, and recordings must be collected over a period that depends on the seismicity of the region (one month to two years); (iii) integrating the two methodologies allows a complete analysis that exploits the advantages of both methods. Housner G.W., Spectrum Intensities of strong
NASA Astrophysics Data System (ADS)
Shrestha, D. L.; Robertson, D. E.; Wang, Q. J.; Pagano, T. C.; Hapuarachchi, H. A. P.
2013-05-01
The quality of precipitation forecasts from four Numerical Weather Prediction (NWP) models is evaluated over the Ovens catchment in Southeast Australia. Precipitation forecasts are compared with observed precipitation at point and catchment scales and at different temporal resolutions. The four models evaluated are configurations of the Australian Community Climate and Earth-System Simulator (ACCESS): ACCESS-G with an 80 km resolution, ACCESS-R at 37.5 km, ACCESS-A at 12 km, and ACCESS-VT at 5 km. The skill of the NWP precipitation forecasts varies considerably between rain gauging stations. In general, the high-spatial-resolution (ACCESS-A and ACCESS-VT) and regional (ACCESS-R) NWP models overestimate precipitation in dry, low-elevation areas and underestimate it in wet, high-elevation areas. The global model (ACCESS-G) consistently underestimates precipitation at all stations, and the bias increases with station elevation. The skill varies with forecast lead time and, in general, decreases with increasing lead time. When evaluated at finer spatial and temporal resolution (e.g. 5 km, hourly), the precipitation forecasts appear to have very little skill. There is moderate skill at short lead times when the forecasts are averaged up to the daily and/or catchment scale. The precipitation forecasts fail to reproduce the diurnal cycle seen in observed precipitation. Significant sampling uncertainty in the skill scores suggests that more data are required for a reliable evaluation of the forecasts. The non-smooth decay of skill with forecast lead time can be attributed to the diurnal cycle in the observations and to sampling uncertainty. Future work is planned to assess the benefits of using the NWP rainfall forecasts for short-term streamflow forecasting. Our findings suggest that it is necessary to remove the systematic biases in rainfall forecasts, particularly those from low-resolution models, before the rainfall forecasts can be used for streamflow forecasting.
NASA Astrophysics Data System (ADS)
Shrestha, D. L.; Robertson, D. E.; Wang, Q. J.; Pagano, T. C.; Hapuarachchi, P.
2012-11-01
The quality of precipitation forecasts from four Numerical Weather Prediction (NWP) models is evaluated over the Ovens catchment in southeast Australia. Precipitation forecasts are compared with observed precipitation at point and catchment scales and at different temporal resolutions. The four models evaluated are configurations of the Australian Community Climate and Earth-System Simulator (ACCESS): ACCESS-G with an 80 km resolution, ACCESS-R at 37.5 km, ACCESS-A at 12 km, and ACCESS-VT at 5 km. The high-spatial-resolution NWP models (ACCESS-A and ACCESS-VT) appear to be relatively free of bias (i.e. <30%) for 24 h total precipitation forecasts. The low-resolution models (ACCESS-R and ACCESS-G) have widespread systematic biases as large as 70%. When evaluated at finer spatial and temporal resolution (e.g. 5 km, hourly) against station observations, the precipitation forecasts appear to have very little skill. There is moderate skill at short lead times when the forecasts are averaged up to the daily and/or catchment scale. The skill decreases with increasing lead time, and the global model ACCESS-G does not have significant skill beyond 7 days. The precipitation forecasts fail to reproduce the diurnal cycle seen in observed precipitation. Significant sampling uncertainty in the skill scores suggests that more data are required for a reliable evaluation of the forecasts. Future work is planned to assess the benefits of using the NWP rainfall forecasts for short-term streamflow forecasting. Our findings suggest that it is necessary to remove the systematic biases in rainfall forecasts, particularly those from low-resolution models, before the rainfall forecasts can be used for streamflow forecasting.
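Verification measures of the kind reported above (systematic bias, skill relative to a reference) can be sketched as follows; the data and the choice of a climatology reference are illustrative assumptions, not the study's actual scores:

```python
import numpy as np

# Sketch of two common precipitation-verification measures: relative
# bias of accumulated precipitation, and a mean-squared-error skill
# score against a climatology reference.

def relative_bias(fcst, obs):
    """(total forecast - total observed) / total observed."""
    return (fcst.sum() - obs.sum()) / obs.sum()

def mse_skill_score(fcst, obs, ref):
    """1 = perfect, 0 = no better than reference, < 0 = worse."""
    mse_f = np.mean((fcst - obs) ** 2)
    mse_r = np.mean((ref - obs) ** 2)
    return 1.0 - mse_f / mse_r

obs = np.array([0.0, 2.0, 5.0, 1.0, 0.0, 3.0])    # mm, synthetic
fcst = np.array([0.5, 1.5, 4.0, 1.0, 0.0, 2.5])   # mm, synthetic
clim = np.full_like(obs, obs.mean())               # climatology reference
print(relative_bias(fcst, obs), mse_skill_score(fcst, obs, clim))
```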
A numerical model for the analysis and evaluation of global 137Cs fallout.
Shimada, Y; Morisawa, S; Inoue, Y
1996-02-01
Fallout 137Cs from atmospheric nuclear detonation tests has been monitored worldwide since the late 1950s. The deviation and the correlation among these monitoring data were analyzed, and their surface deposition characteristics were estimated by the compartment model developed in this research. In the analysis, the scale of space (i.e., size of each compartment) and the degree of detail (i.e., number of compartments) were statistically determined using the global distribution data of 137Cs. The mathematical model was evaluated by comparing the numerically simulated results with the fallout monitoring data, including the 137Cs concentration in sea water. The major findings of this research are that the deposition pattern of 137Cs depends on latitude zone but not on longitude; that the mathematical model is promising for evaluating the dynamic behavior of 137Cs in the global atmospheric environment and its surface deposition; that 137Cs has accumulated more in both the surface and deep ocean water of the North Pacific and North Atlantic than in other oceans; that the 137Cs inventory has been decreasing since its peak in 1965; and that the 137Cs inventory in deep ocean water is decreasing more slowly than that in surface ocean water. PMID:8567283
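A compartment model of this type reduces to a linear system dX/dt = KX over the compartment inventories; the two-box configuration and transfer coefficients below are illustrative assumptions, not the paper's statistically determined values:

```python
import numpy as np

# Sketch of a two-box compartment model for fallout transport,
# integrated with a simple forward-Euler step. Compartments:
# [atmospheric reservoir, surface]; coefficients are illustrative.

def step(x, k_matrix, dt, n_steps):
    """First-order (forward-Euler) integration of dX/dt = K X."""
    for _ in range(n_steps):
        x = x + dt * (k_matrix @ x)
    return x

k = np.array([[-0.9,  0.0],     # 1/yr transfer out of the atmosphere
              [ 0.9, -0.023]])  # into the surface, minus decay/removal
x0 = np.array([1.0, 0.0])       # unit injection aloft
x = step(x0, k, dt=0.01, n_steps=500)   # integrate 5 years
print(x)   # inventory shifts from the atmosphere to the surface
```

In the paper the compartment sizes and transfer coefficients are determined statistically from the global 137Cs distribution data; the Euler stepping above is only the simplest possible integrator for such a system.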
A numerical model for the analysis and evaluation of global {sup 137}Cs fallout
Shimada, Y.; Morisawa, S.; Inoue, Y.
1996-02-01
Fallout 137Cs from atmospheric nuclear detonation tests has been monitored worldwide since the late 1950s. The deviation and the correlation among these monitoring data were analyzed, and their surface deposition characteristics were estimated by the compartment model developed in this research. In the analysis, the scale of space (i.e., size of each compartment) and the degree of detail (i.e., number of compartments) were statistically determined using the global distribution data of 137Cs. The mathematical model was evaluated by comparing the numerically simulated results with the fallout monitoring data, including the 137Cs concentration in sea water. The major findings of this research are that the deposition pattern of 137Cs depends on latitude zone but not on longitude; that the mathematical model is promising for evaluating the dynamic behavior of 137Cs in the global atmospheric environment and its surface deposition; that 137Cs has accumulated more in both the surface and deep ocean water of the North Pacific and North Atlantic than in other oceans; that the 137Cs inventory has been decreasing since its peak in 1965; and that the 137Cs inventory in deep ocean water is decreasing more slowly than that in surface ocean water. 26 refs., 10 figs., 3 tabs.
Numerical evaluation of seismic response of shallow foundation on loose silt and silty sand
NASA Astrophysics Data System (ADS)
Asgari, Ali; Golshani, Aliakbar; Bagheri, Mohsen
2014-03-01
This study presents the results of a set of numerical simulations, carried out with FLAC 2D, of sands containing plastic/non-plastic fines and of silts with relative densities of approximately 30-40% under different surcharges on a shallow foundation. Each model was subjected to three ground motion events, obtained by scaling the amplitudes of the El Centro (1940), Kobe (1995) and Kocaeli (1999) earthquakes. The dynamic behaviour of loose deposits underlying shallow foundations is evaluated through fully coupled nonlinear effective-stress dynamic analyses. Effects of nonlinear soil-structure interaction (SSI) were also considered by using interface elements. This parametric study evaluates the effects of soil type, structure weight, liquefiable soil-layer thickness, and event parameters (e.g., moment magnitude (Mw), peak ground acceleration (PGA), PGV/PGA ratio, and duration of strong motion (D5-95)), as well as their interactions, on the seismic responses. Investigating the effects of these parameters and their complex interactions can be a valuable tool for gaining new insights toward improved seismic design and construction.
NASA Astrophysics Data System (ADS)
Baierl, M.; Kordilla, J.; Reimann, T.; Dörfliger, N.; Sauter, M.; Geyer, T.
2012-04-01
This work deals with the analysis of pumping tests in strongly heterogeneous media. Pumping tests were performed in the catchment area of the Lez spring (South of France), which is composed of carbonate rocks. Pumping rates for the different tests varied between 0.04 l/s - 0.7 l/s, i.e. the radius of influence of the cone of depression is small. The investigated boreholes are characterised by tight rocks, moderate fractures and karstified zones. The observed drawdown curves are clearly influenced by the rock characteristics. Single drawdown curves show S-shape character. Data evaluation was performed with the solution approaches of Theis (1935) and Gringarten-Ramey (1974), which are implemented in the employed software AQTESOLV (Pro 4.0). Parameters were varied in reliable data ranges with consideration of reported values in the literature. The Theis method analyses unsteady flow in homogeneous confined aquifers. The Gringarten-Ramey solution describes the drawdown in a well connected to a single horizontal fracture. The Theis curve fails to represent the characteristics for nearly all of the measured drawdown curves, while the Gringarten-Ramey method shows moderate graphical fits with a small residual sum of squares between fitted and observed drawdown curves. This highlights the importance of heterogeneities in the hydraulic parameter field at local scale. The determined hydraulic conductivities of the rock are in reasonable ranges varying between 1E-04 m/s and 1E-08 m/s. Wellbore skin effects need to be discussed further in detail. While the analytical solutions are only valid for specific geometrical and hydraulic configurations, numerical models can be applied to simulate pumping tests in complex heterogeneous media with different boundary conditions. For that reason, a two dimensional, axisymmetric numerical model, using COMSOL (Multiphysics 4.1), is set up. In a first step, the model is validated with the simulated curves from the analytical solutions under
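The Theis (1935) solution fitted in the analysis above can be sketched as follows; the series expansion of the well function is standard, while the parameter values are merely illustrative, chosen within the ranges quoted in the abstract:

```python
import math

# Sketch of the Theis (1935) drawdown solution used in pumping-test
# curve fitting: s = Q/(4*pi*T) * W(u), with u = r^2 * S / (4*T*t).

def well_function(u, terms=30):
    """Theis well function W(u) = -gamma - ln(u) + sum (-1)^(n+1) u^n/(n n!)."""
    total = -0.5772156649015329 - math.log(u)
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * u**n / (n * math.factorial(n))
    return total

def theis_drawdown(t, r, Q, T, S):
    """Drawdown (m) at radius r (m), time t (s), pumping rate Q (m^3/s)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Pumping rate 0.7 l/s; T and S are illustrative, within quoted ranges
s = [theis_drawdown(t, r=10.0, Q=7e-4, T=1e-4, S=1e-4) for t in (1e2, 1e3, 1e4)]
print(s)   # drawdown grows with time
```

The Gringarten-Ramey single-horizontal-fracture solution that fits the observed S-shaped curves has the same structure (a type-curve in dimensionless time) but a different kernel, which is why the homogeneous Theis curve fails here.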
A New Look at Stratospheric Sudden Warmings. Part II: Evaluation of Numerical Model Simulations
NASA Technical Reports Server (NTRS)
Charlton, Andrew J.; Polvani, Lorenza M.; Perlwitz, Judith; Sassi, Fabrizio; Manzini, Elisa; Shibata, Kiyotaka; Pawson, Steven; Nielsen, J. Eric; Rind, David
2007-01-01
The simulation of major midwinter stratospheric sudden warmings (SSWs) in six stratosphere-resolving general circulation models (GCMs) is examined. The GCMs are compared to a new climatology of SSWs, based on the dynamical characteristics of the events. First, the number, type, and temporal distribution of SSW events are evaluated. Most of the models show a lower frequency of SSW events than the climatology, which has a mean frequency of 6.0 SSWs per decade. Statistical tests show that three of the six models produce significantly fewer SSWs than the climatology, between 1.0 and 2.6 SSWs per decade. Second, four process-based diagnostics are calculated for all of the SSW events in each model. It is found that SSWs in the GCMs compare favorably with the dynamical benchmarks for SSWs established in the first part of the study. These results indicate that GCMs are capable of quite accurately simulating the dynamics required to produce SSWs, but with lower frequency than the climatology. Further dynamical diagnostics hint that, in at least one case, this is due to a lack of meridional heat flux in the lower stratosphere. Even though the SSWs simulated by most GCMs are dynamically realistic when compared to the NCEP-NCAR reanalysis, the reason for the relative paucity of SSWs in GCMs remains an important and open question.
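The standard wind-reversal criterion for counting major SSWs (easterly zonal-mean zonal wind at 10 hPa, 60°N in winter) can be sketched as follows; the wind series and the separation threshold are illustrative, and the full climatology also applies final-warming and recovery conditions not shown here:

```python
import numpy as np

# Sketch of SSW event counting from a daily 10 hPa, 60N zonal-mean
# zonal-wind series: flag each westerly-to-easterly reversal, requiring
# a minimum separation so one warming is not counted twice.

def count_ssw(u_wind, min_separation=20):
    """Count easterly reversals separated by at least min_separation days."""
    events, last = 0, -10**9
    for day in range(1, len(u_wind)):
        if u_wind[day] < 0.0 <= u_wind[day - 1] and day - last >= min_separation:
            events += 1
            last = day
    return events

# Synthetic winter: westerlies, a 5-day easterly episode, recovery
u = np.concatenate([np.full(40, 15.0), np.full(5, -5.0), np.full(40, 12.0)])
print(count_ssw(u))   # -> 1
```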
Jin, J.-Y.; Ryu, Samuel; Faber, Kathleen; Mikkelsen, Tom; Chen Qing; Li Shidong; Movsas, Benjamin
2006-12-15
The purpose of this study was to evaluate the accuracy of a two-dimensional (2D) to three-dimensional (3D) image-fusion-guided target localization system and a mask-based stereotactic system for fractionated stereotactic radiotherapy (FSRT) of cranial lesions. A commercial x-ray image guidance system originally developed for extracranial radiosurgery was used for FSRT of cranial lesions. The localization accuracy was quantitatively evaluated with an anthropomorphic head phantom implanted with eight small radiopaque markers (BBs) in different locations. The accuracy and its clinical reliability were also qualitatively evaluated for a total of 127 fractions in 12 patients with both kV x-ray images and MV portal films. The image-guided system was then used as a standard to evaluate the overall uncertainty and reproducibility of the head-mask-based stereotactic system in these patients. The phantom study demonstrated that the maximal random error of the image-guided target localization was ±0.6 mm in each direction in terms of the 95% confidence interval (CI). The systematic error varied with measurement methods. It was approximately 0.4 mm, mainly in the longitudinal direction, for the kV x-ray method. There was a 0.5 mm systematic difference, primarily in the lateral direction, between the kV x-ray and the MV portal methods. The patient study suggested that the accuracy of the image-guided system in patients was comparable to that in the phantom. The overall uncertainty of the mask system was ±4 mm, and the reproducibility was ±2.9 mm in terms of the 95% CI. The study demonstrated that the image guidance system provides accurate and precise target positioning.
Numerical simulation of small perturbation transonic flows
NASA Technical Reports Server (NTRS)
Seebass, A. R.; Yu, N. J.
1976-01-01
The results of a systematic study of small perturbation transonic flows are presented. Both the flow over thin airfoils and the flow over wedges were investigated. Various numerical schemes were employed in the study. The prime goal of the research was to determine the efficiency of various numerical procedures by accurately evaluating the wave drag, both by computing the pressure integral around the body and by integrating the momentum loss across the shock. Numerical errors involved in the computations that affect the accuracy of drag evaluations were analyzed. The factors that affect numerical stability and the rate of convergence of the iterative schemes were also systematically studied.
NASA Astrophysics Data System (ADS)
Charles, Winsbert Curt
Seismic protective techniques utilizing specialized energy dissipation devices within the lateral resisting frames have been successfully used to limit inelastic deformation in reinforced concrete buildings by increasing damping and/or altering the stiffness of these structures. However, there is a need to investigate and develop systems with self-centering capabilities; systems that are able to assist in returning a structure to its original position after an earthquake. In this project, the efficacy of a shape memory alloy (SMA) based device as a structural recentering device is evaluated through numerical analysis using the OpenSees framework. OpenSees is a software framework for simulating the seismic response of structural and geotechnical systems. OpenSees has been developed as the computational platform for research in performance-based earthquake engineering at the Pacific Earthquake Engineering Research Center (PEER). A non-ductile reinforced concrete building, modelled using OpenSees and verified with available experimental data, is used for the analysis in this study. The model is fitted with tension/compression (TC) SMA devices. The performance of the SMA recentering device is evaluated for a set of near-field and far-field ground motions. Critical performance measures of the analysis include residual displacements, interstory drift, and acceleration (horizontal and vertical) for the different types of ground motions. The results show that the TC device's performance is unaffected by the type of ground motion. The analysis also shows that the inclusion of the device in the lateral-force-resisting system of the building resulted in a 50% decrease in peak horizontal displacement and inter-story drift, the elimination of residual deformations, and an increase in acceleration of up to 110%.
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Takase, Kazuyuki
The thermal-hydraulic design of the current boiling water reactor (BWR) is performed with subchannel analysis codes that incorporate correlations based on empirical results, including actual-size tests. For the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) core, an actual-size test of an embodiment of its design would thus be required to confirm or modify such correlations. In this situation, development of a method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is desired, because such tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code using an advanced interface tracking method, TPFIT, was developed to calculate detailed information on the two-phase flow. In this paper, we first verify the TPFIT code by comparing its results with existing two-channel air-water mixing experiments. Second, the TPFIT code is applied to the simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing is evaluated using the detailed numerical simulation data. The data indicate that the pressure difference between the fluid channels is responsible for the fluid mixing, and thus the effects of the time-averaged pressure difference and its fluctuations must be incorporated into the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the evaluation precision of the existing two-phase flow correlations for fluid mixing is found to be relatively low.
Critical evaluation of three hemodynamic models for the numerical simulation of intra-stent flows.
Chabi, Fatiha; Champmartin, Stéphane; Sarraf, Christophe; Noguera, Ricardo
2015-07-16
We evaluate here three hemodynamic models used for the numerical simulation of bare and stented artery flows. We focus on two flow features responsible for intra-stent restenosis: the wall shear stress and the re-circulation lengths around a stent. The studied models are the Poiseuille profile, the simplified pulsatile profile and the complete pulsatile profile based on the analysis of Womersley. The flow rate of blood in a human left coronary artery is considered to compute the velocity profiles. "Ansys Fluent 14.5" is used to solve the Navier-Stokes and continuity equations. As expected our results show that the Poiseuille profile is questionable to simulate the complex flow dynamics involved in intra-stent restenosis. Both pulsatile models give similar results close to the strut but diverge far from it. However, the computational time for the complete pulsatile model is five times that of the simplified pulsatile model. Considering the additional "cost" for the complete model, we recommend using the simplified pulsatile model for future intra-stent flow simulations. PMID:26044195
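The difference between the steady and quasi-steady ("simplified pulsatile") inflow models can be sketched as follows; the radius, flow rates, and waveform are illustrative assumptions for a coronary-scale vessel, and the complete Womersley profile (not shown) additionally flattens the core flow and shifts its phase:

```python
import numpy as np

# Sketch contrasting the steady Poiseuille inlet profile with a
# quasi-steady ("simplified pulsatile") version in which the same
# parabolic shape is scaled by an instantaneous flow rate Q(t).

def poiseuille(r, R, Q):
    """Axial velocity (m/s) for flow rate Q (m^3/s) in a tube of radius R."""
    return 2.0 * Q / (np.pi * R**2) * (1.0 - (r / R) ** 2)

R = 1.5e-3                            # vessel radius, m (illustrative)
r = np.linspace(0.0, R, 50)
Q_mean = 1.0e-6                       # mean flow rate, m^3/s (illustrative)
t = 0.2                               # s, instant within a 1 s cardiac cycle
Q_t = Q_mean * (1.0 + 0.5 * np.sin(2.0 * np.pi * t))  # hypothetical waveform

u_steady = poiseuille(r, R, Q_mean)   # Poiseuille model
u_pulsatile = poiseuille(r, R, Q_t)   # quasi-steady approximation
print(u_steady[0], u_pulsatile[0])    # centerline velocities
```

The paper's finding is consistent with this picture: near the strut, where the local shear is dominated by the instantaneous flow rate, both pulsatile models agree, while the steady model misses the cyclic variation in wall shear stress and re-circulation length.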
Development and Evaluation of a Remedial Numerical Skills Workbook for Navy Training. Final Report.
ERIC Educational Resources Information Center
Bowman, Harry L.; And Others
A remedial Navy-relevant numerical skills workbook was developed and field tested for use in Navy recruit training commands and as part of the Navy Junior Reserve Officers Training curriculum. Research and curriculum specialists from the Department of the Navy and Memphis State University identified Navy-relevant topics requiring numerical skill…
Kim, M. K.; Kim, J. H.; Choi, I. K.
2012-07-01
In this study, a seismic fragility evaluation of the piping system in a nuclear power plant was performed. The evaluation of the seismic fragility of the piping system progressed in three steps. First, several piping element capacity tests were performed. Monotonic and cyclic loading tests were conducted under the same internal pressure level as in actual nuclear power plants to evaluate the performance. Cracks and wall thinning were considered as degradation factors of the piping system. Second, a shaking table test was performed to evaluate the seismic capacity of a selected piping system. Multi-support seismic excitation was used to account for differences in support elevation. Finally, a numerical analysis was performed for the assessment of the seismic fragility of the piping system. As a result, the seismic fragility of the piping system of an NPP in Korea was evaluated by means of a shaking table test and numerical analysis. (authors)
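Fragility results of this kind are commonly summarized with a lognormal fragility curve; the sketch below uses placeholder median capacity and dispersion values, not the study's results for the piping system:

```python
import math

# Sketch of the lognormal seismic fragility form often fitted from test
# and analysis results: P(failure | a) = Phi(ln(a / Am) / beta), where
# Am is the median capacity and beta the logarithmic dispersion.
# Am and beta below are illustrative placeholders.

def fragility(a, a_m, beta):
    """Conditional failure probability at ground acceleration a (g)."""
    z = math.log(a / a_m) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

for a in (0.2, 0.5, 1.0):
    print(f"PGA {a:.1f} g -> P(failure) = {fragility(a, a_m=0.5, beta=0.4):.3f}")
```

By construction the curve passes through 0.5 at the median capacity, so at a = Am = 0.5 g the sketch prints a failure probability of 0.500.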
Numerical Evaluation of Love's Solution for Tidal Amplitude: Extreme tides possible
NASA Astrophysics Data System (ADS)
Hurford, T. A.; Greenberg, R.; Frey, S.
2002-09-01
Numerical evaluation of Love's 1911 solution [1] for the tidal amplitude of a uniform, compressible, self-gravitating body reveals portions of parameter space where extremely large (or even large negative) tides are possible. Love's solution depends only on (a) the ratio of gravity to rigidity, ρgR/μ, and (b) the ratio of rigidity to the Lamé constant, μ/λ. The solution is not continuous; it includes singularities, around which values approach plus or minus infinity, even for parameters in a range plausible for planetary bodies. The effect involves runaway self-gravity. For rocky bodies up to Earth-sized, the solution is well behaved and the tidal amplitude is within ~20% of that given by the standard Love number for an incompressible body. For a moderately larger or less rigid planet, the Love number could be enhanced greatly, possibly to the point of disruption. A thermally evolving planet could hit such singularities as it evolves through elastic-parameter space. Similarly, a growing planet could hit these conditions as ρgR increases, possibly placing constraints on planet formation. For example, a large rocky planet not much larger than the Earth or Venus could reach conditions of extreme tides and be susceptible to possible disruption, conceivably placing an upper limit on growth. The growing core of a giant planet might also be affected. Depending on elastic parameters, planetary satellites may also experience more extreme tides than usually assumed, with potentially important effects on their thermal, geophysical, and orbital evolution. [1] Love, A.E.H., Some Problems of Geodynamics, New York: Dover Publications, 1967.
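For comparison, the standard incompressible-body Love number referenced above has a closed form; the sketch below evaluates it with rough Earth-like parameter values, which are illustrative only:

```python
# Sketch: the degree-2 displacement Love number h2 for a homogeneous,
# incompressible, self-gravitating sphere, against which the abstract's
# ~20% deviation for compressible rocky bodies is measured:
#     h2 = (5/2) / (1 + 19*mu / (2*rho*g*R))
# Note that the effective rigidity term is the inverse of the abstract's
# rho*g*R/mu ratio; parameter values below are rough illustrations.

def h2_incompressible(mu, rho, g, R):
    """Degree-2 tidal Love number (radial displacement) for a uniform body."""
    return 2.5 / (1.0 + 19.0 * mu / (2.0 * rho * g * R))

mu = 80e9       # rigidity, Pa (rocky-mantle scale)
rho = 5500.0    # bulk density, kg/m^3
g = 9.8         # surface gravity, m/s^2
R = 6.37e6      # radius, m
print(f"h2 = {h2_incompressible(mu, rho, g, R):.3f}")
```

As rho*g*R grows relative to mu, h2 approaches its fluid limit of 5/2; the compressible solution instead passes through singularities, which is the runaway self-gravity effect the abstract describes.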
Evaluation of numerical weather predictions performed in the context of the project DAPHNE
NASA Astrophysics Data System (ADS)
Tegoulias, Ioannis; Pytharoulis, Ioannis; Bampzelis, Dimitris; Karacostas, Theodore
2014-05-01
The region of Thessaly in central Greece is one of the main areas of agricultural production in Greece. Severe weather phenomena affect the agricultural production in this region with adverse effects for farmers and the national economy. For this reason, the project DAPHNE aims at tackling the problem of drought by means of weather modification, through the development of the necessary tools to support the application of a rainfall enhancement program. In the present study the numerical weather prediction system WRF-ARW is used in order to assess its ability to represent extreme weather phenomena in the region of Thessaly. WRF is integrated over three domains covering Europe, the Eastern Mediterranean and Central-Northern Greece (Thessaly and a large part of Macedonia) using telescoping nesting with grid spacings of 15 km, 5 km and 1.667 km, respectively. The cases examined span the transitional and warm period (April to September) of the years 2008 to 2013, including days with thunderstorm activity. Model results are evaluated against all available surface observations and radar products, taking into account the spatial characteristics and intensity of the storms. Preliminary results indicate a good level of agreement between the simulated and observed fields as far as the standard parameters (such as temperature, humidity and precipitation) are concerned. Moreover, the model generally exhibits a potential to represent the occurrence of convective activity, but not its exact spatiotemporal characteristics. Acknowledgements: This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013)
2010-01-01
Background Normalization to reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse-transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, the selection of suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out on plants; consequently, qPCR studies on important crops such as cotton have been hampered by the lack of suitable reference genes. Results By the use of two distinct algorithms, implemented by geNorm and NormFinder, we have assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented the most stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
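The idea behind singularity cancellation can be illustrated in one dimension (a minimal sketch, not the Radial-angular transform evaluated in the paper): substituting x = u² turns an inverse-square-root integrand into a smooth one, so ordinary Gauss-Legendre quadrature becomes accurate.

```python
import numpy as np

def gauss_legendre(f, a, b, n=16):
    """Fixed-order Gauss-Legendre quadrature on [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * f(t))

# Target: int_0^1 x**-0.5 dx = 2; the integrand is singular at x = 0
f = lambda x: 1.0 / np.sqrt(x)
naive = gauss_legendre(f, 0.0, 1.0)      # quadrature struggles near x = 0

# Singularity cancellation: substitute x = u**2, dx = 2u du, so the
# transformed integrand f(u**2) * 2u = 2 is smooth (constant, in fact)
g = lambda u: f(u**2) * 2.0 * u
cancelled = gauss_legendre(g, 0.0, 1.0)
```

The transformed rule recovers the exact value to machine precision, while the naive rule retains a visible error from the unresolved singularity.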
Liu, Mengge; Chen, Guang; Guo, Hailong; Fan, Baolei; Liu, Jianjun; Fu, Qiang; Li, Xiu; Lu, Xiaomin; Zhao, Xianen; Li, Guoliang; Sun, Zhiwei; Xia, Lian; Zhu, Shuyun; Yang, Daoshan; Cao, Ziping; Wang, Hua; Suo, Yourui; You, Jinmao
2015-09-16
Determination of plant growth regulators (PGRs) in a signal transduction system (STS) is significant for transgenic food safety, but may be challenged by poor accuracy and analyte instability. In this work, a microwave-assisted extraction-derivatization (MAED) method is developed for six acidic PGRs in oil samples, allowing an efficient (<1.5 h) and facile (one-step) pretreatment. Accuracies are greatly improved, particularly for gibberellin A3 (-2.72 to -0.65%) as compared with those reported (-22 to -2%). Excellent selectivity and quite low detection limits (0.37-1.36 ng mL(-1)) are enabled by fluorescence detection with mass spectrometric monitoring. Results show significant differences in acidic PGRs between transgenic and nontransgenic oils, particularly in 1-naphthaleneacetic acid (1-NAA), implying PGR-induced variations of components and genes. This study provides, for the first time, an accurate and efficient determination of labile PGRs involved in STS and a promising concept for objectively evaluating the safety of transgenic foods. PMID:26309068
Fast and accurate propagation of coherent light
Lewis, R. D.; Beylkin, G.; Monzón, L.
2013-01-01
We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
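For the small-separation regime mentioned above, the Fourier-domain evaluation is the classical angular-spectrum method. A minimal sketch under that assumption (this is not the authors' sum-of-Gaussians/USFFT algorithm):

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled field u0 (N x N, spacing dx) a distance z using
    the Fourier-domain (angular-spectrum) form of the Rayleigh-Sommerfeld
    solution; accurate when z is small relative to the aperture size."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # kz is imaginary for evanescent components, which then decay with z
    kz = 2.0 * np.pi * np.sqrt(arg.astype(complex))
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))
```

A normally incident plane wave should emerge with unit amplitude and a phase advance of 2πz/λ, which makes a convenient sanity check.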
NASA Astrophysics Data System (ADS)
Andersson, A.
2005-08-01
The ability to predict surface defects in outer panels is of vital importance in the automotive industry, especially for brands in the premium car segment. Today, measures to prevent these defects cannot be taken until a test part has been manufactured, which requires a great deal of time and expense. The decision as to whether a certain surface is of acceptable quality is based on subjective evaluation. It is quite possible to detect a defect by measurement, but it is not possible to correlate measured defects with the subjective evaluation. If all results could be based on the same criteria, it would be possible to assess a surface by FE simulations, experiments and subjective evaluation with consistent results. In order to find a solution for the prediction of surface defects, a laboratory tool was manufactured and analysed both experimentally and numerically. The tool represents the area around a fuel filler lid, and the aim was to recreate surface defects, so-called "teddy bear ears". A major problem with the evaluation of such defects is that the panels are evaluated manually, and a great deal of subjectivity is involved in the classification and judgement of the defects. In this study the same computer software was used for the evaluation of both the experimental and the numerical results. In this software the surface defects are indicated by a change in the curvature of the panel. The results showed good agreement between numerical and experimental results. Furthermore, the evaluation software gave a good indication of the appearance of the surface defects compared with an analysis done in existing tools for surface quality measurements. Since the agreement between numerical and experimental results was good, these tools can be used for an early verification of surface defects in outer panels.
Numerical Evaluation and Comparison of Kalantari's Zero Bounds for Complex Polynomials
Dehmer, Matthias; Tsoy, Yury Robertovich
2014-01-01
In this paper, we investigate the performance of zero bounds due to Kalantari and Dehmer by using special classes of polynomials. Our findings are evidenced by numerical as well as analytical results. PMID:25350861
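Zero bounds of this kind give a disk guaranteed to contain all roots of a polynomial. As a hedged illustration, here is the classical Cauchy bound (Kalantari's bounds are a sharper family not reproduced here):

```python
import numpy as np

def cauchy_zero_bound(coeffs):
    """Classical Cauchy bound: every zero of
    p(z) = c[0]*z^n + c[1]*z^(n-1) + ... + c[n]
    satisfies |z| <= 1 + max_k |c[k] / c[0]|."""
    c = np.asarray(coeffs, dtype=complex)
    return 1.0 + np.max(np.abs(c[1:] / c[0]))

# p(z) = z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3)
coeffs = [1, -6, 11, -6]
bound = cauchy_zero_bound(coeffs)   # 1 + 11 = 12, vs. largest |root| = 3
roots = np.roots(coeffs)
```

The looseness of the bound (12 versus the true maximum modulus 3) is exactly why sharper families of bounds, such as Kalantari's, are worth comparing numerically.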
Numerical evaluation of the scale problem on the wind flow of a windbreak
Liu, Benli; Qu, Jianjun; Zhang, Weimin; Tan, Lihai; Gao, Yanhong
2014-01-01
The airflow field around wind fences with different porosities, which are important in determining the efficiency of fences as a windbreak, is typically studied via scaled wind tunnel experiments and numerical simulations. However, the scale problem in wind tunnels or numerical models is rarely researched. In this study, we perform a numerical comparison between a scaled wind-fence experimental model and an actual-sized fence via computational fluid dynamics simulations. The results show that although the general field pattern can be captured in a reduced-scale wind tunnel or numerical model, several flow characteristics near obstacles are not proportional to the size of the model and thus cannot be extrapolated directly. For example, the small vortex behind a low-porosity fence with a scale of 1:50 is approximately 4 times larger than that behind a full-scale fence. PMID:25311174
A numerical model for CO effect evaluation in HT-PEMFCs: Part 1 - Experimental validation
NASA Astrophysics Data System (ADS)
Cozzolino, R.; Chiappini, D.; Tribioli, L.
2016-06-01
In this paper, an in-house numerical model of a high-temperature polymer electrolyte membrane fuel cell is presented. The experimental activity addressed the impact of the CO content in the anode feed gas on cell performance over the whole operating range, and a numerical code was implemented and validated against these experimental results. The proposed numerical model employs a zero-dimensional framework coupled with a semi-empirical approach, which aims at providing a smart and flexible tool for investigating the membrane behavior under different working conditions. Results show an acceptable agreement between numerical and experimental data, confirming the potential and reliability of the developed tool, despite its simplicity.
NASA Astrophysics Data System (ADS)
Zaniboni, Filippo; Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano
2013-04-01
Small landslides are very common along submarine margins, where steep slopes and continuous material deposition increase mass instability and favour collapse, even without earthquake triggering. Events of this kind can have relevant consequences when they occur close to the coast, because they are characterized by sudden acceleration to high velocities, which translates into high tsunamigenic potential. This is the case, for example, of the slide at Rhodes Island (Greece), named the Northern Rhodes Slide (NRS), where unusual 3-4 m waves were registered on 24 March 2002, provoking some damage along the coastal stretch of the city of Rhodes (Papadopoulos et al., 2007). The event was not associated with an earthquake, and eyewitnesses supported the hypothesis of a non-seismic source for the tsunami, placed 1 km offshore. Subsequent marine geophysical surveys (Sakellariou et al., 2002) evidenced the presence of several detachment niches at about 300-400 m depth along the northern steep slope, one of which can be considered responsible for the observed tsunami, consistent with the previously mentioned supposition. In this work, carried out in the frame of the European funded project NearToWarn, we evaluated the tsunami effects due to the NRS by means of numerical modelling: after having reconstructed the sliding body based on morphological assumptions (obtaining an estimated volume of 33 million m3), we simulated the sliding motion through the in-house built code UBO-BLOCK1, adopting a Lagrangian approach and splitting the sliding mass into a "chain" of interacting blocks. This provides the complete dynamics of the landslide, including the shape changes that relevantly influence the tsunami generation. After the application of an intermediate code, accounting for the filtering of the slide impulse through the water depth, the tsunami propagation in the sea around the island of Rhodes and up to the nearby coasts of Turkey was simulated via the
NASA Astrophysics Data System (ADS)
Jung, Minseok; Kihara, Hisashi; Abe, Ken-ichi; Takahashi, Yusuke
2016-06-01
A three-dimensional numerical simulation model that considers the effect of the angle of attack was developed to evaluate plasma flows around reentry vehicles. In this simulation model, thermochemical nonequilibrium of flowfields is considered by using a four-temperature model for high-accuracy simulations. Numerical simulations were performed for the orbital reentry experiment of the Japan Aerospace Exploration Agency, and the results were compared with experimental data to validate the simulation model. A comparison of measured and predicted results showed good agreement. Moreover, to evaluate the effect of the angle of attack, we performed numerical simulations around the Atmospheric Reentry Demonstrator of the European Space Agency by using an axisymmetric model and a three-dimensional model. Although there were no differences in the flowfields in the shock layer between the results of the axisymmetric and the three-dimensional models, the formation of the electron number density, which is an important parameter in evaluating radio-frequency blackout, was greatly changed in the wake region when a non-zero angle of attack was considered. Additionally, the number of altitudes at which radio-frequency blackout was predicted in the numerical simulations declined when using the three-dimensional model for considering the angle of attack.
NASA Astrophysics Data System (ADS)
Nobukawa, Teruyoshi; Nomura, Takanori
2015-08-01
A multilayer recording method using a varifocal lens generated with a phase-only spatial light modulator (SLM) is proposed. A phase-only SLM is used not only to improve the interference efficiency between signal and reference beams but also to shift the focal plane along the optical axis. The focal plane can be shifted by adding a spherical phase to the phase modulation pattern displayed on the SLM. The focal shift produced by adding a spherical phase was numerically confirmed. In addition, the shift selectivity and recording performance of the proposed multilayer recording method were numerically evaluated in coaxial holographic data storage.
Pohlmann, Karl; Ye, Ming; Pohll, Greg; Chapman, Jenny
2007-01-19
Numerical groundwater models are based on conceptualizations of hydrogeologic systems that are by necessity developed from limited information and therefore are simplifications of real conditions. Each aspect (e.g. recharge, hydrostratigraphy, boundary conditions) of the groundwater model is often based on a single conceptual model that is considered to be the best representation given the available data. However, the very nature of their construction means that each conceptual model is inherently uncertain and the available information may be insufficient to refute plausible alternatives, thereby raising the possibility that the flow model is underestimating overall uncertainty. In this study we use the Death Valley Regional Flow System model developed by the U.S. Geological Survey as a framework to predict regional groundwater flow southward into Yucca Flat on the Nevada Test Site. An important aspect of our work is to evaluate the uncertainty associated with multiple conceptual models of groundwater recharge and subsurface hydrostratigraphy and quantify the impacts of this uncertainty on model predictions. In our study, conceptual model uncertainty arises from two sources: (1) alternative interpretations of the hydrostratigraphy in the northern portion of Yucca Flat where, owing to sparse data, the hydrogeologic system can be conceptualized in different ways, and (2) uncertainty in groundwater recharge in the region as evidenced by the existence of several independent approaches for estimating this aspect of the hydrologic system. The composite prediction of groundwater flow is derived from the regional model that formally incorporates the uncertainty in these alternative input models using the maximum likelihood Bayesian model averaging method. An assessment of the joint predictive uncertainty of the input conceptual models is also produced. During this process, predictions of the alternative models are weighted by model probability, which is the degree of
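The probability-weighted combination described above can be sketched generically (the study uses maximum likelihood Bayesian model averaging, which approximates the likelihood terms with information criteria; the values below are hypothetical):

```python
import numpy as np

def bma_composite(predictions, log_likelihoods, priors=None):
    """Probability-weighted composite of predictions from alternative
    conceptual models: a generic sketch of Bayesian model averaging.
    Returns the composite mean, the between-model variance (within-model
    variance is omitted here), and the posterior model probabilities."""
    p = np.asarray(predictions, dtype=float)
    ll = np.asarray(log_likelihoods, dtype=float)
    pri = (np.full(len(ll), 1.0 / len(ll)) if priors is None
           else np.asarray(priors, dtype=float))
    w = pri * np.exp(ll - ll.max())      # subtract max for numerical stability
    w /= w.sum()                         # posterior model probabilities
    mean = np.sum(w * p)
    var_between = np.sum(w * (p - mean) ** 2)
    return mean, var_between, w

# Three hypothetical recharge/hydrostratigraphy alternatives
mean, var, w = bma_composite([1.0e5, 1.4e5, 0.9e5], [-10.0, -11.5, -12.0])
```

The composite prediction always lies between the individual model predictions, and the model with the highest likelihood receives the largest weight.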
Technology Transfer Automated Retrieval System (TEKTRAN)
In-situ determination of ice formation and thawing in soils is difficult despite its importance for many environmental processes. A sensible heat balance (SHB) method using a sequence of heat pulse probes has been shown to accurately measure water evaporation in subsurface soil, and it has the poten...
Yoshimi, Satoshi; Ochi, Hidenori; Murakami, Eisuke; Uchida, Takuro; Kan, Hiromi; Akamatsu, Sakura; Hayes, C Nelson; Abe, Hiromi; Miki, Daiki; Hiraga, Nobuhiko; Imamura, Michio; Aikata, Hiroshi; Chayama, Kazuaki
2015-01-01
Daclatasvir and asunaprevir dual oral therapy is expected to achieve high sustained virological response (SVR) rates in patients with HCV genotype 1b infection. However, presence of the NS5A-Y93H substitution at baseline has been shown to be an independent predictor of treatment failure for this regimen. By using the Invader assay, we developed a system to rapidly and accurately detect the presence of mutant strains and evaluate the proportion of patients harboring a pre-treatment Y93H mutation. This assay system, consisting of nested PCR followed by Invader reaction with well-designed primers and probes, attained a high overall assay success rate of 98.9% among a total of 702 Japanese HCV genotype 1b patients. Even in serum samples with low HCV titers, more than half of the samples could be successfully assayed. Our assay system showed a better lower detection limit of Y93H proportion than using direct sequencing, and Y93H frequencies obtained by this method correlated well with those of deep-sequencing analysis (r = 0.85, P < 0.001). The proportion of the patients with the mutant strain estimated by this assay was 23.6% (164/694). Interestingly, patients with the Y93H mutant strain showed significantly lower ALT levels (p = 8.8 × 10(-4)), higher serum HCV RNA levels (p = 4.3 × 10(-7)), and lower HCC risk (p = 6.9 × 10(-3)) than those with the wild type strain. Because the method is both sensitive and rapid, the NS5A-Y93H mutant strain detection system established in this study may provide important pre-treatment information valuable not only for treatment decisions but also for prediction of disease progression in HCV genotype 1b patients. PMID:26083687
Xiao, Meng; Pang, Lu; Chen, Sharon C-A; Fan, Xin; Zhang, Li; Li, Hai-Xia; Hou, Xin; Cheng, Jing-Wei; Kong, Fanrong; Zhao, Yu-Pei; Xu, Ying-Chun
2016-01-01
Species identification of Nocardia is not straightforward due to rapidly evolving taxonomy, insufficient discriminatory power of conventional phenotypic methods and also of single gene locus analysis including 16S rRNA gene sequencing. Here we evaluated the ability of a 5-locus (16S rRNA, gyrB, secA1, hsp65 and rpoB) multilocus sequence analysis (MLSA) approach as well as that of matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) in comparison with sequencing of the 5'-end 606 bp partial 16S rRNA gene to provide identification of 25 clinical isolates of Nocardia. The 5'-end 606 bp 16S rRNA gene sequencing successfully assigned 24 of 25 (96%) clinical isolates to species level, namely Nocardia cyriacigeorgica (n = 12, 48%), N. farcinica (n = 9, 36%), N. abscessus (n = 2, 8%) and N. otitidiscaviarum (n = 1, 4%). MLSA showed concordance with 16S rRNA gene sequencing results for the same 24 isolates. However, MLSA was able to identify the remaining isolate as N. wallacei, and clustered N. cyriacigeorgica into three subgroups. None of the clinical isolates were correctly identified to the species level by MALDI-TOF MS analysis using the manufacturer-provided database. A small "in-house" spectral database was established incorporating spectra of five clinical isolates representing the five species identified in this study. After complementation with the "in-house" database, of the remaining 20 isolates, 19 (95%) were correctly identified to species level (score ≥ 2.00) and one (an N. abscessus strain) to genus level (score ≥ 1.70 and < 2.00). In summary, MLSA showed superior discriminatory power compared with the 5'-end 606 bp partial 16S rRNA gene sequencing for species identification of Nocardia. MALDI-TOF MS can provide rapid and accurate identification but is reliant on a robust mass spectra database. PMID:26808813
NASA Astrophysics Data System (ADS)
Lea, James M.; Mair, Douglas WF; Rea, Brice R.
2014-05-01
Several different methodologies have previously been employed in the tracking of glacier terminus change, though a systematic comparison of these has not been undertaken. Similarly, the suitability of using the resulting data for the calibration/validation of numerical models has not been evaluated. This could be especially significant for flowline modelling of tidewater glaciers, where discrepancies between the different terminus tracking methods could potentially introduce bias into model calibrations. The choice of method for quantifying terminus change of tidewater glaciers is therefore significant from both glacier monitoring, and numerical modelling viewpoints. In this study we evaluate three existing methodologies that have been widely used to track terminus change (the centreline, bow and box methods) against a full range of idealised glaciological scenarios, and examples of 6 real glaciers in Greenland. We also evaluate two new methodologies that aim to reduce measurement error compared to the existing methodologies, and allow direct comparison of results to those of flowline models. These are (1) a modification to the box method, that can account for termini retreating through fjords that change orientation (termed the curvilinear box method [CBM]), and (2) a method that determines the average terminus position relative to the glacier centreline using an inverse distance weighting extrapolation (termed the extrapolated centreline method [ECM]). No single method achieved complete accuracy for all scenarios though the ECM was best, being able to successfully account for variable fjord orientation, width and terminus geometry. Only results from the centreline, CBM and ECM will be directly comparable to flowline model output, though the CBM and ECM are likely to be the most accurate when applied to real world scenarios.
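The inverse-distance-weighting step underlying the ECM can be sketched generically (the ECM's construction relative to the glacier centreline is more involved than this plain IDW estimate):

```python
import numpy as np

def idw(sample_xy, sample_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of sampled values at one query
    point: weights are 1/d**power, so nearby samples dominate."""
    xy = np.asarray(sample_xy, dtype=float)
    vals = np.asarray(sample_vals, dtype=float)
    d = np.linalg.norm(xy - np.asarray(query_xy, dtype=float), axis=1)
    if np.any(d < eps):                 # query coincides with a sample point
        return float(vals[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * vals) / np.sum(w))
```

A query point midway between two samples receives the average of their values, and a query at a sample point reproduces that sample exactly.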
NASA Technical Reports Server (NTRS)
Lummerzheim, D.; Lilensten, J.
1994-01-01
Auroral electron transport calculations are a critical part of auroral models. We evaluate a numerical solution to the transport and energy degradation problem. The numerical solution is verified by reproducing simplified problems to which analytic solutions exist, internal self-consistency tests, comparison with laboratory experiments of electron beams penetrating a collision chamber, and by comparison with auroral observations, particularly the emission ratio of the N2 second positive to N2(+) first negative emissions. Our numerical solutions agree with range measurements in collision chambers. The calculated N(2)2P to N2(+)1N emission ratio is independent of the spectral characteristics of the incident electrons, and agrees with the value observed in aurora. Using different sets of energy loss cross sections and different functions to describe the energy distribution of secondary electrons that emerge from ionization collisions, we discuss the uncertainties of the solutions to the electron transport equation resulting from the uncertainties of these input parameters.
Toyoda, Masayuki; Ozaki, Taisuke
2009-03-28
A numerical method to calculate the four-center electron-repulsion integrals for strictly localized pseudoatomic orbital basis sets has been developed. Compared to the conventional Gaussian expansion method, this method has an advantage in the ease of combination with O(N) density functional calculations. Additional mathematical derivations are also presented including the analytic derivatives of the integrals with respect to atomic positions and spatial damping of the Coulomb interaction due to the screening effect. In the numerical test for a simple molecule, the convergence up to 10(-5) hartree in energy is successfully obtained with a feasible cost of computation. PMID:19334815
Numerical models to evaluate the temperature increase induced by ex vivo microwave thermal ablation
NASA Astrophysics Data System (ADS)
Cavagnaro, M.; Pinto, R.; Lopresto, V.
2015-04-01
Microwave thermal ablation (MTA) therapies exploit the local absorption of an electromagnetic field at microwave (MW) frequencies to destroy unhealthy tissue, by way of a very high temperature increase (about 60 °C or higher). To develop reliable interventional protocols, numerical tools able to correctly foresee the temperature increase obtained in the tissue would be very useful. In this work, different numerical models of the dielectric and thermal property changes with temperature were investigated, looking at the simulated temperature increments and at the size of the achievable zone of ablation. To assess the numerical data, measurement of the temperature increases close to a MTA antenna were performed in correspondence with the antenna feed-point and the antenna cooling system, for increasing values of the radiated power. Results show that models not including the changes of the dielectric and thermal properties can be used only for very low values of the power radiated by the antenna, whereas a good agreement with the experimental values can be obtained up to 20 W if water vaporization is included in the numerical model. Finally, for higher power values, a simulation that dynamically includes the tissue’s dielectric and thermal property changes with the temperature should be performed.
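As a rough illustration of the conduction part of such models, here is one explicit finite-difference step of 1-D heat conduction with a volumetric microwave source (a hypothetical sketch; the paper's models additionally handle temperature-dependent properties and vaporization):

```python
import numpy as np

def heat_step_1d(T, dt, dx, k, rho_c, q_src):
    """One explicit (FTCS) step of 1-D heat conduction with a volumetric
    microwave deposition term q_src (W/m^3); blood perfusion is omitted
    for ex vivo tissue. Stability requires dt <= rho_c * dx**2 / (2 * k).
    Boundary nodes are held fixed (Dirichlet)."""
    Tn = T.copy()
    Tn[1:-1] += (dt / rho_c) * (
        k * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2 + q_src[1:-1])
    return Tn
```

With a zero source, a uniform temperature field is unchanged; with a localized source, the heated region warms by dt·q/(ρc) per step.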
NASA Astrophysics Data System (ADS)
Subhra Mukherji, Suchi; Banerjee, Arindam
2010-11-01
We will discuss findings from our numerical investigation of the hydrodynamic performance of horizontal axis hydrokinetic turbines (HAHkT) under different turbine geometries and flow conditions. Hydrokinetic turbines are a class of zero-head hydropower systems that utilize the kinetic energy of flowing water to drive a generator. However, such turbines often suffer from low efficiency, which is primarily controlled by tip-speed ratio, solidity, angle of attack and number of blades. A detailed CFD study was performed using two-dimensional and three-dimensional numerical models to examine the effect of each of these parameters on the performance of small HAHkTs having power capacities <= 10 kW. The two-dimensional numerical results provide an optimum angle of attack that maximizes the lift as well as the lift-to-drag ratio, yielding maximum power output. The three-dimensional numerical studies estimate the optimum turbine solidity and blade number that produce the maximum power coefficient at a given tip-speed ratio. In addition, simulations were also performed to observe the axial velocity deficit downstream of the turbine rotor for different tip-speed ratios to obtain both qualitative and quantitative details about stall-delay phenomena and the energy loss suffered by the turbine under ambient flow conditions.
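The two performance measures named above have standard definitions; a short sketch (the 10 kW operating point below is hypothetical):

```python
import math

RHO_WATER = 1000.0  # kg/m^3, fresh water

def power_coefficient(shaft_power, rho, rotor_area, v):
    """Cp = P / (0.5 * rho * A * v^3): fraction of the kinetic-energy flux
    through the rotor area converted to shaft power (Betz limit ~0.593)."""
    return shaft_power / (0.5 * rho * rotor_area * v**3)

def tip_speed_ratio(omega, radius, v):
    """TSR = omega * R / v (blade-tip speed over free-stream speed)."""
    return omega * radius / v

# Hypothetical 10 kW unit: 1.5 m rotor radius in a 2.25 m/s current
area = math.pi * 1.5**2
cp = power_coefficient(10e3, RHO_WATER, area, 2.25)
```

For this hypothetical operating point Cp comes out near 0.25, comfortably below the Betz limit, which is a useful sanity check on any simulated power curve.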
NASA Astrophysics Data System (ADS)
Versluis, Louis; Ziegler, Tom
1988-01-01
An algorithm, based on numerical integration, has been proposed for the evaluation of analytical energy gradients within the Hartree-Fock-Slater (HFS) method. The utility of this algorithm in connection with molecular structure optimization is demonstrated by calculations on organics, main group molecules, and transition metal complexes. The structural parameters obtained from HFS calculations are in at least as good agreement with experiment as structures obtained from ab initio HF calculations. The time required to evaluate the energy gradient by numerical integration constitutes only a fraction (40%-25%) of the elapsed time in a full HFS-SCF calculation. The algorithm is also suitable for density functional methods with exchange-correlation potential different from that employed in the HFS method.
NASA Astrophysics Data System (ADS)
Volkov, K. N.
2007-09-01
The total-pressure loss in gas turbines is evaluated. Reynolds-averaged Navier-Stokes equations are used for the numerical calculations. The Spalart-Allmaras model, the k-ε model, and the two-layer model, together with modifications allowing for the rotation of the flow and the curvature of streamlines, are used to close these equations. The role of the different corrections to the turbulence models in the accuracy of the calculated estimates is elucidated.
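A common way to quantify total-pressure loss in a turbine cascade is the stagnation-pressure loss coefficient (the exact definition used in the paper may differ; the numbers below are hypothetical):

```python
def stagnation_pressure_loss_coefficient(p01, p02, p2):
    """Y = (p01 - p02) / (p01 - p2): drop in stagnation pressure between
    cascade inlet (01) and exit (02), normalized by the exit dynamic head
    (p01 - p2, with p2 the exit static pressure)."""
    return (p01 - p02) / (p01 - p2)

# Hypothetical cascade: 2 kPa stagnation-pressure drop, 40 kPa dynamic head
y = stagnation_pressure_loss_coefficient(110e3, 108e3, 70e3)  # Y = 0.05
```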
NASA Technical Reports Server (NTRS)
George, William K.; Rae, William J.; Woodward, Scott H.
1991-01-01
The importance of frequency response considerations in the use of thin-film gages for unsteady heat transfer measurements in transient facilities is considered, and methods for evaluating it are proposed. A departure frequency response function is introduced and illustrated by an existing analog circuit. A Fresnel integral temperature which possesses the essential features of the film temperature in transient facilities is introduced and is used to evaluate two numerical algorithms. Finally, criteria are proposed for the use of finite-difference algorithms for the calculation of the unsteady heat flux from a sampled temperature signal.
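The final step described above, recovering the unsteady heat flux from a sampled surface-temperature signal, can be sketched with the classical one-dimensional semi-infinite-substrate discretization of Cook and Felderman. This is a standard algorithm of the kind evaluated, not necessarily the paper's; the substrate thermal product and flux level below are hypothetical.

```python
import numpy as np

def cook_felderman(t, T, rck):
    """Surface heat flux from a sampled surface-temperature history T(t),
    assuming 1-D conduction into a semi-infinite substrate with thermal
    product rck = rho * c * k (Cook-Felderman discretization)."""
    q = np.zeros_like(T)
    coef = 2.0 * np.sqrt(rck / np.pi)
    for n in range(1, len(t)):
        dT = T[1:n + 1] - T[:n]
        denom = np.sqrt(t[n] - t[:n]) + np.sqrt(t[n] - t[1:n + 1])
        q[n] = coef * np.sum(dT / denom)
    return q

# Consistency check: for a constant flux q0, the semi-infinite substrate
# has the exact response T(t) = 2*q0*sqrt(t / (pi*rho*c*k)).
rck = 1.5e6      # hypothetical rho*c*k of the gage substrate
q0 = 5.0e4       # hypothetical constant flux, W/m^2
t = np.linspace(0.0, 0.5, 2001)
T = 2.0 * q0 * np.sqrt(t / (np.pi * rck))
q = cook_felderman(t, T, rck)
err = abs(q[-1] - q0) / q0   # relative error at the final sample
```

Away from the first few samples, the discretization recovers the constant flux to well under a percent, which is why it is a common baseline for thin-film gage data reduction.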
Chen, Sharon C-A.; Fan, Xin; Zhang, Li; Li, Hai-Xia; Hou, Xin; Cheng, Jing-Wei; Kong, Fanrong; Zhao, Yu-Pei; Xu, Ying-Chun
2016-01-01
Species identification of Nocardia is not straightforward due to rapidly evolving taxonomy, insufficient discriminatory power of conventional phenotypic methods and also of single gene locus analysis including 16S rRNA gene sequencing. Here we evaluated the ability of a 5-locus (16S rRNA, gyrB, secA1, hsp65 and rpoB) multilocus sequence analysis (MLSA) approach as well as that of matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) in comparison with sequencing of the 5’-end 606 bp partial 16S rRNA gene to provide identification of 25 clinical isolates of Nocardia. The 5’-end 606 bp 16S rRNA gene sequencing successfully assigned 24 of 25 (96%) clinical isolates to species level, namely Nocardia cyriacigeorgica (n = 12, 48%), N. farcinica (n = 9, 36%), N. abscessus (n = 2, 8%) and N. otitidiscaviarum (n = 1, 4%). MLSA showed concordance with 16S rRNA gene sequencing results for the same 24 isolates. However, MLSA was able to identify the remaining isolate as N. wallacei, and clustered N. cyriacigeorgica into three subgroups. None of the clinical isolates were correctly identified to the species level by MALDI-TOF MS analysis using the manufacturer-provided database. A small “in-house” spectral database was established incorporating spectra of five clinical isolates representing the five species identified in this study. After complementation with the “in-house” database, of the remaining 20 isolates, 19 (95%) were correctly identified to species level (score ≥ 2.00) and one (an N. abscessus strain) to genus level (score ≥ 1.70 and < 2.00). In summary, MLSA showed superior discriminatory power compared with the 5’-end 606 bp partial 16S rRNA gene sequencing for species identification of Nocardia. MALDI-TOF MS can provide rapid and accurate identification but is reliant on a robust mass spectra database. PMID:26808813
Numerical evaluation of voltage gradient constraints on electrokinetic injection of amendments
NASA Astrophysics Data System (ADS)
Wu, Ming Zhi; Reynolds, David A.; Prommer, Henning; Fourie, Andy; Thomas, David G.
2012-03-01
A new numerical model is presented that simulates groundwater flow and multi-species reactive transport under hydraulic and electrical gradients. Coupled into the existing reactive transport model PHT3D, the model was verified against published analytical and experimental studies, and has applications in remediation cases where geochemistry plays an important role. A promising method for remediation of low-permeability aquifers is the electrokinetic transport of amendments for in situ chemical oxidation. Numerical modelling showed that amendment injection resulted in the voltage gradient adjacent to the cathode decreasing below a linear gradient, producing a lower achievable concentration of the amendment in the medium. An analytical method is derived to estimate the achievable amendment concentration based on the inlet concentration. Even with low achievable concentrations, analysis showed that electrokinetic remediation is feasible due to its ability to deliver a significantly higher mass flux in low-permeability media than under a hydraulic gradient.
Numerical evaluation of the jet noise source distribution from far-field cross correlations
NASA Technical Reports Server (NTRS)
Maestrello, L.; Liu, C.-H.
1976-01-01
This paper develops techniques to determine the relationship between the unknown source correlation function and the correlation of scattered amplitudes in a jet. This study has application to the determination of forward motion effects. The technique has been developed and tested on a model jet with high subsonic flow. A numerical solution was obtained by solving the Fredholm integral equation of the first kind. Interpretation of the apparent source distribution and its application to flight testing are provided.
Numerical evaluation of a novel high-temperature superconductor-based quasi-diamagnetic motor
NASA Astrophysics Data System (ADS)
Racz, Arpad; Vajda, Istvan
2014-05-01
An investigation into the application of high-temperature superconductors (HTS) in electrical power systems is being pursued at the Budapest University of Technology and Economics, Department of Electric Power Engineering. In this paper we propose a novel electrical machine construction based on the quasi-diamagnetic behaviour of HTS materials. The basic operating principle of this machine is introduced with detailed numerical simulations, and a possible geometric layout is also presented.
Large deviations in boundary-driven systems: Numerical evaluation and effective large-scale behavior
NASA Astrophysics Data System (ADS)
Bunin, Guy; Kafri, Yariv; Podolsky, Daniel
2012-07-01
We study rare events in systems of diffusive fields driven out of equilibrium by the boundaries. We present a numerical technique and use it to calculate the probabilities of rare events in one and two dimensions. Using this technique, we show that the probability density of a slowly varying configuration can be captured with a small number of long-wavelength modes. For a configuration which varies rapidly in space this description can be complemented by a local-equilibrium assumption.
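The observation that a slowly varying configuration is captured by a small number of long-wavelength modes can be illustrated on a synthetic profile (not the paper's driven-diffusive dynamics): project a smooth deviation from the linear steady profile onto its first few sine modes and compare reconstruction errors.

```python
import numpy as np

N = 256
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def sine_coeff(f, k):
    """Trapezoidal estimate of 2 * integral of f(x) sin(k pi x) on [0, 1]."""
    g = f * np.sin(k * np.pi * x)
    return 2.0 * float(np.sum(0.5 * (g[1:] + g[:-1])) * dx)

# Smooth, synthetic deviation of the density profile from the linear
# steady state; it vanishes at the driven boundaries.
dev = x * (1.0 - x)

def reconstruct(n_modes):
    rec = np.zeros_like(x)
    for k in range(1, n_modes + 1):
        rec += sine_coeff(dev, k) * np.sin(k * np.pi * x)
    return rec

def rel_err(rec):
    return float(np.sqrt(np.mean((rec - dev) ** 2) / np.mean(dev ** 2)))

err1 = rel_err(reconstruct(1))   # one long-wavelength mode
err4 = rel_err(reconstruct(4))   # four modes already capture the profile
```

For a smooth profile the sine coefficients decay rapidly, so a handful of long-wavelength modes suffices, in the spirit of the effective large-scale description above.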
Stress analysis and damage evaluation of flawed composite laminates by hybrid-numerical methods
NASA Technical Reports Server (NTRS)
Yang, Yii-Ching
1992-01-01
Structural components in flight vehicles often contain inherent flaws such as microcracks, voids, holes, and delaminations. These defects degrade structures in much the same way as in-service damage such as impact, corrosion, and erosion. It is therefore important to know how useful a structural component remains, and whether it can survive, in the presence of such flaws and damage. To understand the behavior and limitations of these components, researchers usually perform experimental tests or theoretical analyses on structures with simulated flaws. However, neither approach alone has been completely successful. As Durelli states, 'Seldom does one method give a complete solution, with the most efficiency.' An example of this principle is seen in photomechanics, where additional strain-gage testing can only average stresses at locations of high concentration. On the other hand, theoretical analyses, including numerical analyses, rely on simplified assumptions that may not reflect actual boundary conditions. Hybrid-numerical methods, which combine photomechanics and numerical analysis, have been used to address this shortcoming since the 1950s, but their application was limited until the 1970s, when modern computer codes became available. In recent years, researchers have used data obtained from photoelasticity, laser speckle, holography, and moire interferometry as input to finite element analyses of metals. Nevertheless, little of this work has addressed composite laminates. This research is therefore dedicated to this highly anisotropic material.
NASA Astrophysics Data System (ADS)
Prodanovic, M.; Soares, F.; Huh, C.
2014-12-01
Ferrofluid is a stable dispersion of paramagnetic nanosize particles in a liquid carrier which become magnetized in the presence of a magnetic field. The functionalized coating and small size of the nanoparticles allow them to flow through porous media without significantly compromising permeability and with little retention. We numerically and experimentally investigate the potential of ferrofluid in mobilizing trapped non-wetting phase. The numerical method is based on a coupled level set model for two-phase flow and an immersed interface method for finding the magnetic field strength, and provides the equilibrium configuration of an oleic (non-wetting) phase inside a pore geometry in the presence of dispersed excitable nanoparticles in the surrounding water phase. The magnetic pressures near the fluid-fluid interface depend locally on the magnetic field intensity and direction, which in turn depend on the fluid configuration. Interfaces represent magnetic permeability discontinuities and hence cause disturbances in the spatial distribution of the magnetic field. Experiments are conducted in micromodels with high pore-to-throat aspect size ratio. Both numerical and experimental results show that stresses produced by the magnetization of ferrofluids can help overcome strong capillary pressures and displace trapped ganglia in the presence of an additional mobilizing force such as increased fluid flux or surfactant injection.
An experimental evaluation of a helicopter rotor section designed by numerical optimization
NASA Technical Reports Server (NTRS)
Hicks, R. M.; Mccroskey, W. J.
1980-01-01
The wind tunnel performance of a 10-percent thick helicopter rotor section designed by numerical optimization is presented. The model was tested at Mach numbers from 0.2 to 0.84 with Reynolds numbers ranging from 1,900,000 at Mach 0.2 to 4,000,000 at Mach numbers above 0.5. The airfoil section exhibited maximum lift coefficients greater than 1.3 at Mach numbers below 0.45 and a drag divergence Mach number of 0.82 for lift coefficients near 0. A moderate 'drag creep' is observed at low lift coefficients for Mach numbers greater than 0.6.
NASA Astrophysics Data System (ADS)
Kassanos, Ioannis; Chrysovergis, Marios; Anagnostopoulos, John; Papantonis, Dimitris; Charalampopoulos, George
2016-06-01
In this paper the effect of impeller design variations on the performance of a centrifugal pump running as a turbine is presented. Numerical simulations were performed after introducing various modifications to the design for various operating conditions. Specifically, the effects of the inlet edge shape, the meridional channel width, the number of blades and the addition of splitter blades on impeller performance were investigated. The results showed that an increase in efficiency can be achieved by increasing the number of blades and by introducing splitter blades.
NASA Astrophysics Data System (ADS)
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-01
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
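The path-sampling recipe described above can be sketched on a toy conjugate-Gaussian model whose marginal likelihood is known in closed form, so the thermodynamic-integration estimate can be checked. The Metropolis sampler, power schedule, and sample counts below are illustrative choices, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1); the marginal
# likelihood is analytic, giving a reference value for the TI estimate.
y = rng.normal(0.5, 1.0, size=10)
n, Sy, Syy = len(y), float(y.sum()), float((y ** 2).sum())
log_Z_exact = (-0.5 * n * np.log(2.0 * np.pi)
               - 0.5 * np.log(1.0 + n)
               - 0.5 * (Syy - Sy ** 2 / (1.0 + n)))

def log_lik(theta):
    return -0.5 * n * np.log(2.0 * np.pi) - 0.5 * float(((y - theta) ** 2).sum())

def log_prior(theta):
    return -0.5 * np.log(2.0 * np.pi) - 0.5 * theta ** 2

def mean_log_lik(beta, n_burn=500, n_samp=3000, step=0.8):
    """E[log L] under the power posterior p_beta ~ prior * L^beta (Metropolis)."""
    theta = 0.0
    lp = log_prior(theta) + beta * log_lik(theta)
    acc = []
    for i in range(n_burn + n_samp):
        prop = theta + step * rng.normal()
        lp_prop = log_prior(prop) + beta * log_lik(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if i >= n_burn:
            acc.append(log_lik(theta))
    return float(np.mean(acc))

# Path from prior (beta = 0) to posterior (beta = 1), with nodes
# concentrated near beta = 0 where the integrand changes fastest.
betas = np.linspace(0.0, 1.0, 21) ** 3
Elog = np.array([mean_log_lik(b) for b in betas])
# log Z = integral over beta of E_beta[log L] (trapezoidal rule)
log_Z_ti = float(np.sum(0.5 * (Elog[1:] + Elog[:-1]) * np.diff(betas)))
```

The integrand E_beta[log L] increases monotonically along the path, and the trapezoidal estimate lands close to the analytic value even for this short chain.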
NASA Technical Reports Server (NTRS)
Yaros, S. F.; Carlson, J. R.; Chandrasekaran, B.
1986-01-01
An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equations; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.
EVALUATION OF U10MO FUEL PLATE IRRADIATION BEHAVIOR VIA NUMERICAL AND EXPERIMENTAL BENCHMARKING
Samuel J. Miller; Hakan Ozaltun
2012-11-01
This article analyzes dimensional changes due to irradiation of monolithic plate-type nuclear fuel and compares the results with finite element analysis of the plates during fabrication and irradiation. Monolithic fuel plates tested in the Advanced Test Reactor (ATR) at Idaho National Lab (INL) are being used to benchmark proposed fuel performance for several high power research reactors. Post-irradiation metallographic images of plates sectioned at the midpoint were analyzed to determine dimensional changes of the fuel and the cladding response. A constitutive model of the fabrication process and irradiation behavior of the tested plates was developed using the general purpose commercial finite element analysis package, Abaqus. Using calculated burn-up profiles of irradiated plates to model the power distribution, and including irradiation behaviors such as swelling and irradiation enhanced creep, model simulations allow analysis of plate parameters that are either impossible or infeasible to obtain in an experimental setting. The development and progression of fabrication-induced stress concentrations at the plate edges was of primary interest, as these locations have a unique stress profile during irradiation. Additionally, a comparison between 2D and 3D models was performed to optimize the analysis methodology, in particular the ability of 2D and 3D models to account for the out-of-plane stresses that produce 3-dimensional creep behavior. Results show that assumptions made in 2D models for the out-of-plane stresses and strains cannot capture the 3-dimensional physics accurately, and thus 2D approximations are not computationally accurate. Stress-strain fields are dependent on plate geometry and irradiation conditions; thus, if stress-based criteria are used to predict plate behavior (as opposed to material impurities, fine micro-structural defects, or sharp power gradients), a unique 3D finite element formulation for each plate is required.
Evaluation of Sulfur Flow Emplacement on Io from Galileo Data and Numerical Modeling
NASA Technical Reports Server (NTRS)
Williams, David A.; Greeley, Ronald; Lopes, Rosaly M. C.; Davies, Ashley G.
2001-01-01
Galileo images of bright lava flows surrounding Emakong Patera have been analyzed and numerical modeling has been performed to assess whether these flows could have resulted from the emplacement of sulfur lavas on Io. Images from the solid-state imaging (SSI) camera show that these bright, white to yellow Emakong flows are up to 370 km long and contain dark, sinuous features that are interpreted to be lava conduits, ~300-500 m wide and >100 km long. Near-Infrared Mapping Spectrometer data yield a temperature estimate of 344 K ± 60 K (~71 °C) within the Emakong caldera. We suggest that these bright flows likely resulted from either sulfur lavas or silicate lavas that have undergone extensive cooling, pyroclastic mantling, and/or alteration with bright sulfurous materials. The Emakong bright flows have an estimated volume of ~250-350 km³, similar to some of the smaller Columbia River Basalt flows. If the Emakong flows did result from effusive sulfur eruptions, then they are orders of magnitude greater in volume than any terrestrial sulfur flows. Our numerical modeling indicates that such flows are capable of traveling tens to hundreds of kilometers, consistent with the predictions of Sagan. Our modeled flow distances are also consistent with the measured lengths of the Emakong channels and bright flows.
NASA Technical Reports Server (NTRS)
Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; Prior, D. L.; Scalia, G. M.; Thomas, J. D.; Garcia, M. J.
2000-01-01
The simplified Bernoulli equation relates fluid convective energy derived from flow velocities to a pressure gradient and is commonly used in clinical echocardiography to determine pressure differences across stenotic orifices. Its application to pulmonary venous flow has not been described in humans. Twelve patients undergoing cardiac surgery had simultaneous high-fidelity pulmonary venous and left atrial pressure measurements and pulmonary venous pulsed Doppler echocardiography performed. Convective gradients for the systolic (S), diastolic (D), and atrial reversal (AR) phases of pulmonary venous flow were determined using the simplified Bernoulli equation and correlated with measured actual pressure differences. A linear relationship was observed between the convective (y) and actual (x) pressure differences for the S (y = 0.23x + 0.0074, r = 0.82) and D (y = 0.22x + 0.092, r = 0.81) waves, but not for the AR wave (y = 0.030x + 0.13, r = 0.10). Numerical modeling resulted in similar slopes for the S (y = 0.200x - 0.127, r = 0.97), D (y = 0.247x - 0.354, r = 0.99), and AR (y = 0.087x - 0.083, r = 0.96) waves. Consistent with numerical modeling, the convective term strongly correlates with but significantly underestimates actual gradient because of large inertial forces.
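The convective term referred to above is the simplified Bernoulli relation, delta_p = 4 v^2 (v in m/s, delta_p in mmHg). A minimal sketch, with hypothetical velocities, also using the study's reported S-wave regression slope to illustrate the underestimation:

```python
def simplified_bernoulli(v_ms):
    """Convective pressure difference: delta_p = 4 v^2 (v in m/s, mmHg)."""
    return 4.0 * v_ms ** 2

# Hypothetical pulmonary venous wave velocities (m/s), illustration only
v_s, v_d = 0.6, 0.5
dp_s = simplified_bernoulli(v_s)   # S-wave convective gradient, mmHg
dp_d = simplified_bernoulli(v_d)   # D-wave convective gradient, mmHg

# The study's S-wave regression, y = 0.23x + 0.0074, implies the
# convective term y recovers only ~23% of the actual gradient x;
# inverting it gives a rough actual-gradient estimate.
actual_s_est = (dp_s - 0.0074) / 0.23
```

The inverted estimate is several times larger than the convective value, mirroring the paper's finding that inertial forces dominate the actual gradient.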
NASA Astrophysics Data System (ADS)
Troppová, Eva; Tippner, Jan; Hrčka, Richard
2016-04-01
This paper presents an experimental measurement of thermal properties of medium density fiberboards with different thicknesses (12, 18 and 25 mm) and sample sizes (50 × 50 mm and 100 × 100 mm) by the quasi-stationary method. The quasi-stationary method is a transient method which allows measurement of three thermal parameters (thermal conductivity, thermal diffusivity and heat capacity). The experimentally obtained values were used to verify a numerical model and furthermore served as input parameters for the numerical probabilistic analysis. The sensitivity of measured outputs (time course of temperature) to influential factors (density, heat transfer coefficient and thermal conductivities) was established and described by Spearman's rank correlation coefficients. The dependence of thermal properties on density was confirmed by the measured data. Density also proved to be an important factor in the sensitivity analyses, as it correlated highly with all output parameters. The accuracy of the measurement method can be improved based on the results of the probabilistic analysis. The relevancy of the experiment is mainly influenced by the choice of a proper ratio between thickness and width of samples.
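The sensitivity measure used above, Spearman's rank correlation, can be sketched on synthetic data (the input factors and output below are illustrative, not the paper's measurements):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

rng = np.random.default_rng(1)
# Hypothetical probabilistic inputs: board density (kg/m^3) and an
# unrelated nuisance factor; the "output" mimics a temperature reading
# rising monotonically with density, plus measurement noise.
density = rng.uniform(600.0, 800.0, 200)
nuisance = rng.normal(size=200)
output = 0.01 * density + 0.05 * rng.normal(size=200)

rho_density = spearman_rho(density, output)     # strong rank correlation
rho_nuisance = spearman_rho(nuisance, output)   # near zero
```

Because it works on ranks, the coefficient captures any monotone dependence, which is why it suits this kind of probabilistic sensitivity screening.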
Le Cann, Sophie; Galland, Alexandre; Rosa, Benoît; Le Corroller, Thomas; Pithioux, Martine; Argenson, Jean-Noël; Chabrand, Patrick; Parratte, Sébastien
2014-09-01
Most acetabular cups implanted today are press-fit impacted cementless. Anchorage begins with the primary stability given by insertion of a slightly oversized cup. This primary stability is key to obtaining bone ingrowth and secondary stability. We tested the hypothesis that primary stability of the cup is related to surface roughness of the implant, using both experimental and numerical models to analyze how three levels of surface roughness (micro, macro and combined) affect the primary stability of the cup. We also investigated the effect of differences in diameter between the cup and its substrate, and of insertion force, on the cups' primary stability. The results of our study show that primary stability depends on the surface roughness of the cup. The presence of macro-roughness on the peripheral ring is found to decrease primary stability; there was excessive abrasion of the substrate, damaging it and leading to poor primary stability. Numerical modeling indicates that oversizing the cup compared to its substrate has an impact on primary stability, as has insertion force. PMID:25080896
Experimental and numerical evaluation of the heat fluxes in a basic two-dimensional motor
NASA Astrophysics Data System (ADS)
Nicoud, F.
In the framework of a study assessing the ablation of Internal Thermal Insulation (ITI) of the Ariane 5 P230 Solid Rocket Booster (SRB), a 2D basic motor has been designed and manufactured at ONERA. During the first phase of the study, emphasis has been put on the heat flux measurements on an inert wall facing a propellant grain. In order to numerically reproduce the increase of the heat transfer exchange coefficient which is experimentally observed when one proceeds from the head-end to the aft-end of the port, a 2D explicit code with a two-equation turbulence model has been used. It is found that the computed heat transfer coefficient is closer to the experimental one when a wall law accounting for the mean density variations due to the large temperature gradient near the ITI is used. For this, the ITI is assumed to be completely inert and the wall temperature is imposed. The experimental data for two other tests, not numerically simulated, are also presented.
Charalampous, Georgios; Hardalupas, Yannis
2011-03-20
The dependence of fluorescent and scattered light intensities from spherical droplets on droplet diameter was evaluated using Mie theory. The emphasis is on the evaluation of droplet sizing, based on the ratio of laser-induced fluorescence and scattered light intensities (LIF/Mie technique). A parametric study is presented, which includes the effects of scattering angle, the real part of the refractive index and the dye concentration in the liquid (determining the imaginary part of the refractive index). The assumption that the fluorescent and scattered light intensities are proportional to the volume and surface area of the droplets, required for accurate sizing measurements, is not generally valid. More accurate sizing measurements can be performed with minimal dye concentration in the liquid and by collecting light at a scattering angle of 60 deg. rather than the commonly used angle of 90 deg. Unfavorable to the sizing accuracy are oscillations of the scattered light intensity with droplet diameter, which are pronounced in the sidescatter direction (90 deg.) and for droplets with refractive indices around 1.4.
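The idealized assumption examined above, fluorescence scaling with droplet volume (~d^3) and scattering with surface area (~d^2) so that the LIF/Mie ratio is linear in diameter, can be sketched as follows; the calibration constants are hypothetical, and the paper shows this proportionality is only approximate for real droplets.

```python
import numpy as np

# Hypothetical calibration constants for the two optical channels
k_lif, k_mie = 2.0e-3, 5.0e-2

def lif_intensity(d_um):
    """Idealized fluorescence signal, proportional to droplet volume."""
    return k_lif * d_um ** 3

def mie_intensity(d_um):
    """Idealized scattered-light signal, proportional to surface area."""
    return k_mie * d_um ** 2

def diameter_from_ratio(i_lif, i_mie):
    # invert ratio = (k_lif / k_mie) * d
    return (k_mie / k_lif) * i_lif / i_mie

d_true = np.array([20.0, 40.0, 80.0])   # micrometres
d_est = diameter_from_ratio(lif_intensity(d_true), mie_intensity(d_true))
```

In the idealized limit the inversion is exact; the paper's Mie-theory results quantify how oscillations in the scattered intensity break this linearity at particular angles and refractive indices.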
A critical evaluation of numerical algorithms and flow physics in complex supersonic flows
NASA Astrophysics Data System (ADS)
Aradag, Selin
In this research, two complex supersonic flows are selected for CFD simulation based on the Navier-Stokes equations. The first test case is "Supersonic Flow over an Open Rectangular Cavity". Open cavity flow fields are remarkably complicated, with internal and external regions that are coupled via self-sustained shear layer oscillations. Supersonic flow past a cavity has numerous applications in store carriage and release. Internal carriage of stores, which can be modeled using a cavity configuration, is used for supersonic aircraft in order to reduce radar cross section, aerodynamic drag and aerodynamic heating. Supersonic, turbulent, three-dimensional unsteady flow past an open rectangular cavity is simulated to understand the physics and three-dimensional nature of the cavity flow oscillations. Influences of numerical parameters such as numerical flux scheme, computation time and flux limiter on the computed flow are determined. Two-dimensional simulations are also performed for comparison purposes. The next test case is "The Computational Design of the Boeing/AFOSR Mach 6 Wind Tunnel". Due to huge differences between geometrical scales, this problem is both challenging and computationally intensive. It is believed that most of the experimental data obtained from conventional ground testing facilities are not reliable due to high levels of noise associated with the acoustic fluctuations from the turbulent boundary layers on the wind tunnel walls. Therefore, it is very important to have quiet testing facilities for hypersonic flow research. The Boeing/AFOSR Mach 6 Wind Tunnel at Purdue University has been designed as a quiet tunnel for which the noise level is an order of magnitude lower than that in conventional wind tunnels. However, quiet flow is achieved in the Purdue Mach 6 tunnel for only low Reynolds numbers. Early transition of the nozzle wall boundary layer has been identified as the cause of the test section noise. Separation bubbles on the bleed lip and associated
Numerical evaluation of Auger recombination coefficients in relaxed and strained germanium
NASA Astrophysics Data System (ADS)
Dominici, Stefano; Wen, Hanqing; Bertazzi, Francesco; Goano, Michele; Bellotti, Enrico
2016-05-01
The potential applications of germanium and its alloys in infrared silicon-based photonics have led to a renewed interest in their optical properties. In this letter, we report on the numerical determination of Auger coefficients at T = 300 K for relaxed and biaxially strained germanium. We use a Green's function based model that takes into account all relevant direct and phonon-assisted processes and perform calculations up to a strain level corresponding to the transition from indirect to direct energy gap. We have considered excess carrier concentrations ranging from 1016 cm-3 to 5 × 1019 cm-3. For use in device level simulations, we also provide fitting formulas for the calculated electron and hole Auger coefficients as functions of carrier density.
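For device-level use, Auger coefficients typically enter through the standard rate expression R = (Cn·n + Cp·p)(np − ni²). A minimal sketch with placeholder coefficient values; the paper's fitted, density-dependent formulas for germanium are not reproduced here.

```python
# Standard device-level Auger recombination rate,
#   R = (Cn*n + Cp*p) * (n*p - ni**2)   [cm^-3 s^-1],
# with hypothetical coefficient values (NOT the paper's fitted formulas).
ni = 2.0e13        # cm^-3, rough intrinsic carrier density of Ge at 300 K
Cn = 3.0e-31       # cm^6 s^-1, placeholder electron Auger coefficient
Cp = 7.0e-32       # cm^6 s^-1, placeholder hole Auger coefficient

def auger_rate(n, p):
    return (Cn * n + Cp * p) * (n * p - ni ** 2)

# High-injection example: n = p = 1e19 cm^-3, within the excess-carrier
# range considered in the paper (1e16 to 5e19 cm^-3)
dn = 1.0e19
R = auger_rate(dn, dn)
tau_auger = dn / R      # effective Auger lifetime at this injection level
```

Density-dependent coefficients, as provided by the paper's fitting formulas, would simply replace the constants Cn and Cp with functions of n and p in this expression.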
Copps, Kevin D.; Carnes, Brian R.
2008-04-01
We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.
Numerical evaluation of an innovative cup layout for open volumetric solar air receivers
NASA Astrophysics Data System (ADS)
Cagnoli, Mattia; Savoldi, Laura; Zanino, Roberto; Zaversky, Fritz
2016-05-01
This paper proposes an innovative volumetric solar absorber design to be used in high-temperature air receivers of solar power tower plants. The innovative absorber, a so-called CPC-stacked-plate configuration, applies the well-known principle of a compound parabolic concentrator (CPC) for the first time in a volumetric solar receiver, heating air to high temperatures. The proposed absorber configuration is analyzed numerically, applying first the open-source ray-tracing software Tonatiuh in order to obtain the solar flux distribution on the absorber's surfaces. Next, a Computational Fluid Dynamic (CFD) analysis of a representative single channel of the innovative receiver is performed, using the commercial CFD software ANSYS Fluent. The solution of the conjugate heat transfer problem shows that the behavior of the new absorber concept is promising, however further optimization of the geometry will be necessary in order to exceed the performance of the classical absorber designs.
A Numerical Evaluation Of A Facial Pattern In Children With Isolated Pulmonary Stenosis
NASA Astrophysics Data System (ADS)
Ainsworth, Howard; Hunt, James; Joseph, Michael
1980-07-01
A facial contouring technique using light sectioning, described by Coob, was modified by Ainsworth and Joseph and used in a numerical study of children with isolated pulmonary stenosis (PS) to test the hypothesis that the facial pattern in this condition differs from the normal. Measurements were compared between a group of 20 normal children and a group of 20 children with PS between the ages of 6 and 10.5 years. A distinctive facial pattern emerged. Many anteroposterior measurements were significantly greater in the PS group, indicating that the tissues are more prominent in the maxillary region. Twenty-nine of the measurements showed significant differences between the two groups (P < .05). Discriminant analyses were carried out to discover which, if any, might be used to predict the group to which an individual should belong. Depending on the variables chosen, between 34 and 37 individuals from the total of 40 were assigned to their correct group, PS or control.
Numerical and experimental evaluation of a compact sensor antenna for healthcare devices.
Alomainy, A; Yang Hao; Pasveer, F
2007-12-01
The paper presents a compact planar antenna designed for wireless sensors intended for healthcare applications. Antenna performance is investigated with regard to various parameters governing the overall sensor operation. The study illustrates the importance of including full sensor details in determining and analysing the antenna performance. A globally optimized sensor antenna shows an increase in antenna gain of 2.8 dB and 29% higher radiation efficiency in comparison to a conventional printed strip antenna. The wearable sensor performance is demonstrated, and effects on antenna radiated power, efficiency and front-to-back ratio of radiated energy are investigated both numerically and experimentally. Propagation characteristics of the body-worn sensor to on-body and off-body base units are also studied. It is demonstrated that the improved sensor antenna has an increase in transmitted and received power; consequently, the sensor coverage range is extended by approximately 25%. PMID:23852005
Carbon capture and storage reservoir properties from poroelastic inversion: A numerical evaluation
NASA Astrophysics Data System (ADS)
Lepore, Simone; Ghose, Ranajit
2015-11-01
We investigate the prospect of estimating carbon capture and storage (CCS) reservoir properties from P-wave intrinsic attenuation and velocity dispersion. Numerical analogues for two CCS reservoirs are examined: the Utsira saline formation at Sleipner (Norway) and the coal-bed methane basin at Atzbach-Schwanestadt (Austria). P-wave intrinsic dispersion curves in the field-seismic frequency band, obtained from theoretical studies based on simulation of oscillatory compressibility and shear tests upon representative rock samples, are considered as observed data. We carry out forward modelling using poroelasticity theories, making use of previously established empirical relations, pertinent to CCS reservoirs, to link pressure, temperature and CO2 saturation to other properties. To derive the reservoir properties, poroelastic inversions are performed through a global multiparameter optimization using simulated annealing. We find that the combination of attenuation and velocity dispersion in the error function helps significantly in eliminating the local minima and obtaining a stable result in inversion. This is because of the presence of convexity in the solution space when an integrated error function is minimized, which is governed by the underlying physics. The results show that, even in the presence of fairly large model discrepancies, the inversion provides reliable values for the reservoir properties, with the error being less than 10% for most of them. The estimated values of velocity and attenuation and their sensitivity to effective stress and CO2 saturation generally agree with the earlier experimental observation. Although developed and tested for numerical analogues of CCS reservoirs, the approach presented here can be adapted in order to predict key properties in a fluid-bearing porous reservoir, in general.
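The inversion strategy described above, minimizing a combined dispersion-plus-attenuation misfit with simulated annealing, can be sketched on a synthetic two-parameter forward model. The functional forms below are illustrative stand-ins, not the poroelastic physics, and the annealing schedule is a generic choice.

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.logspace(0.0, 2.0, 30)      # frequency band, Hz

def forward(params):
    """Synthetic stand-in for the forward model: P-wave velocity dispersion
    and attenuation curves controlled by two 'reservoir' parameters
    (CO2 saturation and effective stress). Illustrative forms only."""
    sat, stress = params
    vel = 1800.0 + 300.0 * sat + 20.0 * stress * np.log10(f)
    att = 0.02 + 0.05 * sat / (1.0 + (f / (10.0 + stress)) ** 2)
    return vel, att

true_params = np.array([0.4, 5.0])
v_obs, a_obs = forward(true_params)

def misfit(params):
    v, a = forward(params)
    # combining velocity dispersion and attenuation in one error function
    # (as the paper advocates) helps suppress local minima
    return float(np.mean(((v - v_obs) / v_obs) ** 2)
                 + np.mean(((a - a_obs) / a_obs) ** 2))

# minimal simulated annealing over box bounds
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 10.0])
x = lo + rng.uniform(size=2) * (hi - lo)
fx = misfit(x)
best, fbest = x.copy(), fx
T = 0.05                            # initial temperature
for _ in range(4000):
    cand = np.clip(x + 0.1 * (hi - lo) * rng.normal(size=2), lo, hi)
    fc = misfit(cand)
    if fc < fx or rng.uniform() < np.exp(-(fc - fx) / T):
        x, fx = cand, fc
        if fx < fbest:
            best, fbest = x.copy(), fx
    T *= 0.998                      # geometric cooling
```

Early high-temperature steps explore the box; the cooled tail refines the best point, recovering the parameters that generated the observations.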
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-01-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570
NASA Astrophysics Data System (ADS)
Huijssen, Jacobus; Hallez, Raphael; Pluymers, Bert; Desmet, Wim
2013-07-01
A synthesis procedure is presented for the prediction of the sound pressure level (SPL) of passenger vehicles in a pass-by noise test. The proposed synthesis procedure translates the noise from the sources in the moving vehicle to the receivers in two steps. Firstly, the steady-state receiver contributions of the sources are computed as they would arise from a number of static vehicle positions along the drive path. Secondly, these contributions are then combined into a single transient signal from a moving vehicle for each source-receiver pair by means of a travel time correction. The multiple source-receiver transfer functions are numerically evaluated by employing the Fast Multipole Boundary Element Method (FMBEM), which allows for pass-by noise SPL estimation on the basis of the CAD/CAE computer models that are available early in the design stage. Results are presented that show the accuracy of the synthesis procedure and that show the ability of the combination of the synthesis procedure and numerically evaluated transfer functions to predict pass-by noise SPL for a realistic case in an evaluation time of less than a day.
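A minimal numerical sketch of the travel-time correction in the second step: each reception time is mapped back to an emission time along the drive path, and the steady-state contribution for that static vehicle position is evaluated there. The 1/r source model, the speeds and the ISO-style 7.5 m geometry are placeholder assumptions, not the paper's FMBEM transfer functions.

```python
import math

C = 343.0   # speed of sound [m/s]
V = 13.9    # vehicle speed [m/s], roughly 50 km/h
D = 7.5     # receiver offset from the drive line [m], ISO-style geometry

def static_contribution(x):
    """Placeholder for a numerically evaluated steady-state transfer
    function (e.g. from FMBEM) for a source at path position x;
    here plain spherical spreading 1/r."""
    return 1.0 / math.hypot(x, D)

def emission_time(t_rx, n_iter=30):
    """Travel-time correction: solve t_rx = t_em + r(t_em)/C for the
    emission time by fixed-point iteration (a contraction since V << C)."""
    t_em = t_rx
    for _ in range(n_iter):
        x = V * t_em                 # vehicle passes abeam the mic at t = 0
        t_em = t_rx - math.hypot(x, D) / C
    return t_em

def transient_pressure(t_rx):
    """Combine static contributions into a transient signal: evaluate the
    contribution at the retarded (emission-time) vehicle position."""
    return static_contribution(V * emission_time(t_rx))

# the pass-by peak arrives slightly after the geometric closest approach
trace = [(t, transient_pressure(t)) for t in (-2.0, 0.0, 0.022, 2.0)]
```

In the full procedure each source-receiver pair carries its own frequency-dependent transfer function; the retarded-time bookkeeping shown here is the same.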
A Numerical model to evaluate proposed ground-water allocations in southwest Kansas
Jorgensen, D.G.; Grubb, H.F.; Baker, C.H.; Hilmes, G.E.; Jenkins, E.D.
1982-01-01
A computer model was developed to assist the Southwest Kansas Groundwater Management District No. 3 in the evaluation of applications to appropriate ground water. The model calculates the drawdown due to a proposed well at all existing wells in the section of the proposed well and at all wells in the adjacent eight sections. The depletion expected in the 9-square-mile area due to all existing wells and the proposed well is computed and compared with allowable limits defined by the management district. An optional program permits the evaluation of allowable depletion for one or more townships. All options are designed to run interactively, thus allowing for immediate evaluation of proposed ground-water withdrawals. (USGS)
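Drawdown screening of this kind is commonly based on the Theis solution; the sketch below evaluates it for a hypothetical proposed well at several existing-well distances. The pumping rate, aquifer parameters and distances are illustrative values, not the district's, and the report does not state that Theis is the method its model uses.

```python
import math

def well_function(u, terms=60):
    """Theis well function W(u) via its convergent series:
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    gamma = 0.5772156649015329
    s = -gamma - math.log(u)
    term = 1.0
    for n in range(1, terms + 1):
        term *= u / n                    # term = u^n / n!
        s += (-1) ** (n + 1) * term / n
    return s

def drawdown(Q, T, S, r, t):
    """Theis drawdown [m] at radius r [m] and time t [days], for pumping
    rate Q [m^3/day], transmissivity T [m^2/day], storativity S [-]."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# impact of a hypothetical proposed well on three existing wells after a year
Q, T, S, t = 2000.0, 300.0, 1e-4, 365.0
impacts = {r: drawdown(Q, T, S, r, t) for r in (100.0, 800.0, 1600.0)}
```

Summing such per-well drawdowns over all permitted wells in the nine-section block is the kind of superposition an allocation model performs before comparing against a depletion limit.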
Three-Dimensional Numerical Evaluation of Thermal Performance of Uninsulated Wall Assemblies
Ridouane, El Hassan; Bianchi, Marcus V.A.
2011-11-01
This study describes a detailed 3D computational fluid dynamics model that evaluates the thermal performance of uninsulated wall assemblies. It accounts for conduction through framing, convection, and radiation and allows for material property variations with temperature. This research was presented at the ASME 2011 International Mechanical Engineering Congress and Exhibition; Denver, Colorado; November 11-17, 2011.
Evaluation of Sulfur Flow Emplacement on Io from Galileo Data and Numerical Modeling
NASA Technical Reports Server (NTRS)
Williams, David A.; Greeley, Ronald; Lopes, Rosaly M. C.; Davies, Ashley G.
2001-01-01
Galileo images of bright lava flows surrounding Emakong Patera have been analyzed and numerical modeling has been performed to assess whether these flows could have resulted from the emplacement of sulfur lavas on Io. Images from the solid-state imaging (SSI) camera show that these bright, white to yellow Emakong flows are up to 370 km long and contain dark, sinuous features that are interpreted to be lava conduits, approx. 300-500 m wide and > 100 km long. Near-Infrared Mapping Spectrometer (NIMS) thermal emission data yield a color temperature estimate of 344 K +/- 60 K (less than or equal to 131 C) within the Emakong caldera. We suggest that these bright flows likely resulted from either sulfur lavas or silicate lavas that have undergone extensive cooling, pyroclastic mantling, and/or alteration with bright sulfurous materials. The Emakong bright flows have estimated volumes of approx. 250-350 cu km, similar to some of the smaller Columbia River Basalt flows. If the Emakong flows did result from effusive sulfur eruptions, then they are orders of magnitude greater in volume than any terrestrial sulfur flows. Our numerical modeling results show that sulfur lavas on Io could have been emplaced as turbulent flows, which were capable of traveling tens to hundreds of kilometers, consistent with the predictions of Sagan [1979] and Fink et al. [1983]. Our modeled flow distances are also consistent with the measured lengths of the Emakong channels and bright flows. Modeled thermal erosion rates are approx. 1-4 m/d for flows erupted at approx. 140-180 C, which are consistent with the melting rates of Kieffer et al. [2000]. The Emakong channels could be thermal erosional in nature; however, the morphologic signatures of thermal erosion channels cannot be discerned from available images. There are planned Galileo flybys of Io in 2001 which provide excellent opportunities to obtain high-resolution morphologic and color data of Emakong Patera. Such observations could, along
Wang, Haiqiang; Zhuang, Zhuokai; Sun, Chenglang; Zhao, Nan; Liu, Yue; Wu, Zhongbiao
2016-03-01
Wet scrubbing combined with ozone oxidation has become a promising technology for the simultaneous removal of SO2 and NOx from exhaust gas. In this paper, a new 20-species, 76-step detailed kinetic mechanism for the reactions between O3 and NOx was proposed. The concentration of N2O5 was measured using an in-situ IR spectrometer. The numerical evaluation results agreed well with both published experimental results and our own experiments. Key reaction parameters for the generation of NO2 and N2O5 during the NO ozonation process were investigated by numerical simulation. The effect of temperature on NO2 production was found to be negligible. To produce NO2, the optimal residence time was 1.25 sec with a molar ratio of O3/NO of about 1. For the generation of N2O5, the residence time should be about 8 sec, the temperature of the exhaust gas should be strictly controlled, and the molar ratio of O3/NO should be about 1.75. This study provides a detailed investigation of the reaction parameters of NOx ozonation by numerical simulation, and the results should be helpful for the design and optimization of ozone oxidation combined with the wet flue gas desulfurization (WFGD) method for the removal of NOx. PMID:26969050
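The qualitative behaviour reported above can be reproduced with a drastically reduced mechanism: three reactions instead of the paper's 20 species and 76 steps, with order-of-magnitude room-temperature rate constants and a plain explicit-Euler integration. The initial NO level and the rate constants are illustrative assumptions, so only trends (fast NO-to-NO2 conversion near O3/NO = 1; N2O5 accumulation with excess ozone and longer residence time) should be read from it.

```python
# Order-of-magnitude rate constants near room temperature [cm^3 molecule^-1 s^-1]
K1 = 1.8e-14   # NO  + O3  -> NO2 + O2
K2 = 3.2e-17   # NO2 + O3  -> NO3 + O2
K3 = 1.3e-12   # NO2 + NO3 -> N2O5 (effective second-order)

def integrate(o3_over_no, t_end, dt=5e-5, no0=1.2e16):
    """Explicit-Euler integration of the reduced NO ozonation mechanism.
    no0 ~ a few hundred ppm of NO at atmospheric number density."""
    no, o3 = no0, o3_over_no * no0
    no2 = no3 = n2o5 = 0.0
    for _ in range(int(t_end / dt)):
        r1 = K1 * no * o3
        r2 = K2 * no2 * o3
        r3 = K3 * no2 * no3
        no   += dt * (-r1)
        o3   += dt * (-r1 - r2)
        no2  += dt * (r1 - r2 - r3)
        no3  += dt * (r2 - r3)
        n2o5 += dt * r3
    return {"NO": no, "NO2": no2, "NO3": no3, "N2O5": n2o5, "O3": o3}

# near-complete NO -> NO2 conversion at O3/NO ~ 1 within ~1.25 s
run_short = integrate(1.0, 1.25)
# with excess ozone (O3/NO ~ 1.75) and ~8 s residence, N2O5 accumulates
run_long = integrate(1.75, 8.0)
```

The updates conserve total nitrogen (NO + NO2 + NO3 + 2 N2O5) exactly by construction, which is a useful sanity check on any hand-rolled kinetics integrator.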
Design and numerical evaluation of an innovative multi-directional shape memory alloy damper
NASA Astrophysics Data System (ADS)
Ren, Wenjie; Li, Hongnan; Song, Gangbing
2007-04-01
Superelastic shape memory alloy (SMA) is a promising candidate for structural damping devices due to its unique mechanical properties. To mitigate the vibration of a structure subjected to earthquake tremors from different directions, an innovative multi-directional SMA-based damper is proposed. The damper, with two movable cylinders attached to four groups of SMA strands arranged in radial symmetry, can function not only in a plane but also vertically and rotationally. Based on experimentation, the Graesser model of superelastic SMA is calibrated. By analyzing the damper's working mechanism in different directions, the corresponding theoretical models are developed. Numerical simulations are conducted to obtain the damper's hysteresis. Working in a plane, the damper with a 3% initial strain provides a rectangular hysteresis with the maximum amount of damping. In the absence of pre-stress in the wires, a rectangular flag hysteresis is obtained, passing through the origin with a moderate amount of energy dissipation and a higher force capacity. Moreover, the damper has better working capacities (i.e. force, stroke and energy dissipation) if the deflection is parallel to the internal bisectors of the tension axes. Working vertically or rotationally, a similar triangular flag hysteresis is generated, with small energy dissipation and a self-centering capacity. For a given deflection, the initial strain (3%) increases the force of the damper but decreases its stroke.
NASA Astrophysics Data System (ADS)
Koh, E.; Lee, E.; Lee, K.
2013-12-01
A layered aquifer system (i.e. perched and regional aquifers) is locally observed in the Gosan area of Jeju Island, Korea, owing to the scattered distribution of an impermeable clay layer. Farming is intensive in the Gosan area, and nitrate contamination has frequently been reported in the regional aquifer, which is the sole water resource on the island. The quality of the regional groundwater is impacted by inflows of nitrate-rich perched groundwater, which sits above the impermeable layer and is directly affected by surface contaminants. A poorly grouted well penetrating the impermeable layer provides a passage for contaminated groundwater through that layer. This hydrogeological setting consequently induces nitrate contamination of the regional aquifer in the region. To quantify the inflows of perched groundwater via leaking wells, a numerical model was developed to calculate the leakage of perched groundwater into the regional groundwater. These perched groundwater leakages were then applied as point, time-variable contamination sources in the solute transport simulation for the regional aquifer. This work provides useful information for suggesting effective ways to control nitrate contamination of groundwater in agricultural areas.
Oostrom, Martinus; Wietsma, Thomas W.; Strickland, Christopher E.; Freedman, Vicky L.; Truex, Michael J.
2012-02-01
Soil desiccation, in conjunction with surface infiltration control, is considered at the Hanford Site as a potential technology to limit the flux of technetium and other contaminants in the vadose zone to the groundwater. An intermediate-scale experiment was conducted to test the response of a series of instruments to desiccation and subsequent rewetting of porous media. The instruments include thermistors, thermocouple psychrometers, dual-probe heat pulse sensors, heat dissipation units, and humidity probes. The experiment was simulated with the multifluid flow simulator STOMP, using independently obtained hydraulic and thermal porous medium properties. All instrument types used for this experiment were able to indicate when the desiccation front passed a certain location. In most cases the changes were sharp, indicating rapid changes in moisture content, water potential, or humidity. However, a response to the changing conditions was recorded only when the drying front was very close to a sensor. Of the tested instruments, only the heat dissipation unit and humidity probes were able to detect rewetting. The numerical simulation results reasonably match the experimental data, indicating that the simulator captures the pertinent gas flow and transport processes related to desiccation and rewetting and may be useful in the design and analysis of field tests.
NASA Astrophysics Data System (ADS)
Backeberg, B. C.; Bertino, L.; Johannessen, J. A.
2009-06-01
A 4th order advection scheme is applied in a nested eddy-resolving Hybrid Coordinate Ocean Model (HYCOM) of the greater Agulhas Current system for the purpose of testing advanced numerics as a means for improving the model simulation for eventual operational implementation. Model validation techniques comparing sea surface height variations, sea level skewness and variogram analyses to satellite altimetry measurements show that the 4th order advection scheme generally improves the realism of the model simulation. The most striking improvement over the standard 2nd order momentum advection scheme is that the southern Agulhas Current is simulated as a well-defined meandering current, rather than a train of successive eddies. A better vertical structure and stronger poleward transports in the Agulhas Current core contribute toward a better southwestward penetration of the current, and of its temperature field, implying a stronger Indo-Atlantic inter-ocean exchange. The transport, and hence this exchange, is found to be sensitive to the occurrence of mesoscale features originating upstream in the Mozambique Channel and the southern East Madagascar Current, and the improved HYCOM simulation is well suited for further studies of these interactions.
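The gain from higher-order momentum advection can be illustrated on a toy problem: 1D linear advection of a smooth periodic profile, discretized with the classical 2nd- and 4th-order centered stencils and advanced with RK4, then compared against the exact solution after one full transit. This is unrelated to HYCOM's actual discretization; it only demonstrates how much the stencil order reduces dispersion error.

```python
import math

def rhs(u, dx, c, order):
    """Semi-discrete right-hand side -c*du/dx on a periodic grid."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        if order == 2:
            ux = (u[(i + 1) % n] - u[(i - 1) % n]) / (2 * dx)
        else:  # 4th-order centered stencil
            ux = (-u[(i + 2) % n] + 8 * u[(i + 1) % n]
                  - 8 * u[(i - 1) % n] + u[(i - 2) % n]) / (12 * dx)
        out[i] = -c * ux
    return out

def advect(order, n=64, c=1.0, cfl=0.2):
    """Advect sin(2*pi*x) once around the unit interval with RK4;
    return the max-norm error against the exact (initial) profile."""
    dx = 1.0 / n
    dt = cfl * dx / c
    steps = int(round(1.0 / (c * dt)))
    u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
    exact = list(u)
    for _ in range(steps):
        k1 = rhs(u, dx, c, order)
        k2 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k1)], dx, c, order)
        k3 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k2)], dx, c, order)
        k4 = rhs([ui + dt * ki for ui, ki in zip(u, k3)], dx, c, order)
        u = [ui + dt / 6 * (a + 2 * b + 2 * g + d)
             for ui, a, b, g, d in zip(u, k1, k2, k3, k4)]
    return max(abs(a - b) for a, b in zip(u, exact))

err2, err4 = advect(2), advect(4)   # 4th order is dramatically more accurate
```

On 64 points the 2nd-order phase error is a few percent of the wavelength per transit, while the 4th-order stencil reduces it by roughly two to three orders of magnitude; in an eddy-resolving ocean model that reduced numerical dispersion is what keeps a meandering jet from degenerating into discrete eddies.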
Evaluation of the deflections in the radiator vessel of the ALICE RICH array using numerical methods
NASA Astrophysics Data System (ADS)
Demelio, G.; Galantucci, L. M.; Grimaldi, A.; Nappi, E.; Posa, F.; Valentino, V.
1996-02-01
The RICH array in ALICE (A Large Ion Collider Experiment) at CERN-LHC is being designed following the basic criteria of optimizing the detector performance in terms of Cherenkov angle resolution and minimizing the total material traversed by the incoming particles. Due to the physics requirements, low deformation of the liquid freon container is mandatory; therefore a careful engineering design is needed to predict the deflection of the radiator structure when filled with freon. The aim of this study is the design of the liquid freon container under different static load conditions, since the RICH array is placed in a barrel frame structure of about 4 m radius and 8 m length. Because of its high stiffness and low weight, a honeycomb sandwich with NOMEX® core and carbon fiber skins is used for the vessel structure. Different solutions are analyzed using numerical techniques based on Navier double series expansion and the finite element method. They show good agreement and highlight the possibility of obtaining negligible stresses and strains.
NASA Astrophysics Data System (ADS)
Ding, Guoliang; Santare, Michael H.; Karlsson, Anette M.; Kusoglu, Ahmet
2016-06-01
Understanding the mechanisms of defect growth in polymer electrolyte membrane (PEM) fuel cells is essential for improving cell longevity. Characterizing crack growth in the PEM fuel cell membrane under relative humidity (RH) cycling is an important step towards establishing strategies for developing more durable membrane electrode assemblies (MEAs). In this study, a crack propagation criterion based on plastically dissipated energy is investigated numerically. The accumulation of plastically dissipated energy ahead of the crack tip under cyclical RH loading is calculated and compared to a critical value, presumed to be a material parameter. Once the accumulation reaches the critical value, the crack propagates via a node release algorithm. It is well established experimentally that expanded polytetrafluoroethylene (ePTFE) reinforced perfluorosulfonic acid (PFSA) membranes have better durability than unreinforced membranes, and that in unreinforced PFSA membranes through-thickness cracks are generally found under the flow channel regions but not the land regions. We show that the proposed plastically dissipated energy criterion captures these experimental observations and provides a framework for investigating failure mechanisms in ionomer membranes subjected to similar environmental loads.
Numerical evaluation of tree canopy shape near noise barriers to improve downwind shielding.
Van Renterghem, T; Botteldooren, D
2008-02-01
The screen-induced refraction of sound by wind results in reduced noise shielding for downwind receivers. Placing a row of trees behind a highway noise barrier modifies the wind field, and previous studies have shown this to be an effective mitigation measure. In this paper, the wind field modification by the canopy of trees near noise barriers is numerically predicted using common quantitative tree properties. A realistic range of pressure resistance coefficients is modeled, for two wind speed profiles. As canopy shape influences the vertical gradients in the horizontal component of the wind velocity, three typical shapes are simulated. A triangular crown shape, where the pressure resistance coefficient is at its maximum at the bottom of the canopy and decreases linearly toward the top, is the most interesting configuration. A canopy with aerodynamic properties that are uniform with height behaves similarly at low wind speeds. The third crown shape modeled is the ellipse, which performs worse than the first two types but still gives a significant improvement compared to barriers without trees. With increasing wind speed, the optimum pressure resistance coefficient increases. Coniferous trees are better suited than deciduous trees to increasing downwind noise barrier efficiency. PMID:18247869
Käser, Tanja; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; Richtmann, Verena; Grond, Ursina; Gross, Markus; von Aster, Michael
2013-01-01
This article presents the design and a first pilot evaluation of the computer-based training program Calcularis for children with developmental dyscalculia (DD) or difficulties in learning mathematics. The program has been designed according to insights on the typical and atypical development of mathematical abilities. The learning process is supported through multimodal cues, which encode different properties of numbers. To offer optimal learning conditions, a user model completes the program and allows flexible adaptation to a child's individual learning and knowledge profile. Thirty-two children with difficulties in learning mathematics completed the 6- to 12-week computer training. The children played the game for 20 min per day, 5 days a week. The training effects were evaluated using neuropsychological tests. Generally, children benefited significantly from the training regarding number representation and arithmetic operations. Furthermore, children liked playing with the program and reported that the training improved their mathematical abilities. PMID:23935586
Numerical approach for the evaluation of Weibull distribution parameters for hydrologic purposes
NASA Astrophysics Data System (ADS)
Pierleoni, A.; Di Francesco, S.; Biscarini, C.; Manciola, P.
2016-06-01
In hydrology, the statistical description of low-flow phenomena is very important for evaluating the available water resource, especially in rivers, and the related values can naturally be treated as random variables; probability distributions dealing with extreme values (maxima and/or minima) of the variable therefore play a fundamental role. Computational procedures for estimating the parameters of these distributions are very useful, especially when embedded into analysis software [1][2] or deployed as standalone applications. In this paper a computational procedure for fitting the Weibull [3] distribution is presented, focusing on the case in which the lower limit of the distribution is not known or not set to a specific value a priori. The procedure takes advantage of the Gumbel [4] moment approach to the problem.
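A moment-based Weibull fit can be sketched for the simpler two-parameter case (lower limit known and equal to zero): the sample coefficient of variation depends only on the shape, which is recovered by bisection, and the sample mean then gives the scale. This is a generic method-of-moments illustration under those stated assumptions, not the paper's specific Gumbel-based procedure for an unknown lower bound.

```python
import math
import random

def weibull_cv(k):
    """Coefficient of variation of a two-parameter Weibull with shape k:
    CV = sqrt(G(1+2/k) - G(1+1/k)^2) / G(1+1/k), decreasing in k."""
    g1 = math.gamma(1.0 + 1.0 / k)
    g2 = math.gamma(1.0 + 2.0 / k)
    return math.sqrt(g2 - g1 * g1) / g1

def fit_weibull_moments(sample):
    """Method-of-moments fit: match the sample CV to solve for the shape
    by bisection, then recover the scale from the sample mean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    cv = math.sqrt(var) / mean
    lo, hi = 0.1, 20.0            # CV is monotone decreasing on this bracket
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if weibull_cv(mid) > cv:
            lo = mid              # CV too high means shape still too small
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    scale = mean / math.gamma(1.0 + 1.0 / k)
    return k, scale

# synthetic low-flow-like sample via inverse-CDF sampling (illustrative units)
rng = random.Random(7)
true_k, true_scale = 1.8, 120.0
sample = [true_scale * (-math.log(1.0 - rng.random())) ** (1.0 / true_k)
          for _ in range(5000)]
k_hat, scale_hat = fit_weibull_moments(sample)
```

With an unknown lower limit a third moment equation (e.g. involving skewness) is needed, which is where the more careful treatment of the paper comes in.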
NASA Astrophysics Data System (ADS)
Rhode, Katherine L.; Osiensky, James L.; Miller, Stanley M.
2007-12-01
Most previous investigations to evaluate the "effective" or average transmissivity in heterogeneous environments have used calculations based on areas to weight the effects of each heterogeneity. Analysis of spatial volumetric variations within the cone of depression expressed at the potentiometric surface offers a more general solution to evaluate the meaning of transmissivity (T) and storativity (S) values derived from aquifer tests in these environments. The Cooper and Jacob [1946] method is used to demonstrate that T variations reflected in slope changes in plots of the pumping well drawdown data correspond to changes in the volumetric weighted mean transmissivity (VWMT) over time. Lognormal distributions of transmissivity are represented by block heterogeneities within two simulated aquifers, for both spatially random and spatially correlated data sets. By analyzing the volumetric evolution of the cone of depression observed in the potentiometric surface, the nature of T averaging within the cone of depression as a function of time is illustrated. Volumetric analysis shows that the average aquifer T varies with time as the cone of depression progressively envelops different heterogeneities. The initial trend is controlled primarily by the heterogeneities directly surrounding the pumping center. If steady-shape conditions are not achieved, late-time VWMT values do not approach an asymptotic limit.
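The Cooper-Jacob analysis invoked above is a straight-line fit of drawdown against log time: the slope per log cycle gives T, and the zero-drawdown time intercept gives S. A minimal sketch, with synthetic late-time data generated from the Jacob approximation itself and purely illustrative parameter values:

```python
import math

def cooper_jacob_fit(times, drawdowns, Q, r):
    """Least-squares fit of s = a + b*log10(t); then
    T = 2.303*Q/(4*pi*b) and S = 2.25*T*t0/r^2 with t0 = 10**(-a/b)."""
    xs = [math.log10(t) for t in times]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(drawdowns) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, drawdowns))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    T = 2.303 * Q / (4.0 * math.pi * b)
    t0 = 10.0 ** (-a / b)          # time at which the fitted line crosses s = 0
    S = 2.25 * T * t0 / (r * r)
    return T, S

# synthetic observation-well data (m^3/day, m, m^2/day, dimensionless)
Q, r, T_true, S_true = 500.0, 50.0, 250.0, 2e-4

def jacob_s(t):
    """Jacob approximation of drawdown, valid for small u = r^2*S/(4*T*t)."""
    return (2.303 * Q / (4.0 * math.pi * T_true)
            * math.log10(2.25 * T_true * t / (r * r * S_true)))

times = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]   # days; u < 0.01 throughout
T_hat, S_hat = cooper_jacob_fit(times, [jacob_s(t) for t in times], Q, r)
```

In a heterogeneous aquifer the fitted slope, and hence T, drifts between time windows as the cone of depression envelops new blocks; fitting successive windows of the record is one way to trace the VWMT evolution the abstract describes.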
NASA Astrophysics Data System (ADS)
Kitaygorsky, J.; Amburgey, C.; Elliott, J. R.; Fisher, R.; Perala, R. A.
A broadband (100 MHz-1.2 GHz) plane wave electric field source was used to evaluate electric field penetration inside a simplified Boeing 707 aircraft model with a finite-difference time-domain (FDTD) method using EMA3D. The role of absorption losses inside the simplified aircraft was investigated. It was found that, in this frequency range, none of the cavities inside the Boeing 707 model are truly reverberant when frequency stirring is applied, and a purely statistical electromagnetics approach cannot be used to predict or analyze the field penetration or shielding effectiveness (SE). Thus it was our goal to attempt to understand the nature of losses in such a quasi-statistical environment by adding various numbers of absorbing objects inside the simplified aircraft and evaluating the SE, decay-time constant τ, and quality factor Q. We then compare our numerical results with experimental results obtained by D. Mark Johnson et al. on a decommissioned Boeing 707 aircraft.
NASA Astrophysics Data System (ADS)
Meite, M.; Pop, O.; Dubois, F.; Absi, J.
2010-06-01
Elements of real structures are usually subject to mixed-mode loadings, owing to the element geometry and the loading orientations. In this case the propagation of eventual cracks is characterized by mixed-mode kinematics. In order to characterize the fracture process in mixed mode, it is necessary to separate the fracture modes so as to evaluate the influence of each one. Our study is limited to plane configurations. The mixed mode is considered as a combination of opening and shear modes. Mixed-mode fracture is evaluated through experimental tests using SEN specimens for different mixed-mode ratios. The separation of the fracture modes is carried out with the invariant integral Mθ. Our study thus combines experimental and numerical approaches.
NASA Astrophysics Data System (ADS)
Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.
2007-05-01
The present study concerns the numerical modeling of debris avalanches on Nevado de Toluca Volcano (Mexico) using the TITAN2D simulation software, and its application to the creation of hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central Mexico near the cities of Toluca and Mexico City; its past activity has endangered an area inhabited today by more than 25 million people. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma until 10.5 ka with both effusive and explosive events; Nevado de Toluca has experienced long phases of inactivity characterized by erosion and by the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are extensive debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events happened mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. Analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were entirely based upon the information stored in the geological database. The modeling software is built upon equations
Gong, P. (Dept. of Forest Economics)
1998-08-01
Different decision models can be constructed and used to analyze a regeneration decision in even-aged stand management. However, the optimal decision and management outcomes determined in an analysis may depend on the decision model used in the analysis. This paper examines the proper choice of decision model for determining the optimal planting density and land expectation value (LEV) for a Scots pine (Pinus sylvestris L.) plantation in northern Sweden. First, a general adaptive decision model for determining the regeneration alternative that maximizes the LEV is presented. This model recognizes future stand state and timber price uncertainties by including multiple stand state and timber price scenarios, and assumes that the harvest decision in each future period will be made conditional on the observed stand state and timber prices. Alternative assumptions about future stand states, timber prices, and harvest decisions can be incorporated into this general decision model, resulting in several different decision models that can be used to analyze a specific regeneration problem. Next, the consequences of choosing different modeling assumptions are determined using the example Scots pine plantation problem. Numerical results show that the most important sources of uncertainty that affect the optimal planting density and LEV are variations of the optimal clearcut time due to short-term fluctuations of timber prices. It is appropriate to determine the optimal planting density and harvest policy using an adaptive decision model that recognizes uncertainty only in future timber prices. After the optimal decisions have been found, however, the LEV should be re-estimated by incorporating both future stand state and timber price uncertainties.
Numerical evaluation of the three-point scalar-tensor cross-correlations and the tensor bi-spectrum
NASA Astrophysics Data System (ADS)
Sreenath, V.; Tibrewala, Rakesh; Sriramkumar, L.
2013-12-01
Utilizing the Maldacena formalism and extending the earlier efforts to compute the scalar bi-spectrum, we construct a numerical procedure to evaluate the three-point scalar-tensor cross-correlations as well as the tensor bi-spectrum in single field inflationary models involving the canonical scalar field. We illustrate the accuracy of the adopted procedure by comparing the numerical results with the analytical results that can be obtained in the simpler cases of power law and slow roll inflation. We also carry out such a comparison in the case of the Starobinsky model described by a linear potential with a sudden change in the slope, which provides a non-trivial and interesting (but, nevertheless, analytically tractable) scenario involving a brief period of deviation from slow roll. We then utilize the code we have developed to evaluate the three-point correlation functions of interest (and the corresponding non-Gaussianity parameters that we introduce) for an arbitrary triangular configuration of the wavenumbers in three different classes of inflationary models which lead to features in the scalar power spectrum, as have been recently considered by the Planck team. We also discuss the contributions to the three-point functions during preheating in inflationary models with a quadratic minimum. We conclude with a summary of the main results we have obtained.
Numerical and experimental evaluation of a new low-leakage labyrinth seal
NASA Technical Reports Server (NTRS)
Rhode, D. L.; Ko, S. H.; Morrison, G. L.
1988-01-01
The effectiveness of a recently developed leakage model for evaluating new design features of most seals is demonstrated. A preliminary assessment of the present stator groove feature shows that it gives approximately a 20 percent leakage reduction with no shaft speed effects. Also, detailed distributions of predicted streamlines, axial velocity, relative pressure and turbulence energy enhance one's physical insight. In addition, the interesting measured effect of axial position of the rotor/stator pair on leakage rate and stator wall axial pressure distribution is examined.
ERIC Educational Resources Information Center
Rom, Mark Carl
2011-01-01
Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…
Numerical model for the evaluation of Earthquake effects on a magmatic system.
NASA Astrophysics Data System (ADS)
Garg, Deepak; Longo, Antonella; Papale, Paolo
2016-04-01
A finite element numerical model is presented to compute the effect of an earthquake on the dynamics of magma in reservoirs with deformable walls. The magmatic system is hit by a Mw 7.2 earthquake (Petrolia/Cape Mendocino 1992) with hypocenter at 15 km diagonal distance. At subsequent times the seismic wave reaches the nearest side of the magmatic system boundary, travels through the magmatic fluid, and arrives at the other side of the boundary. The modelled physical system consists of the magmatic reservoir with a thin surrounding layer of rocks. Magma is considered as a homogeneous multicomponent multiphase Newtonian mixture with exsolution and dissolution of volatiles (H2O+CO2). The magmatic reservoir is made of a small shallow magma chamber filled with degassed phonolite, connected by a vertical dike to a larger, deeper chamber filled with gas-rich shoshonite, in a condition of gravitational instability. The coupling between the earthquake and the magmatic system is computed by solving the elastostatic equation for the deformation of the magmatic reservoir walls, along with the conservation equations for the mass of components and the momentum of the magmatic mixture. The characteristic elastic parameters of rocks are assigned to the computational domain at the boundary of the magmatic system. Physically consistent Dirichlet and Neumann boundary conditions are assigned according to the evolution of the seismic signal. Seismic forced displacements and velocities are set on the part of the boundary that is hit by the wave. On the other part of the boundary, motion is governed by the action of fluid pressure and deviatoric stress forces due to fluid dynamics. The constitutive equations for the magma are solved in a monolithic way by a space-time finite element method that is discontinuous in time. To attain additional stability, least-squares and discontinuity-capturing operators are included in the formulation. A partitioned algorithm is used to couple the magma and the thin layer of rocks.
Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0
NASA Astrophysics Data System (ADS)
Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun
2013-02-01
We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals.
Program summary
Program title: SecDec 2.0
Catalogue identifier: AEIR_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 156829
No. of bytes in distributed program, including test data, etc.: 2137907
Distribution format: tar.gz
Programming language: Wolfram Mathematica, Perl, Fortran/C++.
Computer: From a single PC to a cluster, depending on the problem.
Operating system: Unix, Linux.
RAM: Depending on the complexity of the problem.
Classification: 4.4, 5, 11.1.
Catalogue identifier of previous version: AEIR_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566
Does the new version supersede the previous version?: Yes
Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds).
Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization
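The core idea of sector decomposition can be illustrated on a standard textbook example (not taken from the SecDec distribution itself): an overlapping endpoint singularity at x = y = 0 is disentangled by splitting the integration region into sectors and rescaling.

```latex
% Extracting the \epsilon-pole from an endpoint singularity at x=y=0:
\begin{align}
I(\epsilon) &= \int_0^1\!\!\int_0^1 \mathrm{d}x\,\mathrm{d}y\,(x+y)^{-2+\epsilon} \\
&= 2\int_0^1\!\mathrm{d}x \int_0^x\!\mathrm{d}y\,(x+y)^{-2+\epsilon}
   && \text{(sectors $x>y$ and $y>x$; symmetry)} \\
&= 2\int_0^1\!\mathrm{d}x\, x^{-1+\epsilon} \int_0^1\!\mathrm{d}t\,(1+t)^{-2+\epsilon}
   && \text{(remap $y = x\,t$)} \\
&= \frac{2}{\epsilon}\int_0^1\!\mathrm{d}t\,(1+t)^{-2+\epsilon}.
\end{align}
```

The singularity is now an explicit 1/ε pole multiplying a finite integral that can be evaluated numerically order by order in the Laurent expansion, which is the structure the program produces for general multi-loop integrands.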
A Numerical Study of Some Potential Sources of Error in Side-by-Side Seismometer Evaluations
Holcomb, L. Gary
1990-01-01
INTRODUCTION: This report presents the results of a series of computer simulations of potential errors in test data that might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNR's). This report extends this analysis to high SNR's to determine the limitations of the direct method for calculating the noise levels at signal-to-noise levels which are much higher than presented previously (see Holcomb 1989). Next, the method is used to analyze a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of bandshaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real world data because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and bandshaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that
Evaluation of the successive approximations method for acoustic streaming numerical simulations.
Catarino, S O; Minas, G; Miranda, J M
2016-05-01
This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first-order step, the acoustic streaming prediction by the successive approximations method can be improved significantly. PMID:27250122
Numerical evaluation of sequential bone drilling strategies based on thermal damage.
Tai, Bruce L; Palmisano, Andrew C; Belmont, Barry; Irwin, Todd A; Holmes, James; Shih, Albert J
2015-09-01
Sequentially drilling multiple holes in bone is used clinically for surface preparation to aid in fusion of a joint, typically under non-irrigated conditions. Drilling induces a significant amount of heat, which accumulates over multiple passes and can result in thermal osteonecrosis and various complications. To understand the heat propagation over time, a 3D finite element model was developed to simulate sequential bone drilling. By incorporating proper material properties and a modified bone necrosis criterion, this model can visualize the propagation of damaged areas. For this study, comparisons between a 2.0 mm Kirschner wire and a 2.0 mm twist drill were conducted, with their heat sources determined using an inverse method and experimentally measured bone temperatures. Three clinically viable solutions to reduce thermally-induced bone damage were evaluated using finite element analysis: tool selection, time interval between passes, and different drilling sequences. Results show that the ideal solution would be using twist drills rather than Kirschner wires if the situation allows. A shorter time interval between passes was also found to be beneficial as it reduces the total heat exposure time. Lastly, optimizing the drilling sequence reduced the thermal damage of bone, but the effect may be limited. This study demonstrates the feasibility of using the proposed model to study clinical issues and find potential solutions prior to clinical trials. PMID:26163230
NASA Astrophysics Data System (ADS)
Jain, Charitra; Vogt, Christian; Clauser, Christoph
2014-05-01
We model hypothetical Engineered Geothermal System (EGS) reservoirs by solving coupled partial differential equations governing fluid flow and heat transport. Building on EGS's strengths of inherent modularity and storage capability, it is possible to implement multiple wells in the reservoir to extend the rock volume accessible for circulating water in order to increase the heat yield. By varying parameters like flow rates and well separations in the subsurface, this study looks at their long-term impacts on the reservoir development. This approach allows us to experiment with different placements of the engineered fractures and propose several EGS layouts for achieving optimized heat extraction. Considering the available crystalline area and accounting for the competing land uses, this study evaluates the overall EGS potential and compares it with that of other renewables in use in Germany. There is enough area to support 13450 EGS plants, each with six reversed triplets (18 wells) and an average electric power of 35.3 MWe. When operated at full capacity, these systems can collectively supply 4155 TWh of electric energy in one year, which would be roughly six times the electric energy produced in Germany in the year 2011. Engineered Geothermal Systems make a compelling case for contributing towards national power production in a future powered by a sustainable, decentralized energy system.
Furukawa, Ryoichi; Chen, Yuan; Horiguchi, Akio; Takagaki, Keisuke; Nishi, Junichi; Konishi, Akira; Shirakawa, Yoshiyuki; Sugimoto, Masaaki; Narisawa, Shinji
2015-09-30
Capping is one of the major problems that occur during the tabletting process in the pharmaceutical industry. This study provided an effective method for evaluating the capping tendency during the diametrical compression test using the finite element method (FEM). In experiments, tablets of microcrystalline cellulose (MCC) were compacted with a single tabletting machine, and the capping tendency was determined by visual inspection of the tablet after a diametrical compression test. By comparing the effects of double-radius and single-radius concave punch shapes on the capping tendency, it was observed that the capping tendency of double-radius tablets occurred at a lower compaction force compared with single-radius tablets. Using FEM, we investigated the variation in plastic strain within tablets during the diametrical compression test and visualised it using the output variable actively yielding (AC YIELD) of ABAQUS. For both single-radius and double-radius tablets, a capping tendency was indicated if the variation in plastic strain initiated at the centre of the tablet, while capping did not occur if the variation began at the periphery. The compaction force at which the capping tendency was observed, as estimated by the FEM analysis, was in reasonable agreement with the experimental results. PMID:26188313
Accurate Navier-Stokes results for the hypersonic flow over a spherical nosetip
Blottner, F.G.
1989-01-01
The unsteady thin-layer Navier-Stokes equations for a perfect gas are solved with a linearized block Alternating Direction Implicit finite-difference solution procedure. Solution errors due to numerical dissipation added to the governing equations are evaluated. Errors in the numerical predictions on three different grids are determined where Richardson extrapolation is used to estimate the exact solution. Accurate computational results are tabulated for the hypersonic laminar flow over a spherical body which can be used as a benchmark test case. Predictions obtained from the code are in good agreement with inviscid numerical results and experimental data. 9 refs., 11 figs., 3 tabs.
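The Richardson extrapolation step mentioned above, estimating the exact solution from results on successively refined grids, can be sketched in a few lines. This is a generic illustration on a model derivative problem, not the paper's CFD code: for a second-order method, halving the step reduces the error about fourfold, so the two solutions can be combined to cancel the leading error term.

```python
import math

# Generic Richardson extrapolation sketch (order p = 2): combine results
# at step h and h/2 to estimate the converged (h -> 0) value.
def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # second-order accurate

f, x = math.sin, 1.0
exact = math.cos(1.0)

a_h = central_diff(f, x, 0.1)
a_h2 = central_diff(f, x, 0.05)

# A_exact ~ A(h/2) + (A(h/2) - A(h)) / (2**p - 1), with p = 2
richardson = a_h2 + (a_h2 - a_h) / 3.0

print(abs(a_h2 - exact), abs(richardson - exact))
```

In a grid-convergence study the same combination is applied to functionals of the solution (e.g. surface heating or drag) computed on three grids, which is how benchmark "exact" values are tabulated and how the numerical-dissipation error is quantified.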
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
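A minimal sketch of a monotone cubic Hermite interpolant helps make the median function's role concrete. The limiter below, `median(0, left difference, right difference)`, is the classical minmod choice expressed through the median; it is a simplified stand-in, not the paper's accuracy-preserving algorithm (which relaxes exactly this constraint near extrema).

```python
# Monotone piecewise-cubic Hermite interpolation with median-limited
# slopes. Simplified illustration; not the paper's exact scheme.
def median(a, b, c):
    return sorted([a, b, c])[1]

def monotone_slopes(x, y):
    # divided differences on each interval
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
    # interior node slope = median(0, left, right): zero at sign changes,
    # otherwise the smaller-magnitude one-sided difference (minmod)
    return [d[0]] + [median(0.0, d[i - 1], d[i])
                     for i in range(1, len(d))] + [d[-1]]

def hermite_eval(x, y, m, t):
    i = max(j for j in range(len(x) - 1) if x[j] <= t)
    h = x[i + 1] - x[i]
    s = (t - x[i]) / h
    h00 = 2 * s**3 - 3 * s**2 + 1      # cubic Hermite basis functions
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.1, 0.9, 1.0]   # monotone data with a sharp transition
ms = monotone_slopes(xs, ys)
vals = [hermite_eval(xs, ys, ms, 0.05 * k) for k in range(60)]
```

With these limited slopes the interpolant provably stays monotone (the slope ratios fall inside the Fritsch-Carlson monotonicity region), but at a strict local extremum the limiter forces a zero slope, which is exactly where accuracy degenerates to second order and where the paper's relaxed median-based construction recovers higher order.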
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
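The significance of "eight grid points per wavelength" can be illustrated with a standard modified-wavenumber comparison. This is a generic textbook calculation for ordinary central differences, not Goodrich's schemes: the relative phase error at that resolution is what accumulates into dispersion error over many periods of propagation.

```python
import math

# Modified-wavenumber phase error of standard central differences at
# eight grid points per wavelength (kh = 2*pi/8). Illustrative only;
# not the high-resolution schemes of the paper.
kh = 2 * math.pi / 8

kh_2nd = math.sin(kh)                                # 2nd-order central
kh_4th = (8 * math.sin(kh) - math.sin(2 * kh)) / 6   # 4th-order central

err_2nd = abs(kh_2nd - kh) / kh
err_4th = abs(kh_4th - kh) / kh
print(err_2nd, err_4th)  # per-step relative phase error
```

Even the fourth-order scheme retains a phase error of roughly a percent per wavelength at this sampling, which compounds over long propagation distances; this is why schemes with spectral-like resolution are needed to stay accurate after O(10^6) periods.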
NASA Astrophysics Data System (ADS)
Buchner, J.; Simunek, J.; Dane, J. H.; King, A. P.; Lee, J.; Rolston, D. E.; Hopmans, J. W.
2007-12-01
Carbon dioxide emissions from an agricultural field in the Sacramento Valley, California, were evaluated using the process-based SOILCO2 module of the HYDRUS-1D software package and a simple empirical model. CO2 fluxes, meteorological variables, soil temperatures, and water contents were measured during years 2004-2006 at multiple locations in an agricultural field, half of which had been subjected to standard tillage and the other half to minimum tillage. Furrow irrigation was applied on a regular basis. While HYDRUS-1D simulates dynamic interactions between soil water contents, temperatures, soil CO2 concentrations, and soil respiration by numerically solving the partial differential equations for water flow (Richards) and for heat and CO2 transport (convection-dispersion), the empirical model is based on simple reduction functions, closely resembling the CO2 production function of SOILCO2. It is assumed in this function that overall CO2 production in the soil profile is the sum of the soil and plant respiration, optimal values of which are affected by time, depth, water contents, temperatures, soil salinity, and CO2 concentrations in the soil profile. The effect of these environmental factors is introduced using various reduction functions that multiply the optimal soil CO2 production. While in the SOILCO2 module it is assumed that CO2 is produced in the soil profile and then transported, depending mainly on water contents, toward the soil surface, the empirical model relates CO2 emissions directly to various environmental factors. It was shown that both the numerical model and the simple reduction functions could predict the CO2 fluxes across the soil surface reasonably well. Regression coefficients between measured CO2 emissions and those predicted by the numerical and simple empirical models are compared.
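The multiplicative reduction-function structure described above can be sketched directly. The functional forms and constants below are illustrative stand-ins (a Q10 temperature response, a linear water-content ramp, exponential decay with depth), not SOILCO2's actual parameterization.

```python
import math

# Sketch of the reduction-function idea: production = optimal rate times
# a product of dimensionless reduction factors. Forms/constants invented.
def f_temperature(T, q10=2.0, T_ref=20.0):
    return q10 ** ((T - T_ref) / 10.0)        # Q10-type response

def f_water(theta, theta_wilt=0.05, theta_opt=0.30):
    # no production below wilting point, linear rise to an optimum
    return max(0.0, min(1.0, (theta - theta_wilt) / (theta_opt - theta_wilt)))

def f_depth(z, z_star=0.2):
    return math.exp(-z / z_star)              # production decays with depth

def co2_production(p_opt, T, theta, z):
    # overall production = optimal rate x product of reduction functions
    return p_opt * f_temperature(T) * f_water(theta) * f_depth(z)

rate = co2_production(p_opt=1.0, T=25.0, theta=0.20, z=0.1)
```

The empirical model stops here, mapping environmental factors straight to surface flux, whereas SOILCO2 integrates such a production term over depth and then transports the CO2 through the profile before it reaches the surface.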
NASA Astrophysics Data System (ADS)
Henne, Stephan; Kaufmann, Pirmin; Schraner, Martin; Brunner, Dominik
2013-04-01
allows particles to leave the limited COSMO domain. On the technical side, we added an OpenMP shared-memory parallelisation to the model, which also allows for asynchronous reading of input data. Here we present results from several model performance tests under different conditions and compare these with results from standard FLEXPART simulations using nested ECMWF input. This analysis will contain evaluation of deposition fields, comparison of convection schemes and performance analysis of the parallel version. Furthermore, a series of forward-backward simulations were conducted in order to test the robustness of model results independent of the integration direction. Finally, selected examples from recent applications of the model to transport of radioactive and conservative tracers and for in-situ measurement characterisation will be presented.
NASA Astrophysics Data System (ADS)
Pytharoulis, Ioannis; Tegoulias, Ioannis; Karacostas, Theodore; Kotsopoulos, Stylianos; Kartsios, Stergios; Bampzelis, Dimitrios
2015-04-01
The Thessaly plain, which is located in central Greece, has a vital role in the financial life of the country because of its significant agricultural production. The aim of the DAPHNE project (http://www.daphne-meteo.gr) is to tackle the problem of drought in this area by means of weather modification in convective clouds. This problem is reinforced by the increase of population and the water demand for irrigation, especially during the warm period of the year. The nonhydrostatic Weather Research and Forecasting (WRF) model is utilized for research and operational purposes of the DAPHNE project. The WRF output fields are employed by the partners in order to provide high-resolution meteorological guidance and plan the project's operations. The model domains cover: i) Europe, the Mediterranean Sea and northern Africa, ii) Greece and iii) the wider region of Thessaly (at selected periods), at horizontal grid-spacings of 15 km, 5 km and 1 km, respectively, using 2-way telescoping nesting. The aim of this research work is to investigate the model performance in relation to the prevailing upper-air synoptic circulation. The statistical evaluation of the high-resolution operational forecasts of near-surface and upper-air fields is performed at a selected period of the operational phase of the project using surface observations, gridded fields and weather radar data. The verification is based on gridded, point and object-oriented techniques. The 10 upper-air circulation types, which describe the prevailing conditions over Greece, are employed in the synoptic classification. This methodology allows the identification of model errors that occur and/or are maximized at specific synoptic conditions and may otherwise be obscured in aggregate statistics. Preliminary analysis indicates that the largest errors are associated with cyclonic conditions. Acknowledgments This research work of the DAPHNE project (11SYN_8_1088) is co-funded by the European Union (European Regional Development Fund
Chang, Tsang-Jung
2002-09-01
A computational fluid dynamics technique was used to evaluate the effect of traffic pollution on indoor air quality of a naturally ventilated building for various ventilation control strategies. The transport of street-level nonreactive pollutants emitted from motor vehicles through the indoor environment was simulated using the large eddy simulation (LES) of the turbulent flows and the pollutant transport equations. The numerical model developed herein was verified by available wind-tunnel measurements. Good agreement with the measured velocity and concentration data was found. Twelve sets of numerical scenario simulations for various roof- and side-vent openness and outdoor wind speeds were carried out. The effects of the air change rate, the indoor airflow pattern, and the external pollutant dispersion on indoor air quality were investigated. The control strategies of ventilation rates and paths for reducing incoming vehicle pollutants and maintaining a desirable air change rate are proposed to reduce the impact of outdoor traffic pollution during traffic rush hours. It was concluded that the windward side vent is a significant factor contributing to air change rate and indoor air quality. Air intakes on the leeward side of the building can effectively reduce the peak and average indoor concentration of traffic pollutants, but the corresponding air change rate is relatively low. Using the leeward cross-flow ventilation with the windward roof vent can effectively lower incoming vehicle pollutants and maintain a desirable air change rate during traffic rush hours. PMID:12269665
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2013-03-01
In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to sharpen the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.
NASA Astrophysics Data System (ADS)
Sanchez, Jose Luis; Posada, Rafael; Hierro, Rodrigo; García-Ortega, Eduardo; Lopez, Laura; Gascón, Estibaliz
2013-04-01
Madrid-Barajas airport is located about 70 km from the Central System, and snow days with mountain waves are considered risk days for landing operations. This motivated the study of the mesoscale factors affecting this type of situation. The availability of observational data gathered during three consecutive winter campaigns in the Central System, along with data from high-resolution numerical models, has allowed the environmental conditions necessary for mountain wave formation on snow days to be evaluated and characterized from observational data and numerical simulations. By means of Meteosat Second Generation satellite images, lee clouds were observed on 25 days of the 2008-2011 winter seasons. Six of them, which also presented NW low-level flow over the mountain range, were analyzed. The necessary conditions for oscillations as well as vertical wave propagation were studied from radiometer data and MM5 model simulations. The radiometer data confirm the presence of a stable environment in the six selected events. From the MM5 model, the dynamic conditions allowing the flow to cross the mountain range were evaluated at three different locations around the range. Simulations of vertical velocity show that the MM5 model is able to detect mountain waves; the waves present in the six selected events are examined. The vertical wavelength presented high variability due to intense background winds at high tropospheric levels, with average values of λz estimated between 3 and 12 km. The intrinsic period estimated was around 30 and 12 km. The simulations were able to forecast the energy release associated with mountain waves. Acknowledgments: This study was supported by the Plan Nacional de I+D of Spain, through the grants CGL2010-15930, Micrometeo IPT-310000-2010-022 and the Junta de Castilla y León through the grant LE220A11-2.
Nurmi, Joonas; Pellinen, Jukka; Rantalainen, Anna-Lea
2012-03-01
Emerging contaminants from wastewater effluent samples were analysed, using posttarget and nontarget analysis techniques. The samples were analysed with an ultra performance liquid chromatograph-time-of-flight mass spectrometer (UPLC-TOF-MS), and the resulting data were processed with commercial deconvolution software. The method works well for posttarget analysis with prior information about the retention times of the compounds of interest. With positive polarity, 63 of 66 compounds and with negative polarity, 18 of 20 compounds were correctly identified in a spiked sample, while two compounds of a total of 88 fell out of the mass range. Furthermore, a four-stage process for identification was developed for the posttarget analysis lacking the retention time data. In the process, the number of candidate compounds was reduced by using the accurate mass of selected compounds in two steps (stages 1 and 2), structure-property relationships (stage 3) and isotope patterns of the analytes (stage 4). The process developed was validated by analysing wastewater samples spiked with 88 compounds. This procedure can be used to gain a preliminary indication of the presence of certain analytes in the samples. Nontarget analysis was tested by applying a theoretical mass spectra library for a wastewater sample spiked with six pharmaceuticals. The results showed a high number of false identifications. In addition, manual processing of the data was considered laborious and ineffective. Finally, the posttarget analysis was applied to a real wastewater sample. The analysis revealed the presence of six compounds that were afterwards confirmed with standard compounds as being correct. Three psycholeptics (nordiazepam, oxazepam and temazepam) could be tentatively identified, using the identification process developed. Posttarget analysis with UPLC-TOF-MS proved to be a promising method for analysing wastewater samples, while we concluded that the software for nontarget analysis will need
NASA Astrophysics Data System (ADS)
Liu, Jianqiao; Jin, Guohua; Zhai, Zhaoxia; Monica, Faheema Fairuj; Liu, Xuesong
2015-05-01
The grain size effects on tin oxide gas-sensitive elements are numerically described by the model of gradient-distributed oxygen vacancies, which extends the receptor function of semiconductors to the condition of inhomogeneous donor density in grains. The sensor resistance and the response to the reducing gas are formulated as functions of the grain size and the depletion layer width. The simulations show good agreement with the experimental results. The depletion layer width is estimated as 4 nm for the undoped SnO2 element, whereas the values are 2 and 7 nm for Sb-doped and Al-doped samples, respectively. The results are experimentally verified by the donor-doped SnO2 thin films, the depletion layer widths of which are evaluated on the basis of the correlation between the electrical resistance and the Sb-doping amount. The location of the Fermi level is found to be a crucial factor that dominates the evaluation results.
NASA Astrophysics Data System (ADS)
Guevara, Carlos; Graf, Thomas
2013-04-01
Subsurface water systems are endangered due to salt water intrusion in coastal aquifers, leachate infiltration from waste disposal sites and salt transport in agricultural sites. This leads to the situation where a more dense fluid overlies a less dense fluid, creating a density gradient. Under certain conditions this density gradient produces instabilities in the form of dense plume fingers that move downwards. This free convection enhances solute transport over larger distances and shorter times. In cases where a significantly larger density gradient exists, the effect of free convection on transport is non-negligible. The assumption of a constant density distribution in space and time is no longer valid. Therefore variable-density flow must be considered. The flow equation and the transport equation govern the numerical modeling of variable-density flow and solute transport. Computer simulation programs mathematically describe variable-density flow using the Oberbeck-Boussinesq Approximation (OBA). Three levels of simplification can be considered, denoted OB1, OB2 and OB3. OB1 is the usually applied simplification, where variable density is taken into account in the hydraulic potential. In OB2 variable density is considered in the flow equation, and in OB3 variable density is additionally considered in the transport equation. Using the results from a laboratory-scale experiment of variable-density flow and solute transport (Simmons et al., Transp. Porous Media, 2002), it is investigated which level of mathematical accuracy is required to represent the physical experiment most accurately. Differences between the physical and mathematical model are evaluated using qualitative indicators (e.g. mass fluxes, Nusselt number). Results show that OB1 is required for small density gradients and OB3 is required for large density gradients.
NASA Technical Reports Server (NTRS)
Ho, C. Y.
1993-01-01
The Center for Information and Numerical Data Analysis and Synthesis (CINDAS) measures and maintains databases on thermophysical, thermoradiative, mechanical, optical, electronic, ablation, and physical properties of materials. Emphasis is on aerospace structural materials, especially composites, and on infrared detector/sensor materials. Within CINDAS, the Department of Defense sponsors at Purdue several centers: the High Temperature Material Information Analysis Center (HTMIAC), the Ceramics Information Analysis Center (CIAC) and the Metals Information Analysis Center (MIAC). The responsibilities of CINDAS are extremely broad, encompassing basic and applied research, measurement of the properties of thin wires and thin foils as well as bulk materials, acquisition and search of world-wide literature, critical evaluation of data, generation of estimated values to fill data voids, investigation of constitutive, structural, processing, environmental, and rapid heating and loading effects, and dissemination of data. Liquids, gases, molten materials and solids are all considered. The responsibility of maintaining widely used databases includes data evaluation, analysis, correlation, and synthesis. Material property data recorded in the literature are often conflicting, diverging, and subject to large uncertainties. It is admittedly difficult to accurately measure materials properties. Systematic and random errors both enter. Some errors result from lack of characterization of the material itself (impurity effects). In some cases the assumed boundary conditions corresponding to a theoretical model are not obtained in the experiments. Stray heat flows and losses must be accounted for. Some experimental methods are inappropriate, and in other cases appropriate methods are carried out with poor technique. Conflicts in data may be resolved by curve fitting of the data to theoretical or empirical models or correlation in terms of various affecting parameters. Reasons (e.g. phase
Hibi, Yoshihiko; Tomigashi, Akira; Hirose, Masafumi
2015-12-01
Numerical simulations that couple flow in a surface fluid with that in a porous medium are useful for examining problems of pollution that involve interactions among the atmosphere, surface water and groundwater, including, for example, saltwater intrusion along coasts. We previously developed a numerical simulation method for simulating a coupled atmospheric gas, surface water, and groundwater system (called the ASG method) that employs a saturation equation for flow in a porous medium; this equation allows both the void fraction of water in the surface system and water saturation in the porous medium to be solved simultaneously. It remained necessary, however, to evaluate how global pressure, including gas pressure, water pressure, and capillary pressure, should be specified at the boundary between the surface and the porous medium. Therefore, in this study, we derived a new equation for global pressure and integrated it into the ASG method. We then simulated water saturation in a porous medium and the void fraction of water in a surface system by the ASG method and reproduced fairly well the results of two column experiments. Next, we simulated water saturation in a porous medium (sand) with a bank, by using both the ASG method and a modified Picard (MP) method. We found only a slight difference in water saturation between the ASG and MP simulations. This result confirmed that the derived equation for global pressure was valid for a porous medium, and that the global pressure value could thus be used with the saturation equation for porous media. Finally, we used the ASG method to simulate a system coupling atmosphere, surface water, and a porous medium (110 m wide and 50 m high) with a trapezoidal bank. The ASG method was able to simulate the complex flow of fluids in this system and the interaction between the porous medium and the surface water or the atmosphere. PMID:26583741
NASA Technical Reports Server (NTRS)
Knudsen, Erik; Arakere, Nagaraj K.
2006-01-01
Foam, a cellular material, is found all around us; bone and cork are examples of biological cellular materials. Many forms of man-made foam have found practical applications as insulating materials. NASA uses the BX-265 foam insulation material on the external tank (ET) for the Space Shuttle. This is a type of Spray-on Foam Insulation (SOFI), similar to the material used to insulate attics in residential construction. This foam material is a good insulator and is very lightweight, making it suitable for space applications. The breakup of segments of this foam insulation on the shuttle ET, impacting the shuttle thermal protection tiles during liftoff, is believed to have caused the Space Shuttle Columbia failure during re-entry. NASA engineers are very interested in understanding the processes that govern the breakup/fracture of this complex material from the shuttle ET. The foam is anisotropic in nature, and the required stress and fracture mechanics analysis must include the effects of direction-dependent material properties. Material testing at NASA MSFC has indicated that the foam can be modeled as a transversely isotropic material. As a first step toward understanding the fracture mechanics of this material, we present a general theoretical and numerical framework for computing stress intensity factors (SIFs) under mixed-mode loading conditions, taking into account the material anisotropy. We present mode I SIFs for middle tension, M(T), test specimens, using 3D finite element stress analysis (ANSYS) and the FRANC3D fracture analysis software developed by the Cornell Fracture Group. Mode I SIF values are presented for a range of foam material orientations. Also, NASA has recorded the failure load for various M(T) specimens. For a linear analysis, the mode I SIF will scale with the far-field load. This allows us to numerically estimate the mode I fracture toughness for this material. The results represent a quantitative basis for evaluating the strength and
NASA Astrophysics Data System (ADS)
Barker, Jessica L. B.; Hassan, Md. Mahadi; Sultana, Sarmin; Ahmed, Kazi Matin; Robinson, Clare E.
2016-09-01
Aquifer storage, transfer and recovery (ASTR) may be an efficient low cost water supply technology for rural coastal communities that experience seasonal freshwater scarcity. The feasibility of ASTR as a water supply alternative is being evaluated in communities in south-western Bangladesh where the shallow aquifers are naturally brackish and severe seasonal freshwater scarcity is compounded by frequent extreme weather events. A numerical variable-density groundwater model, first evaluated against data from an existing community-scale ASTR system, was applied to identify the influence of hydrogeological as well as design and operational parameters on system performance. For community-scale systems, it is a delicate balance to achieve acceptable water quality at the extraction well whilst maintaining a high recovery efficiency (RE) as dispersive mixing can dominate relative to the small size of the injected freshwater plume. For the existing ASTR system configuration used in Bangladesh where the injection head is controlled and the extraction rate is set based on the community water demand, larger aquifer hydraulic conductivity, aquifer depth and injection head improve the water quality (lower total dissolved solids concentration) in the extracted water because of higher injection rates, but the RE is reduced. To support future ASTR system design in similar coastal settings, an improved system configuration was determined and relevant non-dimensional design criteria were identified. Analyses showed that four injection wells distributed around a central single extraction well leads to high RE provided the distance between the injection wells and extraction well is less than half the theoretical radius of the injected freshwater plume. The theoretical plume radius relative to the aquifer dispersivity is also an important design consideration to ensure adequate system performance. The results presented provide valuable insights into the feasibility and design
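The spacing rule above (injection wells within half the theoretical radius of the injected freshwater plume) can be made concrete with the textbook cylindrical-plume approximation R = sqrt(V / (pi n b)). The sketch below is only a hedged illustration: the injection volume, aquifer thickness and porosity are hypothetical, not values from the study.

```python
import math

def plume_radius(volume_m3, thickness_m, porosity):
    """Theoretical radius of an injected freshwater plume, assuming it forms
    an ideal cylinder over the full aquifer thickness (no mixing or drift)."""
    return math.sqrt(volume_m3 / (math.pi * porosity * thickness_m))

# Hypothetical community-scale numbers, not from the study:
R = plume_radius(volume_m3=2000.0, thickness_m=10.0, porosity=0.3)
max_spacing = 0.5 * R  # design rule quoted above: wells within half the radius
print(round(R, 1), round(max_spacing, 1))
```

With these invented inputs the plume radius is roughly 15 m, so the injection wells would need to sit within about 7 m of the extraction well.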
NASA Astrophysics Data System (ADS)
Itano, Wayne M.; Ramsey, Norman F.
1993-07-01
The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.
NASA Astrophysics Data System (ADS)
Tsujimura, Maki; Watanabe, Yasuto; Ikeda, Koichi; Yano, Shinjiro; Abe, Yutaka
2016-04-01
Headwater catchments in mountainous regions are the most important recharge areas for surface and subsurface waters, and the age of the water is essential for understanding hydrological processes in these catchments. However, little research has evaluated how the residence time of subsurface water varies in time and space in mountainous headwaters, especially those with steep slopes. We investigated the temporal variation of the residence time of spring and ground water, tracing the hydrological flow processes in mountainous catchments underlain by granite in Yamanashi Prefecture, central Japan. We conducted intensive hydrological monitoring and water sampling of spring, stream, and ground waters in high-flow and low-flow seasons from 2008 through 2013 in the River Jingu Watershed, underlain by granite, with an area of approximately 15 km2 and elevations ranging from 950 m to 2000 m. CFCs, stable isotopic ratios of oxygen-18 and deuterium, and inorganic solute concentrations were determined for all water samples. A numerical simulation was also conducted to reproduce the average residence times of the spring and ground water. The residence time of the spring water estimated from the CFC concentrations ranged from 10 to 60 years in space within the watershed, and it was higher (older) during the low-flow season and lower (younger) during the high-flow season. We reproduced the seasonal change of the residence time of the spring water by numerical simulation, and the calculated residence time of the spring water and discharge of the stream agreed well with the observed values. The groundwater level was higher during the high-flow season, when the groundwater flowed dominantly through the weathered granite with higher permeability, and lower during the low-flow season, when it flowed dominantly through the fresh granite with lower permeability. This caused the seasonal variation of the residence time of the spring
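CFC-based age estimates like those above rest on matching a measured concentration to the atmospheric input history. A minimal piston-flow sketch, using an invented input curve rather than the real CFC-12 atmospheric record, might look like:

```python
# Piston-flow apparent age, the usual first-order CFC dating model: the
# recharge year is the year when the atmospheric mixing ratio matched the
# concentration measured in the spring water. Values below are illustrative,
# not the real CFC-12 atmospheric record.
import bisect

years = [1950, 1960, 1970, 1980, 1990]        # monotone rising limb only
atm_pptv = [5.0, 30.0, 120.0, 290.0, 480.0]   # hypothetical mixing ratios

def piston_flow_age(measured_pptv, sampling_year):
    """Linearly interpolate the recharge year, then return the apparent age."""
    i = bisect.bisect_left(atm_pptv, measured_pptv)
    frac = (measured_pptv - atm_pptv[i - 1]) / (atm_pptv[i] - atm_pptv[i - 1])
    recharge_year = years[i - 1] + frac * (years[i] - years[i - 1])
    return sampling_year - recharge_year

print(round(piston_flow_age(205.0, sampling_year=2010), 1))  # 35.0 years
```

Real applications correct for recharge temperature, excess air, and the post-1990s plateau of the atmospheric curve, none of which this toy handles.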
NASA Astrophysics Data System (ADS)
Hejazi, S.; Woodbury, A. D.
2011-12-01
The main goal of this research is to contribute to the understanding of nitrate transport and transformations in soil and their impact on groundwater. The physical, chemical and biochemical nitrogen transport processes, together with the spatial and temporal climate and groundwater conditions, are considered to simulate nitrate and ammonium concentrations. A Nitrogen-1D model was developed to analyze the nitrogen dynamics during treated manure application to the soil. The model simulates water movement, heat transfer and solute transformations in one-dimensional unsaturated soil. First, we coupled a vertical soil nitrogen transport model with the SABAE-HW model, a one-dimensional physically based model that uses the same physical mechanisms, inputs and outputs as CLASS (2.6). The applications of SABAE-HW have been verified previously by the authors. Mineralization, nitrification and denitrification are modeled in the soil profile as the most important nutrient cycles. It is also assumed that the main source of organic N is animal manure. A single-pool nitrogen transformation scheme is designed to simulate the mineralization, nitrification and denitrification processes. Since SABAE-HW considers the effects of soil freezing and thawing on soil water dynamics, the proposed mathematical model (SABAE-HWS) is able to investigate the effects of nitrogen biochemical reactions in winter. An upwind finite volume scheme was applied to solve the solute transport and nitrogen transformation equations numerically. The finite volume method ensures continuous fluxes across layer boundaries. To evaluate the reliability of the numerical approach, we compared the results of the model with one of the pioneering analytical solutions. Both analytical and numerical solutions were obtained for temporally dependent problems with uniform and increasing inputs. Complete agreement between them was found with respect to the different boundary conditions. The model is also calibrated using field data
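The upwind finite-volume discretization mentioned above can be sketched for 1-D advection. This is a generic first-order scheme with illustrative parameters, not the SABAE-HWS code itself; the flux-conservative form is what guarantees the continuous fluxes across cell boundaries noted in the abstract.

```python
import numpy as np

def upwind_step(c, u, dx, dt):
    """Advance cell-averaged concentrations for c_t + u c_x = 0 by one step,
    first-order upwind in flux-conservative form (periodic for simplicity)."""
    flux = u * c if u >= 0 else u * np.roll(c, -1)   # upwind interface flux
    return c - dt / dx * (flux - np.roll(flux, 1))   # conserves total mass

c = np.zeros(100)
c[10:20] = 1.0                                       # square solute pulse
for _ in range(50):
    c = upwind_step(c, u=1.0, dx=1.0, dt=0.5)        # CFL = 0.5, stable
print(round(c.sum(), 6))                             # mass is conserved: 10.0
```

The pulse advects downstream and smears (first-order numerical diffusion), but the total mass stays exactly 10, which is the property the finite-volume formulation buys.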
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
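The role of FCI as an exact benchmark within a finite basis can be illustrated by a toy exact-diagonalization example. The Hamiltonian matrix below is hypothetical (arbitrary units), not a real molecular calculation; it only shows the variational structure that makes FCI the reference point for approximate CI.

```python
import numpy as np

# Toy "full CI" in a 4-determinant model space: H is a small symmetric matrix
# with hypothetical matrix elements, standing in for <D_i|H|D_j>.
H = np.array([[-1.0,  0.2,  0.1,  0.0],
              [ 0.2,  0.5,  0.3,  0.1],
              [ 0.1,  0.3,  1.0,  0.2],
              [ 0.0,  0.1,  0.2,  2.0]])

# "FCI": the exact ground state within this basis is the lowest eigenvalue.
e_fci = np.linalg.eigvalsh(H)[0]

# A truncated "CI" keeping only the first two determinants, for comparison.
e_trunc = np.linalg.eigvalsh(H[:2, :2])[0]

# Variational ordering: FCI <= truncated CI <= single-determinant energy.
print(e_fci <= e_trunc <= H[0, 0])
```

Cauchy interlacing guarantees this ordering for any Hermitian H, which is why FCI energies bound every truncated-CI benchmark from below.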
de Oliveira, Martha Lima; de Melo, Edésio José Tenório; Miguens, Flávio Costa
2016-09-01
Airborne particulate matter (PM) has been included among the most important air pollutants by governmental environmental agencies and academic researchers. The use of terrestrial plants for monitoring PM has been widely accepted, particularly when coupled with SEM/EDS. Herein, Tillandsia stricta leaves were used as monitors of PM, focusing on a comparative evaluation of Environmental SEM (ESEM) and High-Pressure SEM (HPSEM). In addition, specimens air-dried in a formaldehyde atmosphere (AD/FA) were introduced as an SEM preparation procedure. Hydrated specimen observation by ESEM was the best way to get information from T. stricta leaves. If any artifacts were introduced by AD/FA, they were indiscernible from those caused by critical-point drying (CPD). Leaf anatomy was always well preserved. PM density was determined on the adaxial and abaxial leaf epidermis for each of the SEM procedures. When compared with ESEM, particle extraction varied from 0 to 20% in air-dried leaves, while 23-78% of the particles deposited on leaf surfaces were extracted by CPD procedures. ESEM was clearly the best choice among these methods, but morphological artifacts increased as a function of operation time, whereas HPSEM operation time was unlimited. AD/FA avoided the shrinkage observed in the air-dried leaves, and particle extraction was low compared with CPD. Structural and particle density results suggest AD/FA as an important methodological approach to air pollution biomonitoring that can be widely used in all electron microscopy labs. Otherwise, previous PM assessments using terrestrial plants as biomonitors performed by conventional SEM could have underestimated airborne particulate matter concentrations. PMID:27357408
Kuo, Chao-Hung; Liu, Chung-Jung; Yang, Ching-Chia; Kuo, Fu-Chen; Hu, Huang-Ming; Shih, Hsiang-Yao; Wu, Meng-Chieh; Chen, Yen-Hsu; Wang, Hui-Min David; Ren, Jian-Lin; Wu, Deng-Chyang; Chang, Lin-Li
2016-05-01
Because Helicobacter pylori (H pylori) can cause gastric carcinogenesis, sufficient information is needed to decide on an appropriate eradication strategy. Many factors affect the efficacy of eradication, including antimicrobial resistance (especially clarithromycin resistance) and CYP2C19 polymorphism. This study surveyed the usefulness of gastric juice for detecting H pylori infection, clarithromycin resistance, and CYP2C19 polymorphism. Specimens of gastric juice were collected from all patients during gastroscopy. DNA was extracted from the gastric juice, and urease A and cag A were amplified by polymerase chain reaction (PCR) to detect the presence of H pylori. By PCR-restriction fragment length polymorphism (PCR-RFLP), the 23S rRNA of H pylori and the CYP2C19 genotypes of the host were examined, respectively. During the endoscopy examination, biopsy specimens were also collected for rapid urease testing, culture, and histology. Blood samples were also collected for analysis of CYP2C19 genotypes. We compared the results of the gastric juice tests with the results of traditional clinical tests. Compared with the traditional clinical tests, our gastric juice results showed that the sensitivity (SEN), specificity (SPE), positive predictive value (PPV), negative predictive value (NPV), and accuracy for detecting H pylori infection were 92.1% (105/114), 92.9% (143/154), 90.5% (105/116), 94.1% (143/152), and 92.5% (248/268), respectively. The SEN, SPE, PPV, and NPV for detecting clarithromycin resistance were 97.3% (36/37), 91.5% (43/47), 90.0% (36/40), and 97.7% (43/44), respectively. By PCR-RFLP, the concordance of human CYP2C19 gene polymorphism between blood samples and gastric juice was as high as 94.9% (149/157). Gastric juice is thus an effective diagnostic sample for evaluating H pylori presence, clarithromycin resistance, and host CYP2C19 polymorphism. PMID:27227911
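The diagnostic statistics quoted above follow from the confusion matrix implied by the quoted counts for H pylori detection (TP = 105, FN = 9, FP = 11, TN = 143). A short sketch recomputing them (variable names are illustrative, not from the study):

```python
# Recompute sensitivity, specificity, PPV, NPV and accuracy from the
# confusion-matrix counts implied by the abstract's fractions.

def diagnostic_metrics(tp, fn, fp, tn):
    """Return the standard diagnostic test metrics as fractions."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all infected
        "specificity": tn / (tn + fp),   # true negatives / all uninfected
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

m = diagnostic_metrics(tp=105, fn=9, fp=11, tn=143)
print({k: round(v, 3) for k, v in m.items()})
# sensitivity 105/114 ≈ 0.921, specificity 143/154 ≈ 0.929,
# PPV 105/116 ≈ 0.905, NPV 143/152 ≈ 0.941, accuracy 248/268 ≈ 0.925
```

The recomputed values match the percentages reported in the abstract, confirming the counts are internally consistent.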
A robust and accurate formulation of molecular and colloidal electrostatics.
Sun, Qiang; Klaseboer, Evert; Chan, Derek Y C
2016-08-01
This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics. PMID:27497538
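The singularity-cancellation device can be sketched in the simpler Laplace setting. This is only an illustration of the idea: the paper itself treats the Debye-Hückel (modified Helmholtz) kernel, where the subtracted known solution must satisfy the same equation, and the weakly singular G term is removed by an analogous subtraction.

```latex
% Conventional boundary integral equation (Laplace case):
c(\mathbf{x}_0)\,\phi(\mathbf{x}_0)
  + \oint_S \phi(\mathbf{x})\,\frac{\partial G}{\partial n}\,\mathrm{d}S
  = \oint_S G(\mathbf{x},\mathbf{x}_0)\,\frac{\partial\phi}{\partial n}\,\mathrm{d}S
% Subtracting the constant solution \phi(\mathbf{x}_0), and using the
% identity c(\mathbf{x}_0) = -\oint_S (\partial G/\partial n)\,\mathrm{d}S :
\oint_S \bigl[\phi(\mathbf{x}) - \phi(\mathbf{x}_0)\bigr]\,
  \frac{\partial G}{\partial n}\,\mathrm{d}S
  = \oint_S G(\mathbf{x},\mathbf{x}_0)\,\frac{\partial\phi}{\partial n}\,\mathrm{d}S
```

Because the bracketed integrand vanishes as x approaches x0, the strongly singular double-layer term becomes integrable by ordinary quadrature, which is the property exploited in points (i) and (ii) above.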
Evaluation of numerical models by FerryBox and fixed platform in situ data in the southern North Sea
NASA Astrophysics Data System (ADS)
Haller, M.; Janssen, F.; Siddorn, J.; Petersen, W.; Dick, S.
2015-11-01
For understanding and forecasting of hydrodynamics in coastal regions, numerical models have served as an important tool for many years. In order to assess the model performance, we compared simulations to observational data of water temperature and salinity. Observations were available from FerryBox transects in the southern North Sea and, additionally, from a fixed platform of the MARNET network. More detailed analyses have been made at three different stations, located off the English eastern coast, at the Oyster Ground and in the German Bight. FerryBoxes installed on ships of opportunity (SoO) provide high-frequency surface measurements along selected tracks on a regular basis. The results of two operational hydrodynamic models have been evaluated for two different time periods: BSHcmod v4 (January 2009 to April 2012) and FOAM AMM7 NEMO (April 2011 to April 2012). While they adequately simulate temperature, both models underestimate salinity, especially near the coast in the southern North Sea. Statistical errors differ between the two models and between the measured parameters. The root mean square error (RMSE) of water temperatures amounts to 0.72 °C (BSHcmod v4) and 0.44 °C (AMM7), while for salinity the performance of BSHcmod is slightly better (0.68 compared to 1.1). The study results reveal weaknesses in both models, in terms of variability, absolute levels and limited spatial resolution. Simulation of the transition zone between the coasts and the open sea is still a demanding task for operational modelling. Thus, FerryBox data, combined with other observations with differing temporal and spatial scales, can serve as an invaluable tool not only for model evaluation, but also for model optimization by assimilation of such high-frequency observations.
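The RMSE scores quoted above are the standard pointwise model-observation comparison. A minimal sketch with invented values (not FerryBox data):

```python
import math

def rmse(model, obs):
    """Root mean square error between paired model and observation values."""
    assert len(model) == len(obs) and model
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(model))

# Toy temperature series in °C (illustrative, not FerryBox measurements):
sst_model = [10.1, 10.6, 11.2, 11.9]
sst_obs   = [10.0, 10.5, 11.0, 12.0]
print(round(rmse(sst_model, sst_obs), 3))
```

In practice the comparison also requires collocating model output onto the moving FerryBox track in space and time before differencing, which this toy omits.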
NASA Technical Reports Server (NTRS)
Atkins, H. L.; Helenbrook, B. T.
2005-01-01
This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in the implementations of the various combinations of relaxation schemes, discretizations and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods, such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is discovered that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
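The Jacobi versus Gauss-Seidel comparison can be illustrated on a simple model problem. The sketch below uses pointwise relaxation of a 1-D Poisson equation, not the discontinuous Galerkin discretization of the paper, so it only shows the qualitative ordering: Gauss-Seidel damps the error faster per sweep because it uses freshly updated neighbors.

```python
import numpy as np

def relax(n=32, sweeps=200, method="jacobi"):
    """Relax -u'' = 1 on (0,1), u(0)=u(1)=0; return the final residual norm."""
    h = 1.0 / n
    u = np.zeros(n + 1)
    f = np.ones(n + 1) * h * h          # right-hand side scaled by h^2
    for _ in range(sweeps):
        if method == "jacobi":          # all updates use old neighbor values
            u[1:-1] = 0.5 * (u[:-2] + u[2:] + f[1:-1])
        else:                           # Gauss-Seidel: fresh left neighbors
            for i in range(1, n):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + f[i])
    res = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:])
    return np.linalg.norm(res)

# After the same number of sweeps, Gauss-Seidel leaves a smaller residual.
print(relax(method="jacobi") > relax(method="gs"))
```

For this model problem the asymptotic convergence factor of Gauss-Seidel is the square of Jacobi's, consistent with the faster convergence reported above, though the exact 40% figure is specific to the paper's block relaxation and discretization.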
Ongaro, A; Campana, L G; De Mattei, M; Dughiero, F; Forzan, M; Pellati, A; Rossi, C R; Sieni, E
2016-04-01
Electrochemotherapy (ECT) is a local anticancer treatment based on the combination of chemotherapy and short, tumor-permeabilizing voltage pulses delivered using needle or plate electrodes. The application of ECT to tumors covering a large skin surface is time consuming due to technical limitations of currently available voltage applicators. The availability of large pulse applicators with fewer and more widely spaced needle electrodes could be useful in the clinic, since they could allow managing large and spread-out tumors while limiting the duration and invasiveness of the procedure. In this article, a grid electrode with needles spaced 2 cm apart has been studied by means of numerical models. The electroporation efficiency has been assessed on the human osteosarcoma cell line MG63 cultured in monolayer. The computational results show the distribution of the electric field in a model of the treated tissue; these results help evaluate the effect of the needle spacing on the electric field distribution. Furthermore, the in vitro tests showed that the proposed grid electrode is suitable for electroporating, in a single application, a cell culture covering an area of 55 cm². In conclusion, our data might represent a substantial improvement in ECT toward a more homogeneous and time-saving treatment, with benefits for patients with cancer. PMID:25911645
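The electric field distribution between needle electrodes can be sketched with a coarse finite-difference Laplace solve. The grid, geometry and voltage below are illustrative assumptions, not the applicator model used in the article.

```python
import numpy as np

# Minimal 2-D Laplace sketch of the potential between two needle electrodes,
# solved by Jacobi iteration on a coarse grid with grounded outer edges.
n, v = 41, 1000.0                      # grid points per side, pulse voltage
phi = np.zeros((n, n))
mask = np.zeros((n, n), dtype=bool)    # fixed-potential (electrode) nodes
mask[20, 10] = mask[20, 30] = True
phi[20, 10], phi[20, 30] = +v, -v

for _ in range(2000):                  # Jacobi sweeps toward the solution
    new = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                  + phi[1:-1, :-2] + phi[1:-1, 2:])
    phi[1:-1, 1:-1] = np.where(mask[1:-1, 1:-1], phi[1:-1, 1:-1], new)

ey, ex = np.gradient(-phi)             # E = -grad(phi), in grid units
emag = np.hypot(ex, ey)
print(emag[20, 20] > emag[5, 5])       # field concentrates between needles
```

A study like the one above would then compare such field maps against the electroporation threshold to see which regions of tissue a given needle spacing actually treats.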
A numerical evaluation of TIROS-N and NOAA-6 analyses in a high resolution limited area model
NASA Technical Reports Server (NTRS)
Derber, J. C.; Koehler, T. L.; Horn, L. H.
1981-01-01
Vertical temperature profiles derived from TIROS-N and NOAA-6 radiance measurements were used to create separate analyses for the period 0000 GMT 6 January to 0000 GMT 7 January 1980. The 0000 GMT 6 January satellite analyses and a conventional analysis were used to initialize and run the University of Wisconsin's version of the Australian Region Primitive Equations model. Forecasts based on conventional analyses were used to evaluate the forecasts based only on satellite upper-air data. The forecasts based only on TIROS-N or NOAA-6 data did reasonably well in locating the main trough and ridge positions. The satellite initial analyses and forecasts revealed errors correlated with the synoptic situation. The trough in both the TIROS-N and NOAA-6 forecasts, which was initially too warm, remained too warm as it propagated eastward during the forecast period. Thus, it is unlikely that the operational satellite data will improve forecasts in a data-dense region. However, in regions of poor data coverage, the satellite data should have a beneficial effect on numerical forecasts.
Numerical evaluation of the use of granulated coal ash to reduce an oxygen-deficient water mass.
Yamamoto, Hironori; Yamamoto, Tamiji; Mito, Yugo; Asaoka, Satoshi
2016-06-15
Granulated coal ash (GCA), a by-product of coal-fired thermal electric power stations, effectively decreases phosphate and hydrogen sulfide (H2S) concentrations in the pore water of coastal marine sediments. In this study, we developed a pelagic-benthic coupled ecosystem model to evaluate the effectiveness of GCA for diminishing the oxygen-deficient water mass formed in the coastal bottom water of Hiroshima Bay, Japan. Numerical experiments revealed that the application of GCA was effective for reducing the oxygen-deficient water masses, alleviating the summer DO depletion by 0.4-3 mg l(-1). The effect of H2S adsorption onto the GCA lasted for 5.25 years in the case in which GCA was mixed with the sediment at a volume ratio of 1:1. The application of this new GCA-based environmental restoration technique could also make a substantial contribution to forming a recycling-oriented society. PMID:27143344
NASA Astrophysics Data System (ADS)
Margerin, Ludovic; Planès, Thomas; Mayor, Jessie; Calvet, Marie
2016-01-01
Coda-wave interferometry is a technique which exploits tiny waveform changes in the coda to detect temporal variations of seismic properties in evolving media. Observed waveform changes are of two kinds: traveltime perturbations and distortion of seismograms. In the last 10 yr, various theories have been published to relate either background velocity changes to traveltime perturbations, or changes in the scattering properties of the medium to waveform decorrelation. These theories have been limited by assumptions pertaining either to the scattering process itself (in particular, isotropic scattering) or to the propagation regime (single scattering and/or diffusion). In this manuscript, we unify and extend previous results from the literature using a radiative transfer approach. This theory allows us to incorporate the effect of anisotropic scattering and to cover a broad range of propagation regimes, including the contributions of coherent, singly scattered and multiply scattered waves. Using basic physical reasoning, we show that two different sensitivity kernels are required to describe traveltime perturbations and waveform decorrelation, respectively, a distinction which has not been well appreciated so far. Previous results from the literature are recovered as limiting cases of our general approach. To evaluate the sensitivity functions numerically, we introduce an improved version of a spectral technique known as the method of `rotated coordinate frames', which allows global evaluation of the Green's function of the radiative transfer equation in a finite domain. The method is validated through direct pointwise comparison with Green's functions obtained by the Monte Carlo method. To illustrate the theory, we consider a series of scattering media displaying increasing levels of scattering anisotropy and discuss the impact on the traveltime and decorrelation kernels. We also consider the related problem of imaging variations of scattering properties based on intensity
NASA Astrophysics Data System (ADS)
Sahin, O. K.; Asci, M.
2014-12-01
In this study, we examine the determination of theoretical parameters for the inversion of the Trabzon-Sürmene-Kutlular ore bed anomalies. Deciding which model equation to use for the inversion is the most important first step, as it offers the best chance of obtaining accurate results. Accordingly, the sections were evaluated with a sphere-cylinder nomogram. The same sections were then analyzed with a cylinder-dike nomogram to determine the theoretical parameters for the inversion for each model equation. Comparing the results, we found that only one set of inversion parameters was close to the parameters of the nomogram evaluations; the other inversion parameters differed from their nomogram counterparts.
NASA Astrophysics Data System (ADS)
Li, Dawei; Shen, Hui
2015-09-01
The first Chinese microwave ocean environment satellite HY-2A was launched successfully in August, 2011. This study presents a quality assessment of HY-2A scatterometer (HYSCAT) data based on comparison with ocean buoy data, the Advanced Scatterometer (ASCAT) data, and numerical model data from the National Centers for Environmental Prediction (NCEP). The in-situ observations include those from buoy arrays operated by the National Data Buoy Center (NDBC) and Tropical Atmosphere Ocean (TAO) project. Only buoys located offshore and in deep water were analyzed. The temporal and spatial collocation windows between HYSCAT data and buoy observations were 30 min and 25 km, respectively. The comparisons showed that the wind speeds and directions observed by HYSCAT agree well with the buoy data. The root-mean-squared errors (RMSEs) of wind speed and direction for the HYSCAT standard wind products are 1.90 m/s and 22.80°, respectively. For the HYSCAT-ASCAT comparison, the temporal and spatial differences were limited to 1 h and 25 km, respectively. This comparison yielded RMSEs of 1.68 m/s for wind speed and 19.1° for wind direction. We also compared HYSCAT winds with reanalysis data from NCEP. The results show that the RMSEs of wind speed and direction are 2.6 m/s and 26°, respectively. The global distribution of wind speed residuals (HYSCAT-NCEP) is also presented here for evaluation of the HYSCAT-retrieved wind field globally. Considering the large temporal and spatial differences of the collocated data, it is concluded that the HYSCAT-retrieved wind speed and direction met the mission requirements, which were 2 m/s and 20° for wind speeds in the range 2-24 m/s. These encouraging assessment results show that the wind data obtained from HYSCAT will be useful for the scientific community.
NASA Astrophysics Data System (ADS)
Ohashi, Takahiro
2011-05-01
In this study, support structures of a die for press working are discussed to solve machine-difference problems among presses. The developed multi-point die support structures are utilized not only for adjusting the elastic deformation of a die, but also for in-process sensing of the die's behavior. The structures have multiple support cells between a die and the slide of a press machine. Each cell, known as `a support unit,' has strain gauges attached on its side and works both as a kind of spring and as a load and displacement sensor. The cell contacts the die with a ball contact, so it transmits only the vertical force at each support point. Isolating the moment and horizontal load at each support point leads to a simple numerical model; it helps us determine the practical boundary condition at the points during actual production. In addition, the moment and horizontal forces at the points do not contribute to press working, so isolating them helps reduce jolts and related machine differences. The horizontal distribution of support units is changed to reduce the elastic deformation of a die; this helps reduce jolts, die alignment errors and geometrical errors of a product. The validity of these adjustments is confirmed by evaluating the shape of a deep-drawn product and by measuring jolts between the upper and lower stamping dies. Furthermore, die deformation during a process is analyzed using elastic FE analysis with actual bearing loads compiled from each support unit.
Numerical Asymptotic Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1992-01-01
Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.
NASA Astrophysics Data System (ADS)
Uys, Nico J.; Herbst, Charles P.; Lotter, Mattheus G.; De Villiers, Johannes F. K.; van Zyl, Martin
1997-05-01
A numerical observer (JPEGNO) to evaluate the influence of JPEG compression on the diagnostic quality of CT images is proposed. JPEGNO is based on the grey-scale histogram of the image and is defined as the inverse of the sum of the differences between successive grey levels in the histogram of the image.
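The histogram-based figure of merit described above can be sketched in a few lines. This is a minimal illustration only; the use of absolute differences and of 256 grey levels are assumptions, since the abstract does not spell them out:

```python
import numpy as np

def jpegno(image, levels=256):
    # Grey-scale histogram of the image over the assumed grey-level range.
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    # Inverse of the sum of (absolute) differences between successive
    # grey-level counts, per the abstract's definition of JPEGNO.
    return 1.0 / np.sum(np.abs(np.diff(hist)))
```

A heavily compressed image tends to have a spikier histogram, hence larger successive differences and a smaller JPEGNO; note the metric is undefined (division by zero) for a perfectly flat histogram.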
NASA Astrophysics Data System (ADS)
Moussaddy, Hadi
The Representative Volume Element (RVE) plays a central role in the mechanics of composite materials with respect to predicting their effective properties. Numerical homogenization delivers accurate estimations of composite effective properties when associated with a RVE. In computational homogenization, the RVE refers to an ensemble of random material volumes that yield, by an averaging procedure, the effective properties of the bulk material, within a tolerance. A large diversity of quantitative RVE definitions, providing computational methods to estimate the RVE size, is found in the literature. In this study, the ability of the different RVE definitions to yield accurate effective properties is investigated. The assessment is conducted on a specific random microstructure, namely an elastic two-phase three-dimensional composite reinforced by randomly oriented fibers. Large-scale finite element simulations of material volumes of different sizes are performed on high-performance computational servers using parallel computing. The material volumes are virtually generated and subjected to periodic boundary conditions. It is shown that the most popular RVE definitions, based on convergence of the properties when increasing the material volume, yield inaccurate effective properties. A new RVE definition is introduced, based on the statistical variations of the properties computed from material volumes. It is shown to produce more accurate estimations of the effective properties. In addition, the new definition produces RVEs that are smaller than those of other RVE definitions; the number of finite element simulations necessary to determine the RVE is also substantially reduced. The computed effective properties are compared to those of analytical models. The comparisons are performed for a wide range of fiber aspect ratios (up to 120), property contrasts (up to 300) and volume fractions (only up to 20% due to computational limits). The Mori-Tanaka model and the two
NASA Astrophysics Data System (ADS)
Köhler, Mandy; Haendel, Falk; Epting, Jannis; Binder, Martin; Müller, Matthias; Huggenberger, Peter; Liedl, Rudolf
2015-04-01
Increasing groundwater temperatures have been observed in many urban areas, such as London (UK), Tokyo (Japan) and Basel (Switzerland). Elevated groundwater temperatures are a result of different direct and indirect thermal impacts. Groundwater heat pumps, building structures located within the groundwater and district heating pipes, among others, count as direct impacts, whereas indirect impacts result from the changed climate of urban regions (i.e. reduced wind, diffuse heat sources). A better understanding of the thermal processes within the subsurface is urgently needed by decision makers as a basis for selecting appropriate measures to reduce the ongoing increase of groundwater temperatures. However, often only limited temperature data are available, derived from measurements in conventional boreholes that differ in construction and instrumental setup, resulting in measurements that are often biased and not comparable. For three locations in the City of Basel, models were implemented to study selected thermal processes and to investigate whether heat-transport models can reproduce thermal measurements. To overcome the limitations of conventional borehole measurements, high-resolution depth-oriented temperature measurement systems have been introduced in the urban area of Basel. In total, seven devices were installed, with up to 16 sensors located in the unsaturated and saturated zones (0.5 to 1 m separation distance). Measurements were performed over a period of 4 years (ongoing) and provide sufficient data to set up and calibrate high-resolution local numerical heat transport models which allow studying selected local thermal processes. In a first setup, two- and three-dimensional models were created to evaluate the impact of the atmosphere boundary on groundwater temperatures (see EGU Poster EGU2013-9230: Modelling Strategies for the Thermal Management of Shallow Rural and Urban Groundwater bodies). For Basel
NASA Astrophysics Data System (ADS)
Apuani, T.; Merri, A.
2009-04-01
A stress-strain analysis of the Stromboli volcano was performed using a three-dimensional explicit finite difference numerical code (FLAC 3D, ITASCA, 2005) to evaluate the effects associated with the presence of magma pressure in the magmatic conduit and to foresee the evolution of the magmatic feeding complex. The simulations considered both the ordinary state of Stromboli, characterized by a partial fill of the active dyke with regular emission of gas and lava fountains, and the paroxysmal conditions observed during the March 2007 eruptive crisis, with the magma level in the active dyke reaching the topographic surface along the Sciara del Fuoco depression. The modeling contributes to identifying the most probable directions of propagation of new dykes, and the effects of their propagation on the stability of the volcanic edifice. The numerical model extends 6 x 6 x 2.6 km3, with a mesh resolution of 100 m, adjusting the grid to fit the shape of the object to be modeled. An elasto-plastic constitutive law was adopted and a homogeneous Mohr-Coulomb strength criterion was chosen for the volcanic cone, assuming one lithotechnical unit (an alternation of lava and breccia layers, the "lava-breccia unit"; Apuani et al. 2005). The dykes are represented as discontinuities of the grid and are modeled by means of interfaces. The magmatic pressure is imposed on the model as normal pressure applied on both sides of the interfaces. The magmastatic pressure was calculated as Pm = d·h, where d is the magma unit weight, assumed equal to 25 kN/m3, and h (m) is the height of the magma column. Overpressure values between 0 and 1 MPa were added to simulate the paroxysmal eruption. The simulation was implemented in successive stages, assuming the results of the previous stages as the condition for the next one. A progressive propagation of the dyke was simulated, in accordance with the stress conditions identified step by step, and in accordance with the evidence detected by in situ survey, and
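The magmastatic pressure formula quoted above (Pm = d·h, plus an optional 0-1 MPa overpressure) amounts to a one-line computation; the sketch below is illustrative only, with units converted to MPa:

```python
def magma_pressure_mpa(column_height_m, unit_weight_kn_m3=25.0,
                       overpressure_mpa=0.0):
    # Pm = d * h gives kPa for d in kN/m^3 and h in m; divide by 1000 for MPa.
    # The 0-1 MPa overpressure term simulates the paroxysmal eruption.
    return unit_weight_kn_m3 * column_height_m / 1000.0 + overpressure_mpa
```

For example, a 900 m magma column at 25 kN/m3 gives 22.5 MPa at the conduit base.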
Numerical methods for characterization of synchrotron radiation based on the Wigner function method
NASA Astrophysics Data System (ADS)
Tanaka, Takashi
2014-06-01
Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to their multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracies respectively, are first identified. Two quantitative indices, the GVE (group velocity error) and the MACCC (maximum absolute value of the cross correlation coefficient), derived from cross correlation analysis between a simulated signal and a reference waveform, are then proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy with respect to position and shape is quantitatively evaluated. To apply this proposed method to select an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. The proper element size for different element types, and the proper time step for different time integration schemes, are then selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
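The shape index (MACCC) can be sketched as the peak of the normalized cross-correlation between the simulated signal and the reference waveform. Mean removal and normalization by the signal energies are assumptions here, as the abstract does not give the exact formula:

```python
import numpy as np

def maccc(sim, ref):
    # Remove means, then compute the full cross-correlation sequence.
    sim = sim - sim.mean()
    ref = ref - ref.mean()
    cc = np.correlate(sim, ref, mode="full")
    # Normalize so a perfect shape match gives |coefficient| = 1.
    cc = cc / np.sqrt(np.sum(sim ** 2) * np.sum(ref ** 2))
    idx = np.argmax(np.abs(cc))
    lag = idx - (len(ref) - 1)  # sample offset of the best match
    return abs(cc[idx]), lag
```

The lag of the correlation peak gives the arrival-time offset from which a group velocity error (GVE) could be derived, given the propagation distance and time step.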
NASA Astrophysics Data System (ADS)
Guy, N.; Chen, S. S.; Zhang, C.
2014-12-01
A large number of observations were collected during the DYNAMO (Dynamics of the Madden-Julian Oscillation) field campaign in the tropical Indian Ocean during 2011. These data ranged from in-situ measurements of individual hydrometeors to regional precipitation distribution to large-scale precipitation and wind fields. Many scientific findings have been reported in the three years since project completion, leading to a better physical understanding of Madden-Julian Oscillation (MJO) initiation and providing insight toward a roadmap to better predictability. The NOAA P-3 instrumented aircraft was deployed from 11 November to 13 December 2011, embarking on 12 flights. This mobile platform provided high-resolution, high-quality in-situ and remotely sensed observations of the meso-γ to meso-α scale environment and offered coherent cloud dynamic and microphysical data in convective cloud systems that surface-based instruments were unable to reach. Measurements included cloud and precipitation microphysical observations via the Particle Measuring System 2D cloud and precipitation probes, aircraft altitude flux measurements, dropsonde vertical thermodynamic profiles, and 3D precipitation and wind field observations from the tail-mounted Doppler X-band weather radar. Existing satellite (infrared, visible, and water vapor) data allowed characterization of the large-scale environment. These comprehensive data have been combined into an easily accessible product, with special attention paid to comparing observations to future numerical simulations. The P-3 and French Falcon aircraft flew a coordinated mission, above and below the melting level, respectively, near Gan Island on 8 December 2011, acquiring coincident cloud microphysical and dynamics data. The Falcon aircraft is instrumented with a vertically pointing W-band radar, with a focus on ice microphysical properties. We present this case in greater detail to show the optimal coincident measurements. Additional
Yuan, Yijun; Yao, Yong; Guo, Bo; Yang, Yanfu; Tian, JiaJun; Yi, Miao
2015-03-28
A model of multiwavelength erbium-doped fiber laser (MEFL), which takes into account the impact of fiber attenuation on the four-wave-mixing (FWM), is proposed. Using this model, we numerically study the output characteristics of the MEFL based on FWM in a dispersion shift fiber with two seeding light signals (TSLS) and experimentally verify these characteristics. The numerical and experimental results show that the number of output channels can be increased with the increase of the erbium-doped fiber pump power. In addition, by decreasing the spacing of TSLS and increasing the power of TSLS, the number of output channels can be increased. However, when the power of TSLS exceeds a critical value, the number of output channels decreases. The results by numerical simulation are consistent with experimental observations from the MEFL.
Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others
2015-04-01
We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self-energy. Our scheme takes the zero-broadening limit of the Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self-energy expressions to perform the numerical convolution at different frequencies.
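A stripped-down version of this idea, with a piecewise-linear (rather than higher-order polynomial) numerator and analytic principal-value integration on each subinterval, might look as follows. This is a sketch of the general technique, not the authors' implementation:

```python
import numpy as np

def pv_integral(f_vals, x, x0):
    """Principal value of int f(x) / (x - x0) dx over [x[0], x[-1]].

    f is replaced by its piecewise-linear interpolant on the grid x and
    each piece is integrated analytically; x0 must not coincide with a
    grid node."""
    total = 0.0
    for xl, xr, fl, fr in zip(x[:-1], x[1:], f_vals[:-1], f_vals[1:]):
        beta = (fr - fl) / (xr - xl)   # slope of the linear piece
        alpha = fl - beta * xl         # intercept
        # int (alpha + beta*x)/(x - x0) dx
        #   = (alpha + beta*x0) * ln|x - x0| + beta * (x - x0);
        # the log term is the finite principal value when x0 lies inside.
        total += (alpha + beta * x0) * np.log(abs((xr - x0) / (xl - x0)))
        total += beta * (xr - xl)
    return total
```

Unlike the composite trapezoidal rule, this stays accurate when x0 lies inside the integration interval, which is precisely the singular regime the abstract addresses.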
Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography
Jbabdi, S.; Bellec, P.; Toro, R.; Daunizeau, J.; Pélégrini-Issac, M.; Benali, H.
2008-01-01
Using geodesics for inferring white matter fibre tracts from diffusion-weighted MR data is an attractive method for at least two reasons: (i) the method optimises a global criterion, and hence is less sensitive to local perturbations such as noise or partial volume effects, and (ii) the method is fast, allowing inference on a large number of connections in a reasonable computational time. Here, we propose an improved fast marching algorithm to infer geodesic paths. Specifically, this procedure is designed to achieve accurate front propagation in an anisotropic elliptic medium, such as DTI data. We evaluate the numerical performance of this approach on simulated datasets, as well as its robustness to local perturbations induced by fibre crossing. On real data, we demonstrate the feasibility of extracting geodesics to connect an extended set of brain regions. PMID:18299703
Yang, Zhou; Lowe, Chris D; Crowther, Will; Fenton, Andy; Watts, Phillip C; Montagnes, David J S
2013-02-01
We use strains recently collected from the field to establish cultures; then, through laboratory studies we investigate how among strain variation in protozoan ingestion and growth rates influences population dynamics and intraspecific competition. We focused on the impact of changing temperature because of its well-established effects on protozoan rates and its ecological relevance, from daily fluctuations to climate change. We show, first, that there is considerable inter-strain variability in thermal sensitivity of maximum growth rate, revealing distinct differences among multiple strains of our model species Oxyrrhis marina. We then intensively examined two representative strains that exhibit distinctly different thermal responses and parameterised the influence of temperature on their functional and numerical responses. Finally, we assessed how these responses alter predator-prey population dynamics. We do this first considering a standard approach, which assumes that functional and numerical responses are directly coupled, and then compare these results with a novel framework that incorporates both functional and numerical responses in a fully parameterised model. We conclude that: (i) including functional diversity of protozoa at the sub-species level will alter model predictions and (ii) including directly measured, independent functional and numerical responses in a model can provide a more realistic account of predator-prey dynamics. PMID:23151643
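A fully parameterised model with independent functional (ingestion) and numerical (growth) responses, as advocated above, can be sketched as a pair of rate functions feeding a simple time stepper. The Holling type II forms and all parameter values below are illustrative assumptions, not the study's fitted values:

```python
def functional_response(prey, i_max=1.0e4, k_i=50.0):
    # Ingestion rate per predator (assumed Holling type II saturation).
    return i_max * prey / (k_i + prey)

def numerical_response(prey, mu_max=1.0, k_mu=80.0, mu_min=-0.1):
    # Specific growth rate; independently parameterised, and allowed to
    # go negative at low prey density (starvation).
    return mu_min + (mu_max - mu_min) * prey / (k_mu + prey)

def step(prey, pred, r=0.7, dt=0.01):
    # One forward-Euler step of the decoupled predator-prey system.
    dprey = r * prey - functional_response(prey) * pred
    dpred = numerical_response(prey) * pred
    return prey + dt * dprey, pred + dt * dpred
```

The standard approach criticised in the abstract would instead derive growth from ingestion times a fixed conversion efficiency, rigidly coupling the two responses; here they can be measured and parameterised independently.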
ERIC Educational Resources Information Center
Kaufmann, Liane; Handl, Pia; Thony, Brigitte
2003-01-01
In this study, six elementary grade children with developmental dyscalculia were trained individually and in small group settings with a one-semester program stressing basic numerical knowledge and conceptual knowledge. All the children showed considerable and partly significant performance increases on all calculation components. Results suggest…
NASA Technical Reports Server (NTRS)
Nixon, Douglas D.
2009-01-01
Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
Brault, C; Gil, C; Boboc, A; Spuig, P
2011-04-01
On the Tore Supra tokamak, a far-infrared polarimeter diagnostic has been routinely used for diagnosing the current density by measuring the Faraday rotation angle. High measurement precision is needed to correctly reconstruct the current profile. To reach this precision, the electronics used to compute the phase and amplitude of the detected signals must have good resilience to noise in the measurement. In this article, the analogue cards' response to the noise coming from the detectors, and its impact on the Faraday angle measurements, is analyzed, and we present numerical methods to calculate the phase and the amplitude. These validations have been done using real signals acquired in Tore Supra and JET experiments. These methods have been developed for real-time use in the future numerical cards that will replace the present Tore Supra analogue ones. PMID:21678660
A numerical model for CO effect evaluation in HT-PEMFCs: Part 2 - Application to different membranes
NASA Astrophysics Data System (ADS)
Cozzolino, R.; Chiappini, D.; Tribioli, L.
2016-06-01
In this paper, a self-made numerical model of a high temperature polymer electrolyte membrane fuel cell (HT-PEMFC) is presented. In particular, we focus on the impact of CO poisoning on fuel cell performance and its influence on electrochemical modelling. More specifically, the aim of this work is to demonstrate the effectiveness of our zero-dimensional electrochemical model of HT-PEMFCs by comparing numerical and experimental results obtained from two different commercial membrane electrode assemblies: the first is based on polybenzimidazole (PBI) doped with phosphoric acid, while the second uses a PBI electrolyte with aromatic polyether polymers/copolymers bearing pyridine units, also doped with H3PO4. The analysis has been carried out considering both the effect of CO poisoning and the operating temperature for the two membranes.
NASA Astrophysics Data System (ADS)
Anania, Laura; Badalá, Antonio; D'Agata, Giuseppe
2016-01-01
In this work, attention is focused on the numerical simulation of experimental bending tests carried out on a total of six reinforced concrete (r.c.) plates, aimed at providing a basic understanding of their performance when strengthened by Fiber Reinforced Cementitious Matrix (FRCM) composites. Three of the plates were used as control specimens. The numerical simulation was carried out with the LUSAS software. A good correlation between the FE results and the test data, in both the load-deformation behavior and the failure load, was highlighted. This proves that the applied strengthening system provides an enhancement 2.5 times greater than the unreinforced case. Greater energy dissipation ability and a residual load-bearing capacity make the proposed system very useful in retrofitting, as well as in the strengthening of bridge structures. Based on the validation of the FE results in bending, the numerical analysis was also extended to characterize the behavior of this strengthening system in tension.
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.
ERIC Educational Resources Information Center
Mills, Myron L.
1988-01-01
A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)
How to accurately bypass damage
Broyde, Suse; Patel, Dinshaw J.
2016-01-01
Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203
Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations
Baglietto, Emilio
2006-07-01
An improved anisotropic eddy viscosity model has been developed for accurate prediction of the thermal-hydraulic performance of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to reproduce anisotropic phenomena, combined with an optimized low-Reynolds-number formulation based on Direct Numerical Simulation (DNS) to produce correct damping of the turbulent viscosity in the near-wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence-driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very small-scale secondary motion is responsible for the increased turbulence transport, which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare-bundle test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally, the applicability of the model to practical bundle calculations is evaluated through its application in the high-Reynolds form on coarse grids, with excellent results. (author)
NASA Astrophysics Data System (ADS)
Pournoury, M.; Zamiri, A.; Kim, T. Y.; Yurlov, V.; Oh, K.
2016-03-01
Capacitive touch sensor screens with metal materials have recently become qualified as substitutes for ITO; however, several obstacles still have to be overcome. One of the most important issues is the moiré phenomenon. The visibility problem of the metal mesh in a touch sensor module (TSM) is numerically considered in this paper. Based on the human-eye contrast sensitivity function (CSF), the moiré pattern of the TSM electrode mesh structure is simulated with MATLAB software for an 8 inch screen display in oblique view. The standard deviation of the moiré generated by the superposition of the electrode mesh and the screen image is calculated to find the optimal parameters providing minimum moiré visibility. A rectangular function is used to create the screen pixel array and mesh electrode. The filtered image in the frequency domain is obtained by multiplying the Fourier transform of the finite mesh pattern (the product of the screen pixels and mesh electrode) with the CSF calculated for three observer distances (L = 200, 300 and 400 mm). The discrepancy between analytical and numerical results is less than 0.6% for a 400 mm viewer distance. Moreover, in the oblique-view case, because the thickness of the finite film between the mesh electrodes and the screen is considered, different points of minimum standard deviation of the moiré pattern are predicted compared to normal view.
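The CSF-weighted visibility metric described above can be sketched as follows. The abstract does not name its CSF model; the Mannos-Sakrison form used here is a common stand-in, and the conversion of pattern frequencies to cycles/degree (and the square-pattern restriction) are simplifying assumptions:

```python
import numpy as np

def csf_mannos_sakrison(f):
    # Contrast sensitivity vs. spatial frequency f in cycles/degree.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def moire_visibility(pattern, pixels_per_degree):
    # Fourier transform the superposed mesh/screen pattern (square array
    # assumed), weight each frequency by the CSF, transform back, and
    # take the standard deviation as the moire-visibility measure.
    n = pattern.shape[0]
    f1d = np.fft.fftfreq(n) * pixels_per_degree      # cycles/degree
    fr = np.sqrt(f1d[:, None] ** 2 + f1d[None, :] ** 2)
    filtered = np.fft.ifft2(np.fft.fft2(pattern) * csf_mannos_sakrison(fr))
    return np.real(filtered).std()
```

Sweeping the mesh pitch or angle and minimizing this standard deviation mimics the abstract's parameter optimization; the viewer distance enters through the pixels-per-degree conversion.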
Kato, Kikuya; Uchida, Junji; Kukita, Yoji; Kumagai, Toru; Nishino, Kazumi; Inoue, Takako; Kimura, Madoka; Oba, Shigeyuki; Imamura, Fumio
2016-01-01
Monitoring of disease/therapeutic conditions is an important application of circulating tumor DNA (ctDNA). We devised numerical indices, based on ctDNA dynamics, for therapeutic response and disease progression. Samples from 52 lung cancer patients receiving EGFR-TKI treatment were prospectively collected, and ctDNA levels represented by the activating and T790M mutations were measured using deep sequencing. Typically, ctDNA levels decreased sharply upon initiation of EGFR-TKI; however, this did not occur in progressive disease (PD) cases. All 3 PD cases at initiation of EGFR-TKI were separated from the other 27 cases in a two-dimensional space generated by the ratio of the ctDNA levels before and after therapy initiation (mutation allele ratio in therapy, MART) and the average ctDNA level. For responses to various agents after disease progression, PD/stable disease cases were separated from partial response cases using MART (accuracy, 94.7%; 95% CI, 73.5-100). For disease progression, the initiation of ctDNA elevation (initial positive point) was compared with the onset of objective disease progression. In 11 out of 28 eligible patients, both occurred within a ±100-day range, suggesting detection of the same change in disease condition. Our numerical indices have potential applicability in clinical practice, pending confirmation with designed prospective studies. PMID:27381430
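The MART index described above is essentially a before/after ratio of ctDNA levels. A minimal sketch follows; the function name, the direction of the ratio (post-initiation over pre-initiation), and the normalization are illustrative assumptions, not the authors' exact definition.

```python
def mart(level_before, level_after):
    """Mutation Allele Ratio in Therapy (sketch): ratio of the ctDNA
    level after therapy initiation to the level before. Direction of
    the ratio is an assumption for illustration."""
    if level_before <= 0:
        raise ValueError("pre-therapy ctDNA level must be positive")
    return level_after / level_before

# A sharp post-initiation drop (a typical response) gives MART << 1,
# whereas progressive disease keeps MART near or above 1.
```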
Lane, J.W., Jr.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.
2000-01-01
The suitability of common-offset ground-penetrating radar (GPR) for detecting free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity compared with those created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate that it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections; nevertheless, subsurface heterogeneity, antenna coupling changes, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties, as demonstrated by the numerical and experimental results, suggests the potential of using GPR methods as a monitoring tool. GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
Accurate determination of characteristic relative permeability curves
NASA Astrophysics Data System (ADS)
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for the flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory-measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, the flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided the flow rate is sufficiently high and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, even for geologically complex models, when using accurate three-dimensional models.
Parashar, R.; Cushman, J.H.
2008-06-20
Microbial motility is often characterized by 'run and tumble' behavior, which consists of bacteria making sequences of runs followed by tumbles (random changes in direction). As a superset of Brownian motion, Levy motion seems to describe such a motility pattern. The Eulerian (Fokker-Planck) equation describing these motions is similar to the classical advection-diffusion equation except that the order of the highest derivative is fractional, α ∈ (0, 2]. The Lagrangian equation, driven by a Levy measure with drift, is stochastic and is employed to numerically explore the dynamics of microbes in a flow cell with sticky boundaries. The Eulerian equation is used to non-dimensionalize the parameters. The time spent sorbed on the boundaries is modeled as a random variable that can vary over a wide range of values. Salient features of the first passage time are studied with respect to the scaled parameters.
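A Lagrangian trajectory driven by a symmetric α-stable (Levy) measure, α ∈ (0, 2], can be sketched with the Chambers-Mallows-Stuck sampler. This is a simplified 1-D illustration without the drift or sticky boundaries of the study; all names are illustrative.

```python
import numpy as np

def levy_walk(alpha, n_steps, rng=None):
    """Cumulative sum of symmetric alpha-stable increments, alpha in
    (0, 2], generated with the Chambers-Mallows-Stuck method.
    alpha = 2 recovers Brownian motion; smaller alpha gives the
    heavy-tailed runs characteristic of run-and-tumble motility."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(-np.pi / 2, np.pi / 2, n_steps)   # uniform angle
    w = rng.exponential(1.0, n_steps)                 # exponential weight
    if alpha == 1.0:
        steps = np.tan(u)                             # Cauchy case
    else:
        steps = (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
                 * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))
    return np.cumsum(steps)
```

Sticky boundaries could be layered on by pausing the walk for a random sorbed time whenever the trajectory crosses a wall.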
Song, J. H.; Lee, J.; Lee, S.; Kim, E. Z.; Lee, N. K.; Lee, G. A.; Park, S. J.; Chu, A.
2013-12-16
In this paper, the laser forming characteristics of ultra-high-strength steel with an ultimate strength of 1200 MPa are investigated numerically and experimentally. FE simulation is conducted to identify the deformation response and to characterize the effects of laser power, beam diameter, and scanning speed on the bending angle of a square sheet part. The thermo-mechanical behavior during the straight-line heating process is presented in terms of temperature, stress, and strain. An experimental setup including a fiber laser with a maximum mean power of 3.0 kW is used in the experiments. The results of this work show that the laser power and the scanning speed can be easily adjusted, by controlling the line energy, for a bending operation on CP1180 steel sheets.
NASA Technical Reports Server (NTRS)
Atlas, R.; Halem, M.; Ghil, M.
1979-01-01
The present evaluation is concerned with (1) the significance of prognostic differences resulting from the inclusion of satellite-derived temperature soundings, (2) how specific differences between the SAT and NOSAT prognoses evolve, and (3) a comparison of two experiments using the Goddard Laboratory for Atmospheric Sciences general circulation model. The subjective evaluation indicates that the beneficial impact of sounding data is enhanced with increased resolution. It is suggested that satellite sounding data possess valuable information content which can at times correct gross analysis errors in data-sparse regions.
Jones, Steven P.; Tang, Xian-Liang; Guo, Yiru; Steenbergen, Charles; Lefer, David J.; Kukreja, Rakesh C.; Kong, Maiying; Li, Qianhong; Bhushan, Shashi; Zhu, Xiaoping; Du, Junjie; Nong, Yibing; Stowers, Heather L.; Kondo, Kazuhisa; Hunt, Gregory N.; Goodchild, Traci T.; Orr, Adam; Chang, Carlos C.; Ockaili, Ramzi; Salloum, Fadi N.; Bolli, Roberto
2014-01-01
Rationale: Despite four decades of intense effort and substantial financial investment, the cardioprotection field has failed to deliver a single drug that effectively reduces myocardial infarct size in patients. A major reason is insufficient rigor and reproducibility in preclinical studies. Objective: To develop a multicenter randomized controlled trial (RCT)-like infrastructure to conduct rigorous and reproducible preclinical evaluation of cardioprotective therapies. Methods and Results: With NHLBI support, we established the Consortium for preclinicAl assESsment of cARdioprotective therapies (CAESAR), based on the principles of randomization, investigator blinding, a priori sample size determination and exclusion criteria, appropriate statistical analyses, and assessment of reproducibility. To validate CAESAR, we tested the ability of ischemic preconditioning (IPC) to reduce infarct size in three species (at two sites per species): mice (n=22-25/group), rabbits (n=11-12/group), and pigs (n=13/group). During this validation phase, (i) we established protocols that gave similar results between Centers and confirmed that IPC significantly reduced infarct size in all species, and (ii) we successfully established a multi-center structure to support CAESAR's operations, including two surgical Centers for each species, a Pathology Core (to assess infarct size), a Biomarker Core (to measure plasma cardiac troponin levels), and a Data Coordinating Center, all with the oversight of an external Protocol Review and Monitoring Committee. Conclusions: CAESAR is operational, generates reproducible results, can detect cardioprotection, and provides a mechanism for assessing potential infarct-sparing therapies with a level of rigor analogous to multicenter RCTs. This is a revolutionary new approach to cardioprotection. Importantly, we provide state-of-the-art, detailed protocols ("CAESAR protocols") for measuring infarct size in mice, rabbits, and pigs in a manner that is
Accurate computation of Stokes flow driven by an open immersed interface
NASA Astrophysics Data System (ADS)
Li, Yi; Layton, Anita T.
2012-06-01
We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N2 grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.
ERIC Educational Resources Information Center
Siegler, Robert S.; Braithwaite, David W.
2016-01-01
In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…
NASA Astrophysics Data System (ADS)
Bever, Aaron J.; Harris, Courtney K.
2014-09-01
The Waipaoa River Sedimentary System in New Zealand, a focus site of the MARGINS Source-to-Sink program, contains both a terrestrial and marine component. Poverty Bay serves as the interface between the fluvial and oceanic portions of this dispersal system. This study used a three-dimensional hydrodynamic and sediment-transport numerical model, the Regional Ocean Modeling System (ROMS), coupled to the Simulated WAves Nearshore (SWAN) wave model to investigate sediment-transport dynamics within Poverty Bay and the mechanisms by which sediment travels from the Waipaoa River to the continental shelf. Two sets of model calculations were analyzed; the first represented a winter storm season, January-September, 2006; and the second an approximately 40 year recurrence interval storm that occurred on 21-23 October 2005. Model results indicated that hydrodynamics and sediment-transport pathways within Poverty Bay differed during wet storms that included river runoff and locally generated waves, compared to dry storms driven by oceanic swell. During wet storms the model estimated significant deposition within Poverty Bay, although much of the discharged sediment was exported from the Bay during the discharge pulse. Later resuspension events generated by Southern Ocean swell reworked and modified the initial deposit, providing subsequent pulses of sediment from the Bay to the continental shelf. In this manner, transit through Poverty Bay modified the input fluvial signal, so that the sediment characteristics and timing of export to the continental shelf differed from the Waipaoa River discharge. Sensitivity studies showed that feedback mechanisms between sediment-transport, currents, and waves were important within the model calculations.
NASA Astrophysics Data System (ADS)
Grell, Georg; Marrapu, Pallavi; Freitas, Saulo R.; Peckham, Steven E.
2015-04-01
A convective parameterization is applied and evaluated that may be used in high-resolution non-hydrostatic mesoscale models for weather and air quality prediction, as well as in modeling systems with unstructured, varying grid resolutions and for convection-aware simulations. This scheme is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). Interactions with aerosols have been implemented through a CCN-dependent autoconversion of cloud water to rain as well as an aerosol-dependent evaporation of cloud drops. Initial tests with this newly implemented aerosol approach showed plausible results, with a decrease in predicted precipitation in some areas caused by the changed autoconversion mechanism. Here we compare and evaluate performance over a 10-day period using the SAMBBA test case of the Working Group for Numerical Experimentation (WGNE) on aerosol impacts on numerical weather prediction. A shorter period is also compared to fully cloud-resolving simulations using WRF-Chem.
Seaman, N.L.; Guo, Z.; Ackerman, T.P.
1996-04-01
Predictions of cloud occurrence and vertical location from the Pennsylvania State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International Satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA but nested within the 36-km FDDA runs.
Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
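Schematically (our notation, not the authors' exact formulation), constant-coefficient hyperviscosity adds an iterated-Laplacian damping term, and the tensor variant replaces the scalar coefficient with a tensor V tied to the eigenvalues of the local element metric tensor:

```latex
% constant-coefficient hyperviscosity
\frac{\partial u}{\partial t} = \cdots \;-\; \nu\,\nabla^2\!\left(\nabla^2 u\right)
% tensor variant: each Laplacian becomes a tensor-weighted divergence-gradient
\frac{\partial u}{\partial t} = \cdots \;-\; \nabla\cdot\!\Big(\mathbf{V}\,\nabla\big(\nabla\cdot(\mathbf{V}\,\nabla u)\big)\Big)
```

In regions of uniform resolution, V reduces to a multiple of the identity and the two forms coincide, which is the matching property the abstract describes.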
Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.
2014-06-25
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable resolution grids using the shallow water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution dependent coefficient. For the spectral element method with variable resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that for regions of uniform resolution it matches the traditional constant coefficient hyperviscosity. With the tensor hyperviscosity the large scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Along with profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of CMMs, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
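The closure technique exploits the fact that, when the gear is remeasured in several rotated positions, the gear's own pitch deviations rotate with the gear while the machine's systematic errors stay fixed, so averaging over positions separates the two. A minimal sketch, assuming an idealized additive error model and a full set of n rotated positions; all names and the indexing convention are illustrative, not either institute's actual procedure.

```python
import numpy as np

def closure_separation(measurements):
    """Separate gear pitch deviations from machine errors by closure.
    measurements[j][i] is the reading for tooth i with the gear rotated
    by j teeth, modeled as M[j, i] = g[i] + m[(i + j) % n]. With a full
    rotation (k = n positions), averaging over j at a fixed machine
    angle cancels g, and averaging over j at a fixed tooth cancels m,
    each up to the common mean, which is removed at the end."""
    M = np.asarray(measurements, float)
    k, n = M.shape
    machine = np.empty(n)
    for a in range(n):
        # tooth (a - j) mod n sits at machine angle a in position j
        machine[a] = np.mean([M[j, (a - j) % n] for j in range(k)])
    gear = M.mean(axis=0)          # average over positions at fixed tooth
    mean = M.mean()
    return gear - mean, machine - mean
```

With k = n the recovery is exact under this model; with fewer positions it is only approximate.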
NASA Astrophysics Data System (ADS)
Pedretti, D.; Fernandez-Garcia, D.; Bolster, D.; Sanchez-Vila, X.; Benson, D.
2012-04-01
For risk assessment and adequate decision making regarding remediation strategies in contaminated aquifers, solute fate in the subsurface must be modeled correctly. In practical situations, hydrodynamic transport parameters are obtained by fitting procedures that aim to mathematically reproduce the solute breakthrough curves (BTCs) observed in the field during tracer tests. In recent years, several methods have been proposed (type curves, moments, nonlocal formulations), but none of them combines the two main characteristic effects of convergent flow tracer tests (which are the most widely used tests in practice): the intrinsic non-stationarity of the convergent flow to a well and the ubiquitous multiscale hydraulic heterogeneity of geological formations. These two effects have separately been accounted for by a number of methods that appear to work well. Here, we investigate both effects at the same time via numerical analysis. We focus on the influence that measurable statistical properties of the aquifers (such as the variance and the statistical geometry of correlation scales) have on the shape of BTCs measured at the pumping well during convergent flow tracer tests. We built synthetic multi-Gaussian 3D fields of heterogeneous hydraulic conductivity with variable statistics. A well is located in the center of the domain to reproduce a forced gradient towards it. Constant-head values are imposed on the boundaries of the domains, which have 251 × 251 × 100 cells. Injections of solutes take place by releasing particles at different distances from the well and using a random walk particle tracking scheme with a constant local coefficient of dispersivity. The results show that BTCs partially display the typical anomalous behavior that has commonly been referred to as the effect of heterogeneity and connectivity (early and late arrival times of solute differ from those predicted by local formulations). Among the most salient features, the behaviors of BTCs after the peak (the slope
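A random walk particle tracking scheme with a constant local dispersivity can be sketched in one dimension as advection plus a Gaussian dispersive step with variance 2·αL·|v|·Δt. This is a simplified illustration of the scheme's core idea, not the authors' 3-D implementation; all names are illustrative.

```python
import numpy as np

def particle_tracking(x0, velocity, n_steps, dt, alpha_L, rng=None):
    """1-D random walk particle tracking sketch: each step advects the
    particles with the local velocity and adds a dispersive kick with
    local dispersion D = alpha_L * |v|, i.e. std = sqrt(2 * D * dt)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, float).copy()
    for _ in range(n_steps):
        v = velocity(x)                          # local velocity field
        x = x + v * dt + np.sqrt(2.0 * alpha_L * np.abs(v) * dt) \
            * rng.standard_normal(x.shape)
    return x
```

A breakthrough curve is then obtained by histogramming the particles' arrival times at the well.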
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Case, Jonathan L.; LaFontaine, Frank J.; Kumar, Sujay V.
2012-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the SPoRT-MODIS GVF dataset on a land surface model (LSM) apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. In the West, higher latent heat fluxes prevailed, which enhanced the rates of evapotranspiration and soil moisture depletion in the LSM. By late summer and autumn, both the average sensible and latent heat fluxes increased in the West as a result of the more rapid soil drying and higher coverage of GVF. The impacts of the SPoRT GVF dataset on NWP were also examined for a single severe weather case study using the Weather Research and Forecasting (WRF) model. Two separate coupled LIS/WRF model simulations were made for the 17 July 2010 severe weather event in the Upper Midwest using the NCEP and SPoRT GVFs, with all other model parameters remaining the same. Based on the sensitivity results, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Case, Jonathan L.; Molthan, Andrew L.
2011-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center develops new products and techniques that can be used in operational meteorology. The majority of these products are derived from NASA polar-orbiting satellite imagery from the Earth Observing System (EOS) platforms. One such product is a Greenness Vegetation Fraction (GVF) dataset, which is produced from Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the new SPoRT-MODIS GVF dataset on land surface models apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. The second phase of the project is to examine the impacts of the SPoRT GVF dataset on NWP using the Weather Research and Forecasting (WRF) model. Two separate WRF model simulations were made for individual severe weather case days using the NCEP GVF (control) and SPoRT GVF (experimental), with all other model parameters remaining the same. Based on the sensitivity results in these case studies, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and lower direct surface heating, which typically resulted in lower (higher) predicted 2-m temperatures (2-m dewpoint temperatures). The opposite was true
NASA Astrophysics Data System (ADS)
Yucel, Ismail; Onen, Alper; Yilmaz, Koray; Gochis, David
2015-04-01
A fully-distributed, multi-physics, multi-scale hydrologic and hydraulic modeling system, WRF-Hydro, is used to assess the potential for skillful flood forecasting based on precipitation inputs derived from the Weather Research and Forecasting (WRF) model and the EUMETSAT Multi-sensor Precipitation Estimates (MPEs). Similar to past studies, it was found that WRF model precipitation forecast errors related to model initial conditions are reduced when the three-dimensional variational data assimilation (3DVAR) scheme is used in the WRF model simulations. The study then undertook a comparative evaluation of the impact of MPE versus WRF precipitation estimates, both with and without data assimilation, in driving WRF-Hydro simulated streamflow. Several flood events that occurred in the Black Sea region were used for testing and evaluation. Following model calibration, the WRF-Hydro system was capable of skillfully reproducing observed flood hydrographs in terms of the volume of runoff produced and the overall shape of the hydrograph. Streamflow simulation skill was significantly improved for those WRF model simulations in which storm precipitation was accurately depicted with respect to timing, location, and amount. Accurate streamflow simulations were more evident in WRF model simulations where the 3DVAR scheme was used than when it was not. Because of the substantial dry bias of MPE, streamflow derived using this precipitation product is in general very poor. Overall, root mean squared errors for runoff were reduced by 22.2% when hydrological model calibration was performed with WRF precipitation. Errors were reduced by 36.9% (relative to uncalibrated model performance) when both WRF model data assimilation and hydrological model calibration were utilized. Our results also indicated that, when assimilated precipitation and model calibration are used jointly, the calibrated parameters at the gauged sites can be transferred to ungauged neighboring basins.
NASA Technical Reports Server (NTRS)
Horn, Lyle H.; Koehler, Thomas L.; Whittaker, Linda M.
1988-01-01
To evaluate the effect of the FGGE satellite observing system, the following two data sets were compared by examining the available potential energy (APE) and extratropical cyclone activity within the entire global domain during the first Special Observing Period: (1) the complete FGGE IIIb set, which incorporates satellite soundings, and (2) a NOSAT set which incorporates only conventional data. The time series of the daily total APEs indicate that NOSAT values are larger than the FGGE values, although in the Northern Hemisphere the differences are negligible. Analyses of cyclone scale features revealed only minor differences between the Northern Hemisphere FGGE and NOSAT analyses. On the other hand, substantial differences were revealed in the two Southern Hemisphere analyses, where the satellite soundings apparently add detail to the FGGE set.
Evaluation of numerical models by FerryBox and Fixed Platform in-situ data in the southern North Sea
NASA Astrophysics Data System (ADS)
Haller, M.; Janssen, F.; Siddorn, J.; Petersen, W.; Dick, S.
2015-02-01
FerryBoxes installed on ships of opportunity (SoO) provide high-frequency surface biogeochemical measurements along selected tracks on a regular basis. Within the European FerryBox Community, several FerryBoxes are operated by different institutions. Here we present a comparison of model simulations of the North Sea with FerryBox temperature and salinity data from a transect along the southern North Sea, and a more detailed analysis at three positions located off the English east coast, at the Oyster Ground, and in the German Bight. In addition to the FerryBox data, data from a fixed platform of the MARNET network are used. Two operational hydrodynamic models have been evaluated for different time periods: results of BSHcmod v4 are analysed for 2009-2012, while simulations of FOAM AMM7 NEMO were available from the MyOcean data base for 2011 and 2012. The simulation of water temperatures is satisfactory; however, limitations of the models exist, especially near the coast in the southern North Sea, where both models underestimate salinity. Statistical errors differ between the models and the measured parameters: the root mean square error (rmse) of temperature is 0.92 K for BSHcmod v4 but only 0.44 K for AMM7. For salinity, BSHcmod is slightly better than AMM7 (0.98 and 1.1 psu, respectively). The study results reveal weaknesses of both models in terms of variability, absolute levels, and limited spatial resolution. In coastal areas, where simulation of the transition zone between the coast and the open ocean is still a demanding task for operational modelling, FerryBox data, combined with other observations at differing temporal and spatial scales, serve as an invaluable tool for model evaluation and optimization. The optimization of hydrodynamic models with high-frequency regional datasets, like the FerryBox data, is beneficial for their subsequent integration into ecosystem modelling.
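The rmse skill metric quoted in the abstract is straightforward to compute from paired model and observation series; a minimal sketch:

```python
import numpy as np

def rmse(model, obs):
    """Root mean square error between model output and observations,
    the skill metric used to compare BSHcmod and AMM7 above."""
    model = np.asarray(model, float)
    obs = np.asarray(obs, float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))
```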
Accurate Thermal Stresses for Beams: Normal Stress
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Pilkey, Walter D.
2003-01-01
Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.
Accurate Thermal Stresses for Beams: Normal Stress
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Pilkey, Walter D.
2002-01-01
Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.
Hibi, Yoshihiko; Tomigashi, Akira
2015-09-01
Numerical simulations that couple flow in a surface fluid with that in a porous medium are useful for examining problems of pollution that involve interactions among atmosphere, water, and groundwater, including saltwater intrusion along coasts. Coupled numerical simulations of such problems must consider both vertical flow between the surface fluid and the porous medium and complicated boundary conditions at their interface. In this study, a numerical simulation method coupling the Navier-Stokes equations for surface fluid flow and the Darcy equations for flow in a porous medium was developed. Then, the basic ability of the coupled model to reproduce (1) the drawdown of a surface fluid observed in square-pillar experiments, using pillars filled with only fluid or with fluid and a porous medium, and (2) the migration of saltwater (salt concentration 0.5%) in the porous medium using the pillar filled with fluid and a porous medium was evaluated. Simulations that assumed slippery walls reproduced the experimental results well, with drawdowns of 10-30 cm, when the pillars were filled with packed sand, gas, and water. Moreover, in the simulation of saltwater infiltration by the method developed in this study, velocity was precisely reproduced, because the experimental salt concentration in the porous medium after saltwater infiltration was similar to that obtained in the simulation. Furthermore, conditions across the boundary between the porous medium and the surface fluid were satisfied in these numerical simulations of square-pillar experiments in which vertical flow predominated. Similarly, the velocity obtained by the simulation for a system coupling flow in surface fluid with that in a porous medium when horizontal flow predominated satisfied the conditions across the boundary. Finally, it was confirmed that the present simulation method was able to simulate a practical-scale surface fluid and porous medium system. All of these numerical simulations, however, required a great deal of
NASA Astrophysics Data System (ADS)
Tecle, Amanuel Sebhatu
Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular is unacceptably severe. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for wind-resistant design of low-rise buildings and wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e. full-scale at the Wall of Wind (WoW) and small-scale at a Boundary Layer Wind Tunnel (BLWT), was combined with a Computational Fluid Dynamics (CFD) approach. This provided new capability to assess wind pressures realistically on internal volumes ranging from the small spaces formed between roof tiles and the deck, to attics, to room partitions. Effects of sudden breaching, existing dominant openings on building envelopes, and compartmentalization of the building interior on the IP were systematically investigated. Results of this research indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction for low-wind-speed testing facilities was necessary; for example, a building without volume correction experienced a response four times faster and exhibited 30--40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated, whereas one-sided openings aggravated, the IP; (iii) larger dominant openings exhibited a higher IP on the building envelope, and an off-center opening on the wall exhibited 30--40% higher IP than center-located openings; (iv) compartmentalization amplified the intensity of IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. The study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for Ainlet/Aoutlet>1 compared to cases where Ainlet
Reliable numerical computation in an optimal output-feedback design
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is a part of a design algorithm for optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach through the use of an accurate Pade series approximation does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
Nonspinning numerical relativity waveform surrogates: Building the model
NASA Astrophysics Data System (ADS)
Galley, Chad
2015-04-01
Simulating binary black hole coalescences involves solving Einstein's equations with large-scale computing resources that can take months to complete for a single numerical solution. This engenders a computationally intractable problem for multiple-query applications related to parameter space exploration, data analysis for gravitational wave detectors like LIGO, and semi-analytical waveform fits. I discuss how reduced order modeling techniques are used to build accurate surrogates that can be evaluated quickly in place of numerically solving Einstein's equations for generating gravitational waveforms of nonspinning binary black hole coalescences. To within error, the surrogate can model all modes available from a numerical simulation including, for example, troublesome modes such as the (3,2) mode and memory modes. A companion talk will cover quantifying the best surrogate model's errors. The results of this work represent a significant advance by making it possible to use numerical relativity waveforms for multiple-query applications.
Sheet Hydroforming Process Numerical Model Improvement Through Experimental Results Analysis
NASA Astrophysics Data System (ADS)
Gabriele, Papadia; Antonio, Del Prete; Alfredo, Anglani
2010-06-01
The increasing application of numerical simulation in the metal forming field has helped engineers solve problems one after another to manufacture qualified formed products in less time [1]. Accurate simulation results are fundamental for the tooling and product designs. The wide application of numerical simulation is encouraging the development of highly accurate simulation procedures to meet industrial requirements. Many factors can influence the final simulation results, and many studies have been carried out on materials [2], yield criteria [3], plastic deformation [4,5], process parameters [6] and their optimization. In order to develop a reliable hydromechanical deep drawing (HDD) numerical model, the authors have carried out specific activities based on the evaluation of the effective stiffness of the blankholder structure [7]. In this paper, after an appropriate tuning phase of the blankholder force distribution, the experimental activity has been taken into account to improve the accuracy of the numerical model. In the first phase, the effective capability of the blankholder structure to transfer the applied load given by the hydraulic actuators to the blank has been explored. This phase ended with the definition of an appropriate subdivision of the blankholder active surface in order to take into account the effective pressure map obtained for the given loads configuration. In the second phase the numerical results obtained with the developed subdivision have been compared with the experimental data of the studied model. The numerical model has then been improved, finding the best solution for the blankholder force distribution.
Accurate near-field calculation in the rigorous coupled-wave analysis method
NASA Astrophysics Data System (ADS)
Weismann, Martin; Gallagher, Dominic F. G.; Panoiu, Nicolae C.
2015-12-01
The rigorous coupled-wave analysis (RCWA) is one of the most successful and widely used methods for modeling periodic optical structures. It yields fast convergence of the electromagnetic far-field and has been adapted to model various optical devices and wave configurations. In this article, we investigate the accuracy with which the electromagnetic near-field can be calculated by using RCWA and explain the observed slow convergence and numerical artifacts from which it suffers, namely unphysical oscillations at material boundaries due to the Gibbs phenomenon. In order to alleviate these shortcomings, we also introduce a mathematical formulation for accurate near-field calculation in RCWA, for one- and two-dimensional straight and slanted diffraction gratings. This accurate near-field computational approach is tested and evaluated for several representative test-structures and configurations in order to illustrate the advantages provided by the proposed modified formulation of the RCWA.
Disparity fusion using depth and stereo cameras for accurate stereo correspondence
NASA Astrophysics Data System (ADS)
Jang, Woo-Seok; Ho, Yo-Sung
2015-03-01
Three-dimensional (3D) content creation has received a lot of attention due to the numerous successes of 3D entertainment. Accurate stereo correspondence is necessary for efficient 3D content creation. In this paper, we propose a disparity map estimation method based on stereo correspondence. The proposed system utilizes depth and stereo camera sets. While the stereo set carries out disparity estimation, depth camera information is projected to the left and right camera positions using 3D transformation, and upsampling is processed in accordance with the image size. The upsampled depth is used for obtaining disparity data of the left and right positions. Finally, disparity data from each depth sensor are combined. In order to evaluate the proposed method, we applied view synthesis from the acquired disparity map. The experimental results demonstrate that our method produces more accurate disparity maps compared to the conventional approaches which use single depth sensors.
NASA Astrophysics Data System (ADS)
Reiss, Georg; Frandsen, Henrik Lund; Persson, Åsa Helen; Weiß, Christian; Brandstätter, Wilhelm
2015-11-01
Metal-supported Solid Oxide Fuel Cells (SOFCs) are developed as a durable and cost-effective alternative to the state-of-the-art cermet SOFCs. This novel technology offers new opportunities but also new challenges. One of them is corrosion of the metallic support, which will decrease the long-term performance of the SOFCs. In order to understand the implications of the corrosion on the mass-transport through the metallic support, a corrosion model is developed that is capable of determining the change of the porous microstructure due to oxide scale growth. The model is based on high-temperature corrosion theory, and the required model parameters can be retrieved by standard corrosion weight gain measurements. The microstructure is reconstructed from X-ray computed tomography, and converted into a computational grid. The influence of the changing microstructure on the fuel cell performance is evaluated by determining an effective diffusion coefficient and the equivalent electrical area specific resistance (ASR) due to diffusion over time. It is thus possible to assess the applicability (in terms of corrosion behaviour) of potential metallic supports without costly long-term experiments. In addition, an analytical framework is proposed, which is capable of estimating the porosity, tortuosity and the corresponding ASR based on weight gain measurements.
D`Agnese, F.A.; Faunt, C.C.; Turner, A.K.; Hill, M.C.
1997-12-31
Yucca Mountain is being studied as a potential site for a high-level radioactive waste repository. In cooperation with the U.S. Department of Energy, the U.S. Geological Survey is evaluating the geologic and hydrologic characteristics of the ground-water system. The study area covers approximately 100,000 square kilometers between lat 35°N., long 115°W. and lat 38°N., long 118°W. and encompasses the Death Valley regional ground-water flow system. Hydrology in the region is a result of both the arid climatic conditions and the complex geology. The regional ground-water flow system is described as dominated by interbasinal flow and may be conceptualized as having two main components: a series of relatively shallow and localized flow paths that are superimposed on deeper regional flow paths. A significant component of the regional ground-water flow is through a thick Paleozoic carbonate rock sequence. Throughout the regional flow system, ground-water flow is probably controlled by extensive and prevalent structural features that result from regional faulting and fracturing. Hydrogeologic investigations over a large and hydrogeologically complex area impose severe demands on data management. This study utilized geographic information systems and geoscientific information systems to develop, store, manipulate, and analyze regional hydrogeologic data sets describing various components of the ground-water flow system.
Accurate momentum transfer cross section for the attractive Yukawa potential
Khrapak, S. A.
2014-04-15
An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
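The abstract's remark about the median function can be illustrated with a small sketch. Since minmod(x, y) == median(0, x, y), a standard monotonized-central slope constraint (an illustrative stand-in, not the paper's exact constraint or its slope-steepening technique) can be written entirely with a three-argument median:

```python
def median3(a, b, c):
    """Median of three numbers; note that minmod(x, y) == median3(0, x, y)."""
    return max(min(a, b), min(max(a, b), c))

def mc_slope(dl, dr):
    """Monotonicity-constrained cell slope from one-sided differences dl, dr.

    Classic monotonized-central constraint, expressed purely via median3;
    an illustrative stand-in, not the paper's exact constraint.
    """
    central = 0.5 * (dl + dr)
    return median3(0.0, central, median3(0.0, 2.0 * dl, 2.0 * dr))

# On a smooth monotone profile the central slope survives; across an
# extremum (opposite-signed differences) the slope is zeroed.
print(mc_slope(1.0, 1.0), mc_slope(1.0, -1.0))  # prints: 1.0 0.0
```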
NASA Astrophysics Data System (ADS)
Dizon, John Ryan C.; Gorospe, Alking B.; Shin, Hyung-Seop
2014-05-01
Rare-earth-Ba-Cu-O (REBCO) based coated conductors (CCs) are now being used for electric device applications. For coil-based applications such as motors, generators and magnets, the CC tape needs to have robust mechanical strength along both the longitudinal and transverse directions. The CC tape in these coils is subjected to transverse tensile stresses during cool-down and operation, which results in delamination within and between constituent layers. In this study, in order to explain the behaviour observed in the evaluation of c-axis delamination strength in Cu-stabilized GdBCO CC tapes by anvil tests, numerical analysis of the mechanical stress distribution within the CC tape has been performed. The upper anvil size was varied in the analysis to understand the effect of anvil size on stress distribution within the multilayered CC tape, which is closely related to the delamination strength, delamination mode and delamination sites that were experimentally observed. The numerical simulation results showed that, when an anvil size covering the whole tape width was used, the REBCO coating film was subjected to the largest stress, which could result in low mechanical delamination and electromechanical delamination strengths. Meanwhile, when smaller-sized anvils were used, the copper stabilizer layer would experience the largest stress among all the constituent layers of the CC tape, which could result in higher mechanical and electromechanical delamination strengths, as well as high scattering of both of these delamination strengths. As a whole, the numerical simulation results could explain the damage evolution observed in CC tapes tested under transverse tensile stress, as well as the transverse tensile stress response of the critical current, Ic.
Accurate density functional thermochemistry for larger molecules.
Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.
1997-06-20
Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔHf° (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal mol⁻¹).
NASA Astrophysics Data System (ADS)
McNamara, Roger P.; Eagle, C. D.
1992-08-01
Planetary Observer High Accuracy Orbit Prediction Program (POHOP), an existing numerical integrator, was modified with the solar and lunar formulae developed by T.C. Van Flandern and K.F. Pulkkinen to provide the accuracy required to evaluate long-term orbit characteristics of objects in the geosynchronous region. The orbit of a 1000 kg class spacecraft is numerically integrated over 50 years using both the original and the more accurate solar and lunar ephemerides methods. Results of this study demonstrate that, over the long term, for an object located in the geosynchronous region, the position computed with the more accurate solar and lunar ephemerides differs significantly from that obtained using the current POHOP ephemeris.
NASA Astrophysics Data System (ADS)
Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi
2016-04-01
This study evaluated the effect of coupling between the Faraday and Cotton-Mouton effects on the measurement signal of the Dodel-Kunz method, which uses a counter-circularly-polarized probing laser for measuring the Faraday effect. When the coupling is small (the Faraday effect is dominant and the characteristic eigenmodes are approximately circularly polarized), the measurement signal can be expressed algebraically, and it is shown that the finite effect of the coupling is still significant. When the Faraday effect is not dominant, a numerical calculation is necessary. The numerical calculation under an ITER-like condition (Bt = 5.3 T, Ip = 15 MA, a = 2 m, ne = 10²⁰ m⁻³ and λ = 119 μm) showed that the difference between the pure Faraday rotation and the measurement signal of the Dodel-Kunz method was of the order of one degree, which exceeds the allowable error of the ITER poloidal polarimeter. In conclusion, similar to other polarimeter techniques, the Dodel-Kunz method is not free from the coupling between the Faraday and Cotton-Mouton effects.
NASA Astrophysics Data System (ADS)
Yucel, I.; Onen, A.; Yilmaz, K. K.; Gochis, D. J.
2015-04-01
A fully-distributed, multi-physics, multi-scale hydrologic and hydraulic modeling system, WRF-Hydro, is used to assess the potential for skillful flood forecasting based on precipitation inputs derived from the Weather Research and Forecasting (WRF) model and the EUMETSAT Multi-sensor Precipitation Estimates (MPEs). Similar to past studies, it was found that WRF model precipitation forecast errors related to model initial conditions are reduced when the three-dimensional variational data assimilation (3DVAR) scheme is used in the WRF model simulations. A comparative evaluation of the impact of MPE versus WRF precipitation estimates, both with and without data assimilation, in driving WRF-Hydro simulated streamflow is then made. Ten rainfall-runoff events that occurred in the Black Sea Region were used for testing and evaluation. With the availability of streamflow data across rainfall-runoff events, the calibration is performed only on the Bartin sub-basin using two events, and the calibrated parameters are then transferred to three neighboring ungauged sub-basins in the study area. The rest of the events from all sub-basins are then used to evaluate the performance of the WRF-Hydro system with the calibrated parameters. Following model calibration, the WRF-Hydro system was capable of skillfully reproducing observed flood hydrographs in terms of the volume of the runoff produced and the overall shape of the hydrograph. Streamflow simulation skill was significantly improved for those WRF model simulations where storm precipitation was accurately depicted with respect to timing, location and amount. Accurate streamflow simulations were more evident in WRF model simulations where the 3DVAR scheme was used compared to when it was not used. Because of the substantial dry bias of MPE as compared with surface rain gauges, streamflow derived using this precipitation product is in general very poor. Overall, root mean squared errors for runoff were reduced by
NASA Astrophysics Data System (ADS)
Gopalan, Avinash; Samal, M. K.; Chakravartty, J. K.
2015-10-01
In this work, fracture behaviour of 20MnMoNi55 reactor pressure vessel (RPV) steel in the ductile to brittle transition regime (DBTT) is characterised. Compact tension (CT) and single edged notched bend (SENB) specimens of two different sizes were tested in the DBTT regime. Reference temperature 'T0' was evaluated according to the ASTM E1921 standard. The effect of size and geometry on the T0 was studied and T0 was found to be lower for SENB geometry. In order to understand the fracture behaviour numerically, finite element (FE) simulations were performed using Beremin's model for cleavage and Rousselier's model for ductile failure mechanisms. The simulated fracture behaviour was found to be in good agreement with the experiment.
NASA Astrophysics Data System (ADS)
Malinowski, Arkadiusz; Takeuchi, Takuya; Chen, Shang; Suzuki, Toshiya; Ishikawa, Kenji; Sekine, Makoto; Hori, Masaru; Lukasiak, Lidia; Jakubowski, Andrzej
2013-07-01
This paper describes a new, fast, and case-independent technique for sticking coefficient (SC) estimation based on the pallet for plasma evaluation (PAPE) structure and numerical analysis. Our approach does not require a complicated structure, apparatus, or time-consuming measurements, but offers high reliability of data and high flexibility. Thermal analysis is also possible. This technique has been successfully applied to the estimation of the very low SC of hydrogen radicals on chemically amplified ArF 193 nm photoresist (the main goal of this study). The upper bound of our technique has been determined by investigation of the SC of fluorine radicals on polysilicon (at elevated temperature). Sources of estimation error and ways of reducing it have also been discussed. The results of this study give an insight into the process kinetics; not only are they helpful for better process understanding, but they may also serve as parameters in a phenomenological model development for predictive modelling of etching for ultimate CMOS topography simulation.
NASA Astrophysics Data System (ADS)
Kajikawa, K.; Funaki, K.; Shikimachi, K.; Hirano, N.; Nagaya, S.
2010-11-01
AC losses in a superconductor strip are numerically evaluated by means of a finite element method formulated with a current vector potential. The expressions of AC losses in an infinite slab that corresponds to a simple model of infinitely stacked strips are also derived theoretically. It is assumed that the voltage-current characteristics of the superconductors are represented by Bean’s critical state model. The typical operation pattern of a Superconducting Magnetic Energy Storage (SMES) coil with direct and alternating transport currents in an external AC magnetic field is taken into account as the electromagnetic environment for both the single strip and the infinite slab. By using the obtained results of AC losses, the influences of the transport currents on the total losses are discussed quantitatively.
Accurate orbit propagation with planetary close encounters
NASA Astrophysics Data System (ADS)
Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca
2015-08-01
We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem
NASA Astrophysics Data System (ADS)
Auteri, F.; Quartapelle, L.; Vigevano, L.
2002-08-01
This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.
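The Gauss-Legendre evaluation of the nonlinear terms mentioned above rests on standard quadrature; a minimal sketch of a generic n-point Gauss-Legendre rule (not the article's Galerkin-Legendre solver itself):

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # affine map to [a, b]
    return 0.5 * (b - a) * float(np.sum(w * f(t)))

# An n-point rule integrates polynomials of degree <= 2n - 1 exactly,
# so 4 points already suffice for t**6 (exact value 1/7).
print(gauss_legendre(lambda t: t**6, 0.0, 1.0, 4))
```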
Numerical simulation of tip clearance effects in turbomachinery
Basson, A.; Lakshminarayana, B.
1995-07-01
The numerical formulation developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation, and is applicable to both viscous and inviscid flows. The value of this artificial dissipation is optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately.
ERIC Educational Resources Information Center
Sozio, Gerry
2009-01-01
Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
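The three rules named above are compact enough to state directly. The following generic sketch (not the article's derivation) implements each on n subintervals of [a, b]:

```python
def midpoint(f, a, b, n):
    """Midpoint rule: sample f at the center of each subinterval."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    """Trapezoidal rule: average the endpoint values of each subinterval."""
    h = (b - a) / n
    return h * (0.5 * f(a)
                + sum(f(a + i * h) for i in range(1, n))
                + 0.5 * f(b))

def simpson(f, a, b, n):
    """Simpson's rule: weights 1, 4, 2, 4, ..., 4, 1; n must be even."""
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3
```

The midpoint and trapezoidal errors shrink as O(h²), while Simpson's rule is O(h⁴) and integrates cubics exactly.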
NASA Technical Reports Server (NTRS)
Baker, John G.
2009-01-01
Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.
Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods
NASA Technical Reports Server (NTRS)
Atkins, Harold L.; Pampell, Alyssa
2011-01-01
A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.
How accurate are the weather forecasts for Bierun (southern Poland)?
NASA Astrophysics Data System (ADS)
Gawor, J.
2012-04-01
Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It combines natural curiosity about everyday weather with scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been undertaken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research an automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models: COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System), developed by the Naval Research Laboratory, Monterey, USA, which runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland, and COSMO (the Consortium for Small-scale Modelling), used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for the two considered time ranges? 3) Which precipitation threshold is the most predictable? 4) Why
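The verification statistics mentioned above (bias for continuous variables; hit rate and false alarm ratio from a 2x2 contingency table) can be sketched as follows. Function names are illustrative, not from the project:

```python
def mean_error(forecast, observed):
    """Bias: mean of (forecast - observed) over paired values."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(observed)

def contingency_scores(forecast, observed, threshold):
    """Hit rate and false alarm ratio for an event defined as
    value >= threshold, from a 2x2 contingency table."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        fc, ob = f >= threshold, o >= threshold
        if fc and ob:
            hits += 1
        elif not fc and ob:
            misses += 1
        elif fc and not ob:
            false_alarms += 1
    hit_rate = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return hit_rate, far
```

For example, with forecasts [1, 0, 2, 0] and observations [1, 1, 0, 0] at a threshold of 1, there is one hit, one miss and one false alarm, so both the hit rate and the false alarm ratio are 0.5.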
Predict amine solution properties accurately
Cheng, S.; Meisen, A.; Chakma, A.
1996-02-01
Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form. However, such data are not convenient to use in computer-based calculations. Developed equations allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
NASA Astrophysics Data System (ADS)
Nakakura, M.; Ohtake, M.; Matsubara, K.; Yoshida, K.; Cho, H. S.; Kodama, T.; Gokon, N.
2016-05-01
In 2014, Niigata University and the Institute of Applied Energy developed a point-concentration receiver-evaluation system using a silicon carbide (SiC) honeycomb and a three-dimensional simulation method. The system includes several improvements over its forerunner, including the ability to increase/decrease the power-on aperture (POA), air-mass flow-rate (AMF) and the height of the focal surface. This paper focuses on the results of tests using the improved receiver-evaluation system at a focal height of 1600 mm. The maximum outlet air temperature reached as high as 800 K, but exerted no untoward effects on the system. Receiver efficiency ranged between 45% and 70%, depending on POA. A numerical, 3D method of analysis was created at the same time, in order to analyze temperature and flow distribution in detail. This simulation, based on the dual-cell approach, reproduced the thermal non-equilibrium between the solid and fluid domains of the receiver material. The most interesting feature of this simulation is that, because it includes upper and lower computational domains, it can be used to analyze the influence of both inward- and outward-flowing receiver materials.
Bradfield, A.D.
1986-01-01
Coal-mining impacts on Smoky Creek, eastern Tennessee were evaluated using water quality and benthic invertebrate data. Data from mined sites were also compared with water quality and invertebrate fauna found at Crabapple Branch, an undisturbed stream in a nearby basin. Although differences in water quality constituent concentrations and physical habitat conditions at sampling sites were apparent, commonly used measures of benthic invertebrate sample data such as number of taxa, sample diversity, number of organisms, and biomass were inadequate for determining differences in stream environments. Clustering algorithms were more useful in determining differences in benthic invertebrate community structure and composition. Normal (collections) and inverse (species) analyses based on presence-absence data of species of Ephemeroptera, Plecoptera, and Trichoptera were compared using constancy, fidelity, and relative abundance of species found at stations with similar fauna. These analyses identified differences in benthic community composition due to seasonal variations in invertebrate life histories. When data from a single season were examined, sites on tributary streams generally clustered separately from sites on Smoky Creek. These analyses, combined with differences in water quality, stream size, and substrate characteristics between tributary sites and the more degraded main-stem sites, indicated that numerical classification of invertebrate data can provide discharge-independent information useful in rapid evaluations of in-stream environmental conditions. (Author's abstract)
NASA Astrophysics Data System (ADS)
Castellanza, Riccardo; Fernandez Merodo, Josè Antonio; di Prisco, Claudio; Frigerio, Gabriele; Crosta, Giovanni B.; Orlandi, Gianmarco
2013-04-01
The aim of the study is the assessment of stability conditions for an abandoned gypsum mine (Bologna, Italy). Mining was carried out until the end of the 1970s by the room and pillar method. During mining a karst cave was crossed and karstic waters flowed into the mine. As a consequence, the lower level of the mine is completely flooded, and portions of the mining levels show critical conditions and are structurally prone to instability. Buildings and infrastructures are located above the first and second level, and a large portion of the area below the mine area, just above the Savena river, is urbanised. Gypsum geomechanical properties change over time; water, or even air humidity, dissolves or weakens gypsum pillars, leading progressively to collapse. The mine is located in macro-crystalline gypsum beds belonging to the Messinian Gessoso Solfifera Formation. Selenitic gypsum beds are interlayered with centimetre- to metre-thick shale layers. In order to evaluate the risk related to the collapse of the flooded level (level 3), a deterministic approach based on 3D numerical analyses has been considered. The entire abandoned mine system up to the ground surface has been generated in 3D. The considered critical scenario implies the collapse of the pillars and roof of the flooded level 3. In a first step, a sequential collapse starting from the most critical pillar has been simulated by means of a 3D finite element code. This allowed the definition of the subsidence basin at the ground surface and the interaction with the buildings in terms of ground displacements. The 3D numerical analyses have been performed with an elasto-perfectly plastic constitutive model. In a second step, the effect of a simultaneous collapse of the entire level 3 has been considered in order to evaluate the risk of flooding due to water outflow from the mine system. Using a 3D CFD (Computational Fluid Dynamics) finite element code the collapse of level 3 has been simulated and the volume of
NASA Astrophysics Data System (ADS)
RUNG, J.
2013-12-01
In this study, a series of rainfall-stability analyses were performed to simulate the failure mechanism and the function of remediation works of the down slope of the T-16 tower pier, Mao-Kong gondola (or T-16 Slope), at the hillside of Taipei City using the two-dimensional finite element method. The failure mechanism of T-16 Slope was simulated using the rainfall hyetograph of Typhoon Jang-Mi in 2008, based on the field investigation data, monitoring data, soil/rock mechanical testing data and detailed design plans of the remediation works. Eventually, the numerical procedures and various input parameters in the analysis were verified by comparing the numerical results with the field observations. In addition, 48-hour design rainfalls corresponding to 5, 10, 25 and 50 year return periods were prepared using the 20 years of rainfall data from the Mu-Zha rainfall observation station, Central Weather Bureau, for the rainfall-stability analyses of T-16 Slope to inspect the effect of the compound stabilization works on the overall stability of the slope. At T-16 Slope, not counting the longitudinal and transverse drainages on the ground surface, a total of four types of stabilization works were installed to stabilize the slope. From the slope top to the slope toe, the stabilization works of T-16 Slope consist of an RC retaining wall with micro-pile foundation at the upper segment, earth anchors at the upper-middle segment, soil nailing at the middle segment and retaining piles at the lower segment of the slope. The effect of each individual stabilization work on the slope stability under rainfall conditions was examined and evaluated by raising the field groundwater level.
NASA Astrophysics Data System (ADS)
Mohanty, S.; Jha, Madan K.; Kumar, Ashwani; Panda, D. K.
2013-07-01
In view of worldwide concern for the sustainability of groundwater resources, basin-wide modeling of groundwater flow is essential for the efficient planning and management of groundwater resources in a groundwater basin. The objective of the present study is to evaluate the performance of finite difference-based numerical model MODFLOW and the artificial neural network (ANN) model developed in this study in simulating groundwater levels in an alluvial aquifer system. Calibration of the MODFLOW was done by using weekly groundwater level data of 2 years and 4 months (February 2004 to May 2006) and validation of the model was done using 1 year of groundwater level data (June 2006 to May 2007). Calibration of the model was performed by a combination of trial-and-error method and automated calibration code PEST with a mean RMSE (root mean squared error) value of 0.62 m and a mean NSE (Nash-Sutcliffe efficiency) value of 0.915. Groundwater levels at 18 observation wells were simulated for the validation period. Moreover, artificial neural network models were developed to predict groundwater levels in 18 observation wells in the basin one time step (i.e., week) ahead. The inputs to the ANN model consisted of weekly rainfall, evaporation, river stage, water level in the drain, pumping rate of the tubewells and groundwater levels in these wells at the previous time step. The time periods used in the MODFLOW were also considered for the training and testing of the developed ANN models. Out of the 174 data sets, 122 data sets were used for training and 52 data sets were used for testing. The simulated groundwater levels by MODFLOW and ANN model were compared with the observed groundwater levels. It was found that the ANN model provided better prediction of groundwater levels in the study area than the numerical model for short time-horizon predictions.
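The two calibration metrics quoted above, RMSE and the Nash-Sutcliffe efficiency (NSE), are standard in hydrology and can be sketched directly (a generic implementation, not the study's code):

```python
import math

def rmse(sim, obs):
    """Root mean squared error between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the model
    is no better than simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```

A model that always predicts the observed mean scores NSE = 0, which is why the study's mean NSE of 0.915 indicates a well-calibrated MODFLOW model.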
Accurate stress resultants equations for laminated composite deep thick shells
Qatu, M.S.
1995-11-01
This paper derives accurate equations for the normal and shear force as well as bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
NASA Astrophysics Data System (ADS)
Guzella, Matheus dos Santos; Cabezas-Gómez, Luben; da Silva, José Antônio; Maia, Cristiana Brasil; Hanriot, Sérgio de Morais
2016-02-01
This study presents a numerical evaluation of the influence of several void fraction correlations on the thermal-hydraulic behavior of wire-on-tube condensers operating with HFC-134a. The numerical model is based on the finite volume method and the homogeneous equilibrium model. Empirical correlations are applied to provide closure relations. Results show that the choice of void fraction correlation influences the refrigerant charge and pressure drop calculations, but does not influence the heat transfer rate.
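For context, the homogeneous equilibrium model mentioned above implies a particular void fraction relation in terms of vapor quality x and the phase densities. The sketch below is that textbook relation only, not the paper's full condenser model:

```python
def homogeneous_void_fraction(x, rho_g, rho_l):
    """Homogeneous-model void fraction for vapor quality x (0 < x <= 1),
    gas density rho_g and liquid density rho_l:
    alpha = 1 / (1 + ((1 - x) / x) * (rho_g / rho_l))."""
    return 1.0 / (1.0 + (1.0 - x) / x * (rho_g / rho_l))
```

Because the gas phase is far less dense than the liquid in a condenser, even a modest quality gives a void fraction near one, which is why the correlation choice matters for refrigerant charge estimates.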
Accurate ab Initio Spin Densities
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740]. PMID:22707921
NASA Astrophysics Data System (ADS)
Ichikawa, R.; Hobiger, T.; Koyama, Y.; Kondo, T.
2008-12-01
The Japan Meteorological Agency (JMA) meso-scale analysis data (MANAL data) which we used in our study provides temperature, humidity, and pressure values at the surface and at 21 height levels (which vary between several tens of meters and about 31 km), for each node in a 10 km by 10 km grid that covers the Japanese islands, the surrounding ocean and eastern Eurasia. The 3-hourly operational products have been available from JMA since March 2006. We have simultaneously evaluated atmospheric parameters (equivalent zenith total delay and linear horizontal delay gradients) and position errors derived from slant path delays obtained by the KAshima RAytracing Tools (KARAT) through the MANAL data. Most of the early mapping functions developed for VLBI and GPS were based on the assumption of azimuthal isotropy. On the other hand, recent geodetic analyses are carried out by applying modern mapping functions based on numerical weather analysis fields. The Global Mapping Function (GMF) by Boehm et al. (2006) and the Vienna Mapping Function (VMF) by Boehm and Schuh (2004) have been successfully applied to remove the zenith hydrostatic delay in recent years. In addition, the lateral spatial variation of wet delay is reduced by linear gradient estimation. Comparisons between KARAT-based slant delays and empirical mapping functions indicate large biases ranging from 18 to 90 mm, which is considered to be caused by significant variability of water vapor. Position error simulations reveal that the high variability of the errors is clearly associated with severe atmospheric phenomena. Such simulations are very useful to investigate the characteristics of positioning errors generated by local atmospheric disturbances. Finally, we compared PPP-processed position solutions using KARAT with those using the latest mapping functions covering a period of two weeks of GEONET data. The KARAT solution is almost identical to the solution using GMF with a linear gradient model, but in some cases tends to
Determining the Numerical Stability of Quantum Chemistry Algorithms.
Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim
2011-08-01
We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, which is of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) if coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Due to the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided. PMID:26606614
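The core idea, statistically probing stability by repeating a computation under ulp-scale random noise, can be imitated at runtime. The authors inject noise by automatic code injection into compiler output; the input-perturbation sketch below is a deliberate simplification of that idea:

```python
import random
import statistics

EPS = 2.0 ** -52  # double-precision machine epsilon

def noisy(x):
    """Perturb x by a random relative amount on the order of one ulp."""
    return x * (1.0 + random.uniform(-EPS, EPS))

def spread(algorithm, inputs, runs=200):
    """Standard deviation of repeated runs on ulp-perturbed inputs:
    a crude but direct estimate of the algorithm's numerical stability."""
    results = [algorithm([noisy(v) for v in inputs]) for _ in range(runs)]
    return statistics.stdev(results)
```

A stable computation (summing 100 ones) shows a spread near machine precision, whereas a catastrophic cancellation such as `(1e16 + 1.0) - 1e16` shows a spread of order one, exactly the kind of instability the statistical analysis is designed to expose.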
Yi, Haeseung; Xiao, Tong; Thomas, Parijatham; Aguirre, Alejandra; Smalletz, Cindy; David, Raven; Crew, Katherine
2015-01-01
Background Breast cancer risk assessment including genetic testing can be used to classify people into different risk groups with screening and preventive interventions tailored to the needs of each group, yet the implementation of risk-stratified breast cancer prevention in primary care settings is complex. Objective To address barriers to breast cancer risk assessment, risk communication, and prevention strategies in primary care settings, we developed a Web-based decision aid, RealRisks, that aims to improve preference-based decision-making for breast cancer prevention, particularly in low-numerate women. Methods RealRisks incorporates experience-based dynamic interfaces to communicate risk aimed at reducing inaccurate risk perceptions, with modules on breast cancer risk, genetic testing, and chemoprevention that are tailored. To begin, participants learn about risk by interacting with two games of experience-based risk interfaces, demonstrating average 5-year and lifetime breast cancer risk. We conducted four focus groups in English-speaking women (age ≥18 years), a questionnaire completed before and after interacting with the decision aid, and a semistructured group discussion. We employed a mixed-methods approach to assess accuracy of perceived breast cancer risk and acceptability of RealRisks. The qualitative analysis of the semistructured discussions assessed understanding of risk, risk models, and risk appropriate prevention strategies. Results Among 34 participants, mean age was 53.4 years, 62% (21/34) were Hispanic, and 41% (14/34) demonstrated low numeracy. According to the Gail breast cancer risk assessment tool (BCRAT), the mean 5-year and lifetime breast cancer risk were 1.11% (SD 0.77) and 7.46% (SD 2.87), respectively. After interacting with RealRisks, the difference in perceived and estimated breast cancer risk according to BCRAT improved for 5-year risk (P=.008). In the qualitative analysis, we identified potential barriers to adopting risk
Exploring accurate Poisson–Boltzmann methods for biomolecular simulations
Wang, Changhao; Wang, Jun; Cai, Qin; Li, Zhilin; Zhao, Hong-Kai; Luo, Ray
2013-01-01
Accurate and efficient treatment of electrostatics is a crucial step in computational analyses of biomolecular structures and dynamics. In this study, we have explored a second-order finite-difference numerical method to solve the widely used Poisson–Boltzmann equation for electrostatic analyses of realistic bio-molecules. The so-called immersed interface method was first validated and found to be consistent with the classical weighted harmonic averaging method for a diversified set of test biomolecules. The numerical accuracy and convergence behaviors of the new method were next analyzed in its computation of numerical reaction field grid potentials, energies, and atomic solvation forces. Overall similar convergence behaviors were observed as those by the classical method. Interestingly, the new method was found to deliver more accurate and better-converged grid potentials than the classical method on or nearby the molecular surface, though the numerical advantage of the new method is reduced when grid potentials are extrapolated to the molecular surface. Our exploratory study indicates the need for further improving interpolation/extrapolation schemes in addition to the developments of higher-order numerical methods that have attracted most attention in the field. PMID:24443709
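The classical weighted harmonic averaging used as the reference above assigns each finite-difference edge crossing the dielectric boundary an effective coefficient; when the interface bisects the edge this reduces to the harmonic mean of the two permittivities (two half-cells in series, like resistors). A minimal sketch of that symmetric case, offered as an illustration rather than the paper's implementation:

```python
def harmonic_edge(eps_a, eps_b):
    """Effective dielectric constant for a grid edge whose two halves lie
    in media with permittivities eps_a and eps_b (half-cells in series)."""
    return 2.0 * eps_a * eps_b / (eps_a + eps_b)
```

For typical protein-interior and water values (roughly 2 and 80), the edge coefficient is about 3.9, much closer to the low-permittivity side, reflecting that the less polarizable half-cell limits the flux.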
Fast and Provably Accurate Bilateral Filtering
NASA Astrophysics Data System (ADS)
Chaudhury, Kunal N.; Dabhade, Swapnil D.
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$. The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
Fast and Provably Accurate Bilateral Filtering.
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
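For orientation, the direct O(S)-per-pixel bilateral filter that the paper accelerates can be sketched in one dimension with Gaussian spatial and range kernels. This is the brute-force baseline, not the authors' fast algorithm:

```python
import math

def bilateral_1d(signal, sigma_s, sigma_r, radius):
    """Direct 1-D bilateral filter: each output sample is a weighted mean
    of its neighbors, weighted by both spatial distance (sigma_s) and
    intensity difference (sigma_r). Borders are handled by clamping."""
    out = []
    for i, center in enumerate(signal):
        acc = wsum = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), len(signal) - 1)  # clamp at borders
            w = (math.exp(-k * k / (2.0 * sigma_s ** 2))
                 * math.exp(-(signal[j] - center) ** 2 / (2.0 * sigma_r ** 2)))
            acc += w * signal[j]
            wsum += w
        out.append(acc / wsum)
    return out
```

With a narrow range kernel, samples across a step edge receive negligible range weight, so the edge survives smoothing; this is the edge-preserving behavior that distinguishes the bilateral filter from a plain Gaussian blur.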
Accurate adiabatic correction in the hydrogen molecule
Pachucki, Krzysztof; Komasa, Jacek
2014-12-14
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.
Accurate adiabatic correction in the hydrogen molecule
NASA Astrophysics Data System (ADS)
Pachucki, Krzysztof; Komasa, Jacek
2014-12-01
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.
Differentiated control of web traffic: a numerical analysis
NASA Astrophysics Data System (ADS)
Guo, Liang; Matta, Ibrahim
2002-07-01
Internet measurements show that the size distribution of Web-based transactions is usually very skewed; a few large requests constitute most of the total traffic. Motivated by the advantages of scheduling algorithms which favor short jobs, we propose to perform differentiated control over Web-based transactions to give preferential service to short web requests. The control is realized through service semantics provided by Internet Traffic Managers, a Diffserv-like architecture. To evaluate the performance of such a control system, it is necessary to have a fast but accurate analytical method. To this end, we model the Internet as a time-shared system and propose a numerical approach which utilizes Kleinrock's conservation law to solve the model. The numerical results are shown to match well those obtained by packet-level simulation, which runs orders of magnitude slower than our numerical method.
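Kleinrock's conservation law, which the paper uses to close its model, states that for any work-conserving scheduling discipline in an M/G/1 queue the load-weighted sum of mean waiting times, Σ ρₖWₖ, is invariant: favoring short requests necessarily penalizes the long ones by a compensating amount. A minimal numerical check using the textbook non-preemptive priority formulas (an illustration of the law, not the paper's model):

```python
def priority_waits(rhos, w0):
    """Mean waiting times in a non-preemptive priority M/G/1 queue,
    classes served in list order; w0 is the mean residual service work.
    W_k = w0 / ((1 - sigma_{k-1}) * (1 - sigma_k)), sigma_k = rho_1+...+rho_k."""
    waits, sigma_prev = [], 0.0
    for rho in rhos:
        sigma = sigma_prev + rho
        waits.append(w0 / ((1.0 - sigma_prev) * (1.0 - sigma)))
        sigma_prev = sigma
    return waits

def weighted_wait(rhos, w0):
    """Load-weighted total wait: invariant under reordering of priorities."""
    return sum(r * w for r, w in zip(rhos, priority_waits(rhos, w0)))
```

Swapping which class gets priority changes the individual waits but leaves the weighted sum at ρ·w0/(1-ρ), which is the conservation constraint exploited in the numerical solution.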
Tandon, Manish; Singh, Anshuman; Saluja, Vandana; Dhankhar, Mandeep; Pandey, Chandra Kant; Jain, Priyanka
2016-01-01
Background: Pain scores are used for acute pain management. The assessment of pain by the patient as well as the caregiver can be influenced by a variety of factors. The numeric rating scale (NRS) is widely used due to its easy application. The NRS requires abstract thinking by a patient to assign a score that correctly reflects analgesic needs, and its interpretation is subject to bias. Objectives: The study was done to validate a 4-point objective pain score (OPS) for the evaluation of acute postoperative pain and to compare it with the NRS. Patients and Methods: A total of 1021 paired readings of the OPS and NRS of 93 patients who underwent laparotomy and used patient-controlled analgesia were evaluated. Acute pain service (APS) personnel recorded the OPS and NRS. Rescue analgesia was divided into two incremental levels (Level 1: paracetamol 1 g for NRS 2-5 and OPS 3; Level 2: fentanyl 25 mcg for NRS ≥ 6 and OPS 1 and 2). In cases of disagreement between the two scores, an independent consultant decided the rescue analgesia. Results: The NRS and OPS agreed across the range of pain. There were 25 disagreements in 8 patients. On 24 occasions, rescue analgesia was increased from Level 1 to 2, and on one occasion it was decreased from Level 2 to 1. On all 25 occasions, the decision to supplement analgesia went in favor of the OPS over the NRS. Besides these 25 disagreements, there were 17 occasions on which observer bias was possible for Level 2 rescue analgesia. Conclusions: The OPS is a good stand-alone pain score and is better than the NRS for defining mild and moderate pain. It may even be used to supplement the NRS when it is indicative of mild or moderate pain. PMID:27110530
Kisohara, N.; Suzuki, H.; Akita, K.; Kasahara, N.
2012-07-01
A double-wall-tube is nominated for the steam generator heat transfer tube of future sodium fast reactors (SFRs) in Japan, to decrease the possibility of sodium/water reaction. The double-wall-tube consists of an inner tube and an outer tube that are in mechanical contact, heat transfer across the interface being maintained by their residual contact stress. During long-term SG operation, the contact stress at the interface gradually decreases due to stress relaxation. This phenomenon might increase the thermal resistance of the interface and degrade the tube heat transfer performance. The contact stress relaxation can be predicted by numerical analysis, but the analysis requires the initial residual stress distributions in the tubes, and these are not well known, which prevents a precise relaxation evaluation. In order to resolve this issue, a neutron diffraction method was employed to reveal the tri-axial (radial, hoop and longitudinal) initial residual stress distributions in the double-wall-tube. Strain gauges also were used to evaluate the contact stress. The measurement results were analyzed using a JAEA structural computer code to determine the initial residual stress distributions. Based on the stress distributions, the structural computer code predicted the progress of the relaxation and the decrease of the contact stress. The radial and longitudinal temperature distributions in the tubes were input to the structural analysis model. Since the radial thermal expansion difference between the inner (colder) and outer (hotter) tubes reduces the contact stress while the steam pressure inside the tube increases it, the analytical model also took these effects into consideration. It has been concluded that the inner and outer tubes remain in contact with sufficient stress during the plant lifetime, and that effective heat transfer degradation does not occur in the double-wall-tube SG. (authors)
NASA Astrophysics Data System (ADS)
Well, Reinhard; Buchen, Caroline; Lewicka-Szczebak, Dominika; Ruoss, Nicolas
2016-04-01
Common methods for measuring soil denitrification in situ include monitoring the accumulation of 15N labelled N2 and N2O evolved from a 15N labelled soil nitrate pool in soil surface chambers. Gas diffusion is considered to be the main accumulation process. Because accumulation of the gases decreases concentration gradients between soil and chamber over time, gas production rates are underestimated if calculated from chamber concentrations. Moreover, concentration gradients to the non-labelled subsoil exist, inevitably causing downward diffusion of 15N labelled denitrification products. A numerical model for simulating gas diffusion in soil was used in order to determine the significance of this source of error. Results show that subsoil diffusion of 15N labelled N2 and N2O - and thus potential underestimation of denitrification derived from chamber fluxes - increases with chamber closure time as well as with increasing diffusivity. Simulations based on the range of typical gas diffusivities of unsaturated soils show that the fraction of subsoil diffusion after chamber closure for 1 hour is always significant, with values exceeding 30% of the total production of 15N labelled N2 and N2O. Field experiments for measuring denitrification with the 15N gas flux method were conducted. The ability of the model to predict the time pattern of gas accumulation was evaluated by comparing measured 15N2 concentrations with simulated values.
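A minimal numerical sketch of the error source analyzed above: gas produced near the soil surface partitions between an overlying closed chamber and downward diffusion into the subsoil, so the chamber alone recovers only part of the production. The explicit 1-D diffusion scheme, geometry, diffusivity and production rate are all illustrative assumptions, not the authors' model.

```python
import numpy as np

nz, dz = 100, 0.005            # 50 cm soil column, 5 mm cells
D = 1e-6                       # effective gas diffusivity, m^2/s (assumed)
dt = 0.4 * dz**2 / D           # explicit stability limit
h_ch = 0.10                    # chamber height (well mixed), m
prod_depth = 4                 # production confined to top 2 cm
P = 1e-6                       # production rate, mol m^-3 s^-1 (assumed)

c = np.zeros(nz)               # soil gas concentration
c_ch, produced, t = 0.0, 0.0, 0.0
while t < 3600.0:              # 1 h chamber closure
    flux_top = D * (c[0] - c_ch) / dz      # soil -> chamber exchange
    flux_bot = D * c[-1] / dz              # loss to non-labelled subsoil (sink)
    lap = np.empty(nz)
    lap[1:-1] = c[:-2] - 2 * c[1:-1] + c[2:]
    lap[0] = c[1] - c[0]                   # boundary fluxes handled explicitly
    lap[-1] = c[-2] - c[-1]
    c += dt * D * lap / dz**2
    c[0] -= dt * flux_top / dz
    c[-1] -= dt * flux_bot / dz
    c[:prod_depth] += dt * P               # denitrification source
    c_ch += dt * flux_top / h_ch
    produced += dt * P * prod_depth * dz
    t += dt

frac_chamber = c_ch * h_ch / produced      # fraction recovered in the chamber
print(frac_chamber)                        # < 1: chamber flux underestimates
```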
Pringle, Scott E.; Cooper, Clay A.; Glass Jr., Robert J.
2000-12-21
An experimental investigation was conducted to study double-diffusive finger convection in a Hele-Shaw cell by layering a sucrose solution over a more-dense sodium chloride (NaCl) solution. The solutal Rayleigh numbers were on the order of 60,000, based upon the height of the cell (25 cm), and the buoyancy ratio was 1.2. A full-field light transmission technique was used to measure the concentration of a dye tracer dissolved in the NaCl solution. We analyze the concentration fields to yield the temporal evolution of length scales associated with the vertical and horizontal finger structure as well as the mass flux. These measures show a rapid progression through two early stages to a mature stage and finally a rundown period in which the mass flux decays rapidly. The data are useful for the development and evaluation of numerical simulators designed to model diffusion and convection of multiple components in porous media. The results are useful for correct formulation at both the process scale (the scale of the experiment) and the effective scale (where the lab-scale processes are averaged up to produce averaged parameters). A fundamental understanding of the fine-scale dynamics of double-diffusive finger convection is necessary in order to successfully parameterize large-scale systems.
Accurate adjoint design sensitivities for nano metal optics.
Hansen, Paul; Hesselink, Lambertus
2015-09-01
We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483
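The adjoint trick that makes these design sensitivities cheap is easiest to see in its generic linear-algebra form: one extra (adjoint) solve yields the gradient of an objective with respect to any number of design parameters. The 2x2 system and its parameter dependence below are invented for illustration; the paper applies the same principle to discretized Maxwell's equations.

```python
import numpy as np

# Objective J(p) = c^T x(p), where A(p) x = b. Then dJ/dp = -lambda^T (dA/dp) x,
# with the adjoint variable solving A^T lambda = c.
p = 2.0
A = lambda p: np.array([[3.0 + p, 1.0], [1.0, 4.0]])   # illustrative system
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])             # its parameter derivative
b = np.array([1.0, 2.0])
c = np.array([2.0, -1.0])

x = np.linalg.solve(A(p), b)          # forward solve
lam = np.linalg.solve(A(p).T, c)      # single adjoint solve
dJ_adjoint = -lam @ dA_dp @ x         # sensitivity, reusable for many parameters

# Finite-difference check of the adjoint gradient
J = lambda p: c @ np.linalg.solve(A(p), b)
eps = 1e-6
dJ_fd = (J(p + eps) - J(p - eps)) / (2 * eps)
print(dJ_adjoint, dJ_fd)
```

The cost advantage is that `lam` does not depend on which parameter is perturbed, which is why adjoint methods scale to thousands of shape parameters in topology and nano-optics optimization.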
Efficient and accurate sound propagation using adaptive rectangular decomposition.
Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C
2009-01-01
Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
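The core idea, exact per-mode time stepping in a rectangular domain, can be sketched in one dimension: each cosine mode of the wave equation is advanced by the closed-form recurrence p_m(t+dt) = 2 cos(w_m dt) p_m(t) - p_m(t-dt), so there is no dispersion error regardless of time step. Domain size, sound speed and the initial pulse below are illustrative, and the full method's DCT/GPU machinery and inter-partition coupling are omitted.

```python
import numpy as np

L, c, N = 1.0, 340.0, 64
x = (np.arange(N) + 0.5) * L / N
modes = np.arange(N)
phi = np.cos(np.outer(modes, x) * np.pi / L)   # cosine basis (rigid walls)
w = c * modes * np.pi / L                      # modal angular frequencies

p0 = np.exp(-200 * (x - 0.5 * L) ** 2)         # initial pressure pulse, at rest
a = (phi @ p0) * (2.0 / N)                     # project onto modes (DCT-like)
a[0] *= 0.5

dt, steps = 1e-4, 50
cosw = np.cos(w * dt)
p_prev, p_cur = a * np.cos(-w * dt), a.copy()  # exact start-up values
for _ in range(steps):
    p_prev, p_cur = p_cur, 2 * cosw * p_cur - p_prev   # exact modal update

exact = a * np.cos(w * steps * dt)             # closed-form modal solution
err = np.max(np.abs(p_cur - exact))
print(err)                                     # ~ machine precision
```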
Numerical modeling of combustion dynamics in a lean premixed combustor
Cannon, S.M.; Smith, C.E.
1998-07-01
The objective of this study was to evaluate the ability of a time-accurate, 2-D axi-symmetric CFD model to accurately predict combustion dynamics in a premixed pipe combustor driven by mixture feed variation. Independently measured data, including the magnitude and frequency of combustor pressure, were used to evaluate the model. The Smagorinsky, RNG k-ε, and molecular viscosity models were used to describe the subgrid turbulence, and a one-step, finite-rate reaction to equilibrium products model was used to describe the subgrid chemistry. Swirl source terms were included within the premix passage's computational domain and allowed the model to retain known boundary conditions at the choked flow inlet and the constant pressure exit. To ensure pressure waves were accurately captured, 1-D numerical analyses were first performed to assess the effects of boundary conditions, temporal and spatial differencing, time step, and grid size. It was found that the selected numerical details produced little numerical dissipation of the pressure waves. Then, 2-D axisymmetric analyses were performed in which the inlet temperature was varied. It was found that increases in the inlet temperature (keeping a constant mass flow rate) had a large effect on the unsteady combustor behavior since reaction and advection rates were increased. The correct trend of decreasing rms pressures with increasing inlet temperature was predicted. This agreement in rms pressure behavior supports the ability of the CFD model to accurately capture unsteady heat release and its coupling with resonant acoustic waves in multi-dimensional combustor systems. The effect of the subgrid turbulence model was small for the unstable cases studied here.
Numerical modeling of nonintrusive inspection systems
Hall, J.; Morgan, J.; Sale, K.
1992-12-01
A wide variety of nonintrusive inspection systems have been proposed in the past several years for the detection of hidden contraband in airline luggage and shipping containers. The majority of these proposed techniques depend on the interaction of radiation with matter to produce a signature specific to the contraband of interest, whether drugs or explosives. In the authors' role as diagnostic specialists in the Underground Test Program over the past forty years, L-Division of the Lawrence Livermore National Laboratory has developed technical expertise in the combined numerical and experimental modeling of these types of systems. Based on their experience, they are convinced that detailed numerical modeling provides a much more accurate estimate of the actual performance of complex experiments than simple analytical modeling. Furthermore, the construction of detailed numerical prototypes allows experimenters to explore the entire region of parameter space available to them before committing their ideas to hardware. This sort of systematic analysis has often led to improved experimental designs and reductions in fielding costs. L-Division has developed an extensive suite of computer codes to model proposed experiments and possible background interactions. These codes allow one to simulate complex radiation sources, model 3-dimensional system geometries with "real world" complexity, specify detailed elemental distributions, and predict the response of almost any type of detector. In this work several examples are presented illustrating the use of these codes in modeling experimental systems at LLNL, and their potential usefulness in evaluating nonintrusive inspection systems is discussed.
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Numerical solution of boundary-integral equations for molecular electrostatics.
Bardhan, Jaydeep P
2009-03-01
Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived. PMID:19275391
Detecting Cancer Quickly and Accurately
NASA Astrophysics Data System (ADS)
Gourley, Paul; McDonald, Anthony; Hendricks, Judy; Copeland, Guild; Hunter, John; Akhil, Ohmar; Capps, Heather; Curry, Marc; Skirboll, Steve
2000-03-01
We present a new technique for high throughput screening of tumor cells in a sensitive nanodevice that has the potential to quickly identify a cell population that has begun the rapid protein synthesis and mitosis characteristic of cancer cell proliferation. Currently, pathologists rely on microscopic examination of cell morphology using century-old staining methods that are labor-intensive, time-consuming and frequently in error. New micro-analytical methods for automated, real time screening without chemical modification are critically needed to advance pathology and improve diagnoses. We have teamed scientists with physicians to create a microlaser biochip (based upon our R&D award winning bio-laser concept)1 which evaluates tumor cells by quantifying their growth kinetics. The key new discovery was demonstrating that the lasing spectra are sensitive to the biomolecular mass in the cell, which changes the speed of light in the laser microcavity. Initial results with normal and cancerous human brain cells show that only a few hundred cells -- the equivalent of a billionth of a liter -- are required to detect abnormal growth. The ability to detect cancer in such a minute tissue sample is crucial for resecting a tumor margin or grading highly localized tumor malignancy. 1. P. L. Gourley, NanoLasers, Scientific American, March 1998, pp. 56-61. This work supported under DOE contract DE-AC04-94AL85000 and the Office of Basic Energy Sciences.
Detecting cancer quickly and accurately
NASA Astrophysics Data System (ADS)
Gourley, Paul L.; McDonald, Anthony E.; Hendricks, Judy K.; Copeland, G. C.; Hunter, John A.; Akhil, O.; Cheung, D.; Cox, Jimmy D.; Capps, H.; Curry, Mark S.; Skirboll, Steven K.
2000-03-01
We present a new technique for high throughput screening of tumor cells in a sensitive nanodevice that has the potential to quickly identify a cell population that has begun the rapid protein synthesis and mitosis characteristic of cancer cell proliferation. Currently, pathologists rely on microscopic examination of cell morphology using century-old staining methods that are labor-intensive, time-consuming and frequently in error. New micro-analytical methods for automated, real time screening without chemical modification are critically needed to advance pathology and improve diagnoses. We have teamed scientists with physicians to create a microlaser biochip (based upon our R&D award winning bio-laser concept) which evaluates tumor cells by quantifying their growth kinetics. The key new discovery was demonstrating that the lasing spectra are sensitive to the biomolecular mass in the cell, which changes the speed of light in the laser microcavity. Initial results with normal and cancerous human brain cells show that only a few hundred cells -- the equivalent of a billionth of a liter -- are required to detect abnormal growth. The ability to detect cancer in such a minute tissue sample is crucial for resecting a tumor margin or grading highly localized tumor malignancy.
Towards an accurate bioimpedance identification
NASA Astrophysics Data System (ADS)
Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.
2013-04-01
This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To demonstrate the accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal which system identification framework should be used.
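For reference, the classical cross-/autocorrelation (H1) estimator that the LPM is compared against reduces, on a periodic multisine, to a ratio of cross- and auto-spectra at the excited bins. The FIR test system and the excitation grid below are invented for illustration; in this noise-free periodic case the estimate is exact, which is what the sketch checks.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
k = np.arange(1, 100)                         # excited harmonic bins
phases = rng.uniform(0, 2 * np.pi, k.size)    # random-phase multisine
t = np.arange(N)
u = np.cos(2 * np.pi * np.outer(k, t) / N + phases[:, None]).sum(axis=0)

b = np.array([0.5, 0.3, 0.2])                 # known FIR "system" to identify
# Periodic steady-state response via circular convolution
y = np.fft.ifft(np.fft.fft(u) * np.fft.fft(b, N)).real

U, Y = np.fft.fft(u), np.fft.fft(y)
# H1 estimate: cross-spectrum over auto-spectrum at the excited bins
H1 = (Y[k] * np.conj(U[k])) / (U[k] * np.conj(U[k]))
H_true = np.fft.fft(b, N)[k]
err = np.max(np.abs(H1 - H_true))
print(err)                                    # ~ 0 in this noise-free sketch
```

With noise and transients present the H1 estimate leaks and biases, which is the regime where the LPM's local polynomial modeling of the FRF pays off.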
Wong, Yong Foo; Chin, Sung-Tong; Perlmutter, Patrick; Marriott, Philip J
2015-03-27
To explore the possible obligate interactions between the phytopathogenic fungus and Aquilaria malaccensis which result in generation of a complex array of secondary metabolites, we describe a comprehensive two-dimensional gas chromatography (GC × GC) method, coupled to accurate mass time-of-flight mass spectrometry (TOFMS) for the untargeted and comprehensive metabolic profiling of essential oils from naturally infected A. malaccensis trees. A polar/non-polar column configuration was employed, offering an improved separation pattern of components when compared to other column sets. Four different grades of the oils displayed quite different metabolic patterns, suggesting the evolution of a signalling relationship between the host tree (emergence of various phytoalexins) and fungi (activation of biotransformation). In total, ca. 550 peaks/metabolites were detected, of which tentative identification of 155 of these compounds was reported, representing between 20.1% and 53.0% of the total ion count. These are distributed over the chemical families of monoterpenic and sesquiterpenic hydrocarbons, oxygenated monoterpenes and sesquiterpenes (comprised of ketone, aldehyde, oxide, alcohol, lactone, keto-alcohol and diol), norterpenoids, diterpenoids, short chain glycols, carboxylic acids and others. The large number of metabolites detected, combined with the ease with which they are located in the 2D separation space, emphasises the importance of a comprehensive analytical approach for the phytochemical analysis of plant metabolomes. Furthermore, the potential of this methodology in grading agarwood oils by comparing the obtained metabolic profiles (pattern recognition for unique metabolite chemical families) is discussed. The phytocomplexity of the agarwood oils signified the production of a multitude of plant-fungus mediated secondary metabolites as chemical signals for natural ecological communication. To the best of our knowledge, this is the most complete
NASA Astrophysics Data System (ADS)
Nakamura, Tomoaki; Wada, Akira; Hasegawa, Kazuyuki; Ochiai, Minoru
CO2 oceanic sequestration is one of the technologies for reducing the discharge of CO2 into the atmosphere, which is considered to cause global warming; it consists of isolating industry-made CO2 gas within the depths of the ocean. This method is expected to enable industry-made CO2 to be separated from the atmosphere for a considerably long period of time. On the other hand, it is also feared that the CO2 injected in the ocean may lower the pH of the seawater surrounding the sequestration site and thus adversely affect marine organisms. For evaluating the biological influences, we have studied how to precisely predict the CO2 distribution around the injection site by a numerical simulation method. In previous studies, in which a 2 degree by 2 degree mesh was employed in the simulation, CO2 concentrations tended to be evenly dispersed within the grid, giving lower concentration values. Thus, the calculation accuracy within the area several hundred kilometers from the CO2 injection site was not satisfactory for the biological effect assessment. In the present study, we improved the accuracy of the concentration distribution by refining the computational mesh resolution to 0.2 by 0.2 degrees. With the refined method we could obtain a detailed CO2 distribution in waters within several hundred kilometers of the injection site, and clarified that the moving-ship procedure may have a smaller lowered-pH effect on marine organisms than the fixed-point release procedure of CO2 sequestration.
NASA Astrophysics Data System (ADS)
Benaïchouche, Abed; Stab, Olivier; Tessier, Bruno; Cojan, Isabelle
2016-01-01
In landscapes dominated by fluvial erosion, the landscape morphology is closely related to the hydrographic network system. In this paper, we investigate the hydrographic network reorganization caused by a headward piracy mechanism between two drainage basins in France, the Meuse and the Moselle. Several piracies occurred in the Meuse basin during the past one million years, and the basin's current characteristics are favorable to new piracies by the Moselle river network. This study evaluates the consequences over the next several million years of a relative lowering of the Moselle River (and thus of its basin) with respect to the Meuse River. The problem is addressed with a numerical modeling approach (landscape evolution model, hereafter LEM) that requires empirical determinations of parameters and threshold values. Classically, fitting of the parameters is based on analysis of the relationship between the slope and the drainage area and is conducted under the hypothesis of equilibrium. Application of this conventional approach to the capture issue yields incomplete results that have been consolidated by a parametric sensitivity analysis. The LEM equations give a six-dimensional parameter space that was explored with over 15,000 simulations using the landscape evolution model GOLEM. The results demonstrate that stream piracies occur in only four locations in the studied reach near the city of Toul. The locations are mainly controlled by the local topography and are model-independent. Nevertheless, the chronology of the captures depends on two parameters: the river concavity (given by the fluvial advection equation) and the hillslope erosion factor. Thus, the simulations lead to three different scenarios that are explained by a phenomenon of exclusion or a string of events.
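The class of model used above (GOLEM is a detachment-limited, stream-power landscape evolution model) can be sketched on a 1-D river profile: dz/dt = U - K A^m S^n, with drainage area prescribed by a Hack-type relation. All constants, the A = x^2 area law and the 1-D geometry are illustrative; the check is that the relaxed profile reproduces the slope-area relation S = (U/K)^(1/n) A^(-m/n) that underlies the parameter fitting discussed above.

```python
import numpy as np

dx, N = 1.0, 100
x = dx * np.arange(1, N + 1)      # distance from the drainage divide
A = x ** 2                        # drainage area grows downstream (assumed)
U, K, m, n = 1e-3, 1e-4, 0.5, 1.0 # uplift, erodibility, stream-power exponents
z = np.zeros(N)                   # start flat; base level fixed at z[-1] = 0

dt = 10.0                         # satisfies dt*K*A^m/dx < 1 everywhere
for _ in range(30000):
    S = (z[:-1] - z[1:]) / dx                     # downstream slope
    E = K * A[:-1] ** m * np.maximum(S, 0.0) ** n # stream-power incision
    z[:-1] += dt * (U - E)                        # uplift minus erosion

S = (z[:-1] - z[1:]) / dx
S_pred = (U / K) ** (1 / n) * A[:-1] ** (-m / n)  # steady-state slope-area law
err = np.max(np.abs(S / S_pred - 1.0))
print(err)                                        # profile has relaxed to steady state
```

In the paper's setting the same ingredients act in 2-D, where local differences in K, concavity (m/n) and base level decide whether a divide migrates far enough for one of the four candidate piracies to occur.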
On the accuracy of numerical integration over the unit sphere applied to full network models
NASA Astrophysics Data System (ADS)
Itskov, Mikhail
2016-05-01
This paper is motivated by a recent study by Verron (Mech Mater 89:216-228, 2015) which revealed huge errors of the numerical integration over the unit sphere in application to large strain problems. For the verification of numerical integration schemes we apply here other analytical integrals over the unit sphere which demonstrate much more accurate results. Relative errors of these integrals with respect to the corresponding analytical solutions are evaluated also for a full network model of rubber elasticity based on a Padé approximation of the inverse Langevin function as the chain force. According to the results of our study, the numerical integration over the unit sphere can still be considered as a reliable and accurate tool for full network models.
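The kind of verification integral the paper relies on is easy to reproduce: a product quadrature over the unit sphere checked against a closed-form result. The test function below, whose sphere average of (n·e)^4 equals 1/5 for any fixed unit vector e, and the grid sizes are illustrative choices, not the specific schemes studied in the paper.

```python
import numpy as np

def sphere_average(f, n_theta=16, n_phi=32):
    """Average of f(n) over unit directions: Gauss-Legendre in cos(theta),
    uniform (trapezoid) grid in phi, which is exact for periodic integrands."""
    u, wu = np.polynomial.legendre.leggauss(n_theta)   # u = cos(theta), weights
    phi = 2 * np.pi * np.arange(n_phi) / n_phi
    uu, pp = np.meshgrid(u, phi, indexing="ij")
    st = np.sqrt(1.0 - uu ** 2)
    n = np.stack([st * np.cos(pp), st * np.sin(pp), uu], axis=-1)
    # (1/4pi) * int f dA  ->  sum_i sum_j wu_i f_ij / (2 * n_phi)
    return (wu @ f(n)).sum() / (2.0 * n_phi)

e = np.array([1.0, 2.0, 2.0]) / 3.0        # arbitrary fixed unit vector
avg = sphere_average(lambda n: (n @ e) ** 4)
print(avg)   # -> 0.2, the closed-form sphere average of (n.e)^4
```

For a degree-4 polynomial integrand this quadrature is exact to machine precision; the difficulties discussed in the paper arise for the nearly singular chain-force integrands of full network models at large stretch.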
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 mag in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
López, Mónica García; Fussell, Richard J; Stead, Sara L; Roberts, Dominic; McCullagh, Mike; Rao, Ramesh
2014-12-19
This study reports the development and validation of a screening method for the detection of pesticides in 11 different fruit and vegetable commodities. The method was based on ultra performance liquid chromatography-quadrupole-time of flight-mass spectrometry (UPLC-QTOF-MS). The objective was to validate the method in accordance with the SANCO guidance document (12571/2013) on analytical quality control and validation procedures for pesticide residues analysis in food and feed. Samples were spiked with 199 pesticides, each at two different concentrations (0.01 and 0.05 mg kg(-1)) and extracted using the QuEChERS approach. Extracts were analysed by UPLC-QTOF-MS using generic acquisition parameters. Automated detection and data filtering were performed using the UNIFI™ software and the peaks detected evaluated against a proprietary scientific library containing information for 504 pesticides. The results obtained using different data processing parameters were evaluated for 4378 pesticide/commodities combinations at 0.01 and 0.05 mg kg(-1). Using mass accuracy (± 5 ppm) with retention time (± 0.2 min) and a low response threshold (100 counts) the validated Screening Detection Limits (SDLs) were 0.01 mg kg(-1) and 0.05 mg kg(-1) for 57% and 79% of the compounds tested, respectively, with an average of 10 false detects per sample analysis. Excluding the most complex matrices (onion and leek) the detection rates increased to 69% and 87%, respectively. The use of additional parameters such as isotopic pattern and fragmentation information further reduced the number of false detects but compromised the detection rates, particularly at lower residue concentrations. The challenges associated with the validation and subsequent implementation of a pesticide multi-residue screening method are also discussed. PMID:25465001
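The detection logic described above (accurate mass within ±5 ppm combined with retention time within ±0.2 min) can be sketched in a few lines. The two library entries and detected peaks below are invented examples, not UNIFI library data.

```python
PPM_TOL, RT_TOL = 5.0, 0.2     # mass tolerance (ppm), retention-time window (min)

# Illustrative library entries: expected [M+H]+ m/z and retention time
library = [
    {"name": "imidacloprid", "mz": 256.0596, "rt": 4.52},
    {"name": "thiacloprid",  "mz": 253.0309, "rt": 5.10},
]

def matches(peak, entry):
    """A peak is a hit when both the ppm error and the RT offset are in tolerance."""
    ppm = abs(peak["mz"] - entry["mz"]) / entry["mz"] * 1e6
    return ppm <= PPM_TOL and abs(peak["rt"] - entry["rt"]) <= RT_TOL

def screen(peaks):
    return [(p, e["name"]) for p in peaks for e in library if matches(p, e)]

peaks = [
    {"mz": 256.0601, "rt": 4.49},   # ~2 ppm and 0.03 min off -> detected
    {"mz": 253.0309, "rt": 5.60},   # exact mass but RT outside the window
]
hits = screen(peaks)
print([name for _, name in hits])   # -> ['imidacloprid']
```

Tightening either tolerance trades false detects for missed compounds, which is the balance the validation in the study quantifies across 4378 pesticide/commodity combinations.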
Improved dynamic compensation for accurate cutting force measurements in milling applications
NASA Astrophysics Data System (ADS)
Scippa, A.; Sallese, L.; Grossi, N.; Campatelli, G.
2015-03-01
Accurate cutting-force measurements are key information in most machining-related studies, as they are fundamental in understanding the cutting processes, optimizing the cutting operations and evaluating the presence of instabilities that could affect the effectiveness of cutting processes. A variety of specifically designed transducers are commercially available nowadays, and many different approaches to measuring cutting forces are presented in the literature. The available transducers, though, have some limitations, since they are affected by the vibration of the surrounding system and by the transducer's natural frequency. These parameters can drastically reduce the measurement accuracy in some cases; hence an effective and accurate tool is required to compensate those dynamically induced errors in cutting force measurements. This work is aimed at developing and testing a compensation technique based on a Kalman filter estimator. Two different approaches, named "band-fitting" and "parallel elaboration" methods, have been developed to extend applications of this compensation technique, especially for milling purposes. The compensation filter has been designed upon the experimentally identified system dynamics, and its accuracy and effectiveness have been evaluated by numerical and experimental tests. Finally, its specific application in cutting force measurement compensation is described.
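A toy version of the underlying idea: model the transducer as a lightly damped second-order system, augment the state with the unknown cutting force (random-walk model), and let a Kalman filter recover the force from the distorted, noisy output. The dynamics, noise levels and constant-force input are illustrative assumptions, not the experimentally identified dynamics from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
wn, zeta, dt = 50.0, 0.05, 1e-3        # assumed transducer dynamics, sample time
# State [x, v, F]; simple Euler discretisation (adequate for this sketch)
A = np.array([[1.0, dt, 0.0],
              [-wn**2 * dt, 1.0 - 2 * zeta * wn * dt, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[wn**2, 0.0, 0.0]])      # statically calibrated output: y ~ F at DC
Q = np.diag([0.0, 0.0, 1e-2])          # random-walk drive on the force state
R = np.array([[1e-4]])                 # measurement noise covariance

x_true = np.array([0.0, 0.0, 1.0])     # true constant cutting force F = 1.0
xh, P = np.zeros(3), np.eye(3)
for _ in range(3000):
    x_true = A @ x_true
    y = H @ x_true + rng.normal(0.0, 1e-2, 1)  # distorted, noisy measurement
    xh = A @ xh                                 # Kalman predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R                         # Kalman update
    Kg = P @ H.T / S
    xh = xh + (Kg * (y - H @ xh)).ravel()
    P = (np.eye(3) - Kg @ H) @ P

print(xh[2])   # estimated force, close to the true value 1.0
```

The paper's "band-fitting" and "parallel elaboration" schemes extend this estimator to the broadband, multi-axis conditions of real milling; the sketch only shows the state-augmentation principle.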
NASA Technical Reports Server (NTRS)
Wang, Jun; Xu, Xiaoguang; Ding, Shouguo; Zeng, Jing; Spurr, Robert; Liu, Xiong; Chance, Kelly; Mishchenko, Michael I.
2014-01-01
We present a numerical testbed for remote sensing of aerosols, together with a demonstration for evaluating retrieval synergy from a geostationary satellite constellation. The testbed combines inverse (optimal-estimation) software with a forward model containing linearized code for computing particle scattering (for both spherical and non-spherical particles), a kernel-based (land and ocean) surface bi-directional reflectance facility, and a linearized radiative transfer model for polarized radiance. Calculation of gas absorption spectra uses the HITRAN (HIgh-resolution TRANsmission molecular absorption) database of spectroscopic line parameters and other trace species cross-sections. The outputs of the testbed include not only the Stokes 4-vector elements and their sensitivities (Jacobians) with respect to the aerosol single scattering and physical parameters (such as size and shape parameters, refractive index, and plume height), but also DFS (Degree of Freedom for Signal) values for retrieval of these parameters. This testbed can be used as a tool to provide an objective assessment of aerosol information content that can be retrieved for any constellation of (planned or real) satellite sensors and for any combination of algorithm design factors (in terms of wavelengths, viewing angles, radiance and/or polarization to be measured or used). We summarize the components of the testbed, including the derivation and validation of analytical formulae for Jacobian calculations. Benchmark calculations from the forward model are documented. In the context of NASA's Decadal Survey Mission GEOCAPE (GEOstationary Coastal and Air Pollution Events), we demonstrate the use of the testbed to conduct a feasibility study of using polarization measurements in and around the O2 A band for the retrieval of aerosol height information from space, as well as to assess potential improvement in the retrieval of fine- and coarse-mode aerosol optical depth (AOD) through the
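In the optimal-estimation framework used by such testbeds, the DFS follows directly from the Jacobian and the error covariances via the averaging kernel. A minimal sketch with synthetic placeholder matrices (not testbed outputs) is:

```python
import numpy as np

# Degrees of Freedom for Signal (DFS) from a Jacobian K, Rodgers-style
# optimal estimation. K, Se, Sa below are synthetic stand-ins.
rng = np.random.default_rng(0)
n_obs, n_state = 50, 6                      # e.g. radiances vs. aerosol parameters
K = rng.normal(size=(n_obs, n_state))       # Jacobian d(radiance)/d(state)
Se_inv = np.eye(n_obs) / 0.01**2            # inverse measurement-error covariance
Sa_inv = np.eye(n_state) / 1.0**2           # inverse a priori covariance

# Averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K; DFS = trace(A)
G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)   # gain matrix
A = G @ K
dfs = np.trace(A)                           # bounded above by n_state
```

A DFS close to the state dimension means the measurements, not the prior, constrain essentially all retrieved parameters; adding polarization channels enlarges K and typically raises the DFS.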
Panoutsos, C.S.; Hardalupas, Y.; Taylor, A.M.K.P.
2009-02-15
This work presents results from detailed chemical kinetics calculations of electronically excited OH (A²Σ, denoted as OH*) and CH (A²Δ, denoted as CH*) chemiluminescent species in laminar premixed and non-premixed counterflow methane-air flames, at atmospheric pressure. Eight different detailed chemistry mechanisms, with added elementary reactions that account for the formation and destruction of the chemiluminescent species OH* and CH*, are studied. The effects of flow strain rate and equivalence ratio on the chemiluminescent intensities of OH*, CH* and their ratio are studied, and the results are compared to chemiluminescent intensity ratio measurements from premixed laminar counterflow natural gas-air flames. This is done in order to numerically evaluate the measurement of equivalence ratio using OH* and CH* chemiluminescence, an experimental practice used in the literature. The calculations reproduced the experimental observation that there is no effect of strain rate on the chemiluminescent intensity ratio of OH* to CH*, and that the ratio is a monotonic function of equivalence ratio. In contrast, the strain rate was found to have an effect on both the OH* and CH* intensities, in agreement with experiment. The calculated OH*/CH* values showed that only five out of the eight mechanisms studied were within the same order of magnitude as the experimental data. A new mechanism, proposed in this work, gave results that agreed with experiment within 30%. It was found that the location of maximum emitted intensity from the excited species OH* and CH* was displaced by less than 65 and 115 μm, respectively, away from the maximum of the heat release rate, in agreement with experiments; this is small relative to the spatial resolution of experimental methods applied to combustion applications, and, therefore, it is expected that intensity
NASA Astrophysics Data System (ADS)
Wang, Jun; Xu, Xiaoguang; Ding, Shouguo; Zeng, Jing; Spurr, Robert; Liu, Xiong; Chance, Kelly; Mishchenko, Michael
2014-10-01
We present a numerical testbed for remote sensing of aerosols, together with a demonstration for evaluating retrieval synergy from a geostationary satellite constellation. The testbed combines inverse (optimal-estimation) software with a forward model containing linearized code for computing particle scattering (for both spherical and non-spherical particles), a kernel-based (land and ocean) surface bi-directional reflectance facility, and a linearized radiative transfer model for polarized radiance. Calculation of gas absorption spectra uses the HITRAN (HIgh-resolution TRANsmission molecular absorption) database of spectroscopic line parameters and other trace species cross-sections. The outputs of the testbed include not only the Stokes 4-vector elements and their sensitivities (Jacobians) with respect to the aerosol single scattering and physical parameters (such as size and shape parameters, refractive index, and plume height), but also DFS (Degree of Freedom for Signal) values for retrieval of these parameters. This testbed can be used as a tool to provide an objective assessment of aerosol information content that can be retrieved for any constellation of (planned or real) satellite sensors and for any combination of algorithm design factors (in terms of wavelengths, viewing angles, radiance and/or polarization to be measured or used). We summarize the components of the testbed, including the derivation and validation of analytical formulae for Jacobian calculations. Benchmark calculations from the forward model are documented. In the context of NASA's Decadal Survey Mission GEO-CAPE (GEOstationary Coastal and Air Pollution Events), we demonstrate the use of the testbed to conduct a feasibility study of using polarization measurements in and around the O2 A band for the retrieval of aerosol height information from space, as well as to assess potential improvement in the retrieval of fine- and coarse-mode aerosol optical depth (AOD) through the
Accurate Prediction of Docked Protein Structure Similarity.
Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit
2015-09-01
One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. van der Waals, electrostatic and desolvation forces) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown, and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807
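The core regression step — energy-like features of a candidate structure mapped to a predicted RMSD by a back-propagation-trained network — can be sketched with a tiny one-hidden-layer net. Features, targets, and network size here are synthetic stand-ins, not AccuRMSD itself:

```python
import numpy as np

# Toy back-propagation regression in the spirit of AccuRMSD: features of
# a docked candidate -> predicted RMSD. All data below are synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))              # e.g. vdW, electrostatic, desolvation, ...
y = (X @ np.array([0.5, -0.3, 0.8, 0.1]))**2 / 4.0   # synthetic "RMSD" targets

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.01

def forward(X):
    h = np.tanh(X @ W1 + b1)               # hidden layer
    return h, (h @ W2 + b2).ravel()        # linear output (an RMSD estimate)

_, pred0 = forward(X)
rmse0 = np.sqrt(np.mean((pred0 - y)**2))   # error before training

for _ in range(2000):                      # full-batch gradient descent
    h, pred = forward(X)
    err = pred - y                         # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dh = err[:, None] @ W2.T * (1 - h**2)  # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
rmse = np.sqrt(np.mean((pred - y)**2))     # error after training
```

Unlike a ranking-only scoring function, the output is an absolute similarity estimate, which is what allows candidates to be judged against the native conformation rather than merely against each other.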
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
NASA Astrophysics Data System (ADS)
Dumoulin, Jean; Boucher, Vincent; Greffier, Florian
2009-08-01
Use of infrared vision in automotive industry has mainly focused on detection of pedestrians or animals at night or under poor weather conditions. In those approaches, the road infrastructure behavior in infrared range has not been investigated. So, research work was realized using numerical simulations associated with specific experiments in a fog tunnel. The present paper deals with numerical simulations developed for both visible spectrum (visibility in fog) and infrared vision applied to road infrastructure perception in foggy night conditions. Results obtained as a function of fog nature (radiation or advection) are presented and discussed.
Benchmarking accurate spectral phase retrieval of single attosecond pulses
NASA Astrophysics Data System (ADS)
Wei, Hui; Le, Anh-Thu; Morishita, Toru; Yu, Chao; Lin, C. D.
2015-02-01
A single extreme-ultraviolet (XUV) attosecond pulse or pulse train in the time domain is fully characterized if its spectral amplitude and phase are both determined. The spectral amplitude can be easily obtained from photoionization of simple atoms where accurate photoionization cross sections have been measured from, e.g., synchrotron radiations. To determine the spectral phase, at present the standard method is to carry out XUV photoionization in the presence of a dressing infrared (IR) laser. In this work, we examine the accuracy of current phase retrieval methods (PROOF and iPROOF) where the dressing IR is relatively weak such that photoelectron spectra can be accurately calculated by second-order perturbation theory. We suggest a modified method named swPROOF (scattering wave phase retrieval by omega oscillation filtering) which utilizes accurate one-photon and two-photon dipole transition matrix elements and removes the approximations made in PROOF and iPROOF. We show that the swPROOF method can in general retrieve accurate spectral phase compared to other simpler models that have been suggested. We benchmark the accuracy of these phase retrieval methods through simulating the spectrogram by solving the time-dependent Schrödinger equation numerically using several known single attosecond pulses with a fixed spectral amplitude but different spectral phases.
Numerical observer for cardiac motion assessment using machine learning
NASA Astrophysics Data System (ADS)
Marin, Thibault; Kalayeh, Mahdi M.; Pretorius, P. H.; Wernick, Miles N.; Yang, Yongyi; Brankov, Jovan G.
2011-03-01
In medical imaging, image quality is commonly assessed by measuring the performance of a human observer performing a specific diagnostic task. However, in practice studies involving human observers are time consuming and difficult to implement. Therefore, numerical observers have been developed, aiming to predict human diagnostic performance to facilitate image quality assessment. In this paper, we present a numerical observer for assessment of cardiac motion in cardiac-gated SPECT images. Cardiac-gated SPECT is a nuclear medicine modality used routinely in the evaluation of coronary artery disease. Numerical observers have been developed for image quality assessment via analysis of detectability of myocardial perfusion defects (e.g., the channelized Hotelling observer), but no numerical observer for cardiac motion assessment has been reported. In this work, we present a method to design a numerical observer aiming to predict human performance in detection of cardiac motion defects. Cardiac motion is estimated from reconstructed gated images using a deformable mesh model. Motion features are then extracted from the estimated motion field and used to train a support vector machine regression model predicting human scores (human observers' confidence in the presence of the defect). Results show that the proposed method could accurately predict human detection performance and achieve good generalization properties when tested on data with different levels of post-reconstruction filtering.
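The numerical-observer pipeline described above — motion features extracted per study, fed to a regression model predicting human confidence scores — can be sketched compactly. The paper uses support-vector regression; ordinary ridge regression is used here as a simple stand-in, on synthetic features and scores:

```python
import numpy as np

# Pipeline sketch: motion features -> regression -> predicted human score.
# Ridge regression replaces the paper's SVR; all data are synthetic.
rng = np.random.default_rng(2)
F = rng.normal(size=(120, 5))              # motion features, one row per study
scores = F @ np.array([0.4, 0.1, -0.2, 0.3, 0.0]) \
         + 0.05 * rng.normal(size=120)     # synthetic "human" confidence scores

lam = 0.1                                  # ridge penalty
w = np.linalg.solve(F.T @ F + lam * np.eye(5), F.T @ scores)
pred = F @ w                               # numerical-observer scores
corr = np.corrcoef(pred, scores)[0, 1]     # agreement with human scores
```

In a real study, `F` would come from the deformable-mesh motion estimate, and generalization would be checked on data with different post-reconstruction filtering, as the authors do.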
Auguste, Peter; Tsertsvadze, Alexander; Pink, Joshua; Court, Rachel; Seedat, Farah; Gurung, Tara; Freeman, Karoline; Taylor-Phillips, Sian; Walker, Clare; Madan, Jason; Kandala, Ngianga-Bakwin; Clarke, Aileen; Sutcliffe, Paul
2016-01-01
BACKGROUND Tuberculosis (TB), caused by Mycobacterium tuberculosis (MTB) [(Zopf 1883) Lehmann and Neumann 1896], is a major cause of morbidity and mortality. Nearly one-third of the world's population is infected with MTB; TB has an annual incidence of 9 million new cases and each year causes 2 million deaths worldwide. OBJECTIVES To investigate the clinical effectiveness and cost-effectiveness of screening tests [interferon-gamma release assays (IGRAs) and tuberculin skin tests (TSTs)] in latent tuberculosis infection (LTBI) diagnosis to support National Institute for Health and Care Excellence (NICE) guideline development for three population groups: children, immunocompromised people and those who have recently arrived in the UK from high-incidence countries. All of these groups are at higher risk of progression from LTBI to active TB. DATA SOURCES Electronic databases including MEDLINE, EMBASE, The Cochrane Library and Current Controlled Trials were searched from December 2009 up to December 2014. REVIEW METHODS English-language studies evaluating the comparative effectiveness of commercially available tests used for identifying LTBI in children, immunocompromised people and recent arrivals to the UK were eligible. Interventions were IGRAs [QuantiFERON(®)-TB Gold (QFT-G), QuantiFERON(®)-TB Gold-In-Tube (QFT-GIT) (Cellestis/Qiagen, Carnegie, VA, Australia) and T-SPOT.TB (Oxford Immunotec, Abingdon, UK)]. The comparator was TST 5 mm or 10 mm alone or with an IGRA. Two independent reviewers screened all identified records and undertook a quality assessment and data synthesis. A de novo model, structured in two stages, was developed to compare the cost-effectiveness of diagnostic strategies. RESULTS In total, 6687 records were screened, of which 53 unique studies were included (a further 37 studies were identified from a previous NICE guideline). The majority of the included studies compared the strength of association for the QFT-GIT/G IGRA with the TST (5
Detection and accurate localization of harmonic chipless tags
NASA Astrophysics Data System (ADS)
Dardari, Davide
2015-12-01
We investigate the detection and localization properties of harmonic tags working at microwave frequencies. A two-tone interrogation signal and a dedicated signal processing scheme at the receiver are proposed to eliminate phase ambiguities caused by the short signal wavelength and to provide accurate distance/position estimation even in the presence of clutter and multipath. The theoretical limits on tag detection and localization accuracy are investigated starting from a concise characterization of harmonic backscattered signals. Numerical results show that accuracies on the order of centimeters are feasible within an operational range of a few meters in the RFID UHF band.
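The ambiguity-resolution idea behind two-tone interrogation can be illustrated generically (this is not the paper's exact receiver scheme): a single microwave carrier's round-trip phase wraps every half wavelength, but the difference of the phases of two tones spaced by df wraps only every c/(2·df), extending the unambiguous range to metres.

```python
import math

# Generic two-tone ranging sketch; frequencies and distance are assumed values.
c = 3e8
f1, f2 = 5.80e9, 5.81e9            # two interrogation tones, df = 10 MHz
d_true = 2.35                       # tag distance [m]

def wrapped_phase(f, d):
    """Wrapped round-trip phase of a tone backscattered by a tag at distance d."""
    return (4 * math.pi * f * d / c) % (2 * math.pi)

# Each individual phase is hopelessly wrapped (half wavelength ~ 2.6 cm),
# but the phase *difference* is unambiguous up to c/(2*df) = 15 m here.
dphi = (wrapped_phase(f2, d_true) - wrapped_phase(f1, d_true)) % (2 * math.pi)
d_est = c * dphi / (4 * math.pi * (f2 - f1))
```

The fine phase of either carrier can then refine this coarse estimate down to the centimetre level reported in the paper.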
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.
Fuchs, Franz G; Hjelmervik, Jon M
2016-02-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454
Towards Accurate Application Characterization for Exascale (APEX)
Hammond, Simon David
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
A numerical study of mixing in supersonic combustors with hypermixing injectors
NASA Technical Reports Server (NTRS)
Lee, J.
1992-01-01
A numerical study was conducted to evaluate the performance of wall mounted fuel-injectors designed for potential Supersonic Combustion Ramjet (SCRAM-jet) engine applications. The focus of this investigation was to numerically simulate existing combustor designs for the purpose of validating the numerical technique and the physical models developed. Three different injector designs of varying complexity were studied to fully understand the computational implications involved in accurate predictions. A dual transverse injection system and two streamwise injector designs were studied. The streamwise injectors were designed with swept ramps to enhance fuel-air mixing and combustion characteristics at supersonic speeds without the large flow blockage and drag contribution of the transverse injection system. For this study, the Mass-Averaged Navier-Stokes equations and the chemical species continuity equations were solved. The computations were performed using a finite-volume implicit numerical technique and multiple block structured grid system. The interfaces of the multiple block structured grid systems were numerically resolved using the flux-conservative technique. Detailed comparisons between the computations and existing experimental data are presented. These comparisons show that numerical predictions are in agreement with the experimental data. These comparisons also show that a number of turbulence model improvements are needed for accurate combustor flowfield predictions.
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Remote balance weighs accurately amid high radiation
NASA Technical Reports Server (NTRS)
Eggenberger, D. N.; Shuck, A. B.
1969-01-01
Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.
Evaluating Large-Scale Studies to Accurately Appraise Children's Performance
ERIC Educational Resources Information Center
Ernest, James M.
2012-01-01
Educational policy is often developed using a top-down approach. Recently, there has been a concerted shift in policy for educators to develop programs and research proposals that evolve from "scientific" studies and focus less on their intuition, aided by professional wisdom. This article analyzes several national and international educational…
Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.
2002-01-01
Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.
Lamb mode selection for accurate wall loss estimation via guided wave tomography
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-18
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, to compare their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.
A Method for Accurate in silico modeling of Ultrasound Transducer Arrays
Guenther, Drake A.; Walker, William F.
2009-01-01
This paper presents a new approach to improve the in silico modeling of ultrasound transducer arrays. While current simulation tools accurately predict the theoretical element spatio-temporal pressure response, transducers do not always behave as theorized. In practice, using the probe's physical dimensions and published specifications in silico often results in unsatisfactory agreement between simulation and experiment. We describe a general optimization procedure used to maximize the correlation between the observed and simulated spatio-temporal response of a pulsed single element in a commercial ultrasound probe. A linear systems approach is employed to model element angular sensitivity, lens effects, and diffraction phenomena. A numerical deconvolution method is described to characterize the intrinsic electro-mechanical impulse response of the element. Once the response of the element and optimal element characteristics are known, prediction of the pressure response for arbitrary apertures and excitation signals is performed through direct convolution using available tools. We achieve a correlation of 0.846 between the experimental emitted waveform and the simulated waveform when using the probe's physical specifications in silico. A far superior correlation of 0.988 is achieved when using the optimized in silico model. Electronic noise appears to be the main effect preventing the realization of higher correlation coefficients. More accurate in silico modeling will improve the evaluation and design of ultrasound transducers as well as aid in the development of sophisticated beamforming strategies. PMID:19041997
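The linear-systems prediction step and the correlation metric above can be sketched minimally: once an element impulse response is known, the emitted waveform for any excitation is a direct convolution, and agreement with a measurement is scored by a normalized correlation. The signals below are synthetic toys, not probe data:

```python
import numpy as np

# Linear-systems sketch: excitation (*) impulse response -> predicted waveform,
# scored against a "measurement" by normalized correlation. All signals synthetic.
fs = 100e6                                   # assumed sample rate [Hz]
t = np.arange(0, 2e-6, 1/fs)
h = np.exp(-t / 2e-7) * np.sin(2*np.pi*5e6*t)    # toy element impulse response
x = np.sin(2*np.pi*5e6*t) * (t < 4e-7)           # 2-cycle excitation burst

sim = np.convolve(x, h)[:len(t)]             # predicted emitted waveform
meas = sim + 0.05 * np.max(np.abs(sim)) \
           * np.random.default_rng(3).normal(size=len(t))  # "experiment" + noise

def correlation(a, b):
    """Zero-mean normalized correlation coefficient."""
    a = a - a.mean(); b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rho = correlation(meas, sim)
```

In the paper, maximizing exactly this kind of correlation over the element parameters is what lifts agreement from 0.846 to 0.988.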
Lamb mode selection for accurate wall loss estimation via guided wave tomography
NASA Astrophysics Data System (ADS)
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-01
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, to compare their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.
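The thickness-sensitivity idea these papers exploit can be illustrated in miniature: at a fixed frequency the guided-wave velocity varies monotonically with plate thickness, so a measured velocity map can be inverted to a thickness map by interpolating a dispersion table. The table below is a made-up monotonic stand-in, not real dispersion data for any material:

```python
import numpy as np

# Hedged sketch of velocity-to-thickness inversion. The dispersion table
# (thickness vs. A0-like velocity) is fictitious and for illustration only.
thick_mm = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
a0_vel = np.array([1800.0, 2300.0, 2650.0, 2880.0, 3020.0])   # fictitious m/s

def thickness_from_velocity(v):
    # np.interp requires the velocity table to be increasing, as it is here.
    return np.interp(v, a0_vel, thick_mm)

v_map = np.array([2300.0, 2500.0, 3020.0])   # "measured" velocities
t_map = thickness_from_velocity(v_map)       # reconstructed thicknesses [mm]
```

The steeper this curve (higher sensitivity, as for A0), the better small wall losses translate into measurable velocity changes — which is the trade-off against A0's higher attenuation under liquid loading.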
Understanding the Code: keeping accurate records.
Griffith, Richard
2015-10-01
In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404
Robust ODF smoothing for accurate estimation of fiber orientation.
Beladi, Somaieh; Pathirana, Pubudu N; Brotchie, Peter
2010-01-01
Q-ball imaging was presented as a model-free, linear and multimodal diffusion-sensitive approach to reconstruct the diffusion orientation distribution function (ODF) using diffusion-weighted MRI data. The ODFs are widely used to estimate fiber orientations. However, a smoothness constraint was proposed to achieve a balance between the angular resolution and noise stability for ODF constructs. Different regularization methods were proposed for this purpose. However, these methods are not robust and are quite sensitive to the global regularization parameter. Although numerical methods such as the L-curve test are used to define a globally appropriate regularization parameter, it cannot serve as a universal value suitable for all regions of interest. This may result in over-smoothing and potentially end up neglecting an existing fiber population. In this paper, we propose to include an interpolation step prior to the spherical harmonic decomposition. This interpolation, based on Delaunay triangulation, provides a reliable, robust and accurate smoothing approach. The method is easy to implement and does not require other numerical methods to define the required parameters. Also, the fiber orientations estimated using this approach are more accurate compared to other common approaches. PMID:21096202
Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping
NASA Astrophysics Data System (ADS)
Rehak, M.; Skaloud, J.
2015-08-01
In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.
High-order accurate multi-phase simulations: building blocks and what's tricky about them
NASA Astrophysics Data System (ADS)
Kummer, Florian
2015-11-01
We are going to present a high-order numerical method for multi-phase flow problems, which employs a sharp interface representation by a level-set and an extended discontinuous Galerkin (XDG) discretization for the flow properties. The shape of the XDG basis functions is dynamically adapted to the position of the fluid interface, so that the spatial approximation space can represent jumps in pressure and kinks in velocity accurately. By this approach, the `hp-convergence' property of the classical discontinuous Galerkin (DG) method can be preserved for the low-regularity, discontinuous solutions, such as those appearing in multi-phase flows. Within the past years, several building blocks of such a method were presented: this includes numerical integration on cut-cells, the spatial discretization by the XDG method, precise evaluation of curvature, and level-set algorithms tailored to the special requirements of XDG methods. The presentation covers a short review of these building blocks and their integration into a full multi-phase solver. A special emphasis is put on the discussion of the several pitfalls one may experience in the formulation of such a solver. German Research Foundation.
Bizarro, J.P.; Belo, J.H.; Figueiredo, A.C.
1997-06-01
Knowing that short-time propagators for Fokker-Planck equations are Gaussian, and based on a path-sum formulation, an efficient and simple numerical method is presented to solve the initial-value problem for electron kinetics during rf heating and current drive. The formulation is thoroughly presented and discussed, its advantages are stressed, and general, practical criteria for its implementation are derived regarding the time step and grid spacing. The new approach is illustrated and validated by solving the one-dimensional model for lower-hybrid current drive, which has a well-known steady-state analytical solution. © 1997 American Institute of Physics.
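The path-sum idea — over a short step dt the Fokker-Planck propagator is Gaussian, so the distribution is advanced by summing Gaussian kernels over the grid — can be sketched on a simple 1D problem. The Ornstein-Uhlenbeck process below (drift A(x) = -x, constant diffusion D) stands in for the paper's lower-hybrid current-drive operator and has the known stationary variance D:

```python
import numpy as np

# Gaussian short-time propagator for the 1D Fokker-Planck equation of the
# Ornstein-Uhlenbeck process (illustrative model, not the rf-heating operator).
D, dt = 0.5, 0.01
x = np.linspace(-5, 5, 201); dx = x[1] - x[0]

# G[i, j] ~ P(x_i at t+dt | x_j at t): Gaussian with mean x_j + A(x_j)*dt
# and variance 2*D*dt, per the short-time expansion.
mean = x - x * dt                       # A(x') = -x'
var = 2 * D * dt
G = np.exp(-(x[:, None] - mean[None, :])**2 / (2 * var)) / np.sqrt(2*np.pi*var)

p = np.exp(-(x - 2.0)**2 / 0.02)        # narrow initial condition at x = 2
p /= p.sum() * dx                        # normalize on the grid
for _ in range(500):                     # march to t = 5 (several relaxation times)
    p = (G @ p) * dx                     # path-sum step: quadrature over x'

var_final = (p * x**2).sum() * dx        # should relax toward the stationary D
```

The practical criteria the paper derives correspond to the constraints visible here: dx must resolve the kernel width sqrt(2*D*dt), which couples the admissible grid spacing to the time step.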
NASA Astrophysics Data System (ADS)
Yokota, Yasuhiro; Yamamoto, Takuji; Date, Kensuke
Facebolts are frequently used to reinforce the ground ahead of the cutting face. They are especially effective for tunnelling in poor conditions caused by ground squeezing. Our centrifuge model tests and parametric studies with a numerical analysis method demonstrated that bond strength strongly influences the failure pattern and ground movement. We subsequently developed new facebolts with a checkered steel surface, which provide much larger bond strength. Furthermore, this paper describes the actual deployment of these new bolts at several sites.
Accurate thermoelastic tensor and acoustic velocities of NaCl
NASA Astrophysics Data System (ADS)
Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.
2015-12-01
Despite the importance of the thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
Efficient and accurate computation of generalized singular-value decompositions
NASA Astrophysics Data System (ADS)
Drmac, Zlatko
2001-11-01
We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy; that is, we seek a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D1B, D2SD3, D4C, where Di, i = 1,...,4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H1,K)-SVD of S.
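Drmac's preconditioned algorithm itself is beyond a short sketch, but the high-relative-accuracy workhorse behind this family of methods, one-sided Jacobi, fits in a few lines: rotate pairs of columns until all are mutually orthogonal, then read the singular values off as column norms. The 4x4 product below is a hypothetical example; a production code would add the paper's scaling/LU/QR preconditioning and accumulate the singular vectors.

```python
import numpy as np

def jacobi_svd_values(A, tol=1e-12, max_sweeps=30):
    """Singular values by one-sided Jacobi: rotate pairs of columns until all
    are mutually orthogonal; the column norms are then the singular values."""
    U = A.astype(float).copy()
    n = U.shape[1]
    for _ in range(max_sweeps):
        rotated = False
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = U[:, p] @ U[:, p]
                aqq = U[:, q] @ U[:, q]
                apq = U[:, p] @ U[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue          # this pair is already orthogonal enough
                rotated = True
                tau = (aqq - app) / (2.0 * apq)
                t = 1.0 if tau == 0 else np.sign(tau) / (abs(tau) + np.sqrt(1 + tau * tau))
                c = 1.0 / np.sqrt(1 + t * t)
                s = c * t
                new_p = c * U[:, p] - s * U[:, q]   # plane rotation of columns p, q
                U[:, q] = s * U[:, p] + c * U[:, q]
                U[:, p] = new_p
        if not rotated:
            break
    return np.sort(np.linalg.norm(U, axis=0))[::-1]

# hypothetical product A = B^T S C, formed explicitly as in the paper's example
rng = np.random.default_rng(2)
B, S, C = (rng.normal(size=(4, 4)) for _ in range(3))
A = B.T @ S @ C
sv = jacobi_svd_values(A)
```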
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast of contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), GOES water vapor channel measurements, and surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies of both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
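A minimal sketch of fitting one such logistic model, using synthetic stand-in predictors (the real models are built from ARPS/RUC fields and GOES channels, which are not reproduced here; the "true" coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-ins for two predictors (humidity and temperature)
n = 400
rh = rng.uniform(0.0, 100.0, n)       # relative humidity, percent
temp = rng.uniform(-70.0, -30.0, n)   # ambient temperature, deg C
# assumed "truth": persistent contrails favored by moist, cold air
logit_true = 0.08 * (rh - 60.0) - 0.15 * (temp + 50.0)
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit_true))).astype(float)

# logistic model fit by plain gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), (rh - 50.0) / 50.0, (temp + 50.0) / 20.0])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n
accuracy = np.mean(((1.0 / (1.0 + np.exp(-X @ w))) > 0.5) == (y == 1.0))
```

On this synthetic, fairly separable data the fitted model comfortably clears the 75-percent range reported for the SURFACE and OUTBREAK models; real forecast skill depends entirely on the quality of the meteorological predictors.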
MFIX documentation numerical technique
Syamlal, M.
1998-01-01
MFIX (Multiphase Flow with Interphase eXchanges) is a general-purpose hydrodynamic model for describing chemical reactions and heat transfer in dense or dilute fluid-solids flows, which typically occur in energy conversion and chemical processing reactors. The calculations give time-dependent information on pressure, temperature, composition, and velocity distributions in the reactors. The theoretical basis of the calculations is described in the MFIX Theory Guide. Installation of the code, setting up of a run, and post-processing of results are described in the MFIX User's Manual. Work was started in April 1996 to increase the execution speed and accuracy of the code, which has resulted in MFIX 2.0. To improve the speed of the code, the old algorithm was replaced by a more implicit algorithm. In the test cases conducted, the new version runs 3 to 30 times faster than the old version. To increase the accuracy of the computations, second-order accurate discretization schemes were included in MFIX 2.0. Bubbling fluidized bed simulations conducted with a second-order scheme show that the predicted bubble shape is rounded, unlike the (unphysical) pointed shape predicted by the first-order upwind scheme. This report describes the numerical technique used in MFIX 2.0.
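The first-order versus second-order contrast the report describes can be reproduced on the simplest possible surrogate, linear advection of a square pulse (our illustration, not MFIX's actual discretization):

```python
import numpy as np

def advect(u0, c, steps, scheme):
    """March linear advection on a periodic grid at CFL number c.
    'upwind': first-order donor-cell (diffusive, rounds sharp fronts).
    'muscl' : second-order with a minmod-limited slope reconstruction."""
    u = u0.copy()
    for _ in range(steps):
        if scheme == "upwind":
            face = u                             # left face value = donor cell
        else:
            dl = u - np.roll(u, 1)
            dr = np.roll(u, -1) - u
            slope = np.where(dl * dr > 0, np.sign(dl) * np.minimum(abs(dl), abs(dr)), 0.0)
            face = u + 0.5 * (1.0 - c) * slope   # limited value at face i+1/2
        u = u - c * (face - np.roll(face, 1))    # conservative update
    return u

n, c = 200, 0.5
x = np.arange(n) / n
u0 = np.where((x > 0.1) & (x < 0.15), 1.0, 0.0)  # sharp square pulse
u_first = advect(u0, c, 200, "upwind")
u_second = advect(u0, c, 200, "muscl")
```

The first-order result smears the pulse badly while the limited second-order result keeps it much sharper without overshooting, mirroring the pointed-versus-rounded bubble shapes reported for MFIX; both schemes conserve the total quantity exactly.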
Personalized numerical observer
NASA Astrophysics Data System (ADS)
Brankov, Jovan G.; Pretorius, P. Hendrik
2010-02-01
It is widely accepted that medical image quality should be assessed using task-based criteria, such as human-observer (HO) performance in a lesion-detection (scoring) task. HO studies are time-consuming and too costly to be used for image quality assessment during the development of either reconstruction methods or imaging systems. Therefore, a numerical observer (NO), a HO surrogate, is highly desirable. In the past, we have proposed and successfully tested a NO based on a supervised-learning approach (namely, a support vector machine) for cardiac gated SPECT image quality assessment. In the supervised-learning approach, the goal is to identify the relationship between measured image features and HO myocardium defect likelihood scores. Thus far we have treated multiple HO readers by simply averaging or pooling their respective scores. Due to observer variability, this may be suboptimal and less accurate. Therefore, in this work, we set our goal to predict individual observer scores independently, in the hope of better capturing some relevant lesion-detection mechanisms of the human observers. This is all the more important because there are many ways to obtain equivalent observer performance (measured by the area under the receiver operating characteristic curve), and simply predicting some joint (averaged or pooled) score alone is not likely to succeed.
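A toy version of the personalized-versus-pooled comparison, with synthetic features, two hypothetical observers, and a small ridge regression standing in for the support vector machine (all names and values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic "image features" and two hypothetical observers whose defect
# scores weight the features differently (stand-ins for the SPECT features
# and human scores used in the paper)
n_img, n_feat = 300, 5
X = rng.normal(size=(n_img, n_feat))
w1 = np.array([1.0, 0.2, 0.0, 0.5, 0.0])   # observer 1's (assumed) weighting
w2 = np.array([0.1, 1.0, 0.6, 0.0, 0.0])   # observer 2's (assumed) weighting
s1 = X @ w1 + 0.1 * rng.normal(size=n_img)
s2 = X @ w2 + 0.1 * rng.normal(size=n_img)

def ridge(X, y, lam=1e-3):
    # regularized least squares, standing in for the SVM regression
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

pred_personal = X @ ridge(X, s1)            # one model per observer
pred_pooled = X @ ridge(X, (s1 + s2) / 2)   # one model for the pooled score
err_personal = np.mean((pred_personal - s1) ** 2)
err_pooled = np.mean((pred_pooled - s1) ** 2)
```

When the observers genuinely weight the features differently, the pooled model carries an irreducible bias against each individual observer, which is the motivation for fitting one model per reader.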
Numerical Relativity and Astrophysics
NASA Astrophysics Data System (ADS)
Lehner, Luis; Pretorius, Frans
2014-08-01
Throughout the Universe many powerful events are driven by strong gravitational effects that require general relativity to fully describe them. These include compact binary mergers, black hole accretion, and stellar collapse, where velocities can approach the speed of light and extreme gravitational fields (Φ_Newt/c^2 ≃ 1) mediate the interactions. Many of these processes trigger emission across a broad range of the electromagnetic spectrum. Compact binaries further source strong gravitational wave emission that could directly be detected in the near future. This feat will open up a gravitational wave window into our Universe and revolutionize our understanding of it. Describing these phenomena requires general relativity, and—where dynamical effects strongly modify gravitational fields—the full Einstein equations coupled to matter sources. Numerical relativity is a field within general relativity concerned with studying such scenarios that cannot be accurately modeled via perturbative or analytical calculations. In this review, we examine results obtained within this discipline, with a focus on its impact in astrophysics.
NASA Astrophysics Data System (ADS)
Xiao, Wenbin; Dong, Wencai
2016-06-01
In the framework of 3D potential flow theory, the Bessho-form translating-pulsating source Green's function in the frequency domain is chosen as the integral kernel in this study, and a hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for a ship advancing in regular waves. Numerical characteristics of the Green's function show that the contribution of local-flow components to the velocity potential is concentrated near the source point, while the wave component dominates the magnitude of the velocity potential in the far field. Two kinds of mathematical models, with or without the local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of the translating-pulsating source Green's function method for various ship forms. In addition, a mesh analysis of the discretized surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results from the simplified model are somewhat greater than the experimental data in the resonant zone, and the model can be used as an effective tool to predict ship seakeeping properties. However, the translating-pulsating source Green's function method is only appropriate for qualitative analysis of the motion response in waves if the ship's geometry fails to satisfy the slender-body assumption.
Experimental and numerical analyses of different extended surfaces
NASA Astrophysics Data System (ADS)
Diani, A.; Mancin, S.; Zilio, C.; Rossetto, L.
2012-11-01
Air is a cheap and safe fluid, widely used in electronic, aerospace and air conditioning applications. Because of its poor heat transfer properties, it always flows through extended surfaces, such as finned surfaces, to enhance the convective heat transfer. In this paper, experimental results are reviewed and numerical studies of air forced convection through extended surfaces are presented. The thermal and hydraulic behaviours of a reference trapezoidal finned surface, experimentally evaluated by the present authors in an open-circuit wind tunnel, have been compared with numerical simulations carried out using the commercial CFD software COMSOL Multiphysics. Once the model has been validated, numerical simulations have been extended to other rectangular finned configurations, in order to study the effects of the fin thickness, fin pitch and fin height on the thermo-hydraulic behaviour of the extended surfaces. Moreover, several pin fin surfaces have been simulated in the same range of operating conditions previously analyzed. Numerical results for heat transfer and pressure drop, for both plain finned and pin fin surfaces, have been compared with empirical correlations from the open literature, and more accurate equations have been developed, proposed, and validated.
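As a point of reference for such studies, the classical one-dimensional fin model already captures the thickness/height trade-off. The sketch below uses the standard adiabatic-tip efficiency relation with illustrative aluminum-in-air values, not the correlations developed in the paper:

```python
import math

def fin_efficiency(h, k, thickness, height):
    """Efficiency of a straight rectangular fin, adiabatic-tip model with
    corrected length: eta = tanh(m*Lc) / (m*Lc), m = sqrt(2h/(k*t)),
    Lc = L + t/2 (standard textbook relation, not the paper's fitted ones)."""
    m = math.sqrt(2.0 * h / (k * thickness))
    lc = height + thickness / 2.0
    return math.tanh(m * lc) / (m * lc)

# illustrative values: aluminum fin (k ~ 200 W/m K) in forced air (h ~ 50 W/m^2 K)
eta = fin_efficiency(h=50.0, k=200.0, thickness=0.002, height=0.02)
```

Taller fins add surface area but lower per-area efficiency, which is exactly the kind of trade-off the parametric CFD study quantifies for real fin and pin geometries.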
Numerical methods for molecular dynamics
Skeel, R.D.
1991-01-01
This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably of optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solution of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.
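The optimal-order claim is easy to see on a model problem. Below is a minimal 1D geometric multigrid V-cycle for -u'' = f with homogeneous Dirichlet conditions (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation); it is an illustrative sketch of the multi-level principle, not the report's Poisson-Boltzmann software:

```python
import numpy as np

def smooth(u, f, h, iters=3):
    # weighted-Jacobi relaxation (weight 2/3) for -u'' = f
    for _ in range(iters):
        u[1:-1] += (2.0 / 3.0) * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    """Smooth, restrict the residual, recurse for a coarse-grid correction,
    prolong it back, and smooth again."""
    u = smooth(u, f, h)
    if len(u) == 3:                        # one interior unknown: solve exactly
        u[1] = 0.5 * h * h * f[1]
        return u
    r = residual(u, f, h)
    rc = np.zeros((len(u) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])  # full weighting
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)
    e[::2] = ec                            # injection at coincident points
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])     # linear interpolation in between
    return smooth(u + e, f, h)

n = 129                                    # 2^7 + 1 points on [0, 1]
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)         # exact solution u = sin(pi x)
u = np.zeros(n)
for _ in range(8):                         # each cycle costs O(n) work
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

A fixed handful of V-cycles drives the algebraic error below the discretization error regardless of grid size, which is what "optimal order" means in practice.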
TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas
NASA Astrophysics Data System (ADS)
Milanesio, D.; Meneghini, O.; Maggiora, R.; Guadamuz, S.; Hillairet, J.; Lancellotti, V.; Vecchi, G.
2012-01-01
This paper presents a self-consistent, integral-equation approach for the analysis of plasma-facing lower hybrid (LH) launchers; the geometry of the waveguide grill structure can be completely arbitrary, including the non-planar mouth of the grill. This work is based on the theoretical approach and code implementation of the TOPICA code, of which it shares the modular structure and constitutes the extension into the LH range. Code results are validated against the literature results and simulations from similar codes.
Pre-Stall Behavior of a Transonic Axial Compressor Stage via Time-Accurate Numerical Simulation
NASA Technical Reports Server (NTRS)
Chen, Jen-Ping; Hathaway, Michael D.; Herrick, Gregory P.
2008-01-01
CFD calculations using high-performance parallel computing were conducted to simulate the pre-stall flow of a transonic compressor stage, NASA compressor Stage 35. The simulations were run with a full-annulus grid that models the 3D, viscous, unsteady blade row interaction without the need for an artificial inlet distortion to induce stall. The simulation demonstrates the development of rotating stall from the growth of instabilities. Pressure-rise performance and pressure traces are compared with published experimental data before the flow evolution prior to rotating stall is studied. Spatial FFT analysis of the flow indicates a rotating long-length disturbance spanning one rotor circumference, which is followed by a spike-type breakdown. The analysis also links the long-length wave disturbance with the initiation of the spike. The spike instabilities occur when the trajectory of the tip clearance flow becomes perpendicular to the axial direction. When approaching stall, the passage shock changes from a single oblique shock to a dual shock, which distorts the perpendicular trajectory of the tip clearance vortex but shows no evidence of flow separation that may contribute to stall.
NASA Astrophysics Data System (ADS)
Kihm, J.; Kim, J.
2006-12-01
A series of numerical simulations using a fully coupled hydrogeomechanical numerical model, named COWADE123D, is performed to analyze groundwater flow and land deformation in an unsaturated heterogeneous slope, and its stability, under various rainfall rates. The slope is located along a dam lake in the Republic of Korea. The slope consists of Cretaceous granodiorite and can be subdivided into four layers, namely weathered soil, weathered rock, intermediate rock, and hard rock from the ground surface downward, owing to weathering. The numerical simulation results show that both rainfall rate and heterogeneity play important roles in controlling groundwater flow and land deformation in the unsaturated slope. The slope becomes more saturated, and thus its overall hydrogeomechanical stability deteriorates, especially in the weathered rock and weathered soil layers, as the rainfall increases up to the maximum daily rainfall rate for a return period of one year. However, the slope becomes fully saturated, and thus its hydrogeomechanical responses are almost identical, at rainfall rates above this critical value. From the viewpoint of hydrogeology, the pressure head, and hence the hydraulic head, increase as the rainfall rate increases. As a result, the groundwater table rises, the unsaturated zone shrinks, the seepage face expands from the slope toe toward the slope crest, and the groundwater flow velocity increases along the seepage face. In particular, the groundwater flow velocity increases significantly in the weathered soil and weathered rock layers as the rainfall rate increases, because their hydraulic conductivity is relatively higher than that of the intermediate rock and hard rock layers. From the viewpoint of geomechanics, the horizontal displacement increases, while the vertical displacement decreases, toward the slope toe as the rainfall rate increases. This may result from the buoyancy effect associated with the groundwater table rise as the
A highly accurate interatomic potential for argon
NASA Astrophysics Data System (ADS)
Aziz, Ronald A.
1993-09-01
A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.
Freedman, Vicky L.; Mackley, Rob D.; Waichler, Scott R.; Horner, Jacob A.
2013-09-26
In an open-loop groundwater heat pump (GHP) system, groundwater is extracted, run through a heat exchanger, and injected back into the ground, resulting in no mass-balance change to the flow system. Although the groundwater use is non-consumptive, the withdrawal and injection of groundwater may cause negative hydraulic and thermal impacts on the flow system. Because GHP is a relatively new technology and regulatory guidelines for determining the environmental impacts of GHPs may not exist, consumptive-use metrics may need to be used for permit applications. For consumptive-use permits, a radius of influence is often used, defined as the radius beyond which hydraulic impacts to the system are considered negligible. In this paper, the hydraulic radius-of-influence concept was examined using analytical and numerical methods for a non-consumptive GHP system in southeastern Washington State. At this location, the primary hydraulic concerns were impacts on nearby contaminant plumes and a water-supply well field. The results of this study showed that the analytical techniques with idealized radial flow were generally unsuited because they overpredicted the influence of the well system. The numerical techniques yielded more reasonable results because they could account for aquifer heterogeneities and flow boundaries. In particular, a capture zone analysis was identified as the best method for determining potential changes in current contaminant plume trajectories. The capture zone analysis is a more quantitative and reliable tool for determining the radius of influence, with greater accuracy and better insight, for a non-consumptive GHP assessment.
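The idealized radial-flow screening the authors found wanting is typically a Theis-type drawdown estimate. The sketch below evaluates it for a hypothetical withdrawal (all parameter values invented for illustration) to show how such an analytical estimate is produced:

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u) = E1(u) via its convergent series,
    adequate for the small u typical of radius-of-influence estimates."""
    s = -0.5772156649015329 - math.log(u)   # -gamma - ln(u)
    sign, fact = 1.0, 1.0
    for k in range(1, terms + 1):
        fact *= k
        s += sign * u ** k / (k * fact)
        sign = -sign
    return s

def drawdown(Q, T, S, r, t):
    # Theis drawdown for an idealized confined aquifer: s = Q/(4 pi T) W(u)
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# hypothetical GHP withdrawal: 20 L/s, T = 1e-2 m^2/s, S = 1e-4, after 30 days
t = 30.0 * 86400.0
s_100 = drawdown(Q=0.02, T=1e-2, S=1e-4, r=100.0, t=t)     # drawdown at 100 m
s_1000 = drawdown(Q=0.02, T=1e-2, S=1e-4, r=1000.0, t=t)   # drawdown at 1 km
```

Because the Theis solution assumes a homogeneous aquifer with no flow boundaries and ignores the compensating injection well, it tends to overstate the influence of a paired extraction/injection GHP system, which is the paper's argument for numerical capture-zone analysis.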
Numerical investigation of stall flutter
Ekaterinaris, J.A.; Platzer, M.F.
1996-04-01
Unsteady, separated, high Reynolds number flow over an airfoil undergoing oscillatory motion is investigated numerically. The compressible form of the Reynolds-averaged governing equations is solved using a high-order, upwind biased numerical scheme. The turbulent flow region is computed using a one-equation turbulence model. The computed results show that the key to the accurate prediction of the unsteady loads at stall flutter conditions is the modeling of the transitional flow region at the leading edge. A simplified criterion for the transition onset is used. The transitional flow region is computed with a modified form of the turbulence model. The computed solution, where the transitional flow region is included, shows that the small laminar/transitional separation bubble forming during the pitch-up motion has a decisive effect on the near-wall flow and the development of the unsteady loads. Detailed comparisons of computed fully turbulent and transitional flow solutions with experimental data are presented.
Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd
2012-01-01
The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
ERIC Educational Resources Information Center
McAnany, Emile G.; And Others
1980-01-01
Two lead articles set the theme for this issue devoted to evaluation as Emile G. McAnany examines the usefulness of evaluation and Robert C. Hornik addresses four widely accepted myths about evaluation. Additional articles include a report of a field evaluation done by the Accion Cultural Popular (ACPO); a study of the impact of that evaluation by…
Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2
NASA Technical Reports Server (NTRS)
Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.
1988-01-01
The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.
Coincidental match of numerical simulation and physics
NASA Astrophysics Data System (ADS)
Pierre, B.; Gudmundsson, J. S.
2010-08-01
Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete ruptures of the pipeline. Therefore, accurate prediction of rapid pressure transients in pipelines using numerical simulations is critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
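A minimal method-of-characteristics sketch for the classic single-pipe water hammer problem (frictionless, so the unsteady-friction modelling that is the paper's real subject is deliberately omitted; all pipe parameters are invented for illustration). At CFL = 1 the scheme reproduces the Joukowsky surge a*V0/g exactly:

```python
import math

# Method of characteristics for water hammer in a single frictionless pipe:
# constant-head reservoir upstream, valve slamming shut downstream at t = 0.
a, g = 1000.0, 9.81              # wave speed [m/s], gravity [m/s^2]
L, D = 1000.0, 0.5               # pipe length [m] and diameter [m]
n = 20                           # computational reaches; dt = dx/a gives CFL = 1
area = math.pi * D * D / 4.0
B = a / (g * area)               # characteristic impedance
H0, V0 = 50.0, 1.0               # initial head [m] and velocity [m/s]
H = [H0] * (n + 1)               # frictionless steady state: flat head line
Q = [V0 * area] * (n + 1)
peak = H0
for _ in range(2 * n):           # march one full reflection period 2L/a
    Hn, Qn = H[:], Q[:]
    for i in range(1, n):
        cp = H[i - 1] + B * Q[i - 1]   # C+ characteristic from upstream
        cm = H[i + 1] - B * Q[i + 1]   # C- characteristic from downstream
        Hn[i] = 0.5 * (cp + cm)
        Qn[i] = (cp - cm) / (2.0 * B)
    Hn[0] = H0                          # reservoir holds the head constant
    Qn[0] = (H0 - (H[1] - B * Q[1])) / B
    Qn[n] = 0.0                         # closed valve
    Hn[n] = H[n - 1] + B * Q[n - 1]     # C+ characteristic at the valve
    H, Q = Hn, Qn
    peak = max(peak, H[n])
joukowsky = a * V0 / g                  # expected surge height above H0
```

Adding steady or unsteady friction source terms to the characteristic equations is where the modelling choices discussed in the paper enter, and where the exactness of the frictionless CFL = 1 case is lost.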
NASA Astrophysics Data System (ADS)
Caballero, L.; Capra, L.
2014-07-01
Lahar modelling represents an excellent tool to design hazard maps. It allows the definition of potential inundation zones for different lahar magnitude scenarios and sediment concentrations. Here we present the results obtained for the 2001 syneruptive lahar at Popocatépetl volcano, based on simulations performed with the FLO2D software. An accurate delineation of this event is needed since it is one of the possible scenarios considered during a volcanic crisis. One of the main issues for lahar simulation using FLO2D is the calibration of the input hydrograph and the rheologic flow properties. Here we verified that geophone data can be properly calibrated by means of peak discharge calculations obtained by the superelevation method. Simulation results clearly show the influence of concentration and rheologic properties on lahar depth and distribution. Modifying rheologic properties during lahar simulation strongly affects lahar distribution: more viscous lahars have a more restricted areal distribution and thicker depths, and the resulting velocities are noticeably smaller. FLO2D proved to be a very successful tool for delimiting lahar inundation zones as well as for generating different lahar scenarios related not only to lahar volume or magnitude but also to the different sediment concentrations and rheologies widely documented to influence lahar-prone areas.
NASA Astrophysics Data System (ADS)
Caballero, L.; Capra, L.
2014-12-01
Lahar modeling represents an excellent tool for designing hazard maps. It allows the definition of potential inundation zones for different lahar magnitude scenarios and sediment concentrations. Here, we present the results obtained for the 2001 syneruptive lahar at Popocatépetl volcano, based on simulations performed with FLO2D software. An accurate delineation of this event is needed, since it is one of the possible scenarios considered if magmatic activity increases in magnitude. One of the main issues for lahar simulation using FLO2D is the calibration of the input hydrograph and rheological flow properties. Here, we verified that geophone data can be properly calibrated by means of peak discharge calculations obtained by the superelevation method. The resolution of the digital elevation model also proved to be an important factor in the reliability of the simulated flows. Simulation results clearly show the influence of sediment concentrations and rheological properties on lahar depth and distribution. Modifying rheological properties during lahar simulation strongly affects lahar distribution: more viscous lahars have a more restricted areal distribution and thicker depths, and the resulting velocities are noticeably smaller. FLO2D proved to be a very successful tool for delimiting lahar inundation zones as well as for generating different lahar scenarios related not only to lahar volume or magnitude, but also to the different sediment concentrations and rheologies widely documented as influencing lahar-prone areas.
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.
Accurate deterministic solutions for the classic Boltzmann shock profile
NASA Astrophysics Data System (ADS)
Yue, Yubei
The Boltzmann equation, or Boltzmann transport equation, is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far beyond the capabilities of fluid dynamical equations, such as the realm of rarefied gases, the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But one soon finds that even a numerical simulation of the equation is extremely difficult, owing both to the complex, high-dimensional integral in the collision operator and to the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.
NASA Technical Reports Server (NTRS)
Perri, Todd A.; Mckillip, R. M., Jr.; Curtiss, H. C., Jr.
1987-01-01
The development and methodology are presented for full-authority implicit model-following and explicit model-following optimal controllers for use on helicopters operating in the nap-of-the-earth (NOE) environment. Pole placement, input-output frequency response, and step input response were used to evaluate handling-qualities performance. The pilot was equipped with velocity-command inputs. A mathematical/computational trajectory optimization method was employed to evaluate the ability of each controller to fly NOE maneuvers. The method determines the optimal swashplate and thruster input histories from the helicopter's dynamics and the prescribed geometry and desired flying qualities of the maneuver. Three maneuvers were investigated for both the implicit and explicit controllers, with and without auxiliary propulsion installed: pop-up/dash/descent, bob-up at 40 knots, and glideslope. The explicit controller proved superior to the implicit controller in performance and ease of design.
Numerically Controlled Machining Of Wind-Tunnel Models
NASA Technical Reports Server (NTRS)
Kovtun, John B.
1990-01-01
A new procedure has been developed for constructing dynamic models and parts for wind-tunnel tests or radio-controlled flight tests. It uses a single-phase numerical control (NC) technique to produce highly accurate, symmetrical models in less time.
Numerical simulations of cryogenic cavitating flows
NASA Astrophysics Data System (ADS)
Kim, Hyunji; Kim, Hyeongjun; Min, Daeho; Kim, Chongam
2015-12-01
The present study deals with a numerical method for cryogenic cavitating flows. We have recently developed an accurate and efficient baseline numerical scheme for all-speed water-gas two-phase flows. Extending that work, we modify the numerical dissipations so that they are properly scaled and show no deficiencies in low-Mach-number regions. To handle cryogenic two-phase flows, the previous EOS-dependent shock-discontinuity sensing term is replaced with a newly designed EOS-free one. To validate the proposed method, cryogenic cavitating flows around a hydrofoil are computed, and the pressure and temperature depression effects of cryogenic cavitation are demonstrated. The computed results agree satisfactorily with Hord's experimental data. Numerical simulations of the flow around the KARI turbopump inducer of a liquid rocket are then carried out under various flow conditions with water and cryogenic fluids, and the differences in inducer flow physics depending on the working fluid are examined.
A numerical method for cardiac mechanoelectric simulations.
Pathmanathan, Pras; Whiteley, Jonathan P
2009-05-01
Much effort has been devoted to developing numerical techniques for solving the equations that describe cardiac electrophysiology, namely the monodomain and bidomain equations. Only a limited number of publications, however, address the development of numerical techniques for mechanoelectric simulations, where cardiac electrophysiology is coupled with deformation of cardiac tissue. One problem commonly encountered in mechanoelectric simulations is instability of the coupled numerical scheme. In this study, we develop a stable numerical scheme for mechanoelectric simulations. A number of convergence tests are carried out using this stable technique for simulations where deformations are of the magnitude typically observed in a beating heart. These convergence tests demonstrate that accurate computation of tissue deformation requires a nodal spacing of around 1 mm in the mesh used to calculate tissue deformation. This is a much finer computational grid than has previously been acknowledged, and it has implications for the computational efficiency of the resulting numerical scheme. PMID:19263223
Accurate and robust estimation of camera parameters using RANSAC
NASA Astrophysics Data System (ADS)
Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He
2013-03-01
Camera calibration plays an important role in machine vision applications. The popular calibration approach based on a 2D planar target sometimes fails to give reliable and accurate results because of inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on the RANSAC algorithm is proposed to detect unreliable points and provide corresponding solutions. With this method, most of the outliers are removed, and the calibration errors that are the main factor influencing measurement accuracy are reduced. Both simulated and real experiments were carried out to evaluate the performance of the proposed method; the results show that it is robust under large-noise conditions and efficiently improves calibration accuracy compared with the original calibration.
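The core mechanism described here, using RANSAC to reject mislocalized feature points before fitting a model, can be sketched as follows. This is not the paper's calibration pipeline; a simple line fit stands in for the camera model, and all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(x, y, n_iters=200, thresh=0.1):
    """Fit y = a*x + b robustly: sample minimal sets, keep the largest
    consensus set of inliers, then least-squares fit on inliers only."""
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue  # degenerate sample
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return a, b, best_inliers

x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.01, 50)  # true model plus small noise
y[::10] += 3.0                               # gross outliers: "mislocalized" points
a, b, inliers = ransac_line(x, y)
```

The outliers never enter the final least-squares fit, which is why the recovered parameters stay close to the true model despite the gross errors.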
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation against existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions
Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate calibration method for multiple multimodal sensors, using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and multimodal sensors are installed around the head to obtain all EEG sensor positions simultaneously. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain. PMID:24803954
Groundtruth approach to accurate quantitation of fluorescence microarrays
Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J
2000-12-01
To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.
Multimodal spatial calibration for accurately registering EEG sensor positions.
Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate calibration method for multiple multimodal sensors, using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and multimodal sensors are installed around the head to obtain all EEG sensor positions simultaneously. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain. PMID:24803954
Regularization Based Iterative Point Match Weighting for Accurate Rigid Transformation Estimation.
Liu, Yonghuai; De Dominicis, Luigi; Wei, Baogang; Chen, Liang; Martin, Ralph R
2015-09-01
Feature extraction and matching (FEM) for 3D shapes finds numerous applications in computer graphics and vision for object modeling, retrieval, morphing, and recognition. However, unavoidable incorrect matches lead to inaccurate estimation of the transformation relating different datasets. Inspired by AdaBoost, this paper proposes a novel iterative re-weighting method to tackle the challenging problem of evaluating point matches established by typical FEM methods. Weights are used to indicate the degree of belief that each point match is correct. Our method has three key steps: (i) estimation of the underlying transformation using weighted least squares, (ii) penalty parameter estimation via minimization of the weighted variance of the matching errors, and (iii) weight re-estimation taking into account both matching errors and information learnt in previous iterations. A comparative study, based on real shapes captured by two laser scanners, shows that the proposed method outperforms four other state-of-the-art methods in terms of evaluating point matches between overlapping shapes established by two typical FEM methods, resulting in more accurate estimates of the underlying transformation. This improved transformation can be used to better initialize the iterative closest point algorithm and its variants, making 3D shape registration more likely to succeed. PMID:26357287
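Step (i) of the scheme above, weighted least-squares estimation of the transformation, can be sketched for the rigid case with a weighted Kabsch/Procrustes solve. This is a hedged illustration of that one step under assumed weights, not the paper's full iterative re-weighting method; all names are hypothetical.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Weighted least-squares rigid transform (R, t) such that Q ~ P @ R.T + t,
    where w[i] is the belief that match (P[i], Q[i]) is correct."""
    w = w / w.sum()
    mu_p = (w[:, None] * P).sum(axis=0)        # weighted centroids
    mu_q = (w[:, None] * Q).sum(axis=0)
    H = (P - mu_p).T @ (w[:, None] * (Q - mu_q))  # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

rng = np.random.default_rng(1)
P = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
Q[0] += 10.0                    # one gross mismatch
w = np.ones(30)
w[0] = 1e-6                     # as if re-weighting had flagged it
R, t = weighted_rigid_transform(P, Q, w)
```

Because the incorrect match carries negligible weight, the estimated transform is essentially unaffected by it, which is the point of weighting matches by belief.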
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-06-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705
NASA Astrophysics Data System (ADS)
Kim, M.; Harman, C. J.
2013-12-01
The distribution of water travel times is one of the crucial hydrologic characteristics of a catchment. Recently, it has been argued that a rigorous treatment of travel time distributions should allow for their variability in time because of the variable fluxes and partitioning of water in the water balance, and the consequent variable storage of a catchment. We would like to be able to observe the structure of the temporal variations in travel time distributions under controlled conditions, such as in a soil column or in irrigation experiments. However, time-variable travel time distributions are difficult to observe using typical active and passive tracer approaches. Time-variability implies that tracers introduced at different times will have different travel time distributions, and the distribution may also vary during injection periods. Moreover, repeat application of a single tracer in a system with significant memory leads to overprinting of breakthrough curves, which makes it difficult to extract the original breakthrough curves, and the number of ideal tracers that can be applied is usually limited. Recognizing these difficulties, the PERTH (PERiodic Tracer Hierarchy) method has been developed. The method provides a way to estimate time-variable travel time distributions from tracer experiments under controlled conditions by employing a multi-tracer hierarchy under periodic hydrologic forcing inputs. The key assumption of the PERTH method is that, as time becomes sufficiently large relative to the injection time, the average travel time distributions of two distinct ideal tracers injected during overlapping periods become approximately equal. Thus one can be used as a proxy for the other, and the breakthrough curves of tracers applied at different times under periodic forcing can be separated from one another. In this study, we tested the PERTH method numerically for the case of infiltration at the plot scale using HYDRUS-1D and a particle
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
Accurate colorimetric feedback for RGB LED clusters
NASA Astrophysics Data System (ADS)
Man, Kwong; Ashdown, Ian
2006-08-01
We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
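The claim that spectral change with temperature is well captured by a first- or second-order polynomial can be sketched with a simple fit. The data below are synthetic and the coefficients hypothetical; this only illustrates the form of the model, not the paper's measurements.

```python
import numpy as np

# Synthetic chromaticity shift (u') versus junction temperature rise,
# generated from a hypothetical quadratic law.
T = np.array([25.0, 35.0, 45.0, 55.0, 65.0, 70.0])   # junction temp, deg C
dT = T - 25.0
du = 1.2e-4 * dT - 4.0e-7 * dT ** 2                  # u' shift (synthetic)

# Second-order polynomial model of shift vs. temperature rise, as the
# abstract suggests; polyfit returns [c2, c1, c0].
coeffs = np.polyfit(dT, du, 2)
fit = np.polyval(coeffs, dT)
max_err = np.max(np.abs(fit - du))
```

A feedback controller would evaluate such a fitted model at the measured junction temperature and adjust the RGB drive currents to hold chromaticity constant.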
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model, as well as the experiments required to model them.
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
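The abstract's central observation, that tuning by beats turns a spectral (pitch) task into a temporal one, rests on the fact that two tones of nearby frequency produce an amplitude envelope beating at |f1 - f2| Hz. The hypothetical sketch below recovers that beat rate from a synthetic signal; it is only an illustration of the acoustics, not the study's analysis.

```python
import numpy as np

fs, dur = 2000, 10.0                     # sample rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f1, f2 = 196.0, 196.4                    # G3 string vs. a tone 0.4 Hz sharp
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Squaring demodulates the envelope: x**2 contains a cos(2*pi*(f2-f1)*t)
# term, so the beat rate shows up as a low-frequency spectral peak.
spec = np.abs(np.fft.rfft(x ** 2))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
low = (freqs > 0) & (freqs < 5.0)        # search below 5 Hz, excluding DC
beat = freqs[low][np.argmax(spec[low])]
```

A listener (or CI processor) only needs to perceive this slow amplitude modulation, not the ~196 Hz pitch itself, to tune the 0.4 Hz discrepancy away.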
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
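The kind of performance model the abstract describes predicts per-step time from a partition description. A deliberately simplified, hypothetical sketch (not the paper's model): each processor's time is compute cost proportional to the cells it owns plus a fixed communication cost for exchanging boundary cells, and the step time is the slowest processor's time.

```python
# Hypothetical cost model for a 1-D grid partition. t_cell is the compute
# time per cell, t_comm the cost of one boundary exchange; both values are
# illustrative, not measured.
def predict_step_time(cells, t_cell=1e-6, t_comm=5e-5):
    # Each processor exchanges one boundary cell on each side; the step
    # completes when the most heavily loaded processor finishes.
    return max(n * t_cell + 2 * t_comm for n in cells)

balanced = predict_step_time([2500, 2500, 2500, 2500])
skewed = predict_step_time([4000, 2000, 2000, 2000])
```

Comparing the two predictions shows why load imbalance dominates: the skewed partition is slower even though the total work is identical, which is exactly the trade-off a remapping decision must weigh against remapping cost.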
Accurate Guitar Tuning by Cochlear Implant Musicians
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
A new generalized correlation for accurate vapor pressure prediction
NASA Astrophysics Data System (ADS)
An, Hui; Yang, Wenming
2012-08-01
An accurate knowledge of the vapor pressure of organic liquids is very important for oil and gas processing operations. In combustion modeling, the accuracy of numerical predictions is also highly dependent on fuel properties such as vapor pressure. In this Letter, a new generalized correlation is proposed based on the Lee-Kesler method, in which a fuel-dependent parameter 'A' is introduced. The proposed method only requires the critical temperature, normal boiling temperature, and acentric factor of the fluid as input. With this method, vapor pressures have been calculated and compared with the data reported in a data compilation for 42 organic liquids over 1366 data points, and the overall average absolute percentage deviation is only 1.95%.
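For context, the baseline Lee-Kesler correlation that the Letter generalizes can be sketched as below. The fuel-dependent parameter 'A' of the proposed method is not reproduced here (the paper defines it); this is the standard corresponding-states form, using textbook constants for n-hexane as a check case.

```python
import math

def lee_kesler_psat(T, Tc, Pc, omega):
    """Lee-Kesler reduced vapor pressure: ln(Psat/Pc) = f0(Tr) + omega * f1(Tr)."""
    Tr = T / Tc
    f0 = 5.92714 - 6.09648 / Tr - 1.28862 * math.log(Tr) + 0.169347 * Tr ** 6
    f1 = 15.2518 - 15.6875 / Tr - 13.4721 * math.log(Tr) + 0.43577 * Tr ** 6
    return Pc * math.exp(f0 + omega * f1)

# n-hexane, textbook constants: Tc = 507.6 K, Pc = 30.25 bar, omega = 0.301.
# At the normal boiling point (341.88 K) the prediction should be near 1 atm.
p_bar = lee_kesler_psat(341.88, 507.6, 30.25, 0.301)
```

For nonpolar fluids like hexane this baseline is already accurate to a few percent; the Letter's parameter 'A' targets the residual fuel-to-fuel deviations.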
Direct computation of parameters for accurate polarizable force fields
Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.
2014-11-21
We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
Accurate ab initio energy gradients in chemical compound space.
Anatole von Lilienfeld, O
2009-10-28
Analytical potential energy derivatives, based on the Hellmann-Feynman theorem, are presented for any pair of isoelectronic compounds. Since energies are not necessarily monotonic functions between compounds, these derivatives can fail to predict the right trends of the effect of alchemical mutation. However, quantitative estimates without additional self-consistency calculations can be made when the Hellmann-Feynman derivative is multiplied with a linearization coefficient that is obtained from a reference pair of compounds. These results suggest that accurate predictions can be made regarding any molecule's energetic properties as long as energies and gradients of three other molecules have been provided. The linearization coefficient can be interpreted as a quantitative measure of chemical similarity. Presented numerical evidence includes predictions of electronic eigenvalues of saturated and aromatic molecular hydrocarbons. PMID:19894922
An accurate registration technique for distorted images
NASA Technical Reports Server (NTRS)
Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis
1990-01-01
Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
Accurate maser positions for MALT-45
NASA Astrophysics Data System (ADS)
Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven
2013-10-01
MALT-45 is an untargeted survey mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers, thermal emission, and high-frequency continuum emission. A number of masers were detected in the survey images, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing Class I methanol masers into a timeline of high-mass star formation.