Semiempirical Theories of the Affinities of Negative Atomic Ions
NASA Technical Reports Server (NTRS)
Edie, John W.
1961-01-01
The determination of the electron affinities of negative atomic ions by means of direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences. The most recent of these extrapolations is studied by extending the method to include one more member of the isoelectronic sequence. When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p-electrons. These functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose, one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method is preferable, since it requires only one experimental point for each period. The results agree within experimental error with all data, except with the most recent value for C, which lies 10% lower.
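The core idea, extrapolating the slopes and intercepts of the linear I(Z) trends observed at several ionization degrees q down to q = -1, can be illustrated with a minimal sketch. The numbers and the fit orders below are illustrative assumptions, not the Slater-parameter machinery of the paper.

```python
import numpy as np

def extrapolate_affinity_line(q_values, slopes, intercepts, q_negative=-1):
    """Extrapolate the slope and intercept of the I-vs-Z line, known at several
    ionization degrees q, to the negative ion (q = -1). A loose sketch of the
    abstract's idea; the real method also re-anchors the intercepts on one or
    two measured affinities per period."""
    slope_fit = np.polyfit(q_values, slopes, 1)          # slopes assumed linear in q
    intercept_fit = np.polyfit(q_values, intercepts, 2)  # intercepts allowed to be quadratic in q
    return np.polyval(slope_fit, q_negative), np.polyval(intercept_fit, q_negative)

# Illustrative placeholder inputs (not real spectroscopic data): slopes and
# intercepts of the I-vs-Z lines for q = 0, +1, +2.
q = np.array([0, 1, 2])
slopes = np.array([2.1, 4.3, 6.6])            # hypothetical eV per unit Z
intercepts = np.array([-8.0, -20.0, -36.0])   # hypothetical eV
slope_neg, intercept_neg = extrapolate_affinity_line(q, slopes, intercepts)
# The affinity estimate for nuclear charge Z would then be slope_neg * Z + intercept_neg.
print(slope_neg, intercept_neg)
```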
NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.
Hinrichs, R N; McLean, S P
1995-10-01
This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object sufficiently large to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
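For readers unfamiliar with DLT calibration, the sketch below shows the standard 11-parameter linear least-squares solve from control-point correspondences. It is a generic textbook formulation, not the authors' code, and it needs at least six non-coplanar control points.

```python
import numpy as np

def dlt_calibrate(world_xyz, image_uv):
    """Solve for the 11 DLT parameters from (X, Y, Z) <-> (u, v) control-point
    pairs by linear least squares."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(world_xyz, image_uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.extend([u, v])
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return L  # parameters L1..L11

def dlt_project(L, X, Y, Z):
    """Map a world point back to image coordinates using the fitted parameters."""
    denom = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / denom
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / denom
    return u, v
```

Extrapolated DLT simply applies the fitted parameters to points outside the volume spanned by the control object, which is exactly where the study found accuracy degrading.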
High-order Newton-penalty algorithms
NASA Astrophysics Data System (ADS)
Dussault, Jean-Pierre
2005-10-01
Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton corrections. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first-order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, and thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
Linear prediction data extrapolation superresolution radar imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing
1993-05-01
Range resolution and cross-range resolution of range-doppler imaging radars are related to the effective bandwidth of transmitted signal and the angle through which the object rotates relatively to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by the conventional FFT range-doppler processing and improving the resolution capability of range-doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating observed data beyond the observation windows by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. The live data of a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and a flying Boeing-727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle of the object or equal-quality images from smaller bandwidth and total angle may be obtained by LPDEDFT.
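A minimal sketch of the LPDEDFT idea follows: fit forward linear-prediction coefficients to the observed samples, extend the record beyond the observation window, and take the transform of the longer record. The model order, extension length, and the synthetic two-tone signal are assumptions for illustration, not the radar processing chain of the paper.

```python
import numpy as np

def lp_extrapolate(x, order, n_extra):
    """Least-squares forward linear prediction, used to extend the observed
    samples beyond the window before a conventional (I)DFT."""
    x = np.asarray(x, dtype=complex)
    # Solve x[n] ~ sum_k a[k] * x[n-1-k] in the least-squares sense.
    A = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    y = list(x)
    for _ in range(n_extra):
        y.append(np.dot(a, np.asarray(y[-order:])[::-1]))  # predict the next sample
    return np.asarray(y)

# Illustrative use on a synthetic two-tone signal (closely spaced frequencies).
n = np.arange(64)
sig = np.exp(2j * np.pi * 0.11 * n) + 0.6 * np.exp(2j * np.pi * 0.13 * n)
extended = lp_extrapolate(sig, order=12, n_extra=192)
spectrum = np.fft.fft(extended)   # sharper peaks than the FFT of the raw 64 samples
```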
2016-04-01
(Fragmentary record) The stitched model is incorporated with nonlinear elements to produce a continuous, quasi-nonlinear simulation model; extrapolation methods within the model stitching architecture address off-nominal loading extrapolation. Indexed terms: Simulation Model, Quasi-Nonlinear, Piloted Simulation, Flight-Test Implications, System Identification, Off-Nominal Loading Extrapolation, Stability.
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
Latychevskaia, T; Chushkin, Y; Fink, H-W
2016-10-01
In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by postextrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although there could be in principle other methods to achieve extrapolation, we devote our discussion to employing iterative phase retrieval methods and demonstrate their limits. We present two numerical studies; namely, the extrapolation of diffraction patterns of nonbinary and that of phase objects together with a discussion of the optimal extrapolation procedure. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
NASA Astrophysics Data System (ADS)
Havasi, Ágnes; Kazemi, Ehsan
2018-04-01
In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes with extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth- and fifth-order accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
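The Richardson-extrapolation construction referred to here combines one step of size h with two steps of size h/2 to cancel the leading error term of a p-th-order scheme. The sketch below applies it to Kutta's classical third-order method as a stand-in for the optimized schemes of the paper; the base scheme and test problem are assumptions.

```python
import numpy as np

def rk3_step(f, t, y, h):
    """Kutta's classical third-order Runge-Kutta step (illustrative base scheme)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

def richardson_step(f, t, y, h, p=3):
    """One coarse step and two half steps, combined so that the leading
    O(h^{p+1}) error cancels, raising the order from p to p+1."""
    coarse = rk3_step(f, t, y, h)
    half = rk3_step(f, t, y, h / 2)
    fine = rk3_step(f, t + h / 2, half, h / 2)
    return (2**p * fine - coarse) / (2**p - 1)

# Example on a simple oscillatory test equation y' = i*omega*y.
omega = 2.0
f = lambda t, y: 1j * omega * y
y_next = richardson_step(f, 0.0, np.array([1.0 + 0j]), 0.1)
```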
McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O
2004-06-21
The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.
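The "linear extrapolation method" the study tests amounts to fitting the chamber current against added wall thickness and extrapolating to zero wall. The sketch below shows that construction with clearly hypothetical numbers; the convention of taking the correction as the zero-wall intercept divided by the current at the working thickness is an assumption for illustration, not the paper's exact analysis.

```python
import numpy as np

# Hypothetical readings: ionization current (arbitrary units) as graphite caps
# are added to the chamber wall (thickness in g/cm^2). Illustrative only.
wall_thickness = np.array([0.30, 0.45, 0.60, 0.75])
current = np.array([0.9910, 0.9866, 0.9823, 0.9781])

# Linear fit and extrapolation of the current to zero wall thickness.
slope, intercept = np.polyfit(wall_thickness, current, 1)
k_wall_extrap = intercept / current[0]   # correction relative to the thinnest (working) wall
print(f"zero-wall current = {intercept:.4f}, extrapolated wall correction ≈ {k_wall_extrap:.4f}")
```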
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
A regularization method for extrapolation of solar potential magnetic fields
NASA Technical Reports Server (NTRS)
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
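A potential-field extrapolation of this kind has a simple Fourier-space form: each horizontal mode of the boundary field decays as exp(-|k| z) with height, and the regularization amounts to low-pass filtering the boundary data before propagation. The sketch below uses a Gaussian filter as a stand-in for the paper's measurement-sensitivity-dependent smoothing; the filter form and the synthetic bipole are assumptions.

```python
import numpy as np

def potential_field_bz(bz0, dx, dy, z, sigma=0.0):
    """Extrapolate the vertical field upward assuming a current-free (potential)
    field: Fourier modes decay as exp(-|k| z). The exp(-(|k|*sigma)^2/2) factor
    is an illustrative smoothing of the boundary data."""
    ny, nx = bz0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    spectrum = np.fft.fft2(bz0) * np.exp(-0.5 * (k * sigma) ** 2) * np.exp(-k * z)
    return np.real(np.fft.ifft2(spectrum))

# Illustrative use on a synthetic bipole (not an observed magnetogram).
x = np.linspace(-50, 50, 128)
X, Y = np.meshgrid(x, x)
bz_phot = np.exp(-((X - 10) ** 2 + Y ** 2) / 100) - np.exp(-((X + 10) ** 2 + Y ** 2) / 100)
bz_height = potential_field_bz(bz_phot, dx=x[1] - x[0], dy=x[1] - x[0], z=20.0, sigma=2.0)
```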
NASA Astrophysics Data System (ADS)
Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid
2016-02-01
In this study, the experiments were aimed at thermally analyzing the exhaust valve in an air-cooled internal combustion engine and estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using the methods of linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of the linear extrapolation and inverse methods have similar trends and, based on the error analysis, are accurate enough to estimate the thermal contact conductance; on this basis, the linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance have been investigated. The results show that increasing the contact pressure increases the thermal contact conductance substantially, whereas increasing the engine speed decreases it. On the other hand, increasing the air speed increases the thermal contact conductance, and raising the heat flux reduces it. The average calculated error equals 12.9%.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
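The structure of the method, weighted Jacobi sweeps with an Anderson extrapolation applied every p-th iteration over the last m residuals, can be sketched compactly. The parameter values, history depth, and dense test system below are plausible defaults chosen for illustration, not the paper's tuned settings.

```python
import numpy as np

def alternating_anderson_jacobi(A, b, x0=None, omega=0.8, beta=0.8,
                                m=5, p=6, tol=1e-10, max_iter=10000):
    """Weighted Jacobi with periodic Anderson extrapolation (AAJ-style sketch)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    d_inv = 1.0 / np.diag(A)
    X_hist, F_hist = [], []                        # iterate and residual history
    for k in range(1, max_iter + 1):
        f = d_inv * (b - A @ x)                    # Jacobi-preconditioned residual
        X_hist.append(x.copy()); F_hist.append(f.copy())
        if len(X_hist) > m + 1:
            X_hist.pop(0); F_hist.pop(0)
        if k % p == 0 and len(F_hist) > 1:         # Anderson extrapolation step
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + beta * f - (dX + beta * dF) @ gamma
        else:                                      # plain weighted Jacobi step
            x = x + omega * f
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

# Small illustrative system (strictly diagonally dominant, so Jacobi converges).
rng = np.random.default_rng(0)
A = rng.random((200, 200)); A += np.diag(A.sum(axis=1))
b = rng.random(200)
x, iterations = alternating_anderson_jacobi(A, b)
```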
DOE Office of Scientific and Technical Information (OSTI.GOV)
Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method
NASA Astrophysics Data System (ADS)
Taitano, William; Knoll, Dana; Chacon, Luis
2009-11-01
The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO.
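The defining trick of JFNK is that the Krylov solver never needs the Jacobian matrix, only its action on a vector, which a finite difference of the residual supplies. The sketch below shows that structure on a generic nonlinear system; the residual function, tolerances, and differencing parameter are assumptions, not the plasma-kinetic solver described above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, x0, tol=1e-8, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov: each Newton step solves J(x) dx = -F(x)
    with GMRES, where J(x) v is approximated by a finite difference of F."""
    x = np.array(x0, dtype=float)
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        def jv(v, x=x, Fx=Fx):
            h = eps * max(1.0, np.linalg.norm(x)) / max(np.linalg.norm(v), 1e-30)
            return (F(x + h * v) - Fx) / h        # matrix-free Jacobian-vector product
        J = LinearOperator((len(x), len(x)), matvec=jv)
        dx, _ = gmres(J, -Fx, atol=1e-10)
        x += dx
    return x

# Illustrative nonlinear system: F(x) = x + 0.1*x**3 - 1 (componentwise).
solution = jfnk_solve(lambda x: x + 0.1 * x**3 - 1.0, np.zeros(50))
```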
Slaughter, Andrew R; Palmer, Carolyn G; Muller, Wilhelmine J
2007-04-01
In aquatic ecotoxicology, acute to chronic ratios (ACRs) are often used to predict chronic responses from available acute data to derive water quality guidelines, despite many problems associated with this method. This paper explores the comparative protectiveness and accuracy of predicted guideline values derived from the ACR, linear regression analysis (LRA), and multifactor probit analysis (MPA) extrapolation methods applied to acute toxicity data for aquatic macroinvertebrates. Although the authors of the LRA and MPA methods advocate the use of extrapolated lethal effects in the 0.01% to 10% lethal concentration (LC0.01-LC10) range to predict safe chronic exposure levels to toxicants, the use of an extrapolated LC50 value divided by a safety factor of 5 was also explored here because of the higher statistical confidence surrounding the LC50 value. The LRA LC50/5 method was found to compare most favorably with available experimental chronic toxicity data and was therefore most likely to be sufficiently protective, although further validation with the use of additional species is needed. Values derived by the ACR method were the least protective. It is suggested that there is an argument for replacing ACRs in the development of water quality guidelines with the LRA LC50/5 method.
Method and apparatus for determining minority carrier diffusion length in semiconductors
Goldstein, Bernard; Dresner, Joseph; Szostak, Daniel J.
1983-07-12
Method and apparatus are provided for determining the diffusion length of minority carriers in semiconductor material, particularly amorphous silicon, which has a significantly small minority carrier diffusion length, using the constant-magnitude surface-photovoltage (SPV) method. An unmodulated illumination provides the light excitation on the surface of the material to generate the SPV. A manually controlled or automatic servo system maintains a constant predetermined value of the SPV. A vibrating Kelvin-method-type probe electrode couples the SPV to a measurement system. The operating wavelength of an adjustable monochromator is selected, compensating for the wavelength-dependent sensitivity of the photodetector, to measure the illumination intensity (photon flux) on the silicon. Measurements of the relative photon flux for a plurality of wavelengths are plotted against the reciprocal of the optical absorption coefficient of the material. A linear plot of the data points is extrapolated to zero intensity. The negative intercept value on the reciprocal optical coefficient axis of the extrapolated linear plot is the diffusion length of the minority carriers.
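The final extrapolation step is a one-line linear fit: plot the relative photon flux against 1/α, extend the line to zero flux, and read the diffusion length off the negative intercept. The values below are illustrative placeholders, not measurements from the patent.

```python
import numpy as np

# Hypothetical constant-magnitude SPV data: relative photon flux required to hold
# the SPV fixed, versus the reciprocal absorption coefficient (micrometres).
inv_alpha_um = np.array([0.4, 0.8, 1.2, 1.6, 2.0])
rel_flux = np.array([0.52, 0.67, 0.81, 0.96, 1.10])

slope, intercept = np.polyfit(inv_alpha_um, rel_flux, 1)
x_intercept = -intercept / slope            # where the fitted line crosses zero flux
diffusion_length_um = -x_intercept          # diffusion length is the negative intercept
print(f"L ≈ {diffusion_length_um:.2f} µm")
```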
Brennan, Scott F; Cresswell, Andrew G; Farris, Dominic J; Lichtwark, Glen A
2017-11-07
Ultrasonography is a useful technique to study muscle contractions in vivo; however, larger muscles like vastus lateralis may be difficult to visualise with smaller, commonly used transducers. Fascicle length is often estimated using linear trigonometry to extrapolate fascicle length to regions where the fascicle is not visible. However, this approach has not been compared to measurements made with a larger field of view for dynamic muscle contractions. Here we compared two different single-transducer extrapolation methods of measuring VL muscle fascicle length to a direct measurement made using two synchronised, in-series transducers. The first method used pennation angle and muscle thickness to extrapolate fascicle length outside the image (extrapolate method). The second method determined fascicle length based on the extrapolated intercept between a fascicle and the aponeurosis (intercept method). Nine participants performed maximal-effort, isometric, knee extension contractions on a dynamometer at 10° increments from 50 to 100° of knee flexion. Fascicle length and torque were simultaneously recorded for offline analysis. The dual-transducer method showed similar patterns of fascicle length change (overall mean coefficient of multiple correlation was 0.76 and 0.71 compared to the extrapolate and intercept methods, respectively), but reached different absolute lengths during the contractions. This had the effect of producing force-length curves of the same shape, but each curve was shifted in terms of absolute length. We concluded that dual transducers are beneficial for studies that examine absolute fascicle lengths, whereas either of the single-transducer methods may produce similar results for normalised length changes and repeated-measures experimental designs. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.
We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.
Minimally invasive estimation of ventricular dead space volume through use of Frank-Starling curves.
Davidson, Shaun; Pretty, Chris; Pironet, Antoine; Desaive, Thomas; Janssen, Nathalie; Lambermont, Bernard; Morimont, Philippe; Chase, J Geoffrey
2017-01-01
This paper develops a means of more easily and less invasively estimating ventricular dead space volume (Vd), an important, but difficult to measure physiological parameter. Vd represents a subject and condition dependent portion of measured ventricular volume that is not actively participating in ventricular function. It is employed in models based on the time varying elastance concept, which see widespread use in haemodynamic studies, and may have direct diagnostic use. The proposed method involves linear extrapolation of a Frank-Starling curve (stroke volume vs end-diastolic volume) and its end-systolic equivalent (stroke volume vs end-systolic volume), developed across normal clinical procedures such as recruitment manoeuvres, to their point of intersection with the y-axis (where stroke volume is 0) to determine Vd. To demonstrate the broad applicability of the method, it was validated across a cohort of six sedated and anaesthetised male Pietrain pigs, encompassing a variety of cardiac states from healthy baseline behaviour to circulatory failure due to septic shock induced by endotoxin infusion. Linear extrapolation of the curves was supported by strong linear correlation coefficients of R = 0.78 and R = 0.80 average for pre- and post- endotoxin infusion respectively, as well as good agreement between the two linearly extrapolated y-intercepts (Vd) for each subject (no more than 7.8% variation). Method validity was further supported by the physiologically reasonable Vd values produced, equivalent to 44.3-53.1% and 49.3-82.6% of baseline end-systolic volume before and after endotoxin infusion respectively. This method has the potential to allow Vd to be estimated without a particularly demanding, specialised protocol in an experimental environment. Further, due to the common use of both mechanical ventilation and recruitment manoeuvres in intensive care, this method, subject to the availability of multi-beat echocardiography, has the potential to allow for estimation of Vd in a clinical environment.
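The extrapolation step itself is a pair of straight-line fits whose zero-stroke-volume intercepts should agree. The sketch below illustrates that construction with hypothetical beat-to-beat volumes; the numbers are not values from the study.

```python
import numpy as np

def dead_space_from_curve(volumes, stroke_volumes):
    """Fit stroke volume against a ventricular volume measure and return the
    volume at which the fitted stroke volume reaches zero (the Vd estimate)."""
    slope, intercept = np.polyfit(volumes, stroke_volumes, 1)
    return -intercept / slope

# Hypothetical volumes (mL) recorded across a recruitment manoeuvre.
edv = np.array([95.0, 102.0, 110.0, 118.0, 126.0])   # end-diastolic volumes
esv = np.array([58.0, 61.0, 65.0, 69.0, 73.0])       # end-systolic volumes
sv = edv - esv                                        # stroke volumes

vd_from_edv = dead_space_from_curve(edv, sv)   # Frank-Starling curve intercept
vd_from_esv = dead_space_from_curve(esv, sv)   # end-systolic equivalent
print(f"Vd ≈ {vd_from_edv:.1f} mL (EDV curve) vs {vd_from_esv:.1f} mL (ESV curve)")
```

Agreement between the two intercepts, as reported in the paper, is what supports treating the common value as Vd.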
Acceleration of convergence of vector sequences
NASA Technical Reports Server (NTRS)
Sidi, A.; Ford, W. F.; Smith, D. A.
1983-01-01
A general approach to the construction of convergence acceleration methods for vector sequences is proposed. Using this approach, one can generate some known methods, such as the minimal polynomial extrapolation, the reduced rank extrapolation, and the topological epsilon algorithm, and also some new ones. Some of the new methods are easier to implement than the known methods and are observed to have similar numerical properties. The convergence analysis of these new methods is carried out, and it is shown that they are especially suitable for accelerating the convergence of vector sequences that are obtained when one solves linear systems of equations iteratively. A stability analysis is also given, and numerical examples are provided. The convergence and stability properties of the topological epsilon algorithm are likewise given.
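Of the methods named, minimal polynomial extrapolation (MPE) is the easiest to sketch: a small least-squares problem on the first differences of the iterates yields weights whose combination approximates the limit. The fixed-point example below is an illustrative assumption, not from the report.

```python
import numpy as np

def mpe(xs):
    """Minimal polynomial extrapolation of a vector sequence x_0..x_{k+1}."""
    X = np.column_stack(xs)
    U = np.diff(X, axis=1)                               # u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                                  # normalized weights
    return X[:, :len(gamma)] @ gamma                     # extrapolated limit estimate

# Illustrative use: accelerate the linear fixed-point iteration x <- G x + f.
rng = np.random.default_rng(1)
G = 0.9 * rng.random((30, 30)) / 30                      # contraction (spectral radius < 1)
f = rng.random(30)
xs = [np.zeros(30)]
for _ in range(8):
    xs.append(G @ xs[-1] + f)
x_mpe = mpe(xs)
x_true = np.linalg.solve(np.eye(30) - G, f)              # x_mpe is much closer than xs[-1]
```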
Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.
2018-03-01
We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two-level of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. And the resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H3-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
Introduction of risk size in the determination of uncertainty factor UFL in risk assessment
NASA Astrophysics Data System (ADS)
Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei
2012-09-01
The methodology for using uncertainty factors in health risk assessment has been developed for several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from LOAEL (lowest observed adverse effect level) to NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at LOAEL based on the dose-response information, which represents a very important factor that should be carefully considered. This linear formula makes it possible to select UFL properly in the additional risk range from 5.3% to 16.2%. Also the results remind us that the default value 10 may not be conservative enough when the additional risk level at LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also can ensure a conservative estimation of the UFL with fewer errors, and avoid the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in the risk assessment.
Surface dose measurements with commonly used detectors: a consistent thickness correction method.
Reynolds, Tatsiana A; Higgins, Patrick
2015-09-08
The purpose of this study was to review application of a consistent correction method for the solid state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films. In addition, to compare measured surface dose using an extrapolation ionization chamber (PTW 30-360) with other parallel plate chambers RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus) and Memorial. Measurements of surface dose for 6MV photons with parallel plate chambers were used to establish a baseline. cTLD, OSLs, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three dosimeter stacks, followed by linear extrapolation of individual dosimeter measurements to zero thickness. We determined the magnitude of correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse, compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (-0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not recommended for these types of measurements.
Solution of the finite Milne problem in stochastic media with RVT Technique
NASA Astrophysics Data System (ADS)
Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.
2017-12-01
This paper presents the solution to the Milne problem in the steady state with isotropic scattering phase function. The properties of the medium are considered as stochastic ones with Gaussian or exponential distributions and hence the problem treated as a stochastic integro-differential equation. To get an explicit form for the radiant energy density, the linear extrapolation distance, reflectivity and transmissivity in the deterministic case the problem is solved using the Pomraning-Eddington method. The obtained solution is found to be dependent on the optical space variable and thickness of the medium which are considered as random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
Nonlinear cancer response at ultralow dose: a 40800-animal ED(001) tumor and biomarker study.
Bailey, George S; Reddy, Ashok P; Pereira, Clifford B; Harttig, Ulrich; Baird, William; Spitsbergen, Jan M; Hendricks, Jerry D; Orner, Gayle A; Williams, David E; Swenberg, James A
2009-07-01
Assessment of human cancer risk from animal carcinogen studies is severely limited by inadequate experimental data at environmentally relevant exposures and by procedures requiring modeled extrapolations many orders of magnitude below observable data. We used rainbow trout, an animal model well-suited to ultralow-dose carcinogenesis research, to explore dose-response down to a targeted 10 excess liver tumors per 10000 animals (ED(001)). A total of 40800 trout were fed 0-225 ppm dibenzo[a,l]pyrene (DBP) for 4 weeks, sampled for biomarker analyses, and returned to control diet for 9 months prior to gross and histologic examination. Suspect tumors were confirmed by pathology, and resulting incidences were modeled and compared to the default EPA LED(10) linear extrapolation method. The study provided observed incidence data down to two above-background liver tumors per 10000 animals at the lowest dose (that is, an unmodeled ED(0002) measurement). Among nine statistical models explored, three were determined to fit the liver data well: linear probit, quadratic logit, and Ryzin-Rai. None of these fitted models is compatible with the LED(10) default assumption, and all fell increasingly below the default extrapolation with decreasing DBP dose. Low-dose tumor response was also not predictable from hepatic DBP-DNA adduct biomarkers, which accumulated as a power function of dose (adducts = 100 × DBP^1.31). Two-order extrapolations below the modeled tumor data predicted DBP doses producing one excess cancer per million individuals (ED10^-6) that were 500-1500-fold higher than that predicted by the five-order LED(10) extrapolation. These results are considered specific to the animal model, carcinogen, and protocol used. They provide the first experimental estimation in any model of the degree of conservatism that may exist for the EPA default linear assumption for a genotoxic carcinogen.
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolgonos, Alex; Mason, Thomas O.; Poeppelmeier, Kenneth R., E-mail: krp@northwestern.edu
2016-08-15
The direct optical band gap of semiconductors is traditionally measured by extrapolating the linear region of the square of the absorption curve to the x-axis, and a variation of this method, developed by Tauc, has also been widely used. The application of the Tauc method to crystalline materials is rooted in misconception, and traditional linear extrapolation methods are inappropriate for use on degenerate semiconductors, where the occupation of conduction band energy states cannot be ignored. A new method is proposed for extracting a direct optical band gap from absorption spectra of degenerately-doped bulk semiconductors. This method was applied to pseudo-absorption spectra of Sn-doped In2O3 (ITO), converted from diffuse-reflectance measurements on bulk specimens. The results of this analysis were corroborated by room-temperature photoluminescence excitation measurements, which yielded values of optical band gap and Burstein-Moss shift that are consistent with previous studies on In2O3 single crystals and thin films. Highlights: The Tauc method of band gap measurement is re-evaluated for crystalline materials. A graphical method is proposed for extracting optical band gaps from absorption spectra. The proposed method incorporates an energy broadening term for energy transitions. Values for ITO were self-consistent between two different measurement methods.
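For context, the traditional construction that the paper re-evaluates is a linear fit of (αhν)² against photon energy, with the gap read off the x-axis intercept. The sketch below implements that conventional extrapolation on a synthetic ideal edge; it is not the modified method the paper proposes, and the fit window and edge shape are assumptions.

```python
import numpy as np

def direct_gap_linear_extrapolation(photon_energy_eV, alpha, fit_window):
    """Conventional direct-gap estimate: fit the linear region of
    (alpha * h*nu)^2 vs photon energy and return its x-axis intercept."""
    y = (alpha * photon_energy_eV) ** 2
    lo, hi = fit_window
    mask = (photon_energy_eV >= lo) & (photon_energy_eV <= hi)
    slope, intercept = np.polyfit(photon_energy_eV[mask], y[mask], 1)
    return -intercept / slope            # energy where the fitted line crosses zero

# Synthetic absorption edge for illustration (ideal direct edge at 3.6 eV).
E = np.linspace(3.0, 4.2, 200)
alpha = np.sqrt(np.clip(E - 3.6, 0.0, None)) / E
print(direct_gap_linear_extrapolation(E, alpha, fit_window=(3.7, 4.0)))   # ≈ 3.6 eV
```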
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, T
Purpose: Since 2008 the Physikalisch-Technische Bundesanstalt (PTB) has been offering the calibration of 125I brachytherapy sources in terms of the reference air-kerma rate (RAKR). The primary standard is a large air-filled parallel-plate extrapolation chamber. The measurement principle is based on the fact that the air-kerma rate is proportional to the increment of ionization per increment of chamber volume at chamber depths greater than the range x0 of secondary electrons originating from the electrode. Methods: Two methods for deriving the RAKR from the measured ionization charges are: (1) to determine the RAKR from the slope of the linear fit to the so-called 'extrapolation curve', the measured ionization charges Q vs. plate separations x; or (2) to differentiate Q(x) and to derive the RAKR by a linear extrapolation towards zero plate separation. For both methods, correcting the measured data for all known influencing effects before the evaluation method is applied is a precondition. However, the discrepancy of their results is larger than the uncertainty given for the determination of the RAKR with both methods. Results: A new approach to derive the RAKR from the measurements is investigated as an alternative. The method was developed from the ground up, based on radiation transport theory. A conversion factor C(x1, x2) is applied to the difference of charges measured at the two plate separations x1 and x2. This factor is composed of quotients of three air-kerma values calculated for different plate separations in the chamber: the air kerma Ka(0) for plate separation zero, and the mean air kermas at the plate separations x1 and x2, respectively. The RAKR determined with method (1) yields 4.877 µGy/h, and with method (2) 4.596 µGy/h. The application of the alternative approach results in 4.810 µGy/h. Conclusion: The alternative method shall be established in the future.
Allodji, Rodrigue S; Schwartz, Boris; Diallo, Ibrahima; Agbovon, Césaire; Laurier, Dominique; de Vathaire, Florent
2015-08-01
Analyses of the Life Span Study (LSS) of Japanese atomic bombing survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates. For instance, it is shown that with the SIMEX method the ERR/Gy is increased by about 29% for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR (per 10^4 person-years at 1 Gy; the linear term) is decreased by about 8%, while the corrected quadratic term (EAR per 10^4 person-years per Gy^2) is increased by about 65% for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences were probably due to the fact that with the RCAL method the dosimetric data were partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to use several correction techniques in order to obtain a range of corrected estimates, rather than to rely on a single technique. This work will help improve the risk estimates derived from LSS data and make the development of radiation protection standards more reliable.
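SIMEX itself is a simple recipe: add extra simulated measurement error at several inflation levels λ, track how the naive estimate degrades, and extrapolate the trend back to λ = -1 (no error). The sketch below shows a generic SIMEX correction of a regression slope; the quadratic extrapolant, error model, and synthetic data are assumptions and this is not the LSS analysis code.

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
    """SIMEX correction of a simple linear-regression slope when the covariate
    carries additive measurement error of known standard deviation sigma_u."""
    rng = np.random.default_rng(seed)
    lam_grid, mean_slopes = [0.0], [np.polyfit(x_obs, y, 1)[0]]   # naive fit at lambda = 0
    for lam in lambdas:
        slopes = []
        for _ in range(n_sim):
            x_sim = x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, size=x_obs.shape)
            slopes.append(np.polyfit(x_sim, y, 1)[0])
        lam_grid.append(lam); mean_slopes.append(np.mean(slopes))
    quad = np.polyfit(lam_grid, mean_slopes, 2)        # extrapolant in lambda
    return np.polyval(quad, -1.0)                      # value at lambda = -1

# Illustrative data: true slope 2, covariate observed with noise (sigma_u = 0.5).
rng = np.random.default_rng(1)
x_true = rng.uniform(0, 3, 500)
y = 2.0 * x_true + rng.normal(0, 0.2, 500)
x_obs = x_true + rng.normal(0, 0.5, 500)
print(simex_slope(x_obs, y, sigma_u=0.5))   # closer to 2 than the attenuated naive slope
```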
Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M
2002-07-21
The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.
Alternative Method to Simulate a Sub-idle Engine Operation in Order to Synthesize Its Control System
NASA Astrophysics Data System (ADS)
Sukhovii, Sergii I.; Sirenko, Feliks F.; Yepifanov, Sergiy V.; Loboda, Igor
2016-09-01
The steady-state and transient engine performances in control systems are usually evaluated by applying thermodynamic engine models. Most models operate between the idle and maximum power points; only recently have they sometimes addressed the sub-idle operating range. The lack of information about the component maps at the sub-idle modes presents a challenging problem. A common method to cope with the problem is to extrapolate the component performances to the sub-idle range. Precise extrapolation is also a challenge. As a rule, many scientists address only particular aspects of the problem, such as the light-up of the combustion chamber or the operation of the turbine while the combustion chamber is not lit. However, there are no reports of a model that considers all of these aspects and simulates the engine starting. The proposed paper addresses a new method to simulate the starting. The method substitutes the non-linear thermodynamic model with a linear dynamic model, which is supplemented with a simplified static model. The latter model is the set of direct relations between parameters that are used in the control algorithms instead of the commonly used component performances. Specifically, this model consists of simplified relations between the gas path parameters and the corrected rotational speed.
Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals
NASA Astrophysics Data System (ADS)
Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling
2018-04-01
An efficient numerical algorithm is the key to accurate evaluation of the density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method, but it was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to any Bravais lattice and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals with topological degeneracies.
The Educated Guess: Determining Drug Doses in Exotic Animals Using Evidence-Based Medicine.
Visser, Marike; Oster, Seth C
2018-05-01
Lack of species-specific pharmacokinetic and pharmacodynamic data is a challenge for pharmaceutical and dose selection. If such data are available, dose extrapolation can be accomplished via basic equations. If unavailable, several methods have been described. Linear scaling uses an established milligrams-per-kilogram dose based on weight. This does not allow for differences in species drug metabolism, sometimes resulting in toxicity. Allometric scaling correlates body weight and metabolic rate but fails for drugs with significant hepatic metabolism and cannot be extrapolated to avians or reptiles. Evidence-based veterinary medicine for dose design based on species similarity is discussed, considering physiologic differences between classes. Copyright © 2018 Elsevier Inc. All rights reserved.
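As a point of reference, allometric (metabolic-rate) scaling as commonly stated takes the form dose_b = dose_a x (W_b / W_a)^0.75. The sketch below is a generic illustration of that rule with hypothetical weights; as the abstract notes, the exponent and its applicability are species- and drug-dependent and the rule fails for hepatically cleared drugs and for avian and reptile patients.

```python
def allometric_dose(dose_a_mg, weight_a_kg, weight_b_kg, exponent=0.75):
    """Scale a total dose from a reference species to a target species using
    the commonly quoted 3/4-power allometric exponent (illustrative only)."""
    return dose_a_mg * (weight_b_kg / weight_a_kg) ** exponent

# Example with hypothetical numbers: a 10 mg total dose established in a 0.25 kg
# reference animal, scaled to a 2 kg patient.
print(allometric_dose(10.0, 0.25, 2.0))   # ≈ 47.6 mg total
```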
Electronic and spectroscopic characterizations of SNP isomers
NASA Astrophysics Data System (ADS)
Trabelsi, Tarek; Al Mogren, Muneerah Mogren; Hochlaf, Majdi; Francisco, Joseph S.
2018-02-01
High-level ab initio electronic structure calculations were performed to characterize SNP isomers. In addition to the known linear SNP, cyc-PSN, and linear SPN isomers, we identified a fourth isomer, linear PSN, which is located ˜2.4 eV above the linear SNP isomer. The low-lying singlet and triplet electronic states of the linear SNP and SPN isomers were investigated using a multi-reference configuration interaction method and large basis set. Several bound electronic states were identified. However, their upper rovibrational levels were predicted to pre-dissociate, leading to S + PN, P + NS products, and multi-step pathways were discovered. For the ground states, a set of spectroscopic parameters were derived using standard and explicitly correlated coupled-cluster methods in conjunction with augmented correlation-consistent basis sets extrapolated to the complete basis set limit. We also considered scalar and core-valence effects. For linear isomers, the rovibrational spectra were deduced after generation of their 3D-potential energy surfaces along the stretching and bending coordinates and variational treatments of the nuclear motions.
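The complete-basis-set (CBS) extrapolation mentioned here is often done with a two-point inverse-cube formula for correlation energies. The sketch below shows that common Helgaker-style form as a generic illustration; the energies are hypothetical placeholders and the authors' exact composite scheme may differ.

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point CBS extrapolation assuming E(n) = E_CBS + A / n^3 for
    correlation-consistent basis sets with cardinal numbers n = x, y."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical correlation energies (hartree) from quadruple- and quintuple-zeta bases.
e_cbs = cbs_two_point(-0.512345, 5, -0.508210, 4)
print(e_cbs)
```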
NASA Technical Reports Server (NTRS)
Hall, E. J.
2001-01-01
The possible risk of induced malignancies in astronauts, as a consequence of the radiation environment in space, is a factor of concern for long term missions. Cancer risk estimates for high doses of low LET radiation are available from the epidemiological studies of the A-bomb survivors. Cancer risks at lower doses cannot be detected in epidemiological studies and must be inferred by extrapolation from the high dose risks. The standard setting bodies, such as the ICRP recommend a linear, no-threshold extrapolation of risks from high to low doses, but this is controversial. A study of mechanisms of carcinogenesis may shed some light on the validity of a linear extrapolation. The multi-step nature of carcinogenesis suggests that the role of radiation may be to induce a mutation leading to a mutator phenotype. High energy Fe ions, such as those encountered in space are highly effective in inducing genomic instability. Experiments involving the single particle microbeam have demonstrated a "bystander effect", ie a biological effect in cells not themselves hit, but in close proximity to those that are, as well as the induction of mutations in cells where only the cytoplasm, and not the nucleus, have been traversed by a charged particle. These recent experiments cast doubt on the validity of a simple linear extrapolation, but the data are so far fragmentary and conflicting. More studies are necessary. While mechanistic studies cannot replace epidemiology as a source of quantitative risk estimates, they may shed some light on the shape of the dose response relationship and therefore on the limitations of a linear extrapolation to low doses.
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.
1984-01-01
A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.
Studies of superresolution range-Doppler imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing; Yin, Jun; She, Zhishun
1993-02-01
This paper presents three superresolution imaging methods: the linear prediction data extrapolation DFT (LPDEDFT), the dynamic optimization linear least squares (DOLLS), and the Hopfield neural network nonlinear least squares (HNNNLS). Live data of a metalized scale-model B-52 aircraft, mounted on a rotating platform in a microwave anechoic chamber, have been processed in this way, as have data from a flying Boeing-727 aircraft. The imaging results indicate that, compared to the conventional Fourier method, these superresolution approaches provide either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle, or images of equal quality from a smaller bandwidth and total rotation angle. Moreover, the methods are compared with respect to their resolution capability and computational complexity.
Inferring thermodynamic stability relationship of polymorphs from melting data.
Yu, L
1995-08-01
This study investigates the possibility of inferring the thermodynamic stability relationship of polymorphs from their melting data. Thermodynamic formulas are derived for calculating the Gibbs free energy difference (delta G) between two polymorphs and its temperature slope, mainly from the temperatures and heats of melting. This information is then used to estimate delta G, and thus the relative stability, at other temperatures by extrapolation. Both linear and nonlinear extrapolations are considered. Extrapolating delta G to zero gives an estimate of the transition (or virtual transition) temperature, from which the presence of monotropy or enantiotropy is inferred. This procedure is analogous to the use of solubility data measured near the ambient temperature to estimate a transition point at higher temperature. For several systems examined, the two methods are in good agreement. The qualitative rule introduced this way for inferring the presence of monotropy or enantiotropy is approximately the same as the Heat of Fusion Rule introduced previously on a statistical mechanical basis. This method is applied to 96 pairs of polymorphs from the literature. In most cases, the result agrees with the previous determination. The deviation of the calculated transition temperatures from their previous values (n = 18) is 2% on average and 7% at maximum.
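As an illustration of the linear extrapolation described above (not the paper's exact derivation), the Python sketch below uses the common approximation delta G(T) ~ dHm,A(Tm,A - T)/Tm,A - dHm,B(Tm,B - T)/Tm,B and solves for the temperature at which it crosses zero; the melting data are hypothetical.

def delta_g(T, dHa, Tma, dHb, Tmb):
    """Linear estimate of G_B - G_A (J/mol) at temperature T (K) from the
    melting enthalpies (J/mol) and melting temperatures (K) of forms A and B."""
    return dHa * (Tma - T) / Tma - dHb * (Tmb - T) / Tmb

def transition_temperature(dHa, Tma, dHb, Tmb):
    """Temperature at which the linearly extrapolated delta G crosses zero."""
    return (dHa - dHb) / (dHa / Tma - dHb / Tmb)

# Hypothetical melting data for two polymorphs, for illustration only.
dHa, Tma = 28000.0, 430.0   # form A
dHb, Tmb = 26000.0, 424.0   # form B
Tt = transition_temperature(dHa, Tma, dHb, Tmb)
print(f"delta G (B - A) at 298 K: {delta_g(298.15, dHa, Tma, dHb, Tmb):.0f} J/mol")
print(f"Estimated (virtual) transition temperature: {Tt:.0f} K")
print("Enantiotropic" if 0 < Tt < min(Tma, Tmb) else "Monotropic (virtual transition)")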
Extrapolation of operators acting into quasi-Banach spaces
NASA Astrophysics Data System (ADS)
Lykov, K. V.
2016-01-01
Linear and sublinear operators acting from the scale of L_p spaces to a certain fixed quasinormed space are considered. It is shown how the extrapolation construction proposed by Jawerth and Milman at the end of the 1980s can be used to extend a bounded action of an operator from the L_p scale to wider spaces. Theorems are proved which generalize Yano's extrapolation theorem to the case of a quasinormed target space. More precise results are obtained under additional conditions on the quasinorm. Bibliography: 35 titles.
NASA Astrophysics Data System (ADS)
Bischoff, Jan-Moritz; Jeckelmann, Eric
2017-11-01
We improve the density-matrix renormalization group (DMRG) evaluation of the Kubo formula for the zero-temperature linear conductance of one-dimensional correlated systems. The dynamical DMRG is used to compute the linear response of a finite system to an applied ac source-drain voltage; then the low-frequency finite-system response is extrapolated to the thermodynamic limit to obtain the dc conductance of an infinite system. The method is demonstrated on the one-dimensional spinless fermion model at half filling. Our method is able to replicate several predictions of the Luttinger liquid theory such as the renormalization of the conductance in a homogeneous conductor, the universal effects of a single barrier, and the resonant tunneling through a double barrier.
Daily air temperature interpolated at high spatial resolution over a large mountainous region
Dodson, R.; Marks, D.
1997-01-01
Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830 000 km²) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA, but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using 3 lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the 3 LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3 °C), and produced very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
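A minimal Python sketch of the LLRA idea (reduce station temperatures to sea level with a constant lapse rate, interpolate with inverse squared distance, map back to the DEM elevation) follows. The lapse rate, station data, and grid cell are hypothetical, and this is not the authors' implementation.

import numpy as np

LAPSE = -6.5e-3   # assumed constant linear lapse rate, deg C per metre

def llra_interpolate(st_xy, st_temp, st_elev, grid_xy, grid_elev, lapse=LAPSE):
    """Linear lapse rate adjustment (LLRA) sketch: reduce station temperatures
    to sea level, interpolate with inverse-squared-distance weights, then map
    back to the DEM elevation of each grid cell."""
    t_sea = st_temp - lapse * st_elev                # station temperatures at sea level
    out = np.empty(len(grid_xy))
    for i, (gx, gy) in enumerate(grid_xy):
        d2 = (st_xy[:, 0] - gx) ** 2 + (st_xy[:, 1] - gy) ** 2
        w = 1.0 / np.maximum(d2, 1e-12)              # inverse squared distance weights
        out[i] = np.sum(w * t_sea) / np.sum(w) + lapse * grid_elev[i]
    return out

# Hypothetical stations (x, y in km; temperature in deg C; elevation in m).
st_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
st_temp = np.array([12.0, 8.5, 5.0])
st_elev = np.array([200.0, 800.0, 1400.0])
print(llra_interpolate(st_xy, st_temp, st_elev,
                       grid_xy=np.array([[5.0, 5.0]]), grid_elev=np.array([600.0])))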
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector from one of (a) the first pixel position in the first image and the second pixel position in the second image, or (b) the second pixel position in the second image and the third pixel position in the third image, using a non-linear model, and determining a position of a fourth pixel in a fourth image based upon the third motion vector.
Goodarzi, Mohammad; Jensen, Richard; Vander Heyden, Yvan
2012-12-01
A Quantitative Structure-Retention Relationship (QSRR) is proposed to estimate the chromatographic retention of 83 diverse drugs on a Unisphere poly butadiene (PBD) column, using isocratic elutions at pH 11.7. Previous work has generated QSRR models for these drugs using Classification And Regression Trees (CART). In this work, Ant Colony Optimization (ACO) is used as a feature selection method to find the best molecular descriptors from a large pool. In addition, several other selection methods were applied, such as Genetic Algorithms, Stepwise Regression and the Relief method, not only to evaluate Ant Colony Optimization as a feature selection method but also to investigate its ability to find the important descriptors in QSRR. Multiple Linear Regression (MLR) and Support Vector Machines (SVMs) were applied as linear and nonlinear regression methods, respectively, giving excellent correlation between the experimental logarithms of the retention factors of the drugs (log kw, i.e. extrapolated to a mobile phase consisting of pure water) and the predicted values. The overall best model was the SVM one built using descriptors selected by ACO.
NASA Technical Reports Server (NTRS)
Clark, William S.; Hall, Kenneth C.
1994-01-01
A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.
Fractal Dimensionality of Pore and Grain Volume of a Siliciclastic Marine Sand
NASA Astrophysics Data System (ADS)
Reed, A. H.; Pandey, R. B.; Lavoie, D. L.
Three-dimensional (3D) spatial distributions of pore and grain volumes were determined from high-resolution computer tomography (CT) images of resin-impregnated marine sands. Using a linear gradient extrapolation method, cubic three-dimensional samples were constructed from two-dimensional CT images. Image porosity (0.37) was found to be consistent with the estimate of porosity by the water weight loss technique (0.36). Scaling of the pore and grain volumes with the linear size (L), V ~ L^D, yields fractal dimensionalities of D = 2.74 ± 0.02 for the pore volume and D = 2.90 ± 0.02 for the grain volume, values typical for sedimentary materials.
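The fractal dimensionality in a scaling law of this kind can be estimated with a log-log linear fit; the short Python sketch below uses synthetic volumes generated to follow D = 2.74 and is illustrative only.

import numpy as np

def fractal_dimension(L, V):
    """Estimate D in the scaling V ~ L**D by a linear fit in log-log space."""
    slope, _intercept = np.polyfit(np.log(L), np.log(V), 1)
    return slope

# Synthetic pore volumes in cubes of increasing linear size (illustrative only).
L = np.array([8, 16, 32, 64, 128], dtype=float)
V = 0.4 * L ** 2.74
print(f"D = {fractal_dimension(L, V):.2f}")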
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
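A toy Python sketch of the periodic Pulay idea follows (plain linear mixing on most iterations, a Pulay/DIIS extrapolation over the stored history every few iterations), applied to a generic fixed-point map standing in for a self-consistent field iteration. The mixing parameter, period, history depth, and test problem are arbitrary illustrative choices, not the authors' settings.

import numpy as np

def periodic_pulay(g, x0, beta=0.2, period=4, history=5, tol=1e-10, maxit=200):
    """Solve x = g(x): linear mixing on most iterations, Pulay (DIIS)
    extrapolation over the stored residual history every `period` iterations."""
    x = np.asarray(x0, dtype=float)
    xs, fs = [], []                                   # iterate and residual history
    for it in range(1, maxit + 1):
        f = g(x) - x                                  # fixed-point residual
        xs.append(x.copy()); fs.append(f.copy())
        xs, fs = xs[-history:], fs[-history:]
        if np.linalg.norm(f) < tol:
            return x, it
        if it % period == 0 and len(fs) > 1:
            n = len(fs)
            B = np.empty((n + 1, n + 1))              # bordered Gram matrix of residuals
            B[:n, :n] = [[fi @ fj for fj in fs] for fi in fs]
            B[:n, n] = B[n, :n] = 1.0
            B[n, n] = 0.0
            rhs = np.zeros(n + 1); rhs[n] = 1.0
            c = np.linalg.lstsq(B, rhs, rcond=None)[0][:n]   # DIIS coefficients
            x = sum(ci * (xi + beta * fi) for ci, xi, fi in zip(c, xs, fs))
        else:
            x = x + beta * f                          # plain linear mixing
    return x, maxit

# Toy linear fixed-point problem (illustrative stand-in for an SCF cycle).
A = np.array([[0.6, 0.2], [0.1, 0.5]])
b = np.array([1.0, -1.0])
x, iters = periodic_pulay(lambda x: A @ x + b, np.zeros(2))
print(x, iters)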
The radiation environment of OSO missions from 1974 to 1978
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1973-01-01
Trapped particle radiation levels on several OSO missions were calculated for nominal trajectories using improved computational methods and new electron environment models. Temporal variations of the electron fluxes were considered and partially accounted for. Magnetic field calculations were performed with a current field model and extrapolated to a later epoch with linear time terms. Orbital flux integration results, which are presented in graphical and tabular form, are analyzed, explained, and discussed.
Density functional Theory Based Generalized Effective Fragment Potential Method (Postprint)
2014-07-01
is acceptable for other applications) leads to induced dipole moments within 10^-6 to 10^-7 au of the precise values. Thus, the applied field of 10^-4... noncovalent interactions. The water-benzene clusters [17] and WATER27 [11] reference values were also obtained at the CCSD(T)/CBS level, except for the clusters... with n = 20, 42 where MP2/CBS was used. The n-alkane dimers [18] benchmark values were CCSD(T)/CBS for ethane to butane and a linear extrapolation method
Superresolution SAR Imaging Algorithm Based on Mvm and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach for improving synthetic aperture radar (SAR) resolution, which uses a minimum weighted norm constraint and minimum variance spectrum estimation. The minimum variance method is a robust high-resolution method for spectrum estimation. Based on the theory of SAR imaging, the signal model of SAR imagery is shown to be amenable to data extrapolation methods for improving the resolution of the SAR image. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated data and actual measured data.
Development of MCAERO wing design panel method with interactive graphics module
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Bristow, D. R.
1984-01-01
A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy.
Bilinear modeling and nonlinear estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.
1989-01-01
New methods are illustrated for online nonlinear estimation, applied to the lateral deflection of an elastic beam from on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance is to be updated in online implementation. This is in contrast to the computational approach of EKF methods, which arises from local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal, nonlinear interpolation between observations are also highlighted. In these methods the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.
NASA Astrophysics Data System (ADS)
Niedzielski, Tomasz; Kosek, Wiesław
2008-02-01
This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ3 time-series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ3 time-series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ3 reveal the best accuracy for LS + AR.
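A univariate LS + AR scheme of the kind used as a baseline above can be sketched in a few lines of Python; the harmonic periods, AR order, and synthetic series below are assumptions for illustration, and the multivariate (MAR) extension with AAM as an explanatory variable is not shown.

import numpy as np

def ls_ar_forecast(t, y, t_future, periods=(365.24, 182.62), ar_order=5):
    """LS + AR sketch: least-squares fit of a linear trend plus harmonics,
    extrapolated forward, plus an autoregressive prediction of the residuals."""
    def design(tt):
        cols = [np.ones_like(tt), tt]
        for p in periods:
            cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    resid = y - design(t) @ beta

    # AR coefficients for the residuals, fitted by ordinary least squares.
    X = np.column_stack([resid[ar_order - k - 1: len(resid) - k - 1]
                         for k in range(ar_order)])
    a, *_ = np.linalg.lstsq(X, resid[ar_order:], rcond=None)

    # Recursive AR prediction of the residuals over the forecast horizon.
    hist, preds = list(resid[-ar_order:]), []
    for _ in t_future:
        nxt = float(np.dot(a, hist[::-1]))
        preds.append(nxt)
        hist = hist[1:] + [nxt]
    return design(np.asarray(t_future)) @ beta + np.array(preds)

# Synthetic daily LOD-like series (illustrative only).
t = np.arange(3000.0)
y = 1.0 + 2e-5 * t + 0.4 * np.sin(2 * np.pi * t / 365.24) + 0.05 * np.random.randn(t.size)
print(ls_ar_forecast(t, y, np.arange(3000.0, 3010.0)))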
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-01-01
Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, beside interpolation, were the determination of the extrapolation performance of an ANN model, which was developed for the prediction of DO content in the Danube River, and the assessment of the relationship between the significance of inputs and the prediction error in the presence of values which were out of the range of training. The applied ANN is a polynomial neural network (PNN) which performs embedded selection of the most important inputs during learning, and provides a model in the form of linear and non-linear polynomial functions, which can then be used for a detailed analysis of the significance of inputs. The available dataset, which contained 1912 monitoring records for 17 water quality parameters, was split into a "regular" subset that contains normally distributed and low variability data, and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R² = 0.82), but it was not robust in extrapolation (R² = 0.63). The analysis of extrapolation results has shown that the prediction errors are correlated with the significance of inputs. Namely, the out-of-training range values of the inputs with low importance do not affect significantly the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. It was observed that DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over pH and BOD.
An automatic multigrid method for the solution of sparse linear systems
NASA Technical Reports Server (NTRS)
Shapira, Yair; Israeli, Moshe; Sidi, Avram
1993-01-01
An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found better than known strategies.
Extrapolating bound state data of anions into the metastable domain
NASA Astrophysics Data System (ADS)
Feuerbacher, Sven; Sommerfeld, Thomas; Cederbaum, Lorenz S.
2004-10-01
Computing energies of electronically metastable resonance states is still a great challenge. Both scattering techniques and quantum-chemistry-based L² methods are very time consuming. Here we investigate two more economical extrapolation methods. Extrapolating bound-state energies into the metastable region using increased nuclear charges was suggested almost 20 years ago. We critically evaluate this attractive technique employing our complex absorbing potential/Green's function method, which allows us to follow a bound state into the continuum. Using the 2Πg resonance of N2- and the 2Πu resonance of CO2- as examples, we found that the extrapolation works surprisingly well. The second extrapolation method involves increasing the bond lengths until the sought resonance becomes stable. The keystone is to extrapolate the attachment energy and not the total energy of the system. This method has the great advantage that the whole potential energy curve is obtained with quite good accuracy by the extrapolation. Limitations of the two techniques are discussed.
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Kumarasiri, A; Chetvertkov, M
2014-06-01
Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for the dose of the day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken, were selected. Furthermore, a small FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42 mm more from the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7 mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior borders were 0.8 ± 0.5 mm and 3.0 ± 1.5 mm respectively, and increased to 2.7 ± 2.2 mm and 5.9 ± 1.9 mm at 4.2 cm away. The mean error within the CBCT borders was 1.16 ± 0.54 mm. The overall errors within the 4.2 cm expansion were 2.0 ± 1.2 mm (superior) and 4.5 ± 1.6 mm (inferior). Conclusion: The overall error in the inferior direction is larger due to larger unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
Magnetic Nulls and Super-radial Expansion in the Solar Corona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Sarah E.; Dalmasse, Kevin; Tomczyk, Steven
Magnetic fields in the Sun's outer atmosphere, the corona, control both solar-wind acceleration and the dynamics of solar eruptions. We present the first clear observational evidence of coronal magnetic nulls in off-limb linearly polarized observations of pseudostreamers, taken by the Coronal Multichannel Polarimeter (CoMP) telescope. These nulls represent regions where magnetic reconnection is likely to act as a catalyst for solar activity. CoMP linear-polarization observations also provide an independent, coronal proxy for magnetic expansion into the solar wind, a quantity often used to parameterize and predict the solar wind speed at Earth. We introduce a new method for explicitly calculating expansion factors from CoMP coronal linear-polarization observations, which does not require photospheric extrapolations. We conclude that linearly polarized light is a powerful new diagnostic of critical coronal magnetic topologies and the expanding magnetic flux tubes that channel the solar wind.
Magnetic Nulls and Super-Radial Expansion in the Solar Corona
NASA Technical Reports Server (NTRS)
Gibson, Sarah E.; Dalmasse, Kevin; Rachmeler, Laurel A.; De Rosa, Marc L.; Tomczyk, Steven; De Toma, Giuliana; Burkepile, Joan; Galloy, Michael
2017-01-01
Magnetic fields in the Sun's outer atmosphere, the corona, control both solar-wind acceleration and the dynamics of solar eruptions. We present the first clear observational evidence of coronal magnetic nulls in off-limb linearly polarized observations of pseudostreamers, taken by the Coronal Multichannel Polarimeter (CoMP) telescope. These nulls represent regions where magnetic reconnection is likely to act as a catalyst for solar activity. CoMP linear-polarization observations also provide an independent, coronal proxy for magnetic expansion into the solar wind, a quantity often used to parameterize and predict the solar wind speed at Earth. We introduce a new method for explicitly calculating expansion factors from CoMP coronal linear-polarization observations, which does not require photospheric extrapolations. We conclude that linearly polarized light is a powerful new diagnostic of critical coronal magnetic topologies and the expanding magnetic flux tubes that channel the solar wind.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparks, R.B.; Aydogan, B.
In the development of new radiopharmaceuticals, animal studies are typically performed to get a first approximation of the expected radiation dose in humans. This study evaluates the performance of some commonly used data extrapolation techniques to predict residence times in humans using data collected from animals. Residence times were calculated using animal and human data, and distributions of ratios of the animal results to human results were constructed for each extrapolation method. Four methods using animal data to predict human residence times were examined: (1) using no extrapolation, (2) using relative organ mass extrapolation, (3) using physiological time extrapolation, and (4) using a combination of the mass and time methods. The residence time ratios were found to be log normally distributed for the nonextrapolated and extrapolated data sets. The use of relative organ mass extrapolation yielded no statistically significant change in the geometric mean or variance of the residence time ratios as compared to using no extrapolation. Physiologic time extrapolation yielded a statistically significant improvement (p < 0.01, paired t test) in the geometric mean of the residence time ratio from 0.5 to 0.8. Combining mass and time methods did not significantly improve the results of using time extrapolation alone. 63 refs., 4 figs., 3 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1987-12-10
A new semi-empirical method, based on the use of the P-factor (P = NpNn/(Np + Nn)), is shown to simplify significantly the systematics of atomic masses. Its use is illustrated for actinide nuclei, where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rather than by extrapolation, as is common in other mass models.
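The P-factor itself is trivial to evaluate; a tiny Python illustration follows, where the valence proton and neutron numbers are hypothetical and assumed to have been counted relative to the nearest closed shells beforehand.

def p_factor(Np, Nn):
    """P-factor P = Np*Nn/(Np + Nn) from the valence proton and neutron counts."""
    return Np * Nn / (Np + Nn) if (Np + Nn) else 0.0

# Example with hypothetical valence nucleon counts.
print(p_factor(4, 10))   # 2.857...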
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupled with thermodynamic data has become a popular approach for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time intensive. Extrapolation methods, which are derived from Taylor expansion, can provide approximate results with high computational efficiency, and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrates the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second-order extrapolation approach for the multiphase-field model.
NASA Astrophysics Data System (ADS)
Hernández-Pajares, Manuel; Garcia-Fernández, Miquel; Rius, Antonio; Notarpietro, Riccardo; von Engeln, Axel; Olivares-Pulido, Germán.; Aragón-Àngel, Àngela; García-Rigo, Alberto
2017-08-01
The new radio-occultation (RO) instrument on board the future EUMETSAT Polar System-Second Generation (EPS-SG) satellites, flying at a height of 820 km, is primarily focusing on neutral atmospheric profiling. It will also provide an opportunity for RO ionospheric sounding, but only below impact heights of 500 km, in order to guarantee a full data gathering of the neutral part. This will leave a gap of 320 km, which impedes the application of the direct inversion techniques to retrieve the electron density profile. To overcome this challenge, we have looked for new ways (accurate and simple) of extrapolating the electron density (also applicable to other low-Earth orbiting, LEO, missions like CHAMP): a new Vary-Chap Extrapolation Technique (VCET). VCET is based on the scale height behavior, linearly dependent on the altitude above hmF2. This allows extrapolating the electron density profile for impact heights above its peak height (this is the case for EPS-SG), up to the satellite orbital height. VCET has been assessed with more than 3700 complete electron density profiles obtained in four representative scenarios of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) in the United States and the Formosa Satellite Mission 3 (FORMOSAT-3) in Taiwan, in solar maximum and minimum conditions, and geomagnetically disturbed conditions, by applying an updated Improved Abel Transform Inversion technique to dual-frequency GPS measurements. It is shown that VCET performs much better than other classical Chapman models, with 60% of occultations showing relative extrapolation errors below 20%, in contrast with conventional Chapman model extrapolation approaches with 10% or less of the profiles with relative error below 20%.
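A generic Vary-Chap profile, i.e. a Chapman layer whose scale height grows linearly with altitude above the F2 peak, can be written down as a short Python sketch. The functional form, parameter names, and numbers below are illustrative assumptions and not the exact parameterization calibrated in the study.

import numpy as np

def vary_chap(h, NmF2, hmF2, H0, dHdh):
    """Chapman-type electron density profile whose scale height H increases
    linearly with altitude above the F2 peak (a generic Vary-Chap form)."""
    H = H0 + dHdh * np.maximum(h - hmF2, 0.0)   # linear scale height above the peak
    z = (h - hmF2) / H
    return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Hypothetical topside extrapolation from a 300 km peak up to an 820 km orbit.
h = np.linspace(300.0, 820.0, 6)
print(vary_chap(h, NmF2=1.0e12, hmF2=300.0, H0=60.0, dHdh=0.15))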
A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND
Bessonova, O.V.; Khokhlova, V.A.; Canney, M.S.; Bailey, M.R.; Crum, L.A.
2010-01-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.
A derating method for therapeutic applications of high intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.
2010-05-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional methods of linear-based imaging limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters on the mirror, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.
High speed civil transport: Sonic boom softening and aerodynamic optimization
NASA Technical Reports Server (NTRS)
Cheung, Samson
1994-01-01
An improvement in sonic boom extrapolation techniques has been the desire of aerospace designers for years. This is because the linear acoustic theory developed in the 60's is incapable of predicting the nonlinear phenomenon of shock wave propagation. On the other hand, CFD techniques are too computationally expensive to employ on sonic boom problems. Therefore, this research focused on the development of a fast and accurate sonic boom extrapolation method that solves the Euler equations for axisymmetric flow. This new technique has brought the sonic boom extrapolation techniques up to the standards of the 90's. Parallel computing is a fast growing subject in the field of computer science because of its promising speed. A new optimizer (IIOWA) for the parallel computing environment has been developed and tested for aerodynamic drag minimization. This is a promising method for CFD optimization making use of the computational resources of workstations, which unlike supercomputers can spend most of their time idle. Finally, the OAW concept is attractive because of its overall theoretical performance. In order to fully understand the concept, a wind-tunnel model was built and is currently being tested at NASA Ames Research Center. The CFD calculations performed under this cooperative agreement helped to identify the problem of the flow separation, and also aided the design by optimizing the wing deflection for roll trim.
2006-07-01
linearity; (4) determination of polarization as a function of radiographic parameters; and (5) determination of the effect of binding energy on... hydroxyapatite. Type II calcifications are known to be associated with carcinoma, while it is generally accepted that the exclusive finding of type I... concentrate on the extrapolation of the Rh target spectra. The extrapolation was split in two parts. Below 24 keV we used the parameters from Boone's paper
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
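For reference, the traditional mono-exponential back-extrapolation that the study compares against can be sketched in a few lines of Python; the sampling times, concentrations, dose, and fitting window below are hypothetical.

import numpy as np

def plasma_volume_backextrap(t_min, conc_mg_per_l, dose_mg, window=(2.0, 5.0)):
    """Mono-exponential back-extrapolation: fit ln(concentration) versus time
    over an early post-mixing window, extrapolate to t = 0, and divide the
    injected dose by the extrapolated concentration."""
    t, c = np.asarray(t_min), np.asarray(conc_mg_per_l)
    sel = (t >= window[0]) & (t <= window[1])
    slope, intercept = np.polyfit(t[sel], np.log(c[sel]), 1)
    c0 = np.exp(intercept)               # back-extrapolated concentration at t = 0
    return dose_mg / c0                  # plasma volume in litres

# Hypothetical indocyanine green samples (mg/L) after a 25 mg injection.
t = np.array([2.0, 3.0, 4.0, 5.0])
c = np.array([8.1, 7.4, 6.8, 6.2])
print(f"Estimated plasma volume: {plasma_volume_backextrap(t, c, 25.0):.2f} L")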
NASA Astrophysics Data System (ADS)
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
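The power-law extrapolation of unmeasured discharge that extrap helps select can be illustrated with a minimal Python sketch (this is not the USGS tool itself); the normalized profile, power-law exponent, and measured depth range below are made up.

import numpy as np

def power_law_extrapolation(z, u, z_bottom, z_top):
    """Fit u = a * z**b to the measured, normalized velocity profile
    (z = height above the bed divided by total depth) and integrate the fit
    over the unmeasured bottom [0, z_bottom] and top [z_top, 1] portions."""
    b, log_a = np.polyfit(np.log(z), np.log(u), 1)
    a = np.exp(log_a)
    segment = lambda z1, z2: a / (b + 1.0) * (z2 ** (b + 1.0) - z1 ** (b + 1.0))
    q_measured = float(np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(z)))   # trapezoid rule
    return q_measured, segment(0.0, z_bottom), segment(z_top, 1.0)

# Hypothetical profile measured between 10% and 90% of the depth.
z = np.linspace(0.1, 0.9, 9)
u = 1.1 * z ** (1.0 / 6.0)               # synthetic 1/6 power-law data
print(power_law_extrapolation(z, u, 0.1, 0.9))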
Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.
Sakaino, Hidetomo
2016-09-01
Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating the model parameters with a novel time-varying energy balancer that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.
Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L
2014-10-01
Three competing mathematical fitting models of chemical stability (related-substance growth) when using high-temperature data to predict room-temperature shelf-life (a point-by-point estimation method, a linear fit method, and an isoconversion method) were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Data calculated with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates when fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error.
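The isoconversion idea can be illustrated with a short Python sketch: the times needed to reach the specification limit at several accelerated temperatures are fitted with the linearized Arrhenius equation and extrapolated to the storage temperature. The temperatures and times below are hypothetical, and the sketch ignores confidence intervals.

import numpy as np

def arrhenius_shelf_life(temps_c, t_iso_days, target_c=25.0):
    """Fit ln(time-to-specification) against 1/T (in kelvin) and extrapolate
    the straight line to the storage temperature."""
    inv_T = 1.0 / (np.asarray(temps_c) + 273.15)
    slope, intercept = np.polyfit(inv_T, np.log(t_iso_days), 1)
    return float(np.exp(intercept + slope / (target_c + 273.15)))

# Hypothetical isoconversion times (days) at accelerated temperatures.
temps = [50.0, 60.0, 70.0]
t_iso = [180.0, 60.0, 21.0]
print(f"Projected shelf-life at 25 C: {arrhenius_shelf_life(temps, t_iso):.0f} days")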
Extending the Operational Envelope of a Turbofan Engine Simulation into the Sub-Idle Region
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes Walter; Hamley, Andrew J.; Guo, Ten-Huei; Litt, Jonathan S.
2016-01-01
In many non-linear gas turbine simulations, operation in the sub-idle region can lead to model instability. This paper lays out a method for extending the operational envelope of a map based gas turbine simulation to include the sub-idle region. This method develops a multi-simulation solution where the baseline component maps are extrapolated below the idle level and an alternate model is developed to serve as a safety net when the baseline model becomes unstable or unreliable. Sub-idle model development takes place in two distinct operational areas, windmilling/shutdown and purge/cranking/startup. These models are based on derived steady state operating points with transient values extrapolated between initial (known) and final (assumed) states. Model transitioning logic is developed to predict baseline model sub-idle instability, and transition smoothly and stably to the backup sub-idle model. Results from the simulation show a realistic approximation of sub-idle behavior as compared to generic sub-idle engine performance that allows the engine to operate continuously and stably from shutdown to full power.
Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.
Dropkin, Greg
2016-11-24
Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low dose subcohorts, using anonymous public data for the 1958 - 1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from 0 - 100 mGy and 0 - 20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit, and capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, or when removing exceptional data cells, or adjusting neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10 - 45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasipoisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis, and Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. The risks should be reflected in protection standards.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morales, Johnny E., E-mail: johnny.morales@lh.org.
Purpose: An experimental extrapolation technique is presented, which can be used to determine the relative output factors for very small x-ray fields using Gafchromic EBT3 film. Methods: Relative output factors were measured for the Brainlab SRS cones ranging in diameter from 4 to 30 mm on a Novalis Trilogy linear accelerator with 6 MV SRS x-rays. The relative output factor was determined from an experimental reducing circular region of interest (ROI) extrapolation technique developed to remove the effects of volume averaging. This was achieved by scanning the EBT3 film measurements at a high scanning resolution of 1200 dpi. From the high resolution scans, the size of the circular regions of interest was varied to produce a plot of relative output factors versus area of analysis. The plot was then extrapolated to zero to determine the relative output factor corresponding to zero volume. Results: Results have shown that for a 4 mm field size, the extrapolated relative output factor was measured as 0.651 ± 0.018, as compared to 0.639 ± 0.019 and 0.633 ± 0.021 for 0.5 and 1.0 mm diameters of analysis, respectively. This shows a change in the relative output factors of 1.8% and 2.8% at these comparative region-of-interest sizes. In comparison, the 25 mm cone had negligible differences in the measured output factor between the zero extrapolation and the 0.5 and 1.0 mm diameter ROIs. Conclusions: This work shows that for very small fields such as 4.0 mm cone sizes, a measurable difference can be seen in the relative output factor depending on the circular ROI and the size of the area of analysis using radiochromic film dosimetry. The authors recommend scanning the Gafchromic EBT3 film at a resolution of 1200 dpi for cone sizes less than 7.5 mm and utilizing an extrapolation technique for the output factor measurements of very small field dosimetry.
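The reducing-ROI extrapolation lends itself to a short Python sketch; the film readings below are hypothetical, and the assumption that the measured output factor varies linearly with the ROI area over this range is an illustrative simplification.

import numpy as np

def extrapolate_output_factor(roi_diameters_mm, output_factors):
    """Fit the measured relative output factor against the circular ROI area
    and extrapolate linearly to zero area to remove volume averaging."""
    areas = np.pi * (np.asarray(roi_diameters_mm) / 2.0) ** 2
    _slope, intercept = np.polyfit(areas, output_factors, 1)
    return intercept                      # value at zero analysis area

# Hypothetical readings for a very small cone with shrinking ROI diameters.
diam = [1.0, 0.8, 0.6, 0.5]
of = [0.633, 0.636, 0.640, 0.642]
print(f"Extrapolated relative output factor: {extrapolate_output_factor(diam, of):.3f}")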
NASA Astrophysics Data System (ADS)
Green, Jonathan; Schmitz, Oliver; Severn, Greg; van Ruremonde, Lars; Winters, Victoria
2017-10-01
The MARIA device at the UW-Madison is used primarily to investigate the dynamics and fueling of neutral particles in helicon discharges. A new systematic method is in development to measure key plasma and neutral particle parameters by spectroscopic methods. The setup relies on spectroscopic line ratios for investigating basic plasma parameters and extrapolation to other states using a collisional radiative model. Active pumping using a Nd:YAG pumped dye laser is used to benchmark and correct the underlying atomic data for the collisional radiative model. First results show a matching linear dependence of electron density and laser-induced fluorescence on the magnetic field above 500 G. This linear dependence agrees with the helicon dispersion relation and implies that MARIA can reliably sustain the helicon mode and can support future measurements. This work was funded by the NSF CAREER award PHY-1455210.
Interpretation guidelines of a standard Y-chromosome STR 17-plex PCR-CE assay for crime casework.
Roewer, Lutz; Geppert, Maria
2012-01-01
Y-STR analysis is an invaluable tool to examine evidence in sexual assault cases and in other forensic casework. Unambiguous detection of the male component in DNA mixtures with a high female background is still the main field of application of forensic Y-STR haplotyping. In recent years, powerful technologies including a 17-locus multiplex PCR assay have been introduced into forensic laboratories. At the same time, statistical methods have been developed and adapted for interpretation of a nonrecombining, linear marker such as the Y chromosome, which shows a strongly clustered geographical distribution due to the linear inheritance and the patrilocality of ancestral groups. Large population databases, namely the Y-STR Haplotype Reference Database (YHRD), have been established to assess the evidentiary value of Y-STR matches by means of frequency estimation methods (counting and extrapolation).
Present constraints on the H-dibaryon at the physical point from Lattice QCD
Beane, S. R.; Chang, E.; Detmold, W.; ...
2011-11-10
The current constraints from Lattice QCD on the existence of the H-dibaryon are discussed. With only two significant Lattice QCD calculations of the H-dibaryon binding energy at approximately the same lattice spacing, the form of the chiral and continuum extrapolations to the physical point is not determined. In this brief report, an extrapolation that is quadratic in the pion mass, motivated by low-energy effective field theory, is considered. An extrapolation that is linear in the pion mass is also considered, a form that has no basis in the effective field theory, but is found to describe the light-quark mass dependence observed in Lattice QCD calculations of the octet baryon masses. In both cases, the extrapolation to the physical pion mass allows for a bound H-dibaryon or a near-threshold scattering state.
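As a schematic of the two extrapolation forms compared above: with only two lattice points, each two-parameter form (linear or quadratic in the pion mass) is fixed exactly and then evaluated at the physical pion mass. The lattice pion masses and binding energies below are hypothetical placeholders, not the values used in the paper.

    # minimal sketch of linear versus quadratic pion-mass extrapolation
    m_phys = 139.57  # physical pion mass, MeV

    # hypothetical (pion mass in MeV, binding energy in MeV) lattice results
    pts = [(389.0, 13.2), (230.0, 7.4)]

    def extrapolate(pts, power):
        (m1, b1), (m2, b2) = pts
        x1, x2 = m1 ** power, m2 ** power
        slope = (b1 - b2) / (x1 - x2)
        return b1 - slope * x1 + slope * m_phys ** power

    print("linear in m_pi    :", extrapolate(pts, 1))
    print("quadratic in m_pi :", extrapolate(pts, 2))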
Non-linearities in Holocene floodplain sediment storage
NASA Astrophysics Data System (ADS)
Notebaert, Bastiaan; Nils, Broothaerts; Jean-François, Berger; Gert, Verstraeten
2013-04-01
Floodplain sediment storage is an important part of the sediment cascade model, buffering sediment delivery between hillslopes and oceans, but in contrast to other global sediment budget components it has hitherto not been fully quantified. Quantification and dating of floodplain sediment storage are demanding in terms of data and cost, limiting contemporary estimates for larger spatial units to simple linear extrapolations from a number of smaller catchments. In this paper we present non-linearities in both space and time for floodplain sediment budgets in three different catchments. Holocene floodplain sediments of the Dijle catchment in the Belgian loess region show a clear distinction between morphological stages: early Holocene peat accumulation, followed by mineral floodplain aggradation from the start of the agricultural period on. Contrary to previous assumptions, detailed dating of this morphological change at different locations shows an important non-linearity in the geomorphologic changes of the floodplain, both between and within cross sections. A second example comes from the Pre-Alpine French Valdaine region, where non-linearities and complex system behavior exist between (temporal) patterns of soil erosion and floodplain sediment deposition. In this region Holocene floodplain deposition is characterized by different cut-and-fill phases. The quantification of these different phases shows a complicated pattern of increasing and decreasing floodplain sediment storage, which undermines the simple picture of steadily increasing sediment accumulation over time. Although fill stages may correspond with large quantities of deposited sediment, and traditionally calculated sedimentation rates for such stages are high, they do not necessarily correspond with a long-term net increase in floodplain deposition. A third example is based on the floodplain sediment storage in the Amblève catchment, located in the Belgian Ardennes uplands. Detailed floodplain sediment quantification for this catchment shows that a strong multifractality is present in the scaling relationship between sediment storage and catchment area, depending on geomorphic landscape properties. Extrapolation of data from one spatial scale to another inevitably leads to large errors: when only the data of the upper floodplains are considered, a regression analysis results in an overestimation of total floodplain deposition for the entire catchment of circa 115%. This example demonstrates multifractality and related non-linearity in scaling relationships, which influences extrapolations beyond the initial range of measurements. These different examples indicate how traditional extrapolation techniques and assumptions in sediment budget studies can be challenged by field data, further complicating our understanding of these systems. Although simplifications are often necessary when working at large spatial scales, such non-linearities pose challenges for a better understanding of system behavior.
Evaluation of algorithms for geological thermal-inertia mapping
NASA Technical Reports Server (NTRS)
Miller, S. H.; Watson, K.
1977-01-01
The errors incurred in producing a thermal inertia map are of three general types: measurement, analysis, and model simplification. To emphasize the geophysical relevance of these errors, they were expressed in terms of uncertainty in thermal inertia and compared with the thermal inertia values of geologic materials. Thus the applications and practical limitations of the technique were illustrated. All errors were calculated using the parameter values appropriate to a site at the Raft River, Id. Although these error values serve to illustrate the magnitudes that can be expected from the three general types of errors, extrapolation to other sites should be done using parameter values particular to the area. Three surface temperature algorithms were evaluated: linear Fourier series, finite difference, and Laplace transform. In terms of resulting errors in thermal inertia, the Laplace transform method is the most accurate (260 TIU), the forward finite difference method is intermediate (300 TIU), and the linear Fourier series method the least accurate (460 TIU).
NASA Technical Reports Server (NTRS)
Cuddihy, Edward F. (Inventor); Willis, Paul B. (Inventor)
1989-01-01
A method of predicting aging of polymers operates by heating a polymer in the outdoors to an elevated temperature until a change of property is induced. The test is conducted at a plurality of temperatures to establish a linear Arrhenius plot which is extrapolated to predict the induction period for failure of the polymer at ambient temperature. An Outdoor Photo Thermal Aging Reactor (OPTAR) is also described including a heatable platen for receiving a sheet of polymer, means to heat the platen, and switching means such as a photoelectric switch for turning off the heater during dark periods.
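A minimal sketch of the Arrhenius extrapolation step described in the patent abstract: the logarithm of the induction time measured at several elevated platen temperatures is fitted against inverse absolute temperature, and the fitted line is evaluated at the ambient temperature. The temperatures and induction times below are hypothetical.

    import numpy as np

    R = 8.314  # gas constant, J/(mol*K)

    # hypothetical induction times (hours) at elevated outdoor platen temperatures
    T_K  = np.array([358.0, 373.0, 388.0])
    t_hr = np.array([2100.0, 760.0, 300.0])

    slope, intercept = np.polyfit(1.0 / T_K, np.log(t_hr), 1)  # ln t = slope/T + intercept
    Ea = slope * R                                             # apparent activation energy

    T_ambient = 298.0
    t_ambient = np.exp(intercept + slope / T_ambient)
    print(f"Ea ~ {Ea / 1000:.0f} kJ/mol, predicted ambient induction time ~ {t_ambient:,.0f} h")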
NASA Technical Reports Server (NTRS)
Cuddihy, Edward F. (Inventor); Willis, Paul B. (Inventor)
1990-01-01
A method of predicting aging of polymers operates by heating a polymer in the outdoors to an elevated temperature until a change of property is induced. The test is conducted at a plurality of temperatures to establish a linear Arrhenius plot which is extrapolated to predict the induction period for failure of the polymer at ambient temperature. An Outdoor Photo Thermal Aging Reactor (OPTAR) is also described including a heatable platen for receiving a sheet of polymer, means to heat the platen and switching means such as a photoelectric switch for turning off the heater during dark periods.
Estimating the size of an open population using sparse capture-recapture data.
Huggins, Richard; Stoklosa, Jakub; Roach, Cameron; Yip, Paul
2018-03-01
Sparse capture-recapture data from open populations are difficult to analyze using currently available frequentist statistical methods. However, in closed capture-recapture experiments, the Chao sparse estimator (Chao, 1989, Biometrics 45, 427-438) may be used to estimate population sizes when there are few recaptures. Here, we extend the Chao (1989) closed population size estimator to the open population setting by using linear regression and extrapolation techniques. We conduct a small simulation study and apply the models to several sparse capture-recapture data sets. © 2017, The International Biometric Society.
The correlation of fractal structures in the photospheric and the coronal magnetic field
NASA Astrophysics Data System (ADS)
Dimitropoulou, M.; Georgoulis, M.; Isliker, H.; Vlahos, L.; Anastasiadis, A.; Strintzi, D.; Moussas, X.
2009-10-01
Context: This work examines the relation between the fractal properties of the photospheric magnetic patterns and those of the coronal magnetic fields in solar active regions. Aims: We investigate whether there is any correlation between the fractal dimensions of the photospheric structures and the magnetic discontinuities formed in the corona. Methods: To investigate the connection between the photospheric and coronal complexity, we used a nonlinear force-free extrapolation method that reconstructs the 3d magnetic fields using 2d observed vector magnetograms as boundary conditions. We then located the magnetic discontinuities, which are considered as spatial proxies of reconnection-related instabilities. These discontinuities form well-defined volumes, called here unstable volumes. We calculated the fractal dimensions of these unstable volumes and compared them to the fractal dimensions of the boundary vector magnetograms. Results: Our results show no correlation between the fractal dimensions of the observed 2d photospheric structures and the extrapolated unstable volumes in the corona, when nonlinear force-free extrapolation is used. This result is independent of efforts to (1) bring the photospheric magnetic fields closer to a nonlinear force-free equilibrium and (2) omit the lower part of the modeled magnetic field volume that is almost completely filled by unstable volumes. A significant correlation between the fractal dimensions of the photospheric and coronal magnetic features is only observed at the zero level (lower limit) of approximation of a current-free (potential) magnetic field extrapolation. Conclusions: We conclude that the complicated transition from photospheric non-force-free fields to coronal force-free ones hampers any direct correlation between the fractal dimensions of the 2d photospheric patterns and their 3d counterparts in the corona at the nonlinear force-free limit, which can be considered as a second level of approximation in this study. Correspondingly, in the zero and first levels of approximation, namely, the potential and linear force-free extrapolation, respectively, we reveal a significant correlation between the fractal dimensions of the photospheric and coronal structures, which can be attributed to the lack of electric currents or to their purely field-aligned orientation.
Proton radius from electron scattering data
NASA Astrophysics Data System (ADS)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; Meekins, David; Norum, Blaine; Sawatzky, Brad
2016-05-01
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon, and Stanford. Methods: We make use of stepwise regression techniques using the F test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F test as well as the Akaike information criterion justify using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q2 data on GE to select functions which extrapolate to high Q2, we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, GE(Q2) = (1 + Q2/0.66 GeV2)^-2. Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm; either from linear extrapolation of the extremely-low-Q2 data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering results and the muonic hydrogen results are consistent. It is the atomic hydrogen results that are the outliers.
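To make the linear low-Q2 extrapolation concrete, a minimal sketch: at small Q2 the truncated Maclaurin series gives GE(Q2) ≈ 1 − Q2⟨r2⟩/6, so a straight-line fit to low-Q2 form-factor points yields the radius from the slope. The form-factor points below are generated from the dipole quoted in the abstract purely for illustration; they are not the Mainz or Saskatoon data.

    import numpy as np

    hbarc = 0.19733  # GeV*fm, converts GeV^-2 to fm^2

    # illustrative low-Q^2 points generated from the quoted dipole form
    Q2 = np.linspace(0.005, 0.02, 6)               # GeV^2
    GE = (1.0 + Q2 / 0.66) ** -2

    slope, intercept = np.polyfit(Q2, GE, 1)       # G_E ~ 1 - Q^2 <r^2>/6 at low Q^2
    r2_fm2 = -6.0 * slope * hbarc ** 2
    print(f"extracted charge radius ~ {np.sqrt(r2_fm2):.3f} fm")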
NASA Astrophysics Data System (ADS)
Häberlen, Oliver D.; Chung, Sai-Cheong; Stener, Mauro; Rösch, Notker
1997-03-01
A series of gold clusters spanning the size range from Au6 through Au147 (with diameters from 0.7 to 1.7 nm) in icosahedral, octahedral, and cuboctahedral structure has been theoretically investigated by means of a scalar relativistic all-electron density functional method. One of the main objectives of this work was to analyze the convergence of cluster properties toward the corresponding bulk metal values and to compare the results obtained for the local density approximation (LDA) to those for a generalized gradient approximation (GGA) to the exchange-correlation functional. The average gold-gold distance in the clusters increases with their nuclearity and correlates essentially linearly with the average coordination number in the clusters. An extrapolation to the bulk coordination of 12 yields a gold-gold distance of 289 pm in LDA, very close to the experimental bulk value of 288 pm, while the extrapolated GGA gold-gold distance is 297 pm. The cluster cohesive energy varies linearly with the inverse of the calculated cluster radius, indicating that the surface-to-volume ratio is the primary determinant of the convergence of this quantity toward bulk. The extrapolated LDA binding energy per atom, 4.7 eV, overestimates the experimental bulk value of 3.8 eV, while the GGA value, 3.2 eV, underestimates the experiment by almost the same amount. The calculated ionization potentials and electron affinities of the clusters may be related to the metallic droplet model, although deviations due to the electronic shell structure are noticeable. The GGA extrapolation to bulk values yields 4.8 and 4.9 eV for the ionization potential and the electron affinity, respectively, remarkably close to the experimental polycrystalline work function of bulk gold, 5.1 eV. Gold 4f core level binding energies were calculated for sites with bulk coordination and for different surface sites. The core level shifts for the surface sites are all positive and distinguish among the corner, edge, and face-centered sites; sites in the first subsurface layer show still small positive shifts.
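A minimal sketch of the size-to-bulk extrapolation used for the cohesive energy above: the per-atom binding energy is fitted linearly against the inverse cluster radius, reflecting the surface-to-volume argument, and the intercept is the bulk estimate. The radii and energies below are hypothetical, not the computed cluster values.

    import numpy as np

    # hypothetical (cluster radius in nm, binding energy per atom in eV) pairs
    radius_nm = np.array([0.35, 0.55, 0.75, 0.95])
    e_bind    = np.array([2.9, 3.5, 3.8, 4.0])

    slope, intercept = np.polyfit(1.0 / radius_nm, e_bind, 1)
    print(f"extrapolated bulk cohesive energy ~ {intercept:.2f} eV/atom")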
NASA Astrophysics Data System (ADS)
Lee, G. H.; Arnold, S. T.; Eaton, J. G.; Sarkas, H. W.; Bowen, K. H.; Ludewigt, C.; Haberland, H.
1991-03-01
The photodetachment spectra of (H2O)n^- (n = 2-69) and (NH3)n^- (n = 41-1100) have been recorded, and vertical detachment energies (VDEs) were obtained from the spectra. For both systems, the cluster anion VDEs increase smoothly with increasing sizes and most species plot linearly with n^-1/3, extrapolating to a VDE (n = ∞) value which is very close to the photoelectric threshold energy for the corresponding condensed phase solvated electron system. The linear extrapolation of this data to the analogous condensed phase property suggests that these cluster anions are gas phase counterparts to solvated electrons, i.e. they are embryonic forms of hydrated and ammoniated electrons which mature with increasing cluster size toward condensed phase solvated electrons.
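A minimal sketch of this kind of size extrapolation: plot the cluster-anion VDEs against n^-1/3, fit a straight line, and read the intercept as the n = ∞ (bulk) limit. The cluster sizes and VDE values below are hypothetical, not the measured spectra.

    import numpy as np

    # hypothetical (cluster size n, vertical detachment energy in eV) pairs
    n   = np.array([20, 50, 100, 300, 1000])
    vde = np.array([1.1, 1.6, 2.0, 2.5, 2.9])

    slope, intercept = np.polyfit(n ** (-1.0 / 3.0), vde, 1)
    print(f"VDE(n = infinity) ~ {intercept:.2f} eV")  # compare with the bulk threshold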
Statistical security for Social Security.
Soneji, Samir; King, Gary
2012-08-01
The financial viability of Social Security, the single largest U.S. government program, depends on accurate forecasts of the solvency of its intergenerational trust fund. We begin by detailing information necessary for replicating the Social Security Administration's (SSA's) forecasting procedures, which until now has been unavailable in the public domain. We then offer a way to improve the quality of these procedures via age- and sex-specific mortality forecasts. The most recent SSA mortality forecasts were based on the best available technology at the time, which was a combination of linear extrapolation and qualitative judgments. Unfortunately, linear extrapolation excludes known risk factors and is inconsistent with long-standing demographic patterns, such as the smoothness of age profiles. Modern statistical methods typically outperform even the best qualitative judgments in these contexts. We show how to use such methods, enabling researchers to forecast using far more information, such as the known risk factors of smoking and obesity and known demographic patterns. Including this extra information makes a substantial difference. For example, by improving only mortality forecasting methods, we predict three fewer years of net surplus, $730 billion less in Social Security Trust Funds, and program costs that are 0.66% greater for projected taxable payroll by 2031 compared with SSA projections. More important than specific numerical estimates are the advantages of transparency, replicability, reduction of uncertainty, and what may be the resulting lower vulnerability to the politicization of program forecasts. In addition, by offering with this article software and detailed replication information, we hope to marshal the efforts of the research community to include ever more informative inputs and to continue to reduce uncertainties in Social Security forecasts.
Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz
2011-01-01
Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732
NASA Technical Reports Server (NTRS)
Darden, C. M.
1984-01-01
A method for analyzing shock coalescence which includes three dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with and has been combined with a nonlinear sonic boom extrapolation program which is based on the method of characteristics. The extrapolation program is able to extrapolate pressure signatures which include embedded shocks from an initial data line in the plane of symmetry at approximately one body length from the axis of the aircraft to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.
Derosa, Pedro A
2009-06-01
A computationally cheap approach combining time-independent density functional theory (TIDFT) and semiempirical methods with an appropriate extrapolation procedure is proposed to accurately estimate geometrical and electronic properties of conjugated polymers using just a small set of oligomers. The highest occupied molecular orbital-lowest unoccupied molecular orbital gap (HLG) obtained at a TIDFT level (B3PW91) for two polymers, trans-polyacetylene (the simplest conjugated polymer) and the much larger poly(2-methoxy-5-(2,9-ethyl-hexyloxy)-1,4-phenylenevinylene) (MEH-PPV), converges to virtually the same asymptotic value as the excitation energy obtained with time-dependent DFT (TDDFT) calculations using the same functional. For TIDFT geometries, the HLG is found to converge to a value within the experimentally accepted range for the band gap of these polymers when an exponential extrapolation is used; however, if semiempirical geometries are used, a linear fit of the HLG versus 1/n is found to produce the best results. Geometrical parameters are observed to reach a saturation value in good agreement with experimental information within the length of oligomers calculated here, and no extrapolation was considered necessary. Finally, the performance of three different semiempirical methods (AM1, PM3, and MNDO) and, for the TIDFT calculations, the performance of 7 different full electron basis sets (6-311+G**, 6-31++G**, 6-311++G**, 6-31+G**, 6-31G**, 6-31+G*, and 6-31G) is compared, and it is determined that the choice of semiempirical method or basis set does not significantly affect the results. © 2008 Wiley Periodicals, Inc.
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
The Extrapolation of High Altitude Solar Cell I(V) Characteristics to AM0
NASA Technical Reports Server (NTRS)
Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Reinke, William; Blankenship, Kurt; Demers, James
2007-01-01
The high altitude aircraft method has been used at NASA GRC since the early 1960s to calibrate solar cell short circuit current, ISC, to Air Mass Zero (AM0). This method extrapolates ISC to AM0 via the Langley plot method, a logarithmic extrapolation to 0 air mass, and includes corrections for the varying Earth-Sun distance to 1.0 AU and compensating for the non-uniform ozone distribution in the atmosphere. However, other characteristics of the solar cell I(V) curve do not extrapolate in the same way. Another approach is needed to extrapolate VOC and the maximum power point (PMAX) to AM0 illumination. As part of the high altitude aircraft method, VOC and PMAX can be obtained as ISC changes during the flight. These values can then be extrapolated, sometimes interpolated, to the ISC(AM0) value. This approach should be valid as long as the shape of the solar spectrum in the stratosphere does not change too much from AM0. As a feasibility check, the results are compared to AM0 I(V) curves obtained using the NASA GRC X25 based multi-source simulator. This paper investigates the approach on both multi-junction solar cells and sub-cells.
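A minimal sketch of the Langley-plot step described above: the logarithm of the measured short-circuit current is fitted against relative air mass and extrapolated to zero air mass, then rescaled to an Earth-Sun distance of 1.0 AU. The air-mass values, currents, and distance below are hypothetical, not flight data.

    import numpy as np

    def isc_am0(air_mass, isc_mA, earth_sun_au):
        """Langley-plot extrapolation of Isc to zero air mass (sketch)."""
        slope, intercept = np.polyfit(np.asarray(air_mass, float),
                                      np.log(np.asarray(isc_mA, float)), 1)
        # irradiance scales as 1/d^2, so correcting to 1.0 AU multiplies by d^2
        return np.exp(intercept) * earth_sun_au ** 2

    am  = [0.25, 0.30, 0.35, 0.40]            # hypothetical relative air masses
    isc = [132.0, 130.5, 129.1, 127.6]        # hypothetical Isc readings, mA
    print(f"Isc(AM0) ~ {isc_am0(am, isc, earth_sun_au=0.9935):.1f} mA")

VOC and PMAX, recorded as ISC varies during the flight, can then be read off at the extrapolated ISC(AM0) value by simple interpolation, which is the approach the abstract describes.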
Correlation energy extrapolation by many-body expansion
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...
2017-01-09
Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.
Correlation energy extrapolation by many-body expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus
Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals are lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
A study of alternative schemes for extrapolation of secular variation at observatories
Alldredge, L.R.
1976-01-01
The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable in worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work. © 1976.
Extrapolation to Nonequilibrium from Coarse-Grained Response Theory
NASA Astrophysics Data System (ADS)
Basu, Urna; Helden, Laurent; Krüger, Matthias
2018-05-01
Nonlinear response theory, in contrast to linear cases, involves (dynamical) details, and this makes application to many-body systems challenging. From the microscopic starting point we obtain an exact response theory for a small number of coarse-grained degrees of freedom. With it, an extrapolation scheme uses near-equilibrium measurements to predict far-from-equilibrium properties (here, second order responses). Because it does not involve system details, this approach can be applied to many-body systems. It is illustrated in a four-state model and in the near critical Ising model.
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
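As a concrete illustration of one of the polynomial methods surveyed, a minimal sketch of minimal polynomial extrapolation (MPE) applied to a small linear fixed-point iteration; the 2×2 system is a made-up test case, and this is a bare-bones reading of the algorithm rather than the paper's implementation.

    import numpy as np

    def mpe(x_seq):
        """Minimal polynomial extrapolation of a vector sequence (sketch).

        x_seq holds iterates x_0, ..., x_{k+1}; the coefficients c_j are
        obtained from a least-squares solve with c_k = 1, normalized to
        gamma_j, and the limit is estimated as sum_j gamma_j x_j.
        """
        X = np.array(x_seq, dtype=float)     # rows are iterates
        U = np.diff(X, axis=0)               # first differences u_j = x_{j+1} - x_j
        k = U.shape[0] - 1
        c, *_ = np.linalg.lstsq(U[:k].T, -U[k], rcond=None)
        c = np.append(c, 1.0)
        gamma = c / c.sum()
        return gamma @ X[:k + 1]

    # sanity check on the fixed-point iteration x <- A x + b (made-up matrices)
    A = np.array([[0.5, 0.2], [0.1, 0.4]])
    b = np.array([1.0, 2.0])
    x, seq = np.zeros(2), [np.zeros(2)]
    for _ in range(3):
        x = A @ x + b
        seq.append(x.copy())
    print(mpe(seq))                          # matches the exact fixed point
    print(np.linalg.solve(np.eye(2) - A, b))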
NASA Astrophysics Data System (ADS)
Güleçyüz, M. Ç.; Şenyiğit, M.; Ersoy, A.
2018-01-01
The Milne problem is studied in one speed neutron transport theory using the linearly anisotropic scattering kernel which combines forward and backward scatterings (extremely anisotropic scattering) for a non-absorbing medium with specular and diffuse reflection boundary conditions. In order to calculate the extrapolated endpoint for the Milne problem, Legendre polynomial approximation (PN method) is applied and numerical results are tabulated for selected cases as a function of different degrees of anisotropic scattering. Finally, some results are discussed and compared with the existing results in literature.
Extrapolation of sonic boom pressure signatures by the waveform parameter method
NASA Technical Reports Server (NTRS)
Thomas, C. L.
1972-01-01
The waveform parameter method of sonic boom extrapolation is derived and shown to be equivalent to the F-function method. A computer program based on the waveform parameter method is presented and discussed, with a sample case demonstrating program input and output.
NASA Technical Reports Server (NTRS)
Matthews, Clarence W
1953-01-01
An analysis is made of the effects of compressibility on the pressure coefficients about several bodies of revolution by comparing experimentally determined pressure coefficients with corresponding pressure coefficients calculated by the use of the linearized equations of compressible flow. The results show that the theoretical methods predict the subsonic pressure-coefficient changes over the central part of the body but do not predict the pressure-coefficient changes near the nose. Extrapolation of the linearized subsonic theory into the mixed subsonic-supersonic flow region fails to predict a rearward movement of the negative pressure-coefficient peak which occurs after the critical stream Mach number has been attained. Two equations developed from a consideration of the subsonic compressible flow about a prolate spheroid are shown to predict, approximately, the change with Mach number of the subsonic pressure coefficients for regular bodies of revolution of fineness ratio 6 or greater.
SU-E-T-91: Correction Method to Determine Surface Dose for OSL Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, T; Higgins, P
Purpose: OSL detectors are commonly used in the clinic, owing to their numerous advantages such as linear response and negligible energy, angle, and temperature dependence in the clinical range, for verification of doses beyond dmax. However, due to the bulky shielding envelope, these detectors fail to measure skin dose, which is an important indicator of a patient's ability to finish treatment on schedule and of the likelihood of acute side effects. This study aims to optimize the methodology for determining skin dose for conventional accelerators and a flattening-filter-free Tomotherapy unit. Methods: Measurements were done for x-ray beams: 6 MV (Varian Clinac 2300, 10×10 cm^2 open field, SSD = 100 cm) and 5.5 MV (Tomotherapy, 15×40 cm^2 field, SAD = 85 cm). The detectors were placed at the surface of the solid water phantom and at the reference depth (dref = 1.7 cm for the Varian 2300, dref = 1.0 cm for Tomotherapy). The OSL measurements were related to measurements from externally exposed OSLs, and were further corrected to surface dose using an extrapolation method indexed to baseline Attix ion chamber measurements. A consistent use of the extrapolation method involved: 1) irradiation of three OSLs stacked on top of each other on the surface of the phantom; 2) measurement of the relative dose value for each layer; and 3) extrapolation of these values to zero thickness. Results: OSL measurements showed an overestimation of surface doses by a factor of 2.31 for the Varian 2300 and 2.65 for Tomotherapy. The relationships SD^2300 = 0.68 × M^2300 − 12.7 and SD^Tomo = 0.73 × M^Tomo − 13.1 were found to correct the single OSL measurements to surface doses in agreement with the Attix measurements to within 0.1% for both machines. Conclusion: This work provides simple empirical relationships for surface dose measurements using single OSL detectors.
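A minimal sketch of step 3 of the extrapolation procedure above: assign each OSL in the stack an effective depth at its layer centre, fit the relative readings linearly against depth, and take the intercept as the zero-thickness (surface) value. The per-detector thickness and readings below are hypothetical.

    import numpy as np

    osl_thickness_mm = 0.8                               # assumed effective thickness per OSL
    depth_mm = osl_thickness_mm * (np.arange(3) + 0.5)   # centres of the three stacked layers
    reading  = np.array([41.0, 52.0, 61.0])              # hypothetical relative readings, % of dref

    slope, intercept = np.polyfit(depth_mm, reading, 1)
    print(f"reading extrapolated to zero thickness ~ {intercept:.1f} % of the reference dose")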
De Vore, Karl W; Fatahi, Nadia M; Sass, John E
2016-08-01
Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was fit best to a radical equation of the form y = B1x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
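To illustrate the comparison described above, a minimal sketch fitting the same early real-time recoveries with both extrapolation forms, ordinary least squares and the radical equation y = B1x^0.5 + B0, and projecting each to the end of shelf life. The recovery values, time points, and shelf life are hypothetical.

    import numpy as np

    # hypothetical percent recoveries over the first ~20% of shelf life
    t_months = np.array([0.0, 1.0, 2.0, 3.0])
    recovery = np.array([100.0, 97.8, 96.9, 96.2])

    lin = np.polyfit(t_months, recovery, 1)            # ordinary least squares
    rad = np.polyfit(np.sqrt(t_months), recovery, 1)   # y = B1*sqrt(x) + B0

    t_shelf = 15.0                                     # hypothetical shelf life, months
    print("linear  prediction :", np.polyval(lin, t_shelf))
    print("radical prediction :", rad[0] * np.sqrt(t_shelf) + rad[1])

Consistent with the abstract, the linear projection is the more conservative (lower) of the two when the true rate of change slows over time.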
Constructing Current Singularity in a 3D Line-tied Plasma
Zhou, Yao; Huang, Yi-Min; Qin, Hong; ...
2017-12-27
We revisit Parker's conjecture of current singularity formation in 3D line-tied plasmas using a recently developed numerical method, variational integration for ideal magnetohydrodynamics in Lagrangian labeling. With the frozen-in equation built-in, the method is free of artificial reconnection, and hence it is arguably an optimal tool for studying current singularity formation. Using this method, the formation of current singularity has previously been confirmed in the Hahm–Kulsrud–Taylor problem in 2D. In this paper, we extend this problem to 3D line-tied geometry. The linear solution, which is singular in 2D, is found to be smooth for arbitrary system length. However, with finite amplitude, the linear solution can become pathological when the system is sufficiently long. The nonlinear solutions turn out to be smooth for short systems. Nonetheless, the scaling of peak current density versus system length suggests that the nonlinear solution may become singular at finite length. Finally, with the results in hand, we can neither confirm nor rule out this possibility conclusively, since we cannot obtain solutions with system length near the extrapolated critical value.
Equilibrium and Effective Climate Sensitivity
NASA Astrophysics Data System (ADS)
Rugenstein, M.; Bloch-Johnson, J.
2016-12-01
Atmosphere-ocean general circulation models, as well as the real world, take thousands of years to equilibrate to CO2 induced radiative perturbations. Equilibrium climate sensitivity - a fully equilibrated 2xCO2 perturbation - has been used for decades as a benchmark in model intercomparisons, as a test of our understanding of the climate system and paleo proxies, and to predict or project future climate change. Computational costs and limited time lead to the widespread practice of extrapolating equilibrium conditions from just a few decades of coupled simulations. The most common workaround is the "effective climate sensitivity" - defined through an extrapolation of a 150 year abrupt2xCO2 simulation, including the assumption of linear climate feedbacks. The definitions of effective and equilibrium climate sensitivity are often mixed up and used equivalently, and it is argued that "transient climate sensitivity" is the more relevant measure for predicting the next decades. We present an ongoing model intercomparison, the "LongRunMIP", to study century and millennia time scales of AOGCM equilibration and the linearity assumptions around feedback analysis. As a true ensemble of opportunity, there is no protocol and the only condition to participate is a coupled model simulation of any stabilizing scenario simulating more than 1000 years. Many of the submitted simulations took several years to conduct. As of July 2016 the contribution comprises 27 scenario simulations of 13 different models originating from 7 modeling centers, each between 1000 and 6000 years. To contribute, please contact the authors as soon as possible. We present preliminary results, discussing differences between effective and equilibrium climate sensitivity, the usefulness of transient climate sensitivity, extrapolation methods, and the state of the coupled climate system close to equilibrium. Figure caption: Evolution of temperature anomaly and radiative imbalance of 22 simulations with 12 models (color indicates the model); 20-year moving average.
Tran, Van; Little, Mark P
2017-11-01
Murine experiments were conducted at the JANUS reactor in Argonne National Laboratory from 1970 to 1992 to study the effect of acute and protracted radiation dose from gamma rays and fission neutron whole body exposure. The present study reports the reanalysis of the JANUS data on 36,718 mice, of which 16,973 mice were irradiated with neutrons, 13,638 were irradiated with gamma rays, and 6107 were controls. Mice were mostly Mus musculus, but one experiment used Peromyscus leucopus. For both types of radiation exposure, a Cox proportional hazards model was used, using age as timescale, and stratifying on sex and experiment. The optimal model was one with linear and quadratic terms in cumulative lagged dose, with adjustments to both linear and quadratic dose terms for low-dose rate irradiation (<5 mGy/h) and with adjustments to the dose for age at exposure and sex. After gamma ray exposure there is significant non-linearity (generally with upward curvature) for all tumours, lymphoreticular, respiratory, connective tissue and gastrointestinal tumours, also for all non-tumour, other non-tumour, non-malignant pulmonary and non-malignant renal diseases (p < 0.001). Associated with this the low-dose extrapolation factor, measuring the overestimation in low-dose risk resulting from linear extrapolation is significantly elevated for lymphoreticular tumours 1.16 (95% CI 1.06, 1.31), elevated also for a number of non-malignant endpoints, specifically all non-tumour diseases, 1.63 (95% CI 1.43, 2.00), non-malignant pulmonary disease, 1.70 (95% CI 1.17, 2.76) and other non-tumour diseases, 1.47 (95% CI 1.29, 1.82). However, for a rather larger group of malignant endpoints the low-dose extrapolation factor is significantly less than 1 (implying downward curvature), with central estimates generally ranging from 0.2 to 0.8, in particular for tumours of the respiratory system, vasculature, ovary, kidney/urinary bladder and testis. For neutron exposure most endpoints, malignant and non-malignant, show downward curvature in the dose response, and for most endpoints this is statistically significant (p < 0.05). Associated with this, the low-dose extrapolation factor associated with neutron exposure is generally statistically significantly less than 1 for most malignant and non-malignant endpoints, with central estimates mostly in the range 0.1-0.9. In contrast to the situation at higher dose rates, there are statistically non-significant decreases of risk per unit dose at gamma dose rates of less than or equal to 5 mGy/h for most malignant endpoints, and generally non-significant increases in risk per unit dose at gamma dose rates ≤5 mGy/h for most non-malignant endpoints. Associated with this, the dose-rate extrapolation factor, the ratio of high dose-rate to low dose-rate (≤5 mGy/h) gamma dose response slopes, for many tumour sites is in the range 1.2-2.3, albeit not statistically significantly elevated from 1, while for most non-malignant endpoints the gamma dose-rate extrapolation factor is less than 1, with most estimates in the range 0.2-0.8. After neutron exposure there are non-significant indications of lower risk per unit dose at dose rates ≤5 mGy/h compared to higher dose rates for most malignant endpoints, and for all tumours (p = 0.001), and respiratory tumours (p = 0.007) this reduction is conventionally statistically significant; for most non-malignant outcomes risks per unit dose non-significantly increase at lower dose rates. 
Associated with this, the neutron dose-rate extrapolation factor is less than 1 for most malignant and non-malignant endpoints, in many cases statistically significantly so, with central estimates mostly in the range 0.0-0.2.
Proton radius from electron scattering data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon and Stanford. Methods: We make use of stepwise regression techniques using the F-test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F-test as well as the Akaike information criterion justify using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q2 data on GE to select functions which extrapolate to high Q2, we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, GE(Q2) = (1 + Q2/0.66 GeV2)^-2. Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm; either from linear extrapolation of the extremely low-Q2 data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering result and the muonic hydrogen result are consistent. Lastly, it is the atomic hydrogen results that are the outliers.
Proton radius from electron scattering data
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; ...
2016-05-31
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon and Stanford. Methods: We make use of stepwise regression techniques using the F-test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F-test as well as the Akaike information criterion justify using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q2 data on GE to select functions which extrapolate to high Q2, we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, GE(Q2) = (1 + Q2/0.66 GeV2)^-2. Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm; either from linear extrapolation of the extremely low-Q2 data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering result and the muonic hydrogen result are consistent. Lastly, it is the atomic hydrogen results that are the outliers.
A Method for Estimating Zero-Flow Pressure and Intracranial Pressure
Marzban, Caren; Illian, Paul Raymond; Morison, David; Moore, Anne; Kliot, Michel; Czosnyka, Marek; Mourad, Pierre
2012-01-01
Background It has been hypothesized that critical closing pressure of cerebral circulation, or zero-flow pressure (ZFP), can estimate intracranial pressure (ICP). One ZFP estimation method employs extrapolation of arterial blood pressure versus blood-flow velocity. The aim of this study is to improve ICP predictions. Methods Two revisions are considered: 1) The linear model employed for extrapolation is extended to a nonlinear equation, and 2) the parameters of the model are estimated by an alternative criterion (not least-squares). The method is applied to data on transcranial Doppler measurements of blood-flow velocity, arterial blood pressure, and ICP, from 104 patients suffering from closed traumatic brain injury, sampled across the United States and England. Results The revisions lead to qualitative (e.g., precluding negative ICP) and quantitative improvements in ICP prediction. In going from the original to the revised method, the ±2 standard deviation of error is reduced from 33 to 24 mm Hg; the root-mean-squared error (RMSE) is reduced from 11 to 8.2 mm Hg. The distribution of RMSE is tighter as well; for the revised method the 25th and 75th percentiles are 4.1 and 13.7 mm Hg, respectively, as compared to 5.1 and 18.8 mm Hg for the original method. Conclusions Proposed alterations to a procedure for estimating ZFP lead to more accurate and more precise estimates of ICP, thereby offering improved means of estimating it noninvasively. The quality of the estimates is inadequate for many applications, but further work is proposed which may lead to clinically useful results. PMID:22824923
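For context, a minimal sketch of the classical ZFP estimate that the abstract takes as its starting point (before the proposed nonlinear and non-least-squares revisions): arterial pressure is regressed linearly on flow velocity over a beat, and the intercept at zero flow is taken as the ZFP, used as a surrogate for ICP. The pressure and velocity samples below are hypothetical.

    import numpy as np

    def zero_flow_pressure(abp_mmHg, fv_cm_s):
        """Classical linear ZFP estimate: intercept of ABP regressed on flow velocity."""
        slope, intercept = np.polyfit(np.asarray(fv_cm_s, float),
                                      np.asarray(abp_mmHg, float), 1)
        return intercept

    fv  = [35.0, 48.0, 62.0, 75.0, 90.0]     # hypothetical flow velocities, cm/s
    abp = [72.0, 81.0, 92.0, 101.0, 112.0]   # hypothetical arterial pressures, mmHg
    print(f"ZFP ~ {zero_flow_pressure(abp, fv):.1f} mmHg")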
Shang, Chao; Rice, James A.; Eberl, Dennis D.; Lin, Jar-Shyong
2003-01-01
It has been suggested that interstratified illite-smectite (I-S) minerals are composed of aggregates of fundamental particles. Many attempts have been made to measure the thickness of such fundamental particles, but each of the methods used suffers from its own limitations and uncertainties. Small-angle X-ray scattering (SAXS) can be used to measure the thickness of particles that scatter X-rays coherently. We used SAXS to study suspensions of Na-rectorite and other illites with varying proportions of smectite. The scattering intensity (I) was recorded as a function of the scattering vector, q = (4π/λ) sin(θ/2), where λ is the X-ray wavelength and θ is the scattering angle. The experimental data were treated with a direct Fourier transform to obtain the pair distance distribution function (PDDF) that was then used to determine the thickness of illite particles. The Guinier and Porod extrapolations were used to obtain the scattering intensity beyond the experimental q, and the effects of such extrapolations on the PDDF were examined. The thickness of independent rectorite particles (used as a reference mineral) is 18.3 Å. The SAXS results are compared with those obtained by X-ray diffraction peak broadening methods. It was found that the power-law exponent (α) obtained by fitting the data in the region of q = 0.1-0.6 nm^-1 to the power law (I = I0 q^-α) is a linear function of illite particle thickness. Therefore, illite particle thickness could be predicted by the linear relationship as long as the thickness is within the limit where α < 4.0.
Swenberg, J A; Richardson, F C; Boucheron, J A; Deal, F H; Belinsky, S A; Charbonneau, M; Short, B G
1987-12-01
Recent investigations on the mechanism of carcinogenesis have demonstrated important quantitative relationships between the induction of neoplasia, the molecular dose of promutagenic DNA adducts and their efficiency for causing base-pair mismatch, and the extent of cell proliferation in the target organ. These factors are involved in the multistage process of carcinogenesis, including initiation, promotion, and progression. The molecular dose of DNA adducts can exhibit supralinear, linear, or sublinear relationships to external dose due to differences in absorption, biotransformation, and DNA repair at high versus low doses. In contrast, increased cell proliferation is a common phenomenon that is associated with exposures to relatively high doses of toxic chemicals. As such, it enhances the carcinogenic response at high doses, but has little effect at low doses. Since data on cell proliferation can be obtained for any exposure scenario and molecular dosimetry studies are beginning to emerge on selected chemical carcinogens, methods are needed so that these critical factors can be utilized in extrapolation from high to low doses and across species. The use of such information may provide a scientific basis for quantitative risk assessment.
Calculation methods study on hot spot stress of new girder structure detail
NASA Astrophysics Data System (ADS)
Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing
2017-10-01
To study how modeling choices affect the calculated hot spot stress of a new girder structural detail, several finite element models of the welded detail were established in ANSYS using the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local weld-toe modeling approach, and extrapolation formula on the hot spot stress calculated at the weld toe was analyzed. The results show that the differences in normal stress between models, in both the thickness and surface directions, grow as the distance from the weld toe decreases. When the distance from the toe exceeds 0.5t, the surface-direction normal stresses of solid models, shell models with welds, and shell models without welds converge. It is therefore recommended that the extrapolation points for this welded detail be selected beyond 0.5t. The calculations also show that shell models have good mesh stability, and that the extrapolated hot spot stress of solid models is smaller than that of shell models, so formula 2 and the SOLID45 element are suggested for the hot spot stress extrapolation of this detail. For each finite element model, regardless of shell modeling method, formula 2 gives smaller results than the other two extrapolation formulas, and shell models with welds give the largest values. At the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of element layers through the main plate thickness increases, with a variation within 7.5%.
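The abstract's "formula 2" is not specified here, but the surface extrapolation idea it refers to can be illustrated with a generic two-point linear extrapolation of surface stresses to the weld toe. The sketch below is an illustration only (Python, hypothetical stress values); the 0.5t/1.5t read-out points are one common convention and may differ from the paper's formulas.

```python
def hot_spot_stress_linear(stress_at_05t, stress_at_15t):
    """Two-point linear surface extrapolation of stress to the weld toe.

    Uses surface stresses evaluated at 0.5t and 1.5t from the toe
    (t = main plate thickness), one common variant of the surface
    extrapolation hot spot stress method.
    """
    # Straight line through (0.5t, s1) and (1.5t, s2), evaluated at x = 0 (the toe)
    return 1.5 * stress_at_05t - 0.5 * stress_at_15t

# Example with hypothetical surface stresses (MPa) read from a shell-element model
print(hot_spot_stress_linear(182.0, 154.0))  # extrapolated hot spot stress at the toe
```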
A Kalman filter for a two-dimensional shallow-water model
NASA Technical Reports Server (NTRS)
Parrish, D. F.; Cohn, S. E.
1985-01-01
A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
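As a rough illustration of the forecast/update cycle described above, the following sketch shows a generic (dense, non-banded) Kalman filter step; the banded covariance evolution and the shallow-water model itself are not reproduced, and all matrices are placeholders to be supplied by the user.

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, y):
    """One generic Kalman filter cycle: forecast then update.

    x, P : current state estimate and its error covariance
    A, Q : linear forecast model and model-error covariance
    H, R : observation operator and observation-error covariance
    y    : observation vector
    """
    # Forecast step: propagate the state and the error covariance
    x_f = A @ x
    P_f = A @ P @ A.T + Q
    # Gain computation and analysis (update) step
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a
```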
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1972-01-01
Vehicle encountered electron and proton fluxes were calculated for a set of nominal UK-5 trajectories with new computational methods and new electron environment models. Temporal variations in the electron data were considered and partially accounted for. Field strength calculations were performed with an extrapolated model on the basis of linear secular variation predictions. Tabular maps for selected electron and proton energies were constructed as functions of latitude and longitude for specified altitudes. Orbital flux integration results are presented in graphical and tabular form; they are analyzed, explained, and discussed.
Fourier functional analysis for unsteady aerodynamic modeling
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Chin, Suei
1991-01-01
A method based on Fourier analysis is developed to analyze the force and moment data obtained in large amplitude forced oscillation tests at high angles of attack. The aerodynamic models for normal force, lift, drag, and pitching moment coefficients are built up from a set of aerodynamic responses to harmonic motions at different frequencies. Based on the aerodynamic models of harmonic data, the indicial responses are formed. The final expressions for the models involve time integrals of the indicial type advocated by Tobak and Schiff. Results from linear two- and three-dimensional unsteady aerodynamic theories as well as test data for a 70-degree delta wing are used to verify the models. It is shown that the present modeling method is accurate in producing the aerodynamic responses to harmonic motions and ramp-type motions. The model also produces the correct trend for a 70-degree delta wing in harmonic motion with different mean angles of attack. However, the current model cannot be used to extrapolate data to higher angles of attack than those of the harmonic motions which form the aerodynamic model. For linear ramp motions, a special method is used to calculate the corresponding frequency and phase angle at a given time. The calculated results from modeling show a higher lift peak for linear ramp motion than for harmonic ramp motion. The current model also shows reasonably good results for the lift responses at different angles of attack.
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
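A minimal sketch of the straight-line fit and extrapolation discussed above, assuming ordinary least squares on I-V points near V = 0; the objective Bayesian regression and evidence-based window selection of the study are not reproduced.

```python
import numpy as np

def isc_from_linear_fit(v, i):
    """Estimate Isc by straight-line extrapolation of I-V points near V = 0.

    Returns the intercept (current at V = 0) and its standard error from an
    ordinary least-squares fit; window choice and model discrepancy are ignored.
    """
    v = np.asarray(v, dtype=float)
    i = np.asarray(i, dtype=float)
    X = np.column_stack([np.ones_like(v), v])       # design matrix [1, V]
    beta, res, *_ = np.linalg.lstsq(X, i, rcond=None)
    n, p = len(v), 2
    sigma2 = res[0] / (n - p) if res.size else 0.0  # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)           # parameter covariance
    return beta[0], np.sqrt(cov[0, 0])              # Isc estimate and its standard error
```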
NASA Astrophysics Data System (ADS)
Tyagi, Chetna; Yadav, Preeti; Sharma, Ambika
2018-05-01
The present work reports an optical study of Se82Te15Bi1.0Sn2.0/polyvinylpyrrolidone (PVP) nanocomposites. Bulk chalcogenide glasses were prepared by the well-known melt quenching technique. A wet chemical route is proposed for making the composite of Se82Te15Bi1.0Sn2.0 and PVP polymer, as it is easy to handle and cost effective. The composite films were made on glass slides from the solution of Se-Te-Bi-Sn and PVP polymer using spin coating. Transmission and absorbance were recorded with a UV-Vis-NIR spectrophotometer over the spectral range 350-700 nm. The linear refractive index (n) of the polymer nanocomposites was calculated by the Swanepoel approach; for the PVP-doped Se82Te15Bi1.0Sn2.0 chalcogenide it is found to be 1.7. The optical band gap was evaluated by means of the Tauc extrapolation method, and the Tichy-Ticha model was utilized to characterize the nonlinear refractive index (n2).
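As an illustration of the Tauc extrapolation mentioned above, the sketch below fits the linear portion of a (αhν)^r versus hν plot and extrapolates it to zero; the exponent r and the fitting window are assumptions to be supplied by the user, not values from the study.

```python
import numpy as np

def tauc_band_gap(h_nu, alpha, fit_lo, fit_hi, r=0.5):
    """Estimate the optical band gap by Tauc extrapolation.

    h_nu           : photon energies (eV)
    alpha          : absorption coefficients (cm^-1)
    fit_lo, fit_hi : energy window (eV) assumed to contain the linear Tauc region
    r              : Tauc exponent (0.5 for indirect-allowed transitions; an assumption)
    """
    h_nu = np.asarray(h_nu, dtype=float)
    y = (np.asarray(alpha, dtype=float) * h_nu) ** r
    mask = (h_nu >= fit_lo) & (h_nu <= fit_hi)
    slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
    return -intercept / slope   # energy at which the fitted line crosses y = 0
```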
Thyroid Patient Salivary Radioiodine Transit and Dysfunction Assessment Using Chewing Gums.
Okkalides, Demetrios
2016-11-01
Radiation-induced salivary gland dysfunction is the most frequent side-effect of I-131 thyroid therapy. Here, a novel saliva sampling method with ordinary chewing gums administered to the patients at appropriate time intervals post-treatment (TIPT) was used to relate this effect to chewing gum saliva activity (CGSA) content. Saliva samples were acquired after the oral administration of prescribed I-131 activity (radioactivity administered [RA]) to 19 differentiated thyroid cancer (DTC) and 16 hyperthyroidism patients of the radioisotope unit (RIU) during 2014 and 2015. The error of this saliva collecting process was found to be 1.2%-2.05%, and so the method was considered satisfactory. For each patient, the CGSA was plotted against the TIPT, producing a curve, R(t). On this, two functions were fitted: a linear function on the first few rising data points and a gamma variate over the peak of R(t). From these, several parameters related to the radioactivity oral transit were calculated, and the total radioactivity administered (TRA) during all past treatments of each patient was obtained from RIU records. The patients were asked to report any swelling, dry mouth, taste-smell change, or pain, and these reports were graded into a morbidity score (MS) describing each patient's quality of life. The peak radioactivity in the saliva samples, Rmax, was found to be proportional to RA and was plotted against the CGSA extrapolated at 24 and 36 hours. The linear fits produced were used to estimate the salivary glands' activity average effective half-life (16.3 hours). The MS of DTC patients was found to depend linearly both on Rmax and TRA (MS = 0.0032 × Rmax - 0.7107 and MS = 0.1862 × TRA + 0.66, respectively). Both lines were used to extrapolate symptom thresholds. The measurement of Rmax in DTC patients proved very useful for individualized radiation protection, and the dependence of MS on TRA should be used when additional treatments are considered for repeat DTC patients.
Extrapolation of Functions of Many Variables by Means of Metric Analysis
NASA Astrophysics Data System (ADS)
Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David
2018-02-01
The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of the m-dimensional space, and the value of the function must be restored at points outside D. A fundamentally new extrapolation method is proposed, built on the interpolation scheme of metric analysis, and it consists of two stages. In the first stage, metric analysis is used to interpolate the function to the points of D that lie on the straight-line segment connecting the center of D with the point M at which the function value is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along that segment beyond D up to the point M. A numerical example demonstrates the efficiency of the method under consideration.
Power maps and wavefront for progressive addition lenses in eyeglass frames.
Mejía, Yobani; Mora, David A; Díaz, Daniel E
2014-10-01
The aim of this work is to evaluate a method for measuring the cylinder, sphere, and wavefront of progressive addition lenses (PALs) in eyeglass frames. We examine the contour maps of cylinder, sphere, and wavefront of a PAL assembled in an eyeglass frame using an optical system based on a Hartmann test. To reduce the data noise, particularly in the border of the eyeglass frame, we implement a method based on the Fourier analysis to extrapolate spots outside the eyeglass frame. The spots are extrapolated up to a circular pupil that circumscribes the eyeglass frame and compared with data obtained from a circular uncut PAL. By using the Fourier analysis to extrapolate spots outside the eyeglass frame, we can remove the edge artifacts of the PAL within its frame and implement the modal method to fit wavefront data with Zernike polynomials within a circular aperture that circumscribes the frame. The extrapolated modal maps from framed PALs accurately reflect maps obtained from uncut PALs and provide smoothed maps for the cylinder and sphere inside the eyeglass frame. The proposed method for extrapolating spots outside the eyeglass frame removes edge artifacts of the contour maps (wavefront, cylinder, and sphere), which may be useful to facilitate measurements such as the length and width of the progressive corridor for a PAL in its frame. The method can be applied to any shape of eyeglass frame.
NASA Technical Reports Server (NTRS)
Wu, S. T.; Sun, M. T.; Sakurai, Takashi
1990-01-01
This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.
Analysis and Forecasting of Shoreline Position
NASA Astrophysics Data System (ADS)
Barton, C. C.; Tebbens, S. F.
2007-12-01
Analysis of historical shoreline positions on sandy coasts and in the geologic record, together with study of sea-level rise curves, reveals that the dynamics of the underlying processes produce temporal/spatial signals that exhibit power-law scaling and are therefore self-affine fractals. Self-affine time series signals can be quantified over many orders of magnitude in time and space in terms of persistence, a measure of the degree of correlation between adjacent values in the stochastic portion of a time series. Fractal statistics developed for self-affine time series are used to forecast a probability envelope bounding future shoreline positions. The envelope provides the standard deviation as a function of three variables: persistence, a constant equal to the value of the power spectral density when 1/period equals 1, and the number of time increments. The persistence of a twenty-year time series of the mean-high-water (MHW) shoreline positions was measured for four profiles surveyed at Duck, NC at the Field Research Facility (FRF) by the U.S. Army Corps of Engineers. The four MHW shoreline time series signals are self-affine with persistence ranging between 0.8 and 0.9, which indicates that the shoreline position time series is weakly persistent (where zero is uncorrelated), and has highly varying trends for all time intervals sampled. Forecasts of a probability envelope for future MHW positions are made for the 20 years of record and beyond to 50 years from the start of the data records. The forecasts describe the twenty-year data sets well and indicate that within a 96% confidence envelope, future decadal MHW shoreline excursions should be within 14.6 m of the position at the start of data collection. This is a stable-oscillatory shoreline. The forecasting method introduced here retains the stochastic portion of the time series. In contrast, the traditional method of predicting shoreline change reduces the time series to a linear trend line fitted to historic shoreline positions and extrapolated to forecast future positions; the resulting linearly increasing mean breaks the confidence envelope eight years into the future and continues to increase. The traditional method is thus a poor representation of the observed shoreline position time series and a poor basis for extrapolating future shoreline positions.
ARSENIC MODE OF ACTION AND DEVELOPING A BBDR MODEL
The current USEPA cancer risk assessment for inorganic arsenic is based on a linear extrapolation of the epidemiological data from exposed populations in Taiwan. However, proposed key events in the mode of action (MoA) for arsenic-induced cancer (which may include altered DNA me...
A Comparison of Methods for Computing the Residual Resistivity Ratio of High-Purity Niobium
Splett, J. D.; Vecchia, D. F.; Goodrich, L. F.
2011-01-01
We compare methods for estimating the residual resistivity ratio (RRR) of high-purity niobium and investigate the effects of using different functional models. RRR is typically defined as the ratio of the electrical resistances measured at 273 K (the ice point) and 4.2 K (the boiling point of helium at standard atmospheric pressure). However, pure niobium is superconducting below about 9.3 K, so the low-temperature resistance is defined as the normal-state (i.e., non-superconducting state) resistance extrapolated to 4.2 K and zero magnetic field. Thus, the estimated value of RRR depends significantly on the model used for extrapolation. We examine three models for extrapolation based on temperature versus resistance, two models for extrapolation based on magnetic field versus resistance, and a new model based on the Kohler relationship that can be applied to combined temperature and field data. We also investigate the possibility of re-defining RRR so that the quantity is not dependent on extrapolation. PMID:26989580
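A minimal sketch of the temperature-based extrapolation idea, assuming one simple normal-state model R(T) = R0 + a·T^n; the paper compares several functional models, as well as field-based and Kohler-type fits, none of which are reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def rrr_from_temperature_fit(T, R, R_273K):
    """Estimate RRR by extrapolating the normal-state resistance to 4.2 K.

    T, R   : normal-state resistance data measured above Tc (~9.3 K), as arrays
    R_273K : ice-point resistance
    The model R(T) = R0 + a*T**n is one simple choice; the functional form and
    its exponent strongly affect the extrapolated value, as the study discusses.
    """
    T = np.asarray(T, dtype=float)
    R = np.asarray(R, dtype=float)
    model = lambda t, R0, a, n: R0 + a * t**n
    popt, _ = curve_fit(model, T, R, p0=[R.min(), 1e-6, 3.0], maxfev=10000)
    return R_273K / model(4.2, *popt)   # RRR = R(273 K) / R_extrapolated(4.2 K)
```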
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fry, R.J.M.
The author discusses some examples of how different experimental animal systems have helped to answer questions about the effects of radiation, in particular, carcinogenesis, and to indicate how the new experimental model systems promise an even more exciting future. Entwined in these themes will be observations about susceptibility and extrapolation across species. The hope of developing acceptable methods of extrapolation of estimates of the risk of radiogenic cancer increases as molecular biology reveals the trail of remarkable similarities in the genetic control of many functions common to many species. A major concern about even attempting to extrapolate estimates of risks of radiation-induced cancer across species has been that the mechanisms of carcinogenesis were so different among different species that it would negate the validity of extrapolation. The more that has become known about the genes involved in cancer, especially those related to the initial events in carcinogenesis, the more have the reasons for considering methods of extrapolation across species increased.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1982-01-01
The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
NASA Technical Reports Server (NTRS)
Brinson, H. F.
1985-01-01
The utilization of adhesive bonding for composite structures is briefly assessed. The need for a method to determine damage initiation and propagation for such joints is outlined. Methods currently in use to analyze both adhesive joints and fiber reinforced plastics are mentioned, and it is indicated that all methods require the input of the mechanical properties of the polymeric adhesive and composite matrix material. The mechanical properties of polymers are indicated to be viscoelastic and sensitive to environmental effects. A method to analytically characterize environmentally dependent linear and nonlinear viscoelastic properties is given. It is indicated that the methodology can be used to extrapolate short term data to long term design lifetimes. That is, the method can be used for long term durability predictions. Experimental results for neat adhesive resins, polymers used as composite matrices, and unidirectional composite laminates are given. The data are fitted well with the analytical durability methodology. Finally, suggestions are outlined for the development of an analytical methodology for the durability predictions of adhesively bonded composite structures.
Measurement of charge transfer potential barrier in pinned photodiode CMOS image sensors
NASA Astrophysics Data System (ADS)
Chen, Cao; Bing, Zhang; Junfeng, Wang; Longsheng, Wu
2016-05-01
The charge transfer potential barrier (CTPB) formed beneath the transfer gate causes a noticeable image lag issue in pinned photodiode (PPD) CMOS image sensors (CIS), and is difficult to measure straightforwardly since it is embedded inside the device. From an understanding of the CTPB formation mechanism, we report an alternative method to measure the CTPB height by performing a linear extrapolation, coupled with a horizontal left-shift, on the sensor photoresponse curve under steady-state illumination. The principle of the proposed method is studied theoretically in detail. Application of the measurements on a prototype PPD-CIS chip with an array of 160 × 160 pixels is demonstrated. Such a method is intended to provide new guidance for the optimization of lag-free, high-speed sensors based on PPD devices. Project supported by the National Defense Pre-Research Foundation of China (No. 51311050301095).
Petit, Caroline; Samson, Adeline; Morita, Satoshi; Ursino, Moreno; Guedj, Jérémie; Jullien, Vincent; Comets, Emmanuelle; Zohar, Sarah
2018-06-01
The number of trials conducted and the number of patients per trial are typically small in paediatric clinical studies. This is due to ethical constraints and the complexity of the medical process for treating children. While incorporating prior knowledge from adults may be extremely valuable, this must be done carefully. In this paper, we propose a unified method for designing and analysing dose-finding trials in paediatrics, while bridging information from adults. The dose-range is calculated under three extrapolation options, linear, allometry and maturation adjustment, using adult pharmacokinetic data. To do this, it is assumed that target exposures are the same in both populations. The working model and prior distribution parameters of the dose-toxicity and dose-efficacy relationships are obtained using early-phase adult toxicity and efficacy data at several dose levels. Priors are integrated into the dose-finding process through Bayesian model selection or adaptive priors. This calibrates the model to adjust for misspecification, if the adult and pediatric data are very different. We performed a simulation study which indicates that incorporating prior adult information in this way may improve dose selection in children.
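The linear and allometric extrapolation options mentioned above can be illustrated with simple body-weight scaling. In the sketch below, the 70 kg adult reference weight and the 0.75 allometric exponent are common illustrative assumptions, and the paper's maturation adjustment and exposure-matching steps are not shown.

```python
def scaled_child_dose(adult_dose_mg, child_weight_kg, adult_weight_kg=70.0, method="allometry"):
    """Extrapolate an adult dose to a child by body-weight scaling.

    method = "linear"    : dose scales proportionally with body weight
    method = "allometry" : dose scales with weight**0.75 (a common assumption)
    The 70 kg reference weight and the 0.75 exponent are illustrative defaults.
    """
    ratio = child_weight_kg / adult_weight_kg
    exponent = 1.0 if method == "linear" else 0.75
    return adult_dose_mg * ratio**exponent

# Example: a 400 mg adult dose extrapolated to a 20 kg child (hypothetical values)
print(scaled_child_dose(400, 20, method="linear"))     # ~114 mg
print(scaled_child_dose(400, 20, method="allometry"))  # ~156 mg
```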
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, K T
2007-01-30
As reflected in the 2005 USEPA Guidelines for Cancer Risk Assessment, some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate ''linear'' (genotoxic) vs. ''nonlinear'' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach--similar to that used in reference dose procedures for classic toxicity endpoints--can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a ''nonlinear'' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for the rodent carcinogen naphthalene. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage stochastic carcinogenesis modeling results all clearly indicate that naphthalene is a DMOA carcinogen. Plausibility bounds on rat-tumor-type specific DMOA-related uncertainty were obtained using a 2-stage model adapted to reflect the empirical link between genotoxic and cytotoxic effects of the most potent identified genotoxic naphthalene metabolites, 1,2- and 1,4-naphthoquinone. Resulting bounds each provided the basis for a corresponding ''uncertainty'' factor <1 appropriate to apply to estimates of naphthalene risk obtained by linear extrapolation under a default genotoxic MOA assumption. This procedure is proposed as a scientifically credible method to address MOA uncertainty for DMOA carcinogens.
A nowcasting technique based on application of the particle filter blending algorithm
NASA Astrophysics Data System (ADS)
Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai
2017-10-01
To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is shown to be superior to the traditional forecasting methods and can be used to enhance nowcasting ability in operational weather forecasts.
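A minimal sketch of the final semi-Lagrangian extrapolation step, assuming a given motion-vector field on the radar grid; the bilateral filtering, optical flow retrieval, and particle filter blending stages described above are not reproduced.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def semi_lagrangian_forecast(field, u, v, lead_steps):
    """Advect a 2-D echo field along a (u, v) motion field for lead_steps steps.

    field : 2-D array of reflectivity
    u, v  : motion components in grid points per time step (x and y directions)
    Backward scheme: the forecast value at each grid point is taken from the
    upstream location (y - lead_steps*v, x - lead_steps*u), interpolated bilinearly.
    """
    ny, nx = field.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    src_y = yy - lead_steps * v
    src_x = xx - lead_steps * u
    return map_coordinates(field, [src_y, src_x], order=1, mode="constant", cval=0.0)
```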
Benhaim, Deborah; Grushka, Eli
2010-01-01
This study investigates lipophilicity determination by chromatographic measurements using the polar embedded Ascentis RP-Amide stationary phase. As a new generation of amide-functionalized silica stationary phase, the Ascentis RP-Amide column is evaluated as a possible substitution to the n-octanol/water partitioning system for lipophilicity measurements. For this evaluation, extrapolated retention factors, log k'w, of a set of diverse compounds were determined using different methanol contents in the mobile phase. The use of n-octanol enriched mobile phase enhances the relationship between the slope (S) of the extrapolation lines and the extrapolated log k'w (the intercept of the extrapolation), as well as the correlation between log P values and the extrapolated log k'w (1:1 correlation, r2 = 0.966). In addition, the use of isocratic retention factors, at 40% methanol in the mobile phase, provides a rapid tool for lipophilicity determination. The intermolecular interactions that contribute to the retention process in the Ascentis RP-Amide phase are characterized using the solvation parameter model of Abraham. The LSER system constants for the column are very similar to the LSER constants of the n-octanol/water extraction system. Tanaka radar plots are used for quick visual comparison of the system constants of the Ascentis RP-Amide column and the n-octanol/water extraction system. The results all indicate that the Ascentis RP-Amide stationary phase can provide reliable lipophilic data. Copyright 2009 Elsevier B.V. All rights reserved.
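The extrapolated retention factor log k'w described above comes from a linear fit of isocratic log k' values against the organic-modifier fraction, evaluated at 0% modifier. A minimal sketch with hypothetical retention data follows.

```python
import numpy as np

def extrapolate_log_kw(methanol_fraction, log_k):
    """Extrapolate chromatographic retention to a pure-water mobile phase.

    Fits log k' = log k'w - S * phi (phi = methanol volume fraction) and
    returns (log k'w, S), i.e. the intercept and the slope magnitude.
    """
    slope, intercept = np.polyfit(np.asarray(methanol_fraction, dtype=float),
                                  np.asarray(log_k, dtype=float), 1)
    return intercept, -slope

# Example: retention factors measured at 40, 50, and 60 % methanol (hypothetical values)
phi = [0.40, 0.50, 0.60]
log_k = [1.10, 0.72, 0.35]
print(extrapolate_log_kw(phi, log_k))   # (log k'w, S)
```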
GIS Well Temperature Data from the Roosevelt Hot Springs, Utah FORGE Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gwynn, Mark; Hill, Jay; Allis, Rick
This is a GIS point feature shapefile representing wells, and their temperatures, that are located in the general Utah FORGE area near Milford, Utah. There are also fields that represent interpolated temperature values at depths of 200 m, 1000 m, 2000 m, 3000 m, and 4000 m, in degrees Fahrenheit. The temperature values at specific depths as mentioned above were derived as follows. In cases where the well reached a given depth (200 m and 1, 2, 3, or 4 km), the temperature is the measured temperature. For the shallower wells (and at deeper depths in the wells reaching one or more of the target depths), temperatures were extrapolated from the temperature-depth profiles that appeared to have stable (re-equilibrated after drilling) and linear profiles within the conductive regime (i.e. below the water table or other convective influences such as shallow hydrothermal outflow from the Roosevelt Hydrothermal System). Measured temperatures/gradients from deeper wells (when available and reasonably close to a given well) were used to help constrain the extrapolation to greater depths. Most of the field names in the attribute table are intuitive, however HF = heat flow, intercept = the temperature at the surface (x-axis of the temperature-depth plots) based on the linear segment of the plot that was used to extrapolate the temperature profiles to greater depths, and depth_m is the total well depth. This information is also present in the shapefile metadata.
X-ray surface dose measurements using TLD extrapolation.
Kron, T; Elliot, A; Wong, T; Showell, G; Clubb, B; Metcalfe, P
1993-01-01
Surface dose measurements in therapeutic x-ray beams are of importance in determining the dose to the skin of patients undergoing radiotherapy. Measurements were performed in the 6-MV beam of a medical linear accelerator with LiF thermoluminescence dosimeters (TLD) using a solid water phantom. TLD chips (surface area 3.17 x 3.17 cm2) of three different thicknesses (0.230, 0.099, and 0.038 g/cm2) were used to extrapolate dose readings to an infinitesimally thin layer of LiF. This surface dose was measured for field sizes ranging from 1 x 1 cm2 to 40 x 40 cm2. The surface dose relative to maximum dose was found to be 10.0% for a field size of 5 x 5 cm2, 16.3% for 10 x 10 cm2, and 26.9% for 20 x 20 cm2. Using a 6-mm Perspex block tray in the beam increased the surface dose in these fields to 10.7%, 17.7%, and 34.2% respectively. Due to the small size of the TLD chips, TLD extrapolation is applicable also for intracavity and exit dose determinations. The technique used for in vivo dosimetry could provide clinicians information about the build up of dose up to 1-mm depth in addition to an extrapolated surface dose measurement.
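A minimal sketch of the TLD extrapolation idea: readings from chips of different thickness are fitted with a straight line and the fit is evaluated at zero thickness to estimate the surface dose. The reading values in the example are hypothetical, not those of the study.

```python
import numpy as np

def surface_dose_by_extrapolation(thickness_g_cm2, relative_dose):
    """Extrapolate TLD readings to zero detector thickness.

    thickness_g_cm2 : areal densities of the TLD chips (g/cm^2)
    relative_dose   : corresponding readings, e.g. in % of dose maximum
    A straight line is fitted and evaluated at zero thickness, giving the
    surface dose estimate.
    """
    slope, intercept = np.polyfit(np.asarray(thickness_g_cm2, dtype=float),
                                  np.asarray(relative_dose, dtype=float), 1)
    return intercept

# Hypothetical readings for three chip thicknesses (values for illustration only)
t = [0.230, 0.099, 0.038]
d = [28.0, 21.0, 18.0]
print(surface_dose_by_extrapolation(t, d))  # extrapolated surface dose, % of maximum
```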
NASA Technical Reports Server (NTRS)
Mendelson, A.; Manson, S. S.
1960-01-01
A method using finite-difference recurrence relations is presented for direct extrapolation of families of curves. The method is illustrated by applications to creep-rupture data for several materials and it is shown that good results can be obtained without the necessity for any of the usual parameter concepts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
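The unified singlet- and triplet-pair scheme and the reassigned hierarchical numbers are not reproduced here, but the underlying idea can be illustrated with the standard two-point inverse-cube extrapolation of the correlation energy. In the sketch below, the hierarchical numbers 2 and 3 for a (d, t) basis pair and the energies in the example are assumptions for illustration.

```python
def cbs_two_point(e_corr_x, e_corr_y, x=2.0, y=3.0):
    """Two-point complete-basis-set extrapolation of the correlation energy.

    Assumes E_corr(X) = E_CBS + A / X**3 and solves the two equations for E_CBS.
    For a (d, t) pair the hierarchical numbers are taken as 2 and 3 here.
    """
    return (y**3 * e_corr_y - x**3 * e_corr_x) / (y**3 - x**3)

# Example: MP2 correlation energies (hartree) with double- and triple-zeta bases (hypothetical)
print(cbs_two_point(-0.2050, -0.2430))   # extrapolated CBS correlation energy
```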
New method of extrapolation of the resistance of a model planing boat to full size
NASA Technical Reports Server (NTRS)
Sottorf, W
1942-01-01
The previously employed method of extrapolating the total resistance to full size with λ³ (λ = model scale), thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever-increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is analytically obtained. The friction coefficients are read from Prandtl's curve for turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.
Interpolation Method Needed for Numerical Uncertainty
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's Extrapolation. This method is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty methods to approximate errors.
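A minimal sketch of Richardson extrapolation on three systematically refined grids, assuming a constant refinement ratio and monotone convergence; the interpolation-scheme question raised in the study is a separate issue and is not addressed here.

```python
import math

def richardson_extrapolate(f_fine, f_medium, f_coarse, r=2.0):
    """Estimate the grid-converged value from solutions on three grids.

    f_fine, f_medium, f_coarse : a scalar quantity computed on successively
    coarser grids with constant refinement ratio r.
    Returns (observed order of accuracy p, extrapolated value).
    """
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_exact

# Example with monotonically converging values (hypothetical)
print(richardson_extrapolate(0.9713, 0.9695, 0.9621))
```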
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2011 CFR
2011-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2012 CFR
2012-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2013 CFR
2013-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2014 CFR
2014-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...
NASA Astrophysics Data System (ADS)
Huang, Xinchuan; Valeev, Edward F.; Lee, Timothy J.
2010-12-01
One-particle basis set extrapolation is compared with one of the new R12 methods for computing highly accurate quartic force fields (QFFs) and spectroscopic data, including molecular structures, rotational constants, and vibrational frequencies for the H2O, N2H+, NO2+, and C2H2 molecules. In general, agreement between the spectroscopic data computed from the best R12 and basis set extrapolation methods is very good with the exception of a few parameters for N2H+ where it is concluded that basis set extrapolation is still preferred. The differences for H2O and NO2+ are small and it is concluded that the QFFs from both approaches are more or less equivalent in accuracy. For C2H2, however, a known one-particle basis set deficiency for C-C multiple bonds significantly degrades the quality of results obtained from basis set extrapolation and in this case the R12 approach is clearly preferred over one-particle basis set extrapolation. The R12 approach used in the present study was modified in order to obtain high precision electronic energies, which are needed when computing a QFF. We also investigated including core-correlation explicitly in the R12 calculations, but conclude that current approaches are lacking. Hence core-correlation is computed as a correction using conventional methods. Considering the results for all four molecules, it is concluded that R12 methods will soon replace basis set extrapolation approaches for high accuracy electronic structure applications such as computing QFFs and spectroscopic data for comparison to high-resolution laboratory or astronomical observations, provided one uses a robust R12 method as we have done here. The specific R12 method used in the present study, CCSD(T)R12, incorporated a reformulation of one intermediate matrix in order to attain machine precision in the electronic energies. Final QFFs for N2H+ and NO2+ were computed, including basis set extrapolation, core-correlation, scalar relativity, and higher-order correlation and then used to compute highly accurate spectroscopic data for all isotopologues. Agreement with high-resolution experiment for 14N2H+ and 14N2D+ was excellent, but for 14N16O2+ agreement for the two stretching fundamentals is outside the expected residual uncertainty in the theoretical values, and it is concluded that there is an error in the experimental quantities. It is hoped that the highly accurate spectroscopic data presented for the minor isotopologues of N2H+ and NO2+ will be useful in the interpretation of future laboratory or astronomical observations.
Detecting, anticipating, and predicting critical transitions in spatially extended systems.
Kwasniok, Frank
2018-03-01
A data-driven linear framework for detecting, anticipating, and predicting incipient bifurcations in spatially extended systems based on principal oscillation pattern (POP) analysis is discussed. The dynamics are assumed to be governed by a system of linear stochastic differential equations which is estimated from the data. The principal modes of the system together with corresponding decay or growth rates and oscillation frequencies are extracted as the eigenvectors and eigenvalues of the system matrix. The method can be applied to stationary datasets to identify the least stable modes and assess the proximity to instability; it can also be applied to nonstationary datasets using a sliding window approach to track the changing eigenvalues and eigenvectors of the system. As a further step, a genuinely nonstationary POP analysis is introduced. Here, the system matrix of the linear stochastic model is time-dependent, allowing for extrapolation and prediction of instabilities beyond the learning data window. The methods are demonstrated and explored using the one-dimensional Swift-Hohenberg equation as an example, focusing on the dynamics of stochastic fluctuations around the homogeneous stable state prior to the first bifurcation. The POP-based techniques are able to extract and track the least stable eigenvalues and eigenvectors of the system; the nonstationary POP analysis successfully predicts the timing of the first instability and the unstable mode well beyond the learning data window.
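A minimal sketch of the stationary POP estimate described above: the one-step propagator is estimated from lag-0 and lag-1 covariances of the data, and decay rates and oscillation frequencies follow from its eigenvalues. The sliding-window and genuinely nonstationary variants discussed in the paper are not reproduced.

```python
import numpy as np

def pop_analysis(X, dt=1.0):
    """Principal oscillation pattern (POP) estimate from a multivariate series.

    X  : array of shape (n_time, n_vars), assumed stationary
    dt : sampling interval
    Returns the eigenvalues of the estimated generator matrix (real parts are
    decay or growth rates, imaginary parts are angular frequencies) and the
    corresponding patterns (eigenvectors of the propagator).
    """
    X = X - X.mean(axis=0)
    C0 = X[:-1].T @ X[:-1] / (len(X) - 1)            # lag-0 covariance
    C1 = X[1:].T @ X[:-1] / (len(X) - 1)             # lag-1 covariance
    G = C1 @ np.linalg.inv(C0)                       # one-step propagator
    eigvals, patterns = np.linalg.eig(G)
    return np.log(eigvals.astype(complex)) / dt, patterns
```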
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Conic state extrapolation. [computer program for space shuttle navigation and guidance requirements
NASA Technical Reports Server (NTRS)
Shepperd, S. W.; Robertson, W. M.
1973-01-01
The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.
Hao, Zisu; Malyala, Divya; Dean, Lisa; Ducoste, Joel
2017-04-01
Long Chain Free Fatty Acids (LCFFAs) from the hydrolysis of fat, oil and grease (FOG) are major components in the formation of insoluble saponified solids known as FOG deposits that accumulate in sewer pipes and lead to sanitary sewer overflows (SSOs). A Double Wavenumber Extrapolative Technique (DWET) was developed to simultaneously measure LCFFA and FOG concentrations in oily wastewater suspensions. This method is based on the analysis of the Attenuated Total Reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) spectrum, in which the absorbances of the carboxyl bond (1710 cm⁻¹) and the triglyceride bond (1745 cm⁻¹) were selected as the characteristic wavenumbers for total LCFFAs and FOG, respectively. A series of experiments using pure organic samples (Oleic acid/Palmitic acid in Canola oil) were performed that showed a linear relationship between the absorption at these two wavenumbers and the total LCFFA. In addition, the DWET method was validated using GC analyses, which displayed a high degree of agreement between the two methods for simulated oily wastewater suspensions (1-35% Oleic acid in Canola oil/Peanut oil). The average determination error of the DWET approach was ~5% when the LCFFA fraction was above 10 wt%, indicating that the DWET could be applied as an experimental method for the determination of both LCFFA and FOG concentrations in oily wastewater suspensions. Potential applications of this DWET approach include: (1) monitoring the LCFFA and FOG concentrations in grease interceptor (GI) effluents for regulatory compliance; (2) evaluating alternative LCFFA/FOG removal technologies; and (3) quantifying potential FOG deposit high accumulation zones in the sewer collection system. Published by Elsevier B.V.
Joiner, Wilsaan M; Ajayi, Obafunso; Sing, Gary C; Smith, Maurice A
2011-01-01
The ability to generalize learned motor actions to new contexts is a key feature of the motor system. For example, the ability to ride a bicycle or swing a racket is often first developed at lower speeds and later applied to faster velocities. A number of previous studies have examined the generalization of motor adaptation across movement directions and found that the learned adaptation decays in a pattern consistent with the existence of motor primitives that display narrow Gaussian tuning. However, few studies have examined the generalization of motor adaptation across movement speeds. Following adaptation to linear velocity-dependent dynamics during point-to-point reaching arm movements at one speed, we tested the ability of subjects to transfer this adaptation to short-duration higher-speed movements aimed at the same target. We found near-perfect linear extrapolation of the trained adaptation with respect to both the magnitude and the time course of the velocity profiles associated with the high-speed movements: a 69% increase in movement speed corresponded to a 74% extrapolation of the trained adaptation. The close match between the increase in movement speed and the corresponding increase in adaptation beyond what was trained indicates linear hypergeneralization. Computational modeling shows that this pattern of linear hypergeneralization across movement speeds is not compatible with previous models of adaptation in which motor primitives display isotropic Gaussian tuning of motor output around their preferred velocities. Instead, we show that this generalization pattern indicates that the primitives involved in the adaptation to viscous dynamics display anisotropic tuning in velocity space and encode the gain between motor output and motion state rather than motor output itself.
Lunar terrain mapping and relative-roughness analysis
NASA Technical Reports Server (NTRS)
Rowan, L. C.; Mccauley, J. F.; Holm, E. A.
1971-01-01
Terrain maps of the equatorial zone were prepared at scales of 1:2,000,000 and 1:1,000,000 to classify lunar terrain with respect to roughness and to provide a basis for selecting sites for Surveyor and Apollo landings, as well as for Ranger and Lunar Orbiter photographs. Lunar terrain was described by qualitative and quantitative methods and divided into four fundamental classes: maria, terrae, craters, and linear features. Some 35 subdivisions were defined and mapped throughout the equatorial zone, and, in addition, most of the map units were illustrated by photographs. The terrain types were analyzed quantitatively to characterize and order their relative roughness characteristics. For some morphologically homogeneous mare areas, relative roughness can be extrapolated to the large scales from measurements at small scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, K T
A relatively simple, quantitative approach is proposed to address a specific, important gap in the approach recommended by the USEPA Guidelines for Cancer Risk Assessment to address uncertainty in carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach - similar to that used in reference dose procedures for classic toxicity endpoints - can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for a likely DMOA rodent carcinogen, naphthalene, specifically for the issue of risk extrapolation from bioassay data on naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage stochastic carcinogenesis modeling results all clearly indicate that naphthalene is a DMOA carcinogen. Plausibility bounds on rat-tumor-type specific DMOA-related uncertainty were obtained using a 2-stage model adapted to reflect the empirical link between genotoxic and cytotoxic effects of the most potent identified genotoxic naphthalene metabolites, 1,2- and 1,4-naphthoquinone. Bound-specific 'adjustment' factors were then used to reduce naphthalene risk estimated by linear extrapolation (under the default genotoxic MOA assumption), to account for the DMOA exhibited by this compound.
Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data
NASA Astrophysics Data System (ADS)
Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.
2017-12-01
We will present a super resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution roughly by a factor of 3 and the Signal-to-Noise Ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, which are limited in resolution by the transmitted signal bandwidth, analogous to the Rayleigh criterion in optics. In this work, we investigate a super resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, the EMIs (ElectroMagnetic Interferences) are detected and the spectra are interpolated in order to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture processing (SAR). We apply the proposed algorithm to simulated as well as to real radar data. We will demonstrate the effective enhancement in vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of the SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 2007, 317, 1715-1718 [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
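A minimal sketch of linear-prediction bandwidth extrapolation, assuming a least-squares autoregressive fit to the in-band complex spectrum; Cuomo's Burg-based formulation, two-sided extrapolation, and the interference excision step are not reproduced.

```python
import numpy as np

def ar_bandwidth_extrapolation(spectrum, order, n_extra):
    """Extrapolate a complex spectrum forward with a least-squares AR model.

    spectrum : 1-D complex array of in-band frequency samples
    order    : number of linear-prediction coefficients
    n_extra  : number of out-of-band samples to predict at the upper band edge
    """
    s = np.asarray(spectrum, dtype=complex)
    # Build the linear prediction problem s[n] ~ sum_k a[k] * s[n-1-k]
    rows = [s[n - 1::-1][:order] for n in range(order, len(s))]
    A = np.array(rows)
    b = s[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = list(s)
    for _ in range(n_extra):                          # recursive forward prediction
        out.append(np.dot(a, np.array(out[-1:-order - 1:-1])))
    return np.array(out)
```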
Murphy, A B
2004-01-01
A number of assessments of electron temperatures in atmospheric-pressure arc plasmas using Thomson scattering of laser light have recently been published. However, in this method, the electron temperature is perturbed due to strong heating of the electrons by the incident laser beam. This heating was taken into account by measuring the electron temperature as a function of the laser pulse energy, and linearly extrapolating the results to zero pulse energy to obtain an unperturbed electron temperature. In the present paper, calculations show that the laser heating process has a highly nonlinear dependence on laser power, and that the usual linear extrapolation leads to an overestimate of the electron temperature, typically by 5000 K. The nonlinearity occurs due to the strong dependence on electron temperature of the absorption of laser energy and of the collisional and radiative cooling of the heated electrons. There are further problems in deriving accurate electron temperatures from laser scattering due to necessary averages that have to be made over the duration of the laser pulse and over the finite volume from which laser light is scattered. These problems are particularly acute in measurements in which the laser beam is defocused in order to minimize laser heating; this can lead to the derivation of electron temperatures that are significantly greater than those existing anywhere in the scattering volume. It was concluded from the earlier Thomson scattering measurements that there were significant deviations from equilibrium between the electron and heavy-particle temperatures at the center of arc plasmas of industrial interest. The present calculations indicate that such deviations are only of the order of 1000 K in 20 000 K, so that the usual approximation that arc plasmas are approximately in local thermodynamic equilibrium still applies.
Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E
2017-12-01
Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
Solid H2 in the interstellar medium
NASA Astrophysics Data System (ADS)
Füglistaler, A.; Pfenniger, D.
2018-06-01
Context. Condensation of H2 in the interstellar medium (ISM) has long been seen as a possibility, either by deposition on dust grains or thanks to a phase transition combined with self-gravity. H2 condensation might explain the observed low efficiency of star formation and might help to hide baryons in spiral galaxies. Aims: Our aim is to quantify the solid fraction of H2 in the ISM due to a phase transition including self-gravity for different densities and temperatures in order to use the results in more complex simulations of the ISM as subgrid physics. Methods: We used molecular dynamics simulations of fluids at different temperatures and densities to study the formation of solids. Once the simulations reached a steady state, we calculated the solid mass fraction, energy increase, and timescales. By determining power laws measured over several orders of magnitude, we extrapolated the behaviour of the higher-density fluids that can be simulated with current computers to lower densities. Results: The solid fraction and energy increase of fluids in a phase transition are above 0.1 and do not follow a power law. Fluids outside a phase transition still form a small amount of solid due to chance encounters of molecules. The solid mass fraction and energy increase of these fluids are linearly dependent on density and can easily be extrapolated. The timescale is below one second, so the condensation can be considered instantaneous. Conclusions: The presence of solid H2 grains has important dynamic implications for the ISM as they may be the building blocks for larger solid bodies when gravity is included. We provide the solid mass fraction, energy increase, and timescales for high density fluids and extrapolation laws for lower densities.
Properties of infrared extrapolations in a harmonic oscillator basis
Coon, Sidney A.; Kruse, Michael K. G.
2016-02-22
Here, the success and utility of effective field theory (EFT) in explaining the structure and reactions of few-nucleon systems has prompted the initiation of EFT-inspired extrapolations to larger model spaces in ab initio methods such as the no-core shell model (NCSM). In this contribution, we review and continue our studies of infrared (ir) and ultraviolet (uv) regulators of NCSM calculations in which the input is phenomenological NN and NNN interactions fitted to data. We extend our previous findings that an extrapolation in the ir cutoff with the uv cutoff above the intrinsic uv scale of the interaction is quite successful, not only for the eigenstates of the Hamiltonian but also for expectation values of operators, such as r², considered long range. The latter results are obtained with Hamiltonians transformed by the similarity renormalization group (SRG) evolution. On the other hand, a possible extrapolation of ground state energies in the uv cutoff when the ir cutoff is below the intrinsic ir scale is not robust and does not agree with the ir extrapolation of the same data or with independent calculations using other methods.
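For readers unfamiliar with the ir extrapolation referred to here, the following sketch fits the commonly used form E(L) = E_inf + a·exp(-2·k_inf·L) to ground-state energies at several ir length scales; the energies, length scales, and starting guesses are invented for illustration and are not NCSM results.

```python
# Hedged sketch of an infrared (ir) extrapolation; all numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

L = np.array([8.0, 9.5, 11.0, 12.5, 14.0])          # ir length scale (fm), assumed
E = np.array([-25.6, -27.0, -27.7, -28.0, -28.2])   # ground-state energies (MeV), assumed

def ir_model(L, E_inf, a, k_inf):
    return E_inf + a * np.exp(-2.0 * k_inf * L)

(E_inf, a, k_inf), _ = curve_fit(ir_model, L, E, p0=[-28.0, 100.0, 0.3])
# E_inf is the extrapolated (infinite model space) ground-state energy
```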
1987-12-01
have claimed an advantage to determining values of k' in 100% aqueous mobile phases by extrapolation of linear plots of log k' vs. percent organic...im particle size chemically bonded octadecylsilane (ODS) packing (Alltech Econosphere). As required, this column was saturated with 1-octanol by in
Bioaccumulation of heavy metals in fish and Ardeid at Pearl River Estuary, China.
Kwok, C K; Liang, Y; Wang, H; Dong, Y H; Leung, S Y; Wong, M H
2014-08-01
Sediment, fish (tilapia, Oreochromis mossambicus and snakehead, Channa asiatica), eggs and eggshells of Little Egrets (Egretta garzetta) and Chinese Pond Herons (Ardeola bacchus) were collected from the Mai Po Ramsar site of Hong Kong, as well as from wetlands in the Gu Cheng County, Shang Hu County and Dafeng Milu National Nature Reserve of Jiangsu Province, China between 2004 and 2007 (n=3-9). Concentrations of six heavy metals were analyzed, based on inductively coupled plasma optical emission spectrometry (ICP-OES). Significant bioaccumulation of Cd (BAF: 165-1271 percent) was observed in the muscle and viscera of large tilapia and snakehead, suggesting potential health risks to the two bird species, as these fish are the main prey of the waterbirds. Significant (p<0.01) linear relationships were obtained between concentrations of Cd, Cr, Cu, Mn, Pb and Zn in the eggs and eggshells of various Ardeid species, and these regression models were used to extrapolate the heavy metal concentrations in the Ardeid eggs of Mai Po. Extrapolated concentrations are consistent with data in the available literature, and advocate the potential use of these models as a non-invasive sampling method for predicting heavy metal contamination in Ardeid eggs.
Li, Y Q; Varandas, A J C
2010-09-16
An accurate single-sheeted double many-body expansion potential energy surface is reported for the title system which is suitable for dynamics and kinetics studies of the reactions of N(2D) + H2(X1Sigmag+) → NH(a1Delta) + H(2S) and their isotopomeric variants. It is obtained by fitting ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pVQZ basis set, after slightly correcting semiempirically the dynamical correlation using the double many-body expansion-scaled external correlation method. The function so obtained is compared in detail with a potential energy surface of the same family obtained by extrapolating the calculated raw energies to the complete basis set limit. The topographical features of the novel global potential energy surface are examined in detail and found to be in general good agreement with those calculated directly from the raw ab initio energies, as well as previous calculations available in the literature. The novel function has been built so as to become degenerate at linear geometries with the ground-state potential energy surface of A'' symmetry reported by our group, where both form a Renner-Teller pair.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, J; Yudelev, M
2016-06-15
Purpose: The provided output factors for Elekta Nucletron's skin applicators are based on Monte Carlo simulations. These outputs have not been independently verified, and there is no recognized method for output verification of the vendor's applicators. The purpose of this work is to validate the outputs provided by the vendor experimentally. Methods: Using a Flexitron Ir-192 HDR unit, three experimental methods were employed to determine dose with the 30 mm diameter Valencia applicator: first, a gradient method using extrapolation ionization chamber (Far West Technology, EIC-1) measurements in a solid water phantom at 3 mm SCD was used. The dose was derived based on first principles. Secondly, a combination of a parallel plate chamber (Exradin A-10) and the EIC-1 was used to determine air kerma at 3 mm SCD. The air kerma was converted to dose to water in line with the TG-61 formalism by using a μen ratio and a scatter factor measured with the skin applicators. Similarly, a combination of the A-10 parallel plate chamber and gafchromic film (EBT 3) was also used. The Nk factor for the A-10 chamber was obtained through linear interpolation between ADCL-supplied Nk factors for Cs-137 and M250. Results: EIC-1 measurements in solid water defined the output factor at 3 mm as 0.1343 cGy/U hr. The combinations of A-10/EIC-1 and A-10/EBT3 led to output factors of 0.1383 and 0.1568 cGy/U hr, respectively. For comparison, the output recommended by the vendor is 0.1659 cGy/U hr. Conclusion: All determined dose rates were lower than the vendor-supplied values. The observed discrepancy between the extrapolation chamber and film methods can be ascribed to extracameral gradient effects that may not be fully accounted for by the former method.
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's Extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
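As a concrete reminder of what Richardson's extrapolation does with three systematically refined grids, here is a minimal sketch; the solution values and refinement ratio are invented, and unstructured grids would additionally require the interpolation step the paper investigates.

```python
# Hedged sketch of Richardson extrapolation from three grid levels; values illustrative.
import math

def richardson(f_fine, f_med, f_coarse, r):
    """Observed order of accuracy p and extrapolated (grid-converged) estimate."""
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    f_extrap = f_fine + (f_fine - f_med) / (r**p - 1.0)
    return p, f_extrap

p, f_extrap = richardson(f_fine=0.971, f_med=0.962, f_coarse=0.934, r=2.0)
error_estimate = abs(f_extrap - 0.971)   # approximate discretization error, finest grid
```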
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/Lα two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
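The two-point E(L) = E_CBS + B/L^α extrapolation used in both records above reduces to a closed-form expression; the sketch below evaluates it for invented DZ/TZ correlation energies with the conventional α = 3 (the papers instead optimize α globally or per system).

```python
# Hedged sketch of a two-point complete-basis-set (CBS) extrapolation; numbers illustrative.
def cbs_two_point(e_small, e_large, l_small, l_large, alpha=3.0):
    """Solve E(L) = E_CBS + B / L**alpha for E_CBS from two cardinal numbers."""
    num = e_large * l_large**alpha - e_small * l_small**alpha
    den = l_large**alpha - l_small**alpha
    return num / den

# Made-up DZ (L=2) and TZ (L=3) CCSD correlation energies in hartree
e_cbs = cbs_two_point(e_small=-0.3502, e_large=-0.3764, l_small=2, l_large=3)
```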
Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...
Loss tolerant speech decoder for telecommunications
NASA Technical Reports Server (NTRS)
Prieto, Jr., Jaime L. (Inventor)
1999-01-01
A method and device for extrapolating past signal-history data for insertion into missing data segments in order to conceal digital speech frame errors. The extrapolation method uses past-signal history that is stored in a buffer. The method is implemented with a device that utilizes a finite-impulse response (FIR) multi-layer feed-forward artificial neural network that is trained by back-propagation for one-step extrapolation of speech compression algorithm (SCA) parameters. Once a speech connection has been established, the speech compression algorithm device begins sending encoded speech frames. As the speech frames are received, they are decoded and converted back into speech signal voltages. During the normal decoding process, pre-processing of the required SCA parameters will occur and the results stored in the past-history buffer. If a speech frame is detected to be lost or in error, then extrapolation modules are executed and replacement SCA parameters are generated and sent as the parameters required by the SCA. In this way, the information transfer to the SCA is transparent, and the SCA processing continues as usual. The listener will not normally notice that a speech frame has been lost because of the smooth transition between the last-received, lost, and next-received speech frames.
Development and Testing of a Sustained Release System for the Prevention of Malaria.
1979-09-01
linear function of time to 100% excretion the extrapolated duration of the control group would be 517 days (203 days/0.393). As used in leprosy ...use in leprosy treatment, the suspending vehicle is 40% benzyl benzoate, 60% castor oil. Solubility of WR-4593 in water is given as 3.0 pg/ml while in
NASA Astrophysics Data System (ADS)
Dalmasse, K.; Pariat, É.; Valori, G.; Jing, J.; Démoulin, P.
2018-01-01
In the solar corona, magnetic helicity slowly and continuously accumulates in response to plasma flows tangential to the photosphere and magnetic flux emergence through it. Analyzing this transfer of magnetic helicity is key for identifying its role in the dynamics of active regions (ARs). The connectivity-based helicity flux density method was recently developed for studying the 2D and 3D transfer of magnetic helicity in ARs. The method takes into account the 3D nature of magnetic helicity by explicitly using knowledge of the magnetic field connectivity, which allows it to faithfully track the photospheric flux of magnetic helicity. Because the magnetic field is not measured in the solar corona, modeled 3D solutions obtained from force-free magnetic field extrapolations must be used to derive the magnetic connectivity. Different extrapolation methods can lead to markedly different 3D magnetic field connectivities, thus questioning the reliability of the connectivity-based approach in observational applications. We address these concerns by applying this method to the isolated and internally complex AR 11158 with different magnetic field extrapolation models. We show that the connectivity-based calculations are robust to different extrapolation methods, in particular with regard to identifying regions of opposite magnetic helicity flux. We conclude that the connectivity-based approach can be reliably used in observational analyses and is a promising tool for studying the transfer of magnetic helicity in ARs and relating it to their flaring activity.
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
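The report's idea of extrapolating from the previous behavior of iterates, with a limit on how large an extrapolation is tolerated, can be sketched for a scalar fixed-point iteration using Aitken's delta-squared step; the iteration map, cap, and tolerances below are illustrative and not the report's reactor calculation.

```python
# Hedged sketch: fixed-point iteration accelerated by a capped Aitken extrapolation.
import math

def accelerated_fixed_point(g, x0, max_factor=10.0, tol=1e-10, max_iter=200):
    x_prev, x_curr = x0, g(x0)
    for _ in range(max_iter):
        x_next = g(x_curr)
        denom = x_next - 2.0 * x_curr + x_prev
        if denom != 0.0:
            step = (x_next - x_curr) ** 2 / denom
            cap = max_factor * abs(x_next - x_curr)   # limit on extrapolation magnitude
            step = max(min(step, cap), -cap)
            x_next = x_next - step
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_next

# Example: x = cos(x) converges far faster with the extrapolation than without it.
root = accelerated_fixed_point(math.cos, 0.5)
```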
NASA Technical Reports Server (NTRS)
Huang, Xinchuan; Taylor, Peter R.; Lee, Timothy J.
2011-01-01
High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C3H3+ molecular cation, referred to as c-C3H3+ and l-C3H3+. Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants using both vibrational 2nd-order perturbation theory and variational methods to solve the nuclear Schroedinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C3H3+, obtaining 27.9 kcal/mol at 0 K, which should be the most reliable available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C3H3+ and l-C3H3+ are the most reliable available for the free gas-phase species, and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations.
NASA Astrophysics Data System (ADS)
Cesario, Roberto; Cardinali, Alessandro; Castaldo, Carmine; Amicucci, Luca; Ceccuzzi, Silvio; Galli, Alessandro; Napoli, Francesco; Panaccione, Luigi; Santini, Franco; Schettini, Giuseppe; Tuccillo, Angelo Antonio
2017-10-01
The main research on energy from thermonuclear fusion uses deuterium plasmas magnetically trapped in toroidal devices. To suppress the turbulent eddies that impair the thermal insulation and pressure tightness of the plasma, current drive (CD) is necessary, but the tools envisaged so far are unable to accomplish this task while efficiently and flexibly matching the natural current profiles self-generated at large radii of the plasma column [1-5]. The lower hybrid current drive (LHCD) [6] can satisfy this important need of a reactor [1], but the LHCD system has been unexpectedly mothballed on JET. The problematic extrapolation of the LHCD tool to reactor-grade values of plasma density and temperature has now been solved. The high density problem is solved by the FTU (Frascati Tokamak Upgrade) method [7], and the solution of the high temperature one is presented here. Model results based on quasi-linear (QL) theory show that, compared with linear theory, suitable operating parameters can reduce the wave damping in hot reactor plasmas. Namely, by using higher RF power densities [8] or a narrower antenna power spectrum in refractive index [9,10], the obstacle for LHCD represented by the high temperature of reactor plasmas should be overcome. The former method cannot be used for routine, safe antenna operation; thus, only the latter key is really exploitable in a reactor. The proposed solutions are ultimately necessary for the viability of an economic reactor.
Radiation environment for ATS-F. [including ambient trapped particle fluxes
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1974-01-01
The ambient trapped particle fluxes incident on the ATS-F satellite were determined. Several synchronous circular flight paths were evaluated and the effect of parking longitude on vehicle encountered intensities was investigated. Temporal variations in the electron environment were considered and partially accounted for. Magnetic field calculations were performed with a current field model extrapolated to a later epoch with linear time terms. Orbital flux integrations were performed with the latest proton and electron environment models using new improved computational methods. The results are presented in graphical and tabular form; they are analyzed, explained, and discussed. Estimates of energetic solar proton fluxes are given for a one year mission at selected integral energies ranging from 10 to 100 Mev, calculated for a year of maximum solar activity during the next solar cycle.
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latychevskaia, Tatiana, E-mail: tatiana@physik.uzh.ch; Fink, Hans-Werner; Chushkin, Yuriy
Coherent diffraction imaging is a high-resolution imaging technique whose potential can be greatly enhanced by applying the extrapolation method presented here. We demonstrate the enhancement in resolution of a non-periodical object reconstructed from an experimental X-ray diffraction record which contains about 10% missing information, including the pixels in the center of the diffraction pattern. A diffraction pattern is extrapolated beyond the detector area and as a result, the object is reconstructed at an enhanced resolution and better agreement with experimental amplitudes is achieved. The optimal parameters for the iterative routine and the limits of the extrapolation procedure are discussed.
Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudin, Pablo, E-mail: baudin.pablo@gmail.com; qLEAP – Center for Theoretical Chemistry, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C; Marín, José Sánchez
2014-03-14
A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioning form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculation of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform an extrapolation to complete basis set investigation on the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.
Verevkin, Sergey P; Zaitsau, Dzmitry H; Emel'yanenko, Vladimir N; Yermalayeu, Andrei V; Schick, Christoph; Liu, Hongjun; Maginn, Edward J; Bulut, Safak; Krossing, Ingo; Kalb, Roland
2013-05-30
Vaporization enthalpy of an ionic liquid (IL) is a key physical property for applications of ILs as thermofluids and also is useful in developing liquid state theories and validating intermolecular potential functions used in molecular modeling of these liquids. Compilation of the data for a homologous series of 1-alkyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([C(n)mim][NTf2]) ILs has revealed an embarrassing disarray of literature results. New experimental data, based on the concurring results from quartz crystal microbalance, thermogravimetric analyses, and molecular dynamics simulation have revealed a clear linear dependence of IL vaporization enthalpies on the chain length of the alkyl group on the cation. Ambiguity of the procedure for extrapolation of vaporization enthalpies to the reference temperature 298 K was found to be a major source of the discrepancies among previous data sets. Two simple methods for temperature adjustment of vaporization enthalpies have been suggested. Resulting vaporization enthalpies obey group additivity, although the values of the additivity parameters for ILs are different from those for molecular compounds.
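A minimal sketch of the two operations the abstract describes, temperature adjustment of vaporization enthalpies to 298 K and a linear fit versus alkyl chain length, is given below; the enthalpies, measurement temperature, and heat-capacity difference are assumed values, not the paper's data.

```python
# Hedged sketch: temperature adjustment and chain-length regression; values assumed.
import numpy as np

n_alkyl = np.array([2, 4, 6, 8, 10])                      # [C(n)mim][NTf2] chain lengths
dHvap_T = np.array([128.0, 136.0, 144.5, 152.5, 161.0])   # kJ/mol measured at T = 450 K

dCp_vap = -0.094                                          # kJ/(mol K), assumed constant
dHvap_298 = dHvap_T + dCp_vap * (298.15 - 450.0)          # adjust to the reference T

slope, intercept = np.polyfit(n_alkyl, dHvap_298, 1)      # kJ/mol per CH2 group
```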
NASA Technical Reports Server (NTRS)
Marcum, Jeremy W.; Ferkul, Paul V.; Olson, Sandra L.
2017-01-01
Normal gravity flame blowoff limits in an axisymmetric PMMA rod geometry in upward axial stagnation flow are compared with microgravity Burning and Suppression of Solids II (BASS-II) results recently obtained aboard the International Space Station. This testing utilized the same BASS-II concurrent rod geometry, but with the addition of normal gravity buoyant flow. Cast polymethylmethacrylate (PMMA) rods of diameters ranging from 0.635 cm to 3.81 cm were burned at oxygen concentrations ranging from 14 to 18% by volume. The forced flow velocity where blowoff occurred was determined for each rod size and oxygen concentration. These blowoff limits compare favorably with the BASS-II results when the buoyant stretch is included and the flow is corrected by considering the blockage factor of the fuel. From these results, the normal gravity blowoff boundary for this axisymmetric rod geometry is determined to be linear, with oxygen concentration directly proportional to flow speed. We describe a new normal gravity upward flame spread test method which extrapolates the linear blowoff boundary to the zero stretch limit to resolve microgravity flammability limits, something current methods cannot do. This new test method can improve spacecraft fire safety for future exploration missions by providing a tractable way to obtain good estimates of material flammability in low gravity.
The allometry of coarse root biomass: log-transformed linear regression or nonlinear regression?
Lai, Jiangshan; Yang, Bo; Lin, Dunmei; Kerkhoff, Andrew J; Ma, Keping
2013-01-01
Precise estimation of root biomass is important for understanding carbon stocks and dynamics in forests. Traditionally, biomass estimates are based on allometric scaling relationships between stem diameter and coarse root biomass calculated using linear regression (LR) on log-transformed data. Recently, it has been suggested that nonlinear regression (NLR) is a preferable fitting method for scaling relationships. But while this claim has been contested on both theoretical and empirical grounds, and statistical methods have been developed to aid in choosing between the two methods in particular cases, few studies have examined the ramifications of erroneously applying NLR. Here, we use direct measurements of 159 trees belonging to three locally dominant species in east China to compare the LR and NLR models of diameter-root biomass allometry. We then contrast model predictions by estimating stand coarse root biomass based on census data from the nearby 24-ha Gutianshan forest plot and by testing the ability of the models to predict known root biomass values measured on multiple tropical species at the Pasoh Forest Reserve in Malaysia. Based on likelihood estimates for model error distributions, as well as the accuracy of extrapolative predictions, we find that LR on log-transformed data is superior to NLR for fitting diameter-root biomass scaling models. More importantly, inappropriately using NLR leads to grossly inaccurate stand biomass estimates, especially for stands dominated by smaller trees.
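The methodological contrast at the heart of this study, log-transformed linear regression versus direct nonlinear regression of the power law M = a·D^b, is easy to reproduce; the diameter and biomass values below are invented, not the 159 measured trees.

```python
# Hedged sketch comparing LR on log-transformed data with NLR; data are made up.
import numpy as np
from scipy.optimize import curve_fit

D = np.array([5.1, 8.3, 12.7, 20.4, 33.0, 47.5])    # stem diameter (cm)
M = np.array([0.6, 2.1, 7.9, 28.0, 110.0, 310.0])   # coarse root biomass (kg)

# LR: log(M) = log(a) + b*log(D), i.e. multiplicative (log-normal) error
b_lr, log_a_lr = np.polyfit(np.log(D), np.log(M), 1)

# NLR: fit M = a*D**b directly, i.e. additive (normal) error
(a_nlr, b_nlr), _ = curve_fit(lambda d, a, b: a * d**b, D, M, p0=[0.01, 2.5])

# Extrapolated prediction for a large tree; the two error models can diverge here
M_lr = np.exp(log_a_lr) * 60.0**b_lr
M_nlr = a_nlr * 60.0**b_nlr
```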
NASA Technical Reports Server (NTRS)
Marcum, Jeremy W.; Olson, Sandra L.; Ferkul, Paul V.
2016-01-01
The axisymmetric rod geometry in upward axial stagnation flow provides a simple way to measure normal gravity blowoff limits to compare with microgravity Burning and Suppression of Solids - II (BASS-II) results recently obtained aboard the International Space Station. This testing utilized the same BASS-II concurrent rod geometry, but with the addition of normal gravity buoyant flow. Cast polymethylmethacrylate (PMMA) rods of diameters ranging from 0.635 cm to 3.81 cm were burned at oxygen concentrations ranging from 14 to 18% by volume. The forced flow velocity where blowoff occurred was determined for each rod size and oxygen concentration. These blowoff limits compare favorably with the BASS-II results when the buoyant stretch is included and the flow is corrected by considering the blockage factor of the fuel. From these results, the normal gravity blowoff boundary for this axisymmetric rod geometry is determined to be linear, with oxygen concentration directly proportional to flow speed. We describe a new normal gravity 'upward flame spread test' method which extrapolates the linear blowoff boundary to the zero stretch limit in order to resolve microgravity flammability limits, something current methods cannot do. This new test method can improve spacecraft fire safety for future exploration missions by providing a tractable way to obtain good estimates of material flammability in low gravity.
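The final extrapolation step of the proposed test method amounts to a linear fit of the blowoff boundary and evaluation of its intercept at zero flow (zero stretch); the sketch below uses invented data points rather than the measured PMMA limits.

```python
# Hedged sketch of extrapolating the linear blowoff boundary to zero stretch; data invented.
import numpy as np

flow_speed = np.array([5.0, 10.0, 15.0, 20.0, 25.0])    # corrected flow speed (cm/s)
o2_blowoff = np.array([14.6, 15.3, 16.1, 16.8, 17.5])   # blowoff O2 concentration (vol %)

slope, intercept = np.polyfit(flow_speed, o2_blowoff, 1)
o2_limit_zero_stretch = intercept   # estimated microgravity flammability limit
```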
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
An evaluation of methods for estimating decadal stream loads
Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-01-01
Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
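Of the ratio estimators evaluated, Beale's bias-corrected form is compact enough to sketch; the function below assumes paired concentration and flow samples plus a complete daily-flow record, and the variable names and unit handling are illustrative only.

```python
# Hedged sketch of Beale's bias-corrected ratio estimator for a period load; illustrative.
import numpy as np

def beale_load(conc, q_sampled, q_all_days):
    """conc, q_sampled: concentration and flow on sampled days (consistent units);
    q_all_days: mean flow for every day of the period. Returns the estimated total load."""
    n = len(conc)
    load = conc * q_sampled                         # instantaneous load on sampled days
    m_l, m_q = load.mean(), q_sampled.mean()
    s_lq = np.cov(load, q_sampled, ddof=1)[0, 1]
    s_qq = np.var(q_sampled, ddof=1)
    ratio = (m_l / m_q) * (1.0 + s_lq / (n * m_l * m_q)) / (1.0 + s_qq / (n * m_q**2))
    return ratio * q_all_days.sum()
```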
Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu
2016-02-15
We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
NASA Technical Reports Server (NTRS)
Kvernadze, George; Hagstrom,Thomas; Shapiro, Henry
1997-01-01
A key step for some methods dealing with the reconstruction of a function with jump discontinuities is the accurate approximation of the jumps and their locations. Various methods have been suggested in the literature to obtain this valuable information. In the present paper, we develop an algorithm based on identities which determine the jumps of a 2(pi)-periodic bounded not-too-highly oscillating function by the partial sums of its differentiated Fourier series. The algorithm enables one to approximate the locations of discontinuities and the magnitudes of jumps of a bounded function. We study the accuracy of approximation and establish asymptotic expansions for the approximations of a 2(pi)-periodic piecewise smooth function with one discontinuity. By an appropriate linear combination, obtained via derivatives of different order, we significantly improve the accuracy. Next, we use Richardson's extrapolation method to enhance the accuracy even more. For a function with multiple discontinuities we establish simple formulae which "eliminate" all discontinuities of the function but one. Then we treat the function as if it had one singularity, following the method described above.
The contribution of benzene to smoking-induced leukemia.
Korte, J E; Hertz-Picciotto, I; Schulz, M R; Ball, L M; Duell, E J
2000-04-01
Cigarette smoking is associated with an increased risk of leukemia; benzene, an established leukemogen, is present in cigarette smoke. By combining epidemiologic data on the health effects of smoking with risk assessment techniques for low-dose extrapolation, we assessed the proportion of smoking-induced total leukemia and acute myeloid leukemia (AML) attributable to the benzene in cigarette smoke. We fit both linear and quadratic models to data from two benzene-exposed occupational cohorts to estimate the leukemogenic potency of benzene. Using multiple-decrement life tables, we calculated lifetime risks of total leukemia and AML deaths for never, light, and heavy smokers. We repeated these calculations, removing the effect of benzene in cigarettes based on the estimated potencies. From these life tables we determined smoking-attributable risks and benzene-attributable risks. The ratio of the latter to the former constitutes the proportion of smoking-induced cases attributable to benzene. Based on linear potency models, the benzene in cigarette smoke contributed from 8 to 48% of smoking-induced total leukemia deaths [95% upper confidence limit (UCL), 20-66%], and from 12 to 58% of smoking-induced AML deaths (95% UCL, 19-121%). The inclusion of a quadratic term yielded results that were comparable; however, potency models with only quadratic terms resulted in much lower attributable fractions--all < 1%. Thus, benzene is estimated to be responsible for approximately one-tenth to one-half of smoking-induced total leukemia mortality and up to three-fifths of smoking-related AML mortality. In contrast to theoretical arguments that linear models substantially overestimate low-dose risk, linear extrapolations from empirical data over a dose range of 10- to 100-fold resulted in plausible predictions.
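To make the linear-versus-quadratic contrast concrete, the sketch below fits both potency models to an invented grouped dose-response table and extrapolates the excess risk to a low dose; it omits the life-table machinery the authors actually used.

```python
# Hedged sketch of linear vs. quadratic low-dose extrapolation; numbers are invented.
import numpy as np

dose = np.array([0.0, 10.0, 40.0, 100.0, 200.0])   # cumulative exposure (ppm-years)
excess_rr = np.array([0.0, 0.15, 0.6, 1.8, 4.5])    # excess relative risk in the cohort

beta_lin = np.linalg.lstsq(dose[:, None], excess_rr, rcond=None)[0][0]
beta_quad = np.linalg.lstsq((dose**2)[:, None], excess_rr, rcond=None)[0][0]

low_dose = 2.0                                      # e.g. a smoking-derived benzene dose
err_linear = beta_lin * low_dose
err_quadratic = beta_quad * low_dose**2             # typically orders of magnitude smaller
```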
NASA Technical Reports Server (NTRS)
Siclari, M. J.
1992-01-01
A CFD analysis of the near-field sonic boom environment of several low boom High Speed Civilian Transport (HSCT) concepts is presented. The CFD method utilizes a multi-block Euler marching code within the context of an innovative mesh topology that allows for the resolution of shock waves several body lengths from the aircraft. Three-dimensional pressure footprints at one body length below three different low boom aircraft concepts are presented. Models of two concepts designed by NASA to cruise at Mach 2 and Mach 3 were built and tested in the wind tunnel. The third concept was designed by Boeing to cruise at Mach 1.7. Centerline and sideline samples of these footprints are then extrapolated to the ground using a linear waveform parameter method to estimate the ground signatures or sonic boom ground overpressure levels. The Mach 2 concept achieved its centerline design signature but indicated higher sideline booms due to the outboard wing crank of the configuration. Nacelles are also included on two of NASA's low boom concepts. Computations are carried out for both flow-through nacelles and nacelles with engine exhaust simulation. The flow-through nacelles with the assumption of zero spillage and zero inlet lip radius showed very little effect on the sonic boom signatures. On the other hand, it was shown that the engine exhaust plumes can have an effect on the levels of overpressure reaching the ground depending on the engine operating conditions. The results of this study indicate that engine integration into a low boom design should be given some attention.
A Method for Extrapolation of Atmospheric Soundings
2014-05-01
3.1.2 WRF Inter-Comparisons ... Figure 5. Profiles comparing the 00 UTC 14 January 2013 GJT radiosonde to 1-km WRF data from 23 UTC extended from ... comparing 1-km WRF data and 3-km WRF data extended from the “old surface” to the radiosonde surface using the standard extrapolation and extended
Inorganic arsenic is classified as a carcinogen and has been linked to lung and bladder cancer as well as other non-cancerous health effects. Because of these health effects the U.S. EPA has set a Maximum Contaminant Level (MCL) at 10ppb based on a linear extrapolation of risk an...
NASA Astrophysics Data System (ADS)
Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch
2017-09-01
A new method for the probabilistic nowcasting of instantaneous rain rates (ENS) based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold in a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by the calibration of forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: combined method (COM) and neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble was comprised of united ensemble members of ENS and NEI. The results showed that the calibration technique significantly improves bias of the probability forecasts by including additional uncertainties that correspond to neglected processes during the extrapolation. In addition, the calibration can also be used for finding the limits of maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable size of the ensemble is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
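The calibration step described here, adjusting exceedance probabilities so that their reliability (the agreement between forecast probability and observed frequency) improves, can be sketched with a simple binned mapping; the synthetic forecasts and bin count below are illustrative, not the radar-ensemble implementation.

```python
# Hedged sketch of reliability-based calibration of probability forecasts; data synthetic.
import numpy as np

def reliability_calibration(p_forecast, obs, n_bins=10):
    """Return a function mapping raw probabilities to bin-wise observed frequencies."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p_forecast, edges) - 1, 0, n_bins - 1)
    obs_freq = np.array([obs[idx == k].mean() if np.any(idx == k) else np.nan
                         for k in range(n_bins)])
    def calibrate(p):
        k = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
        return np.where(np.isnan(obs_freq[k]), p, obs_freq[k])
    return calibrate

def brier_score(p, obs):
    return np.mean((p - obs) ** 2)

# Synthetic over-confident forecasts: observed frequency is only ~0.8 of the forecast
rng = np.random.default_rng(1)
p_train = rng.uniform(0, 1, 5000)
o_train = (rng.uniform(0, 1, 5000) < 0.8 * p_train).astype(float)
calibrate = reliability_calibration(p_train, o_train)
print(brier_score(p_train, o_train), brier_score(calibrate(p_train), o_train))
```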
Soft tissue modelling through autowaves for surgery simulation.
Zhong, Yongmin; Shirinzadeh, Bijan; Alici, Gursel; Smith, Julian
2006-09-01
Modelling of soft tissue deformation is of great importance to virtual reality based surgery simulation. This paper presents a new methodology for simulation of soft tissue deformation by drawing an analogy between autowaves and soft tissue deformation. The potential energy stored in a soft tissue as a result of a deformation caused by an external force is propagated among mass points of the soft tissue by non-linear autowaves. The novelty of the methodology is that (i) autowave techniques are established to describe the potential energy distribution of a deformation for extrapolating internal forces, and (ii) non-linear materials are modelled with non-linear autowaves rather than geometric non-linearity. Integration with a haptic device has been achieved to simulate soft tissue deformation with force feedback. The proposed methodology not only deals with large-range deformations, but also accommodates isotropic, anisotropic and inhomogeneous materials by simply changing diffusion coefficients.
Jaffrin, M Y; Maasrani, M; Le Gourrier, A; Boudailliez, B
1997-05-01
A method is presented for monitoring the relative variation of extracellular and intracellular fluid volumes using a multifrequency impedance meter and the Cole-Cole extrapolation technique. It is found that this extrapolation is necessary to obtain reliable data for the resistance of the intracellular fluid. The extracellular and intracellular resistances can be approached using frequencies of, respectively, 5 kHz and 1000 kHz, but the use of 100 kHz leads to unacceptable errors. In the conventional treatment the overall relative variation of intracellular resistance is found to be relatively small.
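The Cole-Cole extrapolation mentioned here amounts to fitting a circular arc to the multifrequency impedance locus and reading off its real-axis intercepts; the sketch below does this by a linear least-squares circle fit on synthetic data, with the parallel-resistance relation used to recover the intracellular resistance. All numbers are illustrative, not patient measurements.

```python
# Hedged sketch of the Cole-Cole extrapolation for Re and Ri; synthetic impedance data.
import numpy as np

def cole_cole_resistances(z):
    """z: complex impedances at several frequencies. Returns (Re, Ri)."""
    x, y = z.real, -z.imag                       # Cole-Cole plot convention (y >= 0)
    # Least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0
    A = np.column_stack([x, y, np.ones_like(x)])
    D, E, F = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
    xc, yc = -D / 2.0, -E / 2.0
    radius = np.sqrt(xc**2 + yc**2 - F)
    half_chord = np.sqrt(radius**2 - yc**2)      # distance to the real-axis crossings
    r0, rinf = xc + half_chord, xc - half_chord  # zero- and infinite-frequency intercepts
    re = r0                                      # extracellular resistance
    ri = r0 * rinf / (r0 - rinf)                 # intracellular resistance (parallel model)
    return re, ri

# Synthetic arc: R0 = 700 ohm, Rinf = 450 ohm, single relaxation (illustrative)
w = np.logspace(3, 6, 15)
z = 450.0 + (700.0 - 450.0) / (1.0 + 1j * w / 5.0e4)
re, ri = cole_cole_resistances(z)
```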
NASA Astrophysics Data System (ADS)
Hill, J. Grant; Peterson, Kirk A.; Knizia, Gerald; Werner, Hans-Joachim
2009-11-01
Accurate extrapolation to the complete basis set (CBS) limit of valence correlation energies calculated with explicitly correlated MP2-F12 and CCSD(T)-F12b methods have been investigated using a Schwenke-style approach for molecules containing both first and second row atoms. Extrapolation coefficients that are optimal for molecular systems containing first row elements differ from those optimized for second row analogs, hence values optimized for a combined set of first and second row systems are also presented. The new coefficients are shown to produce excellent results in both Schwenke-style and equivalent power-law-based two-point CBS extrapolations, with the MP2-F12/cc-pV(D,T)Z-F12 extrapolations producing an average error of just 0.17 mEh with a maximum error of 0.49 for a collection of 23 small molecules. The use of larger basis sets, i.e., cc-pV(T,Q)Z-F12 and aug-cc-pV(Q,5)Z, in extrapolations of the MP2-F12 correlation energy leads to average errors that are smaller than the degree of confidence in the reference data (˜0.1 mEh). The latter were obtained through use of very large basis sets in MP2-F12 calculations on small molecules containing both first and second row elements. CBS limits obtained from optimized coefficients for conventional MP2 are only comparable to the accuracy of the MP2-F12/cc-pV(D,T)Z-F12 extrapolation when the aug-cc-pV(5+d)Z and aug-cc-pV(6+d)Z basis sets are used. The CCSD(T)-F12b correlation energy is extrapolated as two distinct parts: CCSD-F12b and (T). While the CCSD-F12b extrapolations with smaller basis sets are statistically less accurate than those of the MP2-F12 correlation energies, this is presumably due to the slower basis set convergence of the CCSD-F12b method compared to MP2-F12. The use of larger basis sets in the CCSD-F12b extrapolations produces correlation energies with accuracies exceeding the confidence in the reference data (also obtained in large basis set F12 calculations). It is demonstrated that the use of the 3C(D) Ansatz is preferred for MP2-F12 CBS extrapolations. Optimal values of the geminal Slater exponent are presented for the diagonal, fixed amplitude Ansatz in MP2-F12 calculations, and these are also recommended for CCSD-F12b calculations.
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.
EXTRAPOLATION METHOD FOR MAXIMAL AND 24-H AVERAGE LTE TDD EXPOSURE ESTIMATION.
Franci, D; Grillo, E; Pavoncello, S; Coltellacci, S; Buccella, C; Aureli, T
2018-01-01
The Long-Term Evolution (LTE) system represents the evolution of the Universal Mobile Telecommunication System technology. This technology introduces two duplex modes: Frequency Division Duplex and Time Division Duplex (TDD). Although LTE TDD has seen limited expansion in European countries since the debut of LTE technology, renewed commercial interest in it has recently emerged. Therefore, the development of extrapolation procedures optimised for TDD systems becomes crucial, especially for the regulatory authorities. This article presents an extrapolation method aimed at assessing exposure to LTE TDD sources, based on the detection of the Cell-Specific Reference Signal power level. The method introduces a βTDD parameter intended to quantify the fraction of the LTE TDD frame duration reserved for downlink transmission. The method has been validated by experimental measurements performed on signals generated by both a vector signal generator and a test Base Transceiver Station installed at the Linkem S.p.A facility in Rome.
NASA Astrophysics Data System (ADS)
Gambino, D.; Sangiovanni, D. G.; Alling, B.; Abrikosov, I. A.
2017-09-01
We use the color diffusion (CD) algorithm in nonequilibrium (accelerated) ab initio molecular dynamics simulations to determine Ti monovacancy jump frequencies in NaCl-structure titanium nitride (TiN), at temperatures ranging from 2200 to 3000 K. Our results show that the CD method extended beyond the linear-fitting rate-versus-force regime [Sangiovanni et al., Phys. Rev. B 93, 094305 (2016), 10.1103/PhysRevB.93.094305] can efficiently determine metal vacancy migration rates in TiN, despite the low mobilities of lattice defects in this type of ceramic compound. We propose a computational method based on gamma-distribution statistics, which provides an unambiguous definition of nonequilibrium and equilibrium (extrapolated) vacancy jump rates with corresponding statistical uncertainties. The acceleration factor achieved in our implementation of nonequilibrium molecular dynamics increases dramatically with decreasing temperature, from 500 for T close to the melting point Tm up to 33 000 for T ≈ 0.7 Tm.
Project SOLWIND: Space radiation exposure. [evaluation of particle fluxes
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1975-01-01
A special orbital radiation study was conducted for the SOLWIND project to evaluate mission-encountered energetic particle fluxes. Magnetic field calculations were performed with a current field model, extrapolated to the tentative spacecraft launch epoch with linear time terms. Orbital flux integrations for circular flight paths were performed with the latest proton and electron environment models, using new improved computational methods. Temporal variations in the ambient electron environment are considered and partially accounted for. Estimates of average energetic solar proton fluences are given for a one year mission duration at selected integral energies ranging from E greater than 10 to E greater than 100 MeV; the predicted annual fluence is found to relate to the period of maximum solar activity during the next solar cycle. The results are presented in graphical and tabular form; they are analyzed, explained, and discussed.
Density effects on the electronic contribution to hydrogen Lyman alpha Stark profiles
NASA Astrophysics Data System (ADS)
Motapon, O.
1998-01-01
The quantum unified theory of Stark broadening (Tran Minh et al. 1975, Feautrier et al. 1976) is used to study the density effects on the electronic contribution to the hydrogen Lyman alpha lineshape. The contribution of the first angular momenta to the total profile is obtained by an extrapolation method, and the results agree with other approaches. The comparison made with Vidal et al. (1973) shows good agreement. The electronic profile is found to be linear in density for |Δλ| greater than 8 Å for densities below 10^17 cm^-3, while the density dependence becomes more complex for |Δλ| less than 8 Å. The wing profiles are calculated at various temperatures ranging from 2500 to 40 000 K, and a polynomial fit of these profiles is given.
Radiation hazards to synchronous satellites: The IUE (SAS-D) mission
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1973-01-01
The ambient trapped particle fluxes incident on the IUE (SAS-D) satellite were studied. Several synchronous elliptical and circular flight paths were evaluated and the effect of inclination, eccentricity, and parking longitude on vehicle encountered intensities was investigated. Temporal variations in the electron environment were considered and partially accounted for. Magnetic field calculations were performed with a current field model extrapolated to a later epoch with linear time terms. Orbital flux integrations were performed with the latest proton and electron environment models using new improved computational methods. The results are presented in graphical and tabular form; they are analyzed, explained, and discussed. Estimates of energetic solar proton fluxes are given for a one year mission at selected integral energies ranging from 10 to 100 MeV, calculated for a year of maximum solar activity during the next solar cycle.
Simulations of the Neutron Gas in the Inner Crust of Neutron Stars
NASA Astrophysics Data System (ADS)
Vandegriff, Elizabeth; Horowitz, Charles; Caplan, Matthew
2017-09-01
Inside neutron stars, the structures known as `nuclear pasta' are found in the crust. This pasta forms near nuclear density as nucleons arrange in spaghetti- or lasagna-like structures to minimize their energy. We run classical molecular dynamics simulations to visualize the geometry of this pasta and study the distribution of nucleons. In the simulations, we observe that the pasta is embedded in a gas of neutrons, which we call the `sauce'. In this work, we developed two methods for determining the density of neutrons in the gas, one which is accurate at low temperatures and a second which justifies an extrapolation at high temperatures. Running simulations with no Coulomb interactions, we find that the neutron density increases linearly with temperature for every proton fraction we simulated. NSF REU Grant PHY-1460882 at Indiana University.
Multi-wavelength Observations and Modeling of Solar Flares: Magnetic Structures
NASA Astrophysics Data System (ADS)
Su, Y.
2017-12-01
We present a review of our recent investigations on multi-wavelength observations and magnetic field modeling of solar flares. High-resolution observations taken by NVST and BBSO/NST reveal unprecedented fine structures of the flaring regions. Observations by SDO, IRIS, and GOES provide complementary information. The magnetic field models are constructed using either non-linear force-free field extrapolations or the flux rope insertion method. Our studies have shown that the flaring regions often consist of double or multiple flux ropes, which often exist at different heights. The fine flare ribbon structures may be due to magnetic reconnection in complex quasi-separatrix layers. The magnetic field modeling of several large flares suggests that the so-called hot-channel structure corresponds to the erupting flux rope above the X-point in a magnetic configuration with a hyperbolic flux tube.
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2017-11-01
A new scheme is put forward to determine the wetting temperature (Tw) by utilizing the adaptation of the arc-length continuation algorithm to classical density functional theory (DFT) used originally by Frink and Salinger, and its advantages are summarized in four points: (i) the new scheme is applicable whether the wetting occurs near a planar or a non-planar surface, whereas a zero contact angle method is considered applicable only to a perfectly flat solid surface, as demonstrated previously and in this work, and is essentially not fit for non-planar surfaces. (ii) The new scheme is devoid of an uncertainty which plagues the pre-wetting extrapolation method and originates from the unattainability of an infinitely thick film in the theoretical calculation. (iii) The new scheme can be similarly and easily applied to extreme instances characterized by lower temperatures and/or a stronger surface attraction force field, which, however, cannot be dealt with by the pre-wetting extrapolation method because the pre-wetting transition becomes mixed with many layering transitions and it is difficult to differentiate the varieties of surface phase transitions. (iv) The new scheme still works in instances wherein the wetting transition occurs close to the bulk critical temperature; this case cannot be managed at all by the pre-wetting extrapolation method, because near the bulk critical temperature the pre-wetting region is extremely narrow and not enough pre-wetting data are available for use in the extrapolation procedure.
Detectors for Linear Colliders: Tracking and Vertexing (2/4)
Battaglia, Marco
2018-04-16
Efficient and precise determination of the flavour of partons in multi-hadron final states is essential to the anticipated LC physics program. This makes tracking in the vicinity of the interaction region of great importance. Tracking extrapolation and momentum resolution are specified by precise physics requirements. The R&D towards detectors able to meet these specifications will be discussed, together with some of their applications beyond particle physics.
Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI
NASA Astrophysics Data System (ADS)
Seregni, M.; Paganelli, C.; Lee, D.; Greer, P. B.; Baroni, G.; Keall, P. J.; Riboldi, M.
2016-01-01
In-room cine-MRI guidance can provide non-invasive target localization during radiotherapy treatment. However, in order to cope with finite imaging frequency and system latencies between target localization and dose delivery, tumour motion prediction is required. This work proposes a framework for motion prediction dedicated to cine-MRI guidance, aiming at quantifying the geometric uncertainties introduced by this process for both tumour tracking and beam gating. The tumour position, identified through scale invariant features detected in cine-MRI slices, is estimated at high frequency (25 Hz) using three independent predictors, one for each anatomical coordinate. Linear extrapolation, auto-regressive and support vector machine algorithms are compared against systems that use no prediction or surrogate-based motion estimation. Geometric uncertainties are reported as a function of image acquisition period and system latency. Average results show that the tracking error RMS can be decreased to within a [0.2; 1.2] mm range, for acquisition periods between 250 and 750 ms and system latencies between 50 and 300 ms. Except for the linear extrapolator, tracking and gating prediction errors were, on average, lower than those measured for surrogate-based motion estimation. This finding suggests that cine-MRI guidance, combined with appropriate prediction algorithms, could substantially decrease geometric uncertainties in motion-compensated treatments.
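A minimal sketch of the simplest of the three predictors compared above, the linear extrapolator: fit a least-squares line through the last few observed positions of one anatomical coordinate and evaluate it one imaging period plus system latency ahead. The window length, sampling times and positions are illustrative.

import numpy as np

def linear_extrapolation_predict(times_s, positions_mm, horizon_s, window=4):
    """Predict one tumour coordinate `horizon_s` seconds after the last sample
    using a least-squares line through the last `window` observations."""
    t = np.asarray(times_s[-window:], dtype=float)
    x = np.asarray(positions_mm[-window:], dtype=float)
    slope, intercept = np.polyfit(t, x, 1)
    return slope * (t[-1] + horizon_s) + intercept

# Illustrative superior-inferior positions sampled every 0.25 s,
# predicted 0.3 s ahead (imaging period plus latency).
print(linear_extrapolation_predict([0.0, 0.25, 0.5, 0.75], [1.2, 2.0, 2.9, 3.5], 0.3))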
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, Joerg; Kessler, Lutz; Paul, Udo
2007-05-17
The concept of forming limit curves (FLC) is widely used in industrial practice. The required data should be delivered by the material suppliers for typical material properties (measured on coils with properties within +/- one standard deviation of the mean production values). In particular, it should be noted that providing forming limit curves for the full variety of scatter in the mechanical properties, as would be needed to validate forming robustness, is not feasible. Therefore a forecast of the expected limit strains without expensive and time-consuming experiments is necessary. In the paper the quality of a regression analysis for determining forming limit curves based on tensile test results is presented and discussed. Owing to the specific definition of limit strains with FLCs following linear strain paths, the significance of this failure definition is limited. To consider nonlinear strain path effects, different methods are given in the literature. One simple method is the concept of limit stresses. It should be noted that the determined value of the critical stress depends on the extrapolation of the tensile test curve. When the yield curve extrapolation is very similar to an exponential function, the definition of the critical stress value is very complicated due to the low slope of the hardening function at large strains. A new method to determine general failure behavior in sheet metal forming is the combined use and interpretation of three criteria: onset of material instability (comparable with the FLC concept), the value of critical shear fracture and the value of ductile fracture. This method seems to be particularly successful for newly developed high strength steel grades in connection with more complex strain paths for some specific material elements. Nevertheless, the effort needed to identify the different failure material parameters or functions will increase, and the user has to learn to interpret the numerical results.
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
Cigarette sales in pharmacies in the USA (2005-2009).
Seidenberg, Andrew B; Behm, Ilan; Rees, Vaughan W; Connolly, Gregory N
2012-09-01
Several US jurisdictions have adopted policies prohibiting pharmacies from selling tobacco products. Little is known about how pharmacies contribute to total cigarette sales. Pharmacy and total cigarette sales in the USA were tabulated from AC Nielsen and Euromonitor, respectively, for the years 2005-2009. Linear regression was used to characterise trends over time, with observed trends extrapolated to 2020. Between 2005 and 2009, pharmacy cigarette sales increased 22.72% (p=0.004), while total cigarette sales decreased 17.43% (p=0.015). In 2005, pharmacy cigarette sales represented 3.05% of total cigarette sales, increasing to 4.54% by 2009. Extrapolation of these findings resulted in estimated pharmacy cigarette sales of 14.59% of total US cigarette sales by 2020. Cigarette sales in American pharmacies have risen in recent years, while cigarette sales nationally have declined. If current trends continue, pharmacy cigarette market share will, by 2020, increase to more than four times the 2005 share.
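A minimal sketch of the extrapolation described above: fit separate linear trends to pharmacy and total cigarette sales over 2005-2009, project both to 2020, and take the ratio as the projected pharmacy market share. The yearly sales figures below are hypothetical stand-ins consistent only with the percentage changes quoted in the abstract; they are not the AC Nielsen or Euromonitor data.

import numpy as np

years = np.array([2005, 2006, 2007, 2008, 2009], dtype=float)
# Hypothetical sales (arbitrary units): pharmacy up ~22.7%, total down ~17.4% over 2005-2009.
pharmacy = np.array([100.0, 105.5, 111.0, 116.8, 122.7])
total = np.array([3279.0, 3137.0, 2994.0, 2851.0, 2707.0])

def trend(values):
    """Least-squares linear trend as a callable in calendar year."""
    slope, intercept = np.polyfit(years, values, 1)
    return lambda year: slope * year + intercept

share_2020 = 100.0 * trend(pharmacy)(2020) / trend(total)(2020)
print(f"Projected pharmacy share of total cigarette sales in 2020: {share_2020:.1f}%")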
Chenal, C; Legue, F; Nourgalieva, K; Brouazin-Jousseaume, V; Durel, S; Guitton, N
2000-01-01
In human radiation protection, the shape of the dose-effect curve for low-dose irradiation (LDI) is assumed to be linear, extrapolated from the clinical consequences of the Hiroshima and Nagasaki nuclear explosions. This extrapolation probably overestimates the risk below 200 mSv. In many circumstances, living species and cells can develop mechanisms of adaptation. Classical epidemiological studies will not be able to answer the question, and there is a need to assess more sensitive biological markers of the effects of LDI. Research should be focused on DNA effects (strand breaks), radioinduced expression of new genes and proteins involved in the response to oxidative stress, and DNA repair mechanisms. New experimental biomolecular techniques should be developed in parallel with more conventional ones. Such studies would permit the assessment of new biological markers of radiosensitivity, which could be of great interest in radiation protection and radio-oncology.
Nonparametric methods for drought severity estimation at ungauged sites
NASA Astrophysics Data System (ADS)
Sadri, S.; Burn, D. H.
2012-12-01
The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment, drought severities are extracted and fitted to a Pearson type III distribution, which acts as the source of the observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolation capacity.
Numerical methods in acoustics
NASA Astrophysics Data System (ADS)
Candel, S. M.
This paper presents a survey of some computational techniques applicable to acoustic wave problems. Recent advances in wave extrapolation methods, spectral methods and boundary integral methods are discussed and illustrated by specific calculations.
Xia, Yan; Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre; Maier, Andreas
2017-01-01
We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method.
NASA Astrophysics Data System (ADS)
Varandas, António J. C.
2018-04-01
Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
Rossi, Sergio; Anfodillo, Tommaso; Čufar, Katarina; Cuny, Henri E.; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gričar, Jožica; Gruber, Andreas; King, Gregory M.; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B. K.
2013-01-01
Background and Aims: Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Methods: Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1–9 years per site from 1998 to 2011. Key Results: The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. Conclusions: The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions. PMID:24201138
NASA Astrophysics Data System (ADS)
Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe
2018-01-01
In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, in order to balance computational load against the reliability of the estimations across the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of the estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to examine how the uncertainty grows as the extrapolation time of the estimation is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy.
Yoshida, Kenta; Zhao, Ping; Zhang, Lei; Abernethy, Darrell R; Rekić, Dinko; Reynolds, Kellie S; Galetin, Aleksandra; Huang, Shiew-Mei
2017-09-01
Evaluation of drug-drug interaction (DDI) risk is vital to establish benefit-risk profiles of investigational new drugs during drug development. In vitro experiments are routinely conducted as an important first step to assess the metabolism- and transporter-mediated DDI potential of investigational new drugs. Results from these experiments are interpreted, often with the aid of in vitro-in vivo extrapolation methods, to determine whether and how DDI should be evaluated clinically to provide the basis for proper DDI management strategies, including dosing recommendations, alternative therapies, or contraindications under various DDI scenarios and in different patient populations. This article provides an overview of currently available in vitro experimental systems and basic in vitro-in vivo extrapolation methodologies for metabolism- and transporter-mediated DDIs.
Monte Carlo based approach to the LS-NaI 4πβ-γ anticoincidence extrapolation and uncertainty
Fitzgerald, R.
2016-01-01
The 4πβ-γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone. PMID:27358944
Measurements of the Absorption by Auditorium Seating—A Model Study
NASA Astrophysics Data System (ADS)
BARRON, M.; COLEMAN, S.
2001-01-01
One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect on the absorption of row-row spacing as well as absorption by small numbers of seating rows was also investigated with model seats.
Application of the Weibull extrapolation to 137Cs geochronology in Tokyo Bay and Ise Bay, Japan.
Lu, Xueqiang
2004-01-01
Considerable doubt surrounds the nature of the processes by which 137Cs is deposited in marine sediments, leading to a situation where 137Cs geochronology cannot always be applied reliably. Based on extrapolation with the Weibull distribution, the maximum concentration of 137Cs derived from asymptotic values of the cumulative specific inventory was used to re-establish the 137Cs geochronology, instead of the original 137Cs profiles. The corresponding dating results for cores in Tokyo Bay and Ise Bay, Japan, obtained by means of this new method, are in much closer agreement with those calculated from the 210Pb method than are those of the previous approach.
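A minimal sketch of the extrapolation step described above, under the assumption that the cumulative specific 137Cs inventory down a core can be fitted by a Weibull-type saturation curve whose asymptote gives the maximum (total) inventory. The depth grid, inventory values and starting parameters are all hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def weibull_cumulative(z, a, lam, k):
    """Weibull-type saturation curve; the asymptote `a` is the extrapolated maximum inventory."""
    return a * (1.0 - np.exp(-(z / lam) ** k))

# Hypothetical cumulative specific inventory (arbitrary units) versus core depth (cm).
depth = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
inventory = np.array([0.12, 0.31, 0.52, 0.70, 0.83, 0.91, 0.95, 0.97])

(a, lam, k), _ = curve_fit(weibull_cumulative, depth, inventory, p0=(1.0, 6.0, 1.5))
print(f"Extrapolated asymptotic inventory: {a:.3f} (same units as the data)")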
Functionalized anatomical models for EM-neuron Interaction modeling
NASA Astrophysics Data System (ADS)
Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang
2016-06-01
The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, are described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help clarify a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.
NASA Astrophysics Data System (ADS)
Chikvashvili, Ioseb
2011-10-01
In the proposed concept, two ion beams are directed coaxially in the same direction but with different velocities (the centre-of-mass collision energy should be sufficient for fusion); a relativistic electron beam is directed oppositely, providing only partial compensation of the positive space charge and allowing the combined beam to pinch; and a longitudinal electric field is applied to counteract the alignment of the velocities of the reacting particles and to compensate the energy losses of the electrons via bremsstrahlung. On the basis of this concept, different types of reactor design can be realized: linear and cyclic. In the simplest embodiment, the cyclic reactor may include a betatron-type device (a circular store of externally injected particles - an induction accelerator), a pulsed high-current relativistic electron injector, a pulsed high-current injector for the slower ions, a pulsed high-current injector for the faster ions, and a reaction-product extractor. Using present-day technologies and materials (or a reasonable extrapolation of those) it is possible to reach, for induction linear injectors (ions and electrons), currents of thousands of amperes at repetition rates up to 10 Hz, and the same for high-current betatrons (FFAG, Stellatron, etc.). It is therefore possible to build a fusion reactor using the proposed method today.
[Comparison of red edge parameters of winter wheat canopy under late frost stress].
Wu, Yong-feng; Hu, Xin; Lü, Guo-hua; Ren, De-chao; Jiang, Wei-guo; Song, Ji-qing
2014-08-01
In the present study, late frost experiments were implemented under a range of subfreezing temperatures (-1 to -9 degrees C) using a field movable climate chamber (FMCC) and a cold climate chamber, respectively. Based on the spectra of the winter wheat canopy measured at noon on the first day after the frost experiments, the red edge parameters REP, Dr, SDr, Dr(min), Dr/Dr(min) and Dr/SDr were extracted using the maximum first derivative spectrum method (FD), linear four-point interpolation method (FPI), polynomial fitting method (POLY), inverted Gaussian fitting method (IG) and linear extrapolation technique (LE), respectively. The capacity of the red edge parameters to detect late frost stress was evaluated in terms of earliness, sensitivity and stability through correlation analysis, linear regression modeling and fluctuation analysis. The results indicate that, except for REP calculated from the FPI and IG methods in Experiment 1, REP from the other methods was correlated with frost temperatures (P < 0.05). Among these, the significance levels (P) of the POLY and LE methods all reached 0.01. Except for the POLY method in Experiment 2, Dr/SDr from the other methods was significantly correlated with frost temperatures (P < 0.01). REP showed a trend to shift towards shorter wavelengths with decreasing temperatures; the lower the temperature, the more obvious the trend. Of all the REP values, REP calculated by the LE method had the highest correlation with frost temperatures, which indicates that the LE method is the best for REP extraction. In Experiments 1 and 2, only Dr(min) and Dr/Dr(min) calculated by the FD method simultaneously met the requirements for earliness (their correlations with frost temperatures reached a significance level of P < 0.01), sensitivity (the absolute value of the slope of the fluctuation coefficient is greater than 2.0) and stability (their correlations with frost temperatures always keep a consistent direction). Dr/SDr calculated from the FD and IG methods always had a low sensitivity in Experiment 2. In Experiment 1, the sensitivity of Dr/SDr from FD was moderate and from IG was high. REP calculated from the LE method had the lowest sensitivity in the two experiments. Overall, Dr(min) and Dr/Dr(min) calculated by the FD method have the strongest detection capacity for frost temperature, which will be helpful for research on early diagnosis of late frost injury to winter wheat.
Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Y.; Maier, A.; Berger, M.
2015-04-15
Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic-based extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear to be ad hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in a position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on nontruncated data, even in the presence of severe truncation, compared to a rRMSE of 8.0% when applying a state-of-the-art heuristic extrapolation technique. Conclusions: The method we proposed in this paper leads to a major improvement in image quality for 3D C-arm based VOI imaging. It involves no additional radiation when using fluoroscopic images that are acquired during the patient isocentering process. The model estimation can be readily integrated into the existing interventional workflow without additional hardware.
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = A(Ω̇)^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
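A minimal sketch of the graphical inverse-rate technique described above for the common case α = 2: fit a straight line to the inverse precursor rate versus time and extrapolate it to the time axis, whose intercept gives the forecast failure/eruption time. The precursor rates below are synthetic.

import numpy as np

# Synthetic accelerating precursor rates (e.g., RSAM counts per day).
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])         # days
rate = np.array([5.0, 6.7, 10.0, 16.5, 33.0, 95.0])   # arbitrary units / day

inv_rate = 1.0 / rate                                  # linear in time when alpha = 2
slope, intercept = np.polyfit(t, inv_rate, 1)
t_failure = -intercept / slope                         # inverse rate extrapolated to zero
print(f"Forecast eruption onset: day {t_failure:.1f}")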
NASA Astrophysics Data System (ADS)
Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno
2018-04-01
The use of phased arrays is growing in the non-destructive testing industry and the trend is towards large 2D arrays, but due to limitations, it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme `beyond spatial aliasing' to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) in only a few transmit/recording channels. This allows for transmission and recording with all elements, in a shorter acquisition time and with less channels. On the data processing side, this blended data is deblended (separated) by transforming it to a different domain and applying an iterative filtering and thresholding. Two different filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. The wavefield extrapolation filtering proves to outperform f-k filtering. The wavefield extrapolation method can deal with groups of up to 24 receivers, in a phased array of 48 × 48 elements.
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. driven by a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows both the dead-time value and the real detector counts to be obtained from the measured detector counts.
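For context, the sketch below shows the conventional analytical dead-time correction referred to above for the constant-rate case (non-paralyzable model); the backward extrapolation method of the paper, which handles sharply time-varying count rates, is not reproduced here. The count rate and dead time are illustrative.

def deadtime_correct_nonparalyzable(measured_rate_cps, dead_time_s):
    """Standard non-paralyzable correction: true_rate = measured / (1 - measured * tau).
    Valid only when the count rate is essentially constant over the counting interval."""
    return measured_rate_cps / (1.0 - measured_rate_cps * dead_time_s)

# Illustrative: 2e5 counts/s measured with a 1-microsecond dead time -> ~2.5e5 counts/s true.
print(f"{deadtime_correct_nonparalyzable(2.0e5, 1.0e-6):.0f} counts/s")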
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because video is used in various applications such as video-telephony, video-conferencing, TV, DVD, internet video streaming, video games, etc. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods operate on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames are compared using both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM, than the Block Matching algorithm.
Computational approach for deriving cancer progression roadmaps from static sample data
Yao, Jin; Yang, Le; Chen, Runpu; Nowak, Norma J.
2017-01-01
As with any biological process, cancer development is inherently dynamic. While major efforts continue to catalog the genomic events associated with human cancer, it remains difficult to interpret and extrapolate the accumulating data to provide insights into the dynamic aspects of the disease. Here, we present a computational strategy that enables the construction of a cancer progression model using static tumor sample data. The developed approach overcame many technical limitations of existing methods. Application of the approach to breast cancer data revealed a linear, branching model with two distinct trajectories for malignant progression. The validity of the constructed model was demonstrated in 27 independent breast cancer data sets, and through visualization of the data in the context of disease progression we were able to identify a number of potentially key molecular events in the advance of breast cancer to malignancy. PMID:28108658
Space radiation incident on SATS missions
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1973-01-01
A special orbital radiation study was conducted in order to evaluate mission encountered energetic particle fluxes. This information is to be supplied to the project subsystem engineers for their guidance in designing flight hardware to withstand the expected radiation levels. Flux calculations were performed for a set of 20 nominal trajectories placed at several altitudes and inclinations. Temporal variations in the ambient electron environment were considered and partially accounted for. Magnetic field calculations were performed with a current field model, extrapolated to the tentative SATS launch epoch with linear time terms. Orbital flux integrations were performed with the latest proton and electron environment models, using new computational methods. The results are presented in graphical and tabular form. Estimates of energetic solar proton fluxes are given for a one year mission at selected integral energies ranging from 10 to 100 MeV, calculated for a year of maximum solar activity during the next solar cycle.
Density determination of nail polishes and paint chips using magnetic levitation
NASA Astrophysics Data System (ADS)
Huang, Peggy P.
Trace evidence is often small, easily overlooked, and difficult to analyze. This study describes a nondestructive method to separate and accurately determine the density of trace evidence samples, specifically nail polishes and paint chips, using magnetic levitation (MagLev). By determining the levitation height of each sample in the MagLev device, the density of the sample is back-extrapolated using a linear regression line built from standard density beads. The results show that MagLev distinguishes among eight clear nail polishes, including samples from the same manufacturer; separates select colored nail polishes from the same manufacturer; can determine the density range of household paint chips; and shows limited levitation for unknown paint chips. MagLev provides a simple, affordable, and nondestructive means of determining density. The addition of co-solutes to the paramagnetic solution to expand the density range may result in greater discriminatory power and separation and lead to further applications of this technique.
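A minimal sketch of the back-extrapolation step described above: fit a straight line to the levitation heights of density-standard beads and use it to convert a sample's levitation height into a density estimate. The bead heights and densities below are hypothetical.

import numpy as np

# Hypothetical calibration beads: levitation height (mm) and known density (g/cm^3).
heights_mm = np.array([5.0, 12.5, 20.0, 27.5, 35.0])
densities = np.array([1.02, 1.06, 1.10, 1.14, 1.18])

# Density as a linear function of levitation height (standard-bead regression line).
slope, intercept = np.polyfit(heights_mm, densities, 1)

def sample_density(height_mm):
    """Back-extrapolate a sample's density from its measured levitation height."""
    return slope * height_mm + intercept

print(f"Sample levitating at 16.3 mm -> density ~ {sample_density(16.3):.3f} g/cm^3")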
High-field penning-malmberg trap: confinement properties and use in positron accumulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartley, J.H.
1997-09-01
This dissertation reports on the development of the 60 kG cryogenic positron trap at Lawrence Livermore National Laboratory, and compares the trap's confinement properties with other nonneutral plasma devices. The device is designed for the accumulation of up to 2×10^9 positrons from a linear-accelerator source. This positron plasma could then be used in Bhabha scattering experiments. Initial efforts at time-of-flight accumulation of positrons from the accelerator show rapid (~100 ms) deconfinement, inconsistent with the long electron lifetimes. Several possible deconfinement mechanisms have been explored, including annihilation on residual gas, injection heating, rf noise from the accelerator, magnetic field curvature, and stray fields. Detailed studies of electron confinement demonstrate that the empirical scaling law used to design the trap cannot be extrapolated into the parameter regime of this device. Several possible methods for overcoming these limitations are presented.
NASA Astrophysics Data System (ADS)
Mihálka, Zsuzsanna É.; Surján, Péter R.
2017-12-01
The method of analytic continuation is applied to estimate eigenvalues of linear operators from finite order results of perturbation theory, even in cases when the latter is divergent. Given a finite number of terms E^(k), k = 1, 2, ..., M, resulting from a Rayleigh-Schrödinger perturbation calculation, scaling these numbers by μ^k (μ being the perturbation parameter) we form the sum E(μ) = Σ_k μ^k E^(k) for small μ values for which the finite series is convergent to a certain numerical accuracy. Extrapolating the function E(μ) to μ = 1 yields an estimation of the exact solution of the problem. For divergent series, this procedure may serve as a resummation tool provided the perturbation problem has a nonzero radius of convergence. As illustrations, we treat the anharmonic (quartic) oscillator and an example from the many-electron correlation problem.
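A minimal sketch of the scaled-sum idea above: evaluate E(μ) from the finite perturbation series at small μ where the truncated sum is well behaved, fit a low-order rational model (a simple stand-in for analytic continuation), and evaluate it at μ = 1. The perturbation coefficients, the μ grid and the [2/1] rational form are all illustrative choices, not the authors' procedure.

import numpy as np

# Synthetic Rayleigh-Schrodinger coefficients E^(k), k = 1..M (alternating, slowly growing).
coeffs = np.array([-1.00, 0.45, -0.30, 0.28, -0.31, 0.40])

mu = np.linspace(0.05, 0.30, 12)                      # small couplings where the sum behaves
k = np.arange(1, len(coeffs) + 1)
E_mu = (coeffs[None, :] * mu[:, None] ** k[None, :]).sum(axis=1)

# Fit E(mu) ~ (a0 + a1*mu + a2*mu^2) / (1 + b1*mu) by linear least squares,
# then continue the fitted model to mu = 1.
A = np.column_stack([np.ones_like(mu), mu, mu**2, -mu * E_mu])
a0, a1, a2, b1 = np.linalg.lstsq(A, E_mu, rcond=None)[0]
print(f"Resummed estimate at mu = 1: {(a0 + a1 + a2) / (1.0 + b1):.4f}")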
Magnetic field restructuring associated with two successive solar eruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Rui; Liu, Ying D.; Yang, Zhongwei
2014-08-20
We examine two successive flare eruptions (X5.4 and X1.3) on 2012 March 7 in the NOAA active region 11429 and investigate the magnetic field reconfiguration associated with the two eruptions. Using an advanced non-linear force-free field extrapolation method based on the SDO/HMI vector magnetograms, we obtain a stepwise decrease in the magnetic free energy during the eruptions, which is roughly 20%-30% of the energy of the pre-flare phase. We also calculate the magnetic helicity and suggest that the changes of the sign of the helicity injection rate might be associated with the eruptions. Through the investigation of the magnetic field evolution, we find that the appearance of the 'implosion' phenomenon has a strong relationship with the occurrence of the first X-class flare. Meanwhile, the magnetic field changes of the successive eruptions with implosion and without implosion were well observed.
Probabilistic risk assessment of exposure to leucomalachite green residues from fish products.
Chu, Yung-Lin; Chimeddulam, Dalaijamts; Sheen, Lee-Yan; Wu, Kuen-Yuh
2013-12-01
To assess the potential risk of human exposure to carcinogenic leucomalachite green (LMG) due to fish consumption, a probabilistic risk assessment was conducted for adolescent, adult and senior adult consumers in Taiwan. The residues of LMG in fish, with a mean concentration of 13.378±20.56 μg/kg (BFDA, 2009), were converted into doses, considering the fish intake reported for the three consumer groups by NAHSIT (1993-1996) and the body weight of an average individual of each group. The lifetime average and high 95th percentile dietary intakes of LMG from fish consumption for Taiwanese consumers were estimated at up to 0.0135 and 0.0451 μg/kg-bw/day, respectively. A human equivalent dose (HED) of 2.875 mg/kg-bw/day, obtained from a lower-bound benchmark dose (BMDL10) in mice by interspecies extrapolation, was linearly extrapolated to an oral cancer slope factor (CSF) of 0.035 (mg/kg-bw/day)^-1 for humans. Although the assumptions and methods are different, the results for lifetime cancer risk, varying from 3×10^-7 to 1.6×10^-6, were comparable to the margins of exposure (MOEs), which varied from 410,000 to 4,800,000. In conclusion, Taiwanese fish consumers at the 95th percentile LADD of LMG have a greater risk of liver cancer, and risk management action is needed in Taiwan.
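A minimal sketch of the arithmetic implied above: the cancer slope factor is obtained by linear low-dose extrapolation (10% extra risk divided by the human-equivalent BMDL10), and the lifetime risk is the slope factor times the lifetime average daily dose. Only the values quoted in the abstract are used; the group-specific details behind the reported 3×10^-7 to 1.6×10^-6 range are not reproduced.

bmdl10_hed = 2.875              # human-equivalent BMDL10, mg/kg-bw/day (from the abstract)
csf = 0.10 / bmdl10_hed         # linear extrapolation: ~0.035 (mg/kg-bw/day)^-1

ladd_mean = 0.0135e-3           # mean lifetime average daily dose, mg/kg-bw/day
ladd_p95 = 0.0451e-3            # 95th percentile lifetime average daily dose, mg/kg-bw/day

for label, ladd in [("mean", ladd_mean), ("95th percentile", ladd_p95)]:
    print(f"{label}: lifetime cancer risk ~ {csf * ladd:.1e}")   # ~5e-7 and ~1.6e-6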
Local lymph node assay (LLNA) for detection of sensitization capacity of chemicals.
Gerberick, G Frank; Ryan, Cindy A; Dearman, Rebecca J; Kimber, Ian
2007-01-01
The local lymph node assay (LLNA) is a murine model developed to evaluate the skin sensitization potential of chemicals. The LLNA is an alternative approach to traditional guinea pig methods and in comparison provides important animal welfare benefits. The assay relies on measurement of events induced during the induction phase of skin sensitization, specifically lymphocyte proliferation in the draining lymph nodes, which is a hallmark of a skin sensitization response. Since its introduction the LLNA has been the subject of extensive evaluation on a national and international scale, and has been successfully validated and incorporated worldwide into regulatory guidelines. Experience gained in recent years has demonstrated that adherence to published procedures and guidelines for the LLNA (e.g., with respect to dose and vehicle selection) is critical for the successful conduct of the assay and eventual interpretation of the data. In addition to providing a robust method for skin sensitization hazard identification, the LLNA has proven very useful in assessing the skin sensitizing potency of test chemicals, and this has provided invaluable information to risk assessors. The primary method for comparing the relative potency of chemical sensitizers is to use linear interpolation to estimate the concentration of chemical required to induce a stimulation index of three relative to concurrent vehicle-treated controls (EC3). In certain situations where less than optimal dose-response data are available, a log-linear extrapolation method can be used to estimate an EC3 value, which can significantly reduce the need for repeat testing of chemicals. The LLNA, when conducted according to published guidelines, provides a robust method for skin sensitization testing that not only provides reliable hazard identification information but also data necessary for effective risk assessment and risk management.
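A small sketch of the EC3 interpolation described above, with hypothetical concentration (%) and stimulation index (SI) data; the bracketing-pair linear interpolation shown here is the standard form of the calculation, while the numbers are illustrative only:

def ec3(concentrations, si_values, threshold=3.0):
    """Linearly interpolate the concentration giving SI = threshold.

    Assumes concentrations are in ascending order and that a pair of
    adjacent test concentrations brackets the threshold.
    """
    pairs = list(zip(concentrations, si_values))
    for (c_lo, si_lo), (c_hi, si_hi) in zip(pairs, pairs[1:]):
        if si_lo < threshold <= si_hi:
            return c_lo + (threshold - si_lo) / (si_hi - si_lo) * (c_hi - c_lo)
    return None  # no bracketing pair: a (log-linear) extrapolation would be needed instead

# Hypothetical LLNA data: tested concentrations (%) and measured stimulation indices.
print(ec3([1.0, 2.5, 5.0, 10.0], [1.2, 1.9, 3.8, 6.5]))  # EC3 falls between 2.5% and 5%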
Computer program for pulsed thermocouples with corrections for radiation effects
NASA Technical Reports Server (NTRS)
Will, H. A.
1981-01-01
A pulsed thermocouple was used for measuring gas temperatures above the melting point of common thermocouples. This was done by allowing the thermocouple to heat until it approaches its melting point and then turning on the protective cooling gas. This method required a computer to extrapolate the thermocouple data to the higher gas temperatures. A method that includes the effect of radiation in the extrapolation is described. Computations of gas temperature are provided, along with the estimate of the final thermocouple wire temperature. Results from tests on high temperature combustor research rigs are presented.
Tien, Christopher J; Winslow, James F; Hintenlang, David E
2011-01-31
In helical computed tomography (CT), reconstruction information from volumes adjacent to the clinical volume of interest (VOI) is required for proper reconstruction. Previous studies have relied upon either operator console readings or indirect extrapolation of measurements in order to determine the over-ranging length of a scan. This paper presents a methodology for the direct quantification of over-ranging dose contributions using real-time dosimetry. A Siemens SOMATOM Sensation 16 multislice helical CT scanner is used with a novel real-time "point" fiber-optic dosimeter system with 10 ms temporal resolution to measure over-ranging length, which is also expressed as dose-length product (DLP). Film was used to benchmark the exact length of over-ranging. Over-ranging length varied from 4.38 cm at a pitch of 0.5 to 6.72 cm at a pitch of 1.5, which corresponds to a DLP of 131 to 202 mGy-cm. The dose-extrapolation method of Van der Molen et al. yielded results within 3%, while the console reading method of Tzedakis et al. yielded consistently larger over-ranging lengths. From film measurements, it was determined that Tzedakis et al. overestimated over-ranging lengths by one-half of the beam collimation width. Over-ranging length measured as a function of reconstruction slice thickness produced two linear regions similar to previous publications. Over-ranging is quantified with both absolute length and DLP, which contributes about 60 mGy-cm or about 10% of DLP for a routine abdominal scan. This paper presents a direct physical measurement of over-ranging length within 10% of previous methodologies. Current uncertainties are less than 1%, in comparison with 5% in other methodologies. Clinical implementation can be simplified by using only one dosimeter if codependence with console readings is acceptable, with an uncertainty of 1.1%. This methodology will be applied to different vendors, models, and postprocessing methods--which have been shown to produce over-ranging lengths differing by 125%.
The cost of colorectal cancer according to the TNM stage.
Mar, Javier; Errasti, Jose; Soto-Gordoa, Myriam; Mar-Barrutia, Gilen; Martinez-Llorente, José Miguel; Domínguez, Severina; García-Albás, Juan José; Arrospide, Arantzazu
2017-02-01
The aim of this study was to measure the cost of treatment of colorectal cancer in the Basque public health system according to the clinical stage. We retrospectively collected demographic data, clinical data and resource use for a sample of 529 patients. For stages I to III the initial and follow-up costs were measured. The calculation of cost for stage IV combined generalized linear models to relate the cost to the duration of follow-up based on parametric survival analysis. Unit costs were obtained from the analytical accounting system of the Basque Health Service. The sample included 110 patients with stage I, 171 with stage II, 158 with stage III and 90 with stage IV colorectal cancer. The initial total cost per patient was 8,644€ for stage I, 12,675€ for stage II and 13,034€ for stage III. The main component was hospitalization cost. Mean survival for stage IV, calculated by extrapolation, was 1.27 years. Its average annual cost was 22,403€, and 24,509€ up to death. The total annual cost for colorectal cancer extrapolated to the whole Spanish health system was 623.9 million €. The economic burden of colorectal cancer is important and should be taken into account in decision-making. The combination of generalized linear models and survival analysis allows estimation of the cost of the metastatic stage. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum. The purpose of this project is to provide guidance for the Agency and the outside community on the application of the benchmark dose (BMD) approach.
NASA Astrophysics Data System (ADS)
Mittal, R.; Rao, P.; Kaur, P.
2018-01-01
Elemental evaluations in scanty powdered material have been made using energy dispersive X-ray fluorescence (EDXRF) measurements, for which formulations along with a specific procedure for sample target preparation have been developed. Fractional amount evaluation involves a series of steps: (i) collection of elemental characteristic X-ray counts in EDXRF spectra recorded with different weights of material, (ii) search for linearity between X-ray counts and material weights, (iii) calculation of elemental fractions from the linear fit, and (iv) a further linear fit of the calculated fractions against sample weight and its extrapolation to zero weight. Thus, elemental fractions at zero weight are free from material self-absorption effects for incident and emitted photons. The analytical procedure, after its verification with known synthetic samples of the macro-nutrients potassium and calcium, was used for wheat plant/soil samples obtained from a pot experiment.
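A minimal numerical sketch of steps (iii)-(iv), assuming hypothetical elemental fractions already derived from the counts-versus-weight fit; the zero-weight intercept is taken as the self-absorption-free fraction:

import numpy as np

# Hypothetical sample weights (mg) and elemental fractions calculated at each weight;
# the fractions drift downward with weight because of self-absorption.
weights = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
fractions = np.array([0.0195, 0.0188, 0.0181, 0.0175, 0.0168])

slope, intercept = np.polyfit(weights, fractions, 1)
print(f"elemental fraction extrapolated to zero weight: {intercept:.4f}")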
A consistent two-mutation model of bone cancer for two data sets of radium-injected beagles.
Bijwaard, H; Brugmans, M J P; Leenhouts, H P
2002-09-01
A two-mutation carcinogenesis model has been applied to model osteosarcoma incidence in two data sets of beagles injected with 226Ra. Taking age-specific retention into account, the following results have been obtained: (1) a consistent and well-fitting solution for all age and dose groups, (2) mutation rates that are linearly dependent on dose rate, with an exponential decrease for the second mutation at high dose rates, (3) a linear-quadratic dose-effect relationship, which indicates that care should be taken when extrapolating linearly, (4) highest cumulative incidences for injection at young adult age, and highest risks for injection doses of a few kBq kg(-1) at these ages, and (5) when scaled appropriately, the beagle model compares fairly well with a description for radium dial painters, suggesting that a consistent model description of bone cancer induction in beagles and humans may be possible.
Ficaro, E P; Fessler, J A; Rogers, W L; Schwaiger, M
1994-04-01
This study compares the ability of 241Am and 99mTc to estimate 201Tl attenuation maps while minimizing the loss in the precision of the emission data. A triple-head SPECT system with either an 241Am or 99mTc line source opposite a fan-beam collimator was used to estimate attenuation maps of the thorax of an anthropomorphic phantom. Linear attenuation values at 75 keV for 201Tl were obtained by linear extrapolation of the measured values from 241Am and 99mTc. Lung and soft-tissue estimates from both isotopes showed excellent agreement to within 3% of the measured values for 201Tl. Linear extrapolation did not yield satisfactory estimates for bone from either 241Am (+11.7%) or 99mTc (-15.3%). Patient data were used to estimate the dependence of crosstalk on patient size. Contamination from 201Tl in the transmission window was 5-6 times greater for 241Am compared to 99mTc, while the contamination in the 201Tl data in the transmission-emission detector head (head 1) was 4-5 times greater for 99mTc compared to 241Am. No contamination was detected in the 201Tl emission data of heads 2 and 3 from 241Am, whereas the 99mTc produced a small crosstalk component giving a signal-to-crosstalk ratio near 20:1. Measurements with a fillable chest phantom estimated the mean error introduced into the data from the removal of the crosstalk. Based on the measured data, 241Am is a suitable transmission source for simultaneous transmission-emission tomography for 201Tl cardiac studies.
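A one-line illustration of the extrapolation step, assuming the usual transmission photon energies (59.5 keV for 241Am, 140.5 keV for 99mTc) and hypothetical measured attenuation coefficients; evaluating the straight line through the two measurements at 75 keV is one plausible reading of the procedure:

# (energy in keV, measured linear attenuation coefficient in 1/cm) -- illustrative soft-tissue values
e_am, mu_am = 59.5, 0.206    # 241Am measurement (hypothetical value)
e_tc, mu_tc = 140.5, 0.153   # 99mTc measurement (hypothetical value)

# Straight line through the two measurements, evaluated at the 201Tl energy of 75 keV
mu_75 = mu_am + (mu_tc - mu_am) * (75.0 - e_am) / (e_tc - e_am)
print(f"extrapolated linear attenuation coefficient at 75 keV: {mu_75:.3f} 1/cm")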
Grantz, Erin; Haggard, Brian; Scott, J Thad
2018-06-12
We calculated four median datasets (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4) for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution or missing due to limitations of statistical methods biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
Larsen, Ross E.
2016-04-12
In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of the number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
NASA Astrophysics Data System (ADS)
Sun, M. L.; Peng, H. B.; Duan, B. H.; Liu, F. F.; Du, X.; Yuan, W.; Zhang, B. T.; Zhang, X. Y.; Wang, T. S.
2018-03-01
Borosilicate glass has potential application for the vitrification of high-level radioactive waste, which attracts extensive interest in studying its radiation durability. In this study, sodium borosilicate glass samples were irradiated with 4 MeV Kr17+ ions, 5 MeV Xe26+ ions and 0.3 MeV P+ ions, respectively. The hardness of the irradiated borosilicate glass samples was measured with nanoindentation in continuous stiffness mode and quasi-continuous stiffness mode, separately. The extrapolation method, mean value method, squared extrapolation method and selected point method were used to obtain the hardness of the irradiated glass, and a comparison among these four methods was conducted. The extrapolation method is suggested for analyzing the hardness of ion-irradiated glass. With increasing irradiation dose, the hardness of samples irradiated with Kr, Xe and P ions dropped and then saturated at 0.02 dpa. Moreover, both the maximum variations and the decay constants for the three kinds of ions with different energies are similar, which indicates a common behavior behind the hardness variation in glasses after irradiation. Furthermore, the hardness variation of samples irradiated with low-energy P ions, whose range is much smaller than those of the high-energy Kr and Xe ions, has the same trend as that of the Kr and Xe ions. This suggests that electronic energy loss did not play a significant role in the hardness decrease for irradiation with low-energy ions.
Sommerfeld, Thomas; Ehara, Masahiro
2015-01-21
The energy of a temporary anion can be computed by adding a stabilizing potential to the molecular Hamiltonian, increasing the stabilization until the temporary state is turned into a bound state, and then further increasing the stabilization until enough bound state energies have been collected so that these can be extrapolated back to vanishing stabilization. The lifetime can be obtained from the same data, but only if the extrapolation is done through analytic continuation of the momentum as a function of the square root of a shifted stabilizing parameter. This method is known as analytic continuation of the coupling constant, and it requires--at least in principle--that the bound-state input data are computed with a short-range stabilizing potential. In the context of molecules and ab initio packages, long-range Coulomb stabilizing potentials are, however, far more convenient and have been used in the past with some success, although the error introduced by the long-range nature of the stabilizing potential remains unknown. Here, we introduce a soft-Voronoi box potential that can serve as a short-range stabilizing potential. The difference between a Coulomb and the new stabilization is analyzed in detail for a one-dimensional model system as well as for the (2)Πu resonance of CO2(-), and in both cases, the extrapolation results are compared to independently computed resonance parameters, from complex scaling for the model, and from complex absorbing potential calculations for CO2(-). It is important to emphasize that for both the model and for CO2(-), all three sets of results have, respectively, been obtained with the same electronic structure method and basis set so that the theoretical description of the continuum can be directly compared. The new soft-Voronoi-box-based extrapolation is then used to study the influence of the size of the diffuse and valence basis sets on the computed resonance parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, J; Culberson, W; DeWerd, L
Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a 90Sr/90Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both the EGSnrc Monte Carlo user code and the Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was −1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experimental results suggest that an entrance window is not needed in order for an extrapolation chamber to provide accurate dose rate measurements for a planar ophthalmic applicator.
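A hedged sketch of the extrapolation-chamber arithmetic: fit ionization current against air-gap width and convert the slope to absorbed dose rate to water with the Bragg-Gray relation (nominal constants, an assumed electrode area, and made-up readings; correction factors are omitted):

import numpy as np

gaps = np.array([0.5, 1.0, 1.5, 2.0, 2.5]) * 1e-3            # m, air-gap widths (hypothetical)
currents = np.array([0.98, 1.95, 2.94, 3.90, 4.89]) * 1e-12  # A, measured currents (hypothetical)

dI_dz = np.polyfit(gaps, currents, 1)[0]   # A/m, slope in the small-gap limit

w_over_e = 33.97                    # J/C, mean energy per ion pair in dry air
s_water_air = 1.112                 # water/air stopping-power ratio (assumed value for 90Sr/90Y betas)
rho_air = 1.196                     # kg/m^3, air density at reference conditions
area = np.pi * (0.015 / 2) ** 2     # m^2, collecting-electrode area (assumed 15 mm diameter)

dose_rate = w_over_e * s_water_air * dI_dz / (rho_air * area)  # Gy/s, Bragg-Gray cavity relation
print(f"surface dose rate to water: {dose_rate * 1e3:.2f} mGy/s")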
Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development.
Burt, T; Yoshida, K; Lappin, G; Vuong, L; John, C; de Wildt, S N; Sugiyama, Y; Rowland, M
2016-04-01
A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in their relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications and design options demonstrate the versatility and flexibility these approaches offer to drug developers including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. All phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.
NASA Astrophysics Data System (ADS)
Svoboda, Aaron A.; Forbes, Jeffrey M.; Miyahara, Saburo
2005-11-01
A self-consistent global tidal climatology, useful for comparing and interpreting radar observations from different locations around the globe, is created from space-based Upper Atmosphere Research Satellite (UARS) horizontal wind measurements. The climatology created includes tidal structures for horizontal winds, temperature and relative density, and is constructed by fitting local (in latitude and height) UARS wind data at 95 km to a set of basis functions called Hough mode extensions (HMEs). These basis functions are numerically computed modifications to Hough modes and are globally self-consistent in wind, temperature, and density. We first demonstrate this self-consistency with a proxy data set from the Kyushu University General Circulation Model, and then use a linear weighted superposition of the HMEs obtained from monthly fits to the UARS data to extrapolate the global, multi-variable tidal structure. A brief explanation of the HMEs’ origin is provided as well as information about a public website that has been set up to make the full extrapolated data sets available.
NASA Astrophysics Data System (ADS)
Chicrala, André; Dallaqua, Renato Sergio; Antunes Vieira, Luis Eduardo; Dal Lago, Alisson; Rodríguez Gómez, Jenny Marcela; Palacios, Judith; Coelho Stekel, Tardelli Ronan; Rezende Costa, Joaquim Eduardo; da Silva Rockenbach, Marlos
2017-10-01
The behavior of Active Regions (ARs) is directly related to the occurrence of some remarkable phenomena in the Sun such as solar flares or coronal mass ejections (CME). In this sense, changes in the magnetic field of the region can be used to uncover other relevant features like the evolution of the ARs magnetic structure and the plasma flow related to it. In this work we describe the evolution of the magnetic structure of the active region AR NOAA12443 observed from 2015/10/30 to 2015/11/10, which may be associated with several X-ray flares of classes C and M. The analysis is based on observations of the solar surface and atmosphere provided by the HMI and AIA instruments on board the SDO spacecraft. In order to investigate the magnetic energy buildup and release of the ARs, we shall employ potential and linear force-free extrapolations based on the solar surface magnetic field distribution and the photospheric velocity fields.
Ground-state properties of 4He and 16O extrapolated from lattice QCD with pionless EFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Contessi, L.; Lovato, A.; Pederiva, F.
2017-07-26
Here, we extend the prediction range of Pionless Effective Field Theory with an analysis of the ground state of 16O in leading order. To renormalize the theory, we use as input both experimental data and lattice QCD predictions of nuclear observables, which probe the sensitivity of nuclei to increased quark masses. The nuclear many-body Schrödinger equation is solved with the Auxiliary Field Diffusion Monte Carlo method. For the first time in a nuclear quantum Monte Carlo calculation, a linear optimization procedure, which allows us to devise an accurate trial wave function with a large number of variational parameters, is adopted. The method yields a binding energy of 4He which is in good agreement with experiment at physical pion mass and with lattice calculations at larger pion masses. At leading order we do not find any evidence of a 16O state which is stable against breakup into four 4He, although higher-order terms could bind 16O.
Computation of Steady and Unsteady Laminar Flames: Theory
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai
1999-01-01
In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
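As a rough illustration of the extrapolation idea mentioned above (classical Richardson extrapolation of a first-order time integrator to gain one order of accuracy; a generic sketch, not the authors' flame code):

import numpy as np

def euler_step(f, y, t, h):
    """One explicit Euler step (first-order accurate)."""
    return y + h * f(t, y)

def richardson_step(f, y, t, h):
    """Combine one step of size h with two steps of size h/2 to cancel the leading error term."""
    coarse = euler_step(f, y, t, h)
    half = euler_step(f, y, t, h / 2)
    fine = euler_step(f, half, t + h / 2, h / 2)
    return 2 * fine - coarse  # second-order accurate combination

f = lambda t, y: -y            # test problem y' = -y, y(0) = 1
y_euler, y_rich, t, h = 1.0, 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    y_euler = euler_step(f, y_euler, t, h)
    y_rich = richardson_step(f, y_rich, t, h)
    t += h
print(f"exact {np.exp(-1):.6f}, Euler {y_euler:.6f}, Richardson-extrapolated {y_rich:.6f}")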
Xie, Hai-Yang; Liu, Qian; Li, Jia-Hao; Fan, Liu-Yin; Cao, Cheng-Xi
2013-02-21
A novel moving redox reaction boundary (MRRB) model was developed for studying the electrophoretic behavior of analytes involved in redox reactions, on the principle of the moving reaction boundary (MRB). The traditional potassium permanganate method was used to create the boundary model in agarose gel electrophoresis because of the rapid reaction rate associated with MnO(4)(-) ions and Fe(2+) ions. An MRB velocity equation was proposed to describe the general functional relationship between the velocity of the moving redox reaction boundary (V(MRRB)) and the concentration of reactant, and can be extended to similar MRB techniques. Parameters affecting the redox reaction boundary were investigated in detail. Under the selected conditions, a good linear relationship between boundary movement distance and time was obtained. The potential application of MRRB in electromigration redox reaction titration was demonstrated at two different concentration levels. The precision of the V(MRRB) was studied and the relative standard deviations were below 8.1%, illustrating the good repeatability achieved in this experiment. The proposed MRRB model enriches the MRB theory and also provides a feasible means of manual control of the redox reaction process in electrophoretic analysis.
NASA Astrophysics Data System (ADS)
De Niel, J.; Demarée, G.; Willems, P.
2017-10-01
Governments, policy makers, and water managers are pushed by recent socioeconomic developments, such as population growth and increased urbanization including the occupation of floodplains, to impose very stringent regulations on the design of hydrological structures. These structures need to withstand storms with return periods typically ranging between 1,250 and 10,000 years. Such quantification involves extrapolation of systematically measured instrumental data, possibly complemented by quantitative and/or qualitative historical data and paleoflood data. The accuracy of these extrapolations is, however, highly unclear in practice. In order to evaluate extreme river peak flow extrapolation and its accuracy, we studied historical and instrumental data of the past 500 years along the Meuse River. We moreover propose an alternative method for the estimation of the extreme value distribution of river peak flows, based on weather types derived from sea level pressure reconstructions. This approach results in a more accurate estimation of the tail of the distribution, where current methods underestimate the design levels related to extreme high return periods. The design flood for a 1,250-year return period is estimated at 4,800 m3 s-1 with the proposed method, compared with 3,450 and 3,900 m3 s-1 for a traditional method and a previous study.
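A hedged sketch of the extreme-value extrapolation step: fit a GEV distribution to synthetic annual peak flows and read off the 1,250-year return level (synthetic data, not the Meuse record or the authors' weather-type method):

import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum discharges (m^3/s), a stand-in for an instrumental record.
annual_peaks = genextreme.rvs(c=-0.1, loc=1500.0, scale=400.0, size=120, random_state=12345)

shape, loc, scale = genextreme.fit(annual_peaks)
q1250 = genextreme.isf(1.0 / 1250.0, shape, loc=loc, scale=scale)  # 1,250-year return level
print(f"estimated 1,250-year design flood: {q1250:.0f} m^3/s")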
Cao, Le; Wei, Bing
2014-08-25
A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the RCS (Radar Cross Section) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, available extrapolation methods have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and it can be used in the fast calculation of scattering and radiation of targets over a layered half space.
A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.
Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei
2017-05-18
The relationships between the fatigue crack growth rate (da/dN) and stress intensity factor range (ΔK) are not always linear, even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic algorithm optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (K* approach). The results show that the predictions of the MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM based algorithm shows overall the best agreement with the experimental data out of the three MLAs, owing to its global optimization and extrapolation ability.
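For illustration, a minimal extreme learning machine regression in the spirit described above: a random hidden layer followed by a least-squares solve for the output weights (a generic sketch on toy Paris-law-like data, not the paper's tuned model):

import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Random hidden layer + least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy data: log10(da/dN) versus log10(dK) with a mild nonlinearity (illustrative only).
log_dK = np.linspace(0.8, 1.8, 40).reshape(-1, 1)
log_dadN = -8.0 + 3.0 * log_dK + 0.2 * np.sin(4 * log_dK)
W, b, beta = elm_fit(log_dK, log_dadN)
print(elm_predict(np.array([[1.5]]), W, b, beta))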
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André
2018-03-01
There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV — the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe the calm baryon significantly removes the excited state contamination from the two-nucleon correlation function to as early a time as the single-nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited state contamination in the region where the single calm baryon displays no excited state contamination.
Off disk-center potential field calculations using vector magnetograms
NASA Technical Reports Server (NTRS)
Venkatakrishnan, P.; Gary, G. Allen
1989-01-01
A potential field calculation for off disk-center vector magnetograms that uses all the three components of the measured field is investigated. There is neither any need for interpolation of grid points between the image plane and the heliographic plane nor for an extension or a truncation to a heliographic rectangle. Hence, the method provides the maximum information content from the photospheric field as well as the most consistent potential field independent of the viewing angle. The introduction of polarimetric noise produces a less tolerant extrapolation procedure than using the line-of-sight extrapolation, but the resultant standard deviation is still small enough for the practical utility of this method.
Incorporating contact angles in the surface tension force with the ACES interface curvature scheme
NASA Astrophysics Data System (ADS)
Owkes, Mark
2017-11-01
In simulations of gas-liquid flows interacting with solid boundaries, the contact line dynamics affect the interface motion and flow field through the surface tension force. The surface tension force is directly proportional to the interface curvature, and the problem of accurately imposing a contact angle must be incorporated into the interface curvature calculation. Many commonly used algorithms to compute interface curvatures (e.g., the height function method) require extrapolating the interface, with a defined contact angle, into the solid to allow for the calculation of a curvature near a wall. Extrapolating can be an ill-posed problem, especially in three dimensions or when multiple contact lines are near each other. We have developed an accurate methodology to compute interface curvatures that allows contact angles to be easily incorporated while avoiding extrapolation and the associated challenges. The method, known as Adjustable Curvature Evaluation Scale (ACES), leverages a least squares fit of a polynomial to points computed on the volume-of-fluid (VOF) representation of the gas-liquid interface. The method is tested by simulating canonical test cases and then applied to simulate the injection and motion of water droplets in a channel (relevant to PEM fuel cells).
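As a rough two-dimensional analogue of the least-squares idea (not the full 3D ACES scheme), one can fit a local quadratic to interface points expressed in wall-aligned coordinates and evaluate its curvature analytically; the points below are hypothetical samples of a circular interface of radius 0.5, so the expected curvature is 2:

import numpy as np

def curvature_from_points(x, y):
    """Fit y = c0 + c1*x + c2*x**2 by least squares and return the curvature at x = 0."""
    c2, c1, c0 = np.polyfit(x, y, 2)
    return 2.0 * c2 / (1.0 + c1 ** 2) ** 1.5

# Hypothetical interface points sampled from a circle of radius 0.5 touching the origin.
theta = np.linspace(-0.6, 0.6, 9)
x, y = 0.5 * np.sin(theta), 0.5 * (1.0 - np.cos(theta))
print(f"fitted curvature: {curvature_from_points(x, y):.3f}")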
How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?
NASA Astrophysics Data System (ADS)
Lin, Zesen; Fang, Guanwen; Kong, Xu
2016-12-01
Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities (L_IR) of galaxies. By utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ~ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolations from the available longest wavelength PACS band based on the Chary & Elbaz template can be a good estimator. Moreover, if the PACS measurement is unavailable, extrapolations from SPIRE observations but based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The emission in the rest-frame 10-100 μm range of the IR SED can be well described by all three templates, but only the Dale & Helou template shows a nearly unbiased estimate of the emission of the rest-frame submillimeter part.
Uncertainty factors in screening ecological risk assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, L.D.; Taggart, M.
2000-06-01
The hazard quotient (HQ) method is commonly used in screening ecological risk assessments (ERAs) to estimate risk to wildlife at contaminated sites. Many ERAs use uncertainty factors (UFs) in the HQ calculation to incorporate uncertainty associated with predicting wildlife responses to contaminant exposure using laboratory toxicity data. The overall objective was to evaluate the current UF methodology as applied to screening ERAs in California, USA. Specific objectives included characterizing current UF methodology, evaluating the degree of conservatism in UFs as applied, and identifying limitations to the current approach. Twenty-four of 29 evaluated ERAs used the HQ approach: 23 of these used UFs in the HQ calculation. All 24 made interspecies extrapolations, and 21 compensated for its uncertainty, most using allometric adjustments and some using RFs. Most also incorporated uncertainty for same-species extrapolations. Twenty-one ERAs used UFs extrapolating from lowest observed adverse effect level (LOAEL) to no observed adverse effect level (NOAEL), and 18 used UFs extrapolating from subchronic to chronic exposure. Values and application of all UF types were inconsistent. Maximum cumulative UFs ranged from 10 to 3,000. Results suggest UF methodology is widely used but inconsistently applied and is not uniformly conservative relative to UFs recommended in regulatory guidelines and academic literature. The method is limited by lack of consensus among scientists, regulators, and practitioners about magnitudes, types, and conceptual underpinnings of the UF methodology.
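A minimal sketch of the hazard-quotient arithmetic with uncertainty factors described above; the exposure dose, LOAEL and individual UF values are hypothetical and only illustrative of the ranges reported:

# Hypothetical screening-level inputs for a wildlife receptor.
exposure_dose = 0.8            # mg/kg-bw/day, estimated site exposure
loael = 15.0                   # mg/kg-bw/day, laboratory LOAEL in a surrogate species

# Illustrative uncertainty factors of the kinds discussed above.
uf_interspecies = 10.0         # surrogate species -> receptor of concern
uf_loael_to_noael = 10.0       # LOAEL -> NOAEL
uf_subchronic_to_chronic = 10.0

trv = loael / (uf_interspecies * uf_loael_to_noael * uf_subchronic_to_chronic)
hq = exposure_dose / trv
print(f"TRV = {trv:.3g} mg/kg-bw/day, HQ = {hq:.1f}")  # HQ > 1 flags potential risk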
Predicting discovery rates of genomic features.
Gravel, Simon
2014-06-01
Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing 5% of a population is sufficient to predict the number of genetic variants in the entire population within 6% of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require ∼15% of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and subsampled 1000 Genomes Project data. Extrapolating based on the National Heart, Lung, and Blood Institute Exome Sequencing Project data, we predict that 7.2% of sites in the capture region would be variable in a sample of 50,000 African Americans and 8.8% in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types. Copyright © 2014 by the Genetics Society of America.
Varandas, A J C
2009-02-01
The potential energy surface for the C(20)-He interaction is extrapolated for three representative cuts to the complete basis set limit using second-order Møller-Plesset perturbation calculations with correlation consistent basis sets up to the doubly augmented variety. The results both with and without counterpoise correction show consistency with each other, supporting that extrapolation without such a correction provides a reliable scheme to elude the basis-set-superposition error. Converged attributes are obtained for the C(20)-He interaction, which are used to predict the fullerene dimer ones. Time requirements show that the method can be drastically more economical than the counterpoise procedure and even competitive with Kohn-Sham density functional theory for the title system.
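A short sketch of a standard two-point complete-basis-set extrapolation of correlation energies (the common inverse-cube form in the cardinal number; the paper's exact MP2/CBS scheme may differ, and the energies below are hypothetical):

def cbs_two_point(e_x, x, e_y, y):
    """Extrapolate correlation energies assuming E(X) = E_CBS + A * X**-3."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Hypothetical MP2 correlation energies (hartree) for cardinal numbers X = 3 and X = 4.
print(f"E_CBS = {cbs_two_point(-0.3450, 3, -0.3572, 4):.4f} hartree")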
The design of L1-norm visco-acoustic wavefield extrapolators
NASA Astrophysics Data System (ADS)
Salam, Syed Abdul; Mousa, Wail A.
2018-04-01
Explicit depth frequency-space (f - x) prestack imaging is an attractive mechanism for seismic imaging. To date, the main focus of this method has been data migration assuming an acoustic medium, and very little work has assumed visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f - x extrapolators. To show the accuracy and compensation of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation results in practically stable and improved resolution of the images.
Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors
Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas
2017-01-01
Methods for estimating a car’s length are presented in this paper, as well as the results achieved by using a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors, which were placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold based method, a threshold method based on moving average and standard deviation, a two-extreme-peak detection method and a method based on the amplitude and time normalization using linear extrapolation (or interpolation). The results were achieved by analyzing changes in the magnitude and in the absolute z-component of the magnetic field as well. The tests, which were performed in four different Earth directions, show differences in the values of estimated lengths. The magnitude-based results in the case when cars drove from the South to the North direction were even up to 1.2 m higher than the other results achieved using the threshold methods. Smaller differences in lengths were observed when the distances were measured between two extreme peaks in the car magnetic signatures. The results were summarized in tables and the errors of estimated lengths were presented. The maximal errors, related to real lengths, were up to 22%.
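A simplified sketch of the speed-and-length arithmetic behind such two-sensor systems, with hypothetical sensor spacing and detection instants; the paper's thresholding and signature-normalization steps are omitted:

sensor_spacing = 1.0        # m between the two AMR sensors (assumed value)

# Hypothetical detection instants (s): signature start/end at sensor 1 and start at sensor 2,
# e.g. obtained by thresholding or cross-correlating the two magnetic signatures.
t_start_1, t_end_1 = 2.000, 2.225
t_start_2 = 2.050

speed = sensor_spacing / (t_start_2 - t_start_1)   # m/s from the inter-sensor time lag
length = speed * (t_end_1 - t_start_1)             # occupancy time at one sensor times speed
print(f"speed = {speed:.1f} m/s, estimated length = {length:.2f} m")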
How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis.
Bojke, Laura; Manca, Andrea; Asaria, Miqdad; Mahon, Ronan; Ren, Shijie; Palmer, Stephen
2017-08-01
Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies the follow-up duration of which is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation-to project short-term evidence over a longer time horizon-is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice and the degree of uncertainty surrounding the validity of any assumptions made are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predict long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.
Caries assessment: establishing mathematical link of clinical and benchtop method
NASA Astrophysics Data System (ADS)
Amaechi, Bennett T.
2009-02-01
It is well established that the development of new technologies for early detection and quantitative monitoring of dental caries at its early stage could provide health and economic benefits ranging from timely preventive interventions to reduction of the time required for clinical trials of anti-caries agents. However, the new technologies currently used in clinical settings cannot assess and monitor caries using the actual mineral concentration within the lesion, while laboratory-based microcomputed tomography (MCT) has been shown to possess this capability. Thus we envision that establishing mathematical equations relating the measurements of each of the clinical technologies to those of MCT will enable the mineral concentration of lesions detected and assessed in clinical practice to be extrapolated from the equation, and this will facilitate preventive care in dentistry and lower treatment cost. We utilized MCT and the two prominent clinical caries assessment devices (Quantitative Light-induced Fluorescence [QLF] and Diagnodent) to longitudinally monitor the development of caries in a continuous-flow mixed-organisms biofilm model (artificial mouth), and then used the collected data to establish mathematical equations relating the measurements of each of the clinical technologies to those of MCT. A linear correlation was observed between the measurements of MCT and those of QLF and Diagnodent. Thus the mineral density in a carious lesion detected and measured using QLF or Diagnodent can be extrapolated using the developed equation. This highlights the usefulness of MCT for monitoring the progress of an early caries lesion being treated with therapeutic agents in clinical practice or trials.
NASA Astrophysics Data System (ADS)
Hébert, H.; Schindelé, F.
2015-12-01
The 2004 Indian Ocean tsunami gave the opportunity to gather unprecedented tsunami observation databases for various coastlines. We present here an analysis of such databases gathered for 3 coastlines among the most impacted in 2004 in the intermediate and far field: Thailand-Myanmar, SE India-Sri Lanka, and SE Madagascar. Non-linear shallow water tsunami modeling performed on a single 4' coarse bathymetric grid is compared to these observations, in order to check to which extent a simple approach based on the usual energy conservation laws (either Green's or Synolakis' laws) can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (using Green's law), or the mean bathymetric slope to consider near the studied coast (when using Synolakis' law). Using extrapolation depths from 1 to 10 m generally allows a good fit; however, a depth of 0.1 m is required for some cases, especially in the far field (Madagascar), possibly due to enhanced numerical dispersion. Such a method also allows describing the variability of tsunami impact along a given coastline. Then, using a series of scenarios, we propose a preliminary statistical assessment of tsunami impact for a given earthquake magnitude along the Indonesian subduction zone. Conversely, the contribution to a specific hazard can also be mapped onto the sources, providing a first-order definition of which sources are threatening the 3 studied coastlines.
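A tiny worked example of the Green's-law extrapolation step, assuming a hypothetical coarse-grid amplitude and depths (illustrative numbers, not values from the study):

# Green's law: amplitude scales with depth as A2 = A1 * (h1 / h2) ** 0.25 (energy conservation).
a_model = 0.8     # m, tsunami amplitude at the last wet coarse-grid point (hypothetical)
h_model = 50.0    # m, depth of that grid point (hypothetical)
h_ref = 1.0       # m, extrapolation depth used to compare with coastal observations

a_coast = a_model * (h_model / h_ref) ** 0.25
print(f"amplitude extrapolated to {h_ref} m depth: {a_coast:.2f} m")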
Long-term predictions using natural analogues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, R.C.
1995-09-01
One of the unique and scientifically most challenging aspects of nuclear waste isolation is the extrapolation of short-term laboratory data (hours to years) to the long time periods (10^3-10^5 years) required by regulatory agencies for performance assessment. The direct validation of these extrapolations is not possible, but methods must be developed to demonstrate compliance with government regulations and to satisfy the lay public that there is a demonstrable and reasonable basis for accepting the long-term extrapolations. Natural systems (e.g., "natural analogues") provide perhaps the only means of partial "validation," as well as data that may be used directly in the models that are used in the extrapolation. Natural systems provide data on very large spatial (nm to km) and temporal (10^3-10^8 years) scales and in highly complex terranes in which unknown synergisms may affect radionuclide migration. This paper reviews the application (and most importantly, the limitations) of data from natural analogue systems to the "validation" of performance assessments.
Comparison of geodetic and glaciological mass-balance techniques, Gulkana Glacier, Alaska, U.S.A
Cox, L.H.; March, R.S.
2004-01-01
The net mass balance on Gulkana Glacier, Alaska, U.S.A., has been measured since 1966 by the glaciological method, in which seasonal balances are measured at three index sites and extrapolated over large areas of the glacier. Systematic errors can accumulate linearly with time in this method. Therefore, the geodetic balance, in which errors are less time-dependent, was calculated for comparison with the glaciological method. Digital elevation models of the glacier in 1974, 1993 and 1999 were prepared using aerial photographs, and geodetic balances were computed, giving -6.0±0.7 m w.e. from 1974 to 1993 and -11.8±0.7 m w.e. from 1974 to 1999. These balances are compared with the glaciological balances over the same intervals, which were -5.8±0.9 and -11.2±1.0 m w.e. respectively; both balances show that the thinning rate tripled in the 1990s. These cumulative balances differ by <6%. For this close agreement, the glaciologically measured mass balance of Gulkana Glacier must be largely free of systematic errors and be based on a time-variable area-altitude distribution, and the photography used in the geodetic method must have enough contrast to enable accurate photogrammetry.
Kowalik, William S.; Marsh, Stuart E.; Lyon, Ronald J. P.
1982-01-01
A method for estimating the reflectance of ground sites from satellite radiance data is proposed and tested. The method uses the known ground reflectance from several sites and satellite data gathered over a wide range of solar zenith angles. The method was tested on each of 10 different Landsat images using 10 small sites in the Walker Lake, Nevada area. Plots of raw Landsat digital numbers (DNs) versus the cosine of the solar zenith angle (cos Z) for the test areas are linear, and the average correlation coefficients of the data for Landsat bands 4, 5, 6, and 7 are 0.94, 0.93, 0.94, and 0.94, respectively. Ground reflectance values for the 10 sites are proportional to the slope of the DN versus cos Z relation at each site. The slopes of the DN versus cos Z relation for seven additional sites in Nevada and California were used to estimate the ground reflectances of those sites. The estimates for nearby sites are in error by an average of 1.2% and more distant sites are in error by 5.1%. The method can successfully estimate the reflectance of sites outside the original scene, but extrapolation of the reflectance estimation equations to other areas may violate assumptions of atmospheric homogeneity.
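A brief sketch of the slope-based reflectance estimate: regress DN on cos Z at a site of known reflectance and at a target site, then scale the known reflectance by the slope ratio (all numbers hypothetical):

import numpy as np

cos_z = np.array([0.45, 0.55, 0.65, 0.75, 0.85])       # cosine of solar zenith angle per image
dn_cal = np.array([38.0, 45.5, 53.0, 61.0, 68.5])      # DN at calibration site (hypothetical)
dn_tgt = np.array([24.0, 28.5, 33.5, 38.0, 42.5])      # DN at target site (hypothetical)

slope_cal = np.polyfit(cos_z, dn_cal, 1)[0]
slope_tgt = np.polyfit(cos_z, dn_tgt, 1)[0]

rho_cal = 0.30                                          # known ground reflectance of the cal site
rho_tgt = rho_cal * slope_tgt / slope_cal               # reflectance assumed proportional to slope
print(f"estimated target reflectance: {rho_tgt:.2f}")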
Methods of measurement signal acquisition from the rotational flow meter for frequency analysis
NASA Astrophysics Data System (ADS)
Świsulski, Dariusz; Hanus, Robert; Zych, Marcin; Petryka, Leszek
One of the simplest and most commonly used instruments for measuring the flow of homogeneous substances is the rotational flow meter. The main part of such a device is a rotor (vane or screw) rotating at a speed which is a function of the fluid or gas flow rate. A pulse signal with a frequency proportional to the speed of the rotor is obtained at the sensor output. For measurements in dynamic conditions, a variable interval between pulses prohibits direct analysis of the measuring signal. Therefore, the authors of the article developed a method involving the determination of measured values on the basis of the last inter-pulse interval preceding the moment designated by the timing generator. For larger changes of the measured value at a predetermined time, the value can be determined by means of extrapolation over the two adjacent inter-pulse intervals, assuming a linear change in the flow. The proposed methods provide constant spacing between measurements, allowing analysis of the dynamics of changes in the test flow, e.g., using a Fourier transform. To present the advantages of these methods, simulations of flow measurement were carried out with a DRH-1140 rotor flow meter from the company Kobold.
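A short sketch of the two readout rules described above, assuming hypothetical rotor pulse time stamps and taking frequency as the reciprocal of the inter-pulse interval (an interpretation of the method, not the authors' code):

import numpy as np

pulses = np.array([0.000, 0.052, 0.101, 0.148, 0.192, 0.239, 0.290])  # s, hypothetical pulse times
t_grid = np.arange(0.06, 0.28, 0.04)                                   # s, uniform sampling instants

for t in t_grid:
    i = np.searchsorted(pulses, t) - 1            # index of the last pulse before t
    f_last = 1.0 / (pulses[i] - pulses[i - 1])    # rule 1: last completed inter-pulse interval
    # rule 2: linear extrapolation based on the two preceding inter-pulse intervals
    f_prev = 1.0 / (pulses[i - 1] - pulses[i - 2]) if i >= 2 else f_last
    f_lin = f_last + (f_last - f_prev) * (t - pulses[i]) / (pulses[i] - pulses[i - 1])
    print(f"t = {t:.2f} s  f_last = {f_last:.1f} Hz  f_extrapolated = {f_lin:.1f} Hz")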
NASA Technical Reports Server (NTRS)
Hada, M.; George, Kerry; Cucinotta, Francis A.
2011-01-01
The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high-LET radiation exposure. Estimates of risks from low doses and low dose rates are often extrapolated using data from Japanese atomic bomb survivors with either linear or linear-quadratic models of fit. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (1-20 cGy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions. Chromosomes were analyzed using the whole-chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving greater than 2 breaks in 2 or more chromosomes). The curves for doses above 10 cGy were fitted with linear or linear-quadratic functions. For Si-28 ions no dose response was observed in the 2-10 cGy dose range, suggesting a non-targeted effect in this range.
Can we explain atypical solar flares?
NASA Astrophysics Data System (ADS)
Dalmasse, K.; Chandra, R.; Schmieder, B.; Aulanier, G.
2015-02-01
Context. We used multiwavelength high-resolution data from ARIES, THEMIS, and SDO instruments to analyze a non-standard, C3.3 class flare produced within the active region NOAA 11589 on 2012 October 16. Magnetic flux emergence and cancellation were continuously detected within the active region, the latter leading to the formation of two filaments. Aims: Our aim is to identify the origins of the flare taking the complex dynamics of its close surroundings into account. Methods: We analyzed the magnetic topology of the active region using a linear force-free field extrapolation to derive its 3D magnetic configuration and the location of quasi-separatrix layers (QSLs), which are preferred sites for flaring activity. Because the active region's magnetic field was nonlinear force-free, we completed a parametric study using different linear force-free field extrapolations to demonstrate the robustness of the derived QSLs. Results: The topological analysis shows that the active region presented a complex magnetic configuration comprising several QSLs. The considered data set suggests that an emerging flux episode played a key role in triggering the flare. The emerging flux probably activated the complex system of QSLs, leading to multiple coronal magnetic reconnections within the QSLs. This scenario accounts for the observed signatures: the two extended flare ribbons developed at locations matched by the photospheric footprints of the QSLs and were accompanied with flare loops that formed above the two filaments, which played no important role in the flare dynamics. Conclusions: This is a typical example of a complex flare that can a priori show standard flare signatures that are nevertheless impossible to interpret with any standard model of eruptive or confined flare. We find that a topological analysis, however, permitted us to unveil the development of such complex sets of flare signatures. Movies associated to Figs. 1, 3, and 9 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A37
Mixed-venous oxygen tension by nitrogen rebreathing - A critical, theoretical analysis.
NASA Technical Reports Server (NTRS)
Kelman, G. R.
1972-01-01
There is dispute about the validity of the nitrogen rebreathing technique for determination of mixed-venous oxygen tension. This theoretical analysis examines the circumstances under which the technique is likely to be applicable. When the plateau method is used the probable error in mixed-venous oxygen tension is plus or minus 2.5 mm Hg at rest, and of the order of plus or minus 1 mm Hg during exercise. Provided, that the rebreathing bag size is reasonably chosen, Denison's (1967) extrapolation technique gives results at least as accurate as those obtained by the plateau method. At rest, however, extrapolation should be to 30 rather than to 20 sec.
NASA Astrophysics Data System (ADS)
Alam, Md. Mehboob; Deur, Killian; Knecht, Stefan; Fromager, Emmanuel
2017-11-01
The extrapolation technique of Savin [J. Chem. Phys. 140, 18A509 (2014)], which was initially applied to range-separated ground-state-density-functional Hamiltonians, is adapted in this work to ghost-interaction-corrected (GIC) range-separated ensemble density-functional theory (eDFT) for excited states. While standard extrapolations rely on energies that decay as μ⁻² in the large range-separation-parameter μ limit, we show analytically that (approximate) range-separated GIC ensemble energies converge more rapidly (as μ⁻³) towards their pure wavefunction theory values (μ → +∞ limit), thus requiring a different extrapolation correction. The purpose of such a correction is to further improve on the convergence and, consequently, to obtain more accurate excitation energies for a finite (and, in practice, relatively small) μ value. As a proof of concept, we apply the extrapolation method to He and small molecular systems (viz., H2, HeH+, and LiH), thus considering different types of excitations such as Rydberg, charge transfer, and double excitations. Potential energy profiles of the first three and four singlet Σ+ excitation energies in HeH+ and H2, respectively, are studied with a particular focus on avoided crossings for the latter. Finally, the extraction of individual state energies from the ensemble energy is discussed in the context of range-separated eDFT, as a perspective.
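As a sketch of what such a correction can look like, assuming only the stated leading μ⁻³ behavior (the authors' working expression may differ in detail), eliminating the unknown prefactor with the derivative dE/dμ gives:

```latex
% If the ensemble energy approaches its \mu \to +\infty limit as
%   E(\mu) \approx E_\infty + C\,\mu^{-3},
% then dE/d\mu \approx -3C\,\mu^{-4}, and eliminating the unknown constant C yields
\[
  E_\infty \;\approx\; E(\mu) \;+\; \frac{\mu}{3}\,\frac{dE(\mu)}{d\mu},
\]
% whereas the standard \mu^{-2} decay would instead lead to
% E_\infty \approx E(\mu) + (\mu/2)\, dE(\mu)/d\mu.
```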
NASA Astrophysics Data System (ADS)
Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan
2013-04-01
Nowcasting of precipitation events, especially thunderstorm events or winter storms, has high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potential hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increase of handling capacity, anticipation of avoidance manoeuvres and increase of awareness before dangerous areas are entered by aircraft. To facilitate this rapid update forecasts of location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analysis of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis to generate rapid update forecasts in a time frame up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. On the basis of tracking radar echoes by correlation the movement vectors of successive weather radar images are calculated. For every new successive radar image a set of ensemble precipitation fields is collected by using different parameter sets like pattern match size, different time steps, filter methods and an implementation of history of tracking vectors and plausibility checks. This method considers the uncertainty in rain field displacement and different scales in time and space. By validating manually a set of case studies, the best verification method and skill score is defined and implemented into an online-verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To get information about the quality and reliability of the extrapolation process additional information of data quality (e.g. shielding in Alpine areas) is extrapolated and combined with an extrapolation-quality-index. Subsequently the probability and quality information of the forecast ensemble is available and flexible blending to numerical prediction model for each subarea is possible. Simultaneously with automatic processing the ensemble nowcasting product is visualized in a new innovative way which combines the intensity, probability and quality information for different subareas in one forecast image.
Development of a primary standard for absorbed dose from unsealed radionuclide solutions
NASA Astrophysics Data System (ADS)
Billas, I.; Shipley, D.; Galer, S.; Bass, G.; Sander, T.; Fenwick, A.; Smyth, V.
2016-12-01
Currently, the determination of the internal absorbed dose to tissue from an administered radionuclide solution relies on Monte Carlo (MC) calculations based on published nuclear decay data, such as emission probabilities and energies. In order to validate these methods with measurements, it is necessary to achieve the required traceability of the internal absorbed dose measurements of a radionuclide solution to a primary standard of absorbed dose. The purpose of this work was to develop a suitable primary standard. A comparison between measurements and calculations of absorbed dose allows the validation of the internal radiation dose assessment methods. The absorbed dose from an yttrium-90 chloride (90YCl) solution was measured with an extrapolation chamber. A phantom was developed at the National Physical Laboratory (NPL), the UK’s National Measurement Institute, to position the extrapolation chamber as closely as possible to the surface of the solution. The performance of the extrapolation chamber was characterised and a full uncertainty budget for the absorbed dose determination was obtained. Absorbed dose to air in the collecting volume of the chamber was converted to absorbed dose at the centre of the radionuclide solution by applying a MC calculated correction factor. This allowed a direct comparison of the analytically calculated and experimentally determined absorbed dose of an 90YCl solution. The relative standard uncertainty in the measurement of absorbed dose at the centre of an 90YCl solution with the extrapolation chamber was found to be 1.6% (k = 1). The calculated 90Y absorbed doses from published medical internal radiation dose (MIRD) and radiation dose assessment resource (RADAR) data agreed with measurements to within 1.5% and 1.4%, respectively. This study has shown that it is feasible to use an extrapolation chamber for performing primary standard absorbed dose measurements of an unsealed radionuclide solution. Internal radiation dose assessment methods based on MIRD and RADAR data for 90Y have been validated with experimental absorbed dose determination and they agree within the stated expanded uncertainty (k = 2).
Temperature extrapolation of multicomponent grand canonical free energy landscapes
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.
2017-08-01
We derive a method for extrapolating the grand canonical free energy landscape of a multicomponent fluid system from one temperature to another. Previously, we introduced this statistical mechanical framework for the case where kinetic energy contributions to the classical partition function were neglected for simplicity [N. A. Mahynski et al., J. Chem. Phys. 146, 074101 (2017)]. Here, we generalize the derivation to admit these contributions in order to explicitly illustrate the differences that result. Specifically, we show how factoring out kinetic energy effects a priori, in order to consider only the configurational partition function, leads to simpler mathematical expressions that tend to produce more accurate extrapolations than when these effects are included. We demonstrate this by comparing and contrasting these two approaches for the simple cases of an ideal gas and a non-ideal, square-well fluid.
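For orientation, the first-order form of such a fluctuation-based temperature extrapolation for a generic canonical-ensemble average is sketched below; this is illustrative only, since the paper derives the full grand-canonical, multicomponent expressions.

```latex
% Generic first-order temperature extrapolation of an observable X sampled at \beta_0:
\[
  \langle X \rangle_{\beta} \;\approx\; \langle X \rangle_{\beta_0}
  \;+\; (\beta - \beta_0)\,
  \frac{\partial \langle X \rangle}{\partial \beta}\Big|_{\beta_0},
  \qquad
  \frac{\partial \langle X \rangle}{\partial \beta}
  = -\bigl( \langle X U \rangle - \langle X \rangle \langle U \rangle \bigr),
\]
% where U is the energy whose fluctuations are sampled; whether kinetic contributions
% are retained in U is exactly the choice discussed in the abstract above.
```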
Garcia, Mariano; Saatchi, Sassan; Casas, Angeles; Koltunov, Alexander; Ustin, Susan; Ramirez, Carlos; Garcia-Gutierrez, Jorge; Balzter, Heiko
2017-02-01
Quantifying biomass consumption and carbon release is critical to understanding the role of fires in the carbon cycle and air quality. We present a methodology to estimate the biomass consumed and the carbon released by the California Rim fire by integrating postfire airborne LiDAR and multitemporal Landsat Operational Land Imager (OLI) imagery. First, a support vector regression (SVR) model was trained to estimate the aboveground biomass (AGB) from LiDAR-derived metrics over the unburned area. The selected model estimated AGB with an R² of 0.82 and RMSE of 59.98 Mg/ha. Second, LiDAR-based biomass estimates were extrapolated to the entire area before and after the fire, using Landsat OLI reflectance bands, Normalized Difference Infrared Index, and the elevation derived from LiDAR data. The extrapolation was performed using SVR models that resulted in R² of 0.73 and 0.79 and RMSE of 87.18 Mg/ha and 75.43 Mg/ha for the postfire and prefire images, respectively. After removing bias from the AGB extrapolations using a linear relationship between estimated and observed values, we estimated the biomass consumption from postfire LiDAR and prefire Landsat maps to be 6.58 ± 0.03 Tg (10^12 g), which translates into 12.06 ± 0.06 Tg CO2e released to the atmosphere, equivalent to the annual emissions of 2.57 million cars.
Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development
Burt, T.; Yoshida, K.; Lappin, G.; ...
2016-02-26
A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in their relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications and design options demonstrate the versatility and flexibility these approaches offer to drug developers including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. Lastly, all phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.
Research on camera on orbit radial calibration based on black body and infrared calibration stars
NASA Astrophysics Data System (ADS)
Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng
2018-05-01
Affected by the launch process and the space environment, the response capability of a space camera is inevitably attenuated, so on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase the precision of infrared radiation measurements. Because stars can be treated as point targets, we use them as the radiometric calibration source and establish a Taylor-expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and the design is verified by an on-orbit test. The experimental calibration results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration methods is about 10%, indicating that the methods can satisfy the requirements of on-orbit calibration.
The Linear Bicharacteristic Scheme for Computational Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Chan, Siew-Loong
2000-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to treat lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media, and the treatment of perfect electrical conductors (PECs) is shown to follow directly in the limit of high conductivity. Heterogeneous media are treated through implementation of surface boundary conditions, and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.
A Two-Dimensional Linear Bicharacteristic Scheme for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.
2002-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on one-dimensional electromagnetic wave propagation problems. This memorandum extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors in two dimensions. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Both the Transverse Electric and Transverse Magnetic polarizations are considered. Computational requirements and a Fourier analysis are also discussed. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for two-dimensional model problems on uniform grids, and the Finite Difference Time Domain (FDTD) algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the two-dimensional explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has less phase velocity error.
On the minimum quantum requirement of photosynthesis.
Zeinalov, Yuzeir
2009-01-01
An analysis of the shape of photosynthetic light curves is presented and the existence of the initial non-linear part is shown as a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of nonlinearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves or the values of the minima of the quantum requirement curves cannot be used for estimation of the exact value of the maximum quantum efficiency and the minimum quantum requirement. The estimation of the maximum quantum efficiency or the minimum quantum requirement should be performed only after extrapolation of the linear part at higher light intensities of the quantum requirement curves to "0" light intensity.
1995 second modulator-klystron workshop: A modulator-klystron workshop for future linear colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-03-01
This second workshop examined the present state of modulator design and attempted an extrapolation for future electron-positron linear colliders. These colliders are currently viewed as multikilometer-long accelerators consisting of a thousand or more RF sources with 500 to 1,000, or more, pulsed power systems. The workshop opened with two introductory talks that presented the current approaches to designing these linear colliders, the anticipated RF sources, and the design constraints for pulsed power. The cost of main AC power is a major economic consideration for a future collider; consequently, the workshop investigated efficient modulator designs: techniques that effectively apply the art of power conversion from the AC mains to the RF output and, specifically, designs that generate output pulses with rise times that are very fast compared to the flattop. There were six sessions that involved one or more presentations based on problems specific to the design and production of thousands of modulator-klystron stations, followed by discussion and debate on the material.
NASA Astrophysics Data System (ADS)
Josey, C.; Forget, B.; Smith, K.
2017-12-01
This paper introduces two families of A-stable algorithms for the integration of y′ = F(y, t)y: the extended predictor-corrector (EPC) and the exponential-linear (EL) methods. The structure of the algorithm families is described, and the method of derivation of the coefficients is presented. The new algorithms are then tested on a simple deterministic problem and a Monte Carlo isotopic evolution problem. The EPC family is shown to be only second order for systems of ODEs. However, the EPC-RK45 algorithm had the highest accuracy on the Monte Carlo test, requiring at least a factor of 2 fewer function evaluations to achieve a given accuracy than a second-order predictor-corrector method (center extrapolation / center midpoint method) with regard to the Gd-157 concentration. Members of the EL family can be derived to at least fourth order. The EL3 and EL4 algorithms presented are shown to be third and fourth order, respectively, on the systems-of-ODE test. In the Monte Carlo test, these methods did not overtake the accuracy of the EPC methods before statistical uncertainty dominated the error. The statistical properties of the algorithms were also analyzed during the Monte Carlo problem. The new methods are shown to yield smaller standard deviations on final quantities than the reference predictor-corrector method, by up to a factor of 1.4.
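As a point of reference, a matrix-exponential predictor-corrector step of the same general flavour as the comparison (center extrapolation / center midpoint) method can be sketched as follows; the generator F, its state dependence, and the step placement are illustrative and are not the paper's exact scheme.

```python
# Sketch of a midpoint-style predictor-corrector step for y' = F(y, t) y.
import numpy as np
from scipy.linalg import expm

def F(y, t):
    # Hypothetical state-dependent generator (a tiny 2-nuclide burnup-like system).
    return np.array([[-1.0 - 0.1 * y[1], 0.0],
                     [ 1.0 + 0.1 * y[1], -0.5]])

def pc_step(y, t, h):
    # Predictor: hold F at the beginning of the step and advance to the midpoint.
    y_mid = expm(0.5 * h * F(y, t)) @ y
    # Corrector: re-evaluate F at the midpoint state and use it over the full step.
    return expm(h * F(y_mid, t + 0.5 * h)) @ y

y, t, h = np.array([1.0, 0.0]), 0.0, 0.1
for _ in range(10):
    y = pc_step(y, t, h)
    t += h
print(t, y)
```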
Pseudogap temperature T* of cuprate superconductors from the Nernst effect
NASA Astrophysics Data System (ADS)
Cyr-Choinière, O.; Daou, R.; Laliberté, F.; Collignon, C.; Badoux, S.; LeBoeuf, D.; Chang, J.; Ramshaw, B. J.; Bonn, D. A.; Hardy, W. N.; Liang, R.; Yan, J.-Q.; Cheng, J.-G.; Zhou, J.-S.; Goodenough, J. B.; Pyon, S.; Takayama, T.; Takagi, H.; Doiron-Leyraud, N.; Taillefer, Louis
2018-02-01
We use the Nernst effect to delineate the boundary of the pseudogap phase in the temperature-doping phase diagram of hole-doped cuprate superconductors. New data for the Nernst coefficient ν(T) of YBa2Cu3Oy (YBCO), La1.8-xEu0.2SrxCuO4 (Eu-LSCO), and La1.6-xNd0.4SrxCuO4 (Nd-LSCO) are presented and compared with previously published data on YBCO, Eu-LSCO, Nd-LSCO, and La2-xSrxCuO4 (LSCO). The temperature Tν at which ν/T deviates from its high-temperature linear behavior is found to coincide with the temperature at which the resistivity ρ(T) deviates from its linear-T dependence, which we take as the definition of the pseudogap temperature T*, in agreement with the temperature at which the antinodal spectral gap detected in angle-resolved photoemission spectroscopy (ARPES) opens. We track T* as a function of doping and find that it decreases linearly vs p in all four materials, having the same value in the three LSCO-based cuprates, irrespective of their different crystal structures. At low p, T* is higher than the onset temperature of the various orders observed in underdoped cuprates, suggesting that these orders are secondary instabilities of the pseudogap phase. A linear extrapolation of T*(p) to p = 0 yields T*(p → 0) ≃ TN(0), the Néel temperature for the onset of antiferromagnetic order at p = 0, suggesting that there is a link between pseudogap and antiferromagnetism. With increasing p, T*(p) extrapolates linearly to zero at p ≃ pc2, the critical doping below which superconductivity emerges at high doping, suggesting that the conditions which favor pseudogap formation also favor pairing. We also use the Nernst effect to investigate how far superconducting fluctuations extend above the critical temperature Tc, as a function of doping, and find that a narrow fluctuation regime tracks Tc, and not T*. This confirms that the pseudogap phase is not a form of precursor superconductivity, and fluctuations in the phase of the superconducting order parameter are not what causes Tc to fall on the underdoped side of the Tc dome.
The Laguerre finite difference one-way equation solver
NASA Astrophysics Data System (ADS)
Terekhov, Andrew V.
2017-05-01
This paper presents a new finite difference algorithm for solving the 2D one-way wave equation with a preliminary approximation of a pseudo-differential operator by a system of partial differential equations. As opposed to the existing approaches, the integral Laguerre transform instead of Fourier transform is used. After carrying out the approximation of spatial variables it is possible to obtain systems of linear algebraic equations with better computing properties and to reduce computer costs for their solution. High accuracy of calculations is attained at the expense of employing finite difference approximations of higher accuracy order that are based on the dispersion-relationship-preserving method and the Richardson extrapolation in the downward continuation direction. The numerical experiments have verified that as compared to the spectral difference method based on Fourier transform, the new algorithm allows one to calculate wave fields with a higher degree of accuracy and a lower level of numerical noise and artifacts including those for non-smooth velocity models. In the context of solving the geophysical problem the post-stack migration for velocity models of the types Syncline and Sigsbee2A has been carried out. It is shown that the images obtained contain lesser noise and are considerably better focused as compared to those obtained by the known Fourier Finite Difference and Phase-Shift Plus Interpolation methods. There is an opinion that purely finite difference approaches do not allow carrying out the seismic migration procedure with sufficient accuracy, however the results obtained disprove this statement. For the supercomputer implementation it is proposed to use the parallel dichotomy algorithm when solving systems of linear algebraic equations with block-tridiagonal matrices.
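The Richardson extrapolation mentioned for the downward-continuation direction follows the generic pattern sketched below (illustrative only; the test function and orders here are hypothetical): two approximations computed with steps h and h/2 of formal order p are combined to cancel the leading error term.

```python
# Generic Richardson extrapolation: combine a step-h and a step-h/2 approximation of order p.
import math

def richardson(a_h, a_h2, p):
    """a_h: approximation with step h; a_h2: with step h/2; p: formal order of accuracy."""
    return (2**p * a_h2 - a_h) / (2**p - 1)

# Example on a quantity with a known limit: forward-difference derivative of exp at 0 (exact value 1).
def fd(h):
    return (math.exp(h) - 1.0) / h          # first-order accurate

print(fd(0.1), fd(0.05), richardson(fd(0.1), fd(0.05), p=1))  # extrapolated value is closer to 1
```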
NASA Astrophysics Data System (ADS)
Nigro, A.; De Bartolo, C.; Crivellini, A.; Bassi, F.
2017-12-01
In this paper we investigate the possibility of using the high-order accurate A(α)-stable Second Derivative (SD) schemes proposed by Enright for the implicit time integration of the Discontinuous Galerkin (DG) space-discretized Navier-Stokes equations. These multistep schemes are A-stable up to fourth order, but their use results in a system matrix that is difficult to compute. Furthermore, the evaluation of the nonlinear function is computationally very demanding. We propose here a Matrix-Free (MF) implementation of Enright schemes that allows one to obtain a method without the costs of forming, storing and factorizing the system matrix, which is much less computationally expensive than its matrix-explicit counterpart, and which performs competitively with other implicit schemes, such as the Modified Extended Backward Differentiation Formulae (MEBDF). The algorithm makes use of the preconditioned GMRES algorithm for solving the linear system of equations. The preconditioner is based on the ILU(0) factorization of an approximated but computationally cheaper form of the system matrix, and it is reused for several time steps to improve the efficiency of the MF Newton-Krylov solver. We additionally employ a polynomial extrapolation technique to compute an accurate initial guess for the implicit nonlinear system. The stability properties of the SD schemes have been analyzed by solving a linear model problem. For the analysis on the Navier-Stokes equations, two-dimensional inviscid and viscous test cases, both with a known analytical solution, are solved to assess the accuracy properties of the proposed time integration method for nonlinear autonomous and non-autonomous systems, respectively. The performance of the SD algorithm is compared with that obtained by using an MF-MEBDF solver, in order to evaluate its effectiveness, identifying its limitations and suggesting possible further improvements.
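The polynomial extrapolation of the initial guess can be illustrated generically (hypothetical sketch, not the paper's implementation): the last few time-step solutions are stored and Lagrange-extrapolated to the new time level before the Newton-Krylov iteration starts.

```python
# Extrapolate the initial guess for the next implicit solve from previous time-step solutions.
import numpy as np

def extrapolated_guess(history, order=2):
    """history: list of (t, u) for the most recent steps; returns a guess at the next time level.
    Uses Lagrange extrapolation of each solution component in time."""
    ts = np.array([t for t, _ in history[-(order + 1):]])
    us = np.stack([u for _, u in history[-(order + 1):]])   # shape (k, n_dof)
    t_next = ts[-1] + (ts[-1] - ts[-2])                      # constant step assumed here
    # Lagrange basis polynomials evaluated at t_next
    w = np.array([np.prod([(t_next - ts[j]) / (ts[i] - ts[j])
                           for j in range(len(ts)) if j != i]) for i in range(len(ts))])
    return w @ us

# toy usage: 3-dof solution history at t = 0, 0.1, 0.2 (hypothetical values)
hist = [(0.0, np.array([1.0, 0.0, 2.0])),
        (0.1, np.array([0.9, 0.2, 1.9])),
        (0.2, np.array([0.8, 0.5, 1.7]))]
print(extrapolated_guess(hist))   # guess for t = 0.3, handed to the Newton-Krylov solver
```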
NASA Astrophysics Data System (ADS)
Avecilla, Fernando; Panebianco, Juan E.; Mendez, Mariano J.; Buschiazzo, Daniel E.
2018-06-01
The PM10 emission efficiency of soils has been determined through different methods. Although these methods imply important physical differences, their outputs have never been compared. In the present study the PM10 emission efficiency was determined for soils across a wide range of textures, using three typical methodologies: a rotary-chamber dust generator (EDG), a laboratory wind tunnel on a prepared soil bed, and field measurements on an experimental plot. A statistically significant linear correlation was found (p < 0.05) between the PM10 emission efficiencies obtained from the EDG and wind tunnel experiments. A significant linear correlation (p < 0.05) was also found between the PM10 emission efficiency determined with both the wind tunnel and the EDG, and a soil texture index (%sand + %silt)/(%clay + %organic matter) that reflects the effect of texture on the cohesion of the aggregates. Soils with higher sand content showed proportionally less emission efficiency than fine-textured, aggregated soils. This indicated that both methodologies were able to detect similar trends regarding the correlation between soil texture and PM10 emission. The trends attributed to soil texture were also verified for two contrasting soils under field conditions. However, differing conditions during the laboratory-scale and the field-scale experiments produced significant differences in the magnitude of the emission efficiency values. The causes of these differences are discussed within the paper. Despite these differences, the results suggest that standardized laboratory and wind tunnel procedures are promising methods, which could be calibrated in the future to obtain results comparable to field values, essentially through adjusting the simulation time. However, more studies are needed to correctly extrapolate these values to field-scale conditions.
NASA Astrophysics Data System (ADS)
Abdelmalak, M. M.; Bulois, C.; Mourgues, R.; Galland, O.; Legland, J.-B.; Gruber, C.
2016-08-01
Cohesion and friction coefficient are fundamental parameters for scaling brittle deformation in laboratory models of geological processes. However, they are commonly not experimental variables, whereas (1) rocks range from cohesion-less to strongly cohesive and from low friction to high friction and (2) strata exhibit substantial cohesion and friction contrasts. This brittle paradox implies that the effects of brittle properties on processes involving brittle deformation cannot be tested in laboratory models. Solving this paradox requires the use of dry granular materials with tunable and controllable brittle properties. In this paper, we describe dry mixtures of fine-grained cohesive, high-friction silica powder (SP) and low-cohesion, low-friction glass microspheres (GM) that fulfill this requirement. We systematically estimated the cohesions and friction coefficients of mixtures of variable proportions using two independent methods: (1) a classic Hubbert-type shear box to determine the extrapolated cohesion (C) and friction coefficient (μ), and (2) direct measurements of the tensile strength (T0) and the height (H) of open fractures to calculate the true cohesion (C0). The measured values of cohesion increase from 100 Pa for pure GM to 600 Pa for pure SP, with a sub-linear trend of the cohesion with the GM content of the mixture. The two independent cohesion measurement methods, from shear tests and tension/extension tests, yield very similar results for the extrapolated cohesion (C) and show that both are robust and can be used independently. The measured values of the friction coefficient increase from 0.5 for pure GM to 1.05 for pure SP. The use of these granular material mixtures now allows one to test (1) the effects of cohesion and friction coefficient in homogeneous laboratory models and (2) the effect of brittle layering on brittle deformation, as demonstrated by preliminary experiments. Therefore, the brittle properties become, at last, experimental variables.
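The Hubbert-type shear-box estimates amount to a linear Mohr-Coulomb fit of peak shear stress against normal stress; a minimal sketch with invented stress values is shown below, where the intercept plays the role of the extrapolated cohesion C and the slope that of the friction coefficient μ.

```python
# Linear Mohr-Coulomb fit, tau = C + mu * sigma_n, to hypothetical shear-box data.
import numpy as np

sigma_n = np.array([200.0, 400.0, 600.0, 800.0])     # applied normal stresses (Pa), hypothetical
tau_peak = np.array([410.0, 590.0, 800.0, 1010.0])   # measured peak shear stresses (Pa), hypothetical

mu, C = np.polyfit(sigma_n, tau_peak, 1)             # slope = friction coefficient, intercept = cohesion
print(f"friction coefficient mu ≈ {mu:.2f}, extrapolated cohesion C ≈ {C:.0f} Pa")
```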
Dynamic Response of Multiphase Porous Media
1993-06-16
34"--OIct 5oct, tf1 2fOct, a f s,t,R Linearly Set Parameters Interpolate s = 1.03 from Model Fit s,t,R t = R = 0.0 Parameters Figure 3.3 Extrapolation...nitrogen. To expedite the testing, the system was equipped with solenoid operated valves so that the tests could be conducted by a single operator...incident bar. Figure 6.6 shows the incident bar entering the pressure vessel that contains the test specimen. The hose and valves are for filling and 6-5 I
Recovery of compacted soils in Mojave Desert ghost towns.
Webb, R.H.; Steiger, J.W.; Wilshire, H.G.
1986-01-01
Residual compaction of soils was measured at seven sites in five Mojave Desert ghost towns. Soils in these Death Valley National Monument townsites were compacted by vehicles, animals, and human trampling, and the townsites had been completely abandoned and the buildings removed for 64 to 75 yr. Recovery times extrapolated using a linear recovery model ranged from 80 to 140 yr and averaged 100 yr. The recovery times were related to elevation, suggesting freeze-thaw loosening as an important factor in ameliorating soil compaction in the Mojave Desert. -from Authors
NASA Astrophysics Data System (ADS)
Beaufort, Aurélien; Lamouroux, Nicolas; Pella, Hervé; Datry, Thibault; Sauquet, Eric
2018-05-01
Headwater streams represent a substantial proportion of river systems and many of them have intermittent flows due to their upstream position in the network. These intermittent rivers and ephemeral streams have recently seen a marked increase in interest, especially to assess the impact of drying on aquatic ecosystems. The objective of this paper is to quantify how discrete (in space and time) field observations of flow intermittence help to extrapolate over time the daily probability of drying (defined at the regional scale). Two empirical models based on linear or logistic regressions have been developed to predict the daily probability of intermittence at the regional scale across France. Explanatory variables were derived from available daily discharge and groundwater-level data of a dense gauging/piezometer network, and models were calibrated using discrete series of field observations of flow intermittence. The robustness of the models was tested using an independent, dense regional dataset of intermittence observations and observations of the year 2017 excluded from the calibration. The resulting models were used to extrapolate the daily regional probability of drying in France: (i) over the period 2011-2017 to identify the regions most affected by flow intermittence; (ii) over the period 1989-2017, using a reduced input dataset, to analyse temporal variability of flow intermittence at the national level. The two empirical regression models performed equally well between 2011 and 2017. The accuracy of predictions depended on the number of continuous gauging/piezometer stations and intermittence observations available to calibrate the regressions. Regions with the highest performance were located in sedimentary plains, where the monitoring network was dense and where the regional probability of drying was the highest. Conversely, the worst performances were obtained in mountainous regions. Finally, temporal projections (1989-2016) suggested the highest probabilities of intermittence (> 35 %) in 1989-1991, 2003 and 2005. A high density of intermittence observations improved the information provided by gauging stations and piezometers to extrapolate the temporal variability of intermittent rivers and ephemeral streams.
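A minimal sketch of the logistic-regression variant is given below (predictor choice, values, and scaling are hypothetical): discrete field observations of flow state are regressed on regional hydrological indicators, and the fitted model is then used to predict a daily probability of drying for unobserved days.

```python
# Regress observed flow state (1 = dry, 0 = flowing) on regional hydrological indicators,
# then extrapolate a daily probability of drying to days without field observations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# predictors for observed days: e.g. standardized regional discharge and groundwater level
X_obs = np.array([[-1.2, -0.8], [-0.5, -0.3], [0.1, 0.2], [0.8, 0.9],
                  [-1.5, -1.1], [0.4, 0.5], [1.2, 1.3], [-0.9, -0.6]])
y_obs = np.array([1, 1, 0, 0, 1, 0, 0, 1])            # observed intermittence (dry = 1)

model = LogisticRegression().fit(X_obs, y_obs)

# predict the daily regional probability of drying from the gauging/piezometer network
X_alldays = np.array([[-1.0, -0.7], [0.0, 0.1], [1.0, 1.1]])
print(model.predict_proba(X_alldays)[:, 1])
```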
A generalized sound extrapolation method for turbulent flows
NASA Astrophysics Data System (ADS)
Zhong, Siyang; Zhang, Xin
2018-02-01
Sound extrapolation methods are often used to compute acoustic far-field directivities using near-field flow data in aeroacoustics applications. The results may be erroneous if the volume integrals are neglected (to save computational cost) while non-acoustic fluctuations are collected on the integration surfaces. In this work, we develop a new sound extrapolation method based on an acoustic analogy using Taylor's hypothesis (Taylor 1938 Proc. R. Soc. Lond. A 164, 476-490. (doi:10.1098/rspa.1938.0032)). Typically, a convection operator is used to filter out the acoustically inefficient components in the turbulent flows, and an acoustics-dominant indirect variable Dcp′ is solved for. The sound pressure p′ at the far field is computed from Dcp′ based on the asymptotic properties of the Green's function. Validation results for benchmark problems with well-defined sources match well with the exact solutions. For aeroacoustics applications: the sound predictions for the aerofoil-gust interaction are close to those of an earlier method specially developed to remove the effect of vortical fluctuations (Zhong & Zhang 2017 J. Fluid Mech. 820, 424-450. (doi:10.1017/jfm.2017.219)); for the case of vortex shedding noise from a cylinder, the off-body predictions by the proposed method match well with the on-body Ffowcs Williams and Hawkings result; different integration surfaces yield close predictions (of both spectra and far-field directivities) for a co-flowing jet case using an established direct numerical simulation database. The results suggest that the method may be a potential candidate for sound projection in aeroacoustics applications.
Resolution enhancement in digital holography by self-extrapolation of holograms.
Latychevskaia, Tatiana; Fink, Hans-Werner
2013-03-25
It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
Predicting structural properties of fluids by thermodynamic extrapolation
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Jiao, Sally; Hatch, Harold W.; Blanco, Marco A.; Shen, Vincent K.
2018-05-01
We describe a methodology for extrapolating the structural properties of multicomponent fluids from one thermodynamic state to another. These properties generally include features of a system that may be computed from an individual configuration such as radial distribution functions, cluster size distributions, or a polymer's radius of gyration. This approach is based on the principle of using fluctuations in a system's extensive thermodynamic variables, such as energy, to construct an appropriate Taylor series expansion for these structural properties in terms of intensive conjugate variables, such as temperature. Thus, one may extrapolate these properties from one state to another when the series is truncated to some finite order. We demonstrate this extrapolation for simple and coarse-grained fluids in both the canonical and grand canonical ensembles, in terms of both temperatures and the chemical potentials of different components. The results show that this method is able to reasonably approximate structural properties of such fluids over a broad range of conditions. Consequently, this methodology may be employed to increase the computational efficiency of molecular simulations used to measure the structural properties of certain fluid systems, especially those used in high-throughput or data-driven investigations.
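Numerically, the lowest-order version of this extrapolation only needs the sampled observable and energy time series; the toy sketch below (synthetic data, canonical-ensemble first-order term only) illustrates the mechanics, whereas the paper also works with higher-order terms and grand-canonical variables.

```python
# Estimate a structural observable at a nearby temperature from fluctuations sampled at one
# temperature. The "observable" X and energy U below come from a fake sampled trajectory.
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(-500.0, 20.0, size=20000)            # sampled potential energies (hypothetical)
X = 3.0 + 0.002 * U + rng.normal(0, 0.05, 20000)    # a structural metric correlated with U

beta0, beta1 = 1.0, 1.05                            # sampled and target inverse temperatures
dX_dbeta = -(np.mean(X * U) - np.mean(X) * np.mean(U))   # first-order fluctuation formula
X_extrap = np.mean(X) + (beta1 - beta0) * dX_dbeta

print(f"<X> at beta0: {np.mean(X):.3f},  extrapolated to beta1: {X_extrap:.3f}")
```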
Bachmann, Talis; Murd, Carolina; Põder, Endel
2012-09-01
One fundamental property of the perceptual and cognitive systems is their capacity for prediction in the dynamic environment; the flash-lag effect has been considered as a particularly suggestive example of this capacity (Nijhawan, Nature 370:256-257, 1994; Behav Brain Sci 31:179-239, 2008). Thus, because of involvement of the mechanisms of extrapolation and visual prediction, the moving object is perceived ahead of the simultaneously flashed static object objectively aligned with the moving one. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus by approaching it before the flash does not diminish the flash-lag effect, but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.
Toxicokinetic Model Development for the Insensitive Munitions Component 3-Nitro-1,2,4-Triazol-5-One.
Sweeney, Lisa M; Phillips, Elizabeth A; Goodwin, Michelle R; Bannon, Desmond I
2015-01-01
3-Nitro-1,2,4-triazol-5-one (NTO) is a component of insensitive munitions that are potential replacements for conventional explosives. Toxicokinetic data can aid in the interpretation of toxicity studies and interspecies extrapolation, but only limited data on the toxicokinetics and metabolism of NTO are available. To supplement these limited data, further in vivo studies of NTO in rats were conducted and blood concentrations were measured, tissue distribution of NTO was estimated using an in silico method, and physiologically based pharmacokinetic models of the disposition of NTO in rats and macaques were developed and extrapolated to humans. The model predictions can be used to extrapolate from designated points of departure identified from rat toxicology studies to provide a scientific basis for estimates of acceptable human exposure levels for NTO. © The Author(s) 2015.
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
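As an example of the "more traditional" scheme referred to above, a common two-point inverse-cube complete-basis-set extrapolation can be written in a few lines; whether the authors used exactly this functional form is not stated in the abstract, and the energies below are invented.

```python
# Two-point complete-basis-set (CBS) extrapolation assuming E(X) = E_CBS + A * X**-3,
# where X is the basis-set cardinal number.
def cbs_two_point(e_x, e_y, x, y):
    """Correlation energies e_x, e_y computed with cardinal numbers x < y."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# hypothetical correlation energies (hartree) for X = 3 and X = 4 basis sets
print(cbs_two_point(-0.3101, -0.3175, 3, 4))   # estimated CBS-limit correlation energy
```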
NASA Astrophysics Data System (ADS)
Shevenell, Lisa
1999-03-01
Values of evapotranspiration are required for a variety of water planning activities in arid and semi-arid climates, yet data requirements are often large, and it is costly to obtain this information. This work presents a method where a few, readily available data (temperature, elevation) are required to estimate potential evapotranspiration (PET). A method using measured temperature and the calculated ratio of total to vertical radiation (after the work of Behnke and Maxey, 1969) to estimate monthly PET was applied for the months of April-October and compared with pan evaporation measurements. The test area used in this work was in Nevada, which has 124 weather stations that record sufficient amounts of temperature data. The calculated PET values were found to be well correlated (R² = 0.940-0.983, slopes near 1.0) with mean monthly pan evaporation measurements at eight weather stations. In order to extrapolate these calculated PET values to areas without temperature measurements and to sites at differing elevations, the state was divided into five regions based on latitude, and linear regressions of PET versus elevation were calculated for each of these regions. These extrapolated PET values generally compare well with the pan evaporation measurements (R² = 0.926-0.988, slopes near 1.0). The estimated values are generally somewhat lower than the pan measurements, in part because the effects of wind are not explicitly considered in the calculations, and near-freezing temperatures result in a calculated PET of zero at higher elevations in the spring months. The calculated PET values for April-October are 84-100% of the measured pan evaporation values. Using digital elevation models in a geographical information system, calculated values were adjusted for slope and aspect, and the data were used to construct a series of maps of monthly PET. The resultant maps show a realistic distribution of regional variations in PET throughout Nevada which inversely mimics topography. The general methods described here could be used to estimate regional PET in other arid western states (e.g. New Mexico, Arizona, Utah) and arid regions world-wide (e.g. parts of Africa).
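The extrapolation step reduces to a per-region linear regression of PET on elevation; a short sketch with invented numbers is given below.

```python
# Within one latitude region, regress calculated monthly PET on station elevation,
# then predict PET at unmonitored sites from elevation alone (hypothetical data).
import numpy as np

elev_m = np.array([1200.0, 1500.0, 1800.0, 2100.0, 2400.0])   # station elevations (m)
pet_mm = np.array([180.0, 165.0, 148.0, 133.0, 115.0])        # calculated July PET (mm)

slope, intercept = np.polyfit(elev_m, pet_mm, 1)

def pet_estimate(elevation_m):
    return slope * elevation_m + intercept

print(pet_estimate(2000.0))   # PET estimate for an ungauged site at 2000 m in the same region
```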
Scientific study of data analysis
NASA Technical Reports Server (NTRS)
Wu, S. T.
1990-01-01
We present a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized and the accuracy and numerical instability are discussed. On the basis of this investigation, we claim that the two methods do resemble each other qualitatively.
ERIC Educational Resources Information Center
Schroder, Peter C.
1994-01-01
Proposes the study of islands to develop a method of integrating sustainable development with sound resource management that can be extrapolated to more complex, highly populated continental coastal areas. (MDH)
NASA Astrophysics Data System (ADS)
Grein, C. H.; John, Sajeev
1989-01-01
The optical absorption coefficient for subgap electronic transitions in crystalline and disordered semiconductors is calculated by first-principles means with use of a variational principle based on the Feynman path-integral representation of the transition amplitude. This incorporates the synergetic interplay of static disorder and the nonadiabatic quantum dynamics of the coupled electron-phonon system. Over photon-energy ranges of experimental interest, this method predicts accurate linear exponential Urbach behavior of the absorption coefficient. At finite temperatures the nonlinear electron-phonon interaction gives rise to multiple phonon emission and absorption sidebands which accompany the optically induced electronic transition. These sidebands dominate the absorption in the Urbach regime and account for the temperature dependence of the Urbach slope and energy gap. The physical picture which emerges is that the phonons absorbed from the heat bath are then reemitted into a dynamical polaronlike potential well which localizes the electron. At zero temperature we recover the usual polaron theory. At high temperatures the calculated tail is qualitatively similar to that of a static Gaussian random potential. This leads to a linear relationship between the Urbach slope and the downshift of the extrapolated continuum band edge as well as a temperature-independent Urbach focus. At very low temperatures, deviations from these rules are predicted arising from the true quantum dynamics of the lattice. Excellent agreement is found with experimental data on c-Si, a-Si:H, a-As2Se3, and a-As2S3. Results are compared with a simple physical argument based on the most-probable-potential-well method.
Della Bona, Maria Luisa; Malvagia, Sabrina; Villanelli, Fabio; Giocaliere, Elisa; Ombrone, Daniela; Funghini, Silvia; Filippi, Luca; Cavallaro, Giacomo; Bagnoli, Paola; Guerrini, Renzo; la Marca, Giancarlo
2013-05-05
Propranolol, a non-selective beta blocker, is used in young infants and newborns for treating several heart diseases; its pharmacokinetics has been extensively evaluated in adult patients and extrapolated to treat the pediatric population. The purpose of the present study was to develop and validate a method to measure propranolol levels in dried blood spots. The analysis was performed using liquid chromatography/tandem mass spectrometry operating in multiple reaction monitoring mode. The calibration curve in matrix was linear in the concentration range of 2.5-200 μg/L with correlation coefficient r = 0.9996. Intra-day and inter-day precisions and biases were less than 8.0% (n=10) and 11.5% (n=10), respectively. The recoveries ranged from 94 to 100% and the matrix effect did not result in severe signal suppression. Propranolol on dried blood spots showed good stability at three different temperatures for one month. This paper describes a micromethod for measuring propranolol levels on dried blood spots, which offers a great advantage for neonates and young infants during pharmacokinetic studies because the sampling is less invasive and only a small blood volume is required. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lemal, Philipp; Geers, Christoph; Monnier, Christophe A.; Crippa, Federica; Daum, Leopold; Urban, Dominic A.; Rothen-Rutishauser, Barbara; Bonmarin, Mathias; Petri-Fink, Alke; Moore, Thomas L.
2017-04-01
Lock-in thermography (LIT) is a sensitive imaging technique generally used in engineering and materials science (e.g. detecting defects in composite materials). However, it has recently been expanded for investigating the heating power of nanomaterials, such as superparamagnetic iron oxide nanoparticles (SPIONs). Here we implement LIT as a rapid and reproducible method that can evaluate the heating potential of various sizes of SPIONs under an alternating magnetic field (AMF), as well as the limits of detection for each particle size. SPIONs were synthesized via thermal decomposition and stabilized in water via a ligand transfer process. Thermographic measurements of SPIONs were made by stimulating particles of varying sizes and increasing concentrations under an AMF. Furthermore, a commercially available SPION sample was included as an external reference. While the size dependent heating efficiency of SPIONs has been previously described, our objective was to probe the sensitivity limits of LIT. For certain size regimes it was possible to detect signals at concentrations as low as 0.1 mg Fe/mL. Measuring at different concentrations enabled a linear regression analysis and extrapolation of the limit of detection for different size nanoparticles.
Extrapolating cosmic ray variations and impacts on life: Morlet wavelet analysis
NASA Astrophysics Data System (ADS)
Zarrouk, N.; Bennaceur, R.
2009-07-01
Exposure to cosmic rays may have both a direct and an indirect effect on Earth's organisms. The radiation may lead to higher rates of genetic mutations in organisms, or interfere with their ability to repair DNA damage, potentially leading to diseases such as cancer. Increased cloud cover, which may cool the planet by blocking out more of the Sun's rays, is also associated with cosmic rays. They also interact with molecules in the atmosphere to create nitrogen oxide, a gas that eats away at our planet's ozone layer, which protects us from the Sun's harmful ultraviolet rays. On the ground, humans are protected from cosmic particles by the planet's atmosphere. In this paper we present estimates from a wavelet analysis of solar modulation and cosmic ray data incorporated in the time-dependent cosmic ray variation. Since solar activity can be described as a non-linear chaotic dynamic system, methods such as neural networks and wavelet methods should be very suitable analytical tools. We have therefore computed our results using Morlet wavelets, which have been widely used for studying solar activity, and we have analysed and reconstructed the cosmic ray variation to better depict periods or harmonics other than the 11-year solar modulation cycles.
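A self-contained sketch of a Morlet continuous wavelet transform applied to a synthetic, solar-cycle-like series is given below; it is illustrative only and is not the authors' analysis code or data.

```python
# Morlet continuous wavelet transform of a synthetic "cosmic-ray variation" time series.
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.pi**-0.25 * np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt_morlet(signal, dt, scales):
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # correlation with the conjugate wavelet, implemented as a convolution
        out[i] = np.convolve(signal, np.conj(morlet(t, s))[::-1], mode="same") * dt
    return out

# synthetic series: an 11-unit "solar-cycle-like" modulation plus noise
dt = 0.25                                   # years per sample (hypothetical)
t = np.arange(0, 60, dt)
series = np.cos(2 * np.pi * t / 11.0) + 0.3 * np.random.default_rng(1).normal(size=t.size)

scales = np.linspace(1.0, 20.0, 40)
power = np.abs(cwt_morlet(series, dt, scales))**2   # scalogram: rows = scales, cols = time
print(power.shape)
```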
Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.
Le, Guigao; Oulaid, Othmane; Zhang, Junfeng
2015-03-01
In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including the simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating the steady and unsteady convection-diffusion system with a flat interface and the steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with the respective analytical solutions. A more general system with unsteady convection-diffusion process and a curved interface, i.e., the cooling process of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that the cylinder with a larger heat capacity can release more heat energy into the fluid and the cylinder temperature cools down more slowly, while the enhanced heat conduction inside the cylinder can facilitate the cooling process of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiegelmann, T.; Solanki, S. K.; Barthol, P.
Magneto-static models may overcome some of the issues facing force-free magnetic field extrapolations. So far they have seen limited use and have faced problems when applied to quiet-Sun data. Here we present a first application to an active region. We use solar vector magnetic field measurements gathered by the IMaX polarimeter during the flight of the Sunrise balloon-borne solar observatory in 2013 June as boundary conditions for a magneto-static model of the higher solar atmosphere above an active region. The IMaX data are embedded in active region vector magnetograms observed with SDO/HMI. This work continues our magneto-static extrapolation approach, which was applied earlier to a quiet-Sun region observed with Sunrise I. In an active region the signal-to-noise ratio in the measured Stokes parameters is considerably higher than in the quiet Sun, and consequently the IMaX measurements of the horizontal photospheric magnetic field allow us to specify the free parameters of the model in a special class of linear magneto-static equilibria. The high spatial resolution of IMaX (110–130 km, pixel size 40 km) enables us to model the non-force-free layer between the photosphere and the mid-chromosphere vertically by about 50 grid points. In our approach we can incorporate some aspects of the mixed beta layer of photosphere and chromosphere, e.g., taking a finite Lorentz force into account, which was not possible with lower-resolution photospheric measurements in the past. The linear model does not, however, permit us to model intrinsic nonlinear structures like strongly localized electric currents.
The Electrostatic Instability for Realistic Pair Distributions in Blazar/EBL Cascades
NASA Astrophysics Data System (ADS)
Vafin, S.; Rafighi, I.; Pohl, M.; Niemiec, J.
2018-04-01
This work revisits the electrostatic instability for blazar-induced pair beams propagating through the intergalactic medium (IGM) using linear analysis and PIC simulations. We study the impact of the realistic distribution function of pairs resulting from the interaction of high-energy gamma-rays with the extragalactic background light. We present analytical and numerical calculations of the linear growth rate of the instability for the arbitrary orientation of wave vectors. Our results explicitly demonstrate that the finite angular spread of the beam dramatically affects the growth rate of the waves, leading to the fastest growth for wave vectors quasi-parallel to the beam direction and a growth rate at oblique directions that is only a factor of 2–4 smaller compared to the maximum. To study the nonlinear beam relaxation, we performed PIC simulations that take into account a realistic wide-energy distribution of beam particles. The parameters of the simulated beam-plasma system provide an adequate physical picture that can be extrapolated to realistic blazar-induced pairs. In our simulations, the beam loses only 1% of its energy, and we analytically estimate that the beam would lose its total energy over about 100 simulation times. An analytical scaling is then used to extrapolate the parameters of realistic blazar-induced pair beams. We find that they can dissipate their energy slightly faster by the electrostatic instability than through inverse-Compton scattering. The uncertainties arising from, e.g., details of the primary gamma-ray spectrum are too large to make firm statements for individual blazars, and an analysis based on their specific properties is required.
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.
2016-01-01
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
Cedergren, A
1974-06-01
A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. By using kinetic data it was possible to calculate the course of titrations and compare them with those found experimentally. The results prove that the main reaction is the slow step in both the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml; the amperometric method using extrapolation, 0.002(4) mg of water per ml; and the amperometric titration to a preselected diffusion current, 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric method and 4-5 min for the amperometric method using extrapolation.
A Unified Treatment of the Acoustic and Elastic Scattered Waves from Fluid-Elastic Media
NASA Astrophysics Data System (ADS)
Denis, Max Fernand
In this thesis, contributions are made to the numerical modeling of the scattering fields from fluid-filled poroelastic materials. Of particular interest are highly porous materials that demonstrate strong contrast to the saturating fluid. A Biot analysis of the porous medium serves as the starting point for the elastic-solid and pore-fluid governing equations of motion. The longitudinal scattering waves of the elastic-solid mode and the pore-fluid mode are modeled by the Kirchhoff-Helmholtz integral equation. The integral equation is evaluated using a series approximation, describing the successive perturbation of the material contrasts. To extend the series' validity into larger domains, rational fraction extrapolation methods are employed. The local Padé approximant procedure is a technique that allows one to extrapolate from a scattered field of small contrast into larger values, using Padé approximants. To ensure the accuracy of the numerical model, comparisons are made with the exact solution of scattering from a fluid sphere. Mean absolute error analyses yield convergent and accurate results. In addition, the numerical model correctly predicts the Bragg peaks for a periodic lattice of fluid spheres. In the case of trabecular bones, the far-field scattering pressure attenuation is a superposition of the elastic-solid mode and the pore-fluid mode generated waves from the surrounding fluid and poroelastic boundaries. The attenuation is linearly dependent on frequency between 0.2 and 0.6 MHz. The slope of the attenuation is nonlinear with porosity, and does not reflect the mechanical properties of the trabecular bone. The attenuation shows the anisotropic effects of the trabecular structure. Thus, ultrasound can possibly be employed to non-invasively predict the principal structural orientation of trabecular bones.
Prognosis of the state of health of a person under spaceflight conditions
NASA Technical Reports Server (NTRS)
1977-01-01
Methods of predicting the state of health and human efficiency during space flight are discussed. Diversity of reactions to the same conditions, development of extrapolation methods of prediction, and isolation of informative physiological indexes are among the factors considered.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
Error analysis regarding the calculation of nonlinear force-free field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.
2012-02-01
Magnetic field extrapolation is an alternative method to study chromospheric and coronal magnetic fields. In this paper, two semi-analytical solutions of force-free fields (Low and Lou in Astrophys. J. 352:343, 1990) have been used to study the errors of nonlinear force-free (NLFF) fields based on the force-free factor α. Three NLFF fields are extrapolated by the approximate vertical integration (AVI; Song et al. in Astrophys. J. 649:1084, 2006), boundary integral equation (BIE; Yan and Sakurai in Sol. Phys. 195:89, 2000) and optimization (Opt.; Wiegelmann in Sol. Phys. 219:87, 2004) methods. Compared with the first semi-analytical field, it is found that the mean values of the absolute relative standard deviations (RSD) of α along field lines are about 0.96-1.19, 0.63-1.07 and 0.43-0.72 for the AVI, BIE and Opt. fields, respectively, while for the second semi-analytical field they are about 0.80-1.02, 0.67-1.34 and 0.33-0.55, respectively. As for the analytical field, the calculation error of <|RSD|> is about 0.1-0.2. It is also found that RSD does not apparently depend on the length of the field line. These results provide a basic estimate of the deviation of the extrapolated fields obtained by the proposed methods from the real force-free field.
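A minimal sketch of the figure of merit described above, the absolute relative standard deviation of the force-free factor α sampled along a field line, averaged over a set of lines; the field-line sampling and the numbers are hypothetical.

import numpy as np

def alpha_rsd(alpha_along_line):
    # Absolute relative standard deviation of alpha sampled along one field
    # line: |std(alpha) / mean(alpha)|.
    alpha = np.asarray(alpha_along_line, dtype=float)
    return abs(np.std(alpha) / np.mean(alpha))

def mean_abs_rsd(field_lines_alpha):
    # Average the per-line RSD over a set of traced field lines.
    return np.mean([alpha_rsd(a) for a in field_lines_alpha])

# Hypothetical alpha values sampled along three traced field lines.
lines = [np.array([0.8, 0.9, 1.1, 1.0]),
         np.array([0.5, 0.7, 0.4]),
         np.array([1.5, 1.2, 1.6, 1.4, 1.3])]
print(mean_abs_rsd(lines))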
Benhaim, Deborah; Grushka, Eli
2008-10-31
In this study, we show that the addition of n-octanol to the mobile phase improves the chromatographic determination of lipophilicity parameters of xenobiotics (neutral solutes, acidic, neutral and basic drugs) on a Phenomenex Gemini C18 column. The Gemini C18 column is a new generation hybrid silica-based column with an extended pH range capability. The wide pH range (2-12) afforded the examination of basic drugs and acidic drugs in their neutral form. Extrapolated retention factor values, [Formula: see text] , obtained on the above column with the n-octanol-modified mobile phase were very well correlated (1:1 correlation) with literature values of logP (logarithm of the partition coefficient in n-octanol/water) of neutral compounds and neutral drugs (69). In addition, we found good linear correlations between measured [Formula: see text] values and calculated values of the logarithm of the distribution coefficient at pH 7.0 (logD(7.0)) for ionized acidic and basic drugs (r(2)=0.95). The Gemini C18 phase was characterized using the linear solvation energy relationship (LSER) model of Abraham. The LSER system constants for the column were compared to the LSER constants of n-octanol/water extraction system using the Tanaka radar plots. The comparison shows that the two methods are nearly equivalent.
Nonlinear dynamics support a linear population code in a retinal target-tracking circuit.
Leonardo, Anthony; Meister, Markus
2013-10-23
A basic task faced by the visual system of many organisms is to accurately track the position of moving prey. The retina is the first stage in the processing of such stimuli; the nature of the transformation here, from photons to spike trains, constrains not only the ultimate fidelity of the tracking signal but also the ease with which it can be extracted by other brain regions. Here we demonstrate that a population of fast-OFF ganglion cells in the salamander retina, whose dynamics are governed by a nonlinear circuit, serve to compute the future position of the target over hundreds of milliseconds. The extrapolated position of the target is not found by stimulus reconstruction but is instead computed by a weighted sum of ganglion cell outputs, the population vector average (PVA). The magnitude of PVA extrapolation varies systematically with target size, speed, and acceleration, such that large targets are tracked most accurately at high speeds, and small targets at low speeds, just as is seen in the motion of real prey. Tracking precision reaches the resolution of single photoreceptors, and the PVA algorithm performs more robustly than several alternative algorithms. If the salamander brain uses the fast-OFF cell circuit for target extrapolation as we suggest, the circuit dynamics should leave a microstructure on the behavior that may be measured in future experiments. Our analysis highlights the utility of simple computations that, while not globally optimal, are efficiently implemented and have close to optimal performance over a limited but ethologically relevant range of stimuli.
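A minimal sketch of a population vector average of the kind described, assuming each cell contributes its receptive-field center weighted by its firing rate; the names and numbers are hypothetical, and this is not the authors' exact estimator.

import numpy as np

def population_vector_average(rates, rf_centers):
    # Estimate target position as the firing-rate-weighted average of the
    # cells' receptive-field centers (a population vector average).
    rates = np.asarray(rates, dtype=float)
    rf_centers = np.asarray(rf_centers, dtype=float)
    return (rates[:, None] * rf_centers).sum(axis=0) / rates.sum()

# Hypothetical example: five cells with 1-D receptive-field centers (microns)
# and instantaneous firing rates (Hz); a response profile skewed along the
# motion direction makes the PVA lead the target's current position.
centers = np.array([[0.0], [50.0], [100.0], [150.0], [200.0]])
rates = np.array([2.0, 5.0, 12.0, 20.0, 9.0])
print(population_vector_average(rates, centers))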
Can we detect a nonlinear response to temperature in European plant phenology?
NASA Astrophysics Data System (ADS)
Jochner, Susanne; Sparks, Tim H.; Laube, Julia; Menzel, Annette
2016-10-01
Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 (flowering of hazel) and -2.7 days °C⁻¹ (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ~14 % of the station time series (where a comparison between linear and nonlinear model was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might be still sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data of very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.
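The linear-versus-sigmoidal comparison can be sketched as follows; the sigmoid form, starting values, and station data are illustrative assumptions, not the PEP725 analysis itself.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def sigmoid(t, onset_min, onset_max, t50, slope):
    # Sigmoidal response with definite upper and lower limits to onset dates.
    return onset_min + (onset_max - onset_min) / (1.0 + np.exp(-(t - t50) / slope))

# Hypothetical station series: mean spring temperature (degC) vs onset day of year.
temp = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0])
onset = np.array([132, 131, 130, 127, 122, 117, 113, 110, 108, 107, 106, 106])

lin = linregress(temp, onset)                       # linear response rate (days/degC)
popt, _ = curve_fit(sigmoid, temp, onset,
                    p0=[105.0, 133.0, 6.5, -2.0], maxfev=10000)

rss_lin = np.sum((onset - (lin.intercept + lin.slope * temp)) ** 2)
rss_sig = np.sum((onset - sigmoid(temp, *popt)) ** 2)

# Compare with AIC (k = number of fitted parameters); smaller is better.
n = len(onset)
aic = lambda rss, k: n * np.log(rss / n) + 2 * k
print(lin.slope, aic(rss_lin, 2), aic(rss_sig, 4))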
MEGA16 - Computer program for analysis and extrapolation of stress-rupture data
NASA Technical Reports Server (NTRS)
Ensign, C. R.
1981-01-01
The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN 4 using IBM plotting subroutines and it runs on an IBM 370 time-sharing system.
The use of extrapolation concepts to augment the Frequency Separation Technique
NASA Astrophysics Data System (ADS)
Alexiou, Spiros
2015-03-01
The Frequency Separation Technique (FST) is a general method formulated to improve the speed and/or accuracy of lineshape calculations, including strong overlapping collisions, as is the case for ion dynamics. It should be most useful when combined with ultrafast methods, which, however, have significant difficulties when the impact regime is approached. These difficulties are addressed by the Frequency Separation Technique, in which the impact limit is correctly recovered. The present work examines the possibility of combining the Frequency Separation Technique with extrapolation to improve results and minimize errors resulting from the neglect of fast-slow coupling, and thus obtain the exact result with a minimum of extra effort. To this end, the adequacy of one such ultrafast method, the Frequency Fluctuation Method (FFM), for treating the nonimpact part is examined. It is found that although the FFM is unable to reproduce the nonimpact profile correctly, its coupling with the FST correctly reproduces the total profile.
Landsat Thematic Mapper monitoring of turbid inland water quality
NASA Technical Reports Server (NTRS)
Lathrop, Richard G., Jr.
1992-01-01
This study reports on an investigation of water quality calibration algorithms under turbid inland water conditions using Landsat Thematic Mapper (TM) multispectral digital data. TM data and water quality observations (total suspended solids and Secchi disk depth) were obtained near-simultaneously and related using linear regression techniques. The relationships between reflectance and water quality for Green Bay and Lake Michigan were compared with results for Yellowstone and Jackson Lakes, Wyoming. Results show similarities in the water quality-reflectance relationships; however, the algorithms derived for Green Bay - Lake Michigan cannot be extrapolated to Yellowstone and Jackson Lake conditions.
Controlled experiments in cosmological gravitational clustering
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
A systematic study is conducted of gravitational instability in 3D on the basis of power-law initial spectra with and without spectral cutoff, emphasizing nonlinear effects and measures of nonlinearity; effects due to short and long waves in the initial conditions are separated. The existence of second-generation pancakes is confirmed, and it is noted that while these are inhomogeneous, they generate a visually strong signal of filamentarity. An explicit comparison of smoothed initial conditions with smoothed envelope models also reconfirms the need to smooth over a scale larger than any nonlinearity, in order to extrapolate directly by linear theory from Gaussian initial conditions.
Measurements and predictions of the 6s6p ¹,³P₁ lifetimes in the Hg isoelectronic sequence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L. J.; Irving, R. E.; Henderson, M.
2001-04-01
Experimental and theoretical values for the lifetimes of the 6s6p ¹P₁ and ³P₁ levels in the Hg isoelectronic sequence are examined in the context of a data-based isoelectronic systematization. New beam-foil measurements for lifetimes in Pb III and Bi IV are reported and included in a critical evaluation of the available database. These results are combined with ab initio theoretical calculations and linearizing parametrizations to make predictive extrapolations for ions with 84 ≤ Z ≤ 92.
Sputtering of cobalt and chromium by argon and xenon ions near the threshold energy region
NASA Technical Reports Server (NTRS)
Handoo, A. K.; Ray, P. K.
1993-01-01
Sputtering yields of cobalt and chromium by argon and xenon ions with energies below 50 eV are reported. The targets were electroplated on copper substrates. Measurable sputtering yields were obtained from cobalt with ion energies as low as 10 eV. The ion beams were produced by an ion gun. A radioactive tracer technique was used for the quantitative measurement of the sputtering yield. Co-57 and Cr-51 were used as tracers. The yield-energy curves are observed to be concave, which brings into question the practice of finding threshold energies by linear extrapolation.
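A minimal sketch of the questioned practice, finding a threshold energy by linear extrapolation of the yield-energy curve to zero yield; the data values are hypothetical.

import numpy as np

# Hypothetical near-threshold sputtering yields (atoms/ion) vs ion energy (eV).
energy = np.array([20.0, 30.0, 40.0, 50.0])
sputter_yield = np.array([0.002, 0.006, 0.011, 0.015])

# Straight-line fit; the energy intercept (yield = 0) is taken as the "threshold".
slope, intercept = np.polyfit(energy, sputter_yield, 1)
threshold = -intercept / slope
print(f"extrapolated threshold: {threshold:.1f} eV")
# A concave yield-energy curve, as reported here, makes this intercept depend
# on the fitted energy window, which is why the practice is questioned.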
Galvão, B R L; Rodrigues, S P J; Varandas, A J C
2008-07-28
A global ab initio potential energy surface is proposed for the water molecule by energy-switching/merging a highly accurate isotope-dependent local potential function reported by Polyansky et al. [Science 299, 539 (2003)] with a global form of the many-body expansion type suitably adapted to account explicitly for the dynamical correlation and parametrized from extensive accurate multireference configuration interaction energies extrapolated to the complete basis set limit. The new function mimics also the complicated Sigma/Pi crossing that arises at linear geometries of the water molecule.
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
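One plausible reading of the non-linear motion model, extrapolating a fourth pixel position from three observed positions with a constant-acceleration term, is sketched below; this is an assumption for illustration, not the patented method.

import numpy as np

def extrapolate_fourth_position(p1, p2, p3):
    # Given a pixel's positions in three successive frames, extrapolate its
    # position in a fourth frame with a constant-acceleration (quadratic) model.
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    v12 = p2 - p1            # first motion vector
    v23 = p3 - p2            # second motion vector
    accel = v23 - v12        # change of velocity between frames
    return p3 + v23 + accel  # linear prediction plus acceleration term

print(extrapolate_fourth_position([10, 5], [14, 7], [20, 10]))  # -> [28. 14.]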
Effective orthorhombic anisotropic models for wavefield extrapolation
NASA Astrophysics Data System (ADS)
Ibanez-Jacome, Wilson; Alkhalifah, Tariq; Waheed, Umair bin
2014-09-01
Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth order polynomial equation with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.
Gamalo-Siebers, Margaret; Savic, Jasmina; Basu, Cynthia; Zhao, Xin; Gopalakrishnan, Mathangi; Gao, Aijun; Song, Guochen; Baygani, Simin; Thompson, Laura; Xia, H Amy; Price, Karen; Tiwari, Ram; Carlin, Bradley P
2017-07-01
Children represent a large underserved population of "therapeutic orphans," as an estimated 80% of children are treated off-label. However, pediatric drug development often faces substantial challenges, including economic, logistical, technical, and ethical barriers, among others. Among many efforts trying to remove these barriers, increased recent attention has been paid to extrapolation; that is, the leveraging of available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or "borrowing") of information across disparate sources, such as the adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through 2 case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis and extrapolating adult exposure-response information for antiepileptic drugs to pediatrics. Copyright © 2017 John Wiley & Sons, Ltd.
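A minimal sketch of one standard Bayesian borrowing device of the kind the paper surveys, a power prior that down-weights adult data when estimating a pediatric response rate; the weight and trial numbers are hypothetical.

import numpy as np
from scipy.stats import beta

# Hypothetical adult and pediatric trial results (responders / enrolled).
adult_x, adult_n = 60, 100
ped_x, ped_n = 8, 15
w = 0.5  # power-prior weight: 0 = no borrowing, 1 = pool adult data fully

# Beta(1, 1) initial prior; adult data enter the prior down-weighted by w.
prior_a = 1 + w * adult_x
prior_b = 1 + w * (adult_n - adult_x)

# Conjugate update with the pediatric data.
post = beta(prior_a + ped_x, prior_b + (ped_n - ped_x))
print(post.mean(), post.interval(0.95))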
Challenges of accelerated aging techniques for elastomer lifetime predictions
Gillen, Kenneth T.; Bernstein, R.; Celina, M.
2015-03-01
Elastomers are often degraded when exposed to air or high humidity for extended times (years to decades). Lifetime estimates normally involve extrapolating accelerated aging results made at higher than ambient environments. Several potential problems associated with such studies are reviewed, and experimental and theoretical methods to address them are provided. The importance of verifying time–temperature superposition of degradation data is emphasized as evidence that the overall nature of the degradation process remains unchanged versus acceleration temperature. The confounding effects that occur when diffusion-limited oxidation (DLO) contributes under accelerated conditions are described, and it is shown that the DLO magnitude can be modeled by measurements or estimates of the oxygen permeability coefficient (P Ox) and oxygen consumption rate (Φ). P Ox and Φ measurements can be influenced by DLO, and it is demonstrated how confident values can be derived. In addition, several experimental profiling techniques that screen for DLO effects are discussed. Values of Φ taken from high temperature to temperatures approaching ambient can be used to more confidently extrapolate accelerated aging results for air-aged materials, and many studies now show that Arrhenius extrapolations bend to lower activation energies as aging temperatures are lowered. Furthermore, best approaches for accelerated aging extrapolations of humidity-exposed materials are also offered.
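A minimal sketch of the standard Arrhenius extrapolation that such lifetime studies start from (not the DLO-corrected analysis described above); the rate values and temperatures are hypothetical.

import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical oxygen-consumption rates measured at accelerated temperatures.
T_K = np.array([383.0, 373.0, 363.0, 353.0])          # roughly 110-80 degC
rate = np.array([4.0e-9, 2.1e-9, 1.0e-9, 4.6e-10])    # mol/(g s)

# Arrhenius fit: ln(rate) = ln(A) - Ea/(R*T).
slope, lnA = np.polyfit(1.0 / T_K, np.log(rate), 1)
Ea = -slope * R
rate_ambient = np.exp(lnA + slope / 298.15)
print(f"Ea = {Ea/1000:.0f} kJ/mol, extrapolated rate at 25 degC = {rate_ambient:.2e}")
# Caution: as the study notes, Arrhenius plots can bend to lower activation
# energies near ambient, so a straight-line extrapolation from high
# temperatures can underestimate the ambient degradation rate.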
Umari, P; Marzari, Nicola
2009-09-07
We calculate the linear and nonlinear susceptibilities of periodic longitudinal chains of hydrogen dimers with different bond-length alternations using a diffusion quantum Monte Carlo approach. These quantities are derived from the changes in electronic polarization as a function of applied finite electric field--an approach we recently introduced and made possible by the use of a Berry-phase, many-body electric-enthalpy functional. Calculated susceptibilities and hypersusceptibilities are found to be in excellent agreement with the best estimates available from quantum chemistry--usually extrapolations to the infinite-chain limit of calculations for chains of finite length. It is found that while exchange effects dominate the proper description of the susceptibilities, second hypersusceptibilities are greatly affected by electronic correlations. We also assess how different approximations to the nodal surface of the many-body wave function affect the accuracy of the calculated susceptibilities.
Tidal evolution of the Galilean satellites - A linearized theory
NASA Technical Reports Server (NTRS)
Greenberg, R.
1981-01-01
The Laplace resonance among the Galilean satellites Io, Europa, and Ganymede is traditionally reduced to a pendulum-like dynamical problem by neglecting short-period variations of several orbital elements. However, some of these variations that can now be neglected may once have had longer periods, comparable to the 'pendulum' period, if the system was formerly in deep resonance (pairs of periods even closer to the ratio 2:1 than they are now). In that case, the dynamical system cannot be reduced to fewer than nine dimensions. The nine-dimensional system is linearized here in order to study small variations about equilibrium. When tidal effects are included, the resulting evolution is substantially the same as was indicated by the pendulum approach, except that evolution out of deep resonance is found to be somewhat slower than suggested by extrapolation of the pendulum results. This slower rate helps support the hypothesis that the system may have evolved from deep resonance.
Hysteresis between coral reef calcification and the seawater aragonite saturation state
NASA Astrophysics Data System (ADS)
McMahon, Ashly; Santos, Isaac R.; Cyronak, Tyler; Eyre, Bradley D.
2013-09-01
Predictions of how ocean acidification (OA) will affect coral reefs assume a linear functional relationship between the ambient seawater aragonite saturation state (Ωa) and net ecosystem calcification (NEC). We quantified NEC in a healthy coral reef lagoon in the Great Barrier Reef during different times of the day. Our observations revealed a diel hysteresis pattern in the NEC versus Ωa relationship, with peak NEC rates occurring before the Ωa peak and relatively steady nighttime NEC in spite of variable Ωa. Net ecosystem production had stronger correlations with NEC than light, temperature, nutrients, pH, and Ωa. The observed hysteresis may represent an overlooked challenge for predicting the effects of OA on coral reefs. If widespread, the hysteresis could prevent the use of a linear extrapolation to determine critical Ωa threshold levels required to shift coral reefs from a net calcifying to a net dissolving state.
[Medical and biological consequences of nuclear disasters].
Stalpers, Lukas J A; van Dullemen, Simon; Franken, N A P Klaas
2012-01-01
Medical risks of radiation exaggerated; psychological risks underestimated. The discussion about atomic energy has become topical again following the nuclear accident in Fukushima. There is some argument about the gravity of medical and biological consequences of prolonged exposure to radiation. The risk of cancer following a low dose of radiation is usually estimated by linear extrapolation of the incidence of cancer among survivors of the atomic bombs dropped on Hiroshima and Nagasaki in 1945. The radiobiological linear-quadratic model (LQ-model) gives a more accurate description of observed data, is radiobiologically more plausible and is better supported by experimental and clinical data. On the basis of this model there is less risk of cancer being induced following radiation exposure. The gravest consequence of Chernobyl and Fukushima is not the medical and biological damage, but the psychological and economical impact on rescue workers and former inhabitants.
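The difference between a purely linear extrapolation and the linear-quadratic (LQ) model can be illustrated with hypothetical coefficients; alpha, beta, and the reference dose below are assumptions for illustration, not fitted values.

# Compare a purely linear extrapolation of radiation effect with the
# linear-quadratic (LQ) model, alpha*D + beta*D^2; both are anchored at a
# high reference dose where the risk is assumed known.
alpha, beta = 0.02, 0.03      # per Gy and per Gy^2 (illustrative)
D_ref = 2.0                   # high reference dose (Gy)

effect_ref = alpha * D_ref + beta * D_ref**2

for D in (0.01, 0.1, 0.5):
    linear = effect_ref * D / D_ref      # straight-line scaling to low dose
    lq = alpha * D + beta * D**2         # LQ prediction at the same dose
    print(f"D={D:>4} Gy  linear={linear:.5f}  LQ={lq:.5f}  ratio={lq/linear:.2f}")
# At low doses the LQ prediction falls below the linear extrapolation, which
# is the qualitative point made in the abstract.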
Applying Occam's Razor To The Proton Radius Puzzle
NASA Astrophysics Data System (ADS)
Higinbotham, Douglas
2016-09-01
Over the past five decades, ever more complex mathematical functions have been used to extract the radius of the proton from electron scattering data. For example, in 1963 the proton radius was extracted with linear and quadratic fits of low-Q² data (< 3 fm⁻²), while by 2014 a non-linear regression of two tenth-order power series functions with thirty-one normalization parameters and data out to 25 fm⁻² was used. But for electron scattering, the radius of the proton is determined by extracting the slope of the charge form factor at a Q² of zero. By using higher-precision data than was available in 1963 and focusing on the low-Q² data from 1974 to today, we find that extrapolating functions consistently produce a proton radius of around 0.84 fm, a result that is in agreement with modern Lamb shift measurements.
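A minimal sketch of the low-Q² extraction described: fit the charge form factor with a low-order polynomial and take the proton radius from its slope at Q² = 0 via r_p² = -6 dG_E/dQ². The data points below are hypothetical, dipole-like values, so the printed radius is only illustrative.

import numpy as np

HBARC = 0.1973269804  # GeV*fm, converts a GeV^-2 slope to fm^2

# Hypothetical low-Q^2 charge form-factor data (Q^2 in GeV^2).
Q2 = np.array([0.004, 0.008, 0.015, 0.025, 0.040, 0.060])
GE = np.array([0.9888, 0.9778, 0.9590, 0.9331, 0.8962, 0.8502])

# Quadratic fit in Q^2; the linear coefficient is dG_E/dQ^2 at Q^2 = 0.
c2, c1, c0 = np.polyfit(Q2, GE, 2)
r2_fm2 = -6.0 * c1 * HBARC**2
print(f"r_p = {np.sqrt(r2_fm2):.3f} fm")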
Image enhancement by non-linear extrapolation in frequency space
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)
1998-01-01
An input image is enhanced to include spatial frequency components having frequencies higher than those in an input image. To this end, an edge map is generated from the input image using a high band pass filtering technique. An enhancing map is subsequently generated from the edge map, with the enhanced map having spatial frequencies exceeding an initial maximum spatial frequency of the input image. The enhanced map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhanced map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
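A rough sketch of the idea: a high-pass edge map passed through a simple non-linear operator and added back to the image. The Gaussian filter, clipping operator, and gain are assumptions for illustration, not the patented implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(image, sigma=1.0, clip=0.05, gain=2.0):
    # Sharpen by adding a non-linearly processed edge map back to the image.
    # Clipping keeps the edge polarity (phase) while boosting high frequencies.
    img = image.astype(float)
    edge_map = img - gaussian_filter(img, sigma)          # high-pass (edge) map
    enhanced_map = gain * np.clip(edge_map, -clip, clip)  # non-linear operator
    return np.clip(img + enhanced_map, 0.0, 1.0)

# Hypothetical usage on a small synthetic step edge, pixel values in [0, 1].
test = np.zeros((8, 8))
test[:, 4:] = 1.0
print(enhance(test)[3])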
Visual memory transformations in dyslexia.
Barnes, James; Hinkley, Lisa; Masters, Stuart; Boubert, Laura
2007-06-01
Representational Momentum refers to observers' distortion of recognition memory for pictures that imply motion because of an automatic mental process which extrapolates along the implied trajectory of the picture. Neuroimaging evidence suggests that activity in the magnocellular visual pathway is necessary for representational momentum to occur. It has been proposed that individuals with dyslexia have a magnocellular deficit, so it was hypothesised that these individuals would show reduced or absent representational momentum. In this study, 30 adults with dyslexia and 30 age-matched controls were compared on two tasks, one linear and one rotation, which had previously elicited the representational momentum effect. Analysis indicated significant differences in the performance of the two groups, with the dyslexia group having a reduced susceptibility to representational momentum in both linear and rotational directions. The findings highlight that deficits in temporal spatial processing may contribute to the perceptual profile of dyslexia.
NASA Technical Reports Server (NTRS)
Banyukevich, A.; Ziolkovski, K.
1975-01-01
A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
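The Bulirsch-Stoer method mentioned above rests on Richardson extrapolation of results computed with successively smaller steps; below is a minimal sketch of that idea (not the full algorithm) applied to an explicit midpoint integrator, with a simple test problem as assumption.

import numpy as np

def midpoint_integrate(f, y0, t0, t1, n):
    # Fixed-step explicit midpoint method for dy/dt = f(t, y).
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k = f(t + 0.5 * h, y + 0.5 * h * f(t, y))
        y, t = y + h * k, t + h
    return y

def richardson(f, y0, t0, t1, n_levels=4):
    # Extrapolate midpoint results to step size zero (error expansion in h^2).
    ns = [2 * 2**k for k in range(n_levels)]
    T = [[midpoint_integrate(f, y0, t0, t1, n)] for n in ns]
    for i in range(1, n_levels):
        for j in range(1, i + 1):
            ratio = (ns[i] / ns[i - j]) ** 2
            T[i].append(T[i][j - 1] + (T[i][j - 1] - T[i - 1][j - 1]) / (ratio - 1))
    return T[-1][-1]

# Example: dy/dt = y, y(0) = 1, exact y(1) = e.
print(richardson(lambda t, y: y, 1.0, 0.0, 1.0), np.exp(1.0))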
Calculation of Temperature Rise in Calorimetry.
ERIC Educational Resources Information Center
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
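A minimal sketch of the extrapolation-to-zero-time idea mentioned: fit the pre- and post-reaction temperature drifts with straight lines and evaluate both at the mixing time to obtain the corrected temperature rise; all data values are hypothetical.

import numpy as np

# Hypothetical calorimeter record: time (min) and temperature (degC).
t_pre = np.array([0.0, 1.0, 2.0, 3.0])
T_pre = np.array([25.00, 25.01, 25.02, 25.03])
t_post = np.array([5.0, 6.0, 7.0, 8.0])
T_post = np.array([27.45, 27.42, 27.39, 27.36])
t_mix = 3.5  # time of mixing / ignition

# Fit the linear drifts before and after the reaction, extrapolate both to t_mix.
pre = np.polyfit(t_pre, T_pre, 1)
post = np.polyfit(t_post, T_post, 1)
delta_T = np.polyval(post, t_mix) - np.polyval(pre, t_mix)
print(f"corrected temperature rise: {delta_T:.3f} degC")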
Linear and Non-Linear Dielectric Response of Periodic Systems from Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Umari, Paolo
2006-03-01
We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wavefunction, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence. The polarization is sampled through forward-walking. This approach has been validated for the case of the polarizability of an isolated hydrogen atom, and then applied to a periodic system. We then calculate the linear susceptibility and second-order hyper-susceptibility of molecular-hydrogen chains with different bond-length alternations, and assess the quality of nodal surfaces derived from density-functional theory or from Hartree-Fock. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations. [P. Umari, A. J. Williamson, G. Galli, and N. Marzari, Phys. Rev. Lett. 95, 207602 (2005).]
NASA Astrophysics Data System (ADS)
Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr
2017-02-01
An evaluation of convective cloud forecasts performed with the numerical weather prediction (NWP) model COSMO and with extrapolation of cloud fields is presented using observed data derived from the geostationary satellite Meteosat Second Generation (MSG). The present study focuses on the nowcasting range (1-5 h) for five severe convective storms in their developing stage that occurred during the warm season in the years 2012-2013. Radar reflectivity and extrapolated radar reflectivity data were assimilated for at least 6 h depending on the time of occurrence of convection. Synthetic satellite imageries were calculated using the radiative transfer model RTTOV v10.2, which was implemented into the COSMO model. NWP model simulations of IR10.8 μm and WV06.2 μm brightness temperatures (BTs) with a horizontal resolution of 2.8 km were interpolated into the satellite projection and objectively verified against observations using Root Mean Square Error (RMSE), correlation coefficient (CORR) and Fractions Skill Score (FSS) values. Naturally, the extrapolation of cloud fields yielded an approximately 25% lower RMSE, 20% higher CORR and 15% higher FSS at the beginning of the second forecasted hour compared to the NWP model forecasts. On the other hand, comparable scores were observed for the third hour, whereas the NWP forecasts outperformed the extrapolation by 10% for RMSE, 15% for CORR and up to 15% for FSS during the fourth forecasted hour and by 15% for RMSE, 27% for CORR and up to 15% for FSS during the fifth forecasted hour. The analysis was completed by a verification of the precipitation forecasts, yielding an approximately 8% higher RMSE, 15% higher CORR and up to 45% higher FSS when the NWP model simulation is used compared to the extrapolation for the first hour. Both methods yielded an unsatisfactory level of precipitation forecast accuracy from the fourth forecasted hour onward.
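A minimal sketch of the Fractions Skill Score used in this verification, computed for a single neighbourhood size from binary exceedance fields (the Roberts and Lean 2008 formulation); the threshold, window size, and fields below are hypothetical.

import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold, window):
    # Compare neighbourhood event fractions of forecast and observation.
    f_bin = (forecast >= threshold).astype(float)
    o_bin = (observed >= threshold).astype(float)
    f_frac = uniform_filter(f_bin, size=window)
    o_frac = uniform_filter(o_bin, size=window)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Hypothetical hourly precipitation fields (mm); event = rain >= 1 mm.
rng = np.random.default_rng(0)
obs = rng.gamma(0.5, 2.0, (100, 100))
fcst = np.roll(obs, 5, axis=1)           # forecast displaced by a few pixels
print(fractions_skill_score(fcst, obs, threshold=1.0, window=9))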
ERIC Educational Resources Information Center
Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael
2017-01-01
The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
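A generic Monte Carlo sketch of the kind of business application the article targets; the profit model and the probability distributions are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical business problem: profit = demand * (price - unit_cost) - fixed_cost,
# with uncertain demand and unit cost described by probability distributions.
demand = rng.normal(10_000, 2_000, n).clip(min=0)
unit_cost = rng.triangular(4.0, 5.0, 7.0, n)
profit = demand * (12.0 - unit_cost) - 40_000

print("mean profit:", profit.mean().round(0))
print("P(loss):", (profit < 0).mean().round(3))
print("5th-95th percentile:", np.percentile(profit, [5, 95]).round(0))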
NASA Astrophysics Data System (ADS)
Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry
2017-06-01
We present a methodology to invert seismic data for a localized area by combining a source-side wavefield injection and a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This can be much more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Besides, changes of the structure during time-lapse surveys are likely to occur in a small area rather than the whole region of the seismic experiment, such as an oil and gas reservoir or a CO2 injection well. We thus propose an approach that allows us to image effectively and quantitatively the localized structure changes far deep from both source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we look for the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using the correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate errors in the obtained models, in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate, even in the presence of errors in both initial models and observed data.
Casting the Coronal Magnetic Field Reconstruction Tools in 3D Using the MHD Bifrost Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleishman, Gregory D.; Loukitcheva, Maria; Anfinogentov, Sergey
Quantifying the coronal magnetic field remains a central problem in solar physics. Nowadays, the coronal magnetic field is often modeled using nonlinear force-free field (NLFFF) reconstructions, whose accuracy has not yet been comprehensively assessed. Here we perform a detailed casting of the NLFFF reconstruction tools, such as π-disambiguation, photospheric field preprocessing, and volume reconstruction methods, using a 3D snapshot of the publicly available full-fledged radiative MHD model. Specifically, from the MHD model, we know the magnetic field vector in the entire 3D domain, which enables us to perform a “voxel-by-voxel” comparison of the restored and the true magnetic fields in the 3D model volume. Our tests show that the available π-disambiguation methods often fail in the quiet-Sun areas dominated by small-scale magnetic elements, while they work well in the active region (AR) photosphere and (even better) chromosphere. The preprocessing of the photospheric magnetic field, although it does produce a more force-free boundary condition, also results in some effective “elevation” of the magnetic field components. This “elevation” height is different for the longitudinal and transverse components, which results in a systematic error in absolute heights in the reconstructed magnetic data cube. The extrapolations performed starting from the actual AR photospheric magnetogram are free from this systematic error, while other metrics are comparable with those for extrapolations from the preprocessed magnetograms. This finding favors the use of extrapolations from the original photospheric magnetogram without preprocessing. Our tests further suggest that extrapolations from a force-free chromospheric boundary produce measurably better results than those from a photospheric boundary.
Acute toxicity value extrapolation with fish and aquatic invertebrates
Buckler, Denny R.; Mayer, Foster L.; Ellersieck, Mark R.; Asfaw, Amha
2005-01-01
Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled “Ecological Risk Analysis” (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive to contaminants than fish are.
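The single-species extrapolation factors reported above can be applied directly; the factor values come from the abstract, while the example LC50 and the function name are hypothetical.

# Single-species extrapolation factors reported in the abstract: multiply a
# species' acute LC50 to estimate a value protective of more sensitive taxa.
EXTRAPOLATION_FACTORS = {
    "rainbow_trout": 0.412,
    "bluegill": 0.331,
    "scud": 0.041,
}

def protective_estimate(lc50_ug_per_l, species):
    # Scale a single-species acute LC50 by its extrapolation factor.
    return lc50_ug_per_l * EXTRAPOLATION_FACTORS[species]

# Hypothetical measured acute LC50 of 120 ug/L for rainbow trout.
print(protective_estimate(120.0, "rainbow_trout"))  # -> 49.44 ug/L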
Signal-Processing Algorithm Development for the ACLAIM Sensor
NASA Technical Reports Server (NTRS)
vonLaven, Scott
1995-01-01
Methods for further minimizing the risk by making use of previous lidar observations were investigated. EOFs are likely to play an important role in these methods, and a procedure for extracting EOFs from data has been implemented. The new processing methods involving EOFs could range from extrapolation, as discussed, to more complicated statistical procedures for maintaining low unstart risk.
The forecast for RAC extrapolation: mostly cloudy.
Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie
2011-09-01
The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers not only should understand the statutory and regulatory basis for extrapolation, but also should be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc
2013-12-01
The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factor used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those for weekend days.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latychevskaia, Tatiana; Fink, Hans-Werner
Previously reported crystalline structures obtained by an iterative phase retrieval reconstruction of their diffraction patterns seem to be free from displaying any irregularities or defects in the lattice, which appears to be unrealistic. We demonstrate here that the structure of a nanocrystal, including its atomic defects, can unambiguously be recovered from its diffraction pattern alone by applying a direct phase retrieval procedure not relying on prior information of the object shape. Individual point defects in the atomic lattice are clearly apparent. Conventional phase retrieval routines assume isotropic scattering. We show that when dealing with electrons, the quantitatively correct transmission function of the sample cannot be retrieved due to anisotropic, strong forward scattering specific to electrons. We summarize the conditions for this phase retrieval method and show that the diffraction pattern can be extrapolated beyond the original record to reveal even formerly invisible Bragg peaks. Such an extrapolated wave field pattern leads to enhanced spatial resolution in the reconstruction.
Cardiac Iron Determines Cardiac T2*, T2, and T1 in the Gerbil Model of Iron Cardiomyopathy
Wood, John C.; Otto-Duessel, Maya; Aguilar, Michelle; Nick, Hanspeter; Nelson, Marvin D.; Coates, Thomas D.; Pollack, Harvey; Moats, Rex
2010-01-01
Background: Transfusional therapy for thalassemia major and sickle cell disease can lead to iron deposition and damage to the heart, liver, and endocrine organs. Iron causes the MRI parameters T1, T2, and T2* to shorten in these organs, which creates a potential mechanism for iron quantification. However, because of the danger and variability of cardiac biopsy, tissue validation of cardiac iron estimates by MRI has not been performed. In this study, we demonstrate that iron produces similar T1, T2, and T2* changes in the heart and liver using a gerbil iron-overload model. Methods and Results: Twelve gerbils underwent iron dextran loading (200 mg · kg−1 · wk−1) from 2 to 14 weeks; 5 age-matched controls were studied as well. Animals had in vivo assessment of cardiac T2* and hepatic T2 and T2* and postmortem assessment of cardiac and hepatic T1 and T2. Relaxation measurements were performed in a clinical 1.5-T magnet and a 60-MHz nuclear magnetic resonance relaxometer. Cardiac and liver iron concentrations rose linearly with administered dose. Cardiac 1/T2*, 1/T2, and 1/T1 rose linearly with cardiac iron concentration. Liver 1/T2*, 1/T2, and 1/T1 also rose linearly, proportional to hepatic iron concentration. Liver and heart calibrations were similar on a dry-weight basis. Conclusions: MRI measurements of cardiac T2 and T2* can be used to quantify cardiac iron. The similarity of liver and cardiac iron calibration curves in the gerbil suggests that extrapolation of human liver calibration curves to heart may be a rational approximation in humans. PMID:16027257
Approach for extrapolating in vitro metabolism data to refine bioconcentration factor estimates.
Cowan-Ellsberry, Christina E; Dyer, Scott D; Erhardt, Susan; Bernhard, Mary Jo; Roe, Amy L; Dowty, Martin E; Weisbrod, Annie V
2008-02-01
National and international chemical management programs are assessing thousands of chemicals for their persistence, bioaccumulative and environmental toxic properties; however, data for evaluating the bioaccumulation potential for fish are limited. Computer based models that account for the uptake and elimination processes that contribute to bioaccumulation may help to meet the need for reliable estimates. One critical elimination process of chemicals is metabolic transformation. It has been suggested that in vitro metabolic transformation tests using fish liver hepatocytes or S9 fractions can provide rapid and cost-effective measurements of fish metabolic potential, which could be used to refine bioconcentration factor (BCF) computer model estimates. Therefore, recent activity has focused on developing in vitro methods to measure metabolic transformation in cellular and subcellular fish liver fractions. A method to extrapolate in vitro test data to the whole body metabolic transformation rates is presented that could be used to refine BCF computer model estimates. This extrapolation approach is based on concepts used to determine the fate and distribution of drugs within the human body which have successfully supported the development of new pharmaceuticals for years. In addition, this approach has already been applied in physiologically-based toxicokinetic models for fish. The validity of the in vitro to in vivo extrapolation is illustrated using the rate of loss of parent chemical measured in two independent in vitro test systems: (1) subcellular enzymatic test using the trout liver S9 fraction, and (2) primary hepatocytes isolated from the common carp. The test chemicals evaluated have high quality in vivo BCF values and a range of logK(ow) from 3.5 to 6.7. The results show very good agreement between the measured BCF and estimated BCF values when the extrapolated whole body metabolism rates are included, thus suggesting that in vitro biotransformation data could effectively be used to reduce in vivo BCF testing and refine BCF model estimates. However, additional fish physiological data for parameterization and validation for a wider range of chemicals are needed.
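A minimal sketch of the in vitro to in vivo extrapolation idea described above, assuming a well-stirred liver model and a one-compartment bioconcentration model; every parameter value and name below is an illustrative placeholder, not a calibrated value from the study.

```python
def whole_body_kmet(clint_invitro,        # in vitro intrinsic clearance, mL/h/mg S9 protein
                    s9_per_g_liver=25.0,  # mg S9 protein per g liver (assumed)
                    liver_frac=0.015,     # liver mass fraction of body mass (assumed)
                    q_h=0.3,              # hepatic blood flow, L/d/kg fish (assumed)
                    fu=1.0,               # unbound fraction in blood (assumed)
                    v_d=1.0):             # apparent volume of distribution, L/kg (assumed)
    # (mL/h/mg) x (mg/g liver) x (g liver per kg fish) x (24 h/d) / (1000 mL/L) -> L/d/kg fish
    cl_int = clint_invitro * s9_per_g_liver * (liver_frac * 1000.0) * 24.0 / 1000.0
    cl_hep = q_h * fu * cl_int / (q_h + fu * cl_int)   # well-stirred hepatic clearance
    return cl_hep / v_d                                # first-order biotransformation rate, 1/d

def bcf(k1, k2, k_met, k_egestion=0.0, k_growth=0.0):
    """One-compartment bioconcentration factor from uptake (k1) and loss rate constants (1/d)."""
    return k1 / (k2 + k_met + k_egestion + k_growth)

k_met = whole_body_kmet(clint_invitro=0.5)    # example in vitro clearance
print(bcf(k1=200.0, k2=0.02, k_met=k_met))    # refined BCF estimate, L/kg
```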
3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer
NASA Technical Reports Server (NTRS)
Lane, John
2012-01-01
Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has been and is a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy due to the limitations of these kinds of research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD and, therefore, a 3D Z-R measurement using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis. The original software (developed at the University of Central Florida, 1998-2000) has also been modified to read a standardized disdrometer data format (Joss-Waldvogel format). Other modifications to the software involve accounting for vertical ambient wind motion, as well as evaporation of the raindrop during its flight time.
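An illustrative sketch of the vertical-extrapolation idea described above: assuming steady rain, negligible vertical air motion, and no evaporation, the DSD at height z can be read from the ground record shifted forward in time by the drop fall time z / v(D). The Atlas-type fall-speed power law and all names here are assumptions, not the authors' code.

```python
import numpy as np

def terminal_velocity(d_mm):
    return 3.78 * d_mm ** 0.67            # assumed power-law fall speed, m/s (D in mm)

def dsd_at_height(n_ground, t_sec, d_mm, z_m):
    """n_ground: (n_times, n_bins) N(D) record at the disdrometer; t_sec: sample times (s);
    d_mm: bin diameters (mm); z_m: height (m) at which the DSD is estimated."""
    n_z = np.empty_like(n_ground, dtype=float)
    for j, d in enumerate(d_mm):
        delay = z_m / terminal_velocity(d)            # fall time from height z for this bin
        # a drop aloft at time t reaches the gauge at t + delay, so read the record there
        n_z[:, j] = np.interp(t_sec + delay, t_sec, n_ground[:, j])
    return n_z
```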
Evaluation of Electrochemical Methods for Electrolyte Characterization
NASA Technical Reports Server (NTRS)
Heidersbach, Robert H.
2001-01-01
This report documents summer research efforts in an attempt to develop an electrochemical method of characterizing electrolytes. The ultimate objective of the characterization would be to determine the composition and corrosivity of Martian soil. Results are presented using potentiodynamic scans, Tafel extrapolations, and resistivity tests in a variety of water-based electrolytes.
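Tafel extrapolation, named above, estimates the corrosion current by fitting the linear (Tafel) regions of log10|i| versus potential on both branches of a potentiodynamic scan and reading their common value back at the corrosion potential. The window limits and data layout in this sketch are illustrative assumptions, not the report's procedure.

```python
import numpy as np

def tafel_extrapolation(E, i, E_corr, window=(0.05, 0.15)):
    """E: potential (V); i: signed current density (A/cm^2); E_corr: corrosion potential (V).
    Returns an estimate of the corrosion current density i_corr."""
    logi = np.log10(np.abs(i))
    lo, hi = window
    anodic = (E > E_corr + lo) & (E < E_corr + hi)      # anodic Tafel region
    cathodic = (E < E_corr - lo) & (E > E_corr - hi)    # cathodic Tafel region
    ba, ia = np.polyfit(E[anodic], logi[anodic], 1)     # slope and intercept, anodic fit
    bc, ic = np.polyfit(E[cathodic], logi[cathodic], 1)
    # extrapolate both fitted lines back to E_corr; their mean gives log10(i_corr)
    log_icorr = 0.5 * ((ba * E_corr + ia) + (bc * E_corr + ic))
    return 10.0 ** log_icorr
```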
Chemical biotransformation represents the single largest source of uncertainty in chemical bioaccumulation assessments for fish. In vitro methods employing isolated hepatocytes and liver subcellular fractions (S9) can be used to estimate whole-body rates of chemical metabolism, ...
Pei, Jiquan; Han, Steve; Liao, Haijun; Li, Tao
2014-01-22
A highly efficient and simple-to-implement Monte Carlo algorithm is proposed for the evaluation of the Rényi entanglement entropy (REE) of the quantum dimer model (QDM) at the Rokhsar-Kivelson (RK) point. It makes possible the evaluation of REE at the RK point to the thermodynamic limit for a general QDM. We apply the algorithm to a QDM defined on the triangular and the square lattice in two dimensions and the simple and the face centered cubic (fcc) lattice in three dimensions. We find the REE on all these lattices follows perfect linear scaling in the thermodynamic limit, apart from an even-odd oscillation in the case of the square lattice. We also evaluate the topological entanglement entropy (TEE) with both a subtraction and an extrapolation procedure. We find the QDMs on both the triangular and the fcc lattice exhibit robust Z2 topological order. The expected TEE of ln2 is clearly demonstrated in both cases. Our large scale simulation also proves the recently proposed extrapolation procedure in cylindrical geometry to be a highly reliable way to extract the TEE of a topologically ordered system.
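A sketch of the extrapolation procedure mentioned above for extracting the topological entanglement entropy: on a cylinder the Rényi entropy is expected to scale as S(L) = aL − γ, so a linear fit in the boundary length L extrapolated to L = 0 gives −γ. The data points below are placeholders, not simulation results.

```python
import numpy as np

L = np.array([8, 12, 16, 20, 24])                 # cylinder circumferences (placeholder)
S = np.array([2.27, 3.76, 5.22, 6.72, 8.18])      # Renyi entropies (placeholder)

slope, intercept = np.polyfit(L, S, 1)
tee = -intercept                                  # expected to approach ln 2 for Z2 order
print(f"fitted TEE = {tee:.3f}, ln 2 = {np.log(2):.3f}")
```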
Dielectric relaxation spectrum of undiluted poly(4-chlorostyrene), T≳Tg
NASA Astrophysics Data System (ADS)
Yoshihara, M.; Work, R. N.
1980-06-01
Dielectric relaxation characteristics of undiluted, atactic poly(4-chlorostyrene), P4CS, have been determined at temperatures 406 K ⩽ T ⩽ 446 K from measurements made at frequencies 0.2 Hz ⩽ f ⩽ 0.2 MHz. After effects of electrical conductivity are subtracted, it is found that the normalized complex dielectric constant K* = K′ − iK″ can be represented quantitatively by the Havriliak-Negami (H-N) equation K* = [1 + (iωτ0)^(1−α)]^(−β), with 0 ⩽ α, β ⩽ 1, except for a small, high frequency tail that appears in measurements made near the glass transition temperature, Tg. The parameter β is nearly constant, and α depends linearly on log τ0, where τ0 is a characteristic relaxation time. The parameters α and β extrapolate through values obtained from published data from P4CS solutions, and extrapolation to α = 0 yields a value of τ0 which compares favorably with a published value for crankshaft motions of an equivalent isolated chain segment. These observations suggest that β may characterize effects of chain connectivity and α may describe effects of interactions of the surroundings with the chain. Experimental results are compared with alternative empirical and model-based representations of dielectric relaxation in polymers.
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2l0(2 − α)/(vα), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
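For illustration, the quoted Chantry expression for the zero-pressure lifetime, τ0 = 2l0(2 − α)/(vα), can be inverted to recover the sticking probability α; the chamber geometry and temperature below are placeholder values, not the experimental ones.

```python
import numpy as np

def sticking_probability(tau0, l0, v):
    """Invert tau0 = 2*l0*(2 - a)/(v*a) for the sticking probability a.
    tau0: extrapolated lifetime (s); l0: volume-to-surface-area ratio (m); v: mean speed (m/s)."""
    return 4.0 * l0 / (tau0 * v + 2.0 * l0)

def mean_speed(T_kelvin, mass_amu):
    kB, amu = 1.380649e-23, 1.66053907e-27
    return np.sqrt(8.0 * kB * T_kelvin / (np.pi * mass_amu * amu))

# Example with a placeholder chamber (l0 = 2 cm) and a Cu atom at 400 K
print(sticking_probability(tau0=2e-3, l0=0.02, v=mean_speed(400.0, 63.5)))
```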
Effect of scrape-off-layer current on reconstructed tokamak equilibrium
King, J. R.; Kruger, S. E.; Groebner, R. J.; ...
2017-01-13
Methods are described that extend fields from reconstructed equilibria to include scrape-off-layer current through extrapolated parametrized and experimental fits. The extrapolation includes both the effects of the toroidal-field and pressure gradients, which produce scrape-off-layer current after recomputation of the Grad-Shafranov solution. To quantify the degree to which inclusion of scrape-off-layer current modifies the equilibrium, the χ-squared goodness-of-fit parameter is calculated for cases with and without scrape-off-layer current. The change in χ-squared is found to be minor when scrape-off-layer current is included; however, flux surfaces are shifted by up to 3 cm. Here the impact on edge modes of these scrape-off-layer modifications is also found to be small, and the importance of these methods to nonlinear computation is discussed.
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate based on the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and the predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in the rain-gauge density, or the accuracy of climatic observations (such as sea surface temperatures) increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]), operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of sea surface temperature anomalies, observed with uncertainty, on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L. A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
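A minimal numerical sketch of the SIMEX recipe described above, for a slope estimated by ordinary least squares when the predictor carries additive noise of known variance; the noise multiples, the quadratic extrapolant, and the extrapolation to λ = −1 follow the standard SIMEX convention, and all variable names are illustrative.

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=200, seed=0):
    """SIMEX estimate of a regression slope when x_obs = x_true + N(0, sigma_u^2) noise."""
    rng = np.random.default_rng(seed)
    lam_grid = [0.0]
    estimates = [np.polyfit(x_obs, y, 1)[0]]             # naive estimate at lambda = 0
    for lam in lambdas:
        reps = []
        for _ in range(n_rep):
            # add extra noise so the total added error variance is (1 + lambda) * sigma_u^2
            x_star = x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, size=x_obs.shape)
            reps.append(np.polyfit(x_star, y, 1)[0])
        lam_grid.append(lam)
        estimates.append(np.mean(reps))
    # fit a quadratic in lambda and extrapolate back to lambda = -1, the notional error-free point
    coeffs = np.polyfit(lam_grid, estimates, 2)
    return np.polyval(coeffs, -1.0)
```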
Improving CTIPe neutral density response and recovery during geomagnetic storms
NASA Astrophysics Data System (ADS)
Fedrizzi, M.; Fuller-Rowell, T. J.; Codrescu, M.; Mlynczak, M. G.; Marsh, D. R.
2013-12-01
The temperature of the Earth's thermosphere can be substantially increased during geomagnetic storms, mainly due to high-latitude Joule heating induced by magnetospheric convection and auroral particle precipitation. Thermospheric heating increases atmospheric density and the drag on low-Earth orbiting satellites. The main cooling mechanism controlling the recovery of neutral temperature and density following geomagnetic activity is infrared emission from nitric oxide (NO) at 5.3 micrometers. NO is produced by both solar and auroral activity, the first due to solar EUV and X-rays, the second due to dissociation of N2 by particle precipitation, and has a typical lifetime of 12 to 24 hours in the mid and lower thermosphere. NO cooling in the thermosphere peaks between 150 and 200 km altitude. In this study, a global, three-dimensional, time-dependent, non-linear coupled model of the thermosphere, ionosphere, plasmasphere, and electrodynamics (CTIPe) is used to simulate the response and recovery timescales of the upper atmosphere following geomagnetic activity. CTIPe uses time-dependent estimates of NO obtained from the Marsh et al. [2004] empirical model based on Student Nitric Oxide Explorer (SNOE) satellite data rather than solving for minor-species photochemistry self-consistently. This empirical model is based solely on SNOE observations, when Kp rarely exceeded 5. For conditions between Kp 5 and 9, a linear extrapolation has been used. In order to improve the accuracy of the extrapolation algorithm, CTIPe model estimates of global NO cooling have been compared with the NASA TIMED/SABER satellite measurements of radiative power at 5.3 micrometers. The comparisons have enabled improvement in the timescale for neutral density response and recovery during geomagnetic storms. CTIPe neutral density response and recovery rates are verified by comparison with CHAMP satellite observations.
Predicting the future trend of popularity by network diffusion.
Zeng, An; Yeung, Chi Ho
2016-06-01
Conventional approaches to predict the future popularity of products are mainly based on extrapolation of their current popularity, which overlooks the hidden microscopic information under the macroscopic trend. Here, we study diffusion processes on consumer-product and citation networks to exploit the hidden microscopic information and connect consumers to their potential purchase, publications to their potential citers to obtain a prediction for future item popularity. By using the data obtained from the largest online retailers including Netflix and Amazon as well as the American Physical Society citation networks, we found that our method outperforms the accurate short-term extrapolation and identifies the potentially popular items long before they become prominent.
Extrapolation of rotating sound fields.
Carley, Michael
2018-03-01
A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.
Cathode fall measurement in a dielectric barrier discharge in helium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Yanpeng; Zheng, Bin; Liu, Yaoge
2013-11-15
A method based on the “zero-length voltage” extrapolation is proposed to measure cathode fall in a dielectric barrier discharge. Starting, stable, and discharge-maintaining voltages were measured to obtain the extrapolated zero-length voltage. Under our experimental conditions, the “zero-length voltage” gave a cathode fall of about 185 V. Based on the known thickness of the cathode fall region, the spatial distribution of the electric field strength in a dielectric barrier discharge in atmospheric helium is determined. The strong cathode fall with a maximum field value of approximately 9.25 kV/cm was typical for the glow mode of the discharge.
MMOC- MODIFIED METHOD OF CHARACTERISTICS SONIC BOOM EXTRAPOLATION
NASA Technical Reports Server (NTRS)
Darden, C. M.
1994-01-01
The Modified Method of Characteristics Sonic Boom Extrapolation program (MMOC) is a sonic boom propagation method which includes shock coalescence and incorporates the effects of asymmetry due to volume and lift. MMOC numerically integrates nonlinear equations from data at a finite distance from an airplane configuration at flight altitude to yield the sonic boom pressure signature at ground level. MMOC accounts for variations in entropy, enthalpy, and gravity for nonlinear effects near the aircraft, allowing extrapolation to begin nearer the body than in previous methods. This feature permits wind tunnel sonic boom models of up to three feet in length, enabling more detailed, realistic models than the previous six-inch sizes. It has been shown that elongated airplanes flying at high altitude and high Mach numbers can produce an acceptably low sonic boom. Shock coalescence in MMOC includes three-dimensional effects. The method is based on an axisymmetric solution with asymmetric effects determined by circumferential derivatives of the standard shock equations. Bow shocks and embedded shocks can be included in the near-field. The method of characteristics approach in MMOC allows large computational steps in the radial direction without loss of accuracy. MMOC is a propagation method rather than a predictive program. Thus input data (the flow field on a cylindrical surface at approximately one body length from the axis) must be supplied from calculations or experimental results. The MMOC package contains a uniform atmosphere pressure field program and interpolation routines for computing the required flow field data. Other user supplied input to MMOC includes Mach number, flow angles, and temperature. MMOC output tabulates locations of bow shocks and embedded shocks. When the calculations reach ground level, the overpressure and distance are printed, allowing the user to plot the pressure signature. MMOC is written in FORTRAN IV for batch execution and has been implemented on a CDC 170 series computer operating under NOS with a central memory requirement of approximately 223K of 60 bit words. This program was developed in 1983.
How fast does water flow in carbon nanotubes?
Kannam, Sridhar Kumar; Todd, B D; Hansen, J S; Daivis, Peter J
2013-03-07
The purpose of this paper is threefold. First, we review the existing literature on flow rates of water in carbon nanotubes. Data for the slip length which characterizes the flow rate are scattered over 5 orders of magnitude for nanotubes of diameter 0.81-10 nm. Second, we precisely compute the slip length using equilibrium molecular dynamics (EMD) simulations, from which the interfacial friction between water and carbon nanotubes can be found, and also via external field driven non-equilibrium molecular dynamics simulations (NEMD). We discuss some of the issues in simulation studies which may be reasons for the large disagreements reported. By using the EMD method friction coefficient to determine the slip length, we overcome the limitations of NEMD simulations. In NEMD simulations, for each tube we apply a range of external fields to check the linear response of the fluid to the field and reliably extrapolate the results for the slip length to values of the field corresponding to experimentally accessible pressure gradients. Finally, we comment on several issues concerning water flow rates in carbon nanotubes which may lead to some future research directions in this area.
High-level ab initio studies of NO(X2Π)-O2(X3Σg -) van der Waals complexes in quartet states
NASA Astrophysics Data System (ADS)
Grein, Friedrich
2018-05-01
Geometry optimisations were performed on nine different structures of NO(X2Π)-O2(X3Σg-) van der Waals complexes in their quartet states, using the explicitly correlated RCCSD(T)-F12b method with basis sets up to the cc-pVQZ-F12 level. For the most stable configurations, counterpoise-corrected optimisations as well as extrapolations to the complete basis set (CBS) were performed. The X structure in the 4A‧ state was found to be most stable, with a CBS binding energy of -157 cm-1. The slipped tilted structures with N closer to O2 (Slipt-N), as well as the slipped parallel structure with O of NO closer to O2 (Slipp-O) in 4A″ states have binding energies of about -130 cm-1. C2v and linear complexes are less stable. According to calculated harmonic frequencies, the X isomer is bound. Isotropic hyperfine coupling constants of the complex are compared with those of the monomers.
NASA Astrophysics Data System (ADS)
Chen, Dong; Sun, Dihua; Zhao, Min; Zhou, Tong; Cheng, Senlin
2018-07-01
The driving process is a typical cyber-physical process that tightly couples the cyber factor of traffic information with the physical components of the vehicles. Meanwhile, drivers have situation awareness during driving, which is shaped not only by the current traffic states but also by extrapolation of their changing trend. In this paper, an extended car-following model is proposed to account for drivers' situation awareness. The stability criterion of the proposed model is derived via linear stability analysis. The results show that the stable region of the proposed model is enlarged on the phase diagram compared with previous models. By employing the reductive perturbation method, the modified Korteweg-de Vries (mKdV) equation is obtained. The kink-antikink soliton of the mKdV equation reveals theoretically the evolution of traffic jams. Numerical simulations are conducted to verify the analytical results. Two typical traffic scenarios are investigated. The simulation results demonstrate that drivers' situation awareness plays a key role in traffic flow oscillations and the congestion transition.
Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction
NASA Astrophysics Data System (ADS)
Su, X.
2017-12-01
A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes such as inversion and deformation during cloud motion are essentially not considered. Predicting cloud movement promptly and correctly therefore remains a hard task. Because deep learning models perform well in learning spatiotemporal features, we address this challenge by regarding cloud image prediction as a spatiotemporal sequence forecasting problem and introducing a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to handle spatiotemporal features and build an end-to-end model for this forecasting problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data, and the model performs well.
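As a rough illustration of the kind of building block described above (a GRU whose state transitions are convolutions), the sketch below implements a convolutional GRU cell in PyTorch; it is not the authors' network, and the layer sizes, names, and single-channel input are assumptions.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update & reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate state
        self.hid_ch = hid_ch

    def forward(self, x, h):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

# Usage: step over a toy sequence of single-channel cloud images (batch, time, 1, H, W)
cell = ConvGRUCell(in_ch=1, hid_ch=16)
seq = torch.randn(2, 10, 1, 64, 64)
h = None
for t in range(seq.size(1)):
    h = cell(seq[:, t], h)          # h is the evolving spatiotemporal state
```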
Deposition and persistence of beachcast seabird carcasses
van Pelt, Thomas I.; Piatt, John F.
1995-01-01
Following a massive wreck of guillemots (Uria aalge) in late winter and spring of 1993, we monitored the deposition and subsequent disappearance of 398 beachcast guillemot carcasses on two beaches in Resurrection Bay, Alaska, during a 100 day period. Deposition of carcasses declined logarithmically with time after the original event. Since fresh carcasses were more likely to be removed between counts than older carcasses, persistence rates increased logarithmically over time. Scavenging appeared to be the primary cause of carcass removal, followed by burial in beach debris and sand. Along-shore transport was negligible. We present an equation which estimates the number of carcasses deposited at time zero from beach surveys conducted some time later, using non-linear persistence rates that are a function of time. We use deposition rates to model the accumulation of beached carcasses, accounting for further deposition subsequent to the original event. Finally, we present a general method for extrapolating from a single count the number of carcasses cumulatively deposited on surveyed beaches, and discuss how our results can be used to assess the magnitude of mass seabird mortality events from beach surveys.
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study makes function approximations of sampling data and documents the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. the sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. the sample size, are fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and a power function are tested for their ability to fit these data. The results show that the BP network and RBF network fit the data better than these models, with smaller errors. The BP network and RBF network can fit non-linear functions (sampling data) with specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network is used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are less stable compared to the RBF network. The RBF network requires more neurons to fit functions and generally may not be used to extrapolate them. The mathematical function for sampling data can be fitted to a specified accuracy using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, similar to the observed richness.
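A rough analogue of the BP-network fit described above can be sketched with a small multilayer perceptron trained on a species-richness-versus-sample-size curve and then queried beyond the sampled range; the synthetic data, network size, and input scaling below are placeholders, and extrapolation reliability degrades away from the training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

samples = np.arange(5, 105, 5, dtype=float).reshape(-1, 1)        # sample sizes
richness = 150.0 * (1.0 - np.exp(-samples / 40.0)).ravel()        # synthetic accumulation curve
richness += np.random.default_rng(0).normal(0.0, 1.0, richness.shape)

net = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(samples / 100.0, richness)                                 # crude input scaling

larger = np.array([[150.0], [200.0]]) / 100.0
print(net.predict(larger))        # extrapolated richness beyond the sampled range
```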
Photoelectron spectroscopy of color centers in negatively charged cesium iodide nanocrystals
NASA Astrophysics Data System (ADS)
Sarkas, Harry W.; Kidder, Linda H.; Bowen, Kit H.
1995-01-01
We present the photoelectron spectra of negatively charged cesium iodide nanocrystals recorded using 2.540 eV photons. The species examined were produced using an inert gas condensation cluster ion source, and they ranged in size from (CsI)-n=13 to nanocrystal anions comprised of 330 atoms. Nanocrystals showing two distinct types of photoemission behavior were observed. For (CsI)-n=13 and (CsI)-n=36-165, a plot of cluster anion photodetachment threshold energies vs n^(-1/3) gives a straight line extrapolating (at n^(-1/3) = 0, i.e., n = ∞) to 2.2 eV, the photoelectric threshold energy for F centers in bulk cesium iodide. The linear extrapolation of the cluster anion data to the corresponding bulk property implies that the electron localization in these gas-phase nanocrystals is qualitatively similar to that of F centers in extended alkali halide crystals. These negatively charged cesium iodide nanocrystals are thus shown to support embryonic forms of F centers, which mature with increasing cluster size toward condensed phase impurity centers. Under an alternative set of source conditions, nanocrystals were produced which showed significantly lower photodetachment thresholds than the aforementioned F-center cluster anions. For these species, containing 83-131 atoms, a plot of their cluster anion photodetachment threshold energies versus n^(-1/3) gives a straight line which extrapolates to 1.4 eV. This value is in accord with the expected photoelectric threshold energy for F' centers in bulk cesium iodide, i.e., color centers with two excess electrons in a single defect site. These nanocrystals are interpreted to be the embryonic F'-center containing species, Cs(CsI)-n=41-65.
Apparent-Strain Correction for Combined Thermal and Mechanical Testing
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; O'Neil, Teresa L.
2007-01-01
Combined thermal and mechanical testing requires that the total strain be corrected for the coefficient of thermal expansion mismatch between the strain gage and the specimen, or apparent strain, when the temperature varies while a mechanical load is being applied. Collecting data for an apparent strain test becomes problematic as the specimen size increases. If the test specimen cannot be placed in a variable temperature test chamber to generate apparent strain data with no mechanical loads, coupons can be used to generate the required data. The coupons, however, must have the same strain gage type, coefficient of thermal expansion, and constraints as the specimen to be useful. Obtaining apparent-strain data at temperatures lower than -320 F is challenging due to the difficulty of maintaining steady-state and uniform temperatures on a given specimen. Equations to correct for apparent strain in a real-time fashion and data from apparent-strain tests for composite and metallic specimens over a temperature range from -450 F to +250 F are presented in this paper. Three approaches to extrapolate apparent-strain data from -320 F to -430 F are presented and compared to the measured apparent-strain data. The first two approaches use a subset of the apparent-strain curves between -320 F and 100 F to extrapolate to -430 F, while the third approach extrapolates the apparent-strain curve over the temperature range of -320 F to +250 F to -430 F. The first two approaches are superior to the third approach, but the use of either of the first two approaches is contingent upon the degree of non-linearity of the apparent-strain curve.
Lampón, Natalia; Tutor-Crespo, María J; Romero, Rafael; Tutor, José C
2011-07-01
Recently, the use of the truncated area under the curve from 0 to 2 h (AUC(0-2)) of mycophenolic acid (MPA) has been proposed for therapeutic monitoring in liver transplant recipients. The aim of our study was the evaluation of the clinical usefulness of truncated AUC(0-2) in kidney transplant patients. Plasma MPA was measured in samples taken before the morning dose of mycophenolate mofetil, and one-half and 2 h post-dose, completing 63 MPA concentration-time profiles from 40 adult kidney transplant recipients. The AUC from 0 to 12 h (AUC(0-12)) was calculated using the validated algorithm of Pawinski et al. The truncated AUC(0-2) was calculated using the linear trapezoidal rule, and extrapolated to 0-12 h (trapezoidal extrapolated AUC(0-12)) as previously described. Algorithm calculated and trapezoidal extrapolated AUC(0-12) values showed high correlation (r=0.995) and acceptable dispersion (ma68=0.71 μg·h/mL), median prediction error (6.6%) and median absolute prediction error (12.6%). The truncated AUC(0-2) had acceptable diagnostic efficiency (87%) in the classification of subtherapeutic, therapeutic or supratherapeutic values with respect to AUC(0-12). However, due to the high inter-individual variation of the drug absorption-rate, the dispersion between both pharmacokinetic variables (ma68=6.9 μg·h/mL) was unacceptable. The substantial dispersion between truncated AUC(0-2) and AUC(0-12) values may be a serious objection for the routine use of MPA AUC(0-2) in clinical practice.
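For illustration, the truncated AUC described above can be computed with the linear trapezoidal rule over the 0, 0.5, and 2 h samples and rescaled to a 0-12 h estimate; the rescaling coefficients below are placeholders, not the published regression used in the study.

```python
def auc_0_2(c0, c05, c2):
    """Linear trapezoidal AUC over 0-2 h from concentrations (ug/mL) at 0, 0.5 and 2 h post-dose."""
    return 0.5 * (c0 + c05) * 0.5 + 0.5 * (c05 + c2) * 1.5

def extrapolated_auc_0_12(auc02, intercept=8.0, slope=2.5):
    """Rescale the truncated AUC to a 0-12 h estimate (placeholder coefficients)."""
    return intercept + slope * auc02

auc02 = auc_0_2(c0=2.0, c05=10.0, c2=5.0)          # example MPA profile, ug/mL
print(auc02, extrapolated_auc_0_12(auc02))
```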
Can we detect a nonlinear response to temperature in European plant phenology?
Jochner, Susanne; Sparks, Tim H; Laube, Julia; Menzel, Annette
2016-10-01
Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 days °C^-1 (flowering of hazel) and -2.7 days °C^-1 (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ∼14 % of the station time series (where a comparison between linear and nonlinear models was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might still be sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data from very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.
Anionic water pentamer and hexamer clusters: An extensive study of structures and energetics
NASA Astrophysics Data System (ADS)
Ünal, Aslı; Bozkaya, Uǧur
2018-03-01
An extensive study of structures and energetics for anionic pentamer and hexamer clusters is performed employing high level ab initio quantum chemical methods, such as the density-fitted orbital-optimized linearized coupled-cluster doubles (DF-OLCCD), coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples [CCSD(T)] methods. In this study, sixteen anionic pentamer clusters and eighteen anionic hexamer clusters are reported. Relative, binding, and vertical detachment energies (VDE) are presented at the complete basis set limit (CBS), extrapolating energies of aug4-cc-pVTZ and aug4-cc-pVQZ custom basis sets. The largest VDE values obtained at the CCSD(T)/CBS level are 9.9 and 11.2 kcal mol-1 for pentamers and hexamers, respectively, which are in very good agreement with the experimental values of 9.5 and 11.1 kcal mol-1. Our binding energy results, at the CCSD(T)/CBS level, indicate strong bindings in anionic clusters due to hydrogen bond interactions. The average binding energy per water molecule is -5.0 and -5.3 kcal mol-1 for pentamers and hexamers, respectively. Furthermore, our results demonstrate that the DF-OLCCD method approaches the CCSD(T) quality for anionic clusters. The inexpensive analytic gradients of DF-OLCCD compared to CCSD or CCSD(T) make it very attractive for high-accuracy studies.
NASA Astrophysics Data System (ADS)
Yang, Kai; Longcope, Dana; Guo, Yang; Ding, Mingde
2017-08-01
Numerous proposed coronal heating mechanisms have invoked magnetic reconnection in some role. Testing such a mechanism requires a method of measuring magnetic reconnection coupled with a prediction of the heat delivered by reconnection at the observed rate. In the absence of coronal reconnection, field line footpoints move at the same velocity as the plasma they find themselves in. The rate of coronal reconnection is therefore related to any discrepancy observed between footpoint motion and that of the local plasma — so-called slipping motion. We propose a novel method to measure this velocity discrepancy by combining a sequence of non-linear force-free field extrapolations with maps of photospheric velocity. We obtain both from a sequence of vector magnetograms of an active region (AR). We then propose a method of computing the coronal heating produced under the assumption the observed slipping velocity was due entirely to coronal reconnection. This heating rate is used to predict density and temperature at points along an equilibrium loop. This, in turn, is used to synthesize emission in EUV and SXR bands. We perform this analysis using a sequence of HMI vector magnetograms of a particular AR and compare synthesized images to observations of the same AR made by SDO. We also compare differential emission measure inferred from those observations to that of the modeled corona.
Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR
Mobli, Mehdi; Hoch, Jeffrey C.
2017-01-01
Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315
Sonic Boom Prediction and Minimization of the Douglas Reference OPT5 Configuration
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1999-01-01
Conventional CFD methods and grids do not yield adequate resolution of the complex shock flow pattern generated by a real aircraft geometry. As a result, a unique grid topology and supersonic flow solver was developed at Northrop Grumman based on the characteristic behavior of supersonic wave patterns emanating from the aircraft. Using this approach, it was possible to compute flow fields with adequate resolution several body lengths below the aircraft. In this region, three-dimensional effects are diminished and conventional two-dimensional modified linear theory (MLT) can be applied to estimate ground pressure signatures or sonic booms. To accommodate real aircraft geometries and alleviate the burdensome grid generation task, an implicit marching multi-block, multi-grid finite-volume Euler code was developed as the basis for the sonic boom prediction methodology. The Thomas two-dimensional extrapolation method is built into the Euler code so that ground signatures can be obtained quickly and efficiently with minimum computational effort suitable to the aircraft design environment. The loudness levels of these signatures can then be determined using a NASA generated noise code. Since the Euler code is a three-dimensional flow field solver, the complete circumferential region below the aircraft is computed. The extrapolation of all this field data from a cylinder of constant radius leads to the definition of the entire boom corridor occurring directly below and off to the side of the aircraft's flight path yielding an estimate for the entire noise "annoyance" corridor in miles as well as its magnitude. An automated multidisciplinary sonic boom design optimization software system was developed during the latter part of HSR Phase 1. Using this system, it was found that sonic boom signatures could be reduced through optimization of a variety of geometric aircraft parameters. This system uses a gradient based nonlinear optimizer as the driver in conjunction with a computationally efficient Euler CFD solver (NIIM3DSB) for computing the three-dimensional near-field characteristics of the aircraft. The intent of the design system is to identify and optimize geometric design variables that have a beneficial impact on the ground sonic boom. The system uses a simple wave drag data format to specify the aircraft geometry. The geometry is internally enhanced and analytic methods are used to generate marching grids suitable for the multi-block Euler solver. The Thomas extrapolation method is integrated into this system, and hence, the aircraft's centerline ground sonic boom signature is also automatically computed for a specified cruise altitude and yields the parameters necessary to evaluate the design function. The entire design system has been automated since the gradient based optimization software requires many flow analyses in order to obtain the required sensitivity derivatives for each design variable in order to converge on an optimal solution. Hence, once the problem is defined which includes defining the objective function and geometric and aerodynamic constraints, the system will automatically regenerate the perturbed geometry, the necessary grids, the Euler solution, and finally the ground sonic boom signature at the request of the optimizer.
Line-of-sight extrapolation noise in dust polarization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poh, Jason; Dodelson, Scott
The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g., 350 GHz) is due solely to dust and then extrapolate the signal down to lower frequency (e.g., 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of about 20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r < 0.0015.
NASA Astrophysics Data System (ADS)
Joung, Wukchul; Park, Jihye; Pearce, Jonathan V.
2018-06-01
In this work, the liquidus temperature of tin was determined by melting the sample using the pressure-controlled loop heat pipe. Square wave-type pressure steps generated periodic 0.7 °C temperature steps in the isothermal region in the vicinity of the tin sample, and the tin was melted with controllable heat pulses from the generated temperature changes. The melting temperatures at specific melted fractions were measured, and they were extrapolated to the melted fraction of unity to determine the liquidus temperature of tin. To investigate the influence of the impurity distribution on the melting behavior, a molten tin sample was solidified by an outward slow freezing or by quenching to segregate the impurities inside the sample with concentrations increasing outwards or to spread the impurities uniformly, respectively. The measured melting temperatures followed the local solidus temperature variations well in the case of the segregated sample and stayed near the solidus temperature in the quenched sample due to the microscopic melting behavior. The extrapolated melting temperatures of the segregated and quenched samples were 0.95 mK and 0.49 mK higher than the outside-nucleated freezing temperature of tin (with uncertainties of 0.15 mK and 0.16 mK, at approximately 95% level of confidence), respectively. The extrapolated melting temperature of the segregated sample was supposed to be a closer approximation to the liquidus temperature of tin, whereas the quenched sample yielded the possibility of a misleading extrapolation to the solidus temperature. Therefore, the determination of the liquidus temperature could result in different extrapolated melting temperatures depending on the way the impurities were distributed within the sample, which has implications for the contemporary methodology for realizing temperature fixed points of the International Temperature Scale of 1990 (ITS-90).
Physiologically based pharmacokinetic model for quinocetone in pigs and extrapolation to mequindox.
Zhu, Xudong; Huang, Lingli; Xu, Yamei; Xie, Shuyu; Pan, Yuanhu; Chen, Dongmei; Liu, Zhenli; Yuan, Zonghui
2017-02-01
Physiologically based pharmacokinetic (PBPK) models are scientific methods used to predict veterinary drug residues that may occur in food-producing animals, and which have powerful extrapolation ability. Quinocetone (QCT) and mequindox (MEQ) are widely used in China for the prevention of bacterial infections and promoting animal growth, but their abuse causes a potential threat to human health. In this study, a flow-limited PBPK model was developed to simulate simultaneously residue depletion of QCT and its marker residue dideoxyquinocetone (DQCT) in pigs. The model included compartments for blood, liver, kidney, muscle and fat and an extra compartment representing the other tissues. Physiological parameters were obtained from the literature. Plasma protein binding rates, renal clearances and tissue/plasma partition coefficients were determined by in vitro and in vivo experiments. The model was calibrated and validated with several pharmacokinetic and residue-depletion datasets from the literature. Sensitivity analysis and Monte Carlo simulations were incorporated into the PBPK model to estimate individual variation of residual concentrations. The PBPK model for MEQ, the congener compound of QCT, was built through cross-compound extrapolation based on the model for QCT. The QCT model accurately predicted the concentrations of QCT and DQCT in various tissues at most time points, especially the later time points. Correlation coefficients between predicted and measured values for all tissues were greater than 0.9. Monte Carlo simulations showed excellent consistency between estimated concentration distributions and measured data points. The extrapolation model also showed good predictive power. The present models contribute to improve the residue monitoring systems of QCT and MEQ, and provide evidence of the usefulness of PBPK model extrapolation for the same kinds of compounds.
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement does, however, yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
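As a minimal sketch of the discretization-error estimate mentioned above, the following Python function applies standard Richardson extrapolation and a grid convergence index (GCI) to a time-averaged quantity obtained on three grid levels; the safety factor and the sample values are illustrative assumptions, not data from the study.

```python
import numpy as np

def richardson_gci(f_fine, f_med, f_coarse, r, Fs=1.25):
    """Estimate observed order, extrapolated value, and fine-grid GCI from
    solutions on three systematically refined grids (refinement ratio r)."""
    p = np.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / np.log(r)
    f_extrap = f_fine + (f_fine - f_med) / (r**p - 1.0)
    eps = abs((f_med - f_fine) / f_fine)       # relative change, fine vs medium
    gci_fine = Fs * eps / (r**p - 1.0)         # fractional uncertainty estimate
    return p, f_extrap, gci_fine

# Illustrative time-averaged quantities (e.g., a bed height) on three grids.
p, f_ex, gci = richardson_gci(f_fine=1.02, f_med=1.05, f_coarse=1.11, r=2.0)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_ex:.3f}, GCI ~ {100*gci:.1f}%")
```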
Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I
2014-01-01
Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
An extrapolation method for compressive strength prediction of hydraulic cement products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siqueira Tango, C.E. de
1998-07-01
The basis for the AMEBA Method is presented. A strength-time function is used to extrapolate the predicted cementitious material strength for a late (ALTA) age, based on two earlier age strengths--medium (MEDIA) and low (BAIXA) ages. The experimental basis for the method is data from the IPT-Brazil laboratory and the field, including a long-term study on concrete, research on limestone, slag, and fly-ash additions, and quality control data from a cement factory, a shotcrete tunnel lining, and a grout for structural repair. The method applicability was also verified for high-performance concrete with silica fume. The formula for predicting late age (e.g., 28 days) strength, for a given set of involved ages (e.g., 28, 7, and 2 days), is normally a function only of the two earlier ages' (e.g., 7 and 2 days) strengths. This equation has been shown to be independent of material variations, including cement brand, and is also easy to use graphically. Using the AMEBA method, and only needing to know the type of cement used, it has been possible to predict strengths satisfactorily, even without the preliminary tests which are required in other methods.
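The abstract does not reproduce the AMEBA formula itself. The sketch below only illustrates the idea of extrapolating a late-age strength from two earlier-age strengths, assuming strength grows linearly with the logarithm of age; this assumed relation is a stand-in for the published one, and the function name and all numbers are hypothetical.

```python
import math

def predict_late_strength(t_low, s_low, t_med, s_med, t_high):
    """Extrapolate a late-age strength from two earlier ages assuming strength
    is linear in log(age). Assumed stand-in, not the published AMEBA relation."""
    slope = (s_med - s_low) / (math.log(t_med) - math.log(t_low))
    return s_med + slope * (math.log(t_high) - math.log(t_med))

# e.g. 2- and 7-day strengths (MPa) extrapolated to 28 days
print(predict_late_strength(2, 18.0, 7, 30.0, 28))
```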
A comparison of LOD and UT1-UTC forecasts by different combined prediction techniques
NASA Astrophysics Data System (ADS)
Kosek, W.; Kalarus, M.; Johnson, T. J.; Wooden, W. H.; McCarthy, D. D.; Popiński, W.
Stochastic prediction techniques including autocovariance, autoregressive, autoregressive moving average, and neural networks were applied to the UT1-UTC and Length of Day (LOD) International Earth Rotation and Reference Systems Service (IERS) EOPC04 time series to evaluate the capabilities of each method. All known effects such as leap seconds and solid Earth zonal tides were first removed from the observed values of UT1-UTC and LOD. Two combination procedures were applied to predict the resulting LODR time series: 1) the combination of the least-squares (LS) extrapolation with a stochastic prediction method, and 2) the combination of the discrete wavelet transform (DWT) filtering and a stochastic prediction method. The results of the combination of the LS extrapolation with different stochastic prediction techniques were compared with the results of the UT1-UTC prediction method currently used by the IERS Rapid Service/Prediction Centre (RS/PC). It was found that the prediction accuracy depends on the starting prediction epochs, and for the combined forecast methods, the mean prediction errors for 1 to about 70 days in the future are of the same order as those of the method used by the IERS RS/PC.
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and computation of related confidence limits. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
Aerosol optical properties in ultraviolet ranges and respiratory diseases in Thailand
NASA Astrophysics Data System (ADS)
Kumharn, Wilawan; Hanprasert, Kasarin
2016-10-01
This study investigated the values of the Angstrom parameters (α, β) in the ultraviolet (UV) range using AERONET Aerosol Optical Depth (AOD) data. A second-order polynomial was applied to the AERONET data in order to extrapolate to 320 nm from 2003 to 2013 at seven sites in Thailand. The α and β values were derived by applying the Volz Method (VM) and Linear Method (LM) at 320-380 nm at the seven monitoring sites. Aerosol particles were categorized into coarse and fine modes, depending on region. Aerosol loadings were related to dry weather, forest fires, sea salt and, most importantly, biomass burning in the North and South of Thailand. Aerosol particles in the Central region contain coarse and fine modes, mainly emitted from vehicles. The β values obtained were associated with turbid and very turbid skies in the Northern and Central regions except Bangkok, while β values were associated with clean skies in the South. Higher values of β at all sites were found in the winter and summer compared with the rainy season, in contrast to the South, where the highest AOD was observed in June. The β values were likely to increase during 2003-2013. These values correlate with worsening health situations, as evident from the increasing number of respiratory disease cases reported.
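A minimal sketch of the extrapolation and parameter derivation described above, assuming illustrative AOD values rather than AERONET data: a second-order polynomial in log-log space extrapolates the AOD to 320 nm, after which a Volz-type two-wavelength formula gives the Angstrom exponent α and the turbidity coefficient β.

```python
import numpy as np

# Illustrative AERONET-like AODs at standard wavelengths (nm).
wl = np.array([340.0, 380.0, 440.0, 500.0])
aod = np.array([0.62, 0.55, 0.46, 0.40])

# Second-order polynomial in log-log space, used to extrapolate AOD to 320 nm.
coef = np.polyfit(np.log(wl), np.log(aod), 2)
aod_320 = np.exp(np.polyval(coef, np.log(320.0)))

# Volz-type two-wavelength estimate of the Angstrom exponent over 320-380 nm.
alpha = -np.log(aod_320 / aod[1]) / np.log(320.0 / 380.0)
# Turbidity coefficient beta = AOD at 1 um, from tau = beta * lambda(um)**-alpha.
beta = aod[1] * (380.0 / 1000.0)**alpha

print(f"AOD(320 nm) ~ {aod_320:.3f}, alpha ~ {alpha:.2f}, beta ~ {beta:.3f}")
```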
ERIC Educational Resources Information Center
Rogers, Richard
2004-01-01
Objective: The overriding objective is a critical examination of Munchausen syndrome by proxy (MSBP) and its closely-related alternative, factitious disorder by proxy (FDBP). Beyond issues of diagnostic validity, assessment methods and potential detection strategies are explored. Methods: A painstaking analysis was conducted of the MSBP and FDBP…
Educational Forecasting Methodologies: State of the Art, Trends, and Highlights.
ERIC Educational Resources Information Center
Hudson, Barclay; Bruno, James
This overview of both quantitative and qualitative methods of educational forecasting is introduced by a discussion of a general typology of forecasting methods. In each of the following sections, discussion follows the same general format: a number of basic approaches are identified (e.g. extrapolation, correlation, systems modelling), and each…
Soil carbon changes: comparing flux monitoring and mass balance in a box lysimeter experiment.
S.M. Nay; B.T. Bormann
2000-01-01
Direct measures of soil-surface respiration are needed to evaluate belowground biological processes, forest productivity, and ecosystem responses to global change. Although infra-red gas analyzer (IRGA) methods track reference CO2 flows in lab studies, questions remain for extrapolating IRGA methods to field conditions. We constructed 10 box...
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. En...
Heat flux measurements on ceramics with thin film thermocouples
NASA Technical Reports Server (NTRS)
Holanda, Raymond; Anderson, Robert C.; Liebert, Curt H.
1993-01-01
Two methods were devised to measure heat flux through a thick ceramic using thin film thermocouples. The thermocouples were deposited on the front and back face of a flat ceramic substrate. The heat flux was applied to the front surface of the ceramic using an arc lamp Heat Flux Calibration Facility. Silicon nitride and mullite ceramics were used; two thicknesses of each material were tested, with ceramic temperatures to 1500 C. Heat flux ranged from 0.05 to 2.5 MW/m². One method for heat flux determination used an approximation technique to calculate instantaneous values of heat flux vs time; the other method used an extrapolation technique to determine the steady state heat flux from a record of transient data. Neither method measures heat flux in real time, but the techniques may easily be adapted for quasi-real time measurement. In cases where a significant portion of the transient heat flux data is available, the calculated transient heat flux is seen to approach the extrapolated steady state heat flux value as expected.
NASA Astrophysics Data System (ADS)
Meot-Ner (Mautner), Michael; Somogyi, Árpád
2007-11-01
The internal energies of dissociating ions, activated chemically or collisionally, can be estimated using the kinetics of thermal dissociation. The thermal Arrhenius parameters can be combined with the observed dissociation rate of the activated ions using k_diss = A_thermal exp(-E_a,thermal/(R T_eff)). This Arrhenius-type relation yields the effective temperature, T_eff, at which the ions would dissociate thermally at the same rate, or yield the same product distributions, as the activated ions. In turn, T_eff is used to calculate the internal energy of the ions and the energy deposited by the activation process. The method yields an energy deposition efficiency of 10% for a chemical ionization proton transfer reaction and 8-26% for the surface collisions of various peptide ions. Internal energies of ions activated by chemical ionization or by gas phase collisions, and of ions produced by desorption methods such as fast atom bombardment, can also be evaluated. Thermal extrapolation is especially useful for ion-molecule reaction products and for biological ions, where other methods to evaluate internal energies are laborious or unavailable.
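A minimal sketch of the effective-temperature relation above, rearranged for T_eff; the dissociation rate, pre-exponential factor and activation energy below are illustrative values only, not data from the paper.

```python
import math

R = 8.314  # J mol^-1 K^-1

def effective_temperature(k_diss, A_thermal, Ea_thermal):
    """Solve k_diss = A_thermal * exp(-Ea_thermal / (R * T_eff)) for T_eff.
    Ea_thermal in J/mol; k_diss and A_thermal in the same (s^-1) units."""
    return Ea_thermal / (R * math.log(A_thermal / k_diss))

# Illustrative thermal Arrhenius parameters and an observed dissociation rate.
print(effective_temperature(k_diss=1.0e3, A_thermal=1.0e14, Ea_thermal=1.2e5))
```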
Sweetman, Adam; Stannard, Andrew
2014-01-01
In principle, non-contact atomic force microscopy (NC-AFM) now readily allows for the measurement of forces with sub-nanonewton precision on the atomic scale. In practice, however, the extraction of the often desired 'short-range' force from the experimental observable (frequency shift) is often far from trivial. In most cases there is a significant contribution to the total tip-sample force due to non-site-specific van der Waals and electrostatic forces. Typically, the contribution from these forces must be removed before the results of the experiment can be successfully interpreted, often by comparison to density functional theory calculations. In this paper we compare the 'on-minus-off' method for extracting site-specific forces to a commonly used extrapolation method modelling the long-range forces using a simple power law. By examining the behaviour of the fitting method in the case of two radically different interaction potentials we show that significant uncertainties in the final extracted forces may result from use of the extrapolation method.
Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K
2013-12-01
Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.
High throughput method to characterize acid-base properties of insoluble drug candidates in water.
Benito, D E; Acquaviva, A; Castells, C B; Gagliardi, L G
2018-05-30
In drug design, experimental characterization of the acidic groups in candidate molecules is one of the more important steps prior to in-vivo studies. Potentiometry combined with Yasuda-Shedlovsky extrapolation is one of the more important strategies for studying drug candidates with low solubility in water; however, it requires a large number of sequences to determine pKa values at different solvent-mixture compositions and, finally, obtain the pKa in water (wwpKa) by extrapolation. We have recently proposed a method which requires only two sequences of additions to study the effect of organic solvent content in liquid chromatography mobile phases on the acidity of the buffer compounds usually dissolved in them, along wide ranges of compositions. In this work we propose to apply this method to study the thermodynamic wwpKa of drug candidates with low solubilities in pure water. Using methanol/water solvent mixtures we study six pharmaceutical drugs at 25 °C. Four of them, ibuprofen, salicylic acid, atenolol and labetalol, were chosen as members of the carboxylic acid, amine and phenol families. Since these compounds have known wwpKa values, they were used to validate the procedure, the accuracy of Yasuda-Shedlovsky and other empirical models to fit the behaviors, and to obtain wwpKa by extrapolation. Finally, the method is applied to determine unknown thermodynamic wwpKa values of two pharmaceutical drugs: atorvastatin calcium and the two dissociation constants of ethambutol. The procedure proved to be simple, very fast and accurate in all of the studied cases. Copyright © 2018 Elsevier B.V. All rights reserved.
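A minimal sketch of a Yasuda-Shedlovsky extrapolation, assuming illustrative apparent pKa values, mixture dielectric constants and water molarities (not data from this study): the quantity psKa + log[H2O] is fitted linearly against 1/ε and evaluated at the dielectric constant and water molarity of pure water.

```python
import numpy as np

# Illustrative apparent pKa (psKa) values in methanol/water mixtures, with the
# dielectric constant eps and water molarity of each mixture (assumed values).
psKa  = np.array([5.10, 5.35, 5.65])
eps   = np.array([69.0, 64.0, 58.0])
c_h2o = np.array([44.0, 38.0, 31.0])     # mol/L of water in each mixture

# Yasuda-Shedlovsky plot: psKa + log[H2O] is linear in 1/eps.
y = psKa + np.log10(c_h2o)
a, b = np.polyfit(1.0 / eps, y, 1)

# Extrapolate to pure water (eps ~ 78.3, [H2O] ~ 55.5 mol/L).
pKa_water = a / 78.3 + b - np.log10(55.5)
print(f"extrapolated aqueous pKa ~ {pKa_water:.2f}")
```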
Sabin, Keith; Zhao, Jinkou; Garcia Calleja, Jesus Maria; Sheng, Yaou; Arias Garcia, Sonia; Reinisch, Annette; Komatsu, Ryuichi
2016-01-01
Objective To assess the availability and quality of population size estimations of female sex workers (FSW), men who have sex with men (MSM), people who inject drugs (PWID) and transgender women. Methods Size estimation data since 2010 were retrieved from global reporting databases, Global Fund grant application documents, and the peer-reviewed and grey literature. Overall quality and availability were assessed against a defined set of criteria, including estimation methods, geographic coverage, and extrapolation approaches. Estimates were compositely categorized into ‘nationally adequate’, ‘nationally inadequate but locally adequate’, ‘documented but inadequate methods’, ‘undocumented or untimely’ and ‘no data.’ Findings Of 140 countries assessed, 41 did not report any estimates since 2010. Among 99 countries with at least one estimate, 38 were categorized as having nationally adequate estimates and 30 as having nationally inadequate but locally adequate estimates. Multiplier, capture-recapture, census and enumeration, and programmatic mapping were the most commonly used methods. Most countries relied on only one estimate for a given population, while about half of all reports included national estimates. A variety of approaches were applied to extrapolate from site-level numbers to national estimates in two-thirds of countries. Conclusions Size estimates for FSW, MSM, PWID and transgender women are increasingly available but quality varies widely. The different approaches present challenges for data use in design, implementation and evaluation of programs for these populations in half of the countries assessed. Guidance should be further developed to recommend: a) applying multiple estimation methods; b) estimating size for a minimum number of sites; and, c) documenting extrapolation approaches. PMID:27163256
Liang, Chao; Qiao, Jun-Qin; Lian, Hong-Zhen
2017-12-15
Reversed-phase liquid chromatography (RPLC) based octanol-water partition coefficient (logP) or distribution coefficient (logD) determination methods were revisited and assessed comprehensively. Classic isocratic and some gradient RPLC methods were conducted and evaluated for neutral, weak acid and basic compounds. Different lipophilicity indexes in logP or logD determination were discussed in detail, including the retention factor logk_w corresponding to neat water as the mobile phase, extrapolated via the linear solvent strength (LSS) model from isocratic runs and calculated with software from gradient runs, the chromatographic hydrophobicity index (CHI), the apparent gradient capacity factor (k_g') and the gradient retention time (t_g). Among the lipophilicity indexes discussed, logk_w from either isocratic or gradient elution methods correlated best with logP or logD. Therefore logk_w is recommended as the preferred lipophilicity index for logP or logD determination. logk_w easily calculated from methanol gradient runs might be the main candidate to replace logk_w calculated from classic isocratic runs as the ideal lipophilicity index. These revisited RPLC methods were not applicable to strongly ionized compounds that are hardly ion-suppressed. A previously reported imperfect ion-pair RPLC method was attempted and further explored for studying distribution coefficients (logD) of sulfonic acids that are totally ionized in the mobile phase. Notably, experimental logD values of sulfonic acids were given for the first time. The IP-RPLC method provided a distinct way to explore logD values of ionized compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
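A minimal sketch of the isocratic extrapolation behind logk_w, assuming illustrative retention factors: the LSS relation log k = log k_w - S*phi is fitted over several organic-modifier fractions phi, and the intercept gives the neat-water retention used as the lipophilicity index.

```python
import numpy as np

# Illustrative isocratic retention factors k at several methanol fractions phi.
phi = np.array([0.50, 0.60, 0.70, 0.80])
k   = np.array([9.8, 4.1, 1.7, 0.7])

# Linear solvent strength (LSS) model: log k = log k_w - S * phi.
slope, intercept = np.polyfit(phi, np.log10(k), 1)
log_kw = intercept            # extrapolated retention in neat water
S = -slope

print(f"log k_w ~ {log_kw:.2f}, S ~ {S:.2f}")
# log k_w can then be correlated against logP/logD for a calibration set.
```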
Pennington, David; Crettaz, Pierre; Tauxe, Annick; Rhomberg, Lorenz; Brand, Kevin; Jolliet, Olivier
2002-10-01
In Part 1 of this article we developed an approach for the calculation of cancer effect measures for life cycle assessment (LCA). In this article, we propose and evaluate the method for the screening of noncancer toxicological health effects. This approach draws on the noncancer health risk assessment concept of benchmark dose, while noting important differences with regulatory applications in the objectives of an LCA study. We adopt the central-tendency estimate of the toxicological effect dose inducing a 10% response over background, ED10, to provide a consistent point of departure for default linear low-dose response estimates (βED10). This explicit estimation of low-dose risks, while necessary in LCA, is in marked contrast to many traditional procedures for noncancer assessments. For pragmatic reasons, mechanistic thresholds and nonlinear low-dose response curves were not implemented in the presented framework. In essence, for the comparative needs of LCA, we propose that one initially screens alternative activities or products on the degree to which the associated chemical emissions erode their margins of exposure, which may or may not be manifested as increases in disease incidence. We illustrate the method here by deriving the βED10 slope factors from bioassay data for 12 chemicals and outline some of the possibilities for extrapolation from other more readily available measures, such as the no observed adverse effect level (NOAEL), avoiding uncertainty factors that lead to inconsistent degrees of conservatism from chemical to chemical. These extrapolations facilitated the initial calculation of slope factors for an additional 403 compounds, ranging from 10^-6 to 10^3 (risk per mg/kg-day dose). The potential consequences of the effects are taken into account in a preliminary approach by combining the βED10 with the severity measure disability adjusted life years (DALY), providing a screening-level estimate of the potential consequences associated with exposures, integrated over time and space, to a given mass of chemical released into the environment for use in LCA.
NASA Astrophysics Data System (ADS)
Jorand, Rachel; Fehr, Annick; Koch, Andreas; Clauser, Christoph
2011-08-01
In this paper, we present a method that allows one to correct thermal conductivity measurements for the effect of water loss when extrapolating laboratory data to in situ conditions. The water loss in shales and unconsolidated rocks is a serious problem that can introduce errors in the characterization of reservoirs. For this study, we measure the thermal conductivity of four sandstones with and without clay minerals according to different water saturation levels using an optical scanner. Thermal conductivity does not decrease linearly with water saturation. At high saturation and very low saturation, thermal conductivity decreases more quickly because of spontaneous liquid displacement and capillarity effects. Apart from these two effects, thermal conductivity decreases quasi-linearly. We also notice that the samples containing clay minerals are not completely drained, and thermal conductivity reaches a minimum value. In order to fit the variation of thermal conductivity with the water saturation as a whole, we used modified models commonly presented in thermal conductivity studies: harmonic and arithmetic mean and geometric models. These models take into account different types of porosity, especially those attributable to the abundance of clay, using measurements obtained from nuclear magnetic resonance (NMR). For argillaceous sandstones, a modified arithmetic-harmonic model fits the data best. For clean quartz sandstones under low water saturation, the closest fit to the data is obtained with the modified arithmetic-harmonic model, while for high water saturation, a modified geometric mean model proves to be the best.
NASA Astrophysics Data System (ADS)
Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.
2017-05-01
The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R=-0.79) as well as between mean grain size and susceptibility (R=-0.78) was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating the mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in worldwide continental shelf systems.
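A minimal sketch of the multiple-linear-regression transfer function described above. The functional form, coefficients and synthetic calibration data are assumptions for illustration, not the regression reported in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration samples: conductivity (S/m), susceptibility (SI),
# and mean grain size (phi units); values are illustrative only.
cond = rng.uniform(0.8, 2.0, 33)
susc = rng.uniform(1e-4, 8e-4, 33)
mean_phi = 1.5 + 2.0 * cond + 2000.0 * susc + rng.normal(0.0, 0.2, 33)

# Multiple linear regression: mean_phi ~ b0 + b1*cond + b2*susc.
X = np.column_stack([np.ones_like(cond), cond, susc])
coef, *_ = np.linalg.lstsq(X, mean_phi, rcond=None)

def predict_mean_phi(cond_line, susc_line):
    """Apply the calibrated transfer function along an EM survey line."""
    return coef[0] + coef[1] * cond_line + coef[2] * susc_line

print("regression coefficients:", np.round(coef, 3))
```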
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-01-01
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS. PMID:27420066
Head-on collisions of unequal mass black holes in D=5 dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witek, Helvi; Cardoso, Vitor; Department of Physics and Astronomy, University of Mississippi, University, Mississippi 38677
We study head-on collisions of unequal mass black hole binaries in D=5 spacetime dimensions, with mass ratios between 1:1 and 1:4. Information about gravitational radiation is extracted by using the Kodama-Ishibashi gauge-invariant formalism and details of the apparent horizon of the final black hole. We present waveforms, total integrated energy and momentum for this process. Our results show surprisingly good agreement, within 5% or less, with those extrapolated from linearized, point-particle calculations. Our results also show that consistency with the area theorem bound requires that the same process in a large number of spacetime dimensions must display new features.
Lattice QCD results for the HVP contribution to the anomalous magnetic moments of leptons
NASA Astrophysics Data System (ADS)
2018-03-01
We present lattice QCD results by the Budapest-Marseille-Wuppertal (BMW) Collaboration for the leading-order contribution of the hadronic vacuum polarization (LOHVP) to the anomalous magnetic moments of all charged leptons. Calculations are performed with u, d, s and c quarks at their physical masses, in volumes of linear extent larger than 6 fm, and at six values of the lattice spacing, allowing for controlled continuum extrapolations. All connected and disconnected contributions are calculated not only for the muon but also for the electron and tau anomalous magnetic moments. Systematic uncertainties are thoroughly discussed and comparisons with other calculations and phenomenological estimates are made.
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line-search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Subeihi, Ala' A.A., E-mail: ala.alsubeihi@wur.nl; BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority; Spenkelink, Bert
2012-05-01
This study defines a physiologically based kinetic (PBK) model for methyleugenol (ME) in humans based on in vitro and in silico derived parameters. With the model obtained, bioactivation and detoxification of methyleugenol (ME) at different dose levels could be investigated. The outcomes of the current model were compared with those of a previously developed PBK model for methyleugenol (ME) in male rat. The results obtained reveal that formation of 1′-hydroxymethyleugenol glucuronide (1′HMEG), a major metabolic pathway in male rat liver, appears to represent a minor metabolic pathway in human liver, whereas in human liver a significantly higher formation of 1′-oxomethyleugenol (1′OME) compared with male rat liver is observed. Furthermore, formation of 1′-sulfooxymethyleugenol (1′HMES), which readily undergoes desulfonation to a reactive carbonium ion (CA) that can form DNA or protein adducts (DA), is predicted to be the same in the liver of both human and male rat at oral doses of 0.0034 and 300 mg/kg bw. Altogether, despite a significant difference, especially in the metabolic pathways of the proximate carcinogenic metabolite 1′-hydroxymethyleugenol (1′HME), between human and male rat, the influence of species differences on the ultimate overall bioactivation of methyleugenol (ME) to 1′-sulfooxymethyleugenol (1′HMES) appears to be negligible. Moreover, the PBK model predicted the formation of 1′-sulfooxymethyleugenol (1′HMES) in the liver of human and rat to be linear from doses as high as the benchmark dose (BMD10) down to as low as the virtual safe dose (VSD). This study shows that kinetic data do not provide a reason to argue against linear extrapolation from the rat tumor data to the human situation. Highlights: ► A PBK model is made for bioactivation and detoxification of methyleugenol in humans. ► Comparison to the PBK model in male rat revealed species differences. ► PBK results support linear extrapolation from high to low dose and from rat to human.
Toxicokinetic Triage for Environmental Chemicals
Toxicokinetic (TK) models are essential for linking administered doses to blood and tissue concentrations. In vitro-to-in vivo extrapolation (IVIVE) methods have been developed to determine TK from limited in vitro measurements and chemical structure-based property predictions, p...
Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario
2017-06-07
By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
Dead time corrections using the backward extrapolation method
NASA Astrophysics Data System (ADS)
Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.
2017-05-01
Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biases in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled count per second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1-2%) in restoring the corrected count rate.
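A minimal sketch of the bookkeeping behind the backward-extrapolation idea, for an uncorrelated (Poisson) pulse train and a purely non-paralyzing dead time; the actual method addresses correlated fission-chain data and real detector dead time, and the rates and dead-time values below are illustrative assumptions.

```python
import numpy as np

def apply_dead_time(timestamps, tau):
    """Impose a non-paralyzing dead time tau in software: a pulse is kept only
    if it arrives more than tau after the last accepted pulse."""
    kept, last = 0, -np.inf
    for t in timestamps:
        if t - last > tau:
            kept += 1
            last = t
    return kept

rng = np.random.default_rng(2)
true_rate, duration = 5.0e4, 5.0                       # cps and s, illustrative
pulses = np.cumsum(rng.exponential(1.0 / true_rate, 400000))
pulses = pulses[pulses < duration]

# Count rates for a series of increasingly long artificially imposed dead times.
taus = np.linspace(5e-6, 20e-6, 8)
cps = np.array([apply_dead_time(pulses, tau) / duration for tau in taus])

# For a non-paralyzing dead time acting on a Poisson train, m = n / (1 + n*tau),
# so 1/m is linear in tau; extrapolating the fit back to tau = 0 recovers n.
slope, intercept = np.polyfit(taus, 1.0 / cps, 1)
print("extrapolated dead-time-free rate:", 1.0 / intercept, "cps (true:", true_rate, ")")
```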
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn; CAEP Software Center for High Performance Numerical Simulation, Beijing
2016-06-28
Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn-Sham density functional theory. Going against the intuition that a higher order of extrapolation possesses better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and that an optimal extrapolation order in terms of the minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or more strict SCF convergence criteria. By example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choices on the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, thus the implementation of either of them does not lead to an essential difference in the extrapolation accuracy.
Composite vibrational spectroscopy of the group 12 difluorides: ZnF2, CdF2, and HgF2.
Solomonik, Victor G; Smirnov, Alexander N; Navarkin, Ilya S
2016-04-14
The vibrational spectra of group 12 difluorides, MF2 (M = Zn, Cd, Hg), were investigated via coupled cluster singles, doubles, and perturbative triples, CCSD(T), including core correlation, with a series of correlation consistent basis sets ranging in size from triple-zeta through quintuple-zeta quality, which were then extrapolated to the complete basis set (CBS) limit using a variety of extrapolation procedures. The explicitly correlated coupled cluster method, CCSD(T)-F12b, was employed as well. Although exhibiting quite different convergence behavior, the F12b method yielded the CBS limit estimates closely matching more computationally expensive conventional CBS extrapolations. The convergence with respect to basis set size was examined for the contributions entering into composite vibrational spectroscopy, including those from higher-order correlation accounted for through the CCSDT(Q) level of theory, second-order spin-orbit coupling effects assessed within four-component and two-component relativistic formalisms, and vibrational anharmonicity evaluated via a perturbative treatment. Overall, the composite results are in excellent agreement with available experimental values, except for the CdF2 bond-stretching frequencies compared to spectral assignments proposed in a matrix isolation infrared and Raman study of cadmium difluoride vapor species [Loewenschuss et al., J. Chem. Phys. 50, 2502 (1969); Givan and Loewenschuss, J. Chem. Phys. 72, 3809 (1980)]. These assignments are called into question in the light of the composite results.
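One common member of the "variety of extrapolation procedures" mentioned above is the two-point inverse-cube formula for the correlation energy; the sketch below applies it to illustrative quadruple- and quintuple-zeta energies, not values from this work.

```python
def cbs_two_point(e_n, e_m, n, m):
    """Two-point complete-basis-set extrapolation assuming the correlation
    energy converges as E(X) = E_CBS + A * X**-3 (cardinal numbers n < m)."""
    return (m**3 * e_m - n**3 * e_n) / (m**3 - n**3)

# Illustrative CCSD(T) correlation energies (hartree) with QZ and 5Z basis sets.
print(cbs_two_point(e_n=-0.8123, e_m=-0.8190, n=4, m=5))
```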
Long-Term Prediction of the Arctic Ionospheric TEC Based on Time-Varying Periodograms
Liu, Jingbin; Chen, Ruizhi; Wang, Zemin; An, Jiachun; Hyyppä, Juha
2014-01-01
Knowledge of the polar ionospheric total electron content (TEC) and its future variations is of scientific and engineering relevance. In this study, a new method is developed to predict Arctic mean TEC on the scale of a solar cycle using previous data covering 14 years. The Arctic TEC is derived from global positioning system measurements using the spherical cap harmonic analysis mapping method. The study indicates that the variability of the Arctic TEC results in highly time-varying periodograms, which are utilized for prediction in the proposed method. The TEC time series is divided into two components of periodic oscillations and the average TEC. The newly developed method of TEC prediction is based on an extrapolation method that requires no input of physical observations of the time interval of prediction, and it is performed in both temporally backward and forward directions by summing the extrapolation of the two components. The backward prediction indicates that the Arctic TEC variability includes a 9 years period for the study duration, in addition to the well-established periods. The long-term prediction has an uncertainty of 4.8–5.6 TECU for different period sets. PMID:25369066
Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.
2017-02-01
We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.
Modeling an exhumed basin: A method for estimating eroded overburden
Poelchau, H.S.
2001-01-01
The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and estimation of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated with maximum depth of burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
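A minimal sketch of the sonic-log compaction-trend extrapolation underlying this approach, with illustrative transit times and an assumed uncompacted-shale value rather than the calibration used in the paper: ln(Δt) is fitted linearly against depth and the trend is extrapolated to the uncompacted value to estimate the missing (eroded) section.

```python
import numpy as np

# Illustrative shale sonic transit times (us/ft) against present burial depth (m).
depth = np.array([800.0, 1200.0, 1600.0, 2000.0, 2400.0])
dt    = np.array([118.0, 104.0, 92.0, 81.0, 71.0])

# Compaction trend: dt = dt_surface * exp(-c * z)  ->  ln(dt) linear in depth.
slope, ln_dt_surface = np.polyfit(depth, np.log(dt), 1)
c = -slope                                  # compaction decay constant (1/m)

dt0_uncompacted = 200.0   # assumed transit time of uncompacted shale at surface
# Depth on the extrapolated trend where dt equals dt0; a negative value means
# the trend reaches dt0 above the present surface, i.e. section has been eroded.
z_at_dt0 = (ln_dt_surface - np.log(dt0_uncompacted)) / c
erosion = max(0.0, -z_at_dt0)
print(f"estimated eroded overburden ~ {erosion:.0f} m")
```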
Correlation of Resonance Charge Exchange Cross-Section Data in the Low-Energy Range
NASA Technical Reports Server (NTRS)
Sheldon, John W.
1962-01-01
During the course of a literature survey concerning resonance charge exchange, an unusual degree of agreement was noted between an extrapolation of the data reported by Kushnir, Palyukh, and Sena and the data reported by Ziegler. The data of Kushnir et al. are for ion-atom relative energies from 10 to 1000 ev, while the data of Ziegler are for a relative energy of about 1 ev. Extrapolation of the data of Kushnir et al. was made in accordance with Holstein's theory, which is a combination of time-dependent perturbation methods and classical orbit theory. The results of this theory may be discussed in terms of a critical impact parameter b_c.
Poppe, L.J.; Eliason, A.H.; Hastings, M.E.
2004-01-01
Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next to last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.6-0.1 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and nearshore marine sediments, samples from many deeper water marine environments (e.g. rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution. It is written in Microsoft Visual Basic 6.0 and provides a window to facilitate program execution. The input for the sediment fractions is weight percentages in whole-phi notation (Krumbein, 1934; Inman, 1952), and the program permits the user to select output in either method of moments or inclusive graphics statistics (Fig. 1). Users select options primarily with mouse-click events, or through interactive dialogue boxes.
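A minimal sketch of the method-of-moments statistics that such a program computes from weight percentages in whole-phi classes; the class midpoints and percentages below are an illustrative sample, not output from GSSTAT.

```python
import numpy as np

# Weight percent in whole-phi classes, keyed by the phi midpoint of each class
# (illustrative sample; percentages sum to 100).
phi_mid = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
weight  = np.array([2.0, 8.0, 25.0, 30.0, 20.0, 10.0, 5.0])

f = weight / weight.sum()
mean = np.sum(f * phi_mid)                            # first moment (phi)
sorting = np.sqrt(np.sum(f * (phi_mid - mean)**2))    # standard deviation (phi)
skewness = np.sum(f * (phi_mid - mean)**3) / sorting**3
kurtosis = np.sum(f * (phi_mid - mean)**4) / sorting**4

print(f"mean={mean:.2f} phi, sorting={sorting:.2f}, "
      f"skewness={skewness:.2f}, kurtosis={kurtosis:.2f}")
```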
In this study we have developed a novel method to estimate in vivo rates of metabolism in unanesthetized fish. This method provides a basis for evaluating the accuracy of in vitro-in vivo metabolism extrapolations. As such, this research will lead to improved risk assessments f...
Hydrological predictions at a watershed scale are commonly based on extrapolation and upscaling of hydrological behavior at plot and hillslope scales. Yet, dominant hydrological drivers at a hillslope may not be as dominant at the watershed scale because of the heterogeneity of w...
The purpose of this one-day short course is to train students on methods used to measure in vitro metabolism in fish and extrapolate this information to the intact animal. This talk is one of four presentations given by course instructors. The first part of this talk provides a...
ERIC Educational Resources Information Center
Clark, Joseph Warren
2012-01-01
In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…
26 CFR 1.263A-7 - Changing a method of accounting under section 263A.
Code of Federal Regulations, 2010 CFR
2010-04-01
... extrapolation, rather than based on the facts and circumstances of a particular year's data. All three methods... analyze the production and resale data for that particular year and apply the rules and principles of... books and records, actual financial and accounting data which is required to apply the capitalization...
26 CFR 1.263A-7 - Changing a method of accounting under section 263A.
Code of Federal Regulations, 2011 CFR
2011-04-01
... extrapolation, rather than based on the facts and circumstances of a particular year's data. All three methods... analyze the production and resale data for that particular year and apply the rules and principles of... books and records, actual financial and accounting data which is required to apply the capitalization...
NASA Technical Reports Server (NTRS)
Hada, Megumi; George, Kerry A.; Cucinotta, F. A.
2011-01-01
The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high LET radiation exposure. Estimates of risks from low doses and low dose rates are often extrapolated using data from Japanese atomic bomb survivors with either linear or linear-quadratic models of fit. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (0.01-0.2 Gy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions. Chromosomes were analyzed using the whole chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving >2 breaks in 2 or more chromosomes). The curves for doses above 0.1 Gy, where more than one ion traverses a cell, showed linear dose responses. However, for doses less than 0.1 Gy, Si-28 ions showed no dose response, suggesting a non-targeted effect when less than one ion traversal occurs. Additional findings for Fe-56 will be discussed.
The solution of the point kinetics equations via converged accelerated Taylor series (CATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.; Picca, P.; Previti, A.
This paper deals with finding accurate solutions of the point kinetics equations including non-linear feedback, in a fast, efficient and straightforward way. A truncated Taylor series is coupled to continuous analytical continuation to provide the recurrence relations to solve the ordinary differential equations of point kinetics. Non-linear (Wynn-epsilon) and linear (Romberg) convergence accelerations are employed to provide highly accurate results for the evaluation of Taylor series expansions and extrapolated values of neutron and precursor densities at desired edits. The proposed Converged Accelerated Taylor Series, or CATS, algorithm automatically performs successive mesh refinements until the desired accuracy is obtained, making use of the intermediate results for converged initial values at each interval. Numerical performance is evaluated using case studies available from the literature. Nearly perfect agreement is found with the literature results generally considered most accurate. Benchmark quality results are reported for several cases of interest including step, ramp, zigzag and sinusoidal prescribed insertions and insertions with adiabatic Doppler feedback. A larger than usual (9) number of digits is included to encourage honest benchmarking. The benchmark is then applied to the enhanced piecewise constant algorithm (EPCA) currently being developed by the second author. (authors)
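The Wynn epsilon accelerator named above is a standard non-linear sequence transformation. The snippet below is a generic Python implementation of the epsilon table applied to the partial sums of a slowly converging series; it is only an illustration of the accelerator, not the CATS code, and it assumes no two neighbouring table entries coincide (which would give a zero denominator).

```python
def wynn_epsilon(partial_sums):
    """Accelerate a sequence of partial sums with Wynn's epsilon algorithm."""
    s = [float(v) for v in partial_sums]
    n = len(s)
    # e[i][k]: column 0 is the auxiliary zero column, column 1 holds S_i
    e = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        e[i][1] = s[i]
    for k in range(2, n + 1):
        for i in range(n + 1 - k):
            e[i][k] = e[i + 1][k - 2] + 1.0 / (e[i + 1][k - 1] - e[i][k - 1])
    # odd array columns correspond to even epsilon orders, which carry the estimates
    return e[0][n] if n % 2 == 1 else e[1][n - 1]

# example: partial sums of 4*(1 - 1/3 + 1/5 - ...) converge slowly to pi
sums, total = [], 0.0
for m in range(10):
    total += 4.0 * (-1) ** m / (2 * m + 1)
    sums.append(total)
print(wynn_epsilon(sums))   # far closer to pi than the last partial sum
```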
Interspecies extrapolation encompasses two related but distinct topic areas that are germane to quantitative extrapolation and hence computational toxicology-dose scaling and parameter scaling. Dose scaling is the process of converting a dose determined in an experimental animal ...
Jiang, Ze-Jun; Cao, Xiao-Lin; Li, Hui; Zhang, Chan; Abd El-Aty, A M; Jin, Fen; Shao, Hua; Jin, Mao-Jun; Wang, Shan-Shan; She, Yong-Xin; Wang, Jing
2017-11-24
In the present study, a quick and sensitive method was developed for simultaneous determination of nonylphenol ethoxylates (NPxEOs) and octylphenol ethoxylates (OPxEOs) (x=2-20) in three leafy vegetables, including cabbage, lettuce, and spinach, using a modified "QuEChERS" method and ultra-high performance supercritical fluid chromatography-tandem mass spectrometry (UHPSFC-MS/MS) with scheduled multiple reaction monitoring (MRM). Under optimized conditions, the 38 target analytes were analyzed within a short period of time (5 min). The linearities of the matrix-matched standard calibrations were satisfactory, with coefficients of determination (R²) > 0.99, and the limits of detection (LOD) and quantification (LOQ) were between 0.02-0.27 and 0.18-1.75 μg kg⁻¹, respectively. The recoveries of all target analytes spiked at three (low, medium, and high) fortification levels in the various leafy vegetables ranged from 72.8-122.6%, with relative standard deviations (RSD) ≤18.3%. The method was successfully applied to market samples, and the target analytes were found in all monitored samples, with total concentrations of 0-8.67 μg kg⁻¹ and 15.75-95.75 μg kg⁻¹ for OPxEOs and NPxEOs (x=2-20), respectively. In conclusion, the newly developed UHPSFC-ESI-MS/MS method is rapid and versatile and could be extrapolated for qualitative and quantitative analysis of APxEOs in other leafy vegetables. Copyright © 2017 Elsevier B.V. All rights reserved.
Fit Point-Wise AB Initio Calculation Potential Energies to a Multi-Dimension Long-Range Model
NASA Astrophysics Data System (ADS)
Zhai, Yu; Li, Hui; Le Roy, Robert J.
2016-06-01
A potential energy surface (PES) is a fundamental tool and source of understanding for theoretical spectroscopy and for dynamical simulations. Making correct assignments for high-resolution rovibrational spectra of floppy polyatomic and van der Waals molecules often relies heavily on predictions generated from a high-quality ab initio potential energy surface. Moreover, having an effective analytic model to represent such surfaces can be as important as the ab initio results themselves. For the one-dimensional potentials of diatomic molecules, the most successful such model to date is arguably the ``Morse/Long-Range'' (MLR) function developed by R. J. Le Roy and coworkers. It is very flexible and is everywhere differentiable to all orders; it incorporates the correct predicted long-range behaviour, extrapolates sensibly at both large and small distances, and two of its defining parameters are always the physically meaningful well depth D_e and equilibrium distance r_e. Extensions of this model, called the Multi-Dimensional Morse/Long-Range (MD-MLR) function, have been applied successfully to atom-plus-linear-molecule, linear-molecule-linear-molecule, and atom-plus-non-linear-molecule systems. However, there are several technical challenges faced in modelling the interactions of general molecule-molecule systems, such as the absence of radial minima for some relative alignments, difficulties in fitting short-range potential energies, and challenges in determining relative-orientation-dependent long-range coefficients. This talk will illustrate some of these challenges and describe our ongoing work in addressing them. Mol. Phys. 105, 663 (2007); J. Chem. Phys. 131, 204309 (2009); Mol. Phys. 109, 435 (2011); Phys. Chem. Chem. Phys. 10, 4128 (2008); J. Chem. Phys. 130, 144305 (2009); J. Chem. Phys. 132, 214309 (2010); J. Chem. Phys. 140, 214309 (2010).
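For orientation, the one-dimensional MLR function referred to here is usually written along the following lines (a sketch of the form as it commonly appears in the literature, with u_LR(r) the assumed inverse-power long-range tail and β(r) a constrained exponent function):

\[
V_{\mathrm{MLR}}(r) = D_e\left[1 - \frac{u_{\mathrm{LR}}(r)}{u_{\mathrm{LR}}(r_e)}\,
e^{-\beta(r)\, y_p(r)}\right]^{2}, \qquad
y_p(r) = \frac{r^{p} - r_e^{p}}{r^{p} + r_e^{p}}, \qquad
u_{\mathrm{LR}}(r) = \sum_{n}\frac{C_n}{r^{n}},
\]

with the limiting value β(∞) = ln[2D_e/u_LR(r_e)] ensuring that V(r) approaches D_e - u_LR(r) at large r, which is how the correct long-range behaviour is built into the model.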
Scaling behavior of ground-state energy cluster expansion for linear polyenes
NASA Astrophysics Data System (ADS)
Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.
Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.
The P-factor and atomic mass systematics: Application to medium mass nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brenner, D.S.; Haustein, P.E.; Casten, R.F.
1988-01-01
The P formalism was applied to atomic mass systematics for medium and heavy nuclei. The P-factor linearizes the structure-dependent part of the nuclear mass in those regions which are free from subshell effects, indicating that the attractive quadrupole p-n force plays an important role in determining the binding of valence nucleons. Where marked non-linearities occur, the P-factor provides a means for recognizing subshell closures and/or other structural features not embodied in the simple assumptions of abrupt shell or subshell changes. These are thought to be regions where the monopole part of the p-n interaction is highly orbit dependent and alters the underlying single-particle structure as a function of A, N or Z. Finally, in those regions where the systematics are smooth and subshells are absent, the P-factor provides a means for predicting masses of some nuclei far-from-stability by interpolation rather than by extrapolation. 5 figs.
Unveiling saturation effects from nuclear structure function measurements at the EIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marquet, Cyrille; Moldes, Manoel R.; Zurita, Pia
Here, we analyze the possibility of extracting a clear signal of non-linear parton saturation effects from future measurements of nuclear structure functions at the Electron–Ion Collider (EIC), in the small-x region. Our approach consists in generating pseudodata for electron-gold collisions, using the running-coupling Balitsky–Kovchegov evolution equation, and in assessing the compatibility of these saturated pseudodata with existing sets of nuclear parton distribution functions (nPDFs), extrapolated if necessary. The level of disagreement between the two is quantified by applying a Bayesian reweighting technique. This allows us to infer the parton distributions needed in order to describe the pseudodata, which we find quite different from the actual distributions, especially for sea quarks and gluons. This tension suggests that, should saturation effects impact the future nuclear structure function data as predicted, a successful refitting of the nPDFs may not be achievable, which would unambiguously signal the presence of non-linear effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L.J.
1986-02-01
The 5s² ¹S₀ – 5s5p ¹,³P_J energy intervals in the Cd isoelectronic sequence have been investigated through a semiempirical systematization of recent measurements and through the performance of ab initio multiconfiguration Dirac-Fock calculations. Screening-parameter reductions of the spin-orbit and exchange energies both for the observed data and for the theoretically computed values establish the existence of empirical linearities similar to those exploited earlier for the Be, Mg, and Zn sequences. This permits extrapolative isoelectronic predictions of the relative energies of the 5s5p levels, which can be connected to 5s² using intersinglet intervals obtained from empirically corrected ab initio calculations. These linearities have also been examined homologously for the Zn, Cd, and Hg sequences, and common relationships have been found that accurately describe all three of these sequences.
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantized in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
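For readers unfamiliar with the technique named in this abstract, Richardson extrapolation combines solutions obtained on two meshes so that the leading truncation-error term cancels; a sketch for a scheme of nominal order p on mesh sizes h and h/2:

\[
A_{\mathrm{extrap}} = A_{h/2} + \frac{A_{h/2} - A_{h}}{2^{p} - 1},
\]

which is formally of higher order than either input and can therefore serve as the reference solution against which the truncation error of A_h and A_{h/2} is isolated in each norm.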
Dormitory Solar-Energy-System Economics
NASA Technical Reports Server (NTRS)
1982-01-01
102-page report analyzes long-term economic performance of a prepackaged solar energy assembly system at a dormitory installation and extrapolates to four additional sites about the U.S. Method of evaluation is f-chart procedure for solar-heating and domestic hotwater systems.
Biologically-based pharmacokinetic models are being increasingly used in the risk assessment of environmental chemicals. These models are based on biological, mathematical, statistical and engineering principles. Their potential uses in risk assessment include extrapolation betwe...
Frequency Comparison of ¹⁷¹Yb⁺ Ion Optical Clocks at PTB and NPL via GPS PPP.
Leute, J; Huntemann, N; Lipphardt, B; Tamm, Christian; Nisbet-Jones, P B R; King, S A; Godun, R M; Jones, J M; Margolis, H S; Whibberley, P B; Wallin, A; Merimaa, M; Gill, P; Peik, E
2016-07-01
We used precise point positioning, a well-established GPS carrier-phase frequency transfer method, to perform a direct remote comparison of two optical frequency standards based on single laser-cooled ¹⁷¹Yb⁺ ions operated at the National Physical Laboratory (NPL), U.K., and the Physikalisch-Technische Bundesanstalt (PTB), Germany. At both institutes, an active hydrogen maser serves as a flywheel oscillator which is connected to a GPS receiver as an external frequency reference and compared simultaneously to a realization of the unperturbed frequency of the ²S₁/₂(F=0)–²D₃/₂(F=2) electric quadrupole transition in ¹⁷¹Yb⁺ via an optical femtosecond frequency comb. To profit from long coherent GPS-link measurements, we extrapolate the fractional frequency difference over the various data gaps in the optical-clock-to-maser comparisons, which introduces maser noise into the frequency comparison but improves the uncertainty arising from the GPS-link instability. We determined the total statistical uncertainty, consisting of the GPS-link uncertainty and the extrapolation uncertainties, for several extrapolation schemes. Using the extrapolation scheme with the smallest combined uncertainty, we find a fractional frequency difference of -1.3×10⁻¹⁵ with a combined uncertainty of 1.2×10⁻¹⁵ for a total measurement time of 67 h. This result is consistent with an agreement of the frequencies realized by both optical clocks and with recent absolute frequency measurements against caesium fountain clocks within the corresponding uncertainties.
Kingma, J G; Martin, J; Rouleau, J R
1994-07-01
Instantaneous diastolic left coronary artery pressure-flow relations (PFR) shift during acute tamponade as pressure surrounding the heart increases. Coronary pressure at zero flow (Pf = 0) on the linear portion of the PFR is the weighted mean of the different myocardial waterfall pressures, the distribution of which varies across the left ventricular wall during diastole. However, instantaneous PFR measured in large epicardial coronary arteries cannot be used to estimate Pf = 0 in the different myocardial tissue layers. During coronary vasodilatation in a capacitance-free model, myocardial PFR differs from subendocardium to subepicardium. Therefore, we studied the effects of acute tamponade during maximal pharmacologically induced coronary vasodilatation on myocardial PFR in in situ anesthetized dogs. Tamponade reduced cardiac output, aortic pressure, and coronary blood flow. Results demonstrate that different mechanisms influence distribution of myocardial blood flow during tamponade. Subepicardial vascular resistance is unchanged and the extrapolated Pf = 0 is increased, thereby shifting the PFR to a higher intercept on the pressure axis. Subendocardial vascular resistance is increased while the extrapolated Pf = 0 remains unchanged. Results indicate that in the setting of acute tamponade with coronary vasodilatation different mechanisms regulate the distribution of myocardial blood flow: in the subepicardium only outflow pressure increases, whereas in the subendocardium only vascular resistance increases.
NASA Technical Reports Server (NTRS)
Furnstenau, Norbert; Ellis, Stephen R.
2015-01-01
In order to determine the required visual frame rate (FR) for minimizing prediction errors with out-the-window video displays at remote/virtual airport towers, thirteen active air traffic controllers viewed high-dynamic-fidelity simulations of landing aircraft and decided whether aircraft would stop as if to be able to make a turnoff or whether a runway excursion would be expected. The viewing conditions and simulation dynamics replicated visual rates and environments of transport aircraft landing at small commercial airports. The required frame rate was estimated using Bayes inference on prediction errors by linear FR extrapolation of event probabilities conditional on predictions (stop, no-stop). Furthermore, estimates were obtained from exponential model fits to the parametric and non-parametric perceptual discriminabilities d' and A (average area under ROC curves) as dependent on FR. Decision errors are biased towards a preference for overshoot and appear due to an illusory increase in perceived speed at low frame rates. Both the Bayes and A extrapolations yield a frame-rate requirement of 35 < FRmin < 40 Hz. When comparing with published results [12] on shooter game scores, the model-based d'(FR) extrapolation exhibits the best agreement and indicates an even higher FRmin > 40 Hz for minimizing decision errors. Definitive recommendations require further experiments with FR > 30 Hz.
Modeling of transitional flows
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1988-01-01
An effort directed at developing improved transitional models was initiated. The focus of this work was concentrated on the critical assessment of a popular existing transitional model developed by McDonald and Fish in 1972. The objective of this effort was to identify the shortcomings of the McDonald-Fish model and to use the insights gained to suggest modifications or alterations of the basic model. In order to evaluate the transitional model, a compressible boundary layer code was required. Accordingly, a two-dimensional compressible boundary layer code was developed. The program was based on a three-point fully implicit finite difference algorithm where the equations were solved in an uncoupled manner with second order extrapolation used to evaluate the non-linear coefficients. Iteration was offered as an option if the extrapolation error could not be tolerated. The differencing scheme was arranged to be second order in both spatial directions on an arbitrarily stretched mesh. A variety of boundary condition options were implemented including specification of an external pressure gradient, specification of a wall temperature distribution, and specification of an external temperature distribution. Overall the results of the initial phase of this work indicate that the McDonald-Fish model does a poor job at predicting the details of the turbulent flow structure during the transition region.
Rate dependent strengths of some solder joints
NASA Astrophysics Data System (ADS)
Williamson, D. M.; Field, J. E.; Palmer, S. J. P.; Siviour, C. R.
2007-08-01
The shear strengths of three lead-free solder joints have been measured over the range of loading rates 10⁻³ to ~10⁵ mm min⁻¹. Binary (SnAg), ternary (SnAgCu) and quaternary (Castin: SnAgCuSb) alloys have been compared to a conventional binary SnPb solder alloy. Results show that at loading rates from 10⁻³ to 10² mm min⁻¹, all four materials exhibit a linear relationship between the shear strength and the loading rate when the data are plotted on a log-log plot. At the highest loading rate of 10⁵ mm min⁻¹, the strengths of the binary alloys were in agreement with extrapolations made from the lower loading rate data. In contrast, the strengths of the higher order alloys were found to be significantly lower than those predicted by extrapolation. This is explained by a change in failure mechanism on the part of the higher order alloys. Similar behaviour was found in measurements of the tensile strengths of solder joints using a novel high-rate loading tensile test. Optical and electron microscopy were used to examine the microstructures of interest in conjunction with energy dispersive x-ray analysis for elemental identification. The effect of artificial aging and reflow of the solder joints is also reported.
Brunori, Paola; Masi, Piergiorgio; Faggiani, Luigi; Villani, Luciano; Tronchin, Michele; Galli, Claudio; Laube, Clarissa; Leoni, Antonella; Demi, Maila; La Gioia, Antonio
2011-04-11
Neonatal jaundice might lead to severe clinical consequences. Measurement of bilirubin in samples is subject to interference from hemolysis. Above a method-dependent cut-off value of measured hemolysis, the bilirubin value is not accepted and a new sample is required for evaluation, although this is not always possible, especially with newborns and cachectic oncological patients. When usage of different methods, less prone to interferences, is not feasible, an alternative recovery method for the analytical significance of rejected data might help clinicians make appropriate decisions. We studied the effects of hemolysis on total bilirubin measurement, comparing the hemolysis-interfered bilirubin measurement with the non-interfered value. Interference curves were extrapolated over a wide range of bilirubin (0-30 mg/mL) and hemolysis (H Index 0-1100). Interference "altitude" curves were calculated and plotted. A bimodal acceptance table was calculated. The non-interfered bilirubin of given samples was calculated by linear interpolation between the nearest lower and upper interference curves. Rejection of interference-sensitive data from hemolysed samples for every method should be based not upon the interferent concentration but upon a more complex algorithm based upon the concentration-dependent bimodal interaction between the interfered analyte and the measured interferent. The altitude-curve cartography approach to interfered assays may help laboratories to build up their own method-dependent algorithm and to improve the trueness of their data by choosing a cut-off value different from the one (-10% interference) proposed by manufacturers. When re-sampling or an alternative method is not available, the altitude-curve cartography approach might also represent an alternative recovery method for the analytical significance of rejected data. Copyright © 2011 Elsevier B.V. All rights reserved.
The Extrapolation of Elementary Sequences
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.
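The algorithm described here handles general constructive data types; the snippet below is not that algorithm, only a minimal Python illustration of the extrapolation task for the special case of integer sequences generated by a polynomial, using a finite-difference table.

```python
def extrapolate_polynomial(seq, n_more=1):
    """Extend a numeric sequence by assuming it is generated by a polynomial:
    build a finite-difference table and propagate the last row forward."""
    diffs = [list(seq)]
    while len(diffs[-1]) > 1 and any(d != 0 for d in diffs[-1]):
        last = diffs[-1]
        diffs.append([b - a for a, b in zip(last, last[1:])])
    out = list(seq)
    for _ in range(n_more):
        new = diffs[-1][-1]
        # work back up the table: each row's next entry is its last entry
        # plus the newly computed entry of the row below
        for row in reversed(diffs[:-1]):
            new = row[-1] + new
            row.append(new)
        diffs[-1].append(diffs[-1][-1])   # keep the bottom (constant) row in step
        out.append(diffs[0][-1])
    return out

print(extrapolate_polynomial([1, 4, 9, 16], 2))   # -> [1, 4, 9, 16, 25, 36]
```

Such a difference-table extrapolator has no notion of confidence in competing hypotheses, which is precisely what the Bayesian model of prediction described above adds.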
Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?
NASA Astrophysics Data System (ADS)
Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.
2016-02-01
It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C₀ lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren extrapolation, even if the average austenite composition is outside.
Estimating Demand for Industrial and Commercial Land Use Given Economic Forecasts
Batista e Silva, Filipe; Koomen, Eric; Diogo, Vasco; Lavalle, Carlo
2014-01-01
Current developments in the field of land use modelling point towards greater level of spatial and thematic resolution and the possibility to model large geographical extents. Improvements are taking place as computational capabilities increase and socioeconomic and environmental data are produced with sufficient detail. Integrated approaches to land use modelling rely on the development of interfaces with specialized models from fields like economy, hydrology, and agriculture. Impact assessment of scenarios/policies at various geographical scales can particularly benefit from these advances. A comprehensive land use modelling framework includes necessarily both the estimation of the quantity and the spatial allocation of land uses within a given timeframe. In this paper, we seek to establish straightforward methods to estimate demand for industrial and commercial land uses that can be used in the context of land use modelling, in particular for applications at continental scale, where the unavailability of data is often a major constraint. We propose a set of approaches based on ‘land use intensity’ measures indicating the amount of economic output per existing areal unit of land use. A base model was designed to estimate land demand based on regional-specific land use intensities; in addition, variants accounting for sectoral differences in land use intensity were introduced. A validation was carried out for a set of European countries by estimating land use for 2006 and comparing it to observations. The models’ results were compared with estimations generated using the ‘null model’ (no land use change) and simple trend extrapolations. Results indicate that the proposed approaches clearly outperformed the ‘null model’, but did not consistently outperform the linear extrapolation. An uncertainty analysis further revealed that the models’ performances are particularly sensitive to the quality of the input land use data. In addition, unknown future trends of regional land use intensity widen considerably the uncertainty bands of the predictions. PMID:24647587
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alkhatib, H; Oves, S; Gebreamlak, W
Purpose: To investigate discrepancies between measured percent depth dose curves of a linear accelerator at depths beyond the commissioning data and those generated by the treatment planning system (TPS) via extrapolation. Methods: Relative depth doses were measured on an Elekta Synergy™ linac for photon beams of 6 MV and 10 MV. SSDs for all curves were 100 cm and field sizes ranged from 4×4 to 35×35 cm². As most scanning tanks cannot provide depths greater than about 30 cm, percent depth dose measurements, extending to 45-cm depth, were performed in Solid Water™ using a 0.125-cc ionization chamber (PTW model TN31012). The buildup regions of the curves were acquired with a parallel plate chamber (PTW model TN34001). Extrapolated curves were generated by the TPS (Phillips Pinnacle³ v. 9.6) by applying beams to CT images of 50 cm of Solid Water™ with density override set to 1.0 g/cc. Results: Percent difference between the two sets of curves (measured and TPS) was investigated. There is significant discrepancy in the buildup region to a depth of 7 mm. Beyond this depth, the two sets show good agreement. When analyzing the tail end of the curves, we saw percent differences of between 1.2% and 3.2%. The highest disagreement for the 6-MV curves was 10×10 cm² (3%) and for the 10-MV curves it was the 35×35 cm² (3.2%). Conclusion: A qualitative analysis of the measured data versus PDD curves generated by the TPS shows generally good agreement beyond 1 cm. However, a measurable percent difference was observed when comparing curves at depths beyond that provided by the commissioning data and at depths in the buildup region. Possible explanations for this include inaccuracies in modeling of the Solid Water™ or drift in beam energy since commissioning. Additionally, closer attention must be paid for measurements in the buildup region.
Barnes, M P; Ebert, M A
2008-03-01
The concept of electron pencil-beam dose distributions is central to pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, which is a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter, 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges Multiple Coulomb Scattering (MCS) theory for corresponding square fields was used to fit resulting dose distributions so that extrapolation down to a pencil beam distribution could be made. The Monte Carlo codes, BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement between film, TLD and Monte Carlo simulation results were found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions.
Lunar terrain mapping and relative-roughness analysis
Rowan, Lawrence C.; McCauley, John F.; Holm, Esther A.
1971-01-01
Terrain maps of the equatorial zone (long 70° E.-70° W. and lat 10° N-10° S.) were prepared at scales of 1:2,000,000 and 1:1,000,000 to classify lunar terrain with respect to roughness and to provide a basis for selecting sites for Surveyor and Apollo landings as well as for Ranger and Lunar Orbiter photographs. The techniques that were developed as a result of this effort can be applied to future planetary exploration. By using the best available earth-based observational data and photographs, 1:1,000,000-scale U.S. Geological Survey lunar geologic maps, and U.S. Air Force Aeronautical Chart and Information Center LAC charts, lunar terrain was described by qualitative and quantitative methods and divided into four fundamental classes: maria, terrae, craters, and linear features. Some 35 subdivisions were defined and mapped throughout the equatorial zone, and, in addition, most of the map units were illustrated by photographs. The terrain types were analyzed quantitatively to characterize and order their relative-roughness characteristics. Approximately 150,000 east-west slope measurements made by a photometric technique (photoclinometry) in 51 sample areas indicate that algebraic slope-frequency distributions are Gaussian, and so arithmetic means and standard deviations accurately describe the distribution functions. The algebraic slope-component frequency distributions are particularly useful for rapidly determining relative roughness of terrain. The statistical parameters that best describe relative roughness are the absolute arithmetic mean, the algebraic standard deviation, and the percentage of slope reversal. Statistically derived relative-relief parameters are desirable supplementary measures of relative roughness in the terrae. Extrapolation of relative roughness for the maria was demonstrated using Ranger VII slope-component data and regional maria slope data, as well as the data reported here. It appears that, for some morphologically homogeneous mare areas, relative roughness can be extrapolated to the large scales from measurements at small scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Truster, T. J.; Cochran, K. B.
Advanced reactors designed to operate at higher temperatures than current light water reactors require structural materials with high creep strength and creep-fatigue resistance to achieve long design lives. Grade 91 is a ferritic/martensitic steel designed for long creep life at elevated temperatures. It has been selected as a candidate material for sodium fast reactor intermediate heat exchangers and other advanced reactor structural components. This report focuses on the creep deformation and rupture life of Grade 91 steel. The time required to complete an experiment limits the availability of long-life creep data for Grade 91 and other structural materials. Design methods often extrapolate the available shorter-term experimental data to longer design lives. However, extrapolation methods tacitly assume the underlying material mechanisms causing creep for long-life/low-stress conditions are the same as the mechanisms controlling creep in the short-life/high-stress experiments. A change in mechanism for long-term creep could cause design methods based on extrapolation to be non-conservative. The goal for physically-based microstructural models is to accurately predict material response in experimentally-inaccessible regions of design space. An accurate physically-based model for creep represents all the material mechanisms that contribute to creep deformation and damage and predicts the relative influence of each mechanism, which changes with loading conditions. Ideally, the individual mechanism models adhere to the material physics and not an empirical calibration to experimental data, and so the model remains predictive for a wider range of loading conditions. This report describes such a physically-based microstructural model for Grade 91 at 600 °C. The model explicitly represents competing dislocation and diffusional mechanisms in both the grain bulk and grain boundaries. The model accurately recovers the available experimental creep curves at higher stresses and the limited experimental data at lower stresses, predominately primary creep rates. The current model considers only one temperature. However, because the model parameters are, for the most part, directly related to the physics of fundamental material processes, the temperature dependence of the properties is known. Therefore, temperature dependence can be included in the model with limited additional effort. The model predicts a mechanism shift for 600 °C at approximately 100 MPa from a dislocation-dominated regime at higher stress to a diffusion-dominated regime at lower stress. This mechanism shift impacts the creep life, notch-sensitivity, and, likely, creep ductility of Grade 91. In particular, the model predicts existing extrapolation methods for creep life may be non-conservative when attempting to extrapolate data for higher stress creep tests to low stress, long-life conditions. Furthermore, the model predicts a transition from notch-strengthening behavior at high stress to notch-weakening behavior at lower stresses. Both behaviors may affect the conservatism of existing design methods.
Extrapolating Survival from Randomized Trials Using External Data: A Review of Methods
Jackson, Christopher; Stevens, John; Ren, Shijie; Latimer, Nick; Bojke, Laura; Manca, Andrea; Sharples, Linda
2016-01-01
This article describes methods used to estimate parameters governing long-term survival, or times to other events, for health economic models. Specifically, the focus is on methods that combine shorter-term individual-level survival data from randomized trials with longer-term external data, thus using the longer-term data to aid extrapolation of the short-term data. This requires assumptions about how trends in survival for each treatment arm will continue after the follow-up period of the trial. Furthermore, using external data requires assumptions about how survival differs between the populations represented by the trial and external data. Study reports from a national health technology assessment program in the United Kingdom were searched, and the findings were combined with “pearl-growing” searches of the academic literature. We categorized the methods that have been used according to the assumptions they made about how the hazards of death vary between the external and internal data and through time, and we discuss the appropriateness of the assumptions in different circumstances. Modeling choices, parameter estimation, and characterization of uncertainty are discussed, and some suggestions for future research priorities in this area are given. PMID:27005519
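As a deliberately simplified illustration of the kind of assumption this review categorizes, the sketch below extrapolates one trial arm beyond its follow-up using an assumed Weibull fit, and then imposes an external reference hazard as a floor beyond the follow-up horizon. The Weibull parameters, follow-up time and external hazard are all hypothetical, and this is only one of the many modeling choices discussed in the article.

```python
import numpy as np

k, lam = 1.2, 4.0        # assumed Weibull shape/scale (years) fitted to trial data
t_follow = 3.0           # trial follow-up horizon (years)
h_external = 0.10        # assumed constant external (e.g. registry) hazard per year

t = np.linspace(0.0, 15.0, 301)
h_weibull = (k / lam) * (t / lam) ** (k - 1)
# within follow-up: trial-based hazard; beyond: hazard at least as large as external
h = np.where(t <= t_follow, h_weibull, np.maximum(h_weibull, h_external))
# survival from the cumulative hazard (trapezoidal integration)
H = np.concatenate(([0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(t))))
S = np.exp(-H)
print(S[t <= t_follow][-1], S[-1])   # survival at end of follow-up and at 15 years
```

Different assumptions about how the trial and external hazards relate after follow-up (equal, additive, proportional, or converging) lead to different extrapolated curves, which is the central issue the review addresses.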
Mesopotamia, A Difficult but Interesting Topic.
ERIC Educational Resources Information Center
Kavett, Hyman
1979-01-01
Describes a method to help students become participants in historical analysis rather than observers of ancient history. Mesopotamia is used as a case study of a culture for which opportunities exist for conjecture, hypothesis formation, research, extrapolation, problem solving, and statements of causality. (Author/DB)
Uncertainties Associated with Flux Measurements Due to Heterogeneous Contaminant Distributions
Mass flux and mass discharge measurements at contaminated sites have been applied to assist with remedial management, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Extrapolation across un-sampled space is necessary when usi...
Absolute calibration of Doppler coherence imaging velocity images
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Allen, S. L.; Meyer, W. H.; Howard, J.
2017-08-01
A new technique has been developed for absolutely calibrating a Doppler Coherence Imaging Spectroscopy interferometer for measuring plasma ion and neutral velocities. An optical model of the interferometer is used to generate zero-velocity reference images for the plasma spectral line of interest from a calibration source some spectral distance away. Validation of this technique using a tunable diode laser demonstrated an accuracy better than 0.2 km/s over an extrapolation range of 3.5 nm, a two-order-of-magnitude improvement over linear approaches. While a well-characterized and very stable interferometer is required, this technique opens up the possibility of calibrated velocity measurements in difficult viewing geometries and for complex spectral line shapes.
Kurita, N; Ronning, F; Tokiwa, Y; Bauer, E D; Subedi, A; Singh, D J; Thompson, J D; Movshovich, R
2009-04-10
We have performed low-temperature specific heat and thermal conductivity measurements of the Ni-based superconductor BaNi2As2 (T_c = 0.7 K) in a magnetic field. In zero field, thermal conductivity shows T-linear behavior in the normal state and exhibits a BCS-like exponential decrease below T_c. The field dependence of the residual thermal conductivity extrapolated to zero temperature is indicative of a fully gapped superconductor. This conclusion is supported by the analysis of the specific heat data, which are well fit by the BCS temperature dependence from T_c down to the lowest temperature of 0.1 K.
Effect of water vapor on sound absorption in nitrogen at low frequency/pressure ratios
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.; Griffin, W. A.
1981-01-01
Sound absorption measurements were made in N2-H2O binary mixtures at 297 K over the frequency/pressure range f/P of 0.1-2500 Hz/atm to investigate the vibrational relaxation peak of N2 and its location on the f/P axis as a function of humidity. At low humidities the best fit to a linear relationship between f/P(max) and humidity yields an intercept of 0.013 Hz/atm and a slope of 20,000 Hz/atm-mole fraction. The reaction rate constants derived from this model are lower than those obtained from the extrapolation of previous high-temperature data.
Kinetic limitations on the diffusional control theory of the ablation rate of carbon.
NASA Technical Reports Server (NTRS)
Maahs, H. G.
1971-01-01
It is shown that the theoretical maximum oxidation rate is limited in many cases even at temperatures much higher than 1650 deg K, not by oxygen transport, but by the kinetics of the carbon-oxygen reaction itself. Mass-loss rates have been calculated at air pressures of 0.01 atm, 1 atm, and 100 atm. It is found that at high temperatures the rate of the oxidation reaction is much slower than has generally been assumed on the basis of a simple linear extrapolation of Scala's 'fast' and 'slow' rate expressions. Accordingly it cannot be assumed that a transport limitation inevitably must be reached at high temperatures.
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady state far field solution. The boundary improves convergence to steady state in single-grid temporal integration schemes using both regular-time-stepping and local-time-stepping. The far-field boundary may be near the trailing edge of the body which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition the solution produced is smoother in the far-field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
AXES OF EXTRAPOLATION IN RISK ASSESSMENTS
Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...
CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS
Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...
Line-of-sight extrapolation noise in dust polarization
NASA Astrophysics Data System (ADS)
Poh, Jason; Dodelson, Scott
2017-05-01
The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g. 350 GHz) is due solely to dust and then extrapolate the signal down to a lower frequency (e.g. 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of ~20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise on a greybody dust model consistent with Planck and Pan-STARRS observations, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r ≲ 0.0015 in the greybody dust models considered in this paper.
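The mechanism can be seen with a toy calculation: two clouds with different temperatures and polarization angles on the same sight line contribute with different relative weights at 350 GHz and at 150 GHz, so the summed polarization angle shifts between the two bands. The sketch below assumes a simple greybody scaling ν^β B_ν(T); the cloud amplitudes, temperatures, angles and the emissivity index β = 1.6 are invented for illustration.

```python
import numpy as np

h_pl, k_B = 6.626e-34, 1.381e-23    # Planck and Boltzmann constants (SI)

def greybody(nu_ghz, T, beta=1.6):
    """Greybody intensity (arbitrary normalisation): nu^beta * B_nu(T)."""
    nu = nu_ghz * 1e9
    return nu ** beta * nu ** 3 / np.expm1(h_pl * nu / (k_B * T))

def stokes_qu(nu_ghz, clouds):
    """Sum Stokes Q, U over clouds given (amplitude, temperature, angle) tuples."""
    q = sum(a * greybody(nu_ghz, T) * np.cos(2 * psi) for a, T, psi in clouds)
    u = sum(a * greybody(nu_ghz, T) * np.sin(2 * psi) for a, T, psi in clouds)
    return q, u

clouds = [(1.0, 15.0, np.deg2rad(10.0)),    # hypothetical cloud 1
          (0.7, 25.0, np.deg2rad(60.0))]    # hypothetical cloud 2
for nu in (350.0, 150.0):
    q, u = stokes_qu(nu, clouds)
    print(nu, 0.5 * np.degrees(np.arctan2(u, q)))   # summed polarization angle
```

Because the two temperatures weight the clouds differently outside the Rayleigh-Jeans limit, the printed angles differ, i.e. a single scaling factor cannot map the 350 GHz signal onto the 150 GHz signal without error.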
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perko, Z; Bortfeld, T; Hong, T
Purpose: The safe use of radiotherapy requires the knowledge of tolerable organ doses. For experimental fractionation schemes (e.g. hypofractionation) these are typically extrapolated from traditional fractionation schedules using the Biologically Effective Dose (BED) model. This work demonstrates that using the mean dose in the standard BED equation may overestimate tolerances, potentially leading to unsafe treatments. Instead, extrapolation of mean dose tolerances should take the spatial dose distribution into account. Methods: A formula has been derived to extrapolate mean physical dose constraints such that they are mean BED equivalent. This formula constitutes a modified BED equation where the influence of the spatial dose distribution is summarized in a single parameter, the dose shape factor. To quantify effects we analyzed 14 liver cancer patients previously treated with proton therapy in 5 or 15 fractions, for whom also photon IMRT plans were available. Results: Our work has two main implications. First, in typical clinical plans the dose distribution can have significant effects. When mean dose tolerances are extrapolated from standard fractionation towards hypofractionation they can be overestimated by 10–15%. Second, the shape difference between photon and proton dose distributions can cause 30–40% differences in mean physical dose for plans having the same mean BED. The combined effect when extrapolating proton doses to mean BED equivalent photon doses in traditional 35 fraction regimens resulted in up to 7–8 Gy higher doses than when applying the standard BED formula. This can potentially lead to unsafe treatments (in 1 of the 14 analyzed plans the liver mean dose was above its 32 Gy tolerance). Conclusion: The shape effect should be accounted for to avoid unsafe overestimation of mean dose tolerances, particularly when estimating constraints for hypofractionated regimens. In addition, tolerances established for a given treatment modality cannot necessarily be applied to other modalities with drastically different dose distributions.
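For context, the standard BED relation that such tolerance extrapolations start from is the per-fraction form below (n fractions of dose d, tissue parameter α/β); the abstract's point is that inserting the organ mean dose directly for d neglects the spatial dose distribution unless a shape correction is applied. A sketch of the usual relations:

\[
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right), \qquad
n_1 d_1\left(1 + \frac{d_1}{\alpha/\beta}\right) = n_2 d_2\left(1 + \frac{d_2}{\alpha/\beta}\right),
\]

the second equation being the iso-effect condition conventionally used to translate a tolerance from one fractionation scheme to another.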
Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.
Mobli, Mehdi; Hoch, Jeffrey C
2014-11-01
Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.
Rational approximations from power series of vector-valued meromorphic functions
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C → C^N, which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they were akin to Padé approximations. In particular, we proved a Koenig-type theorem concerning their poles and a de Montessus-type theorem concerning their uniform convergence. We showed how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the procedures above and of the accompanying theoretical results to functions defined in arbitrary linear spaces were also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploited the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices which may or may not be diagonalizable, and are very closely related with known Krylov subspace methods.
NMR measurement of bitumen at different temperatures.
Yang, Zheng; Hirasaki, George J
2008-06-01
Heavy oil (bitumen) is characterized by its high viscosity and density, which is a major obstacle to both well logging and recovery. Due to the loss of T2 relaxation components shorter than the echo spacing (TE) and interference from the water signal, estimation of heavy oil properties from NMR T2 measurements is usually problematic. In this work, a new method has been developed to overcome the echo spacing restriction of the NMR spectrometer during application to heavy oil (bitumen). A FID measurement supplemented the start of the CPMG. Constrained by its initial magnetization (M0) estimated from the FID and assuming a log-normal distribution for bitumen, the corrected T2 relaxation time of the bitumen sample can be obtained from the interpretation of the CPMG data. This new method successfully overcomes the TE restriction of the NMR spectrometer and is nearly independent of the TE applied in the measurement. The method was applied to measurements at elevated temperatures (8-90 degrees C). Due to the significant signal loss within the dead time of the FID, the directly extrapolated M0 of bitumen at relatively lower temperatures (<60 degrees C) was found to be underestimated. However, resulting from the remarkably lowered viscosity, the extrapolated M0 of bitumen above 60 degrees C can be reasonably assumed to be the real value. In this manner, based on the extrapolation at higher temperatures (≥60 degrees C), the M0 value of bitumen at lower temperatures (<60 degrees C) can be corrected by Curie's Law. Consequently, some important petrophysical properties of bitumen, such as hydrogen index (HI), fluid content and viscosity, were evaluated by using the corrected T2.
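The Curie-law step at the end amounts to the equilibrium magnetization scaling inversely with absolute temperature, so that, taking the extrapolated value at a reference temperature T_ref of 60 °C or above as correct, the low-temperature value follows as (a sketch, temperatures in kelvin):

\[
M_0(T) = M_0(T_{\mathrm{ref}})\,\frac{T_{\mathrm{ref}}}{T}.
\]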
Effective-range function methods for charged particle collisions
NASA Astrophysics Data System (ADS)
Gaspard, David; Sparenberg, Jean-Marc
2018-04-01
Different versions of the effective-range function method for charged particle collisions are studied and compared. In addition, a novel derivation of the standard effective-range function is presented from the analysis of Coulomb wave functions in the complex plane of the energy. The recently proposed effective-range function denoted as Δℓ [Ramírez Suárez and Sparenberg, Phys. Rev. C 96, 034601 (2017), 10.1103/PhysRevC.96.034601] and an earlier variant [Hamilton et al., Nucl. Phys. B 60, 443 (1973), 10.1016/0550-3213(73)90193-4] are related to the standard function. The potential interest of Δℓ for the study of low-energy cross sections and weakly bound states is discussed in the framework of the proton-proton ¹S₀ collision. The resonant state of the proton-proton collision is successfully computed from the extrapolation of Δℓ instead of the standard function. It is shown that interpolating Δℓ can lead to useful extrapolation to negative energies, provided scattering data are known below one nuclear Rydberg energy (12.5 keV for the proton-proton system). This property is due to the connection between Δℓ and the effective-range function by Hamilton et al. that is discussed in detail. Nevertheless, such extrapolations to negative energies should be used with caution because Δℓ is not analytic at zero energy. The expected analytic properties of the main functions are verified in the complex energy plane by graphical color-based representations.
NASA Astrophysics Data System (ADS)
Coco, Armando; Russo, Giovanni
2018-05-01
In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method consists of a finite-difference method on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and is second order accurate in the solution and the gradient itself. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned on grid points that are close to the interface: a real value, that represents the numerical solution on that grid point, and a ghost value, that represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficient. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative and therefore suitable stencils must be chosen to discretize interface conditions in order to achieve second order accuracy in the solution and its gradient. A proper treatment of the interface conditions will allow the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jump in the coefficients: order of accuracy, monotonicity of the errors and good convergence factor are maintained by the scheme.
Possibilities and limitations of the kinetic plot method in supercritical fluid chromatography.
De Pauw, Ruben; Desmet, Gert; Broeckhoven, Ken
2013-08-30
Although supercritical fluid chromatography (SFC) is becoming a technique of increasing importance in the field of analytical chromatography, methods to compare the performance of SFC columns and separations in an unbiased way are not fully developed. The present study uses mathematical models to investigate the possibilities and limitations of the kinetic plot method in SFC, as this allows a wide range of operating pressures, retention and mobile phase conditions to be investigated easily. The variable column length (L) kinetic plot method was further investigated in this work. Since the pressure history is identical for each measurement, this method gives the true kinetic performance limit in SFC. The deviations of the traditional way of measuring the performance as a function of flow rate (fixed back pressure and column length) and of the isopycnic method with respect to this variable column length method were investigated under a wide range of operational conditions. It is found that, even with the variable L method, extrapolations towards other pressure drops are not valid in SFC (deviation of ~15% for an extrapolation from 50 to 200 bar pressure drop). The isopycnic method provides the best prediction, but its use is limited when operating closer to critical-point conditions. When an organic modifier is used, the predictions of both methods with respect to the variable L method improve (e.g. the deviation decreases from 20% to 2% when 20 mol% of methanol is added). Copyright © 2013 Elsevier B.V. All rights reserved.
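For context, the sketch below implements the classical (incompressible-flow) kinetic plot rescaling of plate-height measurements to the kinetic performance limit at a maximum pressure drop; the abstract's point is precisely that this simple rescaling needs care in SFC. Symbols and units follow common chromatographic usage and are not taken from the paper.

```python
# Classical kinetic plot transformation (liquid-like, incompressible flow):
# rescale (u0, H) measurements to the plate count N and dead time t0 that a
# column operated at the maximum pressure drop dp_max could deliver.
import numpy as np

def kinetic_plot(u0, H, dp_max, eta, kv0):
    """Return (N, t0) kinetic-plot coordinates from (u0, H) measurements.

    u0     : linear velocity [m/s]
    H      : plate height [m]
    dp_max : maximum allowed pressure drop [Pa]
    eta    : mobile-phase viscosity [Pa s]
    kv0    : column permeability [m^2]
    """
    u0, H = np.asarray(u0, dtype=float), np.asarray(H, dtype=float)
    t0 = (dp_max / eta) * kv0 / u0 ** 2         # dead time at dp_max
    N = (dp_max / eta) * kv0 / (u0 * H)         # plate count at dp_max
    return N, t0
```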
Extrapolation procedures in Mott electron polarimetry
NASA Technical Reports Server (NTRS)
Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.
1992-01-01
In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
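A thickness extrapolation of the kind discussed can be sketched as fitting the measured asymmetry versus foil thickness with candidate functional forms and reading off the zero-thickness intercept. The specific fitting functions and data below are illustrative assumptions, not necessarily those examined in the paper.

```python
# Hedged sketch: extrapolate Mott asymmetries A(t) to zero foil thickness,
# where single-atom scattering applies, using two common candidate forms.
import numpy as np
from scipy.optimize import curve_fit

def linear_form(t, a0, a1):
    return a0 * (1.0 - a1 * t)

def pade_form(t, a0, a1):
    return a0 / (1.0 + a1 * t)

def extrapolate_to_zero_thickness(thickness, asymmetry, model=pade_form):
    """Fit A(t) and return the zero-thickness asymmetry A0 and its uncertainty."""
    t = np.asarray(thickness, dtype=float)
    a = np.asarray(asymmetry, dtype=float)
    popt, pcov = curve_fit(model, t, a, p0=(a.max(), 0.0))
    return popt[0], np.sqrt(pcov[0, 0])

# Comparing A0 from different choices of `model` gives one handle on the
# systematic error introduced by the extrapolation itself.
```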
Exploring precipitation pattern scaling methodologies and robustness among CMIP5 models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kravitz, Ben; Lynch, Cary; Hartin, Corinne
2017-05-12
Pattern scaling is a well-established method for approximating modeled spatial distributions of changes in temperature by assuming a time-invariant pattern that scales with changes in global mean temperature. We compare two methods of pattern scaling for annual mean precipitation (regression and epoch difference) and evaluate which method is better in particular circumstances by quantifying their robustness to interpolation/extrapolation in time, inter-model variations, and inter-scenario variations. Both the regression and epoch-difference methods (the two most commonly used methods of pattern scaling) have good absolute performance in reconstructing the climate model output, measured as an area-weighted root mean square error. We decompose the precipitation response in the RCP8.5 scenario into a CO2 portion and a non-CO2 portion. Extrapolating RCP8.5 patterns to reconstruct precipitation change in the RCP2.6 scenario results in large errors due to violations of pattern scaling assumptions when this CO2-/non-CO2-forcing decomposition is applied. As a result, the methodologies discussed in this paper can help provide precipitation fields to be utilized in other models (including integrated assessment models or impacts assessment models) for a wide variety of scenarios of future climate change.
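A schematic comparison of the two pattern-scaling estimators (regression and epoch difference) might look like the sketch below; array names, shapes and epoch choices are placeholders rather than the study's actual processing chain.

```python
# Two pattern-scaling estimators for annual-mean precipitation.
# `precip` is assumed to have shape (years, lat, lon); `tas_global` is the
# matching global-mean temperature series. Placeholder data layout.
import numpy as np

def regression_pattern(precip, tas_global):
    """Per-grid-cell OLS slope of precipitation against global mean temperature."""
    dT = tas_global - tas_global.mean()
    dP = precip - precip.mean(axis=0)
    return np.einsum('t,tij->ij', dT, dP) / np.sum(dT ** 2)   # change per K

def epoch_difference_pattern(precip, tas_global,
                             base=slice(0, 20), end=slice(-20, None)):
    """End-epoch mean minus base-epoch mean, normalized by the corresponding
    change in global mean temperature."""
    dT = tas_global[end].mean() - tas_global[base].mean()
    return (precip[end].mean(axis=0) - precip[base].mean(axis=0)) / dT

# A scaled reconstruction for any scenario is then pattern * delta_T_global,
# which is the step whose assumptions the abstract tests across RCPs.
```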
Optimizing the use of rainbow trout hepatocytes for bioaccumulation assessments with fish
Measured rates of biotransformation by cryopreserved trout hepatocytes can be extrapolated to the whole animal as a means of predicting metabolism impacts on chemical bioaccumulation. Future use of these methods within a regulatory context requires, however, that they be standar...
7 CFR 2902.5 - Item designation.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., USDA will use life cycle cost information only from tests using the BEES analytical method. (c... availability of such items and the economic and technological feasibility of using such items, including life cycle costs. USDA will gather information on individual products within an item and extrapolate that...
Agro-ecoregionalization of Iowa using multivariate geographical clustering
Carol L. Williams; William W. Hargrove; Matt Leibman; David E. James
2008-01-01
Agro-ecoregionalization is categorization of landscapes for use in crop suitability analysis, strategic agroeconomic development, risk analysis, and other purposes. Past agro-ecoregionalizations have been subjective, expert opinion driven, crop specific, and unsuitable for statistical extrapolation. Use of quantitative analytical methods provides an opportunity for...