Infrared length scale and extrapolations for the no-core shell model
Wendt, K. A.; Forssén, C.; Papenbrock, T.; Sääf, D.
2015-06-03
In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for ^{6}Li. We apply our result and perform accurate IR extrapolations for bound states of ^{4}He, ^{6}He, ^{6}Li, and ^{7}Li. Finally, we also attempt to extrapolate NCSM results for ^{10}B and ^{16}O with bare interactions from chiral effective field theory over tens of MeV.
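The IR correction underlying extrapolations of this kind is commonly modeled as E(L) ≈ E_inf + a·exp(−2·k_inf·L). Under that assumed form, three energies at equally spaced effective lengths determine E_inf in closed form. A minimal sketch with illustrative constants (not the paper's data):

```python
import math

def ir_extrapolate(E):
    """Closed-form extrapolation for E(L) = E_inf + a*exp(-2*k*L),
    given three energies at equally spaced effective lengths L."""
    d1 = E[0] - E[1]
    d2 = E[1] - E[2]
    q = d2 / d1                      # equals exp(-2*k*dL)
    return E[2] - d2 * q / (1.0 - q)

# synthetic energies obeying the assumed IR law (illustrative constants)
E_inf, a, k = -32.0, 5.0, 0.5
E = [E_inf + a * math.exp(-2.0 * k * L) for L in (8.0, 10.0, 12.0)]
print(ir_extrapolate(E))             # recovers E_inf = -32.0
```

In practice one fits E_inf, a, and k_inf by least squares over many model spaces; the three-point version above is the minimal algebraic case.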
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil S.
2006-01-01
A study was performed to determine a limiting separation distance for the extrapolation of pressure signatures from cruise altitude to the ground. The study was performed at two wind-tunnel facilities with two research low-boom wind-tunnel models designed to generate ground pressure signatures with "flattop" shapes. Data acquired at the first wind-tunnel facility showed that pressure signatures had not achieved the desired low-boom features for extrapolation purposes at separation distances of 2 to 5 span lengths. However, data acquired at the second wind-tunnel facility at separation distances of 5 to 20 span lengths indicated the "limiting extrapolation distance" had been achieved so pressure signatures could be extrapolated with existing codes to obtain credible predictions of ground overpressures.
Interspecies extrapolation encompasses two related but distinct topic areas that are germane to quantitative extrapolation and hence computational toxicology-dose scaling and parameter scaling. Dose scaling is the process of converting a dose determined in an experimental animal ...
Ecotoxicological effects extrapolation models
Suter, G.W. II
1996-09-01
One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective, and for treatment of additional extrapolation issues.
The Extrapolation of Elementary Sequences
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.
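The stream-learning algorithm of the abstract is beyond a short sketch, but the flavor of sequence extrapolation can be shown with the classical finite-difference rule for integer sequences, a far simpler hypothesis class than elementary stream descriptions:

```python
def extrapolate_poly(seq):
    """Predict the next term by repeated differencing; valid when some
    k-th difference of the sequence is constant (polynomial sequences)."""
    rows = [list(seq)]
    while len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    # the next term is the sum of the last entries of all difference rows
    return sum(row[-1] for row in rows)

print(extrapolate_poly([1, 4, 9, 16, 25]))  # 36 (perfect squares)
print(extrapolate_poly([2, 4, 6, 8]))       # 10
```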
Local theory of extrapolation methods
NASA Astrophysics Data System (ADS)
Kulikov, Gennady
2010-03-01
In this paper we discuss the theory of one-step extrapolation methods applied both to ordinary differential equations and to index 1 semi-explicit differential-algebraic systems. The theoretical background of this numerical technique is the asymptotic global error expansion of numerical solutions obtained from general one-step methods. It was discovered independently by Henrici, Gragg and Stetter in 1962, 1964 and 1965, respectively. This expansion is also used in most global error estimation strategies as well. However, the asymptotic expansion of the global error of one-step methods is difficult to observe in practice. Therefore we give another substantiation of extrapolation technique that is based on the usual local error expansion in a Taylor series. We show that the Richardson extrapolation can be utilized successfully to explain how extrapolation methods perform. Additionally, we prove that the Aitken-Neville algorithm works for any one-step method of an arbitrary order s, under suitable smoothness.
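The Aitken-Neville scheme discussed here can be illustrated by building the extrapolation tableau on top of explicit Euler (a first-order one-step method), assuming the global error has an expansion in integer powers of h; the example problem y' = y, y(0) = 1 is illustrative:

```python
import math

def euler(f, y0, t1, n):
    """Explicit Euler with n steps on [0, t1]: a first-order one-step method."""
    h, y = t1 / n, y0
    for _ in range(n):
        y += h * f(y)
    return y

def aitken_neville(f, y0, t1, levels):
    """Extrapolation tableau with step halving; column j cancels the
    O(h^j) term of the assumed global error expansion."""
    T = [[euler(f, y0, t1, 2 ** i)] for i in range(levels)]
    for i in range(1, levels):
        for j in range(1, i + 1):
            T[i].append(T[i][j - 1]
                        + (T[i][j - 1] - T[i - 1][j - 1]) / (2 ** j - 1))
    return T[-1][-1]

approx = aitken_neville(lambda y: y, 1.0, 1.0, 5)
print(approx, math.e)   # the extrapolated value is close to e
```

With five levels the tableau reduces Euler's error at t = 1 from a few percent to below 10⁻³, which is the practical payoff of the asymptotic expansion the abstract describes.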
Systematic Errors and Graphical Extrapolation.
ERIC Educational Resources Information Center
Blickensderfer, Roger
1985-01-01
Presents a laboratory exercise designed to introduce graphical extrapolation. Major advantages of the method are in its simplicity and speed. The only measuring devices are a centimeter ruler and a micrometer caliper to check wall thickness. (JN)
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
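A minimal sketch of MPE, specialized to 2-D vector sequences so the small least-squares problem can be solved exactly by Cramer's rule (the iteration matrix below is illustrative, not from the paper):

```python
def mpe_2d(x):
    """Minimal polynomial extrapolation (MPE) from four 2-D iterates
    x[0]..x[3].  Differences u_j = x[j+1] - x[j]; solve U c = -u2 with
    columns u0, u1 (2x2 Cramer's rule), set gamma = (c0, c1, 1)
    normalized to sum 1, and return s = sum_j gamma_j * x[j]."""
    u = [[x[i + 1][k] - x[i][k] for k in range(2)] for i in range(3)]
    det = u[0][0] * u[1][1] - u[1][0] * u[0][1]
    c0 = (-u[2][0] * u[1][1] + u[1][0] * u[2][1]) / det
    c1 = (-u[0][0] * u[2][1] + u[2][0] * u[0][1]) / det
    gamma = [c0, c1, 1.0]
    total = sum(gamma)
    return [sum(g / total * xi[k] for g, xi in zip(gamma, x)) for k in range(2)]

# linear fixed-point iteration x_{n+1} = A x_n + b (illustrative matrix)
A = [[0.5, 0.2], [0.1, 0.4]]
b = [1.0, 1.0]
x = [[0.0, 0.0]]
for _ in range(3):
    p = x[-1]
    x.append([A[i][0] * p[0] + A[i][1] * p[1] + b[i] for i in range(2)])
print(mpe_2d(x))   # matches the exact fixed point (20/7, 15/7)
```

For a linear iteration in dimension 2 with essential degree k = 2, MPE reproduces the exact fixed point, which is the "exact solution for the right essential degree" property the abstract mentions.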
Infrared extrapolations for atomic nuclei
Furnstahl, R. J.; Hagen, Gaute; Papenbrock, Thomas F.; Wendt, Kyle A.
2015-01-01
Harmonic oscillator model-space truncations introduce systematic errors to the calculation of binding energies and other observables. We identify the relevant infrared (IR) scaling variable and give values for this nucleus-dependent quantity. We consider isotopes of oxygen computed with the coupled-cluster method from chiral nucleon–nucleon interactions at next-to-next-to-leading order and show that the IR component of the error is sufficiently understood to permit controlled extrapolations. By employing oscillator spaces with relatively large frequencies, well above the energy minimum, the ultraviolet corrections can be suppressed while IR extrapolations over tens of MeV remain accurate for ground-state energies. However, robust uncertainty quantification for extrapolated quantities that fully accounts for systematic errors is not yet developed.
Extrapolation limitations of multilayer feedforward neural networks
NASA Technical Reports Server (NTRS)
Haley, Pamela J.; Soloway, Donald
1992-01-01
The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.
Vertical extrapolations of wind speed
Doran, J.C.; Buck, J.W.; Heflick, S.K.
1982-09-01
The extrapolation of wind speeds and wind speed distributions from a lower to an upper level is examined, with particular emphasis on the power law approach. While the power laws are useful for representing the behavior of winds under a variety of conditions, they are shown to be inherently incorrect and misleading for extrapolations. The law's apparent simplicity nevertheless makes it attractive for certain purposes, and its performance at a number of windy sites is tested. The principal feature seems to be the large degree of scatter found from site to site, and even at a single site from one time to the next. Part of this is attributable to the effects of stability, as is seen by dividing the data into daytime and nighttime periods, but the scatter is by no means eliminated by this division. The behavior of the power law exponents is poorer still in complex terrain. While some general tendencies of these exponents can be found, their use cannot be recommended for anything more than a preliminary or rough estimate of wind speeds. Extrapolation formulas for Weibull distributions are also tested with the same data base. They are found to work reasonably well in the mean, but the uncertainties present make their use in any particular case somewhat risky. The use of kites to obtain estimates either of wind speed distributions or power law exponent distributions is simulated. As expected, there is a considerable degree of scatter associated with the results, but the use of kites seems to offer some small possibility of improvement compared to results obtained from the simple extrapolation formulas for Weibull distributions.
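The power law examined here is v(z) = v_ref (z/z_ref)^α. A sketch, using the common 1/7 neutral-stability exponent as an illustrative default:

```python
def power_law_wind(v_ref, z_ref, z, alpha=1.0 / 7.0):
    """Extrapolate wind speed with v(z) = v_ref * (z / z_ref)**alpha.
    The 1/7 exponent is a common neutral-stability default; the abstract
    stresses that site-to-site scatter makes any single alpha a rough guess."""
    return v_ref * (z / z_ref) ** alpha

# 5 m/s measured at 10 m, extrapolated to 50 m hub height
print(power_law_wind(5.0, 10.0, 50.0))   # about 6.3 m/s
```

As the abstract warns, the exponent varies strongly with stability and terrain, so results like this should be treated as preliminary estimates only.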
Extrapolating bound state data of anions into the metastable domain
NASA Astrophysics Data System (ADS)
Feuerbacher, Sven; Sommerfeld, Thomas; Cederbaum, Lorenz S.
2004-10-01
Computing energies of electronically metastable resonance states is still a great challenge. Both scattering techniques and quantum chemistry based L2 methods are very time consuming. Here we investigate two more economical extrapolation methods. Extrapolating bound state energies into the metastable region using increased nuclear charges was suggested almost 20 years ago. We critically evaluate this attractive technique employing our complex absorbing potential/Green's function method, which allows us to follow a bound state into the continuum. Using the 2Πg resonance of N2- and the 2Πu resonance of CO2- as examples, we found that the extrapolation works surprisingly well. The second extrapolation method involves increasing bond lengths until the sought resonance becomes stable. The keystone is to extrapolate the attachment energy rather than the total energy of the system. This method has the great advantage that the whole potential energy curve is obtained with quite good accuracy by the extrapolation. Limitations of the two techniques are discussed.
Extrapolation of acenocoumarol pharmacogenetic algorithms.
Jiménez-Varo, Enrique; Cañadas-Garre, Marisa; Garcés-Robles, Víctor; Gutiérrez-Pimentel, María José; Calleja-Hernández, Miguel Ángel
2015-11-01
Acenocoumarol (ACN) has a narrow therapeutic range that is especially difficult to control at the start of its administration. Various pharmacogenetic-guided dosing algorithms have been developed, but further work on their external validation is required. The aim of this study was to evaluate the extrapolation of pharmacogenetic algorithms for ACN as an alternative to the development of a specific algorithm for a given population. The predictive performance, deviation, accuracy, and clinical significance of five pharmacogenetic algorithms (EU-PACT, Borobia, Rathore, Markatos, Krishna Kumar) were compared in 189 stable ACN patients representing all indications for anticoagulant treatment. The correlation between the dose predictions of the five pharmacogenetic models ranged from 7.7 to 70.6%, and the percentage of patients with a correct prediction (deviation ≤20% from the actual ACN dose) ranged from 5.9 to 40.7%. The EU-PACT and Borobia pharmacogenetic dosing algorithms were the most accurate in our setting and evidenced the best clinical performance. Among the five models studied, the EU-PACT and Borobia pharmacogenetic dosing algorithms demonstrated the best potential for extrapolation. Copyright © 2015 Elsevier Inc. All rights reserved.
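The "correct prediction" criterion used in this study (deviation ≤20% from the actual dose) is straightforward to compute; a sketch with made-up doses, not study data:

```python
def within_20pct(predicted, actual):
    """Fraction of patients whose predicted dose deviates by at most 20%
    from the actual stable dose (the 'correct prediction' criterion)."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= 0.2 * a)
    return hits / len(actual)

# made-up weekly doses (mg), not study data
pred = [14.0, 21.0, 7.0, 30.0]
act = [15.0, 14.0, 7.5, 28.0]
print(within_20pct(pred, act))   # 0.75
```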
Ethanol kinetics: extent of error in back extrapolation procedures.
al-Lanqawi, Y; Moreland, T A; McEwen, J; Halliday, F; Durnin, C J; Stevenson, I H
1992-01-01
1. Plasma ethanol concentrations were measured in 24 male volunteers for 9 h after a single oral dose of 710 mg kg⁻¹. 2. The rate of decline of the plasma ethanol concentration (k0; mean ± s.d.) was 186 ± 26 mg l⁻¹ h⁻¹. 3. In each individual, three elimination rates were used to back-extrapolate plasma ethanol concentrations over 3 and 5 h periods from observed values at 4 h and 6 h post-dosing, assuming zero-order kinetics. The extrapolated values were then compared with the observed concentrations. 4. Using the mean k0 values for the subjects, the mean error in back extrapolation was small but highly variable. The variability in the error increased with the length of the extrapolation period. 5. When a k0 value of 150 mg l⁻¹ h⁻¹ (a value often cited as a population mean) was used for back extrapolation, this resulted in significant under-estimation of actual values, whereas the use of a k0 value of 238 mg l⁻¹ h⁻¹ (the highest value observed in the present study) resulted in significant over-estimation of actual values. 6. These results indicate that because the kinetics of ethanol are associated with substantial inter-subject variability, the use of a single slope value to back-calculate blood concentrations can give rise to considerable error. PMID: 1457265
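Zero-order back-extrapolation as described here is simply C(t − Δt) = C(t) + k0·Δt; a sketch using the study's mean and the often-cited k0 values (the measured concentration is illustrative):

```python
def back_extrapolate(c_t, k0, hours):
    """Zero-order back-extrapolation: C(t - hours) = C(t) + k0 * hours,
    with c_t in mg/l and k0 in mg/l/h (concentration is illustrative)."""
    return c_t + k0 * hours

# measured 500 mg/l at 4 h post-dosing; estimate the level at 1 h
print(back_extrapolate(500.0, 186.0, 3.0))   # 1058.0 (study mean k0)
print(back_extrapolate(500.0, 150.0, 3.0))   # 950.0 (often-cited k0)
```

The ~100 mg/l spread between the two estimates over a 3 h window is exactly the single-slope sensitivity the abstract warns about.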
Extrapolated stabilized explicit Runge-Kutta methods
NASA Astrophysics Data System (ADS)
Martín-Vaquero, J.; Kleefeld, B.
2016-12-01
Extrapolated Stabilized Explicit Runge-Kutta methods (ESERK) are proposed to solve multi-dimensional nonlinear partial differential equations (PDEs). In such methods it is necessary to evaluate the function nt times per step, but the stability region is O(nt²). Hence, the computational cost is O(nt) times lower than for a traditional explicit algorithm. In that way stiff problems can be integrated by the use of simple explicit evaluations, in cases where implicit methods usually had to be used. Therefore, they are especially well-suited for method of lines (MOL) discretizations of parabolic nonlinear multi-dimensional PDEs. In this work, first, s-stage first-order methods with extended stability along the negative real axis are obtained. They have slightly shorter stability regions than other traditional first-order stabilized explicit Runge-Kutta algorithms (also called Runge-Kutta-Chebyshev codes). Later, they are used to derive nt-stage second- and fourth-order schemes using Richardson extrapolation. The stability regions of these fourth-order codes include the interval [-0.01 nt², 0] (nt being the total number of function evaluations), which is shorter than the stability regions of ROCK4 methods, for example. However, the new algorithms neither suffer from propagation of errors (as other Runge-Kutta-Chebyshev codes such as ROCK4 or DUMKA) nor from internal instabilities. Additionally, many other types of higher-order (and also lower-order) methods can be obtained easily in a similar way. These methods also allow adaptation of the step length at no extra cost. Hence, the stability domain is adapted precisely to the spectrum of the problem at the current time of integration in an optimal way, i.e., with a minimal number of additional stages. We compare the new techniques with other well-known algorithms, with good results in very stiff diffusion or reaction-diffusion multi-dimensional nonlinear equations.
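A sketch of the underlying idea, assuming the standard s-stage first-order Chebyshev recurrence (stability polynomial T_s(1 + z/s²)) combined with plain Richardson extrapolation to second order; this is a simplified stand-in for ESERK, not the authors' scheme:

```python
import math

def chebyshev_step(f, y, h, s):
    """One step of the s-stage first-order Chebyshev method; its stability
    polynomial T_s(1 + z/s^2) covers a real interval of length ~2*s^2."""
    g_prev, g = y, y + (h / s**2) * f(y)
    for _ in range(2, s + 1):
        g_prev, g = g, 2.0 * g - g_prev + (2.0 * h / s**2) * f(g)
    return g

def integrate(f, y0, t1, n, s):
    y, h = y0, t1 / n
    for _ in range(n):
        y = chebyshev_step(f, y, h, s)
    return y

# Richardson extrapolation to second order: 2*y(h/2) - y(h)
f = lambda y: -y
coarse = integrate(f, 1.0, 1.0, 20, 4)
fine = integrate(f, 1.0, 1.0, 40, 4)
extrap = 2.0 * fine - coarse
print(abs(coarse - math.exp(-1.0)), abs(extrap - math.exp(-1.0)))
```

The extrapolated result is roughly two orders of magnitude more accurate than the coarse first-order run, illustrating how extrapolation raises the order while keeping only explicit evaluations.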
AXES OF EXTRAPOLATION IN RISK ASSESSMENTS
Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...
Builtin vs. auxiliary detection of extrapolation risk.
Munson, Miles Arthur; Kegelmeyer, W. Philip
2013-02-01
A key assumption in supervised machine learning is that future data will be similar to historical data. This assumption is often false in real world applications, and as a result, prediction models often return predictions that are extrapolations. We compare four approaches to estimating extrapolation risk for machine learning predictions. Two builtin methods use information available from the classification model to decide if the model would be extrapolating for an input data point. The other two build auxiliary models to supplement the classification model and explicitly model extrapolation risk. Experiments with synthetic and real data sets show that the auxiliary models are more reliable risk detectors. To best safeguard against extrapolating predictions, however, we recommend combining builtin and auxiliary diagnostics.
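The builtin-versus-auxiliary distinction can be caricatured in a few lines: a confidence threshold on the classifier's own output versus a separate nearest-neighbour distance check against the training data (thresholds and data below are illustrative, not from the paper):

```python
def max_prob_risk(probs, threshold=0.7):
    """'Builtin' check: flag possible extrapolation when the classifier's
    own top-class probability is low (threshold is illustrative)."""
    return max(probs) < threshold

def nn_distance_risk(point, train, threshold):
    """'Auxiliary' check: flag when the input is far from every training
    point, measured by 1-nearest-neighbour Euclidean distance."""
    d = min(sum((a - b) ** 2 for a, b in zip(point, t)) ** 0.5 for t in train)
    return d > threshold

train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(nn_distance_risk((0.5, 0.5), train, 1.5))   # False: inside the data
print(nn_distance_risk((9.0, 9.0), train, 1.5))   # True: extrapolation
```

A classifier can be confidently wrong far from its training data, which is why the paper finds auxiliary detectors more reliable and recommends combining both kinds of diagnostic.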
Implicit Extrapolation Methods for Variable Coefficient Problems
NASA Technical Reports Server (NTRS)
Jung, M.; Ruede, U.
1996-01-01
Implicit extrapolation methods for the solution of partial differential equations are based on applying the extrapolation principle indirectly. Multigrid tau-extrapolation is a special case of this idea. In the context of multilevel finite element methods, an algorithm of this type can be used to raise the approximation order, even when the meshes are nonuniform or locally refined. Here previous results are generalized to the variable coefficient case and thus become applicable for nonlinear problems. The implicit extrapolation multigrid algorithm converges to the solution of a higher order finite element system. This is obtained without explicitly constructing higher order stiffness matrices but by applying extrapolation in a natural form within the algorithm. The algorithm requires only a small change of a basic low order multigrid method.
Recursive algorithms for vector extrapolation methods
NASA Technical Reports Server (NTRS)
Ford, William F.; Sidi, Avram
1988-01-01
Three classes of recursion relations are devised for implementing some extrapolation methods for vector sequences. One class of recursion relations can be used to implement methods like the modified minimal polynomial extrapolation and the topological epsilon algorithm; another allows implementation of methods like minimal polynomial and reduced rank extrapolation; while the remaining class can be employed in the implementation of the vector E-algorithm. Operation counts and storage requirements for these methods are discussed, and some related techniques for special applications are presented, including methods for the rapid evaluation of the vector E-algorithm.
Signal extrapolation based on wavelet representation
NASA Astrophysics Data System (ADS)
Xia, Xiang-Gen; Kuo, C.-C. Jay; Zhang, Zhen
1993-11-01
The Papoulis-Gerchberg (PG) algorithm is well known for band-limited signal extrapolation. We consider the generalization of the PG algorithm to signals in the wavelet subspaces in this research. The uniqueness of the extrapolation for continuous-time signals is examined, and sufficient conditions on signals and wavelet bases for the generalized PG (GPG) algorithm to converge are given. We also propose a discrete GPG algorithm for discrete-time signal extrapolation, and investigate its convergence. Numerical examples are given to illustrate the performance of the discrete GPG algorithm.
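A sketch of the discrete PG iteration as alternating projections: band-limit the current estimate, then restore the known samples. The signal, band, and observation window below are illustrative choices, not those of the paper:

```python
import cmath, math

N = 32
BAND = [0, 1, 2, N - 2, N - 1]     # low-frequency DFT bins (bandwidth 5)
KNOWN = set(range(4, 28))          # samples where the signal is observed
W = [[cmath.exp(-2j * math.pi * k * n / N) for n in range(N)]
     for k in range(N)]

def band_limit(x):
    """Project onto the band-limited subspace: zero out-of-band DFT bins."""
    X = [sum(x[n] * W[k][n] for n in range(N)) for k in BAND]
    return [sum(X[j] * W[BAND[j]][n].conjugate()
                for j in range(len(BAND))).real / N
            for n in range(N)]

# strictly band-limited test signal, observed only on KNOWN
s = [math.cos(2 * math.pi * 2 * n / N) + 0.5 * math.sin(2 * math.pi * n / N)
     for n in range(N)]
x = [s[n] if n in KNOWN else 0.0 for n in range(N)]
for _ in range(800):
    y = band_limit(x)                                      # enforce band limit
    x = [s[n] if n in KNOWN else y[n] for n in range(N)]   # restore known data

err = max(abs(x[n] - s[n]) for n in range(N) if n not in KNOWN)
print(err)   # extrapolation error on the unobserved samples
```

Because the iteration alternates projections onto two sets that both contain the true signal, the error on the unobserved samples decreases monotonically, which is the convergence mechanism the GPG analysis generalizes to wavelet subspaces.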
Measurement accuracies in band-limited extrapolation
NASA Technical Reports Server (NTRS)
Kritikos, H. N.
1982-01-01
The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval |x| ≤ L/b, b ≥ 1, the signal must be known within an error e_N given approximately by e_N² ≈ (1/4)(2kL')³ [(e/8b)(L/L')]^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.
Cosmogony as an extrapolation of magnetospheric research
NASA Technical Reports Server (NTRS)
Alfven, H.
1984-01-01
A theory of the origin and evolution of the Solar System which considered electromagnetic forces and plasma effects is revised in light of information supplied by space research. In situ measurements in the magnetospheres and solar wind can be extrapolated outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of cloud properties essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation facilitates analysis of the cosmogonic processes by extrapolation of magnetospheric phenomena. Pioneer-Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it is possible to reconstruct events 4 to 5 billion years ago with an accuracy of a few percent.
Extrapolation procedures in Mott electron polarimetry
NASA Technical Reports Server (NTRS)
Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.
1992-01-01
In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
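A thickness extrapolation of the kind described can be sketched as a least-squares fit of A versus foil thickness, extrapolated to zero thickness. The abstract stresses that the choice of fitting function matters, so the linear form and all numbers below are purely illustrative:

```python
def fit_line(xs, ys):
    """Ordinary least-squares line y = a + b*x (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# synthetic asymmetries versus gold-foil thickness (nm); numbers illustrative
t = [50.0, 100.0, 150.0, 200.0]
A = [0.260, 0.245, 0.230, 0.215]
a0, slope = fit_line(t, A)
print(a0)   # extrapolated zero-thickness (single-atom) asymmetry: 0.275
```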
Endangered species toxicity extrapolation using ICE models
The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...
Species extrapolation for the 21st century.
Celander, Malin C; Goldstone, Jared V; Denslow, Nancy D; Iguchi, Taisen; Kille, Peter; Meyerhoff, Roger D; Smith, Ben A; Hutchinson, Thomas H; Wheeler, James R
2011-01-01
Safety factors are used in ecological risk assessments to extrapolate from the toxic responses of laboratory test species to all species representing that group in the environment. More accurate extrapolation of species responses is important. Advances in understanding the mechanistic basis for toxicological responses and identifying molecular response pathways can provide a basis for extrapolation across species and, in part, an explanation for the variability in whole organism responses to toxicants. We highlight potential short- and medium-term development goals to meet our long-term aspiration of truly predictive in silico extrapolation across wildlife species' response to toxicants. A conceptual approach for considering cross-species extrapolation is presented. Critical information is required to establish evidence-based species extrapolation, including identification of critical molecular pathways and regulatory networks that are linked to the biological mode of action and species' homologies. A case study is presented that examines steroidogenesis inhibition in fish after exposure to fadrozole or prochloraz. Similar effects for each compound among fathead minnow, medaka, and zebrafish were attributed to similar inhibitor pharmacokinetic/pharmacodynamic distributions and sequences of cytochrome P450 19A1/2 (CYP19A1/2). Rapid advances in homology modeling allow the prediction of interactions of chemicals with enzymes, for example, CYP19 aromatase, which would eventually allow a prediction of potential aromatase toxicity of new compounds across a range of species. Eventually, predictive models will be developed to extrapolate across species, although substantial research is still required. Knowledge gaps requiring research include defining differences in life histories (e.g., reproductive strategies), understanding tissue-specific gene expression, and defining the role of metabolism on toxic responses and how these collectively affect the power of interspecies extrapolation.
Surrogate endpoint analysis: an exercise in extrapolation.
Baker, Stuart G; Kramer, Barnett S
2013-03-06
Surrogate endpoints offer the hope of smaller or shorter cancer trials. It is, however, important to realize they come at the cost of an unverifiable extrapolation that could lead to misleading conclusions. With cancer prevention, the focus is on hypothesis testing in small surrogate endpoint trials before deciding whether to proceed to a large prevention trial. However, it is not generally appreciated that a small surrogate endpoint trial is highly sensitive to a deviation from the key Prentice criterion needed for the hypothesis-testing extrapolation. With cancer treatment, the focus is on estimation using historical trials with both surrogate and true endpoints to predict treatment effect based on the surrogate endpoint in a new trial. Successively leaving out one historical trial and computing the predicted treatment effect in the left-out trial yields a standard error multiplier that summarizes the increased uncertainty in estimation extrapolation. If this increased uncertainty is acceptable, three additional extrapolation issues (biological mechanism, treatment following observation of the surrogate endpoint, and side effects following observation of the surrogate endpoint) need to be considered. In summary, when using surrogate endpoint analyses, an appreciation of the problems of extrapolation is crucial.
Typical object velocity influences motion extrapolation.
Makin, Alexis D J; Stewart, Andrew J; Poliakoff, Ellen
2009-02-01
Previous work indicates that extrapolation of object motion during occlusion is affected by the velocity of the immediately preceding trial. Here we ask whether longer-term velocity representations can also influence motion extrapolation. Red, blue or green targets disappeared behind an occluder. Participants pressed a button when they thought the target had reached the other side. Red targets were slower (10-20 deg/s), blue targets moved at medium velocities (14-26 deg/s) and green targets were faster (20-30 deg/s). We compared responses on a subset of red and green trials which always travelled at 20 deg/s. Although trial velocities were identical, participants responded as if the green targets moved faster (M = 22.64 deg/s) than the red targets (M = 19.72 deg/s). This indicates that motion extrapolation is affected by longer-term information about the typical velocity of different categories of stimuli.
Motion Extrapolation in the Central Fovea
Shi, Zhuanghua; Nijhawan, Romi
2012-01-01
Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency were not compensated. A visual predictive mechanism has been proposed, which overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent “correction-for-extrapolation” hypothesis suggests that the absence of forward shifts is caused by sensory signals representing ‘failed’ predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: scotoma to dim light and scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, the extrapolation at motion-termination was only found with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis for the region with highest spatial acuity, the fovea. PMID:22438976
Chiral extrapolation of SU(3) amplitudes
Ecker, Gerhard
2011-05-23
Approximations of chiral SU(3) amplitudes at NNLO are proposed to facilitate the extrapolation of lattice data to the physical meson masses. Inclusion of NNLO terms is essential for investigating convergence properties of chiral SU(3) and for determining low-energy constants in a controllable fashion. The approximations are tested with recent lattice data for the ratio of decay constants F_K/F_π.
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw
2016-04-01
Future Earth Orientation Parameters data are needed to compute the real-time transformation between the celestial and terrestrial reference frames. This transformation is realized by predictions of x, y pole coordinates data, UT1-UTC data and a precession-nutation extrapolation model. This paper is focused on the pole coordinates data prediction by combination of the least-squares (LS) extrapolation and autoregressive (AR) prediction models (LS+AR). The AR prediction, which is applied to the LS extrapolation residuals of pole coordinates data, is not able to predict all of their frequency bands and is mostly tuned to predict subseasonal oscillations. The absolute values of differences between pole coordinates data and their LS+AR predictions increase with prediction length and depend mostly on starting prediction epochs, thus time series of these differences for 2, 4 and 8 weeks in the future were analyzed. Time frequency spectra of these differences for different prediction lengths are very similar, showing some power in the frequency band corresponding to the prograde Chandler and annual oscillations, which means that the increase of prediction errors is caused by mismodelling of these oscillations by the LS extrapolation model. Thus, the LS+AR prediction method can be modified by taking into account an additional AR prediction correction computed from time series of these prediction differences for different prediction lengths. This additional AR prediction is mostly tuned to the seasonal frequency band of pole coordinates data.
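The LS+AR combination described above can be sketched in a few lines: a least-squares harmonic fit is extrapolated, and an AR model fitted to the LS residuals supplies the correction. This is a hypothetical illustration, not the author's implementation; the trend model, harmonic periods, and AR order are all assumptions.

```python
import numpy as np

def ls_ar_predict(t, x, periods, p=2, horizon=30):
    """Least-squares (LS) harmonic extrapolation plus an AR(p) prediction
    of the LS residuals (illustrative LS+AR sketch)."""
    # LS design matrix: bias, linear trend, sin/cos for each oscillation period
    cols = [np.ones_like(t), t]
    for P in periods:
        cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    resid = x - A @ coef

    # AR(p) coefficients from the Yule-Walker equations on the residuals
    n = len(resid)
    r = np.array([resid[:n - k] @ resid[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:])

    # Extrapolate the LS part and recursively predict the AR part
    tf = t[-1] + np.arange(1, horizon + 1, dtype=float)
    cols_f = [np.ones_like(tf), tf]
    for P in periods:
        cols_f += [np.sin(2 * np.pi * tf / P), np.cos(2 * np.pi * tf / P)]
    ls_part = np.column_stack(cols_f) @ coef
    hist = list(resid[-p:])
    ar_part = []
    for _ in range(horizon):
        nxt = float(np.dot(phi, hist[-1:-p - 1:-1]))  # phi[0] * x_{t-1}, ...
        ar_part.append(nxt)
        hist.append(nxt)
    return ls_part + np.array(ar_part)
```

Applied to a series with a known trend and annual oscillation, the LS part carries the deterministic extrapolation while the AR part models the stochastic residual.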
Mid-Field Sonic Boom Extrapolation Methodology
NASA Technical Reports Server (NTRS)
Cheung, Samson; Davis, Sanford; Tu, Eugene
1999-01-01
In the design cycle of low-boom airplanes, sonic boom prediction must be accurate and efficient. The classical linear method, Whitham's F-function theory, has been widely applied to predict sonic boom signatures. However, linear theory fails to capture the nonlinear effects created by large civil transports. Computational fluid dynamics (CFD) has been used successfully to predict sonic boom signals at the near and mid fields. Nevertheless, it is computationally expensive for airplane design runs. In the present study, the method of characteristics is used to predict sonic boom signals in an efficient fashion. The governing equations are the axisymmetric Euler's equations with constant enthalpy. Since the method solves Euler's equations, it captures more nonlinear effects than the classical Whitham's F-function technique. Furthermore, the method of characteristics is an efficient marching scheme for initial value problems. In this study, we will first review the current CFD extrapolation technique and the work previously done in sonic boom extrapolation. Then, we will introduce the governing equations and the method of characteristics. Finally, we will show that the present method yields the same accurate results as previous CFD techniques, but with higher efficiency.
Extrapolating Solar Dynamo Models Throughout the Heliosphere
NASA Astrophysics Data System (ADS)
Cox, B. T.; Miesch, M. S.; Augustson, K.; Featherstone, N. A.
2014-12-01
There are multiple theories that aim to explain the behavior of the solar dynamo, and their associated models have been fiercely contested. The two prevailing theories investigated in this project are the Convective Dynamo model that arises from directly solving the magnetohydrodynamic equations, and the Babcock-Leighton model that relies on sunspot dissipation and reconnection. Recently, the supercomputer simulations CASH and BASH have formed models of the behavior of the Convective and Babcock-Leighton models, respectively, in the convective zone of the sun. These simulations capture the behavior of each model within the sun, while much less is known about the effects these models may have further away from the solar surface. The goal of this work is to investigate any fundamental differences between the Convective and Babcock-Leighton models of the solar dynamo outside of the sun and extending into the solar system, via potential field source surface extrapolations implemented in python code that operates on data from CASH and BASH. The use of real solar data to visualize supergranular flow data in the BASH model is also used to learn more about the behavior of the Babcock-Leighton dynamo. From these extrapolations it has been determined that the Babcock-Leighton model, as represented by BASH, maintains complex magnetic fields much further into the heliosphere before reverting to a basic dipole field, and the work provides 3D visualisations of the models distant from the sun.
Dioxin equivalency: Challenge to dose extrapolation
Brown, J.F. Jr.; Silkworth, J.B.
1995-12-31
Extensive research has shown that all biological effects of dioxin-like agents are mediated via a single biochemical target, the Ah receptor (AhR), and that the relative biologic potencies of such agents in any given system, coupled with their exposure levels, may be described in terms of toxic equivalents (TEQ). It has also shown that the TEQ sources include not only chlorinated species such as the dioxins (PCDDs), PCDFs, and coplanar PCBs, but also non-chlorinated substances such as the PAHs of wood smoke, the AhR agonists of cooked meat, and the indolocarbazol (ICZ) derived from cruciferous vegetables. Humans have probably had elevated exposures to these non-chlorinated TEQ sources ever since the discoveries of fire, cooking, and the culinary use of Brassica spp. Recent assays of CYP1A2 induction show that these "natural" or "traditional" AhR agonists are contributing 50--100 times as much to average human TEQ exposures as do the chlorinated xenobiotics. Currently, the safe doses of the xenobiotic TEQ sources are estimated from their NOAELs and large extrapolation factors, derived from arbitrary mathematical models, whereas the NOAELs themselves are regarded as the safe doses for the TEQs of traditional dietary components. Available scientific data can neither support nor refute either approach to assessing the health risk of an individual chemical substance. However, if two substances be toxicologically equivalent, then their TEQ-adjusted health risks must also be equivalent, and the same dose extrapolation procedure should be used for both.
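The TEQ bookkeeping underlying this argument is simple arithmetic: each congener's concentration is weighted by its toxic equivalency factor (TEF) relative to TCDD and summed. A minimal sketch follows; the TEF values in the example are illustrative, not regulatory values.

```python
def toxic_equivalents(concentrations, tefs):
    """Total TEQ of a mixture: sum over congeners of concentration x TEF,
    expressing each component as an equivalent amount of TCDD."""
    missing = set(concentrations) - set(tefs)
    if missing:
        raise KeyError(f"no TEF for: {sorted(missing)}")
    return sum(conc * tefs[name] for name, conc in concentrations.items())
```

For example, 2 units of TCDD (TEF 1.0) plus 10 units of a congener with an assumed TEF of 0.1 give a TEQ of 3.0.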
Extrapolation of toxic indices among test objects
Tichý, Miloň; Rucki, Marián; Roth, Zdeněk; Hanzlíková, Iveta; Vlková, Alena; Tumová, Jana; Uzlová, Rút
2010-01-01
Oligochaeta Tubifex tubifex, fish fathead minnow (Pimephales promelas), hepatocytes isolated from rat liver and ciliated protozoan are absolutely different organisms and yet their acute toxicity indices correlate. Correlation equations for special effects were developed for a large heterogeneous series of compounds (QSAR, quantitative structure-activity relationships). Knowing those correlation equations and their statistic evaluation, one can extrapolate the toxic indices. The reason is that a common physicochemical property governs the biological effect, namely the partition coefficient between two immiscible phases, simulated generally by n-octanol and water. This may mean that the transport of chemicals towards a target is responsible for the magnitude of the effect, rather than reactivity, as one would assume. PMID:21331180
Border extrapolation using fractal attributes in remote sensing images
NASA Astrophysics Data System (ADS)
Cipolletti, M. P.; Delrieux, C. A.; Perillo, G. M. E.; Piccolo, M. C.
2014-01-01
In management, monitoring and rational use of natural resources the knowledge of precise and updated information is essential. Satellite images have become an attractive option for quantitative data extraction and morphologic studies, assuring a wide coverage without exerting negative environmental influence over the study area. However, the precision of such practice is limited by the spatial resolution of the sensors and the additional processing algorithms. The use of high resolution imagery (i.e., Ikonos) is very expensive for studies involving large geographic areas or requiring long term monitoring, while the use of less expensive or freely available imagery poses a limit in the geographic accuracy and physical precision that may be obtained. We developed a methodology for accurate border estimation that can be used for establishing high quality measurements with low resolution imagery. The method is based on the original theory by Richardson, taking advantage of the fractal nature of geographic features. The area of interest is downsampled at different scales and, at each scale, the border is segmented and measured. Finally, a regression of the dependence of the measured length with respect to scale is computed, which then allows for a precise extrapolation of the expected length at scales much finer than the originally available. The method is tested with both synthetic and satellite imagery, producing accurate results in both cases.
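The scale regression described above follows Richardson's classic divider relation L(s) ≈ c·s^(1−D), where D is the fractal dimension of the border. A hypothetical sketch of the extrapolation step (function and variable names are assumptions, not from the paper):

```python
import numpy as np

def richardson_extrapolate(scales, lengths, target_scale):
    """Fit log L = a + b log s to border lengths measured at coarse scales,
    then extrapolate the expected length at a finer target scale.
    The slope b relates to the fractal dimension D via b = 1 - D."""
    b, a = np.polyfit(np.log(scales), np.log(lengths), 1)
    return float(np.exp(a + b * np.log(target_scale)))
```

Given lengths segmented at several coarse (cheap) resolutions, the fitted power law predicts the length that a much finer (expensive) resolution would measure.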
The Role of Motion Extrapolation in Amphibian Prey Capture
2015-01-01
Sensorimotor delays decouple behaviors from the events that drive them. The brain compensates for these delays with predictive mechanisms, but the efficacy and timescale over which these mechanisms operate remain poorly understood. Here, we assess how prediction is used to compensate for prey movement that occurs during visuomotor processing. We obtained high-speed video records of freely moving, tongue-projecting salamanders catching walking prey, emulating natural foraging conditions. We found that tongue projections were preceded by a rapid head turn lasting ∼130 ms. This motor lag, combined with the ∼100 ms phototransduction delay at photopic light levels, gave a ∼230 ms visuomotor response delay during which prey typically moved approximately one body length. Tongue projections, however, did not significantly lag prey position but were highly accurate instead. Angular errors in tongue projection accuracy were consistent with a linear extrapolation model that predicted prey position at the time of tongue contact using the average prey motion during a ∼175 ms period one visual latency before the head movement. The model explained successful strikes where the tongue hit the fly, and unsuccessful strikes where the fly turned and the tongue hit a phantom location consistent with the fly's earlier trajectory. The model parameters, obtained from the data, agree with the temporal integration and latency of retinal responses proposed to contribute to motion extrapolation. These results show that the salamander predicts future prey position and that prediction significantly improves prey capture success over a broad range of prey speeds and light levels. SIGNIFICANCE STATEMENT Neural processing delays cause actions to lag behind the events that elicit them. To cope with these delays, the brain predicts what will happen in the future. While neural circuits in the retina and beyond have been suggested to participate in such predictions, few behaviors have been
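The linear extrapolation model in this abstract amounts to projecting the average recent prey velocity forward by one visuomotor latency. A minimal one-dimensional sketch (parameter names and the windowing scheme are assumptions):

```python
def predict_prey_position(positions, dt, latency, window):
    """Predict prey position at contact time: average velocity over the
    last `window` seconds of samples, projected forward by the
    visuomotor latency."""
    n = max(2, int(round(window / dt)))
    recent = positions[-n:]
    velocity = (recent[-1] - recent[0]) / ((len(recent) - 1) * dt)
    return positions[-1] + velocity * latency
```

With the abstract's figures (~175 ms integration window, ~230 ms total delay), a fly walking at constant speed is predicted roughly one body length ahead of its last seen position.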
MMOC- MODIFIED METHOD OF CHARACTERISTICS SONIC BOOM EXTRAPOLATION
NASA Technical Reports Server (NTRS)
Darden, C. M.
1994-01-01
The Modified Method of Characteristics Sonic Boom Extrapolation program (MMOC) is a sonic boom propagation method which includes shock coalescence and incorporates the effects of asymmetry due to volume and lift. MMOC numerically integrates nonlinear equations from data at a finite distance from an airplane configuration at flight altitude to yield the sonic boom pressure signature at ground level. MMOC accounts for variations in entropy, enthalpy, and gravity for nonlinear effects near the aircraft, allowing extrapolation to begin nearer the body than in previous methods. This feature permits wind tunnel sonic boom models of up to three feet in length, enabling more detailed, realistic models than the previous six-inch sizes. It has been shown that elongated airplanes flying at high altitude and high Mach numbers can produce an acceptably low sonic boom. Shock coalescence in MMOC includes three-dimensional effects. The method is based on an axisymmetric solution with asymmetric effects determined by circumferential derivatives of the standard shock equations. Bow shocks and embedded shocks can be included in the near-field. The method of characteristics approach in MMOC allows large computational steps in the radial direction without loss of accuracy. MMOC is a propagation method rather than a predictive program. Thus input data (the flow field on a cylindrical surface at approximately one body length from the axis) must be supplied from calculations or experimental results. The MMOC package contains a uniform atmosphere pressure field program and interpolation routines for computing the required flow field data. Other user supplied input to MMOC includes Mach number, flow angles, and temperature. MMOC output tabulates locations of bow shocks and embedded shocks. When the calculations reach ground level, the overpressure and distance are printed, allowing the user to plot the pressure signature. MMOC is written in FORTRAN IV for batch execution and has been
Frequency extrapolation by nonconvex compressive sensing
Chartrand, Rick; Sidky, Emil Y; Pan, Xiaochuan
2010-12-03
Tomographic imaging modalities sample subjects with a discrete, finite set of measurements, while the underlying object function is continuous. Because of this, inversion of the imaging model, even under ideal conditions, necessarily entails approximation. The error incurred by this approximation can be important when there is rapid variation in the object function or when the objects of interest are small. In this work, we investigate this issue with the Fourier transform (FT), which can be taken as the imaging model for magnetic resonance imaging (MRI) or some forms of wave imaging. Compressive sensing has been successful for inverting this data model when only a sparse set of samples are available. We apply the compressive sensing principle to a somewhat related problem of frequency extrapolation, where the object function is represented by a super-resolution grid with many more pixels than FT measurements. The image on the super-resolution grid is obtained through nonconvex minimization. The method fully utilizes the available FT samples, while controlling aliasing and ringing. The algorithm is demonstrated with continuous FT samples of the Shepp-Logan phantom with additional small, high-contrast objects.
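A toy version of nonconvex recovery in this spirit can be built from p-shrinkage (a p < 1 generalization of soft thresholding associated with Chartrand's work) alternated with data consistency on the measured frequencies. This is a simplified 1D sketch under stated assumptions (spike-sparse object, illustrative λ and p), not the paper's algorithm:

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized (p < 1) shrinkage: shrinks small entries harder than
    soft thresholding, promoting sparser solutions."""
    mag = np.abs(x)
    safe = np.where(mag > 0, mag, 1.0)  # avoid 0 ** (p - 1)
    return np.maximum(mag - lam * safe ** (p - 1), 0.0) * np.sign(x)

def freq_extrapolate(b, mask, n, p=0.5, lam=0.02, iters=300):
    """Recover a spike-sparse n-sample signal from a subset of its DFT
    samples by alternating data consistency and p-shrinkage."""
    x = np.zeros(n)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[mask] = b                      # enforce the measured frequencies
        x = np.real(np.fft.ifft(X))
        x = p_shrink(x, lam, p)          # sparsify on the super-resolution grid
    return x
```

Given only the lowest frequencies of a two-spike signal, the iteration sharpens the band-limited blur back toward the spike locations, i.e. it extrapolates the unmeasured high frequencies.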
Bandlimited image extrapolation with faster convergence.
Cahana, D; Stark, H
1981-08-15
The Gerchberg-Papoulis (GP) algorithm has been widely discussed in the literature in connection with band-limited or space-limited image extrapolation. Despite its seemingly superior noise-resistant properties over earlier superresolution schemes, the GP algorithm generally exhibits very slow convergence, thereby making the choice of starting point critical. We discuss how additional a priori information, such as the low-pass projection of the image (LPI), can be incorporated in the algorithm to decrease the initial error between the starting point of the recursion and the true signal. We also investigate how convergence rates might be improved by (1) using the LPI in each iteration to achieve a double per-cycle correction, and (2) applying adaptive thresholding. Somewhat surprisingly, it was found that using the LPI had only a minor effect on the rate of convergence. On the other hand, when combined with adaptive thresholding the use of the LPI both significantly reduced the starting point error and improved the rate of convergence.
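The GP iteration itself is two alternating projections: truncate the spectrum to the known band, then restore the observed samples. A minimal discrete sketch (grid size, band, and iteration count are illustrative choices):

```python
import numpy as np

def gerchberg_papoulis(known, known_idx, n, band, iters=500):
    """Extrapolate a band-limited signal from an observed segment by
    alternating projections: truncate the spectrum to |k| <= band,
    then restore the observed samples."""
    x = np.zeros(n)
    x[known_idx] = known
    for _ in range(iters):
        X = np.fft.fft(x)
        X[band + 1:n - band] = 0      # band-limiting projection
        x = np.real(np.fft.ifft(X))
        x[known_idx] = known          # data-consistency projection
    return x
```

For a consistent, truly band-limited signal the iterates converge toward the unique extrapolation; the slow convergence discussed in the abstract appears as the large iteration count needed even for this small example.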
Bandlimited image extrapolation with faster convergence
NASA Astrophysics Data System (ADS)
Cahana, D.; Stark, H.
1981-08-01
Techniques for increasing the convergence rate of the extrapolation algorithm proposed by Gerchberg (1974) and Papoulis (1975) for image restoration applications are presented. The techniques involve the modification of the Gerchberg-Papoulis algorithm to include additional a priori data such as the low-pass projection of the image either by the inclusion of the data at the start of the recursion to reduce the starting-point error, or by use of the low-pass image in each iteration to correct twice in the frequency domain. The performance of the GP algorithm and the two modifications presented in the restorations of a signal consisting of widely separated spectral components of equal magnitude and a signal with spectral components grouped in passbands is compared, and it is found that while both modifications reduced the starting point error, the convergence rate of the second technique was not substantially greater than that of the first despite the additional iterative frequency-plane correction. A significant improvement in the starting-point errors and convergence rates of both modified algorithms is obtained, however, when they are combined with adaptive thresholding in the presence of low noise levels and a signal with relatively well spaced impulse-type spectral components.
Hard hadronic collisions: extrapolation of standard effects
Ali, A.; Aurenche, P.; Baier, R.; Berger, E.; Douiri, A.; Fontannaz, M.; Humpert, B.; Ingelman, G.; Kinnunen, R.; Pietarinen, E.
1984-01-01
We study hard hadronic collisions for the proton-proton (pp) and the proton-antiproton (p anti p) option in the CERN LEP tunnel. Based on our current knowledge of hard collisions at the present CERN p anti p Collider, and with the help of quantum chromodynamics (QCD), we extrapolate to the next generation of hadron colliders with a centre-of-mass energy E/sub cm/ = 10 to 20 TeV. We estimate various signatures, trigger rates, event topologies, and associated distributions for a variety of old and new physical processes, involving prompt photons, leptons, jets, W/sup + -/ and Z bosons in the final state. We also calculate the maximum fermion and boson masses accessible at the LEP Hadron Collider. The standard QCD and electroweak processes studied here, being the main body of standard hard collisions, quantify the challenge of extracting new physics with hadron colliders. We hope that our estimates will provide a useful profile of the final states, and that our experimental physics colleagues will find this of use in the design of their detectors. 84 references.
Extrapolation of nuclear waste glass aging
Byers, C.D.; Ewing, R.C.; Jercinovic, M.J.; Keil, K.
1984-01-01
Increased confidence is provided to the extrapolation of long-term waste form behavior by comparing the alteration of experimentally aged natural basaltic glass to the condition of the same glass as it has been geologically aged. The similarity between the laboratory and geologic alterations indicates that important aging variables have been identified and incorporated into the laboratory experiments. This provides credibility to the long-term predictions made for waste form borosilicate glasses using similar experimental procedures. In addition, these experiments have demonstrated that the aging processes for natural basaltic glass are relevant to the alteration of nuclear waste glasses, as both appear to react via similar processes. The alteration of a synthetic basaltic glass was measured in MCC-1 tests done at 90°C, a SA/V of 0.1 cm⁻¹ and time periods up to 182 days. Tests were also done using (1) MCC-2 procedures at 190°C, a SA/V of 0.1 cm⁻¹ and time periods up to 91 days and (2) hydration tests in saturated water vapor at 240°C, a SA/V of approx. 10⁶ cm⁻¹, and time periods up to 63 days. These results are compared to alteration observed in natural basaltic glasses of great age. 6 references, 6 figures, 1 table.
Extrapolation of acute toxicity across bee species.
Thompson, Helen
2016-10-01
In applying cross-species extrapolation safety factors from honeybees to other bee species, some basic principles of toxicity have not been included, for example, the importance of body mass in determining a toxic dose. The present study re-analyzed published toxicity data, taking into account the reported mass of the individuals in the identified species. The analysis demonstrated a shift to the left in the distribution of sensitivity of honeybees relative to 20 other bee species when body size is taken into account, with the 95th percentile for contact and oral toxicity reducing from 10.7 (based on μg/individual bee) to 5.0 (based on μg/g bodyweight). Such an approach results in the real drivers of species differences in sensitivity (such as variability in absorption, distribution, metabolism, and excretion, and in target-receptor binding) being more realistically reflected in the revised safety factor. Body mass can also be used to underpin the other parameter of first-tier risk assessment, that is, exposure. However, the key exposure factors that cannot be predicted from bodyweight are the effects of ecology and behavior of the different species on exposure to a treated crop. Further data are required to understand the biology of species associated with agricultural crops and the potential consequences of effects on individuals at the levels of the colony or bee populations. This information will allow the development of appropriate higher-tier refinement of risk assessments and testing strategies rather than extensive additional toxicity testing at Tier 1. Integr Environ Assess Manag 2016;12:622-626. © 2015 SETAC.
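The mass correction at issue is a one-line conversion: a dose reported per individual only becomes comparable across species after dividing by body mass. A sketch (the masses in the example are rough illustrative values, not measured data):

```python
def dose_per_gram(dose_ug_per_individual, body_mass_g):
    """Convert an acute toxicity dose expressed per individual
    to a body-weight-normalized dose (ug/g)."""
    return dose_ug_per_individual / body_mass_g
```

Two species with the same per-individual LD50 of 1 μg differ five-fold in normalized sensitivity once hypothetical body masses of 0.1 g and 0.5 g are accounted for.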
Direct Extrapolation of Biota-sediment Accumulation Factors (BSAFs)
Biota-sediment accumulation factors (BSAFs) for fish and shellfish were extrapolated directly from one location and species to other species, to other locations within a site, to other sites, and their combinations. The median errors in the extrapolations across species at a loc...
CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS
Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...
Extrapolation techniques for textural characterization of tissue in medical images
NASA Astrophysics Data System (ADS)
Sensakovic, William F.; Armato, Samuel G., III; Starkey, Adam
2007-03-01
The low in-plane resolution of thoracic computed tomography (CT) scans may force texture analysis in regions of interest (ROIs) that are not completely filled by the tissue under analysis. The inclusion of extraneous tissue textures within the ROI may substantially contaminate these texture descriptor values. The goal of this study is to investigate the accuracy of different image extrapolation methods when calculating common texture descriptor values. Three extrapolation methods (mean fill, tiled fill, and CLEAN deconvolution) were applied to 480 lung parenchyma ROIs extracted from transverse thoracic CT sections. The ROIs were artificially corrupted, and each extrapolation method was independently applied to create extrapolation-corrected ROIs. Texture descriptor values were calculated and compared for the original, corrupted, and extrapolation-corrected ROIs. For 51 of 53 texture descriptors, the values calculated from extrapolation-corrected ROIs were more accurate than values calculated from corrupted ROIs. Further, a "best" extrapolation method for all texture descriptors was not identified, which implies that the choice of extrapolation method depends on the texture descriptors applied in a given tissue classification scheme.
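Of the three methods, mean fill is the simplest to sketch: extraneous pixels inside the ROI are replaced by the mean of the valid tissue pixels before texture descriptors are computed. A hypothetical minimal version (mask convention assumed):

```python
import numpy as np

def mean_fill(roi, tissue_mask):
    """Replace pixels outside the tissue mask with the mean of the
    tissue pixels, so texture statistics are less contaminated by
    extraneous structures."""
    out = roi.astype(float).copy()
    out[~tissue_mask] = roi[tissue_mask].mean()
    return out
```

Tiled fill and CLEAN deconvolution are more elaborate but serve the same purpose: synthesizing plausible texture where the ROI is not filled by the tissue of interest.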
40 CFR 86.435-78 - Extrapolated emission values.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40, Protection of Environment, Environmental Protection Agency (Continued), Air..., Emission Regulations for 1978 and Later New Motorcycles, General Provisions, § 86.435-78: Extrapolated emission values.
Hsieh, T C; Chao, Anne
2016-08-26
Measures of phylogenetic diversity are basic tools in many studies of systematic biology. Faith's PD (sum of branch lengths of a phylogenetic tree connecting all focal species) is the most widely used phylogenetic measure. Like species richness, Faith's PD based on sampling data is highly dependent on sample size and sample completeness. The sample-size- and sample-coverage-based integration of rarefaction and extrapolation of Faith's PD was recently developed to make fair comparison across multiple assemblages. However, species abundances are not considered in Faith's PD. Based on the framework of Hill numbers, Faith's PD was generalized to a class of phylogenetic diversity measures that incorporates species abundances. In this article, we develop both theoretical formulae and analytic estimators for seamless rarefaction and extrapolation for this class of abundance-sensitive phylogenetic measures, which includes simple transformations of phylogenetic entropy and of quadratic entropy. This work generalizes the previous rarefaction/extrapolation model of Faith's PD to incorporate species abundance, and also extends the previous rarefaction/extrapolation model of Hill numbers to include phylogenetic differences among species. Thus a unified approach to assessing and comparing species/taxonomic diversity and phylogenetic diversity can be established. A bootstrap method is suggested for constructing confidence intervals around the phylogenetic diversity, facilitating the comparison of multiple assemblages. Our formulation and estimators can be extended to incidence data collected from multiple sampling units. We also illustrate the formulae and estimators using bacterial sequence data from the human distal esophagus and phyllostomid bat data from three habitats. [Extrapolation; diversity; Hill numbers; interpolation; phylogenetic diversity; prediction; rarefaction; sample completeness; sample coverage.].
3D Hail Size Distribution Interpolation/Extrapolation Algorithm
NASA Technical Reports Server (NTRS)
Lane, John
2013-01-01
Radar data can usually detect hail; however, it is difficult for present-day radar to accurately discriminate between hail and rain. Local ground-based hail sensors are much better at detecting hail against a rain background, and when incorporated with radar data, provide a much better local picture of a severe rain or hail event. The previous disdrometer interpolation/extrapolation algorithm described a method to interpolate horizontally between multiple ground sensors (a minimum of three) and extrapolate vertically. This work is a modification to that approach that generates a purely extrapolated 3D spatial distribution when using a single sensor.
Fully vectorial laser resonator modeling by vector extrapolation methods
NASA Astrophysics Data System (ADS)
Asoubar, Daniel; Kuhn, Michael; Wyrowski, Frank
2015-02-01
The optimization of multi-parameter resonators requires flexible simulation techniques beyond the scalar approximation. Therefore we generalize the scalar Fox and Li algorithm for the transversal eigenmode calculation to a fully vectorial model. This modified eigenvalue problem is solved by two polynomial-type vector extrapolation methods, namely minimal polynomial extrapolation and reduced rank extrapolation. Compared to other eigenvalue solvers, these techniques can also be applied to resonators including nonlinear components. As an example we show the calculation of an azimuthally polarized eigenmode emitted by a resonator containing a discontinuous phase element and a nonlinear active medium. The simulation is verified by experiments.
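Reduced rank extrapolation (RRE) itself is a short linear-algebra routine: it forms an affine combination of iterates whose weighted successive differences are minimized in norm. A generic sketch of RRE (not the authors' resonator code; applied here to an arbitrary vector sequence):

```python
import numpy as np

def rre(xs):
    """Reduced rank extrapolation: given iterates xs = [x0, ..., x_{k+1}],
    return s = sum_i gamma_i x_i with sum(gamma) = 1 chosen to minimize
    || sum_i gamma_i (x_{i+1} - x_i) ||."""
    X = np.column_stack(xs[:-1])               # x0 .. xk
    dX = np.diff(np.column_stack(xs), axis=1)  # dx0 .. dxk
    C = dX[:, :-1] - dX[:, [-1]]               # fold in the sum-to-one constraint
    eta, *_ = np.linalg.lstsq(C, -dX[:, -1], rcond=None)
    gamma = np.append(eta, 1.0 - eta.sum())
    return X @ gamma
```

Applied to a linearly convergent fixed-point sequence, RRE can recover the limit from a handful of iterates, which is what makes it attractive for accelerating Fox-and-Li-type round-trip iterations.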
Extrapolating demography with climate, proximity and phylogeny: approach with caution.
Coutts, Shaun R; Salguero-Gómez, Roberto; Csergő, Anna M; Buckley, Yvonne M
2016-12-01
Plant population responses are key to understanding the effects of threats such as climate change and invasions. However, we lack demographic data for most species, and the data we have are often geographically aggregated. We determined to what extent existing data can be extrapolated to predict population performance across larger sets of species and spatial areas. We used 550 matrix models, across 210 species, sourced from the COMPADRE Plant Matrix Database, to model how climate, geographic proximity and phylogeny predicted population performance. Models including only geographic proximity and phylogeny explained 5-40% of the variation in four key metrics of population performance. However, there was poor extrapolation between species and extrapolation was limited to geographic scales smaller than those at which landscape scale threats typically occur. Thus, demographic information should only be extrapolated with caution. Capturing demography at scales relevant to landscape level threats will require more geographically extensive sampling. © 2016 John Wiley & Sons Ltd/CNRS.
High to Low Dose Extrapolation of Experimental Animal Carcinogenesis Studies,
with its inherent limitations. A number of commonly used mathematical models of the dose-response relationship necessary for this extrapolation will be discussed ... thresholds; incorporation of background, or spontaneous, responses; and modification of the dose-response by pharmacokinetic processes. (Author)
The chemistry side of AOP: implications for toxicity extrapolation
An adverse outcome pathway (AOP) is a structured representation of the biological events that lead to adverse impacts following a molecular initiating event caused by chemical interaction with a macromolecule. AOPs have been proposed to facilitate toxicity extrapolation across s...
On the extrapolation of band-limited signals
NASA Astrophysics Data System (ADS)
Chamzas, C. C.
1980-12-01
The determination of the Fourier transform of a band-limited signal from a finite segment is examined. The Papoulis extrapolation algorithm is extended to a broader class of signals, and its convergence is considerably improved by multiplication with an adaptive constant chosen to minimize the mean square error in the extrapolation interval. The discrete version of the iteration is examined and then modified so that it converges to the best linear mean-square estimator of the unknown signal when noise is added to the given data. The problem of determining the frequencies, amplitudes, and phases of a sinusoidal signal from incomplete noisy data is considered, and the extrapolation algorithm is properly modified to estimate these quantities. The resulting iteration is nonlinear and adaptively reduces the spectral components due to noise. The adaptive extrapolation technique is applied to the problem of image restoration for objects consisting of point or line sources, and to an ultrasonic problem.
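The iteration discussed above alternates between enforcing the band limit in the frequency domain and restoring the known samples in the signal domain. A minimal sketch of the basic (non-adaptive) Gerchberg-Papoulis loop, with all names and parameters chosen for illustration:

```python
import numpy as np

def gerchberg_papoulis(known, mask, band, n_iter=200):
    """Extrapolate a band-limited signal from an observed segment.

    known : length-N array with observed samples (zeros elsewhere)
    mask  : length-N boolean array, True where samples are observed
    band  : length-N boolean array over FFT bins, True inside the band
    """
    x = known.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band] = 0.0             # project onto the band-limited set
        x = np.fft.ifft(X).real
        x[mask] = known[mask]      # restore the observed samples
    return x
```

The adaptive constant of the paper, which rescales each iterate to minimize the mean-square error on the observed interval, would enter as one extra scalar multiplication inside the loop.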
Multidimensional signal restoration and band-limited extrapolation, 2
NASA Astrophysics Data System (ADS)
Sanz, J. L. C.; Huang, T. S.
1982-12-01
This technical report consists of three parts. The central problem is the extrapolation of band-limited signals. In part 1, several existing algorithms for band-limited extrapolation are compared: two-step procedures appear to give better reconstructions and require less computing time than iterative algorithms. In part 2, five basic procedures for iterative restoration are unified using a Hilbert-space approach. In particular, all known iterative algorithms for extrapolation of band-limited signals are shown to be special cases of Bialy's iteration. The authors also obtain faster algorithms than that of Papoulis-Gerchberg. In part 3, the extrapolation problem is presented in a more general setting: continuation of certain analytic functions. Two-step procedures for finding the continuation of these functions are presented. Some new procedures for band-limited continuation are also discussed, as well as the case in which the signal is contaminated with noise.
Elimination techniques: from extrapolation to totally positive matrices and CAGD
NASA Astrophysics Data System (ADS)
Gasca, M.; Mühlbach, G.
2000-10-01
In this survey, we will show some connections between several mathematical problems such as extrapolation, linear systems, totally positive matrices and computer-aided geometric design, with elimination techniques as the common tool to deal with all of them.
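A concrete instance of the extrapolation-elimination connection surveyed here is Richardson extrapolation, in which Neville-Aitken elimination removes successive error terms of T(h) to estimate the limit h → 0. A sketch under the assumption of a polynomial error expansion in h (function and variable names are illustrative):

```python
def richardson(values, steps):
    """Extrapolate T(h) to h = 0 by Neville-Aitken elimination.

    values : sequence of T(h_i); steps : the corresponding h_i.
    Assumes T has an error expansion in powers of h.
    """
    T = list(values)
    h = list(steps)
    n = len(T)
    for k in range(1, n):
        # eliminate one further error term, column by column
        for i in range(n - 1, k - 1, -1):
            T[i] = T[i] + (T[i] - T[i - 1]) * h[i] / (h[i - k] - h[i])
    return T[-1]
```

Each inner update is exactly one elimination step between two rows of the extrapolation tableau, which is the structural link to linear-system elimination that the survey develops.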
Role of animal studies in low-dose extrapolation
Fry, R.J.M.
1981-01-01
Current data indicate that, in the case of low-LET radiation, linear extrapolation from data obtained at high doses appears to overestimate the risk at low doses to a varying degree. In the case of high-LET radiation, extrapolation from data obtained at doses as low as 40 rad (0.4 Gy) is inappropriate and likely to result in an underestimate of the risk.
Implicit extrapolation methods for multilevel finite element computations
Jung, M.; Ruede, U.
1994-12-31
The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. The algorithm is based on an implicit extrapolation, so that it differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore requires neither uniform meshes nor global regularity assumptions. In the paper the authors analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results are compared to higher-order finite element solutions.
Do common systems control eye movements and motion extrapolation?
Makin, Alexis D J; Poliakoff, Ellen
2011-07-01
People are able to judge the current position of occluded moving objects. This operation is known as motion extrapolation. It has previously been suggested that motion extrapolation is independent of the oculomotor system. Here we revisited this question by measuring eye position while participants completed two types of motion extrapolation task. In one task, a moving visual target travelled rightwards, disappeared, then reappeared further along its trajectory. Participants discriminated correct reappearance times from incorrect (too early or too late) with a two-alternative forced-choice button press. In the second task, the target travelled rightwards behind a visible, rectangular occluder, and participants pressed a button at the time when they judged it should reappear. In both tasks, performance was significantly different under fixation as compared to free eye movement conditions. When eye movements were permitted, eye movements during occlusion were related to participants' judgements. Finally, even when participants were required to fixate, small changes in eye position around fixation (<2°) were influenced by occluded target motion. These results all indicate that overlapping systems control eye movements and judgements on motion extrapolation tasks. This has implications for understanding the mechanism underlying motion extrapolation.
Hsieh, T C; Chao, Anne
2017-01-01
Measures of phylogenetic diversity are basic tools in many studies of systematic biology. Faith’s PD (sum of branch lengths of a phylogenetic tree connecting all focal species) is the most widely used phylogenetic measure. Like species richness, Faith’s PD based on sampling data is highly dependent on sample size and sample completeness. The sample-size- and sample-coverage-based integration of rarefaction and extrapolation of Faith’s PD was recently developed to make fair comparison across multiple assemblages. However, species abundances are not considered in Faith’s PD. Based on the framework of Hill numbers, Faith’s PD was generalized to a class of phylogenetic diversity measures that incorporates species abundances. In this article, we develop both theoretical formulae and analytic estimators for seamless rarefaction and extrapolation for this class of abundance-sensitive phylogenetic measures, which includes simple transformations of phylogenetic entropy and of quadratic entropy. This work generalizes the previous rarefaction/extrapolation model of Faith’s PD to incorporate species abundance, and also extends the previous rarefaction/extrapolation model of Hill numbers to include phylogenetic differences among species. Thus a unified approach to assessing and comparing species/taxonomic diversity and phylogenetic diversity can be established. A bootstrap method is suggested for constructing confidence intervals around the phylogenetic diversity, facilitating the comparison of multiple assemblages. Our formulation and estimators can be extended to incidence data collected from multiple sampling units. We also illustrate the formulae and estimators using bacterial sequence data from the human distal esophagus and phyllostomid bat data from three habitats.
Rule-based extrapolation: a continuing challenge for exemplar models.
Denton, Stephen E; Kruschke, John K; Erickson, Michael A
2008-08-01
Erickson and Kruschke (1998, 2002) demonstrated that in rule-plus-exception categorization, people generalize category knowledge by extrapolating in a rule-like fashion, even when they are presented with a novel stimulus that is most similar to a known exception. Although exemplar models have been found to be deficient in explaining rule-based extrapolation, Rodrigues and Murre (2007) offered a variation of an exemplar model that was better able to account for such performance. Here, we present the results of a new rule-plus-exception experiment that yields rule-like extrapolation similar to that of previous experiments, and yet the data are not accounted for by Rodrigues and Murre's augmented exemplar model. Further, a hybrid rule-and-exemplar model is shown to better describe the data. Thus, we maintain that rule-plus-exception categorization continues to be a challenge for exemplar-only models.
Chiral Extrapolation of Lattice Data for Heavy Meson Hyperfine Splittings
X.-H. Guo; P.C. Tandy; A.W. Thomas
2006-03-01
We investigate the chiral extrapolation of the lattice data for the light-heavy meson hyperfine splittings D*-D and B*-B to the physical region for the light quark mass. The chiral loop corrections providing non-analytic behavior in m_π are consistent with chiral perturbation theory for heavy mesons. Since chiral loop corrections tend to decrease the already too low splittings obtained from linear extrapolation, we investigate two models to guide the form of the analytic background behavior: the constituent quark potential model, and the covariant model of QCD based on the ladder-rainbow truncation of the Dyson-Schwinger equations. The extrapolated hyperfine splittings remain clearly below the experimental values even allowing for the model dependence in the description of the analytic background.
Efficient implementation of minimal polynomial and reduced rank extrapolation methods
NASA Technical Reports Server (NTRS)
Sidi, Avram
1990-01-01
The minimal polynomial extrapolation (MPE) and reduced rank extrapolation (RRE) are two effective techniques that have been used to accelerate the convergence of vector sequences, such as those obtained from iterative solution of linear and nonlinear systems of equations. Their definitions involve linear least-squares problems, and this causes difficulties in their numerical implementation. Time-efficient and numerically stable implementations for MPE and RRE are developed. A computer program written in FORTRAN 77 is also appended and applied to some model problems.
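The least-squares step in MPE can be sketched in a few lines. This is the textbook definition of the method, not the stable implementation developed in the report; all names are illustrative, and NumPy's SVD-based `lstsq` stands in for the report's carefully factorized solver:

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation of a vector sequence.

    X : array of shape (k+2, n) holding iterates x_0, ..., x_{k+1}.
    Returns an estimate of the limit of the sequence.
    """
    U = np.diff(X, axis=0).T                 # columns u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)                    # c_k = 1 by convention
    gamma = c / c.sum()                      # normalized weights
    return gamma @ X[:-1]                    # weighted combination of iterates
```

For a linear iteration x_{m+1} = A x_m + b, choosing k equal to the degree of the minimal polynomial of A with respect to the initial error reproduces the fixed point exactly (up to roundoff).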
Extrapolation of scattering data to the negative-energy region
NASA Astrophysics Data System (ADS)
Blokhintsev, L. D.; Kadyrov, A. S.; Mukhamedzhanov, A. M.; Savin, D. A.
2017-04-01
Explicit analytic expressions are derived for the effective-range function for the case when the interaction is represented by a sum of the short-range square-well and long-range Coulomb potentials. These expressions are then transformed into forms convenient for extrapolating to the negative-energy region and obtaining the information about bound-state properties. Alternative ways of extrapolation are discussed. Analytic properties of separate terms entering these expressions for the effective-range function and the partial-wave scattering amplitude are investigated.
Biosimilars and the extrapolation of indications for inflammatory conditions
Tesser, John RP; Furst, Daniel E; Jacobs, Ira
2017-01-01
Extrapolation is the approval of a biosimilar for use in an indication held by the originator biologic not directly studied in a comparative clinical trial with the biosimilar. Extrapolation is a scientific rationale that bridges all the data collected (ie, totality of the evidence) from one indication for the biosimilar product to all the indications originally approved for the originator. Regulatory approval and marketing authorization of biosimilars in inflammatory indications are made on a case-by-case and agency-by-agency basis after evaluating the totality of evidence from the entire development program. This totality of the evidence comprises extensive comparative analytical, functional, nonclinical, and clinical pharmacokinetic/pharmacodynamic, efficacy, safety, and immunogenicity studies used by regulators when evaluating whether a product can be considered a biosimilar. Extrapolation reduces or eliminates the need for duplicative clinical studies of the biosimilar but must be justified scientifically with appropriate data. Understanding the concept, application, and regulatory decisions based on the extrapolation of data is important since biosimilars have the potential to significantly impact patient care in inflammatory diseases. PMID:28255229
How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?
NASA Astrophysics Data System (ADS)
Lin, Zesen; Fang, Guanwen; Kong, Xu
2016-12-01
Template-based extrapolation from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosity (L_IR) of galaxies. Utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ≈ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolation from the longest available PACS band based on the Chary & Elbaz template is a good estimator. Moreover, if a PACS measurement is unavailable, extrapolation from SPIRE observations based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The emission in the rest-frame 10-100 μm range of the IR SED is well described by all three templates, but only the Dale & Helou template gives a nearly unbiased estimate of the emission in the rest-frame submillimeter part.
Correlation energy extrapolation by many-body expansion
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...
2017-01-09
Accounting for electron correlation is required for high-accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at reduced computational expense and memory requirements, namely, correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.
Extrapolation of forest community types with a geographic information system
W.K. Clatterbuck; J. Gregory
1991-01-01
A geographic information system (GIS) was used to project eight forest community types from a 1,190-acre (482-ha) intensively sampled area to an unsampled 19,887-acre (8,054-ha) adjacent area with similar environments on the Western Highland Rim of Tennessee. Both physiographic and vegetative parameters were used to distinguish, extrapolate, and map communities.
Application of a framework for extrapolating chemical effects ...
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetics, life-stage, and pathway similarities/differences. Here we propose a framework using a tiered approach for species extrapolation that enables a transparent weight-of-evidence driven evaluation of pathway conservation (or lack thereof) in the context of adverse outcome pathways. Adverse outcome pathways describe the linkages from a molecular initiating event, defined as the chemical-biomolecule interaction, through subsequent key events leading to an adverse outcome of regulatory concern (e.g., mortality, reproductive dysfunction). Tier 1 of the extrapolation framework employs in silico evaluations of sequence and structural conservation of molecules (e.g., receptors, enzymes) associated with molecular initiating events or upstream key events. Such evaluations make use of available empirical and sequence data to assess taxonomic relevance. Tier 2 uses in vitro bioassays, such as enzyme inhibition/activation, competitive receptor binding, and transcriptional activation assays to explore functional conservation of pathways across taxa. Finally, Tier 3 provides a comparative analysis of in vivo responses between species utilizing well-established model organisms to assess departure from
Chiral Extrapolation of Light Mesons from the Lattice
NASA Astrophysics Data System (ADS)
Hu, Bin; Doring, Michael; Mai, Maxim; Molina, Raquel; Alexandru, Andrei
2017-01-01
The ρ(770) meson is the most extensively studied resonance in lattice QCD simulations in two (Nf = 2) and three (Nf = 2 + 1) flavors. We analyze all available phase shifts from Nf = 2 simulations using unitarized Chiral Perturbation Theory (UCHPT), allowing not only for the extrapolation in mass but also in flavor, Nf = 2 → Nf = 2 + 1. The flavor extrapolation requires information from a global fit to ππ and πK phase shifts from experiment. In the chiral extrapolations of Nf = 2 simulations, the K̄K channel has a significant effect and leads to ρ(770) masses surprisingly close to the experimental one. We also discuss recent results on the chiral extrapolations of Nf = 2 + 1 lattice QCD data for the ρ(770) and the σ(600) that have become available. Supported by the U.S. Department of Energy Grant DE-SC0014133, contract DE-AC05-06OR23177, and by the National Science Foundation (CAREER Grants Nos. 1452055 and PHY-1151648, PIF Grant No. 1415459).
MULTIPLE SOLVENT EXPOSURE IN HUMANS: CROSS-SPECIES EXTRAPOLATIONS (Future Research Plan)
Benignus, Vernon A.; Bushnell, Philip J.; Boyes, William K.
A few solvents can be safely studied in acute experiments in human subjects. Data exist in rats f...
Analytic Approximations for the Extrapolation of Lattice Data
Masjuan, Pere
2010-12-22
We present analytic approximations of chiral SU(3) amplitudes for the extrapolation of lattice data to the physical masses and the determination of next-to-next-to-leading-order low-energy constants. Lattice data for the ratio F_K/F_π are used to test the method.
Extrapolation of supersymmetry-breaking parameters to high energy scales
Stephen P Martin
2002-11-07
The author studies how well one can extrapolate the values of supersymmetry-breaking parameters to very high energy scales using future data from the Large Hadron Collider and an e⁺e⁻ linear collider. He considers tests of the unification of squark and slepton masses in supergravity-inspired models. In gauge-mediated supersymmetry-breaking models, he assesses the ability to measure the mass scales associated with supersymmetry breaking. He also shows that it is possible to get good constraints on a scalar cubic stop-stop-Higgs coupling near the high scale. Different assumptions with varying levels of optimism about the accuracy of input parameter measurements are made, and their impact on the extrapolated results is documented.
Temperature extrapolation of multicomponent grand canonical free energy landscapes
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.
2017-08-01
We derive a method for extrapolating the grand canonical free energy landscape of a multicomponent fluid system from one temperature to another. Previously, we introduced this statistical mechanical framework for the case where kinetic energy contributions to the classical partition function were neglected for simplicity [N. A. Mahynski et al., J. Chem. Phys. 146, 074101 (2017)]. Here, we generalize the derivation to admit these contributions in order to explicitly illustrate the differences that result. Specifically, we show how factoring out kinetic energy effects a priori, in order to consider only the configurational partition function, leads to simpler mathematical expressions that tend to produce more accurate extrapolations than when these effects are included. We demonstrate this by comparing and contrasting these two approaches for the simple cases of an ideal gas and a non-ideal, square-well fluid.
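The idea behind such temperature extrapolations can be illustrated with the canonical analogue of the identities involved: derivatives of ln Z with respect to β are cumulants of the energy, so moments measured at β₀ give a Taylor estimate at β₁. This is a deliberately simplified single-component, canonical sketch, not the grand canonical multicomponent formulation of the paper; all names are illustrative:

```python
import math

def extrapolate_lnZ(lnZ0, mean_E, var_E, beta0, beta1):
    """Second-order Taylor extrapolation of ln Z in inverse temperature.

    Uses d(ln Z)/d(beta) = -<E> and d^2(ln Z)/d(beta)^2 = Var(E),
    with both moments measured at beta0.
    """
    db = beta1 - beta0
    return lnZ0 - mean_E * db + 0.5 * var_E * db ** 2
```

For a two-level system with energies 0 and 1, Z(β) = 1 + e^(-β), and the extrapolation from β = 1.0 to β = 1.1 agrees with the exact ln Z up to the neglected third-order (skewness) term.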
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer, and extrapolation techniques, based upon the previous behavior of iterants, are utilized to speed convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
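Extrapolation based on the previous behavior of iterants can be illustrated with Aitken's Δ² process applied to the Rayleigh-quotient sequence of a power iteration for a characteristic value. This is a generic modern sketch, not the report's Gauss-Seidel scheme; all names are illustrative:

```python
import numpy as np

def aitken(s0, s1, s2):
    """Aitken delta-squared extrapolation of three successive estimates."""
    denom = s2 - 2.0 * s1 + s0
    return s2 - (s2 - s1) ** 2 / denom if denom != 0.0 else s2

def dominant_eigenvalue(A, iters=12):
    """Power iteration for the dominant characteristic value,
    accelerated by extrapolating the last three Rayleigh quotients."""
    x = np.ones(A.shape[0])
    estimates = []
    for _ in range(iters):
        y = A @ x
        estimates.append((x @ y) / (x @ x))    # Rayleigh quotient
        x = y / np.linalg.norm(y)              # renormalize the iterant
    return aitken(*estimates[-3:])
```

The guard on the denominator mirrors the report's concern with limiting the magnitude of extrapolation that may be tolerated: when successive differences nearly cancel, the scheme falls back to the unextrapolated estimate.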
Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.
Omelyan, I P
2006-09-01
A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations.
Phase unwrapping using an extrapolation-projection algorithm
NASA Astrophysics Data System (ADS)
Marendic, Boris; Yang, Yongyi; Stark, Henry
2006-08-01
We explore an approach to the unwrapping of two-dimensional phase functions using a robust extrapolation-projection algorithm. Phase unwrapping is essential for imaging systems that construct the image from phase information. Unlike some existing methods where unwrapping is performed locally on a pixel-by-pixel basis, this work approaches the unwrapping problem from a global point of view. The unwrapping is done iteratively by a modification of the Gerchberg-Papoulis extrapolation algorithm, and the solution is refined by projecting onto the available global data at each iteration. Robustness of the algorithm is demonstrated through its performance in a noisy environment, and in comparison with a least-squares algorithm well-known in the literature.
Specification of mesospheric density, pressure, and temperature by extrapolation
NASA Technical Reports Server (NTRS)
Graves, M. E.; Low, Y. S.; Miller, A. H.
1973-01-01
A procedure is presented which employs an extrapolation technique to obtain estimates of density, pressure, and temperature up to 90 km from 52 km data. The resulting errors are investigated. The procedure is combined with a special temperature interpolation method around the stratopause to produce such estimates at eight levels between 36 km and 90 km from North American sectional chart data at 5, 2, and 0.4 mb. Fifty charts were processed to obtain mean values and standard deviations at grid points for midseasonal months from 1964 to 1966. The mean values were compared with Groves' model, and internal consistency tests were performed upon the statistics. Through application of the extrapolation procedure, the atmospheric structure of a stratospheric warming event is studied.
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; ...
2016-02-18
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.
2016-02-18
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.
2016-01-01
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
NASA Astrophysics Data System (ADS)
Mirus, Benjamin B.; Halford, Keith; Sweetkind, Don; Fenelon, Joe
2016-08-01
An efficient extrapolation to the (T)/CBS limit
NASA Astrophysics Data System (ADS)
Ranasinghe, Duminda S.; Barnes, Ericka C.
2014-05-01
We extrapolate to the perturbative triples (T)/complete basis set (CBS) limit using double ζ basis sets without polarization functions (Wesleyan-1-Triples-2ζ or "Wes1T-2Z") and triple ζ basis sets with a single level of polarization functions (Wesleyan-1-Triples-3ζ or "Wes1T-3Z"). These basis sets were optimized for 102 species representing the first two rows of the Periodic Table. The species include the entire set of neutral atoms, positive and negative atomic ions, as well as several homonuclear diatomic molecules, hydrides, rare gas dimers, polar molecules, such as oxides and fluorides, and a few transition states. The extrapolated Wes1T-(2,3)Z triples energies agree with (T)/CBS benchmarks to within ±0.65 mEh, while the rms deviations of comparable model chemistries W1, CBS-APNO, and CBS-QB3 for the same test set are ±0.23 mEh, ±2.37 mEh, and ±5.80 mEh, respectively. The Wes1T-(2,3)Z triples calculation time for the largest hydrocarbon in the G2/97 test set, C6H5Me+, is reduced by a factor of 25 when compared to W1. The cost-effectiveness of the Wes1T-(2,3)Z extrapolation validates the usefulness of the Wes1T-2Z and Wes1T-3Z basis sets which are now available for a more efficient extrapolation of the (T) component of any composite model chemistry.
Biosimilars in Inflammatory Bowel Disease: Facts and Fears of Extrapolation.
Ben-Horin, Shomron; Vande Casteele, Niels; Schreiber, Stefan; Lakatos, Peter Laszlo
2016-12-01
Biologic drugs such as infliximab and other anti-tumor necrosis factor monoclonal antibodies have transformed the treatment of immune-mediated inflammatory conditions such as Crohn's disease and ulcerative colitis (collectively known as inflammatory bowel disease [IBD]). However, the complex manufacturing processes involved in producing these drugs mean their use in clinical practice is expensive. Recent or impending expiration of patents for several biologics has led to development of biosimilar versions of these drugs, with the aim of providing substantial cost savings and increased accessibility to treatment. Biosimilars undergo an expedited regulatory process. This involves proving structural, functional, and biological biosimilarity to the reference product (RP). It is also expected that clinical equivalency/comparability will be demonstrated in a clinical trial in one (or more) sensitive population. Once these requirements are fulfilled, extrapolation of biosimilar approval to other indications for which the RP is approved is permitted without the need for further clinical trials, as long as this is scientifically justifiable. However, such justification requires that the mechanism(s) of action of the RP in question should be similar across indications and also comparable between the RP and the biosimilar in the clinically tested population(s). Likewise, the pharmacokinetics, immunogenicity, and safety of the RP should be similar across indications and comparable between the RP and biosimilar in the clinically tested population(s). To date, most anti-tumor necrosis factor biosimilars have been tested in trials recruiting patients with rheumatoid arthritis. Concerns have been raised regarding extrapolation of clinical data obtained in rheumatologic populations to IBD indications. In this review, we discuss the issues surrounding indication extrapolation, with a focus on extrapolation to IBD.
Chiral extrapolation of the X(3872) binding energy
NASA Astrophysics Data System (ADS)
Baru, V.; Epelbaum, E.; Filin, A. A.; Gegelia, J.; Nefediev, A. V.
2016-02-01
The role of pion dynamics in the X(3872) charmonium-like state is studied in the framework of a renormalisable effective quantum field theory approach, and pions are found to play a substantial role in the formation of the X. A chiral extrapolation from the physical point to unphysically large pion masses is performed, and the results are confronted with lattice predictions. The proposed approach bridges the gap between the lattice calculations and the physical limit in mπ.
An efficient extrapolation to the (T)/CBS limit
Ranasinghe, Duminda S.; Barnes, Ericka C.
2014-05-14
Extrapolation and direct matching mediate anticipation in infancy.
Green, Dorota; Kochukhova, Olga; Gredebäck, Gustaf
2014-02-01
Why are infants able to anticipate occlusion events and other people's actions but not the movement of self-propelled objects? This study investigated infant and adult anticipatory gaze shifts during observation of self-propelled objects and human goal-directed actions. Six-month-old infants anticipated self-propelled balls but not human actions. This demonstrates that different processes mediate the ability to anticipate human actions (direct matching) versus self-propelled objects (extrapolation).
Line-of-sight extrapolation noise in dust polarization
NASA Astrophysics Data System (ADS)
Poh, Jason; Dodelson, Scott
2017-05-01
The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g. 350 GHz) is due solely to dust and then extrapolate the signal down to a lower frequency (e.g. 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of ~20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise on a greybody dust model consistent with Planck and Pan-STARRS observations, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r ≲ 0.0015 in the greybody dust models considered in this work.
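The temperature dependence of the dust scaling factor described above can be sketched numerically. The following is a minimal illustration, not the paper's Monte Carlo pipeline: a greybody spectrum proportional to ν^(3+β)/(e^(hν/kT) − 1) gives the 350 GHz → 150 GHz extrapolation factor, and the two temperatures and the spectral index β = 1.6 are illustrative values chosen to show that clouds at different temperatures scale differently.

```python
import numpy as np

H = 6.626e-34    # Planck constant (J s)
K_B = 1.381e-23  # Boltzmann constant (J/K)

def greybody(nu_ghz, temp_k, beta=1.6):
    """Modified-blackbody (greybody) intensity, up to an overall constant."""
    nu = nu_ghz * 1e9
    x = H * nu / (K_B * temp_k)
    return nu ** (3.0 + beta) / (np.exp(x) - 1.0)

def scaling_factor(temp_k, nu_hi=350.0, nu_lo=150.0, beta=1.6):
    """Factor used to extrapolate a dust signal from nu_hi down to nu_lo."""
    return greybody(nu_lo, temp_k, beta) / greybody(nu_hi, temp_k, beta)

# Clouds at different temperatures scale differently between frequencies,
# which is the source of the line-of-sight extrapolation noise: no single
# factor is exact for a sum of clouds at mixed temperatures.
f_warm = scaling_factor(25.0)
f_cold = scaling_factor(15.0)
print(f_warm, f_cold)
```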
Accurate extrapolation of electron correlation energies from small basis sets.
Bakowies, Dirk
2007-10-28
A new two-point scheme is proposed for the extrapolation of electron correlation energies obtained with small basis sets. Using the series of correlation-consistent polarized valence basis sets, cc-pVXZ, the basis set truncation error is expressed as ΔE(X) ∝ (X + ξ_i)^(−γ). The angular momentum offset ξ_i captures differences in effective rates of convergence previously observed for first-row molecules. It is based on simple electron counts and tends to values close to 0 for hydrogen-rich compounds and values closer to 1 for pure first-row compounds containing several electronegative atoms. The formula is motivated theoretically by the structure of correlation-consistent basis sets, which include basis functions up to angular momentum L = X−1 for hydrogen and helium and up to L = X for first-row atoms. It contains three parameters which are calibrated against a large set of 105 reference molecules (H, C, N, O, F) for extrapolations of MP2 and CCSD valence-shell correlation energies from double- and triple-zeta (DT) and triple- and quadruple-zeta (TQ) basis sets. The new model is shown to be three to five times more accurate than previous two-point schemes using a single parameter, and (TQ) extrapolations are found to reproduce a small set of available R12 reference data better than even (56) extrapolations using the conventional asymptotic limit formula ΔE(X) ∝ X^(−3). Applications to a small selection of boron compounds and to neon show very satisfactory results as well. Limitations of the model are discussed.
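A two-point scheme of this form has a closed-form solution: with E(X) = E_CBS + A(X + ξ)^(−γ), two computed energies determine A and E_CBS. A minimal sketch follows; the ξ and γ values and the toy energies are illustrative placeholders, not the paper's calibrated parameters.

```python
def cbs_two_point(e_lo, e_hi, x_lo, x_hi, xi=0.5, gamma=2.4):
    """Two-point extrapolation of E(X) = E_CBS + A*(X + xi)**(-gamma).

    xi and gamma are illustrative placeholder values, not the
    calibrated parameters from the paper.
    """
    f_lo = (x_lo + xi) ** (-gamma)
    f_hi = (x_hi + xi) ** (-gamma)
    a = (e_lo - e_hi) / (f_lo - f_hi)   # solve for the prefactor A
    return e_lo - a * f_lo              # subtract the truncation error

# Toy check: data generated exactly from the model form is recovered.
e_cbs_true, a_true = -76.40, 0.9
e_dz = e_cbs_true + a_true * (2 + 0.5) ** (-2.4)
e_tz = e_cbs_true + a_true * (3 + 0.5) ** (-2.4)
print(cbs_two_point(e_dz, e_tz, 2, 3))
```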
Line-of-Sight Extrapolation Noise in Dust Polarization
Poh, Jason; Dodelson, Scott
2016-06-28
Function Learning: An Exemplar Account of Extrapolation Performance
2003-10-29
Kwantes, Peter J. (DRDC Toronto); Neal, Andrew (University of Queensland)
Defence R&D Canada – Toronto, Technical Report DRDC Toronto TR 2003-138, October 2003. The report describes an exemplar-based architecture for representing knowledge of functional relationships in a virtual operator.
Data-Driven Scale Extrapolation: Application on Continental Scale
NASA Astrophysics Data System (ADS)
Gong, L.
2014-12-01
Large-scale hydrological models and land surface models are so far the only tools for assessing current and future water resources. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited availability and quality of data, as well as model uncertainties. A new purely data-driven scale-extrapolation method to estimate discharge for a large region solely from selected small sub-basins, which are typically 1-2 orders of magnitude smaller than the large region, has been developed. When tested in the Baltic Sea drainage basin, the method was able to provide accurate discharge estimation for the gauged area with sub-basins that cover 5% of the gauged area. There exist multiple sets of sub-basins whose climate and hydrology resemble those of the gauged area equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with 6% average error. The scale-extrapolation method is completely data-driven; therefore it does not force any modelling error into the prediction. The scale-extrapolation method is now being further tested at continental scale in Europe and North America to examine its potential for climate change studies.
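As a rough sketch of the idea, not the published algorithm, regional discharge can be estimated by pooling the specific discharge (discharge per unit area) of representative gauged sub-basins and scaling it to the regional area; the sub-basin numbers below are hypothetical.

```python
def scale_extrapolate(subbasins, region_area_km2):
    """Estimate regional discharge (m^3/s) from gauged sub-basins by
    scaling their pooled specific discharge up to the full region area.
    Simplified stand-in for the data-driven scale-extrapolation method."""
    total_q = sum(q for q, _ in subbasins)  # pooled discharge, m^3/s
    total_a = sum(a for _, a in subbasins)  # pooled area, km^2
    return total_q / total_a * region_area_km2

# Hypothetical sub-basins: (discharge in m^3/s, area in km^2).
subs = [(12.0, 400.0), (8.0, 250.0), (20.0, 600.0)]
print(scale_extrapolate(subs, 25000.0))
```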
Statistically extrapolated nowcasting of summertime precipitation over the Eastern Alps
NASA Astrophysics Data System (ADS)
Chen, Min; Bica, Benedikt; Tüchler, Lukas; Kann, Alexander; Wang, Yong
2017-07-01
This paper presents a new multiple linear regression (MLR) approach to updating the hourly, extrapolated precipitation forecasts generated by the INCA (Integrated Nowcasting through Comprehensive Analysis) system for the Eastern Alps. The generalized form of the model approximates the updated precipitation forecast as a linear response to combinations of predictors selected through a backward elimination algorithm from a pool of predictors. The predictors comprise the raw output of the extrapolated precipitation forecast, the latest radar observations, the convective analysis, and the precipitation analysis. For every MLR model, bias and distribution correction procedures are designed to further correct the systematic regression errors. Applications of the MLR models to a verification dataset containing two months of qualified samples, and to one-month gridded data, are performed and evaluated. Generally, MLR yields slight, but definite, improvements in the intensity accuracy of forecasts during the late evening to morning period, and significantly improves the forecasts for large thresholds. The structure-amplitude-location scores, used to evaluate the performance of the MLR approach, based on its simulation of morphological features, indicate that MLR typically reduces the overestimation of amplitudes and generates similar horizontal structures in precipitation patterns and slightly degraded location forecasts, when compared with the extrapolated nowcasting.
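Backward elimination over a predictor pool can be sketched as follows. This toy version drops any predictor whose removal barely changes the residual sum of squares, a simple stand-in for the statistical selection criterion; the predictor names mirror the pool described above, but the data are synthetic.

```python
import numpy as np

def fit(X, y):
    """Least-squares fit; return coefficients and residual SSE."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, float(np.sum((y - X @ coef) ** 2))

def backward_eliminate(X, y, names, tol=1e-3):
    """Greedily drop predictors whose removal raises SSE by <= tol."""
    cols = list(range(X.shape[1]))
    _, sse = fit(X[:, cols], y)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for j in list(cols):
            trial = [c for c in cols if c != j]
            _, sse_trial = fit(X[:, trial], y)
            if sse_trial - sse <= tol:   # predictor j adds ~nothing
                cols, sse = trial, sse_trial
                improved = True
                break
    return [names[c] for c in cols]

# Synthetic data: the response depends on the first two predictors only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1]
kept = backward_eliminate(X, y, ["raw_fcst", "radar", "conv", "precip"])
print(kept)
```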
A simple extrapolation of thermodynamic perturbation theory to infinite order
Ghobadi, Ahmadreza F.; Elliott, J. Richard
2015-09-21
Recent analyses of the third and fourth order perturbation contributions to the equations of state for square well spheres and Lennard-Jones chains show trends that persist across orders and molecular models. In particular, the ratio between orders (e.g., A3/A2, where Ai is the ith order perturbation contribution) exhibits a peak when plotted with respect to density. The trend resembles a Gaussian curve with the peak near the critical density. This observation can form the basis for a simple recursion and extrapolation from the highest available order to infinite order. The resulting extrapolation is analytic and therefore cannot fully characterize the critical region, but it remarkably improves accuracy, especially for the binodal curve. Whereas a second order theory is typically accurate for the binodal at temperatures within 90% of the critical temperature, the extrapolated result is accurate to within 99% of the critical temperature. In addition to square well spheres and Lennard-Jones chains, we demonstrate how the method can be applied semi-empirically to the Perturbed Chain - Statistical Associating Fluid Theory (PC-SAFT).
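The recursion to infinite order can be illustrated with a deliberately simplified version: if the ratio between successive perturbation contributions is held constant beyond the last computed order, the remaining tail sums as a geometric series. The paper models the ratio's density dependence; this sketch shows only the summation step.

```python
def extrapolate_infinite_order(contributions):
    """Sum a perturbation series [A1, A2, ..., An] to infinite order,
    treating the ratio of successive orders as fixed at its last value
    so the uncomputed tail sums as a geometric series. Simplified
    stand-in for the recursion described above."""
    if len(contributions) < 2:
        raise ValueError("need at least two orders")
    r = contributions[-1] / contributions[-2]
    if not abs(r) < 1.0:
        raise ValueError("series ratio must converge")
    tail = contributions[-1] * r / (1.0 - r)   # A_{n+1} + A_{n+2} + ...
    return sum(contributions) + tail

# Toy check against an exactly geometric series A_i = 0.5**i, whose
# infinite sum from i = 1 is 1.0.
series = [0.5 ** i for i in range(1, 5)]
print(extrapolate_infinite_order(series))
```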
Verwichte, E.; Foullon, C.; White, R. S.; Van Doorsselaere, T.
2013-04-10
Two transversely oscillating coronal loops are investigated in detail during a flare on 2011 September 6, using data from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. We compare two independent methods to determine the Alfvén speed inside these loops. Through the period of oscillation and loop length, information about the Alfvén speed inside each loop is deduced seismologically. This is compared with the Alfvén speed profiles deduced from magnetic extrapolation and spectral methods using the AIA bandpasses. We find that for both loops the two methods are consistent. Also, we find that the average Alfvén speed based on loop travel time is not necessarily a good measure to compare with the seismological result, which explains earlier reported discrepancies. Instead, the effect of density and magnetic stratification on the wave mode has to be taken into account. We discuss the implications of combining seismological, extrapolation, and spectral methods in deducing the physical properties of coronal loops.
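The seismological half of such a comparison typically rests on the fundamental standing kink mode relation c_k = 2L/P, the kink speed being close to the internal Alfvén speed up to a density-contrast factor. A one-line sketch with hypothetical loop numbers:

```python
def kink_speed_km_s(loop_length_km, period_s):
    """Phase speed of the fundamental standing kink mode, c_k = 2L/P.
    The kink speed tracks the internal Alfven speed up to a
    density-contrast factor, which is how an observed period and loop
    length yield a seismological Alfven-speed estimate."""
    return 2.0 * loop_length_km / period_s

# Hypothetical loop: length 220,000 km, oscillation period 300 s.
print(kink_speed_km_s(220_000.0, 300.0))
```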
Determination of Extrapolation Distance with Measured Pressure Signatures from Two Low-Boom Models
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil
2004-01-01
A study to determine a limiting distance-to-span ratio for the extrapolation of near-field pressure signatures is described and discussed. This study was to be done in two wind-tunnel facilities with two wind-tunnel models. At this time, only the first half had been completed, so the scope of this report is limited to the design of the models and to an analysis of the first set of measured pressure signatures. The results from this analysis showed that the pressure signatures measured at separation distances of 2 to 5 span lengths did not show the desired low-boom shapes. However, there were indications that the pressure signature shapes were becoming 'flat-topped'. This trend toward a 'flat-top' pressure signature shape was seen to be a gradual one at the distance ratios employed in this first series of wind-tunnel tests.
Polanco, Carlos; Buhse, Thomas; Vizcaíno, Gloria; Picciotto, Jacobo Levy
2017-01-01
This paper addresses the polar profile of ancient proteins using a comparative study of amino acids found in 25 000 000-year-old shells described in Abelson's work. We simulated the polar profile with a computer platform that represented an evolutionary computational toy model that mimicked the generation of small proteins starting from a pool of monomeric amino acids and that included several dynamic properties, such as self-replication and fragmentation-recombination of the proteins. The simulations were taken up to 15 generations and produced a considerable number of proteins of 25 amino acids in length. The computational model included the amino acids found in the ancient shells, the thermal degradation factor, and the relative abundance of the amino acids observed in the Miller-Urey experimental simulation of the prebiotic amino acid formation. We found that the amino acid polar profiles of the ancient shells and those simulated and extrapolated from the Miller-Urey abundances are coincident.
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
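A thin plate spline trained on displacements can be sketched directly with the standard 2-D kernel U(r) = r² log r plus an affine term. This is a generic TPS fit, not the authors' implementation, and the displacement field below is a synthetic affine example chosen so the smooth extrapolation behaviour is easy to verify.

```python
import numpy as np

def tps_fit(centers, values, reg=1e-8):
    """Fit a 2-D thin plate spline to scattered displacement values.
    Kernel U(r) = r^2 log r, plus an affine part [1, x, y]."""
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    K = np.where(d > 0, d ** 2 * np.log(d + 1e-300), 0.0)
    P = np.hstack([np.ones((n, 1)), centers])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + reg * np.eye(n)   # small ridge for stability
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.concatenate([values, np.zeros(3)])
    w = np.linalg.solve(A, rhs)
    return w[:n], w[n:]               # kernel weights, affine coeffs

def tps_eval(centers, w, a, points):
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    K = np.where(d > 0, d ** 2 * np.log(d + 1e-300), 0.0)
    return K @ w + a[0] + points @ a[1:]

# Displacements sampled from a smooth (affine) field are reproduced
# exactly, so evaluating beyond the sampled region stays smooth.
rng = np.random.default_rng(1)
pts = rng.uniform(size=(30, 2))
disp = 0.5 + 0.2 * pts[:, 0] - 0.1 * pts[:, 1]
w, a = tps_fit(pts, disp)
outside = np.array([[1.5, 1.5]])
print(tps_eval(pts, w, a, outside))
```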
Extrapolation of vertical target motion through a brief visual occlusion.
Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco
2010-03-01
It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.
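The gravity expectation amounts to solving d = vt + ½gt² for the arrival time instead of assuming constant speed. A small sketch with hypothetical distances and speeds:

```python
def arrival_time(distance, speed, accel):
    """Time for a target to cover `distance` starting at `speed` with
    constant acceleration `accel` (0 for constant-speed motion)."""
    if accel == 0:
        return distance / speed
    # Solve d = v*t + 0.5*a*t**2 for the positive root.
    disc = speed ** 2 + 2.0 * accel * distance
    return (-speed + disc ** 0.5) / accel

# A 1g (9.81 m/s^2) target descending 1.0 m from 2.0 m/s arrives earlier
# than a 0g (constant-speed) target with the same initial speed, which is
# the timing difference the threshold models must account for.
t_1g = arrival_time(1.0, 2.0, 9.81)
t_0g = arrival_time(1.0, 2.0, 0.0)
print(t_1g, t_0g)
```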
Acute toxicity value extrapolation with fish and aquatic invertebrates
Buckler, Denny R.; Mayer, Foster L.; Ellersieck, Mark R.; Asfaw, Amha
2005-01-01
Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled “Ecological Risk Analysis” (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more
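Applying the reported single-species extrapolation factors is a one-line computation. The species keys and the 10.0 mg/L input below are illustrative, while the factors 0.412, 0.331, and 0.041 are the values quoted in the abstract.

```python
def predict_sensitive_lc50(lc50, species):
    """Estimate an acute toxicity value protective of sensitive aquatic
    species from a single tested species, using the extrapolation
    factors reported above (rainbow trout, bluegill, scud)."""
    factors = {"rainbow_trout": 0.412, "bluegill": 0.331, "scud": 0.041}
    return lc50 * factors[species]

# A hypothetical 10.0 mg/L rainbow trout LC50 scales down to 4.12 mg/L.
print(predict_sensitive_lc50(10.0, "rainbow_trout"))
```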
Acute toxicity value extrapolation with fish and aquatic invertebrates.
Buckler, Denny R; Mayer, Foster L; Ellersieck, Mark R; Asfaw, Amha
2005-11-01
Poisson’s Ratio Extrapolation from Digital Image Correlation Experiments
2013-03-01
Miller, Timothy C.
Air Force Research Laboratory (AFRL/RQR), Edwards AFB, CA
Chiral extrapolations on the lattice with strange sea quarks
NASA Astrophysics Data System (ADS)
Descotes-Genon, Sébastien
2005-06-01
The (light but not-so-light) strange quark may play a special role in the low-energy dynamics of QCD. Strange sea-quark pairs may induce significant differences in the pattern of chiral symmetry breaking in the chiral limits of two and three massless flavours, in relation with the violation of the Zweig rule in the scalar sector. This effect could affect chiral extrapolations of unquenched lattice simulations with three dynamical flavours, and it could be detected through the quark-mass dependence of hadron observables [S. Descotes-Genon, hep-ph/0410233].
Chiral and continuum extrapolation of partially-quenched hadron masses
Chris Allton; Wes Armour; Derek Leinweber; Anthony Thomas; Ross Young
2005-09-29
Using the finite-range regularization (FRR) of chiral effective field theory, the chiral extrapolation formula for the vector meson mass is derived for the case of partially-quenched QCD. We re-analyze the dynamical fermion QCD data for the vector meson mass from the CP-PACS collaboration. A global fit, including finite lattice spacing effects, of all 16 of their ensembles is performed. We study the FRR method together with a naive polynomial approach and find excellent agreement (~1%) with the experimental value of M_ρ from the former approach. These results are extended to the case of the nucleon mass.
Novel extrapolation method in the Monte Carlo shell model
Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio
2010-12-15
We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of ^{56}Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g_{9/2}-shell calculation of ^{64}Ge.
Evidence for risk extrapolation in decision making by tadpoles
Crane, Adam L.; Ferrari, Maud C. O.
2017-01-01
Through time, the activity patterns, morphology, and development of both predators and prey change, which in turn alter the relative vulnerability of prey to their coexisting predators. Recognizing these changes can thus allow prey to make optimal decisions by projecting risk trends into the future. We used tadpoles (Lithobates sylvaticus) to test the hypothesis that tadpoles can extrapolate information about predation risk from past information. We exposed tadpoles to an odour that represented either a temporally consistent risk or an increasing risk. When tested for their response to the odour, the initial antipredator behaviour of tadpoles did not differ, appearing to approach the limit of their maximum response, but exposure to increasing risk induced longer retention of these responses. When repeating the experiment using lower risk levels, heightened responses occurred for tadpoles exposed to increasing risk, and the strongest responses were exhibited by those that received an abrupt increase compared to a steady increase. Our results indicate that tadpoles can assess risk trends through time and adjust their antipredator responses in a way consistent with an extrapolated trend. This is a sophisticated method for prey to avoid threats that are becoming more (or less) dangerous over part of their lifespan. PMID:28230097
Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.
Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel
2016-09-08
We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose molecular sizes range from 50 to 1000, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical framework and paves the way toward a theoretical protocol to systematically compute the solvation energies of complex organic ions.
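The power-law extrapolation described above can be sketched numerically. The specific N^(-1/3) form and all data values below are illustrative assumptions, not the paper's fitted results:

```python
def fit_power_law(sizes, energies):
    """Least-squares fit of E(N) = E_bulk + a * N**(-1/3).

    The fit is linear in x = N**(-1/3); the intercept E_bulk is the
    bulk (N -> infinity) extrapolation.
    """
    xs = [n ** (-1.0 / 3.0) for n in sizes]
    k = len(xs)
    mx = sum(xs) / k
    my = sum(energies) / k
    a = sum((x - mx) * (y - my) for x, y in zip(xs, energies)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - a * mx, a

# Illustrative droplet data: sizes (water molecules per droplet) and
# ion/water interaction energies (kcal/mol) -- hypothetical numbers.
sizes = [100, 300, 1000]
energies = [-95.0, -102.0, -106.5]
e_bulk, slope = fit_power_law(sizes, energies)  # intercept = bulk estimate
```

Any three droplets spanning the 100-1000 range constrain the two-parameter fit; adding more sizes simply over-determines the same straight line in x = N^(-1/3).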
Behavioral effects of carbon monoxide: Meta analyses and extrapolations
Benignus, V.A.
1993-03-16
In the absence of reliable data, the present work was performed to estimate the dose-effect function of carboxyhemoglobin (COHb) on behavior in humans. By meta analysis, a COHb-behavior dose-effect function was estimated for rats and corrected for the effects of hypothermia (which accompanies COHb increases in rats but not in humans). Using pulmonary function models and blood-gas equations, equivalent COHb values were calculated for data in the literature on hypoxic hypoxia (HH) and behavior. Another meta analysis was performed to fit a dose-effect function to the equivalent-COHb data and to correct for the behavioral effects of hypocapnia (which usually occurs during HH but not with COHb elevation). The two extrapolations agreed closely and indicated that for healthy, sedentary persons, 18-25% COHb would be required to produce a 10% decrement in behavior. Confidence intervals were computed to characterize the uncertainty. Frequent reports of lower-level effects were discussed.
Dead time corrections using the backward extrapolation method
NASA Astrophysics Data System (ADS)
Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.
2017-05-01
Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biases in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled count rate (counts per second, CPS), based on backward extrapolation of the losses, created by increasingly large artificially imposed dead times on the data, back to zero dead time. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
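The backward-extrapolation idea can be illustrated on synthetic data. The Poisson event stream, the non-paralyzing dead-time model (for which 1/m = 1/n + tau), and all parameter values are our assumptions for this sketch, not the paper's setup:

```python
import random

def apply_dead_time(times, tau):
    """Non-paralyzing dead time: drop events arriving within tau
    of the last *accepted* event; return the accepted count."""
    accepted, last = 0, -1e30  # effectively -inf: accept the first event
    for t in times:
        if t - last >= tau:
            accepted += 1
            last = t
    return accepted

def backward_extrapolate(times, duration, taus):
    """Impose growing artificial dead times, fit 1/rate against tau
    (linear for the non-paralyzing model), and extrapolate back to
    tau = 0 to recover the true rate."""
    inv_rates = [duration / apply_dead_time(times, tau) for tau in taus]
    k = len(taus)
    mt = sum(taus) / k
    mi = sum(inv_rates) / k
    slope = sum((t - mt) * (r - mi) for t, r in zip(taus, inv_rates)) \
            / sum((t - mt) ** 2 for t in taus)
    intercept = mi - slope * mt          # = 1/rate at zero dead time
    return 1.0 / intercept

# Synthetic Poisson event stream: 1000 cps for 200 s.
random.seed(1)
true_rate, duration = 1000.0, 200.0
t, times = 0.0, []
while t < duration:
    t += random.expovariate(true_rate)
    times.append(t)

est = backward_extrapolate(times, duration, [1e-4, 2e-4, 3e-4, 4e-4])
```

With paralyzing or mixed dead-time behavior the 1/rate-vs-tau relation is no longer exactly linear, which is one reason the measured-data version of this method needs care.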
Spatial extrapolation of lysimeter results using thermal infrared imaging
NASA Astrophysics Data System (ADS)
Voortman, B. R.; Bosveld, F. C.; Bartholomeus, R. P.; Witte, J. P. M.
2016-12-01
Measuring evaporation (E) with lysimeters is costly and prone to numerous errors. By comparing the energy balance and the remotely sensed surface temperature of lysimeters with those of the undisturbed surroundings, we were able to assess the representativeness of lysimeter measurements and to quantify differences in evaporation caused by spatial variations in soil moisture content. We used an algorithm (the so-called 3T model) to spatially extrapolate the measured E of a reference lysimeter based on differences in surface temperature, net radiation (Rn) and soil heat flux (G). We tested the performance of the 3T model on measurements with multiple lysimeters (47.5 cm inner diameter) and micro lysimeters (19.2 cm inner diameter) installed in bare sand, moss and natural dry grass. We developed different scaling procedures using in situ measurements and remotely sensed surface temperatures to derive spatially distributed estimates of Rn and G and explored the physical soundness of the 3T model. Scaling of Rn and G considerably improved the performance of the 3T model for the bare sand and moss experiments (Nash-Sutcliffe efficiency (NSE) increasing from 0.45 to 0.89 and from 0.81 to 0.94, respectively). For the grass surface, the scaling procedures resulted in a poorer performance of the 3T model (NSE decreasing from 0.74 to 0.70), which was attributed to effects of shading and the difficulty of correcting for differences in emissivity between dead and living biomass. The 3T model is physically unsound if the field-scale average air temperature, measured at an arbitrarily chosen reference height, is used as input to the model. The proposed measurement system is relatively cheap, since it uses a zero-tension (freely draining) lysimeter whose results are extrapolated by the 3T model to the unaffected surroundings. The system is promising for bridging the gap between ground observations and satellite-based estimates of E.
Survival Extrapolation in the Presence of Cause Specific Hazards
Benaglia, Tatiana; Jackson, Christopher H.; Sharples, Linda D.
2016-01-01
Health economic evaluations require estimates of expected survival from patients receiving different interventions, often over a lifetime. However, data on the patients of interest are typically only available for a much shorter follow-up time, from randomised trials or cohorts. Previous work showed how to use general population mortality to improve extrapolations of the short-term data, assuming a constant additive or multiplicative effect on the hazards for all-cause mortality for study patients relative to the general population. A more plausible assumption may be a constant effect on the hazard for the specific cause of death targeted by the treatments. To address this problem, we use independent parametric survival models for cause-specific mortality among the general population. Since causes of death are unobserved for the patients of interest, a polyhazard model is used to express their all-cause mortality as a sum of latent cause-specific hazards. Assuming proportional cause-specific hazards between the general and study populations then allows us to extrapolate mortality of the patients of interest to the long term. A Bayesian framework is used to jointly model all sources of data. By simulation we show that ignoring cause-specific hazards leads to biased estimates of mean survival when the proportion of deaths due to the cause of interest changes through time. The methods are applied to an evaluation of implantable cardioverter defibrillators (ICD) for the prevention of sudden cardiac death among patients with cardiac arrhythmia. After accounting for cause-specific mortality, substantial differences are seen in estimates of life years gained from ICD. PMID:25413028
Exposure Matching for Extrapolation of Efficacy in Pediatric Drug Development.
Mulugeta, Yeruk; Barrett, Jeffrey S; Nelson, Robert; Eshete, Abel Tilahun; Mushtaq, Alvina; Yao, Lynne; Glasgow, Nicole; Mulberg, Andrew E; Gonzalez, Daniel; Green, Dionna; Florian, Jeffry; Krudys, Kevin; Seo, Shirley; Kim, Insook; Chilukuri, Dakshina; Burckart, Gilbert J
2016-11-01
During drug development, matching adult systemic exposures of drugs is a common approach for dose selection in pediatric patients when efficacy is partially or fully extrapolated. This is a systematic review of approaches used for matching adult systemic exposures as the basis for dose selection in pediatric trials submitted to the US Food and Drug Administration (FDA) between 1998 and 2012. The trial design of pediatric pharmacokinetic (PK) studies and the pediatric and adult systemic exposure data were obtained from FDA publicly available databases containing reviews of pediatric trials. Exposure-matching approaches that were used as the basis for pediatric dose selection were reviewed. The PK data from the adult and pediatric populations were used to quantify exposure agreement between the 2 patient populations. The main measures were the pediatric PK studies' trial design elements and drug systemic exposures (adult and pediatric). There were 31 products (86 trials) with full or partial extrapolation of efficacy with an available PK assessment. Pediatric exposures had a range of mean Cmax and AUC ratios (pediatric/adult) of 0.63 to 4.19 and 0.36 to 3.60, respectively. Seven of the 86 trials (8.1%) had a predefined acceptance boundary used to match adult exposures. The key PK parameter was consistently predefined for antiviral and anti-infective products. Approaches to match exposure in children and adults varied across products. A consistent approach for systemic exposure matching and evaluating pediatric PK studies is needed to guide future pediatric trials. © 2016, The American College of Clinical Pharmacology.
Validation subset selections for extrapolation oriented QSPAR models.
Szántai-Kis, Csaba; Kövesdi, István; Kéri, György; Orfi, László
2003-01-01
One of the most important features of QSPAR models is their predictive ability, which should be checked by external validation. In this work we examined three different types of external validation set selection methods for their usefulness in in silico screening. The usefulness of the selection methods was studied in the following way: 1) we generated thousands of QSPR models and stored them in 'model banks'; 2) we selected a final top model from the model banks based on each of the three validation set selection methods; 3) we predicted large data sets, which we called 'chemical universe sets', and calculated the corresponding standard errors of prediction (SEPs). The models were generated from small fractions of the available water solubility data during a genetic algorithm (GA) variable subset selection procedure. The external validation sets were constructed by random selection, uniformly distributed selection, or perimeter-oriented selection. We found that the models performing best on the perimeter-oriented external validation sets usually gave the best validation results when the remaining part of the available data was overwhelmingly large, i.e., when the model had to make many extrapolations. We also compared the top final models obtained from the external validation set selection methods on three independent 'chemical universe sets' of different sizes.
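A minimal sketch of a perimeter-oriented validation split, assuming "perimeter" means the points farthest from the data centroid (the original work's exact criterion may differ):

```python
def perimeter_validation_split(X, k):
    """Pick the k points farthest from the data centroid as the
    external validation set; return (train_indices, val_indices).

    X is a list of equal-length feature rows (descriptor vectors).
    """
    n, d = len(X), len(X[0])
    centroid = [sum(row[j] for row in X) / n for j in range(d)]
    # Squared distance to the centroid, paired with the row index.
    dist2 = [(sum((row[j] - centroid[j]) ** 2 for j in range(d)), i)
             for i, row in enumerate(X)]
    dist2.sort(reverse=True)
    val_idx = sorted(i for _, i in dist2[:k])
    val_set = set(val_idx)
    train_idx = [i for i in range(n) if i not in val_set]
    return train_idx, val_idx
```

Validating on the perimeter forces the model bank to be ranked by extrapolative rather than interpolative performance, which matches the article's finding that perimeter-selected validation sets predict screening performance best when most predictions are extrapolations.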
Detail enhancement of blurred infrared images based on frequency extrapolation
NASA Astrophysics Data System (ADS)
Xu, Fuyuan; Zeng, Deguo; Zhang, Jun; Zheng, Ziyang; Wei, Fei; Wang, Tiedan
2016-05-01
A novel algorithm for enhancing the details of blurred infrared images based on frequency extrapolation is presented in this paper. Unlike other researchers' work, this algorithm mainly focuses on how to predict the higher-frequency information based on the Laplacian pyramid separation of the blurred image. The algorithm uses the first level of the high-frequency component of the pyramid of the blurred image to reverse-generate a higher, non-existing frequency component, which is added back to the histogram-equalized input blurred image. A simple nonlinear operator is used to analyze the extracted first-level high-frequency component of the pyramid. Two critical parameters participate in the calculation: the clipping parameter C and the scaling parameter S. A detailed analysis of how these two parameters work during the procedure is demonstrated with figures in this paper. The blurred image becomes clearer, and the detail is enhanced by the added higher-frequency information. The algorithm has the advantages of computational simplicity and good performance, and it can be deployed in real-time industrial applications. We performed extensive experiments and illustrate the algorithm's performance to demonstrate its effectiveness.
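A one-dimensional sketch of the idea. The paper works on 2-D Laplacian pyramids; the exact nonlinear operator combining the clipping parameter C and the scaling parameter S is our assumption here:

```python
def smooth(x):
    """3-tap binomial low-pass filter with edge clamping."""
    n = len(x)
    return [0.25 * x[max(i - 1, 0)] + 0.5 * x[i] + 0.25 * x[min(i + 1, n - 1)]
            for i in range(n)]

def enhance(signal, C=0.5, S=2.0):
    """Add a synthetic higher-frequency band to a blurred 1-D signal.

    high : first high-frequency band (signal minus its low-pass)
    C    : clipping parameter, limits the magnitude fed to the operator
    S    : scaling parameter, amplifies the predicted band
    """
    low = smooth(signal)
    high = [s - l for s, l in zip(signal, low)]
    # Assumed nonlinear operator: clip then scale, predicting energy in
    # a band above the highest one actually present in the input.
    predicted = [S * max(-C, min(C, h)) for h in high]
    return [s + p for s, p in zip(signal, predicted)]
```

On a blurred edge the predicted band overshoots on both sides of the transition, which is exactly the sharpening effect the frequency extrapolation is meant to produce.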
Impact ejecta dynamics in an atmosphere - Experimental results and extrapolations
NASA Technical Reports Server (NTRS)
Schultz, P. H.; Gault, D. E.
1982-01-01
It is noted that the impacts of 0.635-cm aluminum projectiles at 6 km/sec into fine pumice dust, at 1 atm, generate a ball of ionized gas behind an expanding curtain of upward moving ejecta. The gas ball forms a toroid which dissolves as it is driven along the interior of the ejecta curtain, by contrast to near-surface explosions in which a fireball envelops early-time crater growth. High frame rate Schlieren photographs show that the atmosphere at the base of the ejecta curtain is initially turbulent, but later forms a vortex. These experiments suggest that although small size ejecta may be decelerated by air drag, they are not simply lofted and suspended but become incorporated in an ejecta cloud that is controlled by air flow which is produced by the response of the atmosphere to the impact. The extrapolation of these results to large body impacts on the earth suggests such contrasts with laboratory experiments as a large quantity of impact-generated vapor, the supersonic advance of the ejecta curtain, the lessened effect of air drag due to the tenuous upper atmosphere, and the role of secondary cratering.
Adaptation to implied tilt: extensive spatial extrapolation of orientation gradients
Roach, Neil W.; Webb, Ben S.
2013-01-01
To extract the global structure of an image, the visual system must integrate local orientation estimates across space. Progress is being made toward understanding this integration process, but very little is known about whether the presence of structure exerts a reciprocal influence on local orientation coding. We have previously shown that adaptation to patterns containing circular or radial structure induces tilt-aftereffects (TAEs), even in locations where the adapting pattern was occluded. These spatially “remote” TAEs have novel tuning properties and behave in a manner consistent with adaptation to the local orientation implied by the circular structure (but not physically present) at a given test location. Here, by manipulating the spatial distribution of local elements in noisy circular textures, we demonstrate that remote TAEs are driven by the extrapolation of orientation structure over remarkably large regions of visual space (more than 20°). We further show that these effects are not specific to adapting stimuli with polar orientation structure, but require a gradient of orientation change across space. Our results suggest that mechanisms of visual adaptation exploit orientation gradients to predict the local pattern content of unfilled regions of space. PMID:23882243
Extrapolation of EMMA to the Moon and Mars
NASA Astrophysics Data System (ADS)
Maurette, Michel
In this section, EMMA is first extrapolated to the Moon, to hopefully get new clues about a confusing problem that people have failed to figure out since 1970. It deals with the so-called meteoritic contamination of the lunar crust in siderophile elements such as iridium, which was previously attributed to the crater-forming impactors and not to micrometeorites. We next move to Mars to try to check whether EMMA can account for the high sulfur and nickel contents of Martian soils measured with instruments carried by the rovers Spirit and Opportunity in 2004. Before 1999, this obscure contamination looked at first glance to be of limited interest. Consequently, it was neglected in earlier works. We discovered recently that the true reason for this neglect was probably that the description of this contamination on the Moon and Mars requires facing an astonishing diversity of very difficult problems in planetology, in which we became bogged down. But it was too late to quit.
An empirical relationship for extrapolating sparse experimental lap joint data.
Segalman, Daniel Joseph; Starr, Michael James
2010-10-01
Correctly incorporating the influence of mechanical joints in built-up mechanical systems is a critical element of model development for structural dynamics predictions. Quality experimental data are often difficult to obtain and are rarely sufficient to fully determine parameters for relevant mathematical models. On the other hand, fine-mesh finite element (FMFE) modeling facilitates innumerable numerical experiments at modest cost. Detailed FMFE analysis of built-up structures with frictional interfaces reproduces trends among problem parameters found experimentally, but there are qualitative differences. Those differences are currently ascribed to the very approximate nature of the friction model available in most finite element codes. Though numerical simulations are insufficient to produce qualitatively correct behavior of joints, some relations, developed here through observations of a multitude of numerical experiments, suggest interesting relationships among joint properties measured under different loading conditions. These relationships can be generalized into forms consistent with data from physical experiments. One such relationship, developed here, expresses the rate of energy dissipation per cycle within the joint under various combinations of extensional and clamping load in terms of dissipation under other load conditions. The use of this relationship, though not exact, is demonstrated for the purpose of extrapolating a representative set of experimental data to span the range of variability observed in real data.
Time-domain incident-field extrapolation technique based on the singularity-expansion method
Klaasen, J.J.
1991-05-01
In this report, a method is presented to extrapolate measurements from Nuclear Electromagnetic Pulse (NEMP) assessments directly in the time domain. This method is based on a time-domain extrapolation function, which is obtained from the Singularity Expansion Method (SEM) representation of the measured incident field of the NEMP simulator. Once the time-domain extrapolation function is determined, the responses recorded during an assessment can be extrapolated simply by convolving them with the time-domain extrapolation function. It is found that to obtain useful extrapolated responses, the incident field measurements need to be made minimum phase; otherwise unbounded results can be obtained. Results obtained with this technique are presented, using data from actual assessments.
Interspecies extrapolation in carcinogenesis: prediction between rats and mice.
Gold, L S; Bernstein, L; Magaw, R; Slone, T H
1989-05-01
Interspecies extrapolation in carcinogenesis is studied by evaluating prediction from rats to mice and from mice to rats. The Carcinogenic Potency Database, which includes 3500 cancer tests conducted in rats or mice on 955 compounds, is used for the analysis. About half of the chemicals tested for carcinogenicity are positive in at least one test, and this proportion is similar when rats and mice are considered separately. For 392 chemicals tested in both species, 76% of the rat carcinogens are positive in the mouse, and 70% of mouse carcinogens are positive in the rat. When compounds composed solely of chlorine, carbon, hydrogen, and, optionally, oxygen are excluded from the analysis, 75% of mouse carcinogens are positive in the rat. Overall concordance (the percentage positive in both species plus the percentage negative in both) is 76%. Three factors that affect prediction between rats and mice are discussed: chemical class, mutagenicity in the Salmonella assay, and the dose level at which a chemical is toxic. Prediction is more accurate for mutagens than non-mutagens and for substances that are toxic at low (versus only at high) doses. Species differences are not the result of failure in the bioassay to attain the maximum tolerated dose in the negative species or of more frequent testing in the positive species. An analysis of the predictive value of positivity for the 10 most common target sites indicates that most sites are good predictors of carcinogenicity at some site in the other species; the poorest predictors among these common sites are the rat urinary bladder and the mouse liver.
NASA Technical Reports Server (NTRS)
Darden, C. M.
1984-01-01
A method for analyzing shock coalescence that includes three-dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with, and has been combined with, a nonlinear sonic boom extrapolation program based on the method of characteristics. The extrapolation program is able to extrapolate pressure signatures, including embedded shocks, from an initial data line in the plane of symmetry at approximately one body length from the axis of the aircraft to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine the spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.
Extrapolating human judgments from skip-gram vector representations of word meaning.
Hollis, Geoff; Westbury, Chris; Lefsrud, Lianne
2017-08-01
There is a growing body of research in psychology that attempts to extrapolate human lexical judgments from computational models of semantics. This research can be used to help develop comprehensive norm sets for experimental research, it has applications to large-scale statistical modelling of lexical access and has broad value within natural language processing and sentiment analysis. However, the value of extrapolated human judgments has recently been questioned within psychological research. Of primary concern is the fact that extrapolated judgments may not share the same pattern of statistical relationship with lexical and semantic variables as do actual human judgments; often the error component in extrapolated judgments is not psychologically inert, making such judgments problematic to use for psychological research. We present a new methodology for extrapolating human judgments that partially addresses prior concerns of validity. We use this methodology to extrapolate human judgments of valence, arousal, dominance, and concreteness for 78,286 words. We also provide resources for users to extrapolate these human judgments for three million English words and short phrases. Applications for large sets of extrapolated human judgments are demonstrated and discussed.
Cross-species extrapolation of chemical effects: Challenges and new insights
One of the greatest uncertainties in chemical risk assessment is extrapolation of effects from tested to untested species. While this undoubtedly is a challenge in the human health arena, species extrapolation is a particularly daunting task in ecological assessments, where it is...
Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I
2014-01-01
Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
Load extrapolations based on measurements from an offshore wind turbine at alpha ventus
NASA Astrophysics Data System (ADS)
Lott, Sarah; Cheng, Po Wen
2016-09-01
Statistical extrapolation of loads can be used to estimate the extreme loads that are expected to occur on average once in a given return period. Load extrapolations of extreme loads recorded over a period of three years at different measurement positions of an offshore wind turbine at the alpha ventus offshore test field have been performed. The difficulties that arise when using measured instead of simulated extreme loads to determine 50-year return loads are discussed in detail. The main challenge is posed by outliers in the databases, which have a significant influence on the extrapolated extreme loads. Results of the short- and long-term extreme load extrapolations are presented, comprising different methods for the extreme load extraction, the choice of the statistical distribution function, and the fitting method. Generally, load extrapolation with measurement data is possible, but care should be taken in the selection of the database and the choice of the distribution function and fitting method.
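As one concrete instance of the distribution choice discussed above: a Gumbel fit of block maxima by the method of moments, with the associated return-level formula. The paper compares several distributions and fitting methods; this is only one of the options, shown with illustrative data:

```python
import math

def gumbel_moments(maxima):
    """Method-of-moments Gumbel fit: scale beta = s*sqrt(6)/pi,
    location mu = mean - gamma*beta (gamma = Euler-Mascheroni const)."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772156649 * beta
    return mu, beta

def return_level(mu, beta, T):
    """Load exceeded on average once per T blocks (e.g. once in T years
    for annual maxima): x_T = mu - beta * ln(-ln(1 - 1/T))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

A single outlier inflates the sample variance and hence beta, dragging the 50-year return level upward, which is exactly why outliers in the measured database dominate the extrapolated extreme loads.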
Strong, James Asa; Elliott, Michael
2017-03-15
The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly entrain significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process.
Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?
NASA Astrophysics Data System (ADS)
Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.
2016-02-01
It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C_0 lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren extrapolation, even if the average austenite composition is outside.
NASA Astrophysics Data System (ADS)
Ismail, Amira; Gorgey, Annie
2015-10-01
Extrapolation involves taking a certain linear combination of the numerical solutions of a base method applied with different stepsizes to obtain greater accuracy. This linear combination is chosen so as to eliminate the leading error term. The technique of extrapolation has been applied successfully to accelerate convergence in the numerical solution of ordinary differential equations. In this study, symmetric Runge-Kutta methods for solving linear and nonlinear stiff problems are considered. Symmetric methods admit an asymptotic error expansion in even powers of the stepsize and are therefore of special interest, because successive extrapolations can increase the order by two at a time. Although extrapolation can give greater accuracy, depending on the stepsize chosen, the numerical approximations are often degraded by accumulated round-off errors. Therefore, it is important to control the rounding errors, especially when applying extrapolation. One way to minimize round-off errors is to apply compensated summation. In this paper, numerical results are given for the symmetric Runge-Kutta methods Implicit Midpoint and Implicit Trapezoidal Rule applied with and without compensated summation. The results show that symmetric methods with higher-level extrapolation using compensated summation give much smaller errors. On the other hand, when symmetric methods are applied with extrapolation but without compensated summation, the errors are badly affected by rounding.
Chen, Yuan; Liu, Liling; Nguyen, Khanh; Fretland, Adrian J
2011-03-01
Reaction phenotyping using recombinant human cytochromes P450 (P450) has great utility in early discovery. However, to fully realize the advantages of using recombinant expressed P450s, the extrapolation of data from recombinant systems to human liver microsomes (HLM) is required. In this study, intersystem extrapolation factors (ISEFs) were established for CYP1A2, CYP2C8, CYP2C9, CYP2C19, CYP2D6, and CYP3A4 using 11 probe substrates, based on substrate depletion and/or metabolite formation kinetics. The ISEF values for CYP2C9, CYP2D6, and CYP3A4 determined using multiple substrates were similar across substrates. When enzyme kinetics of metabolite formation for CYP1A2, 2C9, 2D6, and 3A4 were used, the ISEFs determined were generally within 2-fold of those determined on the basis of substrate depletion. Validation of the ISEFs was conducted using 10 marketed drugs by comparing the extrapolated data with published data. The major isoforms responsible for the metabolism were identified, and the contribution of the predominant P450s was similar to that of previously reported data. In addition, phenotyping data from internal compounds, extrapolated using the rhP450-ISEF method, were comparable to those obtained using an HLM-based inhibition assay approach. Moreover, the intrinsic clearance (CL(int)) calculated from extrapolated rhP450 data correlated well with measured HLM CL(int). The ISEF method established in our laboratory provides a convenient tool in early reaction phenotyping for situations in which the HLM-based inhibition approach is limited by low turnover and/or unavailable metabolite formation. Furthermore, this method allows quantitative extrapolation of HLM intrinsic clearance from rhP450 phenotyping data while simultaneously identifying the participating metabolizing enzymes.
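The ISEF scaling step itself amounts to weighting each recombinant clearance by its ISEF and hepatic enzyme abundance and summing over isoforms. A hedged sketch, in which the abundances, ISEF values, and clearances are invented illustrative numbers rather than data from this study:

```python
# Hypothetical illustration of ISEF scaling; all numbers are made up.
ABUNDANCE_PMOL_PER_MG = {  # assumed typical hepatic P450 abundances
    "CYP1A2": 45.0, "CYP2C9": 96.0, "CYP2D6": 10.0, "CYP3A4": 108.0,
}

def hlm_clint(rh_clint_ul_min_pmol, isef):
    """Scale recombinant-P450 intrinsic clearances (uL/min/pmol P450)
    to an HLM intrinsic clearance (uL/min/mg microsomal protein):
        CLint,HLM = sum_i ISEF_i * CLint,rh_i * abundance_i.
    Returns the total and each isoform's fractional contribution."""
    contributions = {}
    total = 0.0
    for cyp, clint in rh_clint_ul_min_pmol.items():
        part = isef[cyp] * clint * ABUNDANCE_PMOL_PER_MG[cyp]
        contributions[cyp] = part
        total += part
    fm = {cyp: part / total for cyp, part in contributions.items()}
    return total, fm
```

The fractional contributions fm fall out of the same sum, which is how the predominant P450s are identified from phenotyping data.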
Conic state extrapolation [computer program for space shuttle navigation and guidance requirements]
NASA Technical Reports Server (NTRS)
Shepperd, S. W.; Robertson, W. M.
1973-01-01
The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.
An extrapolation scheme for solid-state NMR chemical shift calculations
NASA Astrophysics Data System (ADS)
Nakajima, Takahito
2017-06-01
Conventional quantum chemical and solid-state physics approaches each face several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme yields satisfactory solid-state NMR magnetic shielding constants. The estimated values depend only weakly on the low-level density functional theory calculation used in the extrapolation. Thus, our approach is efficient, because only a rough calculation needs to be performed within the extrapolation scheme.
ERIC Educational Resources Information Center
Boudreaux, Gregory M.; Wells, M. Scott
2007-01-01
Everyone with a thorough knowledge of single variable calculus knows that integration can be used to find the length of a curve on a given interval, called its arc length. Fortunately, if one endeavors to pose and solve more interesting problems than simply computing lengths of various curves, there are techniques available that do not require an…
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...
Melting of "non-magic" argon clusters and extrapolation to the bulk limit
NASA Astrophysics Data System (ADS)
Senn, Florian; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter; Pahl, Elke
2014-01-01
The melting of argon clusters ArN is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K in good agreement with the previous value of 88.9 K using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, "Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations," Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that for the extrapolation to the bulk one does not have to restrict to magic number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially for the systematic selection of suitable cluster sizes.
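A common fitting form for such bulk extrapolations (assumed here for illustration; the authors' exact procedure may differ) is Tm(N) = Tm(bulk) − c·N^(−1/3), so a straight-line fit of Tm against N^(−1/3) has the bulk melting temperature as its intercept at N^(−1/3) → 0. A sketch on synthetic data generated from that assumed law, not the paper's Monte Carlo results:

```python
import numpy as np

# Synthetic (N, Tm) pairs from an assumed Tm(N) = T_bulk - c * N**(-1/3).
t_bulk_true, c_true = 85.9, 60.0
n = np.array([55, 85, 147, 201, 309], dtype=float)
tm = t_bulk_true - c_true * n ** (-1.0 / 3.0)

# Extrapolate to the bulk limit N -> infinity, i.e. x = N^(-1/3) -> 0.
x = n ** (-1.0 / 3.0)
slope, intercept = np.polyfit(x, tm, 1)
t_bulk_estimated = intercept  # value of the linear fit at x = 0
```

With real simulation data the scatter of the finite-size melting temperatures around the fit line is what makes the choice of cluster sizes delicate.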
NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.
Hinrichs, R N; McLean, S P
1995-10-01
This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object sufficiently large to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
Controversy on toxicological dose-response relationships and low-dose extrapolation of respective risks is often the consequence of misleading data presentation, lack of differentiation between types of response variables, and diverging mechanistic interpretation. In this chapter...
Measuring Thermodynamic Length
Crooks, Gavin E
2007-09-07
Thermodynamic length is a metric distance between equilibrium thermodynamic states. Among other interesting properties, this metric asymptotically bounds the dissipation induced by a finite-time transformation of a thermodynamic system. It is also connected to the Jensen-Shannon divergence, Fisher information, and Rao's entropy differential metric. Therefore, thermodynamic length is of central interest in understanding matter out of equilibrium. In this Letter, we will consider how to define thermodynamic length for a small system described by equilibrium statistical mechanics and how to measure thermodynamic length within a computer simulation. Surprisingly, Bennett's classic acceptance ratio method for measuring free energy differences also measures thermodynamic length.
NASA Astrophysics Data System (ADS)
Li, C.; Nowack, R. L.; Pyrak-Nolte, L.
2003-12-01
Seismic tomographic experiments in soil and rock are strongly affected by limited and non-uniform ray coverage. We propose a new method to extrapolate data used for seismic tomography to full coverage. The proposed two-stage autoregressive extrapolation technique can be used to extend the available data and provide better tomographic images. The algorithm is based on the principle that the extrapolated data adds minimal information to the existing data. A two-stage autoregressive (AR) extrapolation scheme is then applied to the seismic tomography problem. The first stage of the extrapolation is to find the optimal prediction-error filter (PE filter). For the second stage, we use the PE filter to find the values for the missing data so that the power out of the PE filter is minimized. At the second stage, we are able to estimate missing data values with the same spectrum as the known data. This is similar to maximizing an entropy criterion. Synthetic tomographic experiments have been conducted and demonstrate that the two-stage AR extrapolation technique is a powerful tool for data extrapolation and can improve the quality of tomographic inversions of experimental and field data. Moreover, the two-stage AR extrapolation technique is tolerant to noise in the data and can still extrapolate the data to obtain overall patterns, which is very important for real data applications. In this study, we have applied AR extrapolation to a series of datasets from laboratory tomographic experiments on synthetic sediments with known structure. In these tomographic experiments, glass beads saturated with de-ionized water were used as the synthetic water-saturated background sediments. The synthetic sediments were packed in plastic cylindrical containers with a diameter of 220 mm. Tomographic experiments were then set up to measure transmitted acoustic waves through the sediment samples from multiple directions. We recorded data for sources and receivers with varying angular
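A minimal sketch of the autoregressive idea behind the scheme described above, collapsed into one helper: stage 1 fits prediction-error-filter (AR) coefficients to the known trace by least squares, and stage 2 runs the filter forward so that the extension adds minimal new information. The `ar_extrapolate` helper is our simplified stand-in, not the authors' two-stage code:

```python
import numpy as np

def ar_extrapolate(data, order, n_extra):
    """Fit AR coefficients a[k] so that data[t] ~ sum_k a[k]*data[t-1-k],
    then recursively predict n_extra samples past the end of the record."""
    data = np.asarray(data, dtype=float)
    # Stage 1: least-squares fit of the prediction (PE-filter) coefficients.
    rows = [data[t - order:t][::-1] for t in range(order, len(data))]
    A = np.array(rows)
    b = data[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Stage 2: recursively predict the missing samples.
    out = list(data)
    for _ in range(n_extra):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out)
```

For data that really follow a low-order AR model the extension reproduces the true continuation; for field data the fit preserves the spectrum of the known trace, which is the property the tomographic application relies on.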
The sarcomere length-tension relation in skeletal muscle
1978-01-01
Tension development during isometric tetani in single fibers of frog semitendinosus muscle occurs in three phases: (a) an initial fast-rise phase; (b) a slow-rise phase; and (c) a plateau, which lasts more than 10 s. The slow-rise phase has previously been assumed to arise from a progressive increase of sarcomere length dispersion along the fiber (Gordon et al. 1966. J. Physiol. [Lond.] 184:143-169; 184:170-192). Consequently, the "true" tetanic tension has been considered to be the one existing before the onset of the slow-rise phase; this is obtained by extrapolating the slowly rising tension back to the start of the tetanus. In the study by Gordon et al. (1966. J. Physiol. [Lond.] 184:170-192), as well as in the present study, the relation between this extrapolated tension and sarcomere length gave the familiar linear descending limb of the length-tension relation. We tested the assumption that the slow rise of tension was due to a progressive increase in sarcomere length dispersion. During the fast rise, the slow rise, and the plateau of tension, the sarcomere length dispersion at any area along the muscle was less than 4% of the average sarcomere length. Therefore, a progressive increase of sarcomere length dispersion during contraction appears unable to account for the slow rise of tetanic tension. A sarcomere length-tension relation was constructed from the levels of tension and sarcomere length measured during the plateau. Tension was independent of sarcomere length between 1.9 and 2.6 μm, and declined to 50% maximal at 3.4 μm. This result is difficult to reconcile with the cross-bridge model of force generation. PMID:309929
Optimal channels of the Garvey-Kelson mass relations in extrapolation
NASA Astrophysics Data System (ADS)
Bao, Man; He, Zeng; Cheng, YiYuan; Zhao, YuMin; Arima, Akito
2017-02-01
Garvey-Kelson mass relations connect nuclear masses of neighboring nuclei within high accuracy, and provide us with convenient tools in predicting unknown masses by extrapolations from existent experimental data. In this paper we investigate optimal "channels" of the Garvey-Kelson relations in extrapolation to the unknown regions, and tabulate our predicted masses by using these optimized channels of the Garvey-Kelson relations.
NASA Astrophysics Data System (ADS)
Tay, Kim Gaik; Kek, Sie Long; Abdul-Kahar, Rosmila
2015-05-01
In this paper, we have addressed the limitations of our previous two Richardson extrapolation spreadsheet calculators for computing derivatives numerically. The new feature of this Richardson extrapolation spreadsheet calculator is that it is fully automated up to any extrapolation level, based on the stopping criteria, using VBA programming. The new version is more flexible because it is controlled by the program. Furthermore, it reduces computational time and CPU memory usage.
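For numerical differentiation, the Richardson tableau that such a calculator automates can be sketched as follows (a generic textbook formulation in Python rather than the authors' VBA spreadsheet):

```python
import math

def richardson_derivative(f, x, h, levels=4):
    """Richardson extrapolation of the central difference
    D(h) = (f(x+h) - f(x-h)) / (2h), whose error expansion contains only
    even powers of h, so each extrapolation level gains two orders.
    Returns the triangular tableau; the last entry of the last row is the
    most-extrapolated estimate."""
    tableau = [[(f(x + h) - f(x - h)) / (2 * h)]]
    for i in range(1, levels):
        h /= 2.0
        row = [(f(x + h) - f(x - h)) / (2 * h)]
        for j in range(1, i + 1):
            factor = 4.0 ** j  # eliminates the h^(2j) error term
            row.append((factor * row[j - 1] - tableau[i - 1][j - 1])
                       / (factor - 1))
        tableau.append(row)
    return tableau

# demo: derivative of exp at 0 (exact value 1.0)
tab = richardson_derivative(math.exp, 0.0, 0.5, levels=4)
```

A stopping criterion like the one the calculator automates would compare successive diagonal entries and stop once their difference falls below a tolerance.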
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure from the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels, with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on the in situ RS and S-SYNC signals is lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference level for electric fields.
Structure-factor extrapolation using the scalar approximation: theory, applications and limitations.
Genick, Ulrich K
2007-10-01
For many experiments in macromolecular crystallography, the overall structure of the protein/nucleic acid is already known and the aim of the experiment is to determine the effect a chemical or physical perturbation/activation has on the structure of the molecule. In a typical experiment, an experimenter will collect a data set from a crystal in the unperturbed state, perform the perturbation (i.e. soaking a ligand into the crystal or activating the sample with light) and finally collect a data set from the perturbed crystal. In many cases the perturbation fails to activate all molecules, so that the crystal contains a mix of molecules in the activated and native states. In these cases, it has become common practice to calculate a data set corresponding to a hypothetical fully activated crystal by linear extrapolation of structure-factor amplitudes. These extrapolated data sets often aid greatly in the interpretation of electron-density maps. However, the extrapolation of structure-factor amplitudes is based on a mathematical shortcut that treats structure factors as scalars, not vectors. Here, a full derivation is provided of the error introduced by this approximation and it is determined how this error scales with key experimental parameters. The perhaps surprising result of this analysis is that for most structural changes encountered in protein crystals, the error introduced by the scalar approximation is very small. As a result, the extrapolation procedure is largely limited by the propagation of experimental uncertainties of individual structure-factor amplitudes. Ultimately, propagation of these uncertainties leads to a reduction in the effective resolution of the extrapolated data set. The program XTRA, which implements SASFE (scalar approximation to structure-factor extrapolation), performs error-propagation calculations and determines the effective resolution of the extrapolated data set, is further introduced.
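The linear extrapolation of amplitudes discussed above can be sketched as follows; the function name and activation-fraction parameter are our illustrative choices, not the interface of the XTRA program:

```python
def extrapolate_amplitudes(f_native, f_mixed, activated_fraction):
    """Scalar approximation to structure-factor extrapolation: treat each
    structure factor as a scalar amplitude and extrapolate linearly to the
    hypothetical fully activated crystal,
        |F_ext| = |F_native| + (|F_mixed| - |F_native|) / alpha,
    where alpha is the fraction of molecules actually activated.
    Illustrative sketch only."""
    return [fn + (fm - fn) / activated_fraction
            for fn, fm in zip(f_native, f_mixed)]
```

Note that the difference term is scaled by 1/alpha, so the experimental uncertainty of each amplitude is amplified by roughly the same factor, which is the error-propagation effect (and resulting loss of effective resolution) analyzed in the paper.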
Blake, R L; Ferguson, H
1992-01-01
Examining for a possible limb length discrepancy is an important part of the podiatric biomechanical examination. The authors present a review of the literature pertaining to the definition of and examination for a limb length discrepancy. They present a typical rationale for lift therapy in the treatment of this pathology.
Trinkaus, Erik; Holliday, Trenton W.; Auerbach, Benjamin M.
2014-01-01
The Late Pleistocene archaic humans from western Eurasia (the Neandertals) have been described for a century as exhibiting absolutely and relatively long clavicles. This aspect of their body proportions has been used to distinguish them from modern humans, invoked to account for other aspects of their anatomy and genetics, used in assessments of their phylogenetic polarities, and used as evidence for Late Pleistocene population relationships. However, it has been unclear whether the usual scaling of Neandertal clavicular lengths to their associated humeral lengths reflects long clavicles, short humeri, or both. Neandertal clavicle lengths, along with those of early modern humans and latitudinally diverse recent humans, were compared with both humeral lengths and estimated body masses (based on femoral head diameters). The Neandertals do have long clavicles relative to their humeri, even though they fall within the ranges of variation of early and recent humans. However, when scaled to body masses, their humeral lengths are relatively short, and their clavicular lengths are indistinguishable from those of Late Pleistocene and recent modern humans. The few sufficiently complete Early Pleistocene Homo clavicles seem to have relative lengths also well within recent human variation. Therefore, appropriately scaled clavicular length seems to have varied little through the genus Homo, and it should not be used to account for other aspects of Neandertal biology or their phylogenetic status. PMID:24616525
NASA Astrophysics Data System (ADS)
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
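A sketch of the power-law step (our simplified illustration of the idea behind extrap, with hypothetical helper names; the production tool combines normalized profiles from all transects and applies empirical selection criteria on top of this):

```python
import numpy as np

def fit_power_exponent(z_norm, v_norm):
    """Fit the power velocity-distribution law v = a * z**b to a
    normalized mean profile (z_norm: height above bed / depth,
    v_norm: normalized velocity) by least squares in log space."""
    b, log_a = np.polyfit(np.log(z_norm), np.log(v_norm), 1)
    return np.exp(log_a), b

def unmeasured_fraction(b, z_lo, z_hi):
    """Fraction of the depth-integrated flow per unit width that lies
    outside the measured band [z_lo, z_hi] (normalized depths), assuming
    the fitted power law holds over the whole depth."""
    total = 1.0 / (b + 1.0)  # integral of z**b over [0, 1]
    measured = (z_hi ** (b + 1.0) - z_lo ** (b + 1.0)) / (b + 1.0)
    return (total - measured) / total
```

The classic default exponent for this law is 1/6; extrap's contribution is deriving the exponent from the measured cross-section data instead of assuming it.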
The Extrapolation of High Altitude Solar Cell I(V) Characteristics to AM0
NASA Technical Reports Server (NTRS)
Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Reinke, William; Blankenship, Kurt; Demers, James
2007-01-01
The high altitude aircraft method has been used at NASA GRC since the early 1960's to calibrate solar cell short circuit current, ISC, to Air Mass Zero (AM0). This method extrapolates ISC to AM0 via the Langley plot method, a logarithmic extrapolation to zero air mass, and includes corrections for the varying Earth-Sun distance to 1.0 AU and compensation for the non-uniform ozone distribution in the atmosphere. However, other characteristics of the solar cell I(V) curve do not extrapolate in the same way. Another approach is needed to extrapolate VOC and the maximum power point (PMAX) to AM0 illumination. As part of the high altitude aircraft method, VOC and PMAX can be obtained as ISC changes during the flight. These values can then be extrapolated, sometimes interpolated, to the ISC(AM0) value. This approach should be valid as long as the shape of the solar spectrum in the stratosphere does not change too much from AM0. As a feasibility check, the results are compared to AM0 I(V) curves obtained using the NASA GRC X25-based multi-source simulator. This paper investigates the approach on both multi-junction solar cells and sub-cells.
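The Langley-plot step can be sketched as a straight-line fit of ln(ISC) versus relative air mass, extrapolated to zero air mass (a generic formulation; the Earth-Sun distance and ozone corrections described above are omitted):

```python
import math

def langley_extrapolate(air_mass, isc):
    """Langley plot: over a high-altitude flight, ln(Isc) is nearly
    linear in relative air mass, so a least-squares line extrapolated
    to zero air mass estimates Isc at AM0. Illustrative sketch only."""
    n = len(air_mass)
    y = [math.log(i) for i in isc]
    mx = sum(air_mass) / n
    my = sum(y) / n
    slope = (sum((x - mx) * (v - my) for x, v in zip(air_mass, y))
             / sum((x - mx) ** 2 for x in air_mass))
    intercept = my - slope * mx
    return math.exp(intercept)  # Isc extrapolated to air mass zero
```

The same fitted flight data also supply the (VOC, PMAX) pairs at each ISC level, which is what the interpolation/extrapolation to ISC(AM0) in the abstract operates on.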
Efficiency improved scalar wave low-rank extrapolation with an effective perfectly matched layer
NASA Astrophysics Data System (ADS)
Chen, Hanming; Zhou, Hui; Xia, Muming
2017-02-01
Low-rank extrapolation is a relatively new method for seismic wave simulation. However, the low-rank method involved requires several fast Fourier transforms (FFTs) per time step, and the number of FFTs increases with the time-stepping size and complexity of the model, which leads to high computational cost at each step. To reduce the cost per time step, a more efficient low-rank extrapolation scheme is presented by splitting the original wave propagator into two parts. The first part represents the traditional pseudo-spectral operator, and is calculated by FFT directly. The residual part compensates the time-stepping error, and is approximated by low-rank decomposition. Compared with the conventional low-rank extrapolation scheme, the improved extrapolation scheme enables using a lower rank for the decomposition to attain similar approximation accuracy, which reduces the number of floating-point operations per time step, and thus reduces the total computational cost. To avoid the wraparound effect caused by FFTs, we develop an effective split perfectly matched layer (PML) to absorb outgoing waves near the boundary. Numerical examples verify the accuracy of the developed low-rank extrapolation scheme and the effectiveness of the PML.
Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad
2012-10-01
Hot-wire spirometer is a kind of constant temperature anemometer (CTA). The working principle of a CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to the fluid being measured. The calibration curve of a CTA is nonlinear and cannot be easily extrapolated beyond its calibration range. Therefore, a method for extrapolation of the CTA calibration curve is of great practical value. In this paper, a novel approach based on the conventional neural network and self-organizing map (SOM) method has been proposed to extrapolate the CTA calibration curve for measurement of velocity in the range 0.7-30 m/s. Results show that, using this approach for the extrapolation of the CTA calibration curve beyond its upper limit, the standard deviation is about -0.5%, which is acceptable in most cases. Moreover, this approach for the extrapolation of the CTA calibration curve below its lower limit produces a standard deviation of about 4.5%, which is acceptable in spirometry applications. Finally, the standard deviation over the whole measurement range (0.7-30 m/s) is about 1.5%.
Choice of order and extrapolation method in Aarseth-type N-body algorithms
NASA Astrophysics Data System (ADS)
Press, William H.; Spergel, David N.
1988-02-01
The force-versus-time history of a typical particle in a 50-body King model is taken as input data, and its 'extrapolatability' is measured. Extrapolatability means how far the force can be extrapolated, measured in units of a locally defined rate-of-change time scale, and still be within a specified fractional accuracy of the true values. Greater extrapolatability means larger step size, hence greater efficiency, in an Aarseth-type N-body code. Extrapolatability is found to depend systematically on the order of the extrapolation method, but it goes to a finite limit in the limit of large order. A formula for choosing the optimal (most efficient) order for any desired accuracy is given; higher orders than are presently in use are indicated. Neither rational function extrapolation nor a somewhat vector-regularized polynomial method is found to be systematically better than component-wise polynomial extrapolation, indicating that extrapolatability can be viewed as an intrinsic property of the underlying N-body forces, independent of the extrapolation method.
Extrapolation occurs in multiple object tracking when eye movements are controlled.
Luu, Tina; Howe, Piers D L
2015-08-01
There is much debate regarding the types of information observers use to track moving objects. Howe and Holcombe (Journal of Vision 12(13): 1-10, 2012) recently reported evidence that observers employ extrapolation while tracking. However, their study is potentially confounded because it did not control for eye movements. As eye movements can aid extrapolation, it is unclear whether extrapolation can still occur in multiple object tracking (MOT) when eye movements are eliminated. In the current study, we addressed this question using an eye tracker to ensure that fixation was always maintained on a central fixation point while observers performed a tracking task. In the predictable condition, objects always travelled along linear paths. In the unpredictable condition, objects randomly changed direction every 300-600 ms. If observers employ extrapolation, we would expect performance to be greater in the former condition than in the latter condition. Our results showed that observers did indeed perform better in the predictable condition than in the unpredictable condition, at least when tracking just two objects (Experiments 1, 3, and 4). Extrapolation occurred less when tracking loads increased or when the objects moved more slowly (Experiment 2).
Larsen, Ross E.
2016-04-12
In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
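To illustrate the spirit of such effective-Hamiltonian extrapolations, here is a deliberately tiny one-orbital-per-repeat-unit tight-binding chain parameterized from "monomer" and "dimer" frontier-orbital energies; this toy is our simplified stand-in, not the FFOE models of the paper:

```python
import math

def chain_homo(monomer_homo, dimer_homo, n_units):
    """Toy Hueckel-type chain: the monomer HOMO fixes the site energy
    alpha; the dimer HOMO fixes the coupling beta via E_dimer = alpha
    + beta. The n-unit HOMO is the top of the resulting chain band,
        E_HOMO(n) = alpha + 2*beta*cos(pi/(n+1)),
    with polymer limit alpha + 2*beta as n -> infinity."""
    alpha = monomer_homo
    beta = dimer_homo - alpha
    if n_units == float("inf"):
        return alpha + 2.0 * beta
    return alpha + 2.0 * beta * math.cos(math.pi / (n_units + 1.0))
```

The FFOE idea is exactly this workflow at a rigorous level: fit a small effective Hamiltonian to low-oligomer electronic-structure results, then evaluate it at arbitrary chain length instead of running ever-larger DFT calculations.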
Animal extrapolation in preclinical studies: An analysis of the tragic case of TGN1412.
Lemoine, Maël
2017-02-01
According to the received view, the "transportation view", animal extrapolation consists in the inductive prediction of the outcome of a mechanism in a target organism, based on an analogous mechanism in a model organism. Through an analysis of the failure of preclinical studies of TGN1412, an innovative drug, to predict the tragic consequences of its first-in-man trial in 2006, the received view is challenged by a proposed alternative, the "chimera view". According to this view, animal extrapolation is based on a hypothesis about how human organisms work, supported by the amalgamation of results drawn from various experimental organisms, and predicts only the 'predictive grid', that is, a global framework of the effects to be expected.
Extrapolation method in the Monte Carlo Shell Model and its applications
Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio
2011-05-06
We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking ^{56}Ni in the pf shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as ^{72}Ge in the f5pg9 shell. The structure of ^{72}Se is also studied, including a discussion of the shape-coexistence phenomenon.
NASA Astrophysics Data System (ADS)
Zhao, Yi-Gong; Corsini, G.; Dalle Mese, E.
The method of extrapolating frequency data based on the finite-size property of the Gerchberg-Papoulis algorithm is used to address the problem of radar image enhancement. The rate of convergence of the algorithm and the behavior of noise-affected data are discussed. Simulation results show that the convergence rate can be very slow, depending on the ratio of the amount of extrapolated data to that of observed data. This behavior is due to eigenvalues of the system matrix being close to 1.
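The Gerchberg-Papoulis iteration alternates between enforcing a known band limit in the frequency domain and restoring the observed samples in the signal domain. A toy sketch (synthetic band-limited signal, not radar data; the band limit and observation window are assumptions of the example):

```python
import numpy as np

# Gerchberg-Papoulis extrapolation (illustrative sketch): alternate between
# projecting onto the known frequency support and re-imposing the observed
# samples; the unobserved tail is gradually filled in.
def gerchberg_papoulis(observed, obs_mask, band_mask, n_iter=300):
    x = observed.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band_mask] = 0.0               # project onto the band limit
        x = np.fft.ifft(X).real
        x[obs_mask] = observed[obs_mask]  # restore the observed data
    return x

n = 128
t = np.arange(n)
true = np.cos(2 * np.pi * 3 * t / n)   # band-limited toy signal
obs_mask = t < 80                      # only the first 80 samples observed
k = np.fft.fftfreq(n) * n              # integer frequency indices
band_mask = np.abs(k) <= 5             # assumed known band limit
observed = np.where(obs_mask, true, 0.0)
x = gerchberg_papoulis(observed, obs_mask, band_mask)
err = np.linalg.norm(x[~obs_mask] - true[~obs_mask])
err0 = np.linalg.norm(true[~obs_mask])  # error of the trivial zero guess
```

As the abstract notes, the convergence rate worsens as the extrapolated portion grows relative to the observed portion; in this example the observed window is generous, so the error drops well below the zero-guess baseline.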
Windtunnel Rebuilding And Extrapolation To Flight At Transonic Speed For ExoMars
NASA Astrophysics Data System (ADS)
Fertig, Markus; Neeb, Dominik; Gulhan, Ali
2011-05-01
The static as well as the dynamic behaviour of the ExoMars vehicle in the transonic velocity regime was investigated experimentally by the Supersonic and Hypersonic Technology Department of DLR in order to characterize the behaviour prior to parachute opening. Since the experimental work was performed in air, a numerical extrapolation to flight by means of CFD is necessary. At low supersonic speed this extrapolation to flight was performed by the Spacecraft Department of the Institute of Flow Technology of DLR employing the CFD code TAU. Numerical as well as experimental results for the wind tunnel test at Mach 1.2 are compared and discussed for three different angles of attack.
Coefficients of Effective Length.
ERIC Educational Resources Information Center
Edwards, Roger H.
1981-01-01
Under certain conditions, a validity Coefficient of Effective Length (CEL) can produce highly misleading results. A modified coefficient is suggested for use when empirical studies indicate that underlying assumptions have been violated. (Author/BW)
Myofilament length dependent activation.
de Tombe, Pieter P; Mateja, Ryan D; Tachampa, Kittipong; Ait Mou, Younss; Farman, Gerrie P; Irving, Thomas C
2010-05-01
The Frank-Starling law of the heart describes the interrelationship between end-diastolic volume and cardiac ejection volume, a regulatory system that operates on a beat-to-beat basis. The main cellular mechanism that underlies this phenomenon is an increase in the responsiveness of cardiac myofilaments to activating Ca(2+) ions at a longer sarcomere length, commonly referred to as myofilament length-dependent activation. This review focuses on what molecular mechanisms may underlie myofilament length dependency. Specifically, the roles of inter-filament spacing, thick and thin filament based regulation, as well as sarcomeric regulatory proteins are discussed. Although the "Frank-Starling law of the heart" constitutes a fundamental cardiac property that has been appreciated for well over a century, it is still not known in muscle how the contractile apparatus transduces the information concerning sarcomere length to modulate ventricular pressure development.
ERIC Educational Resources Information Center
Martins, Roberto de A.
1978-01-01
Describes a thought experiment using a general analysis approach with Lorentz transformations to show that the apparent self-contradictions of special relativity concerning the length paradox are really non-existent. (GA)
Fezza, John P; Massry, Guy
2015-08-01
A numerical measurement of the length of the lower eyelid is valuable in understanding the aging process of the lower lid. This study recorded multiple values for the lower lid length to provide average values in each age group. This measurement will allow surgeons to better assess and treat the lower lid. Female patients were studied in age groups every decade starting in the 20- to 29-year-old group and ending in the 90- to 99-year-old group. Twenty patients were assessed in each age group for a total of 160 patients. In each age group, an average measurement was recorded for the lower lid length. The lid length average was 10.4 mm in the 20- to 29-year-old group and increased to 18.6 mm in the 90- to 99-year-old group. A steady increase in lower lid measurements numerically confirms that lower lid length increases with age. For each decade, there was an almost linear increase in lower lid length, with the greatest increase in the 40- to 49-year-old group. This study numerically confirmed that the lower eyelid length vertically increases with age. Documenting that the lower lid does lengthen every decade of life and obtaining average numerical values of lower lid length allows physicians insight into the expected aging changes and typical amount of lower lid lengthening at each decade. This also provides blepharoplasty surgeons another tool to more accurately define the aging process and creates a baseline and a potential goal in restoring a more youthful lid.
Gustafson, David H.
1968-01-01
Five methodologies for predicting hospital length of stay were developed and compared. Two—a subjective Bayesian forecaster and a regression forecaster—also measured the relative importance of the symptomatic and demographic factors in predicting length of stay. The performance of the methodologies was evaluated with several criteria of effectiveness and one of cost. The results should provide encouragement for those interested in computer applications to utilization review and to scheduling inpatient admissions. PMID:5673664
Sprouse, Gene D.
2011-07-15
Technological changes have moved publishing to electronic-first publication where the print version has been relegated to simply another display mode. Distribution in HTML and EPUB formats, for example, changes the reading environment and reduces the need for strict pagination. Therefore, in an effort to streamline the calculation of length, the APS journals will no longer use the printed page as the determining factor for length. Instead the journals will now use word counts (or word equivalents for tables, figures, and equations) to establish length; for details please see http://publish.aps.org/authors/length-guide. The title, byline, abstract, acknowledgment, and references will not be included in these counts allowing authors the freedom to appropriately credit coworkers, funding sources, and the previous literature, bringing all relevant references to the attention of readers. This new method for determining length will be easier for authors to calculate in advance, and lead to fewer length-associated revisions in proof, yet still retain the quality of concise communication that is a virtue of short papers.
Observations on oesophageal length.
Kalloor, G J; Deshpande, A H; Collis, J L
1976-01-01
The subject of oesophageal length is discussed. The great variation in the length of the oesophagus in individual patients is noted, and the practical use of its recognition in oesophageal surgery is stressed. An appraisal of the various methods available for this measurement is made; these include the use of external chest measurement, endoscopic measurement, and measurement of the level of the electrical mucosal potential change. Correlative studies of these various methods are made, and these show a very high degree of significance. These studies involved simultaneous measurement of external and internal oesophageal length in 26 patients without a hiatal hernia or gastro-oesophageal reflux symptoms, 42 patients with a sliding type hiatal hernia, and 17 patients with a peptic stricture in association with hiatal hernia. The method of measuring oesophageal length by use of the external chest measurement, that is, the distance between the lower incisor teeth and the xiphisternum, measured with the neck fully extended and the patient lying supine, is described in detail; its practical application in oesophageal surgery is illustrated, and its validity is tested by internal measurements. The findings of this study demonstrate that the external chest measurement provides a means of assessing the true static length of the oesophagus, corrected for the size of the individual. PMID:941114
USDA-ARS?s Scientific Manuscript database
In this study, six extrapolation methods have been compared for their ability to estimate daily crop evapotranspiration (ETd) from instantaneous latent heat flux estimates derived from digital airborne multispectral remote sensing imagery. Data used in this study were collected during an experiment...
Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)
1998-01-01
The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
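The "variable alpha" idea mentioned above can be illustrated with a three-point exponential extrapolation, E(n) = E_CBS + A·exp(-αn), which has a closed-form solution from energies at three consecutive cardinal numbers. The energies below are made-up numbers, not the SO/SO2 values from the paper.

```python
import math

# Three-point exponential basis-set extrapolation (generic sketch):
# given energies e1, e2, e3 at consecutive cardinal numbers n, n+1, n+2,
# the model E(n) = E_cbs + A*exp(-alpha*n) gives the ratio of successive
# differences r = exp(-alpha), from which E_cbs follows directly.
def cbs_exponential(e1, e2, e3):
    r = (e2 - e3) / (e1 - e2)           # r = exp(-alpha)
    alpha = -math.log(r)
    e_cbs = e3 - (e2 - e3) * r / (1.0 - r)
    return e_cbs, alpha

# synthetic series built with E_cbs = -100.0 hartree, A = 0.5, alpha = 1.3
energies = [-100.0 + 0.5 * math.exp(-1.3 * n) for n in (3, 4, 5)]
e_cbs, alpha = cbs_exponential(*energies)
print(round(e_cbs, 6), round(alpha, 6))  # → -100.0 1.3
```

The abstract's reliability criterion maps onto this sketch directly: a fitted alpha well outside the plausible window (here, larger than 4.5 or smaller than 3) is a warning that the extrapolated limit should not be trusted.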
Buckler, Denny R., Foster L. Mayer, Mark R. Ellersieck and Amha Asfaw. 2003. Evaluation of Minimum Data Requirements for Acute Toxicity Value Extrapolation with Aquatic Organisms. EPA/600/R-03/104. U.S. Environmental Protection Agency, National Health and Environmental Effects Re...
Extrapolating intensified forest inventory data to the surrounding landscape using landsat
Evan B. Brooks; John W. Coulston; Valerie A. Thomas; Randolph H. Wynne
2015-01-01
In 2011, a collection of spatially intensified plots was established on three of the Experimental Forests and Ranges (EFRs) sites with the intent of facilitating FIA program objectives for regional extrapolation. Characteristic coefficients from harmonic regression (HR) analysis of associated Landsat stacks are used as inputs into a conditional random forests model to...
Route-to-route extrapolation of the toxic potency of MTBE.
Dourson, M L; Felter, S P
1997-12-01
MTBE is a volatile organic compound used as an oxygenating agent in gasoline. Inhalation of fumes while refueling automobiles is the principal route of exposure for humans, and toxicity by this route has been well studied. Oral exposures to MTBE exist as well, primarily due to groundwater contamination from leaking stationary sources, such as underground storage tanks. Assessing the potential public health impacts of oral exposures to MTBE is problematic because drinking water studies do not exist for MTBE, and the few oil-gavage studies from which a risk assessment could be derived are limited. This paper evaluates the suitability of the MTBE database for conducting an inhalation route-to-oral route extrapolation of toxicity. This includes evaluating the similarity of critical effect between these two routes, quantifiable differences in absorption, distribution, metabolism, and excretion, and the sufficiency of toxicity data by the inhalation route. We conclude that such an extrapolation is appropriate and have validated it by finding comparable toxicity between a subchronic oral gavage bioassay and oral doses we extrapolate from a subchronic inhalation bioassay. Our results are extended to the 2-year inhalation toxicity study by Chun et al. (1992), in which rats were exposed to 0, 400, 3000, or 8000 ppm MTBE for 6 hr/d, 5 d/wk. We have estimated the equivalent oral doses to be 0, 130, 940, or 2700 mg/kg/d. These equivalent doses may be useful in conducting noncancer and cancer risk assessments.
NASA Astrophysics Data System (ADS)
Lutz, Jesse J.; Piecuch, Piotr
2008-04-01
The recently proposed potential energy surface (PES) extrapolation scheme, which predicts smooth molecular PESs corresponding to larger basis sets from the relatively inexpensive calculations using smaller basis sets by scaling electron correlation energies [A. J. C. Varandas and P. Piecuch, Chem. Phys. Lett. 430, 448 (2006)], is applied to the PESs associated with the conrotatory and disrotatory isomerization pathways of bicyclo[1.1.0]butane to buta-1,3-diene. The relevant electronic structure calculations are performed using the completely renormalized coupled-cluster method with singly and doubly excited clusters and a noniterative treatment of connected triply excited clusters, termed CR-CC(2,3), which is known to provide a highly accurate description of chemical reaction profiles involving biradical transition states and intermediates. A comparison with the explicit CR-CC(2,3) calculations using the large correlation-consistent basis set of the cc-pVQZ quality shows that the cc-pVQZ PESs obtained by the extrapolation from the smaller basis set calculations employing the cc-pVDZ and cc-pVTZ basis sets are practically identical, to within fractions of a millihartree, to the true cc-pVQZ PESs. It is also demonstrated that one can use a similar extrapolation procedure to accurately predict the complete basis set (CBS) limits of the calculated PESs from the results of smaller basis set calculations at a fraction of the effort required by the conventional pointwise CBS extrapolations.
Monte Carlo analysis: error of extrapolated thermal conductivity from molecular dynamics simulations
Liu, Xiang-Yang; Andersson, Anders David
2016-11-07
In this short report, we give an analysis of the extrapolated thermal conductivity of UO2 from earlier molecular dynamics (MD) simulations [1]. Because almost all material properties in fuel performance models, e.g., fission gas release, are functions of temperature, the fuel thermal conductivity is the most important parameter from a model sensitivity perspective [2]. Thus, it is useful to perform such an analysis.
NASA Technical Reports Server (NTRS)
Mendelson, A.; Manson, S. S.
1960-01-01
A method using finite-difference recurrence relations is presented for direct extrapolation of families of curves. The method is illustrated by applications to creep-rupture data for several materials and it is shown that good results can be obtained without the necessity for any of the usual parameter concepts.
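The idea of extrapolating a family of curves via finite-difference recurrence relations can be sketched generically: assume the sampled curve satisfies a constant-coefficient recurrence, estimate the coefficients by least squares, then iterate the recurrence beyond the data. This is an illustrative sketch of the general technique, not the report's exact relations.

```python
import numpy as np

# Curve extrapolation via a fitted finite-difference recurrence (sketch):
# assume y[i+2] ≈ a*y[i+1] + b*y[i], estimate (a, b) by least squares on
# the known samples, then iterate the recurrence beyond the data.
def recurrence_extrapolate(y, n_extra):
    A = np.column_stack([y[1:-1], y[:-2]])
    a, b = np.linalg.lstsq(A, y[2:], rcond=None)[0]
    out = list(y)
    for _ in range(n_extra):
        out.append(a * out[-1] + b * out[-2])
    return np.array(out)

# two-mode decay: y[i] = 3*0.8**i + 1.5*0.5**i obeys y[i+2] = 1.3*y[i+1] - 0.4*y[i]
i = np.arange(8)
y = 3 * 0.8**i + 1.5 * 0.5**i
ext = recurrence_extrapolate(y, 4)
print(np.allclose(ext[-1], 3 * 0.8**11 + 1.5 * 0.5**11))  # → True
```

For noiseless data generated by such a recurrence the extrapolation is exact; with real creep-rupture data the least-squares fit absorbs scatter, and the usefulness depends on how well the recurrence model captures the underlying trend.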
1989-07-21
Keywords: physiologically-based pharmacokinetic model; saturable metabolism; respiratory elimination; halocarbon inhalation exposure; halocarbon oral exposure; interspecies extrapolations; pharmacokinetics; 1,1,1-trichloroethane; 1,1-dichloroethylene. Abstract: In pursuit of the goal of establishing a scientific basis for the interspecies extrapolation of pharmacokinetic
The Thematic-Extrapolation Method: Incorporating Career Patterns into Career Counseling.
ERIC Educational Resources Information Center
Jepsen, David A.
1994-01-01
Focuses on Super's concept of career model, idea that one person's sequence of work positions constitutes whole and unique career. Describes Thematic-Extrapolation Method (TEM), method for predicting career patterns developed by Super in 1954 and summarizes TEM in 3 identifiable steps. Concludes that modified TEM remains promising, but largely…
NASA Technical Reports Server (NTRS)
Kahn, M. M. S.; Cahill, J. F.
1983-01-01
Use of this analytical parameter, it is shown, highlights the distinction between cases which are dominated by trailing-edge separation, and those for which separation at the shock foot is dominant. Use of the analytical parameter and the distinction noted above greatly improves the correlation of separation data and the extrapolation of wind tunnel data to flight conditions.
Feller, David
2013-02-21
Simple modifications of complete basis set extrapolation formulas chosen from the literature are examined with respect to their ability to reproduce a diverse set of 183 reference atomization energies derived primarily from very large basis set, standard frozen-core coupled-cluster singles and doubles plus perturbative triples (CCSD(T)) calculations with the aug-cc-pVnZ basis sets. This reference set was augmented with a few larger chemical systems treated with explicitly correlated CCSD(T)-F12b using a quadruple zeta quality basis set, followed by extrapolation to the complete basis set limit. Tuning the extrapolation formula parameters for the present reference set resulted in substantial reductions in the error metrics. In the case of the best performing approach, the aVnZ extrapolated results are equivalent to or better than results obtained from raw aV(n + 3)Z basis set calculations. To the extent this behavior holds for molecules outside the reference set, it represents an improvement of at least one basis set level over the original formulations and a further significant reduction in the amount of computer time needed to accurately approximate the basis set limit.
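One of the simplest formulas in the family discussed above is the two-point inverse-power extrapolation; the "tuning" the abstract describes amounts to adjusting the exponent (or effective cardinal numbers) against a reference set. A sketch with hypothetical values:

```python
# Two-point complete-basis-set extrapolation (generic sketch): assuming
# E(n) = E_cbs + A / n**p, energies at two cardinal numbers n and m
# determine E_cbs in closed form. p = 3 is the common default; tuned
# variants adjust p to better match reference data.
def cbs_two_point(e_n, e_m, n, m, p=3.0):
    return (n**p * e_n - m**p * e_m) / (n**p - m**p)

# synthetic check: E(n) = -76.2 + 0.9 / n**3 at n = 3, 4
e3 = -76.2 + 0.9 / 27.0
e4 = -76.2 + 0.9 / 64.0
print(round(cbs_two_point(e4, e3, 4, 3), 6))  # → -76.2
```

Because the formula is exact for data that follow the assumed inverse-power form, its practical accuracy hinges on how closely the correlation energy's basis-set convergence matches that form, which is precisely what tuning against a large reference set probes.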
Inter-species extrapolation of pharmacokinetic data of three prostacyclin-mimetics.
Hildebrand, M
1994-11-01
Cica-, eptalo- and iloprost are chemically and metabolically stabilized derivatives of prostacyclin which maintain the pharmacodynamic profile of the endogenous precursor. While iloprost is still subject to beta-oxidative degradation of the upper side chain, cicaprost is highly metabolically stable. Eptaloprost was synthesized to realize the pro-drug concept in PGI2-mimetics and was designed to be activated to cicaprost by a single beta-oxidation. All three prostacyclin-mimetics were studied in various animal species (mouse, rat, rabbit, monkey, dog and pig) and in man to determine their pharmacokinetic profiles. Based upon these data, it was of interest whether an inter-species extrapolation of pharmacokinetic parameters can be performed to show the predictive value of animal experimentation. Allometric inter-species extrapolation is performed by modelling pharmacokinetic data (Y) as power functions of species characteristics (e.g. body weight, W): Y = aW^x. For total clearance and volumes of distribution at steady state, a clear-cut correlation with x-values of 0.6-0.8 and 1.0-1.1 could be shown for all three compounds. For cicaprost, which was excreted unchanged in several species, renal and non-renal clearance was also mathematically scalable. Due to the use of different compartment models to describe plasma disposition, different sets of half-life data were obtained and could not be extrapolated reasonably. However, mean residence time showed a dependency on body weight with an exponent of 0.25. In the case of cicaprost, only the dog, which extensively metabolizes the compound, could not be enrolled in the inter-species extrapolation. Excretion half-lives or residence times did not show a significant correlation to body weight or maximum lifespan potential. The present inter-species extrapolation showed a dependency on species body weight for model-independent pharmacokinetic data, e.g. clearance, volume of distribution at steady state and
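The allometric relation Y = aW^x in the abstract is fitted by linear regression in log-log space. The species weights and clearances below are invented for illustration; only the form of the calculation reflects the abstract.

```python
import numpy as np

# Allometric inter-species scaling (sketch): fit Y = a * W**x by linear
# regression of log(Y) on log(W) across species, recovering a and x.
def fit_allometric(weights_kg, values):
    x, log_a = np.polyfit(np.log(weights_kg), np.log(values), 1)
    return np.exp(log_a), x

# synthetic clearance data generated with a = 5.0, x = 0.75 (hypothetical)
W = np.array([0.02, 0.25, 2.5, 12.0, 70.0])   # mouse .. human, kg
CL = 5.0 * W**0.75
a, x = fit_allometric(W, CL)
print(round(a, 3), round(x, 3))  # → 5.0 0.75
```

An exponent near 0.6-0.8 for clearance and near 1.0 for volume of distribution, as reported in the abstract, is what such a fit would typically return for scalable compounds.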
Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian
2014-07-01
Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species' evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals ("MammalDIET"). Diet information was digitized from two global and cladewide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill-in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external
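The hierarchical fallback described above (species, then genus, then family level) can be sketched as a simple lookup chain; the taxa and diet labels below are made up for the example, not MammalDIET entries.

```python
# Hierarchical trait extrapolation (illustrative sketch): when a species
# lacks diet data, fall back to genus-level, then family-level information,
# recording the taxonomic level the filled-in value came from.
def fill_diet(species, species_diet, taxonomy, genus_diet, family_diet):
    if species in species_diet:
        return species_diet[species], "species"
    genus, family = taxonomy[species]
    if genus in genus_diet:
        return genus_diet[genus], "genus"
    if family in family_diet:
        return family_diet[family], "family"
    return None, "missing"

taxonomy = {"Panthera leo": ("Panthera", "Felidae"),
            "Panthera uncia": ("Panthera", "Felidae")}
species_diet = {"Panthera leo": "carnivore"}
genus_diet = {"Panthera": "carnivore"}
print(fill_diet("Panthera uncia", species_diet, taxonomy, genus_diet, {}))
# → ('carnivore', 'genus')
```

Tracking the source level, as in the second return value, is what makes the internal (jack-knife) and external validations described in the abstract possible.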
Relativistic Length Agony Continued
NASA Astrophysics Data System (ADS)
Redzic, D. V.
2014-06-01
We made an attempt to remedy recent confusing treatments of some basic relativistic concepts and results. Following the argument presented in an earlier paper (Redzic 2008b), we discussed the misconceptions that are recurrent points in the literature devoted to teaching relativity such as: there is no change in the object in Special Relativity, illusory character of relativistic length contraction, stresses and strains induced by Lorentz contraction, and related issues. We gave several examples of the traps of everyday language that lurk in Special Relativity. To remove a possible conceptual and terminological muddle, we made a distinction between the relativistic length reduction and relativistic FitzGerald-Lorentz contraction, corresponding to a passive and an active aspect of length contraction, respectively; we pointed out that both aspects have fundamental dynamical contents. As an illustration of our considerations, we discussed briefly the Dewan-Beran-Bell spaceship paradox and the 'pole in a barn' paradox.
Upper Extremity Length Equalization
DeCoster, Thomas A.; Ritterbusch, John; Crawford, Mark
1992-01-01
Significant upper extremity length inequality is uncommon but can cause major functional problems. The ability to position and use the hand may be impaired by shortness of any of the long bones of the upper extremity. In many respects upper and lower extremity length problems are similar. They most commonly occur after injury to a growing bone and the treatment modalities utilized in the lower extremity may be applied to the upper extremity. These treatment options include epiphysiodesis, shortening osteotomy, angulatory correction osteotomy and lengthening. This report reviews the literature relative to upper extremity length inequality and equalization and presents an algorithm for evaluation and planning appropriate treatment for patients with this condition. This algorithm is illustrated by two clinical cases of posttraumatic shortness of the radius which were effectively treated.
NASA Astrophysics Data System (ADS)
Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr
2017-02-01
An evaluation of convective cloud forecasts performed with the numerical weather prediction (NWP) model COSMO and with extrapolation of cloud fields is presented, using observed data derived from the geostationary satellite Meteosat Second Generation (MSG). The present study focuses on the nowcasting range (1-5 h) for five severe convective storms in their developing stage that occurred during the warm season in the years 2012-2013. Radar reflectivity and extrapolated radar reflectivity data were assimilated for at least 6 h, depending on the time of occurrence of convection. Synthetic satellite imageries were calculated using the radiative transfer model RTTOV v10.2, which was implemented into the COSMO model. NWP model simulations of IR10.8 μm and WV06.2 μm brightness temperatures (BTs) with a horizontal resolution of 2.8 km were interpolated into the satellite projection and objectively verified against observations using Root Mean Square Error (RMSE), correlation coefficient (CORR) and Fractions Skill Score (FSS) values. Naturally, the extrapolation of cloud fields yielded an approximately 25% lower RMSE, 20% higher CORR and 15% higher FSS at the beginning of the second forecasted hour compared to the NWP model forecasts. On the other hand, comparable scores were observed for the third hour, whereas the NWP forecasts outperformed the extrapolation by 10% for RMSE, 15% for CORR and up to 15% for FSS during the fourth forecasted hour, and by 15% for RMSE, 27% for CORR and up to 15% for FSS during the fifth forecasted hour. The analysis was completed by a verification of the precipitation forecasts, which yielded approximately 8% higher RMSE, 15% higher CORR and up to 45% higher FSS when the NWP model simulation was used compared to the extrapolation for the first hour. Both methods yielded an unsatisfactory level of precipitation forecast accuracy from the fourth forecasted hour onward.
Improved short range forecasting by blending techniques using extrapolation and NWP model forecasts
NASA Astrophysics Data System (ADS)
Jang, M.; Jee, J. B.; Kim, S.; Park, J. G.
2016-12-01
Nowcasting and short range forecasting rely more and more on "blending" techniques that combine several data sources (both in situ and remote sensing observations, NWP output, model output statistics, high resolution topography, etc.) in a seamless way using lead-time-dependent weights. Such nowcasting techniques blend extrapolation-based forecasts with numerical weather prediction (NWP)-based forecasts, heavily weighting the extrapolation forecasts at 0-3 h lead times and transitioning emphasis to the NWP-based forecasts at later lead times. The Korea Meteorological Administration (KMA) employs NOAA's Local Analysis and Prediction System (LAPS), called KLAPS, which provides the hot-start initial condition to the very short-range forecasting system called the advanced storm-scale analysis and prediction system (ASAPS), based on the Weather Research and Forecasting (WRF) model. MAPLE (McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation) uses radar composite maps to predict the location of precipitation echoes several hours in advance (up to 6 hours) using the variational echo tracking method and a semi-Lagrangian backward advection technique. This system has been operating in real time since June 2008, its output being used operationally by KMA's weather forecasters and hydrologists. The spatial resolution of both products is 1 km. The purpose of this study is to improve the accuracy of short range forecasting using merging methods (distance- and similarity-based) between the radar-based extrapolation forecast (MAPLE) and the precipitation forecast from the NWP model (ASAPS). In this study, a new approach that applies different weights to blend extrapolation and model forecasts based on intensities and forecast times is applied and tested. As a result, the potential to improve very short range forecasts was confirmed.
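A lead-time-dependent blend of the kind described can be sketched with a simple linear ramp; the ramp endpoints and values below are assumptions of the example, not the operational KMA weights.

```python
# Blend an extrapolation-based forecast with an NWP-based forecast using a
# lead-time-dependent weight: full weight on extrapolation up to ramp_start
# hours, full weight on NWP beyond ramp_end, and a linear ramp in between.
def blend(extrap, nwp, lead_hours, ramp_start=1.0, ramp_end=6.0):
    if lead_hours <= ramp_start:
        w = 1.0
    elif lead_hours >= ramp_end:
        w = 0.0
    else:
        w = (ramp_end - lead_hours) / (ramp_end - ramp_start)
    return w * extrap + (1.0 - w) * nwp

print(blend(10.0, 2.0, 0.5))   # → 10.0 (pure extrapolation)
print(blend(10.0, 2.0, 3.5))   # → 6.0  (equal weights at mid-ramp)
print(blend(10.0, 2.0, 6.0))   # → 2.0  (pure NWP)
```

The study's refinement, making the weights depend on precipitation intensity as well as forecast time, would replace the scalar ramp here with a weight function of both variables.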
To scale or not to scale: the principles of dose extrapolation
Sharma, Vijay; McNeill, John H
2009-01-01
The principles of inter-species dose extrapolation are poorly understood and applied. We provide an overview of the principles underlying dose scaling for size and dose adjustment for size-independent differences. Scaling of a dose is required in three main situations: the anticipation of first-in-human doses for clinical trials, dose extrapolation in veterinary practice and dose extrapolation for experimental purposes. Each of these situations is discussed. Allometric scaling of drug doses is commonly used for practical reasons, but can be made more accurate by taking into account species differences in pharmacokinetic parameters (clearance, volume of distribution). Simple scaling of drug doses can be misleading for some drugs; correction for protein binding, physicochemical properties of the drug or species differences in physiological time can improve scaling. However, differences in drug transport and metabolism, and in the dose–response relationship, can override the effect of size alone. For this reason, a range of modelling approaches have been developed, which combine in silico simulations with data obtained in vitro and/or in vivo. Drugs that are unlikely to be amenable to simple allometric scaling of their clearance or dose include drugs that are highly protein-bound, drugs that undergo extensive metabolism and active transport, drugs that undergo significant biliary excretion (MW > 500, amphiphilic, conjugated), drugs whose targets are subject to inter-species differences in expression, affinity and distribution and drugs that undergo extensive renal secretion. In addition to inter-species dose extrapolation, we provide an overview of dose extrapolation within species, discussing drug dosing in paediatrics and in the elderly. PMID:19508398
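As a deliberately simplified illustration of the allometric scaling the abstract discusses, the sketch below holds dose/W^b constant between species. The 0.75 exponent and the example weights are assumptions (2/3 is another commonly argued choice), and the abstract's caveats (protein binding, transport, metabolism) are exactly the factors such a one-line rule ignores.

```python
def scale_dose(dose_mg, weight_from_kg, weight_to_kg, exponent=0.75):
    """Allometric scaling: total dose assumed proportional to body
    weight raised to `exponent` (3/4-power metabolic scaling here;
    the exponent itself is debated, as the review notes)."""
    return dose_mg * (weight_to_kg / weight_from_kg) ** exponent

# Hypothetical example: a 10 mg dose in a 0.25 kg rat scales to far
# less for a 70 kg human than the naive 40 mg/kg rule would suggest.
human_dose = scale_dose(10.0, 0.25, 70.0)
per_kg_rat = 10.0 / 0.25
per_kg_human = human_dose / 70.0
```

The per-kilogram dose falls with body size (it scales as W^-0.25), which is why simple mg/kg extrapolation tends to overdose large species.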
How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis.
Bojke, Laura; Manca, Andrea; Asaria, Miqdad; Mahon, Ronan; Ren, Shijie; Palmer, Stephen
2017-05-03
Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies the follow-up duration of which is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation-to project short-term evidence over a longer time horizon-is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice and the degree of uncertainty surrounding the validity of any assumptions made are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predict long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.
André, M; Malmström, M E; Neretnieks, I
2009-11-03
Permanent storage of spent nuclear fuel in crystalline bedrock is investigated in several countries. For this storage scenario, the host rock is the third and final barrier for radionuclide migration. Sorption reactions in the crystalline rock matrix have strong retardative effects on the transport of radionuclides. To assess the barrier properties of the host rock it is important to have sorption data representative of the undisturbed host rock conditions. Sorption data are, in the majority of reported cases, determined using crushed rock. Crushing has been shown to increase a rock sample's sorption capacity by creating additional surfaces, so the results must be extrapolated to intact rock, and there are several problems with such an extrapolation. In studies where this problem is addressed, simple models relating the specific surface area to the particle size are used to extrapolate experimental data to a value representative of the host rock conditions. In this article, we report and compare surface area data of five size fractions of crushed granite and of 100 mm long drillcores as determined by the Brunauer-Emmett-Teller (BET) method using N(2) gas. Special sample holders that could hold large specimens were developed for the BET measurements. Surface area data on rock samples as large as the drillcores have not previously been published. An analysis of these data shows that the extrapolated value for intact rock obtained from measurements on crushed material was larger than the determined specific surface area of the drillcores, in some cases by more than 1000%. Our results show that the use of data from crushed material and current models to extrapolate specific surface areas for host rock conditions can lead to overestimation of sorption ability. The shortcomings of the extrapolation model are discussed and possible explanations for the deviation from experimental data are proposed.
Levy, Aharon; Cohen, Giora; Gilat, Eran; Kapon, Joseph; Dachir, Shlomit; Abraham, Shlomo; Herskovitz, Miriam; Teitelbaum, Zvi; Raveh, Lily
2007-05-01
The extrapolation from animal data to therapeutic effects in humans, a basic pharmacological issue, is especially critical in studies aimed to estimate the protective efficacy of drugs against nerve agent poisoning. Such efficacy can only be predicted by extrapolation of data from animal studies to humans. In pretreatment therapy against nerve agents, careful dose determination is even more crucial than in antidotal therapy, since excessive doses may lead to adverse effects or performance decrements. The common method of comparing dose per body weight, still used in some studies, may lead to erroneous extrapolation. A different approach is based on the comparison of plasma concentrations at steady state required to obtain a given pharmacodynamic endpoint. In the present study, this approach was applied to predict the prophylactic efficacy of the anticholinergic drug caramiphen in combination with pyridostigmine in man based on animal data. In two species of large animals, dogs and monkeys, similar plasma concentrations of caramiphen (in the range of 60-100 ng/ml) conferred adequate protection against exposure to a lethal dose of sarin (1.6-1.8 LD50). Pharmacokinetic studies at steady state were required to achieve the correlation between caramiphen plasma concentrations and therapeutic effects. Evaluation of total plasma clearance values was instrumental in establishing desirable plasma concentrations and minimizing the number of animals used in the study. Previous data in the literature for plasma levels of caramiphen that do not lead to overt side effects in humans (70-100 ng/ml) enabled extrapolation to expected human protection. The method can be applied to other drugs and other clinical situations, in which human studies are impossible due to ethical considerations. When similar dose response curves are obtained in at least two animal models, the extrapolation to expected therapeutic effects in humans might be considered more reliable.
SU-D-204-02: BED Consistent Extrapolation of Mean Dose Tolerances
Perko, Z; Bortfeld, T; Hong, T; Wolfgang, J; Unkelbach, J
2016-06-15
Purpose: The safe use of radiotherapy requires the knowledge of tolerable organ doses. For experimental fractionation schemes (e.g. hypofractionation) these are typically extrapolated from traditional fractionation schedules using the Biologically Effective Dose (BED) model. This work demonstrates that using the mean dose in the standard BED equation may overestimate tolerances, potentially leading to unsafe treatments. Instead, extrapolation of mean dose tolerances should take the spatial dose distribution into account. Methods: A formula has been derived to extrapolate mean physical dose constraints such that they are mean BED equivalent. This formula constitutes a modified BED equation where the influence of the spatial dose distribution is summarized in a single parameter, the dose shape factor. To quantify effects we analyzed 14 liver cancer patients previously treated with proton therapy in 5 or 15 fractions, for whom also photon IMRT plans were available. Results: Our work has two main implications. First, in typical clinical plans the dose distribution can have significant effects. When mean dose tolerances are extrapolated from standard fractionation towards hypofractionation they can be overestimated by 10–15%. Second, the shape difference between photon and proton dose distributions can cause 30–40% differences in mean physical dose for plans having the same mean BED. The combined effect when extrapolating proton doses to mean BED equivalent photon doses in traditional 35 fraction regimens resulted in up to 7–8 Gy higher doses than when applying the standard BED formula. This can potentially lead to unsafe treatments (in 1 of the 14 analyzed plans the liver mean dose was above its 32 Gy tolerance). Conclusion: The shape effect should be accounted for to avoid unsafe overestimation of mean dose tolerances, particularly when estimating constraints for hypofractionated regimens. In addition, tolerances established for a given treatment modality cannot
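The standard BED equation the abstract refers to, and the iso-effect conversion between fractionation schedules, can be sketched as below. Note this is exactly the point-dose formula the authors argue should not be applied naively to mean doses; their dose shape factor is not included here, and the alpha/beta value in the example is illustrative.

```python
import math

def bed(n, d, alpha_beta):
    """Biologically Effective Dose for n fractions of d Gy:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

def isoeffective_fraction_dose(target_bed, n, alpha_beta):
    """Dose per fraction that reproduces target_bed in n fractions
    (positive root of the BED equation, quadratic in d)."""
    ab = alpha_beta
    return 0.5 * ab * (math.sqrt(1.0 + 4.0 * target_bed / (n * ab)) - 1.0)
```

For example, 30 x 2 Gy at alpha/beta = 3 Gy gives BED = 100 Gy; delivering the same BED in 5 fractions requires roughly 6.4 Gy per fraction.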
Variable focal length microlenses
NASA Astrophysics Data System (ADS)
L. G., Commander; Day, S. E.; Selviah, D. R.
2000-04-01
Refractive surface relief microlenses (150 μm diameter) are immersed in nematic liquid crystal in a cell. Application of a variable voltage across the cell effectively varies the refractive index of the liquid crystal and results in a change of the focal length by the lensmaker's formula (E. Hecht, Optics, 2nd edn., Addison-Wesley, Reading, Massachusetts, 1987, p. 138). We describe the cell design and construction and demonstrate a range of focal lengths from +490 to +1000 μm for 2 to 12 V applied. A diverging lens results when the voltage is lower. Theoretical models are developed to account for some of the observed aberrations.
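The voltage tunability follows directly from the thin-lens lensmaker's equation with the lens immersed in a medium of controllable index. The sketch below uses illustrative stand-in indices and radius, not the paper's cell parameters:

```python
def focal_length_um(n_lens, n_medium, r1_um, r2_um=None):
    """Thin-lens lensmaker's equation in an immersion medium:
    1/f = ((n_lens - n_medium) / n_medium) * (1/R1 - 1/R2).
    r2_um=None means a flat second surface (plano-convex relief lens)."""
    inv_r2 = 0.0 if r2_um is None else 1.0 / r2_um
    power = (n_lens - n_medium) / n_medium * (1.0 / r1_um - inv_r2)
    return 1.0 / power

# Raising the liquid crystal's effective index toward the lens index
# lengthens the focal length; past it, the lens becomes diverging.
```

This reproduces the qualitative behaviour reported: focal length grows as the applied voltage tunes the medium index toward the lens index, and the lens diverges when the medium index exceeds it.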
Wheeler, Matthew W.; Bailer, A. John
2016-01-01
Experiments with relatively high doses are often used to predict risks at appreciably lower doses. A point of departure (PoD) can be calculated as the dose associated with a specified moderate response level that is often in the range of experimental doses considered. A linear extrapolation to lower doses often follows. An alternative to the PoD method is to develop a model that accounts for the model uncertainty in the dose–response relationship and to use this model to estimate the risk at low doses. Two such approaches that account for model uncertainty are model averaging (MA) and semi-parametric methods. We use these methods, along with the PoD approach in the context of a large animal (40,000+ animal) bioassay that exhibited sub-linearity. When models are fit to high dose data and risks at low doses are predicted, the methods that account for model uncertainty produce dose estimates associated with an excess risk that are closer to the observed risk than the PoD linearization. This comparison provides empirical support to accompany previous simulation studies that suggest methods that incorporate model uncertainty provide viable, and arguably preferred, alternatives to linear extrapolation from a PoD. PMID:23831127
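A minimal sketch of the model-averaging idea described above. AIC-based Akaike weights are one common weighting choice; the abstract does not specify which weighting scheme the authors used, so this is an assumption for illustration.

```python
import math

def akaike_weights(aics):
    """Turn AIC values for candidate dose-response models into
    normalized model-averaging weights."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def averaged_risk(risks, aics):
    """Model-averaged excess risk at a given dose: each model's
    prediction weighted by its evidence."""
    return sum(w * r for w, r in zip(akaike_weights(aics), risks))
```

Better-fitting models (lower AIC) dominate the average, so model uncertainty is carried into the low-dose risk estimate rather than discarded by picking a single model.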
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid ager interpretation bias common in scale-based aging.
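Given fitted two-component mixture parameters (e.g. from an EM fit of the fork-length distribution), an age-discriminating threshold can be taken where the weighted component densities cross. The sketch below assumes the mixture parameters are already known; the values in the example are invented for illustration, not the study's estimates.

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def age_threshold(w0, mu0, s0, w1, mu1, s1):
    """Length where the two weighted Gaussian components cross,
    found by bisection between the component means."""
    lo, hi = min(mu0, mu1), max(mu0, mu1)
    diff = lambda x: w0 * normal_pdf(x, mu0, s0) - w1 * normal_pdf(x, mu1, s1)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if diff(lo) * diff(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Fish shorter than the threshold are assigned to the younger cohort, longer to the older; classification error is the overlapping tail mass on the wrong side of the cut.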
1993-08-01
[Fragment of a table of roughness lengths over forested terrain; recoverable entries cite Thom (1972), De Bruin and Moore (1985), "Zero-Plane Displacement and Roughness Length for Tall Vegetation, Derived from a Simple Mass Conservation", and Ming et al. (1983).]
ERIC Educational Resources Information Center
Handley, John C.
1991-01-01
Discussion of sampling methods used in information science research focuses on Fussler's method for sampling catalog cards and on sampling by length. Highlights include simple random sampling, sampling with probability equal to size without replacement, sampling with replacement, and examples of estimating the number of books on shelves in certain…
The K+ K+ scattering length from Lattice QCD
Silas Beane; Thomas Luu; Konstantinos Orginos; Assumpta Parreno; Martin Savage; Aaron Torok; Andre Walker-Loud
2007-09-11
The K+K+ scattering length is calculated in fully-dynamical lattice QCD with domain-wall valence quarks on the MILC asqtad-improved gauge configurations with fourth-rooted staggered sea quarks. Three-flavor mixed-action chiral perturbation theory at next-to-leading order, which includes the leading effects of the finite lattice spacing, is used to extrapolate the results of the lattice calculation to the physical value of mK+/fK+. We find mK+ aK+K+ = −0.352 ± 0.016, where the statistical and systematic errors have been combined in quadrature.
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and True Skill Statistics (TSS), contrasting binary model predictions against temporal-independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal constrained models with no dispersal limited SDMs; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared this between extrapolated and no extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences on range size changes over time, which is the most used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in predictive accuracy of model predictions for those areas. Our results highlight (1) incorporating dispersal processes can improve predictive accuracy of temporal transference of SDMs and reduce uncertainties of extinction risk assessments from global change; (2) as
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...
NASA Astrophysics Data System (ADS)
Maucher, Fabian; Sutcliffe, Paul
2017-07-01
In this paper, we present extensive numerical simulations of an excitable medium to study the long-term dynamics of knotted vortex strings for all torus knots up to crossing number 11. We demonstrate that FitzHugh-Nagumo evolution preserves the knot topology for all the examples presented, thereby providing a field theory approach to the study of knots. Furthermore, the evolution yields a well-defined minimal length for each knot that is comparable to the ropelength of ideal knots. We highlight the role of the medium boundary in stabilizing the length of the knot and discuss the implications beyond torus knots. We also show that there is not a unique attractor within a given knot topology.
Seismic Reverse Time Migration Using A New Wave-Field Extrapolator and a New Imaging Condition
NASA Astrophysics Data System (ADS)
Moradpouri, Farzad; Moradzadeh, Ali; Pestana, Reynam C.; Soleimani Monfared, Mehrdad
2016-10-01
Prestack reverse time migration (RTM), as a two-way wave-field extrapolation method, can image steeply dipping structures without any dip limitation, at the expense of a potential increase in imaging artifacts. In this paper, an efficient symplectic scheme, called the Leapfrog-Rapid Expansion Method (L-REM), is first introduced to extrapolate the wavefield and its derivative in the same time step with high accuracy and free of numerical dispersion, using a Ricker wavelet with a maximum frequency of 25 Hz. Afterwards, in order to suppress the artifacts characteristic of RTM, a new imaging condition based on the Poynting vector and a type of weighting function is presented. The capability of the proposed new imaging condition is then tested on synthetic data. The obtained results indicate that the proposed imaging condition is able to suppress the RTM artifacts effectively. They also show the ability of the proposed approach to improve the amplitude and compensate for illumination.
Flux extrapolation models used in the DOT IV discrete ordinates neutron transport code
Tomlinson, E.T.; Rhoades, W.A.; Engle, W.W. Jr.
1980-05-01
The DOT IV code solves the Boltzmann transport equation in two dimensions using the method of discrete ordinates. Special techniques have been incorporated in this code to mitigate the effects of flux extrapolation error in space meshes of practical size. This report presents the flux extrapolation models as they appear in DOT IV. A sample problem is also presented to illustrate the effects of the various models on the resultant flux. Convergence of the various models to a single result as the mesh is refined is also examined. A detailed comparison with the widely used TWOTRAN II code is reported. The features which cause the converged DOT and TWOTRAN results to differ are observed and explained.
New method of extrapolation of the resistance of a model planing boat to full size
NASA Technical Reports Server (NTRS)
Sottorf, W
1942-01-01
The previously employed method of extrapolating the total resistance to full size with λ³ (λ = model scale), thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever-increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is analytically obtained. The friction coefficients are read from Prandtl's curve for turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.
Applications of Non-Force-Free Solar Coronal Magnetic Field Extrapolation
NASA Astrophysics Data System (ADS)
Alexander, A.; Hu, Q.; Zheng, J.; Heerikhuisen, J.
2016-12-01
Modeling our Sun's magnetic field continues to be a challenging and necessary task. While many have attempted it with various methods and theories in the past, none has been able to yield an accurate characterization of the global 3D coronal magnetic field. We propose to develop a Non-Force-Free Field (NFFF) model for global coronal magnetic field extrapolation based on synoptic vector magnetograms. Taking into account both radial and transverse components of the magnetic field vectors on the solar photosphere, the NFFF model extrapolates a three-dimensional magnetic field from these boundary conditions. Compared to the results of previous approaches, Potential Field (PFFS) or Linear Force-Free Field (LFFF), the calculated error from NFFF is significantly smaller. With a more accurate understanding of the structure of the coronal magnetic field we are closer to finding answers regarding the coronal heating problem, predicting space weather, and protecting technology, spacecraft and astronauts.
Agarwal, Amit B; McBride, Ali
2016-08-01
The World Health Organization defines a biosimilar as "a biotherapeutic product which is similar in terms of quality, safety and efficacy to an already licensed reference biotherapeutic product." Biosimilars are biologic medical products that are very distinct from small-molecule generics, as their active substance is a biological agent derived from a living organism. Approval processes are highly regulated, with guidance issued by the European Medicines Agency and US Food and Drug Administration. Approval requires a comparability exercise consisting of extensive analytical and preclinical in vitro and in vivo studies, and confirmatory clinical studies. Extrapolation of biosimilars from their original indication to another is a feasible but highly stringent process reliant on rigorous scientific justification. This review focuses on the processes involved in gaining biosimilar approval and extrapolation and details the comparability exercise undertaken in the European Union between originator erythropoietin-stimulating agent, Eprex(®), and biosimilar, Retacrit™.
Latychevskaia, Tatiana; Fink, Hans-Werner
2015-01-12
Previously reported crystalline structures obtained by an iterative phase retrieval reconstruction of their diffraction patterns seem to be free from displaying any irregularities or defects in the lattice, which appears to be unrealistic. We demonstrate here that the structure of a nanocrystal including its atomic defects can unambiguously be recovered from its diffraction pattern alone by applying a direct phase retrieval procedure not relying on prior information of the object shape. Individual point defects in the atomic lattice are clearly apparent. Conventional phase retrieval routines assume isotropic scattering. We show that when dealing with electrons, the quantitatively correct transmission function of the sample cannot be retrieved due to anisotropic, strong forward scattering specific to electrons. We summarize the conditions for this phase retrieval method and show that the diffraction pattern can be extrapolated beyond the original record to even reveal formerly not visible Bragg peaks. Such extrapolated wave field pattern leads to enhanced spatial resolution in the reconstruction.
New allometric scaling relationships and applications for dose and toxicity extrapolation.
Cao, Qiming; Yu, Jimmy; Connell, Des
2014-01-01
Allometric scaling between metabolic rate, size, body temperature, and other biological traits has found broad applications in ecology, physiology, and particularly in toxicology and pharmacology. Basal metabolic rate (BMR) was observed to scale with body size and temperature. However, it has been increasingly debated whether the mass scaling exponent should be 2/3, 3/4, or neither, and scaling with body temperature has also attracted recent attention. Based on thermodynamic principles, this work reports two new scaling relationships between BMR, size, temperature, and biological time. Good correlations were found with the new scaling relationships, and no universal scaling exponent could be obtained. The new scaling relationships were successfully validated with external toxicological and pharmacological studies. The results also demonstrate that individual extrapolation models can be built to obtain a scaling exponent specific to the group of interest, which can be practically applied for dose and toxicity extrapolations.
Beta skin dose determination using TLDs, Monte-Carlo calculations, and extrapolation chamber.
Ben-Shachar, B; Levine, S H; Hoffman, J M
1989-12-01
The beta doses produced by 90Sr-Y and 204Tl beta sources were determined using three methods: Monte-Carlo calculations, measurements with TLDs, and measurements with an extrapolation chamber. Excellent agreement was obtained by all three methods, except that a nonlinear TLD response to betas was observed, which gives doses approximately 20% too high for the 90Sr-Y source and 5% too low for the 204Tl source. Also, analyses performed with low-energy betas using these methods can determine errors in the shield thickness covering TLD elements. Direct measurement of skin dose is not possible with the TLDs because the minimum shield thickness for the elements is 13 mg cm-2. A thinner shield for the elements must be used or the data must be extrapolated. Presently, thinner shields for TLD elements are not available, and the thick shields can lead to significant errors in skin dose when exposed to low-energy betas.
Extrapolation of Nystrom solution for two dimensional nonlinear Fredholm integral equations
NASA Astrophysics Data System (ADS)
Guoqiang, Han; Jiong, Wang
2001-09-01
In this paper, we analyze the existence of an asymptotic error expansion of the Nystrom solution for two-dimensional nonlinear Fredholm integral equations of the second kind. We show that the Nystrom solution admits an error expansion in powers of the step sizes h and k. For a special choice of the numerical quadrature, the leading terms in the error expansion for the Nystrom solution contain only even powers of h and k, beginning with the terms h^(2p) and k^(2q). These expansions are useful for the application of Richardson extrapolation and for obtaining sharper error bounds. Numerical examples show how Richardson extrapolation gives a remarkable increase in precision, in addition to faster convergence.
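The even-power error expansion is exactly what makes Richardson extrapolation effective. A minimal one-dimensional sketch with the composite trapezoid rule (whose error expansion likewise contains only even powers of h), not the Nystrom scheme of the paper:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def richardson(f, a, b, n):
    """One Richardson step: since the error expansion starts at h^2,
    the combination (4*T(h/2) - T(h)) / 3 cancels the leading term."""
    return (4.0 * trapezoid(f, a, b, 2 * n) - trapezoid(f, a, b, n)) / 3.0
```

One extrapolation step raises the order from O(h^2) to O(h^4); for a cubic integrand it is already exact, just as combining Nystrom solutions on step sizes (h, k) and (h/2, k/2) cancels the h^(2p), k^(2q) leading terms.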
Shah, D.K.; Chen, D.J.; Chan, W.S.
1994-12-31
This paper applies the finite element least-square extrapolation and smoothing technique to demonstrate its advantages in the evaluation of interfacial stress distributions in composite laminates. The analysis uses the quasi-3D finite element modeling technique and a complete 3-D analysis using ABAQUS to investigate the stress distributions in graphite/epoxy laminates. Linear (2-point integration) and quadratic (3-point integration) least-square fits in 8-node quadrilateral and 20-node solid isoparametric elements are demonstrated. The evaluation of the transformation matrix from Gaussian stresses to nodal stresses was performed using symbolic mathematics in Mathematica. The results show that the use of extrapolation and smoothing offers better estimates of the stress distributions and the interfacial stresses in composite laminates.
NASA Astrophysics Data System (ADS)
Lu, Zhao
2007-01-01
Berreman's 4×4 matrix approach has been generally applied to calculating light propagation in one-dimensional (1-D) inhomogeneous anisotropic media. In numerical calculations the propagator (propagation matrix) of the whole 1-D inhomogeneous medium is approximated by a stack of N homogeneous slab propagators. We analyze the error of the slab propagator in this slab approximation and show it is correct through order 1/N². By using the extrapolation approach, we eliminate the leading error terms of the product (total propagator) of the N homogeneous slab propagators successively. Numerical tests for a cholesteric liquid crystal show that the total propagator constructed through extrapolation is of higher accuracy and efficiency than Berreman's and Abdulhalim's or faster 4×4 total propagators.
NASA Astrophysics Data System (ADS)
Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch
2017-09-01
A new method for the probabilistic nowcasting of instantaneous rain rates (ENS) based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold in a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by the calibration of forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: combined method (COM) and neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble was comprised of united ensemble members of ENS and NEI. The results showed that the calibration technique significantly improves bias of the probability forecasts by including additional uncertainties that correspond to neglected processes during the extrapolation. In addition, the calibration can also be used for finding the limits of maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable size of the ensemble is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
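The core of such an ensemble probability forecast is the exceedance fraction over members, verified with the Brier score. A schematic sketch only: the Lagrangian advection, grid handling, and reliability-based calibration described in the abstract are omitted.

```python
def exceedance_probability(members_mm_h, threshold_mm_h):
    """Fraction of ensemble members whose extrapolated rain rate
    exceeds the threshold at one grid point."""
    return sum(1 for r in members_mm_h if r > threshold_mm_h) / len(members_mm_h)

def brier_score(forecast_probs, outcomes):
    """Mean squared error of probability forecasts against binary
    outcomes (1 = threshold was exceeded, 0 = not)."""
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(forecast_probs)
```

A perfectly sharp, correct forecast scores 0; the calibration step in the paper adjusts the raw exceedance probabilities so that the reliability component of this score accounts for the growth and decay the pure extrapolation neglects.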
Dom, Nathalie; Knapen, Dries; Blust, Ronny
2012-01-01
The present study was developed to assess chronic toxicity predictions and extrapolations for a set of chlorinated anilines (aniline (AN), 4-chloroaniline (CA), 3,5-dichloroaniline (DCA) and 2,3,4-trichloroaniline (TCA)). Daphnia magna 21 d chronic experimental data were compared to the chronic toxicity predictions made by the US EPA ECOSAR QSAR tools and to acute-to-chronic extrapolations. Additionally, Species Sensitivity Distributions (SSDs) were constructed to assess the chronic toxicity variability among different species and to investigate the acute versus chronic toxicity in a multi-species context. Since chlorinated anilines are structural analogues with a designated polar narcotic mode of action, similar toxicity responses were assumed. However, rather large interchemical and interspecies differences in toxicity were observed. Compared to the other three test compounds, TCA exposure had a significantly larger impact on growth and reproduction of D. magna. Furthermore, this study illustrated that QSARs or a fixed ACR are not able to account for these interchemical and interspecies differences. Consequently, ECOSAR was found to be inadequate to predict the chronic toxicity of the anilines, and the use of a fixed ACR (of 10) led to underprotection of certain species. The experimental ACRs determined in D. magna were substantially different among the four aromatic amines (ACR of 32 for AN, 16.9 for CA, 5.7 for DCA and 60.8 for TCA). Furthermore, the SSDs illustrated that Danio rerio was rather insensitive to AN in comparison to another fish species, Pimephales promelas. It was therefore suggested that available toxicity data should be used in an integrative multi-species way, rather than using individual-based toxicity extrapolations. In this way, a relevant overview of the differences in species sensitivity is given, which in turn can serve as the basis for acute-to-chronic extrapolations.
Latychevskaia, Tatiana; Fink, Hans-Werner
2013-11-11
Conventional microscopic records represent intensity distributions whereby local sample information is mapped onto local information at the detector. In coherent microscopy, the superposition principle of waves holds; field amplitudes are added, not intensities. This non-local representation is spread out in space and interference information combined with wave continuity allows extrapolation beyond the actual detected data. Established resolution criteria are thus circumvented and hidden object details can retrospectively be recovered from just a fraction of an interference pattern.
A model for the data extrapolation of greenhouse gas emissions in the Brazilian hydroelectric system
NASA Astrophysics Data System (ADS)
Pinguelli Rosa, Luiz; Aurélio dos Santos, Marco; Gesteira, Claudio; Elias Xavier, Adilson
2016-06-01
Hydropower reservoirs are artificial water systems and comprise a small proportion of the Earth's continental territory. However, they play an important role in aquatic biogeochemistry and may affect the environment negatively. Since the 1990s, as a result of research on organic matter decay in man-made flooded areas, some reports have associated greenhouse gas emissions with dam construction. Pioneering work carried out in that early period challenged the view that hydroelectric plants generate completely clean energy. Those estimates suggested that GHG emissions into the atmosphere from some hydroelectric dams may be significant when measured per unit of energy generated and should be compared to GHG emissions from fossil fuels used for power generation. The contribution to global warming of greenhouse gases emitted by hydropower reservoirs is currently the subject of various international discussions and debates. One of the most controversial issues is the extrapolation of data from different sites. In this study, the extrapolation from a site sample where measurements were made to the complete set of 251 reservoirs in Brazil, comprising a total flooded area of 32 485 square kilometers, was derived from the theory of self-organized criticality. We employed a power law for its statistical representation. The present article reviews the data generated at that time in order to demonstrate how, with the help of mathematical tools, we can extrapolate values from one reservoir to another without compromising the reliability of the results.
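A power-law extrapolation of the kind described can be sketched as follows. The reservoir areas and emission values are hypothetical; only the total flooded area (32 485 km²) is taken from the abstract:

```python
import numpy as np

def fit_power_law(area_km2, emissions):
    """Fit emissions = c * area**b by ordinary least squares in
    log-log space, the usual way to estimate a power-law exponent."""
    b, log_c = np.polyfit(np.log(area_km2), np.log(emissions), 1)
    return np.exp(log_c), b

# hypothetical measured reservoirs: flooded area (km^2) vs. emissions
area = np.array([10.0, 50.0, 120.0, 900.0])
emis = np.array([4.0, 18.0, 40.0, 260.0])
c, b = fit_power_law(area, emis)

# extrapolate to the full Brazilian flooded area quoted in the abstract
total = c * 32485.0 ** b
```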
Study on Two Methods for Nonlinear Force-Free Extrapolation Based on Semi-Analytical Field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.; Song, M. T.
2011-03-01
In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352, 343, 1990) have been used to test two nonlinear force-free extrapolation methods. One is the boundary integral equation (BIE) method developed by Yan and Sakurai (Solar Phys. 195, 89, 2000), and the other is the approximate vertical integration (AVI) method developed by Song et al. (Astrophys. J. 649, 1084, 2006). Some improvements have been made to the AVI method to avoid singular points in the process of calculation. It is found that the correlation coefficients between the first semi-analytical field and the field extrapolated by the BIE method, and also that obtained by the improved AVI method, are greater than 90% below a height of 10 grid points above the 64×64 lower boundary. For the second semi-analytical field, these correlation coefficients are greater than 80% below the same relative height. Although differences between the semi-analytical solutions and the extrapolated fields exist for both the BIE and AVI methods, these two methods can give reliable results up to heights of about 15% of the extent of the lower boundary.
NASA Astrophysics Data System (ADS)
He, Han; Wang, Huaning
2008-05-01
The boundary integral equation (BIE) method was first proposed by Yan and Sakurai (2000) and used to extrapolate the nonlinear force-free magnetic field in the solar atmosphere. Recently, Yan and Li (2006) improved the BIE method and proposed the direct boundary integral equation (DBIE) formulation, which represents the nonlinear force-free magnetic field by direct integration of the magnetic field on the bottom boundary surface. On the basis of this new method, we devised a practical calculation scheme for the nonlinear force-free field extrapolation above solar active regions. The code of the scheme was tested by the analytical solutions of Low and Lou (1990) and was applied to the observed vector magnetogram of solar active region NOAA 9077. The results of the calculations show that the improvement of the new computational scheme to the scheme of Yan and Li (2006) is significant, and the force-free and divergence-free constraints are well satisfied in the extrapolated fields. The calculated field lines for NOAA 9077 present the X-shaped structure and can be helpful for understanding the magnetic configuration of the filament channel as well as the magnetic reconnection process during the Bastille Day flare on 14 July 2000.
NASA Astrophysics Data System (ADS)
Steinhausen, Heinz C.; Martín, Rodrigo; den Brok, Dennis; Hullin, Matthias B.; Klein, Reinhard
2015-03-01
Numerous applications in computer graphics and beyond benefit from accurate models for the visual appearance of real-world materials. Data-driven models like photographically acquired bidirectional texture functions (BTFs) suffer from limited sample sizes enforced by the common assumption of far-field illumination. Several materials like leather, structured wallpapers or wood contain structural elements on scales not captured by typical BTF measurements. We propose a method extending recent research by Steinhausen et al. to extrapolate BTFs for large-scale material samples from a measured and compressed BTF for a small fraction of the material sample, guided by a set of constraints. We propose combining color constraints with surface descriptors similar to normal maps as part of the constraints guiding the extrapolation process. This helps narrow down the search space for suitable ABRDFs per texel to a large extent. To acquire surface descriptors for nearly flat materials, we build upon the idea of photometrically estimating normals. Inspired by recent work by Pan and Skala, we obtain images of the sample in four different rotations with an off-the-shelf flatbed scanner and derive surface curvature information from these. Furthermore, we simplify the extrapolation process by using a pixel-based texture synthesis scheme, reaching computational efficiency similar to texture optimization.
NASA Astrophysics Data System (ADS)
Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.
2013-10-01
We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), consisting of fixing the last observed data point on the LS extrapolation curve, which customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data of a longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments, and validated using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have augmented the short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. Our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.
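A minimal sketch of the LS extrapolation with the last observed point fixed, assuming a linear polynomial plus annual and Chandler-like sinusoids (the synthetic series, periods and amplitudes are illustrative, not the paper's data):

```python
import numpy as np

def ls_extrapolate(t, y, periods, t_future):
    """Fit y(t) = a + b*t + sum_k [c_k*cos(2*pi*t/P_k) + s_k*sin(2*pi*t/P_k)]
    by least squares, shift the fitted curve so that it passes through
    the last observed point, and evaluate it at future epochs."""
    def design(tt):
        cols = [np.ones_like(tt), tt]
        for P in periods:
            cols += [np.cos(2 * np.pi * tt / P), np.sin(2 * np.pi * tt / P)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    offset = y[-1] - design(np.array([t[-1]])) @ coef  # fix last data point
    return design(np.asarray(t_future, dtype=float)) @ coef + offset

# synthetic polar-motion-like series: trend plus an annual term (time in days)
t = np.arange(0.0, 1500.0)
y = 5.0 + 0.01 * t + 3.0 * np.sin(2 * np.pi * t / 365.25)
pred = ls_extrapolate(t, y, periods=[365.25, 433.0], t_future=[1600.0])
```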
Landis, W.G.
1995-12-31
One of the central problems in environmental toxicology has been the extrapolation from laboratory tests to the field and from biomonitoring results to ecological impacts. The crossing of the boundary from molecular mechanisms to population impacts has always been difficult. Perhaps the problem in extrapolation is not so much the effects of physical scale as the transition boundary between two different types of systems, organismal and non-organismal. The basic properties of these systems are quite distinct. Organismal systems possess a central core of information, subject to natural selection, that can impose homeostasis (body temperature) or diversity (immune system) upon the constituents of that system. Unless there are changes in the genetic structure of the germ line, impacts to the somatic cells and structure of the organism are erased upon the establishment of a new generation. The integrity of the germplasm means that organismal systems are largely ahistorical. In contrast, non-organismal systems contain no central and inheritable repository of information, analogous to the genome, that serves as the blueprint for an ecological system. Non-organismal systems are historical in the terminology of complex systems. The irreversibility and historical nature of ecological systems has also been observed experimentally. Historical events and the derived heterogeneity in the field must be taken into account when extrapolations are conducted. Genetic structure of the populations, the current spatial distribution of species, and the physical structure of the system must all be taken into account if accurate forecasts from experimental results are to be made.
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (the current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to conveniently obtain the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H^3-regular problem and an anisotropic problem, are reported to show that the proposed method is much more efficient than the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
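The Richardson extrapolation ingredient can be illustrated in isolation. Below is a sketch combining two central-difference derivative estimates, not the paper's finite element setting:

```python
import numpy as np

def richardson(u_h, u_2h, p=2):
    """Combine approximations on grids with spacing h and 2h to cancel
    the leading O(h^p) error term:  u ~ (2^p * u_h - u_2h) / (2^p - 1)."""
    return (2.0 ** p * np.asarray(u_h) - np.asarray(u_2h)) / (2.0 ** p - 1)

# second-order central-difference estimates of f'(1) for f = exp
f, x = np.exp, 1.0
d = lambda h: (f(x + h) - f(x - h)) / (2.0 * h)
u_2h, u_h = d(0.2), d(0.1)
better = richardson(u_h, u_2h, p=2)   # fourth-order accurate combination
```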
Multi-State Extrapolation of UV/Vis Absorption Spectra with QM/QM Hybrid Methods
NASA Astrophysics Data System (ADS)
Ren, Sijin; Caricato, Marco
2017-06-01
In this work, we present a simple approach to obtain absorption spectra from hybrid QM/QM calculations. The goal is to obtain reliable spectra for compounds that are too large to be treated entirely at a high level of theory. The approach is based on the extrapolation of the entire absorption spectrum obtained by individual subcalculations. Our program locates the main spectral features in each subcalculation, e.g. band peaks and shoulders, and fits them to Gaussian functions. Each Gaussian is then extrapolated with a formula similar to that of ONIOM (Our own N-layered Integrated molecular Orbital molecular Mechanics). However, information about individual excitations is not necessary so that difficult state-matching across subcalculations is avoided. This multi-state extrapolation thus requires relatively low implementation effort while affording maximum flexibility in the choice of methods to be combined in the hybrid approach. The test calculations show the efficacy and robustness of this methodology in reproducing the spectrum computed for the entire molecule at a high level of theory.
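The ONIOM-like combination formula, applied here to a single hypothetical band parameter rather than a total energy, reduces to:

```python
def oniom_extrapolate(high_model, low_model, low_real):
    """ONIOM-style combination applied to one spectral band parameter
    (e.g. a Gaussian peak position or intensity) instead of an energy:
        value(high, real) ~ value(high, model) + value(low, real) - value(low, model)
    """
    return high_model + low_real - low_model

# hypothetical peak positions (eV) of one fitted Gaussian band
peak = oniom_extrapolate(high_model=3.10, low_model=3.45, low_real=3.30)
# -> 2.95 eV for the extrapolated band peak
```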
NASA Astrophysics Data System (ADS)
Ilieva, T.; Iliev, I.; Pashov, A.
2016-12-01
In the traditional description of electronic states of diatomic molecules by means of molecular constants or Dunham coefficients, one of the important fitting parameters is the value of the zero-point energy, i.e. the minimum of the potential curve or the energy of the lowest vibrational-rotational level, E00. Their values are almost always the result of an extrapolation, and it may be difficult to estimate their uncertainties, because they are connected not only with the uncertainty of the experimental data, but also with the distribution of experimentally observed energy levels and the particular realization of the set of Dunham coefficients. This paper presents a comprehensive analysis based on Monte Carlo simulations, which aims to demonstrate the influence of all these factors on the uncertainty of the extrapolated minimum of the potential energy curve U(Re) and the value of E00. The very good extrapolation properties of the Dunham coefficients are quantitatively confirmed and it is shown that, for a proper estimate of the uncertainties, the ambiguity in the composition of the set of Dunham coefficients should be taken into account.
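The Monte Carlo idea can be sketched with a two-coefficient Dunham-like expansion. All constants below are hypothetical, and the actual analysis in the paper is far more elaborate:

```python
import numpy as np

# hypothetical vibrational constants (cm^-1) and measurement noise
we, wexe, sigma = 200.0, 1.5, 0.05
G = lambda v, a, b: a * (v + 0.5) - b * (v + 0.5) ** 2  # Dunham-like G(v)
v = np.arange(2, 12)          # suppose the lowest levels were not observed

rng = np.random.default_rng(1)
e00 = []
for _ in range(2000):
    levels = G(v, we, wexe) + rng.normal(0.0, sigma, v.size)
    # refit the two coefficients from the noisy levels ...
    A = np.column_stack([v + 0.5, -(v + 0.5) ** 2])
    c, *_ = np.linalg.lstsq(A, levels, rcond=None)
    # ... and extrapolate down to the zero-point energy E00
    e00.append(G(0, c[0], c[1]))
spread = float(np.std(e00))   # Monte Carlo estimate of the E00 uncertainty
```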
Biosimilar epoetin zeta: extrapolation of indications and real world utilization experience.
Dingermann, Theodor; Scotte, Florian
2016-07-01
There is an essential need for clinicians to understand the development and approval process of biosimilars. Extrapolation of efficacy and safety data from one indication to another may be supported by a comprehensive comparability program covering safety, efficacy and immunogenicity, designed to detect potentially clinically relevant differences. This article specifically discusses the approval of epoetin zeta (Retacrit™, Hospira, a Pfizer company) and the EMA reasoning for extrapolation of indications. Additionally, the results of the ongoing utilization surveillance program, which was approved in 2007 and has analyzed over 120 million patient-days of epoetin zeta treatment, are presented. At the time of approval, uncertainty about safety and efficacy is much lower for biosimilars than for new innovative products. Approval of indications based on extrapolation of data rests on sound and objective scientific criteria and is a logical consequence of the biosimilar concept that has been successfully implemented in the European Union. Biosimilar epoetin has been used extensively in patients in Europe for nine years. Following a review of the known risks and ADR information received in almost 120 million patient-days' worth of experience, the risks associated with treatment with epoetin zeta remain similar to those of the reference product.
NASA Astrophysics Data System (ADS)
Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.
We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.
Testing magnetofrictional extrapolation with the Titov-Démoulin model of solar active regions
NASA Astrophysics Data System (ADS)
Valori, G.; Kliem, B.; Török, T.; Titov, V. S.
2010-09-01
We examine the nonlinear magnetofrictional extrapolation scheme using the solar active region model by Titov and Démoulin as test field. This model consists of an arched, line-tied current channel held in force-free equilibrium by the potential field of a bipolar flux distribution in the bottom boundary. A modified version with a parabolic current density profile is employed here. We find that the equilibrium is reconstructed with very high accuracy in a representative range of parameter space, using only the vector field in the bottom boundary as input. Structural features formed in the interface between the flux rope and the surrounding arcade - “hyperbolic flux tube” and “bald patch separatrix surface” - are reliably reproduced, as are the flux rope twist and the energy and helicity of the configuration. This demonstrates that force-free fields containing these basic structural elements of solar active regions can be obtained by extrapolation. The influence of the chosen initial condition on the accuracy of reconstruction is also addressed, confirming that the initial field that best matches the external potential field of the model quite naturally leads to the best reconstruction. Extrapolating the magnetogram of a Titov-Démoulin equilibrium in the unstable range of parameter space yields a sequence of two opposing evolutionary phases, which clearly indicate the unstable nature of the configuration: a partial buildup of the flux rope with rising free energy is followed by destruction of the rope, losing most of the free energy.
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J
2014-06-01
Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for dose-of-the-day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken, were selected. Furthermore, a small FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42 mm more from the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7 mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior borders were 0.8 ± 0.5 mm and 3.0 ± 1.5 mm respectively, and increased to 2.7 ± 2.2 mm and 5.9 ± 1.9 mm at 4.2 cm away. The mean error within the CBCT borders was 1.16 ± 0.54 mm. The overall errors within the 4.2 cm error expansion were 2.0 ± 1.2 mm (sup) and 4.5 ± 1.6 mm (inf). Conclusion: The overall error in the inf direction is larger due to larger unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
Back-extrapolation of estimates of exposure from current land-use regression models
NASA Astrophysics Data System (ADS)
Chen, Hong; Goldberg, Mark S.; Crouse, Dan L.; Burnett, Richard T.; Jerrett, Michael; Villeneuve, Paul J.; Wheeler, Amanda J.; Labrèche, France; Ross, Nancy A.
2010-11-01
Land use regression has been used in epidemiologic studies to estimate long-term exposure to air pollution within cities. The models are often developed toward the end of the study using recent air pollution data. Given that there may be spatially-dependent temporal trends in urban air pollution and that there is interest for epidemiologists in assessing period-specific exposures, especially early-life exposure, methods are required to extrapolate these models back in time. We present herein three new methods to back-extrapolate land use regression models. During three two-week periods in 2005-2006, we monitored nitrogen dioxide (NO2) at about 130 locations in Montreal, Quebec, and then developed a land-use regression (LUR) model. Our three extrapolation methods entailed multiplying the predicted concentrations of NO2 by the ratio of past estimates of concentrations from fixed-site monitors, such that they reflected the change in the spatial structure of NO2 from measurements at fixed-site monitors. The specific methods depended on the availability of land use and traffic-related data, and we back-extrapolated the LUR model to 10 and 20 years into the past. We then applied these estimates to residential information from subjects enrolled in a case-control study of postmenopausal breast cancer that was conducted in 1996. Observed and predicted concentrations of NO2 in Montreal decreased and were correlated in time. The estimated concentrations using the three extrapolation methods had similar distributions, except that one method yielded slightly lower values. The spatial distributions varied slightly between methods. In the analysis of the breast cancer study, the odds ratios were insensitive to the method but varied with time: for a 5 ppb increase in NO2 using the 2006 LUR the odds ratio (OR) was about 1.4 and the ORs in predicted past concentrations of NO2 varied (OR ≈ 1.2 for 1985 and OR ≈ 1.3-1.5 for 1996). Thus, the ORs per unit exposure increased with
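The core of the ratio-based back-extrapolation reduces to a one-line scaling. The NO2 values below are hypothetical:

```python
def back_extrapolate(lur_recent, monitor_recent, monitor_past):
    """Scale a recent LUR prediction at one location by the ratio of past
    to recent concentrations observed at fixed-site monitors."""
    return lur_recent * (monitor_past / monitor_recent)

# hypothetical NO2 (ppb): LUR prediction for 2006, monitor means for 2006/1985
no2_1985 = back_extrapolate(lur_recent=12.0, monitor_recent=20.0,
                            monitor_past=35.0)   # -> 21.0 ppb
```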
NASA Technical Reports Server (NTRS)
Lueck, Dale E. (Inventor)
1994-01-01
Payload customers for the Space Shuttle have recently expressed concerns about the possibility of their payloads at an adjacent pad being contaminated by plume effluents from a shuttle at an active pad as they await launch on an inactive pad. As part of a study to satisfy such concerns, a ring of inexpensive dosimeters was deployed around the active pad at the inter-pad distance. However, following a launch, dosimeters cannot be read for several hours after the exposure. As a consequence, factors such as different substrates, solvent systems, and possible volatilization of HCl from the badges were studied. This observation led to the length of stain (LOS) dosimeters of this invention. Commercial passive LOS dosimeters can sense only 2 ppm to 20 ppm over an 8-hour exposure. To map and quantitate the HCl generated by Shuttle launches, and in the atmosphere within a radius of 1.5 miles from the active pad, a sensitivity of 2 ppm HCl in the atmospheric gases over an exposure of 5 minutes is required. A passive length of stain dosimeter has been developed with a sensitivity capable of detecting a gas in a concentration as low as 2 ppm over an exposure of five minutes.
Small angle x-ray scattering of chromatin. Radius and mass per unit length depend on linker length
Williams, S.P.; Langmore, J.P.
1991-03-01
Analyses of low angle x-ray scattering from chromatin, isolated by identical procedures but from different species, indicate that fiber diameter and number of nucleosomes per unit length increase with the amount of nucleosome linker DNA. Experiments were conducted at physiological ionic strength to obtain parameters reflecting the structure most likely present in living cells. Guinier analyses were performed on scattering from solutions of soluble chromatin from Necturus maculosus erythrocytes (linker length 48 bp), chicken erythrocytes (linker length 64 bp), and Thyone briareus sperm (linker length 87 bp). The results were extrapolated to infinite dilution to eliminate interparticle contributions to the scattering. Cross-sectional radii of gyration were found to be 10.9 ± 0.5, 12.1 ± 0.4, and 15.9 ± 0.5 nm for Necturus, chicken, and Thyone chromatin, respectively, which are consistent with fiber diameters of 30.8, 34.2, and 45.0 nm. Masses per unit length were found to be 6.9 ± 0.5, 8.3 ± 0.6, and 11.8 ± 1.4 nucleosomes per 10 nm for Necturus, chicken, and Thyone chromatin, respectively. The geometrical consequences of the experimental masses per unit length and radii of gyration are consistent with a conserved interaction among nucleosomes. Cross-linking agents were found to have little effect on fiber external geometry, but significant effect on internal structure. The absolute values of fiber diameter and mass per unit length, and their dependencies upon linker length, agree with the predictions of the double-helical crossed-linker model. A compilation of all published x-ray scattering data from the last decade indicates that the relationship between chromatin structure and linker length is consistent with data obtained by other investigators.
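The quoted fiber diameters follow from the cross-sectional radii of gyration if the fiber is modeled as a uniform solid cylinder, for which Rc = R/√2; a quick check:

```python
import math

def cylinder_diameter_from_rc(rc_nm):
    """For a uniform solid cylinder the cross-sectional radius of gyration
    satisfies Rc = R / sqrt(2), so the diameter is 2 * sqrt(2) * Rc."""
    return 2.0 * math.sqrt(2.0) * rc_nm

# cross-sectional radii of gyration (nm) quoted in the abstract
diameters = [round(cylinder_diameter_from_rc(rc), 1) for rc in (10.9, 12.1, 15.9)]
# -> [30.8, 34.2, 45.0], matching the quoted fiber diameters
```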
Latychevskaia, Tatiana; Fink, Hans-Werner; Chushkin, Yuriy; Zontone, Federico
2015-11-02
Coherent diffraction imaging is a high-resolution imaging technique whose potential can be greatly enhanced by applying the extrapolation method presented here. We demonstrate the enhancement in resolution of a non-periodical object reconstructed from an experimental X-ray diffraction record which contains about 10% missing information, including the pixels in the center of the diffraction pattern. A diffraction pattern is extrapolated beyond the detector area and as a result, the object is reconstructed at an enhanced resolution and better agreement with experimental amplitudes is achieved. The optimal parameters for the iterative routine and the limits of the extrapolation procedure are discussed.
NASA Astrophysics Data System (ADS)
Borsányi, Szabolcs; Fodor, Zoltán; Katz, Sándor D.; Pásztor, Attila; Szabó, Kálmán K.; Török, Csaba
2015-04-01
We study the correlators of Polyakov loops, and the corresponding gauge invariant free energy of a static quark-antiquark pair in 2+1 flavor QCD at finite temperature. Our simulations were carried out on Nt = 6, 8, 10, 12, 16 lattices using a Symanzik improved gauge action and a stout improved staggered action with physical quark masses. The free energies calculated from the Polyakov loop correlators are extrapolated to the continuum limit. For the free energies we use a two-step renormalization procedure that only uses data at finite temperature. We also measure correlators with definite Euclidean time reversal and charge conjugation symmetry to extract two different screening masses, one in the magnetic and one in the electric sector, to distinguish two different correlation lengths in the full Polyakov loop correlator.
Neural extrapolation of motion for a ball rolling down an inclined plane.
La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka
2014-01-01
It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.
De Vore, Karl W; Fatahi, Nadia M; Sass, John E
2016-08-01
Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1·x^(1/2) + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit to a radical equation of the form y = B1·x^(1/2) + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
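A least-squares fit of the radical equation y = B1·x^(1/2) + B0 can be sketched as follows; the recovery data are synthetic and noise-free for illustration:

```python
import numpy as np

def fit_radical(t, recovery):
    """Ordinary least-squares fit of recovery = B1 * sqrt(t) + B0."""
    A = np.column_stack([np.sqrt(t), np.ones_like(t)])
    (b1, b0), *_ = np.linalg.lstsq(A, recovery, rcond=None)
    return b1, b0

# synthetic % recovery over the first 20% of a 24-month shelf life (months)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 4.8])
y = 100.0 - 2.0 * np.sqrt(t)               # a loss rate that slows over time
b1, b0 = fit_radical(t, y)
predicted_24mo = b1 * np.sqrt(24.0) + b0   # extrapolated end-of-shelf-life recovery
```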
Mathieson, A McL; Stevenson, A W
2002-03-01
To ensure that a true zero-extinction kinematical limit value has been attained by extrapolation of a series of measurements on one reflection, the proper dependence of a function of F versus the function of the physical variable involved in the measurements has to be identified. To demonstrate this point, the multiwavelength gamma-ray data on seven reflections of NiF₂ reported by Palmer & Jauch [Acta Cryst. (1995), A51, 662-667] have been utilized. A new physical component has been introduced into the relationship between diffracted intensity and wavelength: that due to the decrease in angular divergence of diffraction from crystallites with decrease in wavelength. For gamma-rays, this leads to a function of F² in respect of wavelength, viz. F² = F₀² − αλ + βλ², which is different from that derived from Zachariasen-type models, viz. F² = F₀² − kλ². Comparison of the limit values according to Palmer & Jauch and according to Mathieson & Stevenson demonstrates the advantage of the functional dependence proposed in this study.
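The difference between the two functional forms can be illustrated numerically: data generated from the quadratic-in-λ form extrapolate to different zero-wavelength limits under the two models. All values below are invented for illustration:

```python
import numpy as np

# Synthetic multiwavelength data: F^2 vs wavelength (arbitrary units),
# generated from the quadratic form F^2 = F0^2 - a*lam + b*lam^2.
lam = np.linspace(0.1, 0.6, 8)
f0sq_true, a_true, b_true = 50.0, 10.0, 4.0
f_sq = f0sq_true - a_true * lam + b_true * lam**2

# Mathieson-Stevenson form: quadratic polynomial in lambda.
ms_coeffs = np.polyfit(lam, f_sq, 2)     # [b, -a, F0^2]
f0sq_ms = ms_coeffs[2]                   # extrapolated value at lam -> 0

# Zachariasen-type form F^2 = F0^2 - k*lam^2: linear in lam^2.
z_coeffs = np.polyfit(lam**2, f_sq, 1)   # [-k, F0^2]
f0sq_z = z_coeffs[1]

# With data following the first form, the second model's zero-wavelength
# limit is biased, which is the kind of discrepancy the paper discusses.
```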
Scotcher, Daniel; Jones, Christopher; Posada, Maria; Galetin, Aleksandra; Rostami-Hodjegan, Amin
2016-09-01
It is envisaged that application of mechanistic models will improve prediction of changes in renal disposition due to drug-drug interactions, genetic polymorphism in enzymes and transporters and/or renal impairment. However, developing and validating mechanistic kidney models is challenging due to the number of processes that may occur (filtration, secretion, reabsorption and metabolism) in this complex organ. Prediction of human renal drug disposition from preclinical species may be hampered by species differences in the expression and activity of drug metabolising enzymes and transporters. A proposed solution is bottom-up prediction of pharmacokinetic parameters based on in vitro-in vivo extrapolation (IVIVE), mediated by recent advances in in vitro experimental techniques and application of relevant scaling factors. This review is a follow-up to Part I of the report from the 2015 AAPS Annual Meeting and Exhibition (Orlando, FL; 25th-29th October 2015) which focuses on IVIVE and mechanistic prediction of renal drug disposition. It describes the various mechanistic kidney models that may be used to investigate renal drug disposition. Particular attention is given to efforts that have attempted to incorporate elements of IVIVE. In addition, the use of mechanistic models in prediction of renal drug-drug interactions and potential for application in determining suitable adjustment of dose in kidney disease are discussed. The need for suitable clinical pharmacokinetics data for the purposes of delineating mechanistic aspects of kidney models in various scenarios is highlighted.
NASA Astrophysics Data System (ADS)
Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan
2013-04-01
Nowcasting of precipitation events, especially thunderstorm events or winter storms, has high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potential hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increase of handling capacity, anticipation of avoidance manoeuvres and increase of awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analysis of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis to generate rapid-update forecasts in a time frame up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. On the basis of tracking radar echoes by correlation, the movement vectors of successive weather radar images are calculated. For every new successive radar image a set of ensemble precipitation fields is collected by using different parameter sets like pattern match size, different time steps, filter methods and an implementation of history of tracking vectors and plausibility checks. This method considers the uncertainty in rain field displacement and different scales in time and space. By manually validating a set of case studies, the best verification method and skill score are defined and implemented into an online verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To get information about the quality and reliability of the extrapolation process, additional information of data quality (e.g. shielding in Alpine areas) is extrapolated and combined with an extrapolation
RF-sheath heat flux estimates on Tore Supra and JET ICRF antennae. Extrapolation to ITER
Colas, L.; Portafaix, C.; Goniche, M.; Jacquet, Ph.
2009-11-26
RF-sheath induced heat loads are identified from infrared thermography measurements on the Tore Supra ITER-like prototype and JET A2 antennae, and are quantified by fitting thermal calculations. Using a simple scaling law assessed experimentally, the estimated heat fluxes are then extrapolated to the ITER ICRF launcher delivering 20 MW RF power for several plasma scenarios. Parallel heat fluxes up to 6.7 MW/m² are expected very locally on the ITER antenna front face. The role of edge density on operation is stressed as a trade-off between easy RF coupling and reasonable heat loads. Sources of uncertainty on the results are identified.
Resolution enhancement in digital holography by self-extrapolation of holograms.
Latychevskaia, Tatiana; Fink, Hans-Werner
2013-03-25
It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer
NASA Technical Reports Server (NTRS)
Lane, John
2012-01-01
Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has been and is a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy due to the limitations of these kinds of research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD and therefore, a 3D Z-R measurement using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis. The original software (developed at the University of Central Florida, 1998-2000) has
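The quantities involved, Z and R, are moments of the measured DSD. A minimal sketch of computing both from a single disdrometer's N(D); the exponential N(D) and the power-law terminal velocity are common textbook assumptions, not the algorithm's actual inputs:

```python
import numpy as np

# Hypothetical drop-size distribution from a single disdrometer:
# number concentration N(D) [m^-3 mm^-1] for drop-diameter bins D [mm].
D = np.linspace(0.5, 5.0, 10)     # bin centers, mm
dD = D[1] - D[0]                  # bin width, mm
N = 8000.0 * np.exp(-2.0 * D)     # Marshall-Palmer-like exponential N(D)

# Assumed power-law terminal velocity v(D) = 3.78 * D^0.67 [m/s].
v = 3.78 * D**0.67

# Radar reflectivity factor: Z = sum N(D) D^6 dD  [mm^6 m^-3]
Z = np.sum(N * D**6 * dD)

# Rainfall rate: R = 6e-4 * pi * sum N(D) D^3 v(D) dD  [mm/h]
R = 6.0e-4 * np.pi * np.sum(N * D**3 * v * dD)

Z_dBZ = 10.0 * np.log10(Z)
```

The vertical-extrapolation step described in the abstract would reuse the same v(D) to advect each drop-size increment in height.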
NASA Astrophysics Data System (ADS)
Poirier, Marc; Gagnon, Martin; Tahan, Antoine; Coutu, André; Chamberland-lauzon, Joël
2017-01-01
In this paper, we present the application of cyclostationary modelling for the extrapolation of short stationary load strain samples measured in situ on hydraulic turbine blades. Long periods of measurements allow for a wide range of fluctuations representative of long-term reality to be considered. However, sampling over short periods limits the dynamic strain fluctuations available for analysis. The purpose of the technique presented here is therefore to generate a representative signal containing proper long term characteristics and expected spectrum starting with a much shorter signal period. The final objective is to obtain a strain history that can be used to estimate long-term fatigue behaviour of hydroelectric turbine runners.
Impact of new collider data on fits and extrapolations of cross sections and slopes
Block, M.M.; Cahn, R.N.
1985-08-01
The latest Collider data are compared with our earlier extrapolations. Fits that include the new data are made. Those for which σ_tot grows as log²(s/s₀) indefinitely give a significantly poorer χ² than those for which σ_tot eventually levels out. For the proposed SSC energy, the former fits predict σ_tot(√s = 40 TeV) ≈ 200 mb while the latter give σ_tot(√s = 40 TeV) ≈ 100 mb. 6 refs.
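A log²(s/s₀) fit of this kind reduces to linear least squares once s₀ is fixed. A hedged sketch with invented coefficients and energies (the actual fits float s₀ and include additional physics):

```python
import numpy as np

# Synthetic total-cross-section data sigma_tot [mb] vs sqrt(s) [GeV],
# generated from sigma = a + b*log^2(s/s0) with s0 fixed (an assumption;
# in the real fits s0 is itself a fitted parameter).
s0 = 1.0  # GeV^2
sqrt_s = np.array([10.0, 30.0, 60.0, 540.0, 1800.0])
a_true, b_true = 35.0, 0.3
s = sqrt_s**2
sigma = a_true + b_true * np.log(s / s0) ** 2

# With s0 fixed the model is linear in (a, b).
design = np.column_stack([np.ones_like(s), np.log(s / s0) ** 2])
(a_fit, b_fit), *_ = np.linalg.lstsq(design, sigma, rcond=None)

# Extrapolate to the proposed SSC energy, sqrt(s) = 40 TeV.
s_ssc = (40.0e3) ** 2
sigma_ssc = a_fit + b_fit * np.log(s_ssc / s0) ** 2
```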
Analyzing and Critiquing Occupational Therapy Practice Models Using Mosey's Extrapolation Method.
Ikiugu, Moses N
2010-07-01
Over time, there has been a persistent gap between theory and practice in occupational therapy. In this paper, it is suggested that this gap could be decreased by enhancing therapists' knowledge and understanding of the nature of theory. Mosey's (1996a) 9-step extrapolation method of developing theoretical conceptual practice models is proposed as one way of improving clinicians' understanding of the structure of theoretical conceptual practice models and knowing how to analyze and critique them to determine their usefulness in specific clinical contexts. This understanding will hopefully translate into increased utilization of theoretical conceptual practice models to guide everyday practice.
R-matrix and Potential Model Extrapolations for NACRE Update and Extension Project
Aikawa, Masayuki; Katsuma, Masahiko; Takahashi, Kohji; Arnould, Marcel; Arai, Koji; Utsunomiya, Hiroaki
2006-07-12
NACRE, the 'nuclear astrophysics compilation of reaction rates', has been widely utilized in stellar evolution and nucleosynthesis studies. Its update and extension programme started within a Konan-Universite Libre de Bruxelles (ULB) collaboration. At present, experimental data in refereed journals have been collected, and their theoretical extrapolations are being performed using the R-matrix or potential models. For the ³H(d,n)⁴He and ²H(p,γ)³He reactions, we present preliminary results that reproduce the experimental data well.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1987-01-01
In a recent work by the author an extrapolation method, the W-transformation, was developed, by which a large class of oscillatory infinite integrals can be computed very efficiently. The results of this work are extended to a class of divergent oscillatory infinite integrals in the present paper. It is shown in particular that these divergent integrals exist in the sense of Abel summability and that the W-transformation can be applied to them without any modifications. Convergence results are stated and numerical examples given.
A study of alternative schemes for extrapolation of secular variation at observatories
Alldredge, L.R.
1976-01-01
The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable in worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work. © 1976.
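A linear-plus-quasiperiodic scheme of the sort described can be sketched as follows; the period, amplitudes, and field values are invented for illustration, and the quasiperiodic component is modeled as a single sinusoid of fixed period so the fit stays linear:

```python
import numpy as np

# Hypothetical annual means of one geomagnetic field component [nT]:
# a linear secular-variation trend plus a quasiperiodic (~11-yr) term.
years = np.arange(1950.0, 1976.0)
t = years - 1950.0
field = 45000.0 + 20.0 * t + 30.0 * np.sin(2 * np.pi * t / 11.0)

# Scheme 1: purely linear secular variation (the usual assumption).
lin = np.polyfit(years, field, 1)
pred_lin = np.polyval(lin, 1980.0)

# Scheme 2: linear trend plus a sinusoid of assumed 11-yr period,
# fitted by least squares (with the period fixed, the model is linear).
w = 2 * np.pi / 11.0
design = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
coeffs, *_ = np.linalg.lstsq(design, field, rcond=None)
t_pred = 1980.0 - 1950.0
pred_per = coeffs @ np.array([1.0, t_pred, np.sin(w * t_pred), np.cos(w * t_pred)])
```

On data that actually contain a quasiperiodic component, the augmented model extrapolates more accurately than the straight line.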
MEGA16 - Computer program for analysis and extrapolation of stress-rupture data
NASA Technical Reports Server (NTRS)
Ensign, C. R.
1981-01-01
The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN IV using IBM plotting subroutines, and it runs on an IBM 370 time-sharing system.
Model of a realistic InP surface quantum dot extrapolated from atomic force microscopy results.
Barettin, Daniele; De Angelis, Roberta; Prosposito, Paolo; Auf der Maur, Matthias; Casalboni, Mauro; Pecchia, Alessandro
2014-05-16
We report on numerical simulations of a zincblende InP surface quantum dot (QD) on In₀.₄₈Ga₀.₅₂ buffer. Our model is strictly based on experimental structures, since we extrapolated a three-dimensional dot directly from atomic force microscopy results. Continuum electromechanical, k·p bandstructure and optical calculations are presented for this realistic structure, together with benchmark calculations for a lens-shaped QD with the same radius and height as the extrapolated dot. Interesting similarities and differences are shown by comparing the results obtained with the two different structures, leading to the conclusion that the use of a more realistic structure can provide significant improvements in the modeling of QDs. In fact, the remarkable splitting for the electron p-like levels of the extrapolated dot seems to prove that a realistic experimental structure can reproduce the right symmetry and a correct splitting usually given by atomistic calculations even within the multiband k·p approach. Moreover, the energy levels and the symmetry of the holes are strongly dependent on the shape of the dot. In particular, as far as we know, their wave function symmetries do not seem to resemble any results previously obtained with simulations of zincblende ideal structures, such as lenses or truncated pyramids. The magnitude of the oscillator strengths is also strongly dependent on the shape of the dot, showing a lower intensity for the extrapolated dot, especially for the transition between the electron and hole ground states, as a result of a relevant reduction of the wave function overlap. We also compare an experimental photoluminescence spectrum measured on a homogeneous sample containing about 60 dots with a numerical ensemble average derived from single-dot calculations. The broader energy range of the numerical spectrum motivated us to perform further verifications, which have clarified some aspects of the experimental
Challenges for In vitro to in Vivo Extrapolation of Nanomaterial Dosimetry for Human Risk Assessment
Smith, Jordan N.
2013-11-01
The proliferation in types and uses of nanomaterials in consumer products has led to rapid application of conventional in vitro approaches for hazard identification. Unfortunately, assumptions pertaining to experimental design and interpretation for studies with chemicals are not generally appropriate for nanomaterials. The fate of nanomaterials in cell culture media, cellular dose to nanomaterials, cellular dose to nanomaterial byproducts, and intracellular fate of nanomaterials at the target site of toxicity all must be considered in order to accurately extrapolate in vitro results to reliable predictions of human risk.
Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...
Mueller, David S.
2013-01-01
profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
Spackman, Peter R.; Karton, Amir
2015-05-15
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis-set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
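The two-point A + B/L^α formula yields a closed-form basis-set limit once α is chosen. A minimal sketch; the exponent and the energies below are illustrative placeholders, not values from the study:

```python
# Two-point basis-set extrapolation E(L) = E_CBS + B / L^alpha.
# Eliminating B between the L=2 (DZ) and L=3 (TZ) equations gives E_CBS.
def cbs_two_point(e_lo, e_hi, alpha, l_lo=2, l_hi=3):
    """Closed-form complete-basis-set limit from two correlation energies."""
    b = (e_lo - e_hi) / (l_lo**-alpha - l_hi**-alpha)
    return e_hi - b * l_hi**-alpha

# Hypothetical CCSD correlation energies (Hartree) with DZ and TZ basis sets,
# extrapolated with an illustrative exponent alpha = 2.4.
e_dz, e_tz = -0.300, -0.350
e_cbs = cbs_two_point(e_dz, e_tz, alpha=2.4)
```

In the global scheme a single α is used for all systems; in the system-dependent scheme α would instead be determined per system from cheaper MP2 calculations.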
Yamamoto, Tetsuya
2007-06-01
A novel test fixture operating at a millimeter-wave band using an extrapolation range measurement technique was developed at the National Metrology Institute of Japan (NMIJ). Here I describe the measurement system using a Q-band test fixture. I measured the relative insertion loss as a function of antenna separation distance and observed the effects of multiple reflections between the antennas. I also evaluated the antenna gain at 33 GHz using the extrapolation technique.
Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.
Le, Guigao; Oulaid, Othmane; Zhang, Junfeng
2015-03-01
In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including the simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating the steady and unsteady convection-diffusion system with a flat interface and the steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with respective analytical solutions. A more general system with unsteady convection-diffusion process and a curved interface, i.e., the cooling process of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that the cylinder with a larger heat capacity can release more heat energy into the fluid and the cylinder temperature cools down slower, while the enhanced heat conduction inside the cylinder can facilitate the cooling process of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.
Space-time hole filling with random walks in view extrapolation for 3D video.
Choi, Sunghwan; Ham, Bumsub; Sohn, Kwanghoon
2013-06-01
In this paper, a space-time hole filling approach is presented to deal with a disocclusion when a view is synthesized for the 3D video. The problem becomes even more complicated when the view is extrapolated from a single view, since the hole is large and has no stereo depth cues. Although many techniques have been developed to address this problem, most of them focus only on view interpolation. We propose a space-time joint filling method for color and depth videos in view extrapolation. For proper texture and depth to be sampled in the following hole filling process, the background of a scene is automatically segmented by the random walker segmentation in conjunction with the hole formation process. Then, the patch candidate selection process is formulated as a labeling problem, which can be solved with random walks. The patch candidates that best describe the hole region are dynamically selected in the space-time domain, and the hole is filled with the optimal patch for ensuring both spatial and temporal coherence. The experimental results show that the proposed method is superior to state-of-the-art methods and provides both spatially and temporally consistent results with significantly reduced flicker artifacts.
Holland, Ariel T.; Palaniappan, Latha P.
2015-01-01
Asian-American citizens are the fastest growing racial/ethnic group in the United States. Nevertheless, data on Asian American health are scarce, and many health disparities for this population remain unknown. Much of our knowledge of Asian American health has been determined by studies in which investigators have either grouped Asian-American subjects together or examined one subgroup alone (e.g., Asian Indian, Chinese, Filipino, Japanese, Korean, Vietnamese). National health surveys that collect information on Asian-American race/ethnicity frequently omit this population in research reports. When national health data are reported for Asian-American subjects, it is often reported for the aggregated group. This aggregation may mask differences between Asian-American subgroups. When health data are reported by Asian American subgroup, it is generally reported for one subgroup alone. In the Ni-Hon-San study, investigators examined cardiovascular disease in Japanese men living in Japan (Nippon; Ni), Honolulu, Hawaii (Hon), and San Francisco, CA (San). The findings from this study are often incorrectly extrapolated to other Asian-American subgroups. Recommendations to correct the errors associated with omission, aggregation, and extrapolation include: oversampling of Asian Americans, collection and reporting of race/ethnicity data by Asian-American subgroup, and acknowledgement of significant heterogeneity among Asian American subgroups when interpreting data. PMID:22625997
Kudritskii, V.D.; Atamanyuk, I.P.; Ivashchenko, E.N.
1995-09-01
Control problems often require predicting the future state of the controlled plant given its present and past state. The practical relevance of such prediction problems has spurred many studies and led to the development of various methods of solution. These methods can be divided into two broad directions: deductive methods, which assume that in addition to the sample the researcher also has some prior information, and inductive methods, where the main heuristic is the choice of an external performance criterion. Each of these directions has its strengths and weaknesses, and is characterized by a specific domain of application. An obvious advantage of the inductive approach is that it requires a minimum of information (in the limit, the problem is solved using a single observed realization, which is not feasible with any other method). However, the heuristic choice of the external criterion substantially influences the accuracy of extrapolation. Deductive methods, in turn, ensure a guaranteed, prespecified extrapolation accuracy, but their application requires preliminary, fairly time-consuming and costly accumulation of empirical data about the observed phenomenon. The two main directions are mutually complementary, and the use of a particular direction in applications is mainly determined by the volume of data that have been accumulated up to the relevant time.
NASA Astrophysics Data System (ADS)
Gauthier, P.-A.; Camier, C.; Pasco, Y.; Berry, A.; Chambatte, E.; Lapointe, R.; Delalay, M.-A.
2011-11-01
For sound field reproduction using multichannel spatial sound systems such as Wave Field Synthesis and Ambisonics, sound field extrapolation is a useful tool for the measurement, description and characterization of a sound environment to be reproduced in a listening area. In this paper, the inverse problem theory is adapted to sound field extrapolation around a microphone array for further spatial sound and sound environment reproduction. A general review of inverse problem theory and analysis tools is given and used for the comparative evaluation of various microphone array configurations. Classical direct regularization methods such as truncated singular value decomposition and Tikhonov regularization are recalled. On the basis of the reviewed background, a new regularization method adapted to the problem at hand is introduced. This method involves the use of an a priori beamforming measurement to define a data-dependent discrete smoothing norm for the regularization of the inverse problem. This method which represents the main contribution of this paper shows promising results and opens new research avenues.
The role of strange sea quarks in chiral extrapolations on the lattice
NASA Astrophysics Data System (ADS)
Descotes-Genon, S.
2005-03-01
Since the strange quark has a light mass of order O(Λ_QCD), fluctuations of sea s̄s pairs may play a special role in the low-energy dynamics of QCD by inducing significantly different patterns of chiral symmetry breaking in the chiral limits N_f = 2 (m_u = m_d = 0, m_s physical) and N_f = 3 (m_u = m_d = m_s = 0). This effect of vacuum fluctuations of s̄s pairs is related to the violation of the Zweig rule in the scalar sector, described through the two O(p⁴) low-energy constants L₄ and L₆ of the three-flavour strong chiral lagrangian. In the case of significant vacuum fluctuations, three-flavour chiral expansions might exhibit numerical competition between leading- and next-to-leading-order terms according to the chiral counting, and chiral extrapolations should be handled with special care. We investigate the impact of the fluctuations of s̄s pairs on chiral extrapolations in the case of lattice simulations with three dynamical flavours in the isospin limit. Information on the size of the vacuum fluctuations can be obtained from the dependence of the masses and decay constants of pions and kaons on the light quark masses. Even in the case of large fluctuations, corrections due to the finite size of spatial dimensions can be kept under control for large enough boxes (L ≈ 2.5 fm).
Image reconstruction: a unifying model for resolution enhancement and data extrapolation. Tutorial
NASA Astrophysics Data System (ADS)
Shieh, Hsin M.; Byrne, Charles L.; Fiddy, Michael A.
2006-02-01
In reconstructing an object function F(r) from finitely many noisy linear-functional values ∫F(r)Gn(r)dr we face the problem that finite data, noisy or not, are insufficient to specify F(r) uniquely. Estimates based on the finite data may succeed in recovering broad features of F(r), but may fail to resolve important detail. Linear and nonlinear, model-based data extrapolation procedures can be used to improve resolution, but at the cost of sensitivity to noise. To estimate linear-functional values of F(r) that have not been measured from those that have been, we need to employ prior information about the object F(r), such as support information or, more generally, estimates of the overall profile of F(r). One way to do this is through minimum-weighted-norm (MWN) estimation, with the prior information used to determine the weights. The MWN approach extends the Gerchberg-Papoulis band-limited extrapolation method and is closely related to matched-filter linear detection, the approximation of the Wiener filter, and to iterative Shannon-entropy-maximization algorithms. Nonlinear versions of the MWN method extend the noniterative, Burg, maximum-entropy spectral-estimation procedure.
Image reconstruction: a unifying model for resolution enhancement and data extrapolation. Tutorial.
Shieh, Hsin M; Byrne, Charles L; Fiddy, Michael A
2006-02-01
In reconstructing an object function F(r) from finitely many noisy linear-functional values integral of F(r)Gn(r)dr we face the problem that finite data, noisy or not, are insufficient to specify F(r) uniquely. Estimates based on the finite data may succeed in recovering broad features of F(r), but may fail to resolve important detail. Linear and nonlinear, model-based data extrapolation procedures can be used to improve resolution, but at the cost of sensitivity to noise. To estimate linear-functional values of F(r) that have not been measured from those that have been, we need to employ prior information about the object F(r), such as support information or, more generally, estimates of the overall profile of F(r). One way to do this is through minimum-weighted-norm (MWN) estimation, with the prior information used to determine the weights. The MWN approach extends the Gerchberg-Papoulis band-limited extrapolation method and is closely related to matched-filter linear detection, the approximation of the Wiener filter, and to iterative Shannon-entropy-maximization algorithms. Non-linear versions of the MWN method extend the noniterative, Burg, maximum-entropy spectral-estimation procedure.
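The Gerchberg-Papoulis band-limited extrapolation that the MWN approach extends alternates between re-imposing the measured samples and enforcing the band limit. A minimal 1-D sketch (the window, band limit, and test signal are invented for illustration):

```python
import numpy as np

# 1-D Gerchberg-Papoulis sketch: recover a band-limited signal known only
# on a central window by alternating projections onto
# (1) signals matching the known samples and (2) band-limited signals.
n = 256
k = np.fft.fftfreq(n)
band = np.abs(k) < 0.05                    # band limit, assumed known a priori

rng = np.random.default_rng(1)
spec = np.zeros(n, dtype=complex)
spec[band] = rng.normal(size=band.sum()) + 1j * rng.normal(size=band.sum())
true = np.fft.ifft(spec).real              # real, band-limited test signal

known = slice(n // 4, 3 * n // 4)          # samples actually "measured"
x = np.zeros(n)
x[known] = true[known]
err0 = np.linalg.norm(x - true) / np.linalg.norm(true)

for _ in range(200):
    X = np.fft.fft(x)
    X[~band] = 0.0                         # project onto the band limit
    x = np.fft.ifft(X).real
    x[known] = true[known]                 # re-impose the measured samples

rel_err = np.linalg.norm(x - true) / np.linalg.norm(true)
```

Both steps are projections onto convex sets containing the true signal, so the reconstruction error is non-increasing; the MWN generalization replaces the plain band-limit projection with a weighted-norm constraint built from prior information.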
Wadsworth, Ian; Jaki, Thomas; Sills, Graeme J; Appleton, Richard; Cross, J Helen; Marson, Anthony G; Martland, Tim; McLellan, Ailsa; Smith, Philip E M; Pellock, John M; Hampson, Lisa V
2016-11-01
Data from clinical trials in adults, extrapolated to predict benefits in paediatric patients, could result in fewer or smaller trials being required to obtain a new drug licence for paediatrics. This article outlines the place of such extrapolation in the development of drugs for use in paediatric epilepsies. Based on consensus expert opinion, a proposal is presented for a new paradigm for the clinical development of drugs for focal epilepsies. Phase I data should continue to be collected in adults, and phase II and III trials should simultaneously recruit adults and paediatric patients aged above 2 years. Drugs would be provisionally licensed for children subject to phase IV collection of neurodevelopmental safety data in this age group. A single programme of trials would suffice to license the drug for use as either adjunctive therapy or monotherapy. Patients, clinicians and sponsors would all benefit from this new structure through cost reduction and earlier access to novel treatments. Further work is needed to elicit the views of patients, their parents and guardians as appropriate, regulatory authorities and bodies such as the National Institute for Health and Care Excellence (UK).
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error concealment (EC) is a decoder-side technique for hiding transmission errors. It works by analyzing spatial or temporal information from the available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is often the better option for error hiding. In this paper, a Block Matching error concealment algorithm is compared with a Frequency Selective Extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index). The original video frames, together with the corrupted frames, were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation yields better quality measures than the Block Matching algorithm: 48% higher PSNR and 94% higher SSIM.
Image extrapolation for photo stitching using nonlocal patch-based inpainting
NASA Astrophysics Data System (ADS)
Voronin, V. V.; Marchuk, V. I.; Sherstobitov, A. I.; Semenischev, E. A.; Agaian, S.; Egiazarian, K.
2014-05-01
Image alignment and mosaicing are usually performed on a set of overlapping images, using features in the area of overlap for seamless stitching. In many cases such images differ in size and shape, so the panorama must either be cropped or completed by image extrapolation. This paper focuses on a novel image inpainting method based on a modified exemplar-based technique. The basic idea is to find an example (patch) in the image using local binary patterns, and to replace the missing ('lost') data with it. We propose to use multiple criteria for the patch-similarity search, since existing exemplar-based methods often produce unsatisfactory results in practice. The matching criterion combines several terms, including the Euclidean metric for pixel brightness and the chi-squared histogram-matching distance for local binary patterns. The combined use of textural-geometric characteristics and color information gives a more informative description of the patches. In particular, we show how to apply this strategy to image extrapolation for photo stitching. Examples on several test images demonstrate the effectiveness of the proposed approach.
Downscaling and extrapolating dynamic seasonal marine forecasts for coastal ocean users
NASA Astrophysics Data System (ADS)
Vanhatalo, Jarno; Hobday, Alistair J.; Little, L. Richard; Spillman, Claire M.
2016-04-01
Marine weather and climate forecasts are essential for planning strategies and activities on a range of temporal and spatial scales. However, seasonal dynamical forecast models, which provide forecasts on a monthly scale, often have low offshore resolution and limited information for inshore coastal areas. Hence, there is increasing demand for methods capable of fine-scale seasonal forecasts covering coastal waters. Here, we have developed a method to combine observational data with dynamical forecasts from POAMA (Predictive Ocean Atmosphere Model for Australia; Australian Bureau of Meteorology) in order to produce downscaled, corrected seasonal forecasts, extrapolated to include inshore regions that POAMA does not cover. We demonstrate the method by forecasting the monthly sea surface temperature anomalies in the Great Australian Bight (GAB) region. The resolution of POAMA in the GAB is approximately 2° × 1° (lon. × lat.) and the resolution of our downscaled forecast is approximately 1° × 0.25°. We use data and model hindcasts for the period 1994-2010 for forecast validation. The predictive performance of our statistical downscaling model improves on the original POAMA forecast. Additionally, the model extrapolates forecasts to coastal regions not covered by POAMA, and its forecasts are probabilistic, which allows straightforward assessment of uncertainty in downscaling and prediction. A range of marine users will benefit from access to downscaled and nearshore forecasts at seasonal timescales.
On Extrapolating Past the Range of Observed Data When Making Statistical Predictions in Ecology
Conn, Paul B.; Johnson, Devin S.; Boveng, Peter L.
2015-01-01
Ecologists are increasingly using statistical models to predict animal abundance and occurrence in unsampled locations. The reliability of such predictions depends on a number of factors, including sample size, how far prediction locations are from the observed data, and similarity of predictive covariates in locations where data are gathered to locations where predictions are desired. In this paper, we propose extending Cook’s notion of an independent variable hull (IVH), developed originally for application with linear regression models, to generalized regression models as a way to help assess the potential reliability of predictions in unsampled areas. Predictions occurring inside the generalized independent variable hull (gIVH) can be regarded as interpolations, while predictions occurring outside the gIVH can be regarded as extrapolations worthy of additional investigation or skepticism. We conduct a simulation study to demonstrate the usefulness of this metric for limiting the scope of spatial inference when conducting model-based abundance estimation from survey counts. In this case, limiting inference to the gIVH substantially reduces bias, especially when survey designs are spatially imbalanced. We also demonstrate the utility of the gIVH in diagnosing problematic extrapolations when estimating the relative abundance of ribbon seals in the Bering Sea as a function of predictive covariates. We suggest that ecologists routinely use diagnostics such as the gIVH to help gauge the reliability of predictions from statistical models (such as generalized linear, generalized additive, and spatio-temporal regression models). PMID:26496358
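For simple linear regression, Cook's IVH reduces to a leverage check: a prediction point lies inside the hull when its leverage does not exceed the maximum leverage over the design points. A minimal sketch of that one-covariate special case (the gIVH of the paper generalizes this, via prediction variance, to generalized and spatio-temporal models; the data here are made up):

```python
def inside_ivh(x_obs, x_new):
    """Return True if x_new is an interpolation under Cook's IVH for
    simple linear regression: leverage h(x) = 1/n + (x - xbar)^2 / Sxx
    must not exceed the maximum leverage over the observed design."""
    n = len(x_obs)
    xbar = sum(x_obs) / n
    sxx = sum((x - xbar) ** 2 for x in x_obs)
    lev = lambda x: 1.0 / n + (x - xbar) ** 2 / sxx
    h_max = max(lev(x) for x in x_obs)
    return lev(x_new) <= h_max

inside_ivh([1, 2, 3, 4, 5], 3.0)   # True: interpolation
inside_ivh([1, 2, 3, 4, 5], 6.0)   # False: extrapolation, treat with skepticism
```

The same idea carries over to multiple covariates by replacing the scalar leverage with x(XᵀX)⁻¹xᵀ for the full design matrix X.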
Finite-Element Extrapolation of Myocardial Structure Alterations Across the Cardiac Cycle in Rats
David Gomez, Arnold; Bull, David A.; Hsu, Edward W.
2015-01-01
Myocardial microstructures are responsible for key aspects of cardiac mechanical function. Natural myocardial deformation across the cardiac cycle induces measurable structural alteration, which varies across disease states. Diffusion tensor magnetic resonance imaging (DT-MRI) has become the tool of choice for myocardial structural analysis. Yet, obtaining the comprehensive structural information of the whole organ, in 3D and time, for subject-specific examination is fundamentally limited by scan time. Therefore, subject-specific finite-element (FE) analysis of a group of rat hearts was implemented for extrapolating a set of initial DT-MRI measurements to the rest of the cardiac cycle. The effect of material symmetry (isotropy, transverse isotropy, and orthotropy), structural input, and warping approach was assessed by comparing simulated predictions against in vivo MRI displacement measurements and DT-MRI of an isolated heart preparation at relaxed, inflated, and contracture states. Overall, the results indicate that, while ventricular volume and circumferential strain are largely independent of the simulation strategy, structural alteration predictions are generally improved with the sophistication of the material model, which also enhances torsion and radial strain predictions. Moreover, whereas subject-specific transversely isotropic models produced the most accurate descriptions of fiber structural alterations, the orthotropic models best captured changes in sheet structure. These findings underscore the need for subject-specific input data, including structure, to extrapolate DT-MRI measurements across the cardiac cycle. PMID:26299478
Regional GPS TEC modeling; Attempted spatial and temporal extrapolation of TEC using neural networks
NASA Astrophysics Data System (ADS)
Habarulema, John Bosco; McKinnell, Lee-Anne; Opperman, Ben D. L.
2011-04-01
In this paper, the potential extrapolation capabilities and limitations of artificial neural networks (ANNs) are investigated. This is done primarily by generating total electron content (TEC) predictions with the regional southern Africa total electron content prediction (SATECP) model, which is based on Global Positioning System (GPS) data and ANNs with multiple inputs chosen to let the network learn the relationship between their variations and the target parameter, TEC. TEC values are predicted over regions that were not covered in the model's development, although it is difficult to validate their accuracy in some cases. The SATECP model is also used to forecast hourly TEC variability 1 year ahead in order to assess the forecasting capability of ANNs in generalizing TEC patterns. The developed SATECP model has also been independently validated against ionosonde data and TEC values derived from the adapted University of New Brunswick Ionospheric Mapping Technique (UNB-IMT) over southern Africa. From the comparison of prediction results with actual GPS data, it is observed that ANNs extrapolate relatively well during quiet periods, while the accuracy is low during geomagnetically disturbed conditions. However, ANNs correctly identify both positive and negative storm effects observed in GPS TEC data analyzed within the input space.
Coaxial atomizer liquid intact lengths
NASA Technical Reports Server (NTRS)
Eroglu, Hasan; Chigier, Norman; Farago, Zoltan
1991-01-01
Average intact lengths of round liquid jets generated by an airblast coaxial atomizer were measured from over 1500 photographs. The intact lengths were studied over a jet Reynolds number range of 18,000 and a Weber number range of 260. Results are presented for two different nozzle geometries. The intact lengths were found to be strongly dependent on the Reynolds and Weber numbers, and an empirical equation was derived as a function of these parameters. A comparison of the intact lengths for round jets and flat sheets shows that round jets generate shorter intact lengths.
CT image construction of a totally deflated lung using deformable model extrapolation
Sadeghi Naini, Ali; Pierce, Greg; Lee, Ting-Yim; and others
2011-02-15
Purpose: A novel technique is proposed to construct a CT image of a totally deflated lung from a free-breathing 4D-CT image sequence acquired preoperatively. Such a constructed CT image is very useful in performing tumor ablative procedures such as lung brachytherapy, which are frequently performed while the lung is totally deflated. Deflating the lung during such procedures renders preoperative images ineffective for targeting the tumor. Furthermore, the problem cannot be solved using intraoperative ultrasound (U.S.) images, because U.S. images are very sensitive to the small residual amount of air remaining in the deflated lung. One possible solution is to register high-quality preoperative CT images of the deflated lung with their corresponding low-quality intraoperative U.S. images. However, given that the preoperative images correspond to an inflated lung, they need to be processed to construct CT images pertaining to the lung's deflated state. Methods: To obtain the CT images of the deflated lung, we present a novel image construction technique using extrapolated deformable registration to predict the deformation the lung undergoes during full deflation. The proposed construction technique involves estimating the lung's air volume in each preoperative image automatically in order to track the respiration phase of each 4D-CT image throughout a respiratory cycle; i.e., the technique does not need any external marker to form a respiratory signal in the process of curve fitting and extrapolation. The extrapolated deformation field is then applied to a preoperative reference image in order to construct the totally deflated lung's CT image. The technique was evaluated experimentally using ex vivo porcine lung. Results: The ex vivo lung experiments led to very encouraging results: the constructed CT image was very similar to the CT image of the deflated lung acquired for validation.
Extrapolation of a non-linear autoregressive model of pulmonary mechanics.
Langdon, Ruby; Docherty, Paul D; Chiew, Yeong Shiong; Chase, J Geoffrey
2017-02-01
For patients with acute respiratory distress syndrome (ARDS), mechanical ventilation (MV) is an essential therapy in the intensive care unit (ICU). Suboptimal positive end-expiratory pressure (PEEP) levels in MV can cause ventilator-induced lung injury, which is associated with increased mortality, extended ICU stay, and high cost. The ability to predict the response of respiratory mechanics to changes in PEEP would thus provide a critical advantage in personalising and improving care, since testing potentially dangerous high pressures would not be required to assess their impact. A nonlinear autoregressive (NARX) model was used to predict airway pressure in 19 data sets from 10 mechanically ventilated ARDS patients. Patient-specific NARX models were identified from pressure and flow data over one, two, three, or four adjacent PEEP levels in a recruitment manoeuvre. Extrapolation of NARX model elastance functions allowed prediction of patient responses to PEEP changes to higher or lower pressures. NARX model predictions were more successful than those using a well-validated first-order model (FOM). The most clinically important results were for extrapolation up one PEEP step of 2 cmH2O from the highest PEEP in the training data. When the NARX model was trained on one PEEP level, the mean RMS residual for the extrapolation PEEP level was 0.52 (90% CI: 0.47-0.57) cmH2O, compared to 1.50 (90% CI: 1.38-1.62) cmH2O for the FOM. When trained on four PEEP levels, the NARX result was 0.50 (90% CI: 0.42-0.58) cmH2O, versus 1.95 (90% CI: 1.71-2.19) cmH2O for the FOM. The results suggest that a full recruitment manoeuvre may not be required for the NARX model to obtain a useful estimate of the pressure waveform at higher PEEP levels. The methodology could thus allow clinicians to make informed decisions about ventilator PEEP settings while reducing the risk associated with high PEEP and subsequent high peak airway pressures.
NASA Astrophysics Data System (ADS)
Poppe, L. J.; Eliason, A. E.; Hastings, M. E.
2004-05-01
Methods that describe and summarize grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Therefore, to facilitate reduction of sedimentologic data, we have written a computer program (GSSTAT) to generate grain-size statistics and extrapolate particle distributions. Our program is written in Microsoft Visual Basic 6.0, runs on Windows 95/98/ME/NT/2000/XP computers, provides a window to facilitate execution, and allows users to select options with mouse-click events or through interactive dialogue boxes. The program permits users to select output in either inclusive graphics or moment statistics, to extrapolate distributions to the colloidal-clay boundary by three methods, and to convert between frequency and cumulative frequency percentages. Detailed documentation is available within the program. Input files to the program must be comma-delimited ASCII text and have 20 fields that include: sample identifier, latitude, longitude, and the frequency or cumulative frequency percentages of the whole-phi fractions from 11 phi through -5 phi. Individual fields may be left blank, but the sum of the phi fractions must total 100% (+/- 0.2%). The program expects the first line of the input file to be a header showing attribute names; no embedded commas are allowed in any of the fields. Error messages warn the user of potential problems. The program generates an output file in the requested destination directory and allows the user to view results in a display window to determine the occurrence of errors. The output file has a header for its first line, but now has 34 fields: the original descriptor fields plus percentages of gravel, sand, silt and clay, statistics, classification, verbal descriptions, frequency or cumulative frequency percentages of the whole-phi fractions from 13 phi through -5 phi, and a field for error messages. If the user has selected extrapolation, the two additional phi
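Moment statistics of the kind such a program reports can be computed directly from the whole-phi frequency percentages as weighted moments. A minimal sketch (the actual program's field layout, extrapolation options, and classification rules are not reproduced here):

```python
def moment_stats(phi_midpoints, freq_pct):
    """Method-of-moments grain-size statistics from frequency percentages:
    mean, sorting (standard deviation), skewness, and kurtosis in phi units."""
    w = [f / 100.0 for f in freq_pct]          # weights summing to ~1
    mean = sum(p * wi for p, wi in zip(phi_midpoints, w))
    var = sum(wi * (p - mean) ** 2 for p, wi in zip(phi_midpoints, w))
    sd = var ** 0.5
    skew = sum(wi * (p - mean) ** 3 for p, wi in zip(phi_midpoints, w)) / sd ** 3
    kurt = sum(wi * (p - mean) ** 4 for p, wi in zip(phi_midpoints, w)) / sd ** 4
    return mean, sd, skew, kurt

# A symmetric two-class toy distribution: mean 1 phi, sorting 1 phi,
# zero skewness
moment_stats([0.0, 2.0], [50.0, 50.0])   # -> (1.0, 1.0, 0.0, 1.0)
```

In practice the phi midpoints would come from the whole-phi class boundaries of the input file, and the weights from its frequency-percentage fields.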
NASA Astrophysics Data System (ADS)
Fernandes, Ryan I.; Fairweather, Graeme
2012-08-01
An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.
The use of extrapolation concepts to augment the Frequency Separation Technique
NASA Astrophysics Data System (ADS)
Alexiou, Spiros
2015-03-01
The Frequency Separation Technique (FST) is a general method formulated to improve the speed and/or accuracy of lineshape calculations, including strong overlapping collisions, as is the case for ion dynamics. It should be most useful when combined with ultrafast methods, which, however, have significant difficulties as the impact regime is approached. These difficulties are addressed by the FST, in which the impact limit is correctly recovered. The present work examines the possibility of augmenting the Frequency Separation Technique with extrapolation to improve results and minimize the errors resulting from the neglect of fast-slow coupling, and thus to obtain the exact result with a minimum of extra effort. To this end, the adequacy of one such ultrafast method, the Frequency Fluctuation Method (FFM), for treating the nonimpact part is examined. It is found that although the FFM is unable to reproduce the nonimpact profile correctly, its coupling with the FST correctly reproduces the total profile.
Variance reduction technique in a beta radiation beam using an extrapolation chamber.
Polo, Ivón Oramas; Souza Santos, William; de Lara Antonio, Patrícia; Caldas, Linda V E
2017-10-01
This paper shows how the variance reduction technique "geometry splitting/Russian roulette" improves the statistical error and reduces uncertainties in the determination of the absorbed dose rate in tissue using an extrapolation chamber for beta radiation. The results show that the use of this technique can increase the number of events in the chamber cavity, bringing the simulation into closer agreement with the physical problem. Good agreement was found among the experimental measurements, the manufacturer's certificate, and the simulated absorbed dose rate values and uncertainties. The coefficient of variation of the absorbed dose rate obtained with geometry splitting/Russian roulette was 2.85%. Copyright © 2017 Elsevier Ltd. All rights reserved.
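The Russian-roulette half of the variance-reduction pair can be sketched in a few lines: a low-weight particle is killed with probability 1 − p, and survivors have their weight divided by p, which leaves the expected weight (and hence the estimator) unbiased. A toy illustration independent of any particular transport code, with made-up weights and thresholds:

```python
import random

def russian_roulette(weight, threshold, p, rng):
    """Kill a low-weight particle with probability 1-p; survivors get
    weight/p, so the expected weight is unchanged (unbiased)."""
    if weight >= threshold:
        return weight                  # important enough: leave alone
    if rng.random() < p:
        return weight / p              # survives with boosted weight
    return 0.0                         # killed: stop tracking this history

rng = random.Random(12345)
w0, n = 0.05, 200_000
mean_w = sum(russian_roulette(w0, 0.1, 0.5, rng) for _ in range(n)) / n
# mean_w fluctuates around w0 = 0.05: the expected weight is preserved
```

Geometry splitting is the dual operation: a particle entering an important region (here, the chamber cavity) is split into k copies, each carrying weight/k, which increases the event count there without biasing the tally.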
Evaluation of PNEC values: extrapolation from Microtox, algae, daphnid, and fish data to HC5.
Garay, V; Roman, G; Isnard, P
2000-02-01
In order to evaluate the risk to the environment from long-term exposure to any discharged substance, toxicity thresholds are estimated, in particular the Predicted No Effect Concentration (PNEC). This concentration can be estimated by the classic assessment-factor approach or by statistical methods. The latter are more scientifically sound but require several (at least 5-6) chronic ecotoxicity data points, implying greater cost and time. New extrapolation methods, derived from the statistical concept but requiring fewer data, have been studied. Results show that methods based on chronic data are more reliable than methods based on acute data, but the improvement is quite small. Considering the cost of chronic tests compared to acute tests, approaches based on acute data are an attractive alternative. A simple regression on the mean of the acute data gives the best results.
NASA Astrophysics Data System (ADS)
Florez, W. F.; Portapila, M.; Hill, A. F.; Power, H.; Orsini, P.; Bustamante, C. A.
2015-03-01
The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases.
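Richardson extrapolation, the backbone of the multi-step coupling strategy mentioned above, combines two approximations at step sizes h and h/k to cancel the leading error term. A generic sketch, independent of the geochemical solver (the example function is illustrative):

```python
def richardson(approx, h, k=2.0, p=1):
    """Combine approx(h) and approx(h/k), assuming the error behaves
    like C*h^p, to cancel the leading-order error term."""
    a_coarse = approx(h)
    a_fine = approx(h / k)
    return (k ** p * a_fine - a_coarse) / (k ** p - 1)

# The forward difference of f(x) = x^2 at x = 1 has error exactly h
# (p = 1), so one Richardson step recovers the exact derivative 2.
fd = lambda h: ((1 + h) ** 2 - 1) / h     # = 2 + h
richardson(fd, 0.1)                        # -> 2.0 (up to rounding)
```

In a sequential transport-chemistry coupling, `approx` would be the coupled solution advanced with operator-splitting step h, and the extrapolated combination reduces the splitting error without iterating between the two models.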
Ordering of metal-ion toxicities in different species--extrapolation to man
England, M.W.; Turner, J.E.; Hingerty, B.E.; Jacobson, K.B.
1989-01-01
Our previous attempts to predict the toxicities of 24 metal ions for a given species, using physicochemical parameters associated with the ions, are summarized. In our current attempt we have chosen indicators of toxicity for biological systems of increasing levels of complexity, starting with individual biological molecules and ascending to mice as representative of higher-order animals. The numerical values for these indicators have been normalized to a scale of 100 for Mg²⁺ (essentially nontoxic) and 0 for Cd²⁺ (very toxic). To give predicted toxicities to humans, extrapolations across biological species have been made for each of the metal ions considered. The predicted values are then compared with threshold limit values (TLV) from the literature. Both methods for predicting toxicities have their advantages and disadvantages, and both have limited success for metal ions. However, the second approach suggests that the TLV for Cu²⁺ should be lower than that currently recommended.
Interpolation/extrapolation technique with application to hypervelocity impact of space debris
NASA Technical Reports Server (NTRS)
Rule, William K.
1992-01-01
A new technique for the interpolation/extrapolation of engineering data is described. The technique easily allows for the incorporation of additional independent variables, and the most suitable data in the data base is automatically used for each prediction. The technique provides diagnostics for assessing the reliability of the prediction. Two sets of predictions made for known 5-degree-of-freedom, 15-parameter functions using the new technique produced an average coefficient of determination of 0.949. Here, the technique is applied to the prediction of damage to the Space Station from hypervelocity impact of space debris. A new set of impact data is presented for this purpose. Reasonable predictions for bumper damage were obtained, but predictions of pressure wall and multilayer insulation damage were poor.
High energy hadron-nucleus cross sections and their extrapolation to cosmic ray energies
Ball, J.S.; Pantziris, A.
1996-02-01
Old models of the scattering of composite systems based on the Glauber model of multiple diffraction are applied to hadron-nucleus scattering. We obtain an excellent fit with only two free parameters to the highest energy hadron-nucleus data available. Because of the quality of the fit and the simplicity of the model, it is argued that it should continue to be reliable up to the highest cosmic ray energies. Logarithmic extrapolations of p-p and p̄-p data are used to calculate the proton-air cross sections at very high energy. Finally, it is observed that if the exponential behavior of the p̄-p diffraction peak continues into the few-TeV energy range, it will violate partial wave unitarity. We propose a simple modification that guarantees unitarity throughout the cosmic ray energy region. © 1996 The American Physical Society.
High-accuracy extrapolated ab initio thermochemistry of the vinyl, allyl, and vinoxy radicals.
Tabor, Daniel P; Harding, Michael E; Ichino, Takatoshi; Stanton, John F
2012-07-26
Enthalpies of formation at both 0 and 298 K were calculated according to the HEAT (High-accuracy Extrapolated Ab initio Thermochemistry) protocol for the title molecules, all of which play important roles in combustion chemistry. At the HEAT345-(Q) level of theory, recommended enthalpies of formation at 0 K are 301.5 ± 1.3, 180.3 ± 1.8, and 23.4 ± 1.5 kJ mol⁻¹ for vinyl, allyl, and vinoxy, respectively. At 298 K, the corresponding values are 297.3, 168.6, and 16.1 kJ mol⁻¹, with the same uncertainties. The calculated values for the three radicals are in excellent agreement with the corresponding experimental values, but the uncertainties associated with the HEAT values for vinoxy are considerably smaller than those based on experimental studies.
NASA Astrophysics Data System (ADS)
Takei, K.; Kumai, K.; Kobayashi, Y.; Miyashiro, H.; Terada, N.; Iwahori, T.; Tanaka, T.
Testing methods to estimate the cycle life of lithium-ion batteries within a short period have been developed using a commercialized cell with a LiCoO2/hard carbon system. Results of tests over divided operating-voltage ranges suggested that the degradation reactions with increasing cycles occur predominantly above 4 V. For the extrapolation method using limited cycle data, a straight-line approximation was useful because the cycle performance is nearly linear, but the error is as large as 40% when only the initial short cycle data are used. In the accelerated aging tests using the charge and/or discharge rate as stress factors, large acceleration coefficients were obtained at high charge rates and under high-temperature thermal stress.
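The straight-line extrapolation of limited cycle data described above amounts to an ordinary least-squares fit of capacity versus cycle number, extended to a target end-of-life capacity. A minimal sketch; the fade rate and 80% threshold below are illustrative numbers, not values from the paper:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
    return slope, ybar - slope * xbar

def cycles_to_eol(cycles, capacity_pct, eol_pct=80.0):
    """Extrapolate the linear capacity-fade trend to the cycle count
    at which capacity crosses the end-of-life threshold."""
    slope, intercept = fit_line(cycles, capacity_pct)
    return (eol_pct - intercept) / slope

# Synthetic linear fade of 0.01 %/cycle from 100 %: EOL (80 %) near cycle 2000
cycles = [0, 100, 200, 300, 400]
caps = [100.0 - 0.01 * c for c in cycles]
cycles_to_eol(cycles, caps)    # approx. 2000 cycles
```

As the abstract notes, real fade curves are only approximately linear, so an extrapolation from early cycles alone can carry errors of tens of percent.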
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method, which is applicable in situ: it requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.
Removal of lipid artifacts in 1H spectroscopic imaging by data extrapolation.
Haupt, C I; Schuff, N; Weiner, M W; Maudsley, A A
1996-05-01
Proton MR spectroscopic imaging (MRSI) of human cerebral cortex is complicated by the presence of an intense signal from subcutaneous lipids, which, if not suppressed before Fourier reconstruction, causes ringing and signal contamination throughout the metabolite images as a result of limited k-space sampling. In this article, an improved reconstruction of the lipid region is obtained using the Papoulis-Gerchberg algorithm. This procedure makes use of the narrow-band-limited nature of the subcutaneous lipid signal to extrapolate to higher k-space values without alteration of the metabolite signal region. Using computer simulations and in vivo experimental studies, the implementation and performance of this algorithm were examined. This method was found to permit MRSI brain spectra to be obtained without applying any lipid suppression during data acquisition, at echo times of 50 ms and longer. When applied together with optimized acquisition methods, this provides an effective procedure for imaging metabolite distributions in cerebral cortical surface regions.
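The Papoulis-Gerchberg algorithm alternates two projections: band-limit the current estimate, then re-impose the known samples. A pure-Python toy on an 8-point signal using a naive O(N²) DFT (the MRSI application applies the same idea to extrapolate the lipid signal to unmeasured k-space values; the signal and band here are made up):

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def papoulis_gerchberg(samples, known, band, n_iter=500):
    """Extrapolate a band-limited signal: 'known' marks measured
    indices, 'band' is the set of allowed DFT bins."""
    N = len(samples)
    x = [samples[i] if known[i] else 0.0 for i in range(N)]
    for _ in range(n_iter):
        X = dft(x)
        X = [X[k] if k in band else 0.0 for k in range(N)]  # band-limit
        x = idft(X)
        x = [samples[i] if known[i] else x[i]               # re-impose data
             for i in range(N)]
    return x

# True signal: one cosine cycle; the last two samples are unknown
N = 8
true = [math.cos(2 * math.pi * n / N) for n in range(N)]
known = [True] * 6 + [False] * 2
x = papoulis_gerchberg(true, known, band={0, 1, 7})
# x[6], x[7] converge toward the true values 0.0 and 0.7071...
```

Because the subcutaneous-lipid signal is narrowly band-limited in space, the same alternating projections can fill in high k-space values without touching the metabolite signal region.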
Considerations for extrapolating in vivo bioequivalence data across species and routes.
Modric, S; Bermingham, E; Heit, M; Lainesse, C; Thompson, C
2012-04-01
The purpose of this article is to discuss the numerous species-specific and route-specific factors that can influence the peak and extent of exposure of an active pharmaceutical ingredient as they relate to the demonstration of bioequivalence between veterinary drug products (test and reference formulations). Evaluation of potential circumstances when species-to-species or route-to-route extrapolations of bioequivalence data could be considered is provided, together with suggestions for alternative statistical analysis. It is concluded that further research is much needed in this area to establish an appropriate scientific basis for across-species and across-route comparisons. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
On shrinkage and model extrapolation in the evaluation of clinical center performance
Varewyck, Machteld; Goetghebeur, Els; Eriksson, Marie; Vansteelandt, Stijn
2014-01-01
We consider statistical methods for benchmarking clinical centers based on a dichotomous outcome indicator. Borrowing ideas from the causal inference literature, we aim to reveal how the entire study population would have fared under the current care level of each center. To this end, we evaluate direct standardization based on fixed versus random center effects outcome models that incorporate patient-specific baseline covariates to adjust for differential case-mix. We explore fixed effects (FE) regression with Firth correction and normal mixed effects (ME) regression to maintain convergence in the presence of very small centers. Moreover, we study doubly robust FE regression to avoid outcome model extrapolation. Simulation studies show that shrinkage following standard ME modeling can result in substantial power loss relative to the considered alternatives, especially for small centers. Results are consistent with findings in the analysis of 30-day mortality risk following acute stroke across 90 centers in the Swedish Stroke Register. PMID:24812420
Chiral extrapolations of the ρ(770) meson in Nf=2+1 lattice QCD simulations
Hu, B.; Molina, R.; Döring, M.; ...
2017-08-24
Recent $N_f=2+1$ lattice data for meson-meson scattering in $p$-wave and isospin $I=1$ are analyzed using a unitarized model inspired by Chiral Perturbation Theory in the inverse-amplitude formulation for two and three flavors. We perform chiral extrapolations that postdict phase shifts extracted from experiment quite well. Additionally, the low-energy constants are compared to the ones from a recent analysis of $N_f=2$ lattice QCD simulations to check for the consistency of the hadronic model used here. Some inconsistencies are detected in the fits to $N_f=2+1$ data, in contrast to the previous analysis of $N_f=2$ data.
Dowding, Kevin J.; Hills, Richard Guy
2005-04-01
Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.
Harding, M. E.; Vazquez, J.; Ruscic, B.; Wilson, A. K.; Gauss, J.; Stanton, J. F.; Chemical Sciences and Engineering Division; Univ. Mainz; The Univ. of Texas; Univ. of North Texas
2008-01-01
Effects of increased basis-set size as well as a correlated treatment of the diagonal Born-Oppenheimer approximation are studied within the context of the high-accuracy extrapolated ab initio thermochemistry (HEAT) theoretical model chemistry. It is found that the addition of these ostensible improvements does little to increase the overall accuracy of HEAT for the determination of molecular atomization energies. Fortuitous cancellation of high-level effects is shown to give the overall HEAT strategy an accuracy that is, in fact, higher than most of its individual components. In addition, the issue of core-valence electron correlation separation is explored; it is found that approximate additive treatments of the two effects have limitations that are significant in the realm of <1 kJ/mol theoretical thermochemistry.
Uncertainties of mass extrapolations in Hartree-Fock-Bogoliubov mass models
NASA Astrophysics Data System (ADS)
Goriely, S.; Capote, R.
2014-05-01
Some 27 Hartree-Fock-Bogoliubov (HFB) mass models have been developed by the Brussels-Montreal collaboration. Each of these models has been obtained with different model prescriptions or corresponds to a significantly different minimum in the parameter space. The corresponding uncertainties in the mass extrapolation are discussed. In addition, for each of these models, uncertainties associated with local variations of the model parameters exist. Those are estimated for the HFB-24 mass model using a variant of the backward-forward Monte Carlo method to propagate the uncertainties on the masses of exotic nuclei far away from the experimentally known regions. The resulting uncertainties are found to be significantly lower than those arising from the 27 HFB mass models. In addition, the derived correlations between the calculated masses and between model parameters are analyzed.
King, A W
1991-12-31
A general procedure for quantifying regional carbon dynamics by spatial extrapolation of local ecosystem models is presented. The procedure uses Monte Carlo simulation to calculate the expected value of one or more local models, explicitly integrating the spatial heterogeneity of variables that influence ecosystem carbon flux and storage. These variables are described by empirically derived probability distributions that are input to the Monte Carlo process. The procedure provides large-scale regional estimates based explicitly on information and understanding acquired at smaller and more accessible scales. Results are presented from an earlier application to seasonal atmosphere-biosphere CO2 exchange for circumpolar "subarctic" latitudes (64°N-90°N). Results suggest that, under certain climatic conditions, these high northern ecosystems could collectively release 0.2 Gt of carbon per year to the atmosphere. I interpret these results with respect to questions about global biospheric sinks for atmospheric CO2.
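The expected-value extrapolation described in this abstract can be sketched in a few lines; the local flux model and the driver-variable distributions below are hypothetical stand-ins for illustration, not the model or data used in the study.

```python
import random

random.seed(42)

def local_flux(temperature_c, soil_carbon_kg):
    # Hypothetical local ecosystem model: carbon flux (kg C / m^2 / yr)
    # scales with the soil carbon pool and increases with temperature.
    return 0.01 * soil_carbon_kg * (1.0 + 0.05 * temperature_c)

def regional_expected_flux(n_samples=100_000):
    # Monte Carlo estimate of E[f(T, C)] over empirically derived
    # probability distributions of the driver variables
    # (illustrative parameter choices here).
    total = 0.0
    for _ in range(n_samples):
        t = random.gauss(-2.0, 4.0)    # mean annual temperature, deg C
        c = random.uniform(5.0, 40.0)  # soil carbon density, kg / m^2
        total += local_flux(t, c)
    return total / n_samples

print(round(regional_expected_flux(), 3))
```

Scaling the per-area expectation by the regional area then gives the large-scale estimate, which is the step that produces figures such as the 0.2 Gt/yr release quoted above.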
Determination of the electronic structure of thiophene oligomers and extrapolation to polythiophene
Jones, D.; Guerra, M.; Favaretto, L.; Modelli, A.; Fabrizio, M.; Distefano, G.
1990-07-26
Ionization energies, attachment energies, and electrochemical reduction potentials of thiophene oligomers (n ≤ 5) have been determined experimentally (ultraviolet photoelectron and electron transmission spectroscopies and cyclic voltammetry) and theoretically (ionization and attachment energies by MINDO/3). The geometrical parameters of the most stable conformation of 2,2′-bithienyl have been computed at the ab initio STO-3G level with complete relaxation. A short extrapolation of the energy data to the polymer provided accurate and reliable values for important properties of (gas-phase) polythiophene, namely, ionization energy (6.9 eV), valence bandwidth (3.2 eV), electron affinity (0.9-1.1 eV), HOMO-LUMO energy gap (5.9 eV), and λmax (2.7 eV).
Nandedkar, Sanjeev D; Sanders, Donald B; Hobson-Webb, Lisa D; Billakota, Santoshi; Barkhaus, Paul E; Stålberg, Erik V
2017-02-09
Reference values (RVs) are required to separate normal from abnormal values obtained in electrodiagnostic (EDx) testing. However, it is frequently impractical to perform studies on control subjects to obtain RVs. The Extrapolated Reference Values (E-Ref) procedure extracts RVs from data obtained during clinically indicated EDx testing. We compared the E-Ref results with established RVs in several sets of EDx data. The mathematical basis for E-Ref was explored to develop an algorithm for the E-Ref procedure. To test the validity of this algorithm, it was applied to simulated and real jitter measurements from control subjects and patients with myasthenia gravis, and to nerve conduction studies from patients with various conditions referred for EDx studies. There was good concordance between E-Ref and RVs for all evaluated data sets. E-Ref is a promising method to develop RVs. Muscle Nerve, 2017. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry
2017-06-01
We present a methodology to invert seismic data for a localized area by combining a source-side wavefield injection and a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This can be much more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Besides, changes of the structure during time-lapse surveys are likely to occur in a small area rather than the whole region of seismic experiments, such as an oil and gas reservoir or CO2 injection wells. We thus propose an approach that allows us to image effectively and quantitatively the localized structure changes far from both source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we look for the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using a correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate errors in the obtained models, in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate even in the presence of errors in both initial models and observed data.
Gorokhovich, Yuri; Reid, Matthew; Mignone, Erica; Voros, Andrew
2003-10-01
Coal mine reclamation projects are very expensive and require coordination of local and federal agencies to identify resources for the most economic way of reclaiming mined land. Locating resources for mine reclamation is a spatial problem. This article presents a methodology that combines spatial data on resources for coal mine reclamation and uses GIS analysis to develop a priority list of potential mine reclamation sites within the contiguous United States using the method of extrapolation. The extrapolation method in this study was based on the Bark Camp reclamation project. The mine reclamation project at Bark Camp, Pennsylvania, USA, provided an example of the beneficial use of fly ash and dredged material to reclaim 402,600 sq mi of a mine abandoned in the 1980s. Railroads provided transportation of dredged material and fly ash to the site. Therefore, four spatial elements contributed to the reclamation project at Bark Camp: dredged material, abandoned mines, fly ash sources, and railroads. Using the spatial distribution of these data in the contiguous United States, it was possible to use GIS analysis to prioritize areas where reclamation projects similar to Bark Camp are feasible. GIS analysis identified unique occurrences of all four spatial elements used in the Bark Camp case for each 1 km of the United States territory within 20-, 40-, 60-, 80-, and 100-km radii from abandoned mines. The results showed the number of abandoned mines for each state and identified their locations. The federal or state governments can use these results in mine reclamation planning.
The role of de-excitation electrons in measurements with graphite extrapolation chambers.
Kramer, H M; Grosswendt, B
2002-03-07
A method is described for determining the absorbed dose to graphite for medium-energy x-rays (50-300 kV). The experimental arrangement consists of an extrapolation chamber which is part of a cylindrical graphite phantom of 30 cm diameter and 13 cm depth. The method presented is an extension of the so-called two-component model. In this model the absorbed dose to graphite is derived from the absorbed dose to the air of the cavity formed by the measuring volume. By considering separately the contributions to the absorbed dose to air in the cavity from electrons produced in Compton and photoelectric interactions, this dose can be converted to the absorbed dose to graphite in the limit of zero plate separation. The extension of the two-component model proposed in this paper consists of taking into account the energy transferred to de-excitation electrons, i.e. Auger electrons, which are produced as a consequence of a photoelectric interaction or a Compton scattering process. For the system considered, these electrons have energies in the range between about 200 eV and 3 keV and hence a range in air at atmospheric pressure of 0.2 mm or less. As the amount of energy transferred to the de-excitation electrons is different per unit mass in air and in graphite, there is a region, about 0.2 mm thick, of disturbed electronic equilibrium at the graphite-to-air interface. By means of the extension proposed, the x-ray tube voltage range over which a graphite extrapolation chamber can be used is lowered from 100 kV in the case of the two-component model down to at least 50 kV.
Mannocci, Laura; Roberts, Jason J; Miller, David L; Halpin, Patrick N
2016-10-24
As human activities expand beyond national jurisdictions to the high seas, there is increasing need to consider anthropogenic impacts to species that inhabit these waters. The current scarcity of scientific observations of cetaceans in the high seas impedes the assessment of population-level impacts of these activities. This study is directed towards an important management need in the high seas: the development of plausible density estimates to facilitate a quantitative assessment of anthropogenic impacts on cetacean populations in these waters. Our study region extends from a well-surveyed region within the United States Exclusive Economic Zone into a large region of the western North Atlantic sparsely surveyed for cetaceans. We modeled densities of 15 cetacean taxa using available line transect survey data and habitat covariates and extrapolated predictions to sparsely surveyed regions. We formulated models carefully to reduce the extent of extrapolation beyond covariate ranges, and constrained them to model simple and generalizable relationships. To evaluate confidence in the predictions, we performed several qualitative assessments, such as mapping where predictions were made outside sampled covariate ranges, and comparing them with maps of sightings from a variety of sources that could not be integrated into our models. Our study revealed a range of confidence levels for the model results depending on the taxon and geographic area, and highlights the need for additional surveying in environmentally distinct areas. Combined with their explicit confidence levels and necessary caution, our density estimates can inform a variety of management needs in the high seas, such as the quantification of potential cetacean interactions with military training exercises, shipping, fisheries, and deep-sea mining, as well as delineation of areas of special biological significance in international waters. Our approach is generally applicable to other marine taxa and geographic regions.
EVIDENCE FOR SOLAR TETHER-CUTTING MAGNETIC RECONNECTION FROM CORONAL FIELD EXTRAPOLATIONS
Liu, Chang; Deng, Na; Lee, Jeongwoo; Wang, Haimin; Wiegelmann, Thomas; Moore, Ronald L.
2013-12-01
Magnetic reconnection is one of the primary mechanisms for triggering solar eruptive events, but direct observation of this rapid process has been a challenge. In this Letter, using a nonlinear force-free field (NLFFF) extrapolation technique, we present a visualization of field line connectivity changes resulting from tether-cutting reconnection over about 30 minutes during the 2011 February 13 M6.6 flare in NOAA AR 11158. Evidence for the tether-cutting reconnection was first collected through multiwavelength observations and then by analysis of the field lines traced from positions of four conspicuous flare 1700 Å footpoints observed at the event onset. Right before the flare, the four footpoints are located very close to the regions of local maxima of the magnetic twist index. In particular, the field lines from the inner two footpoints form two strongly twisted flux bundles (up to ∼1.2 turns), which shear past each other and reach out close to the outer two footpoints, respectively. Immediately after the flare, the twist index of regions around the footpoints diminishes greatly and the above field lines become low-lying and less twisted (≲0.6 turns), overarched by loops linking the two flare ribbons formed later. About 10% of the flux (∼3 × 10^19 Mx) from the inner footpoints undergoes a footpoint exchange. This portion of flux originates from the edge regions of the inner footpoints that are brightened first. These rapid changes of magnetic field connectivity inferred from the NLFFF extrapolation are consistent with the tether-cutting magnetic reconnection model.
On the effectiveness of CCSD(T) complete basis set extrapolations for atomization energies.
Feller, David; Peterson, Kirk A; Hill, J Grant
2011-07-28
The leading cause of error in standard coupled cluster theory calculations of thermodynamic properties such as atomization energies and heats of formation originates with the truncation of the one-particle basis set expansion. Unfortunately, the use of finite basis sets is currently a computational necessity. Even with basis sets of quadruple zeta quality, errors can easily exceed 8 kcal/mol in small molecules, rendering the results of little practical use. Attempts to address this serious problem have led to a wide variety of proposals for simple complete basis set extrapolation formulas that exploit the regularity in the correlation consistent sequence of basis sets. This study explores the effectiveness of six formulas for reproducing the complete basis set limit. The W4 approach was also examined, although in lesser detail. Reference atomization energies were obtained from standard coupled-cluster singles, doubles, and perturbative triples (CCSD(T)) calculations involving basis sets of 6ζ or better quality for a collection of 141 molecules. In addition, a subset of 51 atomization energies was treated with explicitly correlated CCSD(T)-F12b calculations and very large basis sets. Of the formulas considered, all proved reliable at reducing the one-particle expansion error. Even the least effective formulas cut the error in the raw values by more than half, a feat requiring a much larger basis set without the aid of extrapolation. The most effective formulas cut the mean absolute deviation by a further factor of two. Careful examination of the complete body of statistics failed to reveal a single choice that outperformed the others for all basis set combinations and all classes of molecules. © 2011 American Institute of Physics.
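The family of formulas this abstract surveys includes the widely used two-point inverse-cube form E(N) = E_CBS + A/N³, where N is the cardinal number of the correlation-consistent basis. A minimal sketch of that particular formula follows (the abstract does not say which six formulas were tested, and the energies below are made-up illustrative values, not data from the paper):

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point complete-basis-set extrapolation assuming the
    E(N) = E_CBS + A / N**3 convergence form for the correlation
    energy, with N the basis-set cardinal number."""
    x3, y3 = x**3, y**3
    return (x3 * e_x - y3 * e_y) / (x3 - y3)

# Illustrative (made-up) correlation energies in hartree for
# cc-pVTZ (N=3) and cc-pVQZ (N=4):
e_tz, e_qz = -0.2450, -0.2552
e_cbs = cbs_two_point(e_qz, 4, e_tz, 3)
print(f"{e_cbs:.4f}")  # → -0.2626
```

The extrapolated value lies below the quadruple-zeta result, consistent with the monotonic convergence of correlation-consistent basis sets toward the CBS limit.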
Jiang, Chaowei; Feng, Xueshang
2012-04-20
The magnetic field in the solar corona is usually extrapolated from a photospheric vector magnetogram using a nonlinear force-free field (NLFFF) model. NLFFF extrapolation needs considerable effort to be devoted to its numerical realization. In this paper, we present a new implementation of the magnetohydrodynamics (MHD) relaxation method for NLFFF extrapolation. The magnetofrictional approach, which is introduced for speeding the relaxation of the MHD system, is realized for the first time by the spacetime conservation-element and solution-element scheme. A magnetic field splitting method is used to further improve the computational accuracy. The bottom boundary condition is prescribed by incrementally changing the transverse field to match the magnetogram, and all other artificial boundaries of the computational box are simply fixed. We examine the code using two types of NLFFF benchmark tests, the Low and Lou semi-analytic force-free solutions and a more realistic solar-like case constructed by van Ballegooijen et al. The results show that our implementation is successful and versatile for extrapolations of either the relatively simple cases or the rather complex cases that need significant rebuilding of the magnetic topology, e.g., a flux rope. We also compute a suite of metrics to quantitatively analyze the results and demonstrate that the performance of our code in extrapolation accuracy basically reaches the same level of the present best-performing code, i.e., that developed by Wiegelmann.
Extrapolation of G0W0 energy levels from small basis sets for elements from H to Cl
NASA Astrophysics Data System (ADS)
Zhu, Tong; Blum, Volker
G0W0 calculations based on orbitals from a density-functional theory reference are widely used to predict carrier levels in molecular and inorganic materials. Their computational feasibility, however, is limited by the need to evaluate slow-converging sums over unoccupied states, requiring large basis sets paired with unfavorable scaling exponents to evaluate the self-energy. In the quantum chemistry literature, complete basis set (CBS) extrapolation strategies have been used successfully to overcome this problem for total energies. We here apply the principle of basis set extrapolation to G0W0 energy levels. For a set of 49 small molecules and clusters containing the elements H, Li through F, and Na through Cl, we test established extrapolation strategies based on Dunning's correlation-consistent (cc) basis sets (aug)-cc-pVNZ (N=2-5), as well as numeric atom-centered NAO-VCC-nZ (n=2-5) basis sets in the FHI-aims all-electron code. For the occupied and lowest unoccupied levels, different extrapolation strategies based on large 4Z and 5Z basis sets agree within ±50 meV. We show that extrapolation based on much smaller 2Z and 3Z basis sets is feasible, with largest errors of ±100 meV, based on a refinement of the NAO-VCC-nZ basis sets.
The proton-deuteron scattering length in pionless EFT
NASA Astrophysics Data System (ADS)
König, Sebastian; Hammer, Hans-Werner
2015-10-01
We present a fully perturbative calculation of the quartet-channel proton-deuteron scattering length up to next-to-next-to-leading order (NNLO) in pionless effective field theory. In particular, we use a framework that consistently extracts the Coulomb-modified effective range function for a screened Coulomb potential in momentum space and allows for a clear linear extrapolation back to the physical limit without screening. We find a natural convergence pattern as we go to higher orders in the EFT expansion. Our NNLO result of (10.9 ± 0.4) fm agrees with older experimental determinations but deviates from more recent results around 14 fm. As a resolution of this discrepancy, we discuss the scheme dependence of Coulomb subtractions in a three-body system. Supported in part by the NSF, DOE (NUCLEI SciDAC), as well as by the DFG and BMBF.
Reinisch, Walter; Louis, Edouard; Danese, Silvio
2015-01-01
Extrapolation of clinical data from other indications is an important concept in the development of biosimilars. This process depends on strict comparability exercises to establish similarity to the reference medicinal product. However, the extrapolation paradigm has prompted a fierce scientific debate. CT-P13 (Remsima(®), Inflectra(®)), an infliximab biosimilar, is a TNF antagonist used to treat immune-mediated inflammatory diseases. On the basis of totality of similarity data, the EMA approved CT-P13 for all indications held by its reference medicinal product (Remicade(®)) including inflammatory bowel disease. This article reviews the mechanisms of action of TNF antagonists in immune-mediated inflammatory diseases and illustrates the comparable profiles of CT-P13 and reference medicinal product on which the extrapolation of indications including inflammatory bowel disease is based.
NASA Astrophysics Data System (ADS)
Antonio, Patrícia L.; Xavier, Marcos; Caldas, Linda V. E.
2014-11-01
The Calibration Laboratory (LCI) at the Instituto de Pesquisas Energéticas e Nucleares (IPEN) is going to establish a Böhm extrapolation chamber as a primary standard system for the dosimetry and calibration of beta radiation sources and detectors. This chamber was already tested in beta radiation beams with an aluminized Mylar entrance window, and now it was characterized with an original Hostaphan entrance window. A comparison between the results of the extrapolation chamber with the two entrance windows was performed. The results showed that this extrapolation chamber presents the same effectiveness in beta radiation fields as a primary standard system with both entrance windows, showing that either of them may be utilized.
Rothe, R.E.
1997-12-01
Sixty-nine critical configurations of up to 186 kg of uranium are reported from very early experiments (1960s) performed at the Rocky Flats Critical Mass Laboratory near Denver, Colorado. Enriched (93%) uranium metal spherical and hemispherical configurations were studied. All were thick-walled shells except for two solid hemispheres. Experiments were essentially unreflected or included central and/or external regions of mild steel. No liquids were involved. Critical parameters are derived from extrapolations beyond subcritical data. Extrapolations, rather than more precise interpolations between slightly supercritical and slightly subcritical configurations, were necessary because the experiments involved manually assembled configurations. Many extrapolations were quite long, but the general lack of curvature in the subcritical region lends credibility to their validity. In addition to delayed critical parameters, a procedure is offered that might permit the determination of prompt critical parameters as well for the same cases. This conjectured procedure is not based on any strong physical arguments.
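Extrapolation from subcritical data of this kind is conventionally done by plotting the inverse neutron multiplication 1/M against an assembly parameter (e.g. fuel mass) and extrapolating the fitted line to the 1/M = 0 crossing; the near-linearity noted in the abstract is what makes long extrapolations credible. A minimal sketch with made-up data (the report's actual data and fitting procedure are not reproduced here):

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit of y = a + b*x.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical subcritical data: inverse multiplication 1/M falls
# roughly linearly with uranium mass, reaching 0 at delayed critical.
mass_kg = [100.0, 120.0, 140.0, 160.0]
inv_m   = [0.45, 0.34, 0.24, 0.13]

a, b = fit_line(mass_kg, inv_m)
critical_mass = -a / b  # mass at which the fitted line crosses 1/M = 0
print(round(critical_mass, 1))  # → 184.7
```

Curvature in the subcritical data would make such a linear zero-crossing estimate biased, which is why the abstract emphasizes the observed lack of curvature.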
The hippocampus extrapolates beyond the view in scenes: An fMRI study of boundary extension
Chadwick, Martin J.; Mullally, Sinéad L.; Maguire, Eleanor A.
2013-01-01
Boundary extension (BE) is a pervasive phenomenon whereby people remember seeing more of a scene than was present in the physical input, because they extrapolate beyond the borders of the original stimulus. This automatic embedding of a scene into a wider context supports our experience of a continuous and coherent world, and is therefore highly adaptive. BE, whilst occurring rapidly, is nevertheless thought to comprise two stages. The first involves the active extrapolation of the scene beyond its physical boundaries, and is constructive in nature. The second phase occurs at retrieval, where the initial extrapolation beyond the original scene borders is revealed by a subsequent memory error. The brain regions associated with the initial, and crucial, extrapolation of a scene beyond the view have never been investigated. Here, using functional MRI (fMRI) and a classic BE paradigm, we found that this extrapolation of scenes occurred rapidly around the time a scene was first viewed, and was associated with engagement of the hippocampus (HC) and parahippocampal cortex (PHC). Using connectivity analyses we determined that the HC in particular seemed to drive the BE effect, exerting top–down influence on PHC and indeed as far back down the processing stream as early visual cortex (VC). These cortical regions subsequently displayed activity profiles that tracked the trial-by-trial subjective perception of the scenes, rather than physical reality, thereby reflecting the behavioural expression of the BE error. Together our results show that the HC is involved in the active extrapolation of scenes beyond their physical borders. This information is then automatically and rapidly channelled through the scene processing hierarchy as far back as early VC. This suggests that the anticipation and construction of scenes is a pervasive and important aspect of our online perception, with the HC playing a central role. PMID:23276398
Line Lengths and Starch Scores.
ERIC Educational Resources Information Center
Moriarty, Sandra E.
1986-01-01
Investigates readability of different line lengths in advertising body copy, hypothesizing a normal curve with lower scores for shorter and longer lines, and scores above the mean for lines in the middle of the distribution. Finds support for lower scores for short lines and some evidence of two optimum line lengths rather than one. (SKC)
Comparison of fiber length analyzers
Don Guay; Nancy Ross Sutherland; Walter Rantanen; Nicole Malandri; Aimee Stephens; Kathleen Mattingly; Matt Schneider
2005-01-01
In recent years, several new fiber length analyzers have been developed and brought to market. The new instruments provide faster measurements and the capability of both laboratory and on-line analysis. Do the various fiber analyzers provide the same length, coarseness, width, and fines measurements for a given fiber sample? This paper provides a comparison of...
Gestation length in farmed reindeer.
Shipka, M P; Rowell, J E
2010-01-01
Reindeer (Rangifer tarandus tarandus) are the only cervids indigenous to the arctic environment. In Alaska, reindeer are a recognized agricultural species and an economic mainstay for many native populations. Traditionally raised in extensive free-ranging systems, a recent trend toward intensive farming requires a more in-depth knowledge of reproductive management. Reported gestation length in reindeer varies, ranging from 198 to 229 d in studies performed at the University of Alaska Fairbanks. A switchback study that manipulated only breeding date demonstrated a mean increase in gestation length of 8.5 d among females bred early in the season. The negative correlation between conception date and gestation length is consistent with reindeer research at other locations and reports of variable gestation length in a growing number of domestic and non-domestic species. This paper reviews the phenomenon in reindeer and discusses some of the factors known to affect gestation length as well as possible areas for future research.
New extrapolation method for low-lying states of nuclei in the sd and the pf shells
Shen, J. J.; Zhao, Y. M.; Arima, A.; Yoshinaga, N.
2011-04-15
We study extrapolation approaches to evaluate energies of low-lying states for nuclei in the sd and pf shells, by sorting the diagonal matrix elements of the nuclear shell-model Hamiltonian. We introduce an extrapolation method with perturbation and apply our new method to predict both low-lying state energies and E2 transition rates between low-lying states. Our predictions reach an accuracy of root-mean-squared deviations of ≈40-60 keV for low-lying states of these nuclei.
Tencheva, J; Velinov, G; Budevsky, O
1979-01-01
A new approach to the graphic extrapolation procedure is proposed for the determination of pK-value of pharmaceuticals poorly soluble in water. It is based on a direct potentiometric method for the determination of acid-base constants in non-aqueous and mixed solvents in which no preliminary calibration of the galvanic cell is needed (glass and calomel electrodes). In this way the applicability of the graphic extrapolation procedure is enlarged for a great number of organic solvents miscible with water for which no buffer pH-standards are available.
Singh, N.P.; Zimmerman, C.J.; Taylor, G.N.; Wrenn, M.E.
1988-03-01
The concentrations and the organ distribution patterns of 228Th, 230Th and 232Th in two 9-y-old dogs of our beagle colony were determined. The dogs were exposed only to background environmental levels of Th isotopes through ingestion (food and water) and inhalation as are humans. The organ distribution patterns of the isotopes in the beagles were compared to the organ distribution patterns in humans to determine if it is appropriate to extrapolate the beagle organ burden data to humans. Among soft tissues, only the lungs, lymph nodes, kidney and liver, and skeleton contained measurable amounts of Th isotopes. The organ distribution pattern of Th isotopes in humans and dog are similar, the majority of Th being in the skeleton of both species. The average skeletal concentrations of 228Th in dogs were 30 to 40 times higher than the average skeletal concentrations of the parent 232Th, whereas the concentration of 228Th in human skeleton was only four to five times higher than 232Th. This suggests that dogs have a higher intake of 228Ra through food than humans. There is a similar trend in the accumulations of 232Th, 230Th and 228Th in the lungs of dog and humans. The percentages of 232Th, 230Th and 228Th in human lungs are 26, 9.7 and 4.8, respectively, compared to 4.2, 2.6 and 0.48, respectively, in dog lungs. The larger percentages of Th isotopes in human lungs may be due simply to the longer life span of humans. If the burdens of Th isotopes in human lungs are normalized to an exposure time of 9.2 y (mean age of dogs at the time of sacrifice), the percent burden of 232Th, 230Th and 228Th in human lungs are estimated to be 3.6, 1.3 and 0.66, respectively. These results suggest that the beagle may be an appropriate experimental animal for extrapolating the organ distribution pattern of Th in humans.
SU-E-T-99: An Analysis of the Accuracy of TPS Extrapolation of Commissioning Data
Alkhatib, H; Oves, S; Gebreamlak, W; Mihailidis, D
2015-06-15
Purpose: To investigate discrepancies between measured percent depth dose curves of a linear accelerator at depths beyond the commissioning data and those generated by the treatment planning system (TPS) via extrapolation. Methods: Relative depth doses were measured on an Elekta Synergy™ linac for 6-MV and 10-MV photon beams. The SSD for all curves was 100 cm, and field sizes ranged from 4×4 to 35×35 cm². Because most scanning tanks cannot reach depths much beyond about 30 cm, percent depth dose measurements extending to 45 cm depth were performed in Solid Water™ using a 0.125-cc ionization chamber (PTW model TN31012). The buildup regions of the curves were acquired with a parallel-plate chamber (PTW model TN34001). Extrapolated curves were generated by the TPS (Philips Pinnacle³ v. 9.6) by applying beams to CT images of 50 cm of Solid Water™ with the density override set to 1.0 g/cc. Results: The percent difference between the two sets of curves (measured and TPS) was investigated. There is significant discrepancy in the buildup region to a depth of 7 mm. Beyond this depth, the two sets show good agreement. When analyzing the tail end of the curves, we saw percent differences between 1.2% and 3.2%. The largest disagreement for the 6-MV curves was for the 10×10 cm² field (3%), and for the 10-MV curves it was the 35×35 cm² field (3.2%). Conclusion: A qualitative analysis of the measured data versus the PDD curves generated by the TPS shows generally good agreement beyond 1 cm. However, a measurable percent difference was observed when comparing curves at depths beyond those provided by the commissioning data and at depths in the buildup region. Possible explanations include inaccuracies in the modeling of the Solid Water™ or drift in beam energy since commissioning. Additionally, closer attention must be paid to measurements in the buildup region.
Measurement and extrapolation of total cross sections of 12C+16O fusion at stellar energies
NASA Astrophysics Data System (ADS)
Fang, Xiao
Carbon burning and oxygen burning in massive stars (M ≥ 8 M⊙) are important burning phases in late stellar evolution following helium burning. They determine the nucleosynthesis and the initial matter distribution of the subsequent burning phases. Hydrostatic burning of 12C and 16O at lower temperatures remains an important feature. The critical reactions are the 12C+12C, 12C+16O and 16O+16O fusion processes. Extensive effort, both experimental and theoretical, has been invested in determining the reaction rates for all reaction channels. Despite this effort, large uncertainties remain in the predicted results, which rely primarily on extrapolation of the data into the Gamow range. The predictions depend sensitively on the adopted model parameters, hindrance effects, and the possibility of resonances at the relevant energies. The astrophysically important energy range of the 12C+12C fusion reaction spans from 1.0 MeV to 3.0 MeV. However, despite numerous studies, its cross section has not been determined with sufficient precision because of the extremely low reaction cross sections and the large experimental background. The 12C+16O reaction is difficult to measure for the same reason. To allow measurements of the 12C+12C and 12C+16O fusion reactions at astrophysical energies, a large-area silicon strip detector array was developed. The total cross section of 12C+16O fusion was measured at low energies using the St. Ana 5-MV accelerator at the University of Notre Dame. A high-intensity oxygen beam impinged on a thick ultra-pure graphite target. Protons and gamma rays were measured simultaneously in the center-of-mass energy range of 3.64 to 4.93 MeV using silicon and HPGe detectors. Statistical-model calculations were employed to interpret the experimental results. This provides a more reliable extrapolation of the 12C+16O fusion cross section, substantially reducing the uncertainty for stellar model simulations.
Hazards in determination and extrapolation of depositional rates of recent sediments
Isphording, W.C. (Dept. of Geology-Geography); Jackson, R.B.
1992-01-01
Depositional rates for the past 250 years in estuarine sediments at sites in the Gulf of Mexico have been calculated by measuring changes that have taken place on bathymetric charts. Depositional rates during the past 50 to 100 years can similarly be estimated by this method and may often be confirmed by relatively abrupt changes at depth in the content of certain heavy metals in core samples. Analysis of bathymetric charts of Mobile Bay, Alabama, dating back to 1858, disclosed an essentially constant sedimentation rate of 3.9 mm/year. Apalachicola Bay, Florida, similarly, was found to have a rate of 5.4 mm/year. Though, in theory, these rates should provide reliable estimates of the influx of sediment into the estuaries, considerable caution must be used in attempting to extrapolate them to any depth in core samples. The passage of hurricanes through the Gulf of Mexico is a common event and can rapidly, and markedly, alter the bathymetry of an estuary. The passage of Hurricane Elena near Apalachicola Bay in 1985, for example, removed over 84 million tons of sediment from the bay and caused an average deepening of nearly 50 cm. The impact of Hurricane Frederic on Mobile Bay in 1979 was more dramatic. During the approximately 7-hour period when winds from this storm impacted the estuary, nearly 290 million tons of sediment were driven out of the bay and an average deepening of 46 cm was observed. With such weather events common on the Gulf Coast, it is not surprising that, when radioactive age-dating methods yielded dates of approximately 7,500 years for organic remains in cores from Apalachicola Bay, the depths at which the dated materials were obtained corresponded to depositional rates of only 0.4 mm/year, one-tenth of the rate obtained from the historic bathymetric data. Because storm scour effects are a common occurrence in the Gulf, no attempt should be made to extrapolate bathymetric-derived rates beyond the age of the charts.
Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization
NASA Astrophysics Data System (ADS)
More, Sushant N.
New insights into inter-nucleon interactions, developments in many-body technology, and the surge in computational capabilities have led to phenomenal progress in low-energy nuclear physics in the past few years. Nonetheless, many calculations still lack the robust uncertainty quantification that is essential for making reliable predictions. In this work we investigate two distinct sources of uncertainty and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. It has been demonstrated recently that errors introduced by basis truncation can be taken into account by focusing on the infrared and ultraviolet cutoffs induced by a truncated basis. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition in coordinate space. We accurately determine the position of the hard wall as a function of oscillator-space parameters, derive infrared extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum and to other localized bases. We exploit the duality of the harmonic oscillator to account for the errors introduced by a finite ultraviolet cutoff. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the renormalization scale and scheme, and has not been well understood. But it is potentially critical for interpreting experiments and for extracting process-independent nuclear properties. We use a class of unitary transformations called similarity renormalization group (SRG) transformations to study how this factorization depends on the renormalization scale and scheme.
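The infrared correction described above is commonly modeled as an exponential in the effective hard-wall radius L. Below is a minimal sketch of such an extrapolation fit; the data are fabricated and the parameter values (E_inf, a, k) are illustrative assumptions, not results from this work.

```python
# Fit the IR correction E(L) = E_inf + a * exp(-2 * k * L) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def ir_energy(L, E_inf, a, k):
    # energy in a basis whose IR cutoff acts like a hard wall at radius L
    return E_inf + a * np.exp(-2.0 * k * L)

# Fabricated "converging" energies at several effective radii L (fm)
L = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
E = ir_energy(L, -28.3, 40.0, 0.25)

params, _ = curve_fit(ir_energy, L, E, p0=(-27.0, 20.0, 0.2))
E_inf_fit = params[0]   # infinite-basis estimate
```

With exact synthetic data the fit recovers the assumed infinite-basis energy; with real calculations, the spread of such fits feeds the uncertainty quantification discussed in the abstract.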
Does femur length affect the stride length? Forensic implications.
Krishan, Kewal
2010-01-01
All the long bones in the human body have a linear and positive relationship with stature. This principle has been used by forensic scientists and anthropologists to estimate stature in many kinds of medico-legal and forensic examinations. The present proposition states that femur length may have a positive relationship with stride length. This relationship can help the forensic scientist estimate stride length from the length of the femur, and vice versa, which can further be extended to formulating opinions on a suspect's gait, biomechanics, movement, and posture in forensic casework. It can also be used to give opinions when handling certain evidence, such as closed-circuit television footage and video surveillance recordings, in crime scene investigation.
Assessment of the dislocation bias in fcc metals and extrapolation to austenitic steels
NASA Astrophysics Data System (ADS)
Chang, Zhongwen; Sandberg, Nils; Terentyev, Dmitry; Samuelsson, Karl; Bonny, Giovanni; Olsson, Pär
2015-10-01
A systematic study of the dislocation bias has been performed using a method that combines atomistic and elastic dislocation-point defect interaction models with a numerical solution of the diffusion equation with a drift term. Copper, nickel and aluminium model lattices are used in this study, covering a wide range of shear moduli and stacking fault energies. It is found that the dominant parameter for the dislocation bias in fcc metals is the width of the stacking fault ribbon. The variation in elastic constants does not strongly impact the dislocation bias value. As a result of this analysis and its extrapolation, the dislocation bias of the widely applied austenitic stainless steels of 316 type is predicted to be about 0.1 at a temperature close to the swelling peak (815 K) and a typical dislocation density of 10¹⁴ m⁻². This is in line with the bias calculated using the elastic interaction model, which implies that the prediction method can be used readily in other fcc systems even without EAM potentials. By comparing the bias values obtained using atomistic and elastic interaction energies, a discrepancy of about 20% is found; therefore, a more realistic bias value for the 316-type alloy is 0.08 under these conditions.
NASA Astrophysics Data System (ADS)
Caldwell, J.; Shakibi, B.; Moles, M.; Sinclair, A. N.
2013-01-01
Phased array inspection was conducted on a V-butt welded steel sample with multiple shallow flaws of varying depths. The inspection measurements were processed using Wiener filtering and Autoregressive Spectral Extrapolation (AS) to enhance the signals. Phased array inspections were conducted using multiple phased array probes of varying nominal central frequencies (2.25, 4, 5 and 10 MHz). This paper describes the measured results, which show high accuracy, typically in the range of 0.1-0.2 mm. The results indicated that: 1. There was no statistical difference between the flaw depths calculated from phased array inspections at different flaw tip angles. 2. There was no statistical difference in flaw depths calculated using phased array data collected from either side of the weld. 3. Flaws with depths less than the estimated probe signal shear wavelength could not be sized. 4. There was no statistical difference between the flaw depths calculated using phased array probes with different sampling frequencies and the destructive measurements of the flaws.
Cui, Jie; Krems, Roman V.; Li, Zhiying
2015-10-21
We consider a problem of extrapolating the collision properties of a large polyatomic molecule A–H to make predictions of the dynamical properties for another molecule related to A–H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A–X. We assume that the effect of the −H → −X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He–C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
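A minimal sketch of the Gaussian Process idea, using scikit-learn and a made-up one-dimensional stand-in for the potential-parameter dependence; the function, parameter range, and kernel choice are illustrative assumptions, not the authors' setup.

```python
# Learn a stand-in observable over one "potential parameter" with a GP,
# then predict with an uncertainty band over the physical range.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

x_train = np.array([[0.8], [0.9], [1.0], [1.1], [1.2]])   # parameter samples
y_train = 100.0 / (1.0 + x_train.ravel() ** 2)            # stand-in cross section

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(x_train, y_train)

x_test = np.linspace(0.8, 1.2, 41).reshape(-1, 1)
mean, std = gp.predict(x_test, return_std=True)
# [mean - 2*std, mean + 2*std] gives a pointwise prediction interval
```

Sweeping the trained model over the physically reasonable parameter range, as in the abstract, turns the pointwise band into a prediction interval for the observable.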
Cross-Species Extrapolation of Models for Predicting Lead Transfer from Soil to Wheat Grain
Liu, Ke; Lv, Jialong; Dai, Yunchao; Zhang, Hong; Cao, Yingfei
2016-01-01
The transfer of Pb from the soil to crops is a serious food hygiene security problem in China because of industrial, agricultural, and historical contamination. In this study, the characteristics of exogenous Pb transfer from 17 Chinese soils to a popular wheat variety (Xiaoyan 22) were investigated. In addition, bioaccumulation prediction models of Pb in grain were obtained based on soil properties. The results of the analysis showed that pH and OC were the most important factors contributing to Pb uptake by wheat grain. Using a cross-species extrapolation approach, the Pb uptake prediction models for cultivar Xiaoyan 22 in different soil Pb levels were satisfactorily applied to six additional non-modeled wheat varieties to develop a prediction model for each variety. Normalization of the bioaccumulation factor (BAF) to specific soil physico-chemistry is essential, because doing so could significantly reduce the intra-species variation of different wheat cultivars in predicted Pb transfer and eliminate the influence of soil properties on ecotoxicity parameters for organisms of interest. Finally, the prediction models were successfully verified against published data (including other wheat varieties and crops) and used to evaluate the ecological risk of Pb for wheat in contaminated agricultural soils. PMID:27518712
Octet baryon masses and sigma terms from an SU(3) chiral extrapolation
Young, Ross; Thomas, Anthony
2009-01-01
We analyze the consequences of the remarkable new results for octet baryon masses calculated in 2+1-flavour lattice QCD using a low-order expansion about the SU(3) chiral limit. We demonstrate that, even though the simulation results are clearly beyond the power-counting regime, the description of the lattice results by a low-order expansion can be significantly improved by allowing the regularisation scale of the effective field theory to be determined by the lattice data itself. The model dependence of our analysis is demonstrated to be small compared with the present statistical precision. In addition to the extrapolation of the absolute values of the baryon masses, this analysis provides a method to solve the difficult problem of fine-tuning the strange-quark mass. We also report a determination of the sigma terms for all of the octet baryons, including an accurate value of the pion-nucleon sigma term and the first determination of the strangeness sigma term based on 2+1-flavour lattice QCD.
Rogers, Richard
2004-02-01
The overriding objective is a critical examination of Munchausen syndrome by proxy (MSBP) and its closely related alternative, factitious disorder by proxy (FDBP). Beyond issues of diagnostic validity, assessment methods and potential detection strategies are explored. A painstaking analysis was conducted of the MSBP and FDBP literature as it relates to diagnostic and assessment issues. Given the limitations of this literature, extrapolations were provided from the extensive theory and research on malingering as a related response style. Diagnostic formulations for both MSBP and FDBP de-emphasize the clinical characteristics of the perpetrator. In the case of FDBP, inferential judgments about motivation (e.g., adoption of a sick role) are challenging on conceptual and clinical grounds. When explanatory models from malingering are applied, most research has focused on pathogenic models, often allied with psychodynamic thought. Finally, clinical methods for the assessment of MSBP and FDBP are not well developed. Refinements in the conceptualization of MSBP and FDBP can be provided through prototypical analysis. Drawing from malingering research, explanatory models should be expanded to include adaptational and criminological models. Finally, detection strategies for MSBP and FDBP must be formally operationalized and rigorously validated.
Hartree-Fock mass formulas and extrapolation to new mass data
NASA Astrophysics Data System (ADS)
Goriely, S.; Samyn, M.; Heenen, P.-H.; Pearson, J. M.; Tondeur, F.
2002-08-01
The two previously published Hartree-Fock (HF) mass formulas, HFBCS-1 and HFB-1 (HF-Bogoliubov), are shown to be in poor agreement with new Audi-Wapstra mass data. The problem lies first with the prescription adopted for the cutoff of the single-particle spectrum used with the δ-function pairing force, and second with the Wigner term. We find an optimal mass fit if the spectrum is cut off both above EF + 15 MeV and below EF − 15 MeV, EF being the Fermi energy of the nucleus in question. In addition to the Wigner term of the form VW exp(−λ|N−Z|/A) already included in the two earlier HF mass formulas, we find that a second Wigner term linear in |N−Z| leads to a significant improvement in lighter nuclei. These two features are incorporated into our new Hartree-Fock-Bogoliubov model, which leads to much improved extrapolations. The 18 parameters of the model are fitted to the 2135 measured masses for N, Z ≥ 8 with an rms error of 0.674 MeV. With this parameter set a complete mass table, labeled HFB-2, has been constructed, going from one drip line to the other, up to Z = 120. The new pairing-cutoff prescription favored by the new mass data leads to weaker neutron-shell gaps in neutron-rich nuclei.
Extrapolated Tikhonov method and inversion of 3D density images of gravity data
NASA Astrophysics Data System (ADS)
Wang, Zhu-Wen; Xu, Shi; Liu, Yin-Ping; Liu, Jing-Hua
2014-06-01
The Tikhonov regularization (TR) method has played a very important role in gravity and magnetic data processing. In this paper, the Tikhonov regularization method for the inversion of gravity data is discussed, and the extrapolated TR method (EXTR) is introduced to improve the fitting error. Furthermore, the effects of the parameters in the EXTR method on the fitting error, the number of iterations, and the inversion results are discussed in detail. The computation results using a synthetic model with the same and different densities indicated that, compared with the TR method, the EXTR method not only achieves the a priori fitting-error level set by the interpreter but also increases the fitting precision, although it increases the computation time and the number of iterations. Moreover, the EXTR inversion results are more compact than the TR inversion results, which are more divergent. The range of the inversion data is closer to the default range of the model parameters, and the model features and the default model density distribution agree well.
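The TR step underlying such inversion schemes is a damped least-squares solve. The sketch below shows it for a generic linear problem; the forward matrix, noise level, and regularization weight are synthetic, and the extrapolation step of EXTR is not reproduced here.

```python
# Closed-form Tikhonov solve: minimize ||A m - d||^2 + lam * ||m||^2
import numpy as np

def tikhonov(A, d, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))                 # synthetic forward operator
m_true = rng.normal(size=10)                  # synthetic density model
d = A @ m_true + 0.01 * rng.normal(size=20)   # noisy data

m_est = tikhonov(A, d, lam=1e-2)
fit_error = np.linalg.norm(A @ m_est - d)
```

In practice lam is tuned (e.g., by a discrepancy principle) so that fit_error matches the a priori noise level set by the interpreter.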
NASA Astrophysics Data System (ADS)
Chicrala, André; Dallaqua, Renato Sergio; Antunes Vieira, Luis Eduardo; Dal Lago, Alisson; Rodríguez Gómez, Jenny Marcela; Palacios, Judith; Coelho Stekel, Tardelli Ronan; Rezende Costa, Joaquim Eduardo; da Silva Rockenbach, Marlos
2017-10-01
The behavior of Active Regions (ARs) is directly related to the occurrence of some remarkable phenomena in the Sun, such as solar flares and coronal mass ejections (CMEs). In this sense, changes in the magnetic field of a region can be used to uncover other relevant features, such as the evolution of the AR's magnetic structure and the plasma flow related to it. In this work we describe the evolution of the magnetic structure of the active region AR NOAA 12443, observed from 2015/10/30 to 2015/11/10, which may be associated with several X-ray flares of classes C and M. The analysis is based on observations of the solar surface and atmosphere provided by the HMI and AIA instruments on board the SDO spacecraft. In order to investigate the magnetic energy buildup and release of the ARs, we employ potential and linear force-free extrapolations based on the solar-surface magnetic field distribution and the photospheric velocity fields.
Chenglin, L.; Charpentier, R.R.
2010-01-01
The U.S. Geological Survey procedure for the estimation of the general form of the parent distribution requires that the parameters of the log-geometric distribution be calculated and analyzed for the sensitivity of these parameters to different conditions. In this study, we derive the shape factor of a log-geometric distribution from the ratio of frequencies between adjacent bins. The shape factor has a log straight-line relationship with the ratio of frequencies. Additionally, the calculation equations of a ratio of the mean size to the lower size-class boundary are deduced. For a specific log-geometric distribution, we find that the ratio of the mean size to the lower size-class boundary is the same. We apply our analysis to simulations based on oil and gas pool distributions from four petroleum systems of Alberta, Canada, and four generated distributions. Each petroleum system in Alberta has a different shape factor. Generally, the shape factors in the four petroleum systems stabilize with the increase of discovered pool numbers. For a log-geometric distribution, the shape factor becomes stable when discovered pool numbers exceed 50, and the shape factor is influenced by the exploration efficiency when the exploration efficiency is less than 1. The simulation results show that calculated shape factors increase with those of the parent distributions, and undiscovered oil and gas resources estimated through the log-geometric distribution extrapolation are smaller than the actual values. © 2010 International Association for Mathematical Geology.
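The adjacent-bin idea can be illustrated in a few lines: a log-geometric distribution assigns bin frequencies in geometric progression, so the ratio between neighboring size classes is constant, and the shape factor described above is derived from the log of that ratio. The counts below are hypothetical.

```python
# Bin frequencies of a log-geometric size distribution form a geometric
# progression; the constant adjacent-bin ratio is the quantity from which
# the shape factor is derived (counts are hypothetical).
import numpy as np

freqs = np.array([64.0, 32.0, 16.0, 8.0, 4.0])   # pools per size class
ratios = freqs[1:] / freqs[:-1]
r = ratios.mean()          # constant ratio of adjacent bins
log_r = np.log(r)          # log-linear in the shape factor
```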
The 'extrapolated center of mass' concept suggests a simple control of balance in walking.
Hof, At L
2008-02-01
In addition to the position x and velocity v of the whole-body center of mass (CoM), the 'extrapolated center of mass' (XcoM) can be introduced: XcoM = x + v/ω0, where ω0 is a constant related to stature. Based on the inverted pendulum model of balance, the XcoM makes it possible to formulate the requirements for stable walking in a relatively simple form. In a very simple walking model, with the effects of foot roll-over neglected, the trajectory of the XcoM is a succession of straight lines, each directed along the line from the center of pressure (CoP) to the XcoM at the time of foot contact. The CoM follows the XcoM in a more sinusoidal trajectory. A simple rule is sufficient for stable walking: at foot placement the CoP should be placed at a certain distance behind and outward of the XcoM at the time of foot contact. In practice this means that a disturbance which results in a CoM velocity change Δv can be compensated by a change in foot position (CoP) equal to Δv/ω0 in the same direction. Similar simple rules can be formulated for starting, stopping, and making a turn.
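The XcoM rule can be sketched directly from the definitions above, assuming the inverted-pendulum form ω0 = sqrt(g/l); the effective leg length below is an illustrative value, not one from the study.

```python
# XcoM = x + v / omega0, with omega0 = sqrt(g / l) for an inverted
# pendulum of effective length l (value below is illustrative).
import math

g = 9.81            # m/s^2
l = 1.0             # m, effective pendulum (leg) length
omega0 = math.sqrt(g / l)

def xcom(x, v):
    return x + v / omega0

# A disturbance changing CoM velocity by dv is compensated by shifting
# the foot placement (CoP) by dv / omega0 in the same direction.
dv = 0.2                    # m/s
cop_shift = dv / omega0     # m
```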
Probing QCD perturbation theory at high energies with continuum extrapolated lattice data
NASA Astrophysics Data System (ADS)
Sint, Stefan
2017-03-01
Precision tests of QCD perturbation theory are not readily available from experimental data. The main reasons are systematic uncertainties due to the confinement of quarks and gluons, as well as kinematical constraints which limit the accessible energy scales. We here show how continuum extrapolated lattice data may overcome such problems and provide excellent probes of renormalized perturbation theory. This work corresponds to an essential step in the ALPHA collaboration's project to determine the Λ-parameter in 3-flavour QCD. I explain the basic techniques used in the high energy regime, namely the use of mass-independent renormalization schemes for the QCD coupling constant in a finite Euclidean space time volume. When combined with finite size techniques this allows one to iteratively step up the energy scale by factors of 2, thereby quickly covering two orders of magnitude in scale. We may then compare perturbation theory (with β-functions available up to 3-loop order) to our non-perturbative data for a 1-parameter family of running couplings. We conclude that a target precision of 3 percent for the Λ-parameter requires non-perturbative data up to scales where αs ≈ 0.1, whereas the apparent precision obtained from applying perturbation theory around αs ≈ 0.2 can be misleading. This should be taken as a general warning to practitioners of QCD perturbation theory.
Caution warranted in extrapolating from Boston Naming Test item gradation construct.
Beattey, Robert A; Murphy, Hilary; Cornwell, Melinda; Braun, Thomas; Stein, Victoria; Goldstein, Martin; Bender, Heidi Allison
2017-01-01
The Boston Naming Test (BNT) was designed to present items in order of difficulty based on word frequency. Changes in word frequencies over time, however, would frustrate extrapolation in clinical and research settings based on the theoretical construct because performance on the BNT might reflect changes in ecological frequency of the test items, rather than performance across items of increasing difficulty. This study identifies the ecological frequency of BNT items at the time of publication using the American Heritage Word Frequency Book and determines changes in frequency over time based on the frequency distribution of BNT items across a current corpus, the Corpus of Contemporary American English. Findings reveal an uneven distribution of BNT items across 2 corpora and instances of negligible differentiation in relative word frequency across test items. As BNT items are not presented in order from least to most frequent, clinicians and researchers should exercise caution in relying on the BNT as presenting items in increasing order of difficulty. A method is proposed for distributing confrontation-naming items to be explicitly measured against test items that are normally distributed across the corpus of a given language.
A Novel Ensemble Method for Imbalanced Data Learning: Bagging of Extrapolation-SMOTE SVM
Feng, YangHe; Liu, Zhong
2017-01-01
Class imbalance exists ubiquitously in real life and has attracted much interest from various domains. Learning directly from an imbalanced dataset may give unsatisfactory results, overfocusing on overall identification accuracy and deriving a suboptimal model. Various methodologies have been developed to tackle this problem, including sampling, cost-sensitive learning, and hybrid approaches. However, the samples near the decision boundary, which contain more discriminative information, should be emphasized, and the skew of the boundary can be corrected by constructing synthetic samples. Inspired by this geometric intuition, we designed a new synthetic minority oversampling technique that incorporates the borderline information. Moreover, ensemble models tend to capture more complicated and robust decision boundaries in practice. Taking these factors into consideration, a novel ensemble method, called Bagging of Extrapolation Borderline-SMOTE SVM (BEBS), has been proposed for dealing with imbalanced data learning (IDL) problems. Experiments on open-access datasets showed significantly superior performance of our model, and a persuasive and intuitive explanation of the method was illustrated. As far as we know, this is the first model combining an ensemble of SVMs with borderline information for solving such problems. PMID:28250765
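The core SMOTE-style interpolation behind such methods can be sketched in a few lines; borderline selection, the SVM base learners, and the bagging loop of BEBS are omitted here.

```python
# Place a synthetic minority sample on the segment between a minority
# point and one of its minority-class neighbors.
import numpy as np

def smote_sample(x, neighbor, rng):
    gap = rng.uniform(0.0, 1.0)        # random position along the segment
    return x + gap * (neighbor - x)

rng = np.random.default_rng(42)
x = np.array([1.0, 2.0])
nb = np.array([2.0, 4.0])
synthetic = smote_sample(x, nb, rng)   # lies between x and nb
```

Borderline variants restrict x to minority points whose neighborhoods are dominated by the majority class, which is where the extra synthetic samples help the decision boundary most.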
Kinetics of HMX and CP Decomposition and Their Extrapolation for Lifetime Assessment
Burnham, A K; Weese, R K; Andrzejewski, W J
2004-11-18
Decomposition kinetics are determined for HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate) separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long-term thermal aging experiments. An Arrhenius plot of the 10% level of HMX decomposition by itself from a diverse set of experiments is linear from 120 to 260 °C, with an apparent activation energy of 165 kJ/mol. Similar but less extensive thermal analysis data for the mixture suggest a slightly lower activation energy for the mixture, and an analogous extrapolation is consistent with the amount of gas observed in the long-term detonator aging experiments, which is about 30 times greater than expected from HMX by itself for 50 months at 100 °C. Even with this acceleration, however, it would take ≈10,000 years to achieve 10% decomposition at ≈30 °C. Correspondingly, negligible decomposition is predicted by this kinetic model for a few decades of aging at temperatures slightly above ambient. This prediction is consistent with additional sealed-tube aging experiments at 100–120 °C, which are estimated to have an effective thermal dose greater than that from decades of exposure to temperatures slightly above ambient.
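The Arrhenius extrapolation above amounts to scaling a measured time-to-10%-decomposition between temperatures using the apparent activation energy. A hedged sketch follows; only Ea = 165 kJ/mol comes from the abstract, while the 1-hour reference point at 220 °C is hypothetical.

```python
# Scale time-to-10%-decomposition between temperatures via Arrhenius:
# t(T) proportional to exp(Ea / (R * T)).
import math

R = 8.314       # J/(mol K)
Ea = 165e3      # J/mol, apparent activation energy (from the abstract)

def time_scale(t_ref, T_ref, T):
    # extrapolate a reference time from T_ref (K) to T (K)
    return t_ref * math.exp(Ea / R * (1.0 / T - 1.0 / T_ref))

# Hypothetical 1-hour reference at 220 C, extrapolated to 30 C
t_30C = time_scale(1.0, 220.0 + 273.15, 30.0 + 273.15)   # hours
```

The many-order-of-magnitude stretch from hours at high temperature to millennia near ambient is exactly why the linearity of the Arrhenius plot over 120-260 °C matters for lifetime assessment.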
Lu, Sen; Ren, Tusheng; Lu, Yili; Meng, Ping; Sun, Shiyou
2014-01-01
Accurate estimation of the soil water retention curve (SWRC) at the dry region is required to describe the relation between soil water content and matric suction from saturation to oven dryness. In this study, the extrapolative capability of two models for predicting the complete SWRC from limited ranges of soil water retention data was evaluated. When the model parameters were obtained from SWRC data in the 0–1500 kPa range, the FX model (Fredlund and Xing, 1994) estimations agreed well with measurements from saturation to oven dryness, with RMSEs less than 0.01. The GG model (Groenevelt and Grant, 2004) produced larger errors at the dry region, with significantly larger RMSEs and MEs than the FX model. Further evaluations indicated that when SWRC measurements in the 0–100 kPa suction range were applied for model establishment, the FX model was capable of producing acceptable SWRCs across the entire water content range. For higher accuracy, the FX model requires soil water retention data at least in the 0–300 kPa range to extend the SWRC to oven dryness. Compared with the Khlosi et al. (2006) model, which requires measurements in the 0–500 kPa range to reproduce the complete SWRCs, the FX model has the advantage of requiring fewer SWRC measurements. Thus the FX modeling approach has the potential to eliminate the need to measure soil water retention in the dry range. PMID:25464503
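As a rough illustration of the FX model discussed above, the sketch below evaluates the Fredlund-Xing (1994) retention function with its oven-dryness correction factor. The parameter values are hypothetical, chosen only to show the curve's behavior, and are not fitted to any data in the study:

```python
import math

def fx_water_content(psi, theta_s, a, n, m, psi_r=1500.0):
    """Fredlund-Xing (1994) soil water retention curve.

    psi: matric suction in kPa; theta_s: saturated water content;
    a, n, m: fitting parameters; psi_r: residual suction (kPa).
    The correction factor c forces theta -> 0 at 1e6 kPa (oven dryness).
    """
    c = 1.0 - math.log(1.0 + psi / psi_r) / math.log(1.0 + 1.0e6 / psi_r)
    return c * theta_s / (math.log(math.e + (psi / a) ** n)) ** m

# Hypothetical parameters for a loam-like soil (illustration only).
for suction in (1.0, 100.0, 1500.0, 1.0e6):
    print(suction, round(fx_water_content(suction, 0.45, 10.0, 1.5, 1.0), 4))
```

The water content decreases monotonically with suction and reaches exactly zero at 10^6 kPa, which is the property that lets the model extrapolate limited wet-range measurements to oven dryness.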
An extrapolation method for compressive strength prediction of hydraulic cement products
Siqueira Tango, C.E. de
1998-07-01
The basis for the AMEBA Method is presented. A strength-time function is used to extrapolate the predicted cementitious material strength for a late (ALTA) age, based on two earlier-age strengths: medium (MEDIA) and low (BAIXA) ages. The experimental basis for the method is data from the IPT-Brazil laboratory and the field, including a long-term study on concrete, research on limestone, slag, and fly-ash additions, and quality control data from a cement factory, a shotcrete tunnel lining, and a grout for structural repair. The method's applicability was also verified for high-performance concrete with silica fume. The formula for predicting late-age (e.g., 28-day) strength, for a given set of involved ages (e.g., 28, 7, and 2 days), is normally a function only of the two earlier ages' (e.g., 7 and 2 days) strengths. This equation has been shown to be independent of material variations, including cement brand, and is also easy to use graphically. Using the AMEBA method, and needing to know only the type of cement used, it has been possible to predict strengths satisfactorily, even without the preliminary tests that are required in other methods.
Poet, Torka S; Timchalk, Charles; Bartels, Michael J; Smith, Jordan N; McDougal, Robin; Juberg, Daland R; Price, Paul S
2017-02-24
A physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model combined with Monte Carlo analysis of inter-individual variation was used to assess the effects of the insecticide chlorpyrifos (CPF) and its active metabolite, chlorpyrifos-oxon, in humans. The PBPK/PD model has previously been validated and used to describe physiological changes in typical individuals as they grow from birth to adulthood. This model was updated to include physiological and metabolic changes that occur with pregnancy. The model was then used to assess the impact of inter-individual variability in physiology and biochemistry on predictions of internal dose metrics and to quantitatively assess the impact of major sources of parameter uncertainty and biological diversity on the pharmacodynamics of red blood cell (RBC) acetylcholinesterase inhibition. These metrics were determined in potentially sensitive populations of infants, adult women, pregnant women, and a combined population of adult men and women. The parameters primarily responsible for inter-individual variation in RBC acetylcholinesterase inhibition were related to the metabolic clearance of CPF and CPF-oxon. Data-Derived Extrapolation Factors (DDEFs) that address intra-species physiology and biochemistry, replacing uncertainty factors with quantitative differences in metrics, were developed in these same populations. The DDEFs were less than 4 for all populations. These data and this modeling approach will be useful in ongoing and future human health risk assessments for CPF and could be used for other chemicals with potential human exposure.
Cui, Jie; Li, Zhiying; Krems, Roman V
2015-10-21
We consider the problem of extrapolating the collision properties of a large polyatomic molecule A–H to make predictions of the dynamical properties of another molecule related to A–H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A–X. We assume that the effect of the −H → −X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He–C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
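A minimal sketch of the Gaussian Process interpolation idea described above, reduced to one dimension: the observable (a cross section, in arbitrary units) is assumed known at a few values of a hypothetical potential-scaling parameter, and the GP posterior mean predicts it elsewhere. The toy data and the single RBF kernel are illustrative assumptions; the actual study works with multidimensional parameters and trajectory calculations:

```python
import math

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return var * math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(a, b):
    """Gaussian elimination with partial pivoting; solves a @ x = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def gp_mean(x_train, y_train, x_star, noise=1e-8):
    """GP posterior mean at x_star given noiseless-ish training data."""
    k = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(x_train)] for i, xi in enumerate(x_train)]
    alpha = solve(k, y_train)
    return sum(rbf(x_star, xi) * ai for xi, ai in zip(x_train, alpha))

# Toy example: cross sections computed at three values of a hypothetical
# potential parameter; the GP interpolates between them at 0.5.
xs, ys = [0.0, 1.0, 2.0], [10.0, 12.0, 11.0]
print(gp_mean(xs, ys, 0.5))
```

Sweeping the parameter over its physically reasonable range and collecting the resulting predictions gives the prediction interval the abstract refers to.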
NASA Astrophysics Data System (ADS)
Reggiannini, Ruggero
2015-12-01
This paper is concerned with spatial properties of linear arrays of antennas spaced less than half a wavelength apart. Possible applications are in multiple-input multiple-output (MIMO) wireless links for the purpose of increasing the spatial multiplexing gain in a scattering environment, as well as in other areas such as sonar and radar. With reference to a receiving array, we show that knowledge of the received field can be extrapolated beyond the actual array size by exploiting the finiteness of the interval of real directions from which the field components impinge on the array. This property makes it possible to improve the performance of the array in terms of angular resolution. A simple signal processing technique is proposed that allows formation of a set of beams capable of covering the entire horizon uniformly with an angular resolution better than that achievable by a classical uniformly weighted half-wavelength-spaced linear array. The results are also applicable to active arrays. As the above approach leads to arrays operating in a super-directive regime, we discuss all related critical aspects, such as sensitivity to external and internal noise and to array imperfections, and bandwidth, so as to identify the basic design criteria ensuring the array's feasibility.
Minimum length-maximum velocity
NASA Astrophysics Data System (ADS)
Panes, Boris
2012-03-01
We study a framework in which the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
Macsween, A
2001-09-01
While the accepted measure of aerobic power remains the VO2max, this test is extremely demanding even for athletes. There are serious practical and ethical concerns in attempting such testing in non-athletic or patient populations. An alternative method of measuring aerobic power in such populations is required. A limited body of work exists evaluating the accuracy of the Astrand-Ryhming nomogram and linear extrapolation of the heart rate/oxygen uptake plot. Issues exist in terms of both the equipment employed and sample numbers. Twenty-five normal subjects (mean age 28.6, range 22-50) completed 52 trials (Bruce treadmill protocol) meeting stringent criteria for VO2max performance. Respiratory gases were measured with a portable gas analyser on a five-second sample period. The data were analysed to allow comparison of the reliability and validity of linear extrapolations to three estimates of heart rate maximum with the Astrand nomogram prediction. Extrapolation was preferable, yielding an intraclass correlation coefficient (ICC) of 0.9433, comparable to that of the observed VO2max at 0.9443, and a bias of -1.1 ml x min(-1) x kg(-1), representing a 2.19 percent underestimate. This study provides empirical evidence that extrapolation of submaximal data can be employed with confidence for both clinical monitoring and research purposes. With the use of portable equipment and submaximal testing, the scope for future research in numerous populations and non-laboratory environments is considerably increased.
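The linear-extrapolation procedure evaluated here amounts to an ordinary least-squares fit of submaximal heart rate against oxygen uptake, extrapolated to an estimate of maximum heart rate. The sketch below uses hypothetical submaximal data and the common age-predicted formula 220 − age as one of the possible HRmax estimates; none of these numbers come from the study:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical submaximal data: heart rate (bpm) vs VO2 (ml/min/kg).
hr = [110.0, 130.0, 150.0, 170.0]
vo2 = [18.0, 26.0, 34.0, 42.0]
a, b = fit_line(hr, vo2)

# Extrapolate the fitted line to an age-predicted maximum heart rate.
age = 30
hr_max = 220 - age
vo2_max_est = a + b * hr_max
print(f"estimated VO2max: {vo2_max_est:.1f} ml/min/kg")
```

The study's point is that this extrapolated estimate tracks directly measured VO2max closely enough (ICC ~0.94) for clinical monitoring, while avoiding a maximal-effort test.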
Heil, Tobias; Gralla, Benedikt; Epping, Michael; Kohl, Helmut
2012-07-01
Over the last decades, elemental maps have become a powerful tool for the analysis of the spatial distribution of the elements within a specimen. In energy-filtered transmission electron microscopy (EFTEM) one commonly uses two pre-edge images and one post-edge image for the calculation of elemental maps. However, this so-called three-window method can introduce serious errors into the extrapolated background for the post-edge window. Since this method uses only two pre-edge windows as data points to calculate a background model that depends on two fit parameters, the quality of the extrapolation can be estimated only statistically, assuming that the background model is correct. In this paper, we discuss a possibility to improve the accuracy and reliability of the background extrapolation by using a third pre-edge window. Since with three data points the extrapolation becomes over-determined, this change permits us to estimate not only the statistical uncertainty of the fit, but also the systematic error, by using the experimental data. Furthermore, we discuss the acquisition parameters that should be used for the energy windows to reach an optimal signal-to-noise ratio (SNR) in the elemental maps.
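The background-extrapolation step can be sketched as a fit of the conventional power-law model B(E) = A·E^(−r) to the pre-edge windows. With two windows the two parameters are exactly determined; with a third window, as the paper proposes, the fit becomes an over-determined least-squares problem. The window positions and mean counts below are hypothetical illustration values:

```python
import math

def fit_power_law(energies, intensities):
    """Least-squares fit of B(E) = A * E**(-r), linearized in log-log space."""
    lx = [math.log(e) for e in energies]
    ly = [math.log(i) for i in intensities]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
            sum((x - mx) ** 2 for x in lx)
    r = -slope                     # power-law exponent
    a = math.exp(my + r * mx)      # amplitude
    return a, r

# Hypothetical pre-edge window positions (eV) and mean counts.
pre_e = [400.0, 430.0, 460.0]
pre_i = [1000.0, 840.0, 720.0]
amp, r = fit_power_law(pre_e, pre_i)

# Extrapolated background under a post-edge window at 500 eV; the net
# core-loss signal would be the measured counts minus this background.
print(f"r = {r:.2f}, background(500 eV) = {amp * 500.0 ** (-r):.0f}")
```

With three windows, the residuals of this fit give an experimental handle on the systematic error of the power-law assumption, which the two-window method cannot provide.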
There are a number of risk management decisions, which range from prioritization for testing to quantitative risk assessments. The utility of in vitro studies in these decisions depends on how well the results of such data can be qualitatively and quantitatively extrapolated to i...
Species Differences in Androgen and Estrogen Receptor Structure and Function Among Vertebrates and Invertebrates: Interspecies Extrapolations regarding Endocrine Disrupting Chemicals
VS Wilson1, GT Ankley2, M Gooding 1,3, PD Reynolds 1,4, NC Noriega 1, M Cardon 1, P Hartig1,...
A significant challenge in ecotoxicology has been determining chemical hazards to species with limited or no toxicity data. Currently, extrapolation tools like U.S. EPA’s Web-based Interspecies Correlation Estimation (Web-ICE; www3.epa.gov/webice) models categorize toxicity...
An age-classified projection matrix model has been developed to extrapolate the chronic (28-35d) demographic responses of Americamysis bahia (formerly Mysidopsis bahia) to population-level response. This study was conducted to evaluate the efficacy of this model for predicting t...
DOSE-RESPONSE BEHAVIOR OF ANDROGENIC AND ANTIANDROGENIC CHEMICALS: IMPLICATIONS FOR LOW-DOSE EXTRAPOLATION AND CUMULATIVE TOXICITY. LE Gray Jr, C Wolf, J Furr, M Price, C Lambright, VS Wilson and J Ostby. USEPA, ORD, NHEERL, EB, RTD, RTP, NC, USA.
Dose-response behavior of a...
Definition of Magnetic Exchange Length
Abo, GS; Hong, YK; Park, J; Lee, J; Lee, W; Choi, BC
2013-08-01
The magnetostatic exchange length is an important parameter in magnetics as it measures the relative strength of exchange and self-magnetostatic energies. Its use can be found in areas of magnetics including micromagnetics, soft and hard magnetic materials, and information storage. The exchange length is of primary importance because it governs the width of the transition between magnetic domains. Unfortunately, there is some confusion in the literature between the magnetostatic exchange length and a similar distance concerning magnetization reversal mechanisms in particles known as the characteristic length. This confusion is aggravated by the common usage of two different systems of units, SI and cgs. This paper attempts to clarify the situation and recommends equations in both systems of units.
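The SI form of the magnetostatic exchange length this paper recommends, l_ex = sqrt(2A / (μ0·Ms²)), can be evaluated directly. The material values below are textbook order-of-magnitude numbers for permalloy, used as an illustrative assumption rather than data from the paper:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def exchange_length_si(a_ex, m_s):
    """Magnetostatic exchange length l_ex = sqrt(2A / (mu0 * Ms^2)) in SI.

    a_ex: exchange stiffness in J/m; m_s: saturation magnetization in A/m.
    Returns the length in meters.
    """
    return math.sqrt(2.0 * a_ex / (MU0 * m_s ** 2))

# Order-of-magnitude values for permalloy (Ni80Fe20):
# A ~ 1.3e-11 J/m, Ms ~ 8e5 A/m, giving l_ex of a few nanometers.
l_ex = exchange_length_si(1.3e-11, 8.0e5)
print(f"l_ex = {l_ex * 1e9:.1f} nm")
```

The few-nanometer result is what sets the mesh-size requirement in micromagnetic simulations, one of the applications the abstract lists.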
NASA Astrophysics Data System (ADS)
Zhan, Chang-Guo; Zheng, Fang; Dixon, David A.
2003-07-01
Photoelectron spectra of hydrated doubly charged anion clusters, SO4^{2-}(H2O)_n, have been studied by performing first-principles electronic structure calculations on SO4^{2-}(H2O)_n (n = 3-6, 12, and 13). The calculated adiabatic electron ionization energies are in good agreement with available experimental data. A detailed analysis of the calculated results suggests that for n ≥ 12 the observed threshold ionization energy of the low binding energy band in the recently reported photoelectron spectra of SO4^{2-}(H2O)_n is associated with electron ionization from the solute, SO4^{2-}, whereas the observed threshold ionization energy of the high binding energy band is associated with electron ionization from the water molecules in the first solvation shell of SO4^{2-}. For n ≤ 6, the threshold ionization energies of both the low and high binding energy bands are associated with electron ionizations from the solute. This shows that the bulk solution value (n → ∞) extrapolated from the threshold ionization energies of the high binding energy band of the clusters should refer to the first ionization energy of the water molecules in the first solvation shell of SO4^{2-} in aqueous solution and, therefore, should be significantly smaller than the measured threshold ionization energy of liquid water. This differs from the recent result that the value of 10.05 eV extrapolated from the threshold ionization energies of the high binding energy band based on a simple 1/R_c model was nearly identical to the measured threshold ionization energy (10.06 eV) of liquid water. To address this difference, we have used a new approach for the extrapolation of solvated ion cluster data to bulk solution. We show that the new extrapolation approach consistently produces extrapolated bulk solution results in significantly better agreement with those observed directly in bulk solution for the first ionization energies of the ions in SO4^{2-}(H2O)_n, Br^-(H2O)_n, and I^-(H2O)_n. The same extrapolation
Length Invisibilization of Tachyonic Neutrinos
NASA Astrophysics Data System (ADS)
Estakhr, Ahmad Reza
2016-09-01
A particle faster than the speed of light, such as a tachyonic neutrino, disappears due to its superluminal nature and is undetectable. L = iΩ^{-1}L_0, where i = √(-1) is the imaginary unit, Ω = 1/√(β_s^2 - 1) is Estakhr's Omega factor, L is the superluminal length, L_0 is the proper length, β_s = V_s/c > 1 is the superluminal speed parameter, V_s is the superluminal velocity, and c is the speed of light.
NASA Astrophysics Data System (ADS)
He, Han; Wang, Huaning; Yan, Yihua
2011-01-01
The Hinode satellite can obtain high-quality photospheric vector magnetograms of solar active regions and simultaneous coronal loop images in soft X-ray and extreme ultraviolet (EUV) bands. In this paper, we continue the work of He and Wang (2008) and apply the newly developed upward boundary integration computational scheme for the nonlinear force-free field (NLFFF) extrapolation of the coronal magnetic field to the photospheric vector magnetograms acquired by the Spectro-Polarimeter of the Solar Optical Telescope aboard Hinode. Three time-series vector magnetograms of the same solar active region, NOAA 10930, are selected for the NLFFF extrapolations; they were observed within a time interval of 26 h during 10-11 December 2006, when the active region crossed the central area of the Sun's disk. The NLFFF extrapolation code was parallelized through OpenMP multithreaded, shared-memory parallelism in the Fortran 95 programming language. The comparison between the extrapolated field lines and the coronal loop images obtained by the X-Ray Telescope and the EUV Imaging Spectrometer of Hinode shows that, in the central area of the active region, the field line configurations generally agree with the coronal images, and the orientations of the field lines basically coincide with the coronal loop observations for all three successive magnetograms. This result supports the use of the NLFFF model for tracing the time-series evolution of the 3-D coronal magnetic structures as the responses of the quasi-equilibrium solar atmosphere to vector magnetic field changes in the photosphere.
Kinetics of HMX and CP Decomposition and Their Extrapolation for Lifetime Assessment
Burnham, A K; Weese, R K; Adrzejewski, W J
2006-09-11
Accelerated aging tests play an important role in assessing the lifetime of manufactured products. There are two basic approaches to lifetime qualification. One tests a product to failure over a range of accelerated conditions to calibrate a model, which is then used to calculate the failure time for conditions of use. A second approach is to test a component to a lifetime-equivalent dose (thermal or radiation) to see if it still functions to specification. Both methods have their advantages and limitations. A disadvantage of the second method is that one does not know how close one is to incipient failure. This limitation can be mitigated by testing to some higher level of dose as a safety margin, but having a predictive model of failure via the first approach provides an additional measure of confidence. Even so, proper calibration of a failure model is non-trivial, and the extrapolated failure predictions are only as good as the model and the quality of the calibration. This paper outlines results for predicting the potential failure point of a system involving a mixture of two energetic materials, HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate). Global chemical kinetic models for the two materials individually and as a mixture are developed and calibrated from a variety of experiments. These include traditional thermal analysis experiments run on time scales from hours to a couple of days, detonator aging experiments with exposures up to 50 months, and sealed-tube aging experiments for up to 5 years. Decomposition kinetics are determined for HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate) separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long
Measurement of absorbed dose with a bone-equivalent extrapolation chamber.
DeBlois, François; Abdel-Rahman, Wamied; Seuntjens, Jan P; Podgorsak, Ervin B
2002-03-01
A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC by 0.7% to approximately 2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.
Song, Yang; Hamtaei, Ehsan; Sethi, Sean K; Yang, Guang; Xie, Haibin; Mark Haacke, E
2017-09-01
To introduce a new approach to reconstruct high-definition vascular images using COnstrained Data Extrapolation (CODE) and evaluate its capability in estimating vessel area and stenosis. CODE is based on the constraint that the full width at half maximum of a vessel can be accurately estimated and, since it represents the best estimate for the width of the object, higher k-space data can be generated from this information. To demonstrate the potential of extracting high-definition vessel edges using low-resolution data, both simulated and human data were analyzed to better visualize the vessels and to quantify both area and stenosis measurements. The results from CODE using one-fourth of the fully sampled k-space data were compared with a compressed sensing (CS) reconstruction approach using the same total amount of data but spread out between the center of k-space and the outer portions of the original k-space to accelerate data acquisition by a factor of four. For a sufficiently high signal-to-noise ratio (SNR) such as 16 (8), we found that objects as small as 3 voxels in the 25% under-sampled data (6 voxels when zero-filled) could be used for CODE and CS and provide an estimate of area with an error <5% (10%). For estimating up to a 70% stenosis with an SNR of 4, CODE was found to be more robust to noise than CS, having a smaller variance albeit a larger bias. Reconstruction times were >200 (30) times faster for CODE compared to CS in the simulated (human) data. CODE was capable of producing sharp sub-voxel edges and accurately estimating stenosis to within 5% for clinically relevant studies of vessels with a width of at least 3 pixels in the low-resolution images.
Cross-Species Extrapolation of Prediction Models for Cadmium Transfer from Soil to Corn Grain
Yang, Hua; Li, Zhaojun; Lu, Lu; Long, Jian; Liang, Yongchao
2013-01-01
Cadmium (Cd) is a highly toxic heavy metal for both plants and animals. The presence of Cd in agricultural soils is of great concern regarding its transfer in the soil-plant system. This study investigated the transfer of Cd (exogenous salts) from a wide range of Chinese soils to corn grain (Zhengdan 958). Through multiple stepwise regressions, prediction models were developed, combining the Cd bioconcentration factor (BCF) of Zhengdan 958 with soil pH, organic matter (OM) content, and cation exchange capacity (CEC). Moreover, these prediction models for Zhengdan 958 were applied to other non-model corn species through a cross-species extrapolation approach. The results showed that soil pH was the most important factor controlling Cd uptake, and that lower pH was more favorable for Cd bioaccumulation in corn grain. There was no significant difference among the three prediction models at the different Cd levels. When the prediction models were applied to other non-model corn species, the ratios between the predicted and measured BCF values fell within a factor of two, close to the solid line of the 1:1 relationship. Furthermore, these prediction models also reduced the measured intra-species BCF variability for all non-model corn species. Therefore, the prediction models established in this study can be applied to other non-model corn species and are useful for predicting Cd bioconcentration in corn grain and assessing the ecological risk of Cd in different soils. PMID:24324636
Confusion about Cadmium Risks: The Unrecognized Limitations of an Extrapolated Paradigm
Bernard, Alfred
2015-01-01
Background Cadmium (Cd) risk assessment presently relies on tubular proteinuria as a critical effect and urinary Cd (U-Cd) as an index of the Cd body burden. Based on this paradigm, regulatory bodies have reached contradictory conclusions regarding the safety of Cd in food. Adding to the confusion, epidemiological studies implicate environmental Cd as a risk factor for bone, cardiovascular, and other degenerative diseases at exposure levels that are much lower than points of departure used for setting food standards. Objective The objective was to examine whether the present confusion over Cd risks is not related to conceptual or methodological problems. Discussion The cornerstone of Cd risk assessment is the assumption that U-Cd reflects the lifetime accumulation of the metal in the body. The validity of this assumption as applied to the general population has been questioned by recent studies revealing that low-level U-Cd varies widely within and between individuals depending on urinary flow, urine collection protocol, and recent exposure. There is also evidence that low-level U-Cd increases with proteinuria and essential element deficiencies, two potential confounders that might explain the multiple associations of U-Cd with common degenerative diseases. In essence, the present Cd confusion might arise from the fact that this heavy metal follows the same transport pathways as plasma proteins for its urinary excretion and the same transport pathways as essential elements for its intestinal absorption. Conclusions The Cd risk assessment paradigm needs to be rethought taking into consideration that low-level U-Cd is strongly influenced by renal physiology, recent exposure, and factors linked to studied outcomes. Citation Bernard A. 2016. Confusion about cadmium risks: the unrecognized limitations of an extrapolated paradigm. Environ Health Perspect 124:1–5; http://dx.doi.org/10.1289/ehp.1509691 PMID:26058085
Investigative and extrapolative role of microRNAs’ genetic expression in breast carcinoma
Usmani, Ambreen; Shoro, Amir Ali; Shirazi, Bushra; Memon, Zahida
2016-01-01
MicroRNAs (miRs) are non-coding ribonucleic acids consisting of about 18-22 nucleotide bases. The expression of several miRs can be altered in breast carcinomas in comparison to healthy breast tissue, or between various subtypes of breast cancer. They are regulated as either oncogenes or tumor suppressors, which shows that their expression is dysregulated in cancers. Some miRs are specifically associated with breast cancer and are affected by cancer-restricted signaling pathways, e.g. downstream of estrogen receptor-α or HER2/neu. The association of multiple miRs with breast cancer, and the fact that most of these post-transcriptional regulators may transform complex functional networks of mRNAs, identify them as potential investigative, extrapolative and predictive tumor markers, as well as possible targets for treatment. The investigative tools currently available are RNA-based molecular techniques. An additional advantage of miRs in oncology is that they are remarkably stable and are notably detectable in serum and plasma. A literature search was performed using the PubMed database; the keywords used were microRNA (52 searches) AND breast cancer (169 searches). PERN was accessed through the database of Bahria University; this included literature and articles from international sources, and 2 articles from Pakistan on this topic were consulted (one in an international journal and one in a local journal). Of these, 49 articles that discussed the relation of miR genetic expression to breast cancer were shortlisted and consulted for this review. PMID:27375730
The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.
Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo
2014-09-01
Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on an intact cerebellum, supporting the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus
Extrapolation of the relative risk of radiogenic neoplasms across mouse strains and to man
Storer, J.B.; Mitchell, T.J.; Fry, R.J.
1988-05-01
We have examined two interrelated questions: is the susceptibility for radiogenic cancer related to the natural incidence, and are the responses of cancer induction by radiation described better by an absolute or a relative risk model. Also, we have examined whether it is possible to extrapolate relative risk estimates across species, from mice to humans. The answers to these questions were obtained from determinations of risk estimates for nine neoplasms in female and male C3Hf/Bd and C57BL/6 Bd mice and from data obtained from previous experiments with female BALB/c Bd and RFM mice. The mice were exposed to 137Cs gamma rays at 0.4 Gy/min to doses of 0, 0.5, 1.0, or 2.0 Gy. When tumors that were considered the cause of death were examined, both the control and induced mortality rates for the various tumors varied considerably among sexes and strains. The results suggest that in general susceptibility is determined by the control incidence. The relative risk model was significantly superior in five of the tumor types: lung, breast, liver, ovary, and adrenal. Both models appeared to fit myeloid leukemia and Harderian gland tumors, and neither provided good fits for thymic lymphoma and reticulum cell sarcoma. When risk estimates of radiation-induced tumors in humans and mice were compared, it was found that the relative risk estimates for lung, breast, and leukemia were not significantly different between humans and mice. In the case of liver tumors, mice had a higher risk than humans. These results indicate that the relative risk model is the appropriate approach for risk estimation for a number of tumors. The apparent concordance of relative risk estimates between humans and mice for the small number of cancers examined encourages us to undertake further studies.
NASA Astrophysics Data System (ADS)
Birgand, F.; Etheridge, J. R.; Burchell, M. R.
2013-12-01
Tidal marshes are among the most dynamic aquatic systems in the world. While astronomical and wind driven tides are the major driver to displace water volumes, rainfall events and evapotranspiration move the overall balance towards water export or import, respectively. Until now, only glimpses of the associated biogeochemical functioning could be obtained, usually at one or several tidal cycles scale, because there was no obvious method to obtain long term water quality data at a high temporal frequency. We have successfully managed, using UV-Vis spectrophotometers in the field, to obtain water quality and flow data on a 15-min frequency for over 20 months in a restored brackish marsh in North Carolina. This marsh was designed to intercept water generated by subsurface drainage of adjacent agricultural land before discharge to the nearby estuary. It is particularly tempting in tidal systems, where tides may look very similar from one to the next, to extrapolate results obtained possibly over several days or weeks to a 'seasonal biogeochemical functioning'. The lessons learned from high frequency data at the tidal scale are fascinating, but in the longer term, we have learned that a few and inherently rare rainfall events drove the overall nutrient balance in the marsh. Continuous water quality monitoring is thus essential for two reasons: 1) to observe the short term dynamics, as they are the key to unveil possibly misunderstood biogeochemical processes, and 2) to capture the rare yet essential events which drive the system's response. However, continuous water quality monitoring on a long term basis in harsh coastal environments is not without challenges.
Persistence Length of Stable Microtubules
NASA Astrophysics Data System (ADS)
Hawkins, Taviare; Mirigian, Matthew; Yasar, M. Selcuk; Ross, Jennifer
2011-03-01
Microtubules are a vital component of the cytoskeleton. As the most rigid of the cytoskeleton filaments, they give shape and support to the cell. They are also essential for intracellular traffic by providing the roadways onto which organelles are transported, and they are required to reorganize during cellular division. To perform its function in the cell, the microtubule must be rigid yet dynamic. We are interested in how the mechanical properties of stable microtubules change over time. Some ``stable'' microtubules of the cell are recycled after days, such as in the axons of neurons or the cilia and flagella. We measured the persistence length of freely fluctuating taxol-stabilized microtubules over the span of a week and analyzed them via Fourier decomposition. As measured on a daily basis, the persistence length is independent of the contour length. Although measured over the span of the week, the accuracy of the measurement and the persistence length varies. We also studied how fluorescently-labeling the microtubule affects the persistence length and observed that a higher labeling ratio corresponded to greater flexibility. National Science Foundation Grant No: 0928540 to JLR.
IMF Length Scales and Predictability: The Two Length Scale Medium
NASA Technical Reports Server (NTRS)
Collier, Michael R.; Szabo, Adam; Slavin, James A.; Lepping, R. P.; Kokubun, S.
1999-01-01
We present preliminary results from a systematic study using simultaneous data from three spacecraft, Wind, IMP 8 (Interplanetary Monitoring Platform) and Geotail to examine interplanetary length scales and their implications on predictability for magnetic field parcels in the typical solar wind. Time periods were selected when the plane formed by the three spacecraft included the GSE (Ground Support Equipment) x-direction so that if the parcel fronts were strictly planar, the two adjacent spacecraft pairs would determine the same phase front angles. After correcting for the motion of the Earth relative to the interplanetary medium and deviations in the solar wind flow from radial, we used differences in the measured front angle between the two spacecraft pairs to determine structure radius of curvature. Results indicate that the typical radius of curvature for these IMF parcels is of the order of 100 R (Sub E). This implies that there are two important IMF (Interplanetary Magnetic Field) scale lengths relevant to predictability: (1) the well-established scale length over which correlations observed by two spacecraft decay along a given IMF parcel, of the order of a few tens of Earth radii and (2) the scale length over which two spacecraft are unlikely to even observe the same parcel because of its curvature, of the order of a hundred Earth radii.
When Does Length Cause the Word Length Effect?
ERIC Educational Resources Information Center
Jalbert, Annie; Neath, Ian; Bireta, Tamra J.; Surprenant, Aimee M.
2011-01-01
The word length effect, the finding that lists of short words are better recalled than lists of long words, has been termed one of the benchmark findings that any theory of immediate memory must account for. Indeed, the effect led directly to the development of working memory and the phonological loop, and it is viewed as the best remaining…
CEBAF Upgrade Bunch Length Measurements
Ahmad, Mahmoud
2016-05-01
Many accelerators use short electron bunches and measuring the bunch length is important for efficient operations. CEBAF needs a suitable bunch length because bunches that are too long will result in beam interruption to the halls due to excessive energy spread and beam loss. In this work, bunch length is measured by invasive and non-invasive techniques at different beam energies. Two new measurement techniques have been commissioned; a harmonic cavity showed good results compared to expectations from simulation, and a real time interferometer is commissioned and first checkouts were performed. Three other techniques were used for measurements and comparison purposes without modifying the old procedures. Two of them can be used when the beam is not compressed longitudinally while the other one, the synchrotron light monitor, can be used with compressed or uncompressed beam.
Continuously variable focal length lens
Adams, Bernhard W; Chollet, Matthieu C
2013-12-17
A material preferably in crystal form having a low atomic number such as beryllium (Z=4) provides for the focusing of x-rays in a continuously variable manner. The material is provided with plural spaced curvilinear, optically matched slots and/or recesses through which an x-ray beam is directed. The focal length of the material may be decreased or increased by increasing or decreasing, respectively, the number of slots (or recesses) through which the x-ray beam is directed, while fine tuning of the focal length is accomplished by rotation of the material so as to change the path length of the x-ray beam through the aligned curvilinear slots. X-ray analysis of a fixed point in a solid material may be performed by scanning the energy of the x-ray beam while rotating the material to maintain the beam's focal point at a fixed point in the specimen undergoing analysis.
Long-Period Tidal Variations in the Length of Day
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Erofeeva, Svetlana Y.
2014-01-01
A new model of long-period tidal variations in length of day is developed. The model comprises 80 spectral lines with periods between 18.6 years and 4.7 days, and it consistently includes effects of mantle anelasticity and dynamic ocean tides for all lines. The anelastic properties follow Wahr and Bergen; experimental confirmation for their results now exists at the fortnightly period, but there remains uncertainty when extrapolating to the longest periods. The ocean modeling builds on recent work with the fortnightly constituent, which suggests that oceanic tidal angular momentum can be reliably predicted at these periods without data assimilation. This is a critical property when modeling most long-period tides, for which little observational data exist. Dynamic ocean effects are quite pronounced at shortest periods as out-of-phase rotation components become nearly as large as in-phase components. The model is tested against a 20 year time series of space geodetic measurements of length of day. The current international standard model is shown to leave significant residual tidal energy, and the new model is found to mostly eliminate that energy, with especially large variance reduction for constituents Sa, Ssa, Mf, and Mt.
Kondo length in bosonic lattices
NASA Astrophysics Data System (ADS)
Giuliano, Domenico; Sodano, Pasquale; Trombettoni, Andrea
2017-09-01
Motivated by the fact that the low-energy properties of the Kondo model can be effectively simulated in spin chains, we study the realization of the effect with bond impurities in ultracold bosonic lattices at half filling. After presenting a discussion of the effective theory and of the mapping of the bosonic chain onto a lattice spin Hamiltonian, we provide estimates for the Kondo length as a function of the parameters of the bosonic model. We point out that the Kondo length can be extracted from the integrated real-space correlation functions, which are experimentally accessible quantities in experiments with cold atoms.
Continuous lengths of oxide superconductors
Kroeger, Donald M.; List, III, Frederick A.
2000-01-01
A layered oxide superconductor prepared by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon. A continuous length of a second substrate ribbon is overlaid on the first substrate ribbon. Sufficient pressure is applied to form a bound layered superconductor precursor powder between the first substrate ribbon and the second substrate ribbon. The layered superconductor precursor is then heat treated to establish the oxide superconducting phase. The layered oxide superconductor has a smooth interface between the substrate and the oxide superconductor.
Kratochvil, Christopher; Ghuman, Jaswinder; Camporeale, Angelo; Lipsius, Sarah; D'Souza, Deborah; Tanaka, Yoko
2015-01-01
Abstract Objectives: This extrapolation analysis qualitatively compared the efficacy and safety profile of atomoxetine from Lilly clinical trial data in 6–7-year-old patients with attention-deficit/hyperactivity disorder (ADHD) with that of published literature in 4–5-year-old patients with ADHD (two open-label [4–5-year-old patients] and one placebo-controlled study [5-year-old patients]). Methods: The main efficacy analyses included placebo-controlled Lilly data and the placebo-controlled external study (5-year-old patients) data. The primary efficacy variables used in these studies were the ADHD Rating Scale-IV Parent Version, Investigator Administered (ADHD-RS-IV-Parent:Inv) total score, or the Swanson, Nolan and Pelham (SNAP-IV) scale score. Safety analyses included treatment-emergent adverse events (TEAEs) and vital signs. Descriptive statistics (means, percentages) are presented. Results: Acute atomoxetine treatment improved core ADHD symptoms in both 6–7-year-old patients (n=565) and 5-year-old patients (n=37) (treatment effect: −10.16 and −7.42). In an analysis of placebo-controlled groups, the mean duration of exposure to atomoxetine was ∼7 weeks for 6–7-year-old patients and 9 weeks for 5-year-old patients. Decreased appetite was the most common TEAE in atomoxetine-treated patients. The TEAEs observed at a higher rate in 5-year-old versus 6–7-year-old patients were irritability (36.8% vs. 3.6%) and other mood-related events (6.9% each vs. <3.0%). Blood pressure and pulse increased in both 4–5-year-old patients and 6–7-year-old patients, whereas a weight increase was seen only in the 6–7-year-old patients. Conclusions: Although limited by the small sample size of the external studies, these analyses suggest that in 5-year-old patients with ADHD, atomoxetine may improve ADHD symptoms, but possibly to a lesser extent than in older children, with some adverse events occurring at a higher rate in 5-year-old patients. PMID:25265343
NASA Astrophysics Data System (ADS)
Aronson, E. L.; Helliker, B. R.; Strode, S. A.; Pawson, S.
2011-12-01
Global soil methane consumption was estimated using multiple regression-based parameterizations by vegetation type from a meta-dataset created from 780 published methane flux measurements. The average global estimates for soil consumption by extrapolation, without taking snow cover into account, totaled 54-60 Tg annually. The parameterizations were based on air temperature and precipitation output variables reported in the literature and gathered in the meta-dataset. These variables were matched to similar ones reported in the Goddard Earth Observing System (GEOS) global climate model. The methane uptake response to increasing precipitation and temperature varied between vegetation types. The parameterizations for methane fluxes by vegetation type were included in a 20 year, free-running, tagged-methane run of the GEOS-5 model constrained by real observations of sea surface temperature. Snow cover was assumed to block methane diffusion into the soil and therefore result in zero consumption of methane in snow-covered soils. The parameterization estimate was slightly higher than previous estimates of global methane consumption, at around 37 Tg annually. The resultant global surface methane concentration was then compared to observed methane concentrations from NOAA Global Monitoring Division sites worldwide, with varying agreement. The parameterization for the vegetation type "Needleleaf Trees" predicted methane consumption in a study site located in the NJ Pinelands, which was studied in 2009. The estimate of methane consumption by the vegetation type "Broadleaf Evergreen Trees" was found to have the greatest error, which may indicate that the factors on which the parameterization was based are of minor importance in regulating methane flux within this vegetation type. The results were compared to offline runs of the parameterizations without the snow-cover compensation, which resulted in global rates of almost double the methane consumption. Since there have been
Martín-Jiménez, Tomás; Baynes, Ronald E; Craigmill, Arthur; Riviere, Jim E
2002-08-01
The extralabel use of drugs can be defined as the use of drugs in a manner inconsistent with their FDA-approved labeling. The passage of the Animal Medicinal Drug Use Clarification Act (AMDUCA) in 1994 and its implementation by the FDA-Center for Veterinary Medicine in 1996 has allowed food animal veterinarians to use drugs legally in an extralabel manner, as long as an appropriate withdrawal period is established. The present study introduces and validates with simulated and experimental data the Extrapolated Withdrawal-Period Estimator (EWE) Algorithm, a procedure aimed at predicting extralabel withdrawal intervals (WDIs) based on the label and pharmacokinetic literature data contained in the Food Animal Residue Avoidance Databank (FARAD). This is the initial and first attempt at consistently obtaining WDI estimates that encompass a reasonable degree of statistical soundness. Data on the determination of withdrawal times after the extralabel use of the antibiotic oxytetracycline were obtained both with simulated disposition data and from the literature. A withdrawal interval was computed using the EWE Algorithm for an extralabel dose of 25 mg/kg (simulation study) and for a dose of 40 mg/kg (literature data). These estimates were compared with the withdrawal times computed with the simulated data and with the literature data, respectively. The EWE estimates of WDP for a simulated extralabel dose of 25 mg/kg was 39 days. The withdrawal time (WDT) obtained for this dose on a tissue depletion study was 39 days. The EWE estimate of WDP for an extralabel intramuscular dose of 40 mg/kg in cattle, based on the kinetic data contained in the FARAD database, was 48 days. The withdrawal time experimentally obtained for similar use of this drug was 49 days. The EWE Algorithm can obtain WDI estimates that encompass the same degree of statistical soundness as the WDT estimates, provided that the assumptions of the approved dosage regimen hold for the extralabel dosage regimen
Baldwin, David H; Spromberg, Julann A; Collier, Tracy K; Scholz, Nathaniel L
2009-12-01
growth and size at ocean entry of juvenile chinook. The consequent reduction in individual survival over successive years reduces the intrinsic productivity (lambda) of a modeled ocean-type chinook population. Overall, we show that exposures to common pesticides may place important constraints on the recovery of ESA-listed salmon species, and that simple models can be used to extrapolate toxicological impacts across several scales of biological complexity.
Method and apparatus for determining minority carrier diffusion length in semiconductors
Goldstein, Bernard; Dresner, Joseph; Szostak, Daniel J.
1983-07-12
Method and apparatus are provided for determining the diffusion length of minority carriers in semiconductor material, particularly amorphous silicon which has a significantly small minority carrier diffusion length using the constant-magnitude surface-photovoltage (SPV) method. An unmodulated illumination provides the light excitation on the surface of the material to generate the SPV. A manually controlled or automatic servo system maintains a constant predetermined value of the SPV. A vibrating Kelvin method-type probe electrode couples the SPV to a measurement system. The operating optical wavelength of an adjustable monochromator to compensate for the wavelength dependent sensitivity of a photodetector is selected to measure the illumination intensity (photon flux) on the silicon. Measurements of the relative photon flux for a plurality of wavelengths are plotted against the reciprocal of the optical absorption coefficient of the material. A linear plot of the data points is extrapolated to zero intensity. The negative intercept value on the reciprocal optical coefficient axis of the extrapolated linear plot is the diffusion length of the minority carriers.
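The extrapolation step of the SPV method lends itself to a short numerical sketch: the relative photon flux needed to hold the SPV constant is fit linearly against the reciprocal absorption coefficient, and the magnitude of the negative intercept on that axis gives the diffusion length. All numbers below are illustrative assumptions, not data from the patent.

```python
import numpy as np

# Hypothetical constant-SPV data: photon flux vs. reciprocal absorption
# coefficient 1/alpha (micrometres). SPV theory gives flux proportional
# to (1/alpha + L), where L is the minority-carrier diffusion length.
inv_alpha = np.array([0.2, 0.4, 0.6, 0.8, 1.0])  # 1/alpha (um)
L_true = 0.5                                     # assumed diffusion length (um)
phi = 2.0 * (inv_alpha + L_true)                 # simulated relative flux

# Linear fit, extrapolated to zero intensity; the negative intercept on
# the 1/alpha axis is -L, so L = intercept/slope.
slope, intercept = np.polyfit(inv_alpha, phi, 1)
L = intercept / slope
print(round(L, 3))  # → 0.5
```

The same fit-and-intercept pattern applies to real SPV measurements once the wavelength-dependent detector sensitivity has been compensated as described above.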
Bachmann, Talis; Murd, Carolina; Põder, Endel
2012-09-01
One fundamental property of the perceptual and cognitive systems is their capacity for prediction in the dynamic environment; the flash-lag effect has been considered as a particularly suggestive example of this capacity (Nijhawan, Nature 370:256-257, 1994; Behav Brain Sci 31:179-239, 2008). Thus, because of involvement of the mechanisms of extrapolation and visual prediction, the moving object is perceived ahead of the simultaneously flashed static object objectively aligned with the moving one. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus by approaching it before the flash does not diminish the flash-lag effect, but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.
Semiokhina, A F; Ochinskaia, E I; Rubtsova, N B; Pleskacheva, M G; Krushinskiĭ, L V
1985-01-01
Sharp EEG changes are recorded in bioelectrical activity of the dorsal cortex and dorsal ventricular ridge in marsh tortoises in conditions of free movement during solving of an extrapolation task (a test of elementary reasoning ability). These changes of a pathological character, accompanied by neurotic states, were observed in some animals having correctly solved the task several times in succession (2-5), beginning with the first presentation. Such changes of EEG and behaviour were not found in tortoises that committed errors at first presentations of the task and only gradually learned correct solving. Formation of the adequate behaviour can proceed by two means: on the basis of elementary reasoning ability and learning. Disturbance of adequate behaviour in the experiment with characteristic changes of EEG testifies to a difficult state of the animal during solving of the extrapolation task.
NASA Astrophysics Data System (ADS)
Los, J. H.; Pellenq, R. J. M.
2010-02-01
We have determined the bulk melting temperature Tm of nickel according to a recent interatomic interaction model via Monte Carlo simulation by two methods: extrapolation from cluster melting temperatures based on the Pavlov model (a variant of the Gibbs-Thomson model) and by calculation of the liquid and solid Gibbs free energies via thermodynamic integration. The result of the latter, which is the most reliable method, gives Tm = 2010±35 K, to be compared to the experimental value of 1726 K. The cluster extrapolation method, however, gives a 325 K higher value of Tm = 2335 K. This remarkable result is shown to be due to a barrier for melting, which is associated with a nonwetting behavior.
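The cluster-extrapolation idea can be sketched numerically: in a Gibbs-Thomson/Pavlov-like picture the melting point of a cluster of radius R is depressed as T_m(R) = T_bulk(1 - c/R), so cluster melting points plotted against 1/R extrapolate linearly to the bulk value at 1/R = 0. The values below are invented for the demo; only the extrapolated bulk value is borrowed from the abstract.

```python
import numpy as np

# Hypothetical cluster melting points following T_m(R) = T_bulk * (1 - c/R).
inv_R = np.array([0.5, 0.4, 0.3, 0.2, 0.1])  # 1/R (1/nm)
T_bulk, c = 2335.0, 0.35                     # assumed bulk Tm (K), length scale (nm)
T_m = T_bulk * (1 - c * inv_R)               # simulated cluster melting points (K)

# Linear fit in 1/R; the intercept at 1/R = 0 is the extrapolated bulk Tm.
slope, intercept = np.polyfit(inv_R, T_m, 1)
print(round(intercept, 1))  # → 2335.0
```

The abstract's point is precisely that this clean extrapolation can be biased: a melting barrier (nonwetting) shifts the cluster data, so the intercept overshoots the thermodynamic-integration result.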
Monte Carlo based approach to the LS-NaI 4πβ-γ anticoincidence extrapolation and uncertainty
Fitzgerald, R.
2016-01-01
The 4πβ-γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone. PMID:27358944
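The conventional low-order polynomial efficiency extrapolation that the Monte Carlo analysis refines can be sketched in a few lines: the observed rate is fit against an inefficiency parameter and extrapolated to 100% beta efficiency. The counting data below are invented for illustration and are not the paper's 57Co/60Co data.

```python
import numpy as np

# Hypothetical gated counting data: observed rate vs. the beta-inefficiency
# parameter x = (1 - eff)/eff obtained from gamma-gate ratios.
x = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
A_true = 1000.0                     # assumed activity (Bq) for the demo
rate = A_true * (1 + 0.3 * x)       # idealized linear detector response

# Low-order (here first-order) polynomial fit, extrapolated to x = 0,
# i.e. to 100% beta-detection efficiency.
coeffs = np.polyfit(x, rate, 1)
activity = np.polyval(coeffs, 0.0)
print(round(activity, 1))  # → 1000.0
```

In practice the response is not exactly linear; the paper's Geant4 simulation of the detector supplies the shape of this curve, which is why it yields a more realistic activity and uncertainty than the bare least-squares fit.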
NASA Astrophysics Data System (ADS)
Exl, Lukas; Mauser, Norbert J.; Schrefl, Thomas; Suess, Dieter
2017-10-01
A practical and efficient scheme for the higher order integration of the Landau-Lifschitz-Gilbert (LLG) equation is presented. The method is based on extrapolation of the two-step explicit midpoint rule and incorporates adaptive time step and order selection. We make use of a piecewise time-linear stray field approximation to reduce the necessary work per time step. The approximation to the interpolated operator is embedded into the extrapolation process to keep in step with the hierarchic order structure of the scheme. We verify the approach by means of numerical experiments on a standardized NIST problem and compare with a higher order embedded Runge-Kutta formula. The efficiency of the presented approach increases when the stray field computation takes a larger portion of the costs for the effective field evaluation.
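The core of such a scheme, extrapolation of the two-step explicit midpoint rule, can be illustrated with a generic textbook Gragg/Bulirsch-Stoer-style sketch on a scalar ODE. This is not the authors' adaptive LLG integrator (no stray-field approximation, no step/order control), just the underlying extrapolation idea.

```python
import numpy as np

def midpoint_sweep(f, y0, t0, H, n):
    """Gragg's two-step explicit midpoint rule over [t0, t0+H] with n substeps."""
    h = H / n
    y_prev = y0
    y_curr = y0 + h * f(t0, y0)            # Euler starter step
    for i in range(1, n):
        y_prev, y_curr = y_curr, y_prev + 2 * h * f(t0 + i * h, y_curr)
    # Gragg smoothing step suppresses the oscillating error component
    return 0.5 * (y_curr + y_prev + h * f(t0 + H, y_curr))

def extrapolated_step(f, y0, t0, H, ns=(2, 4, 8)):
    """Polynomial (Richardson) extrapolation of the midpoint rule to h -> 0."""
    T = [[midpoint_sweep(f, y0, t0, H, n)] for n in ns]
    for k in range(1, len(ns)):
        for j in range(k, len(ns)):
            r = (ns[j] / ns[j - k]) ** 2   # error expansion is in even powers of h
            T[j].append(T[j][k - 1] + (T[j][k - 1] - T[j - 1][k - 1]) / (r - 1))
    return T[-1][-1]

# Demo on y' = -y, y(0) = 1, over one (large) step H = 1.
f = lambda t, y: -y
y1 = extrapolated_step(f, 1.0, 0.0, 1.0)
print(abs(y1 - np.exp(-1.0)) < 1e-4)  # → True
```

Adaptive order selection, as in the paper, amounts to growing the extrapolation tableau row by row until an error estimate (difference of adjacent tableau entries) falls below tolerance.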
Finite length Taylor Couette flow
NASA Technical Reports Server (NTRS)
Streett, C. L.; Hussaini, M. Y.
1987-01-01
Axisymmetric numerical solutions of the unsteady Navier-Stokes equations for flow between concentric rotating cylinders of finite length are obtained by a spectral collocation method. These representative results pertain to two-cell/one-cell exchange process, and are compared with recent experiments.
Incubation length of dabbling ducks
Wells-Berlin, A. M.; Prince, H.H.; Arnold, T.W.
2005-01-01
We collected unincubated eggs from wild Mallard (Anas platyrhynchos), Gadwall (A. strepera), Blue-winged Teal (A. discors), and Northern Shoveler (A. clypeata) nests and artificially incubated them at 37.5 °C. Average incubation lengths of Mallard, Gadwall, and Northern Shoveler eggs did not differ from their wild-nesting counterparts, but artificially incubated Blue-winged Teal eggs required an additional 1.7 days to hatch, suggesting that wild-nesting teal incubated more effectively. A small sample of Mallard, Gadwall, and Northern Shoveler eggs artificially incubated at 38.3 °C hatched 1 day sooner, indicating that incubation temperature affected incubation length. Mean incubation length of Blue-winged Teal declined by 1 day for each 11-day delay in nesting, but we found no such seasonal decline among Mallards, Gadwalls, or Northern Shovelers. There is no obvious explanation for the seasonal reduction in incubation length for Blue-winged Teal eggs incubated in a constant environment, and the phenomenon deserves further study. © The Cooper Ornithological Society 2005.
Persistent Criminality and Career Length
ERIC Educational Resources Information Center
Haapanen, Rudy; Britton, Lee; Croisdale, Tim
2007-01-01
This study is an examination of persistent offending and its implications for the understanding and investigation of desistance and career length. Persistence, especially as it is operationalized using official measures, is characterized as fundamentally a measure of resistance to formal social control: continued crime in the face of increasingly…
Xia, Hong; Luo, Zhendong
2017-01-01
In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model with few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique, analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and dependability of the SMFEROE model by means of numerical simulations.
NASA Astrophysics Data System (ADS)
Miga, Michael I.; Dumpuri, Prashanth; Simpson, Amber L.; Weis, Jared A.; Jarnagin, William R.
2011-03-01
The problem of extrapolating cost-effective relevant information from distinctly finite or sparse data, while balancing the competing goals between workflow and engineering design, and between application and accuracy is the 'sparse data extrapolation problem'. Within the context of open abdominal image-guided liver surgery, one realization of this problem is compensating for non-rigid organ deformations while maintaining workflow for the surgeon. More specifically, rigid organ-based surface registration between CT-rendered liver surfaces and laser-range scanned intraoperative partial surface counterparts resulted in an average closest-point residual of 6.1 +/- 4.5 mm with maximum signed distances ranging from -13.4 to 16.2 mm. Similar to the neurosurgical environment, there is a need to correct for soft tissue deformation to translate image-guided interventions to the abdomen (e.g. liver, kidney, pancreas, etc.). While intraoperative tomographic imaging is available, these approaches are less than optimal solutions to the sparse data extrapolation problem. In this paper, we compare and contrast three sparse data extrapolation methods to that of data-rich interpolation for the correction of deformation within a liver phantom containing 43 subsurface targets. The findings indicate that the subtleties in the initial alignment pose following rigid registration can affect correction up to 5-10%. The best deformation compensation achieved was approximately 54.5% (target registration error of 2.0 +/- 1.6 mm) while the data-rich interpolative method was 77.8% (target registration error of 0.6 +/- 0.5 mm).
1990-01-01
A method for inference and extrapolation in certain dose-response, damage-assessment and accelerated life-testing studies has been proposed by Meinhold and Singpurwalla in 1986. The method is based on a use of the Kalman-filter algorithm and involves the double lognormal as the distributional assumption. In this paper we discuss issues pertaining to a practical implementation of this methodology. This involves some insights based on a
Yelles Chaouche, L.; Kuckein, C.; Martinez Pillet, V.; Moreno-Insertis, F.
2012-03-20
The three-dimensional structure of an active region filament is studied using nonlinear force-free field extrapolations based on simultaneous observations at a photospheric and a chromospheric height. To that end, we used the Si I 10827 Å line and the He I 10830 Å triplet obtained with the Tenerife Infrared Polarimeter at the Vacuum Tower Telescope (Tenerife). The two extrapolations have been carried out independently from each other and their respective spatial domains overlap in a considerable height range. This opens up new possibilities for diagnostics in addition to the usual ones obtained through a single extrapolation from, typically, a photospheric layer. Among those possibilities, this method allows the determination of an average formation height of the He I 10830 Å signal of ≈2 Mm above the surface of the Sun. It allows, as well, a cross-check of the obtained three-dimensional magnetic structures to verify a possible deviation from the force-free condition, especially at the photosphere. The extrapolations yield a filament formed by a twisted flux rope whose axis is located at about 1.4 Mm above the solar surface. The twisted field lines make slightly more than one turn along the filament within our field of view, which results in 0.055 turns Mm⁻¹. The convex part of the field lines (as seen from the solar surface) constitutes dips where the plasma can naturally be supported. The obtained three-dimensional magnetic structure of the filament depends on the choice of the observed horizontal magnetic field as determined from the 180° solution of the azimuth. We derive a method to check for the correctness of the selected 180° ambiguity solution.
Pairing versus quarteting coherence length
NASA Astrophysics Data System (ADS)
Delion, D. S.; Baran, V. V.
2015-02-01
We systematically analyze the coherence length in even-even nuclei. The pairing coherence length in the spin-singlet channel for the effective density-dependent delta (DDD) and Gaussian interaction is estimated. We consider in our calculations bound states as well as narrow resonances. It turns out that the pairing gaps given by the DDD interaction are similar to those of the Gaussian potential if one renormalizes the radial width to the nuclear radius. The correlations induced by the pairing interaction have, in all considered cases, a long-range character inside the nucleus and a decrease towards the surface. The mean coherence length is larger than the geometrical radius for light nuclei and approaches this value for heavy nuclei. The effect of the temperature and states in the continuum is investigated. Strong shell effects are put in evidence, especially for protons. We generalize this concept to quartets by considering similar relations, but between proton and neutron pairs. The quartet coherence length has a similar shape, but with larger values on the nuclear surface. We provide evidence of the important role of proton-neutron correlations by estimating the so-called alpha coherence length, which takes into account the overlap with the proton-neutron part of the α-particle wave function. It turns out that it does not depend on the nuclear size and has a value comparable to the free α-particle radius. We have shown that pairing correlations are mainly concentrated inside the nucleus, while quarteting correlations are connected to the nuclear surface.
Characteristic length of glass transition
NASA Astrophysics Data System (ADS)
Donth, E.
1996-03-01
The characteristic length of the glass transition (ξ_α) is based on the concept of cooperatively rearranging regions (CRR's) by Adam & Gibbs (1965): ξ_α is the diameter of one CRR. In the theoretical part of the talk a formula is derived for how this length can be calculated from calorimetric data of the transformation interval. The approach is based on fluctuations in natural functional subsystems. The corresponding thermodynamics is represented e.g. in a book by the author (E. Donth, Relaxation and Thermodynamics in Polymers. Glass Transition, Akademie-Verlag, Berlin 1992). A typical value for this length is 3 nanometers. In the experimental part several examples are reported to enlarge the experimental evidence for such a length: squeezing the glass transition in the amorphous layers of partially crystallized PET (C. Schick, Rostock), the glass transition of small-molecule glass formers in a series of nanoscaled pores of porous glasses (F. Kremer, Leipzig), comparison with a concentration fluctuation model in homogeneous polymer mixtures (E.W. Fischer, Mainz), and, from our laboratory, backscaling to ξ_α across the main transition from the entanglement spacing in several amorphous polymers such as PVAC, PS, NR, and some polymer networks. Rouse backscaling was possible in the αβ splitting region of several poly(n-alkyl methacrylates), resulting in small characteristic lengths of order 1 nanometer near the onset of α cooperativity. In a speculative outlook a dynamic density pattern is presented, having a cellular structure with higher density and lower mobility of the cell walls. It is explained, with the aid of different thermal expansion of wall and clusters, how the clusters within the cells maintain a certain mobility far below the glass temperature.
Precise Determination of the I = 2 Scattering Length from Mixed-Action Lattice QCD
Silas Beane; Paulo Bedaque; Thomas Luu; Konstantinos Orginos; Assumpta Parreno; Martin Savage; Aaron Torok; Andre Walker-Loud
2008-01-01
The I=2 ππ scattering length is calculated in fully-dynamical lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations (with fourth-rooted staggered sea quarks) at four light-quark masses. Two- and three-flavor mixed-action chiral perturbation theory at next-to-leading order is used to perform the chiral and continuum extrapolations. At the physical charged pion mass, we find m_π a_ππ^(I=2) = -0.04330 ± 0.00042, where the error bar combines the statistical and systematic uncertainties in quadrature.
Length-Scale Dependence of the Superconductor-to-Insulator Quantum Phase Transition in One Dimension
Chow, E.; Delsing, P.; Haviland, D.B.
1998-07-01
One-dimensional (1D) arrays of small-capacitance Josephson junctions demonstrate a sharp transition, from Josephson-like behavior to the Coulomb blockade of Cooper-pair tunneling, as the effective Josephson coupling between nearest neighbors is tuned with an externally applied magnetic field. Comparing the zero-bias resistance of three arrays with 255, 127, and 63 junctions, we observe a critical behavior where the resistance, extrapolated to T=0, is independent of length at a critical magnetic field. Comparison is made with a theory of this T=0 quantum phase transition, which maps to the 2D classical XY model. © 1998 The American Physical Society
Scott, Bradley J; Klein, Agnes V; Wang, Jian
2015-03-01
Monoclonal antibodies have become mainstays of treatment for many diseases. After more than a decade on the Canadian market, a number of authorized monoclonal antibody products are facing patent expiry. Given their success, most notably in the areas of oncology and autoimmune disease, pharmaceutical and biotechnology companies are eager to produce their own biosimilar versions and have begun manufacturing and testing for a variety of monoclonal antibody products. In October of 2013, the first biosimilar monoclonal antibody products were approved by the European Medicines Agency (Remsima™ and Inflectra™). These products were authorized by Health Canada shortly after; however, while the EMA allowed for extrapolation to all of the indications held by the reference product, Health Canada limited extrapolation to a subset of the indications held by the reference product, Remicade®. The purpose of this review is to discuss the Canadian regulatory framework for the authorization of biosimilar mAbs with specific discussion around the clinical requirements for establishing (bio)-similarity and to present the principles that are used in the clinical assessment of New Drug Submissions for intended biosimilar monoclonal antibodies. Health Canada's current views regarding indication extrapolation, product interchangeability, and post-market surveillance are discussed as well.
NASA Astrophysics Data System (ADS)
Jiang, Chao-Wei; Feng, Xue-Shang
2016-01-01
In the solar corona, the magnetic flux rope is believed to be a fundamental structure that accounts for magnetic free energy storage and solar eruptions. Up to the present, the extrapolation of the magnetic field from boundary data has been the primary way to obtain fully three-dimensional magnetic information about the corona. As a result, the ability to reliably recover the coronal magnetic flux rope is important for coronal field extrapolation. In this paper, our coronal field extrapolation code is examined with an analytical magnetic flux rope model proposed by Titov & Démoulin, which consists of a bipolar magnetic configuration holding a semi-circular line-tied flux rope in force-free equilibrium. By only using the vector field at the bottom boundary as input, we test our code with the model in a representative range of parameter space and find that the model field can be reconstructed with high accuracy. In particular, the magnetic topological interfaces formed between the flux rope and the surrounding arcade, i.e., the “hyperbolic flux tube” and “bald patch separatrix surface,” are also reliably reproduced. By this test, we demonstrate that our CESE-MHD-NLFFF code can be applied to recovering the magnetic flux rope in the solar corona as long as the vector magnetogram satisfies the force-free constraints.
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
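The subset-based strategy described above can be illustrated with a minimal, generic sketch. The function below is a hedged stand-in, not the published Extrapolation Averaging or Partition Resampling algorithms: it merely splits a chip set into disjoint blocks, summarizes each block with an arbitrary summary function, and averages the block-level estimates across resamples. All names and parameters are illustrative assumptions.

```python
import random
import statistics

def partition_resampling(chips, summarize, n_parts=3, n_resamples=10, seed=0):
    """Generic sketch of a partition-resampling approximation (not the
    published PR algorithm): repeatedly split the chip set into disjoint
    blocks, summarize each block, and average the block-level estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_resamples):
        shuffled = list(chips)
        rng.shuffle(shuffled)
        # disjoint blocks of (roughly) equal size
        blocks = [shuffled[i::n_parts] for i in range(n_parts)]
        estimates.extend(summarize(b) for b in blocks)
    return statistics.mean(estimates)
```

With a linear summary such as the mean and blocks of equal size, the subset-based estimate recovers the full-set value exactly; for nonlinear multi-chip expression measures it is only an approximation, which is the trade-off the abstract examines.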
Fallou, Hélène; Cimetière, Nicolas; Giraudet, Sylvain; Wolbert, Dominique; Le Cloirec, Pierre
2016-01-15
Activated carbon fiber cloths (ACFC) have shown promising results when applied to water treatment, especially for removing organic micropollutants such as pharmaceutical compounds. Nevertheless, further investigations are required, especially considering trace concentrations, which are found in current water treatment. Until now, most studies have been carried out at relatively high concentrations (mg L(-1)), since the experimental and analytical methodologies are more difficult and more expensive when dealing with lower concentrations (ng L(-1)). Therefore, the objective of this study was to validate an extrapolation procedure from high to low concentrations for four compounds (Carbamazepine, Diclofenac, Caffeine and Acetaminophen). For this purpose, the reliability of the usual adsorption isotherm models, when extrapolated from high (mg L(-1)) to low concentrations (ng L(-1)), was assessed, as well as the influence of numerous error functions. Some isotherm models (Freundlich, Toth) and error functions (RSS, ARE) show weaknesses when used for adsorption isotherms at low concentrations. However, from these results, the pairing of the Langmuir-Freundlich isotherm model with Marquardt's percent standard deviation was identified as the best combination, enabling the extrapolation of adsorption capacities over orders of magnitude.
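The winning pairing reported above can be sketched in code. The Langmuir-Freundlich (Sips) isotherm and Marquardt's percent standard deviation (MPSD) error function have standard textbook forms; the parameter values below are illustrative assumptions, not fitted values from this study.

```python
import math

def langmuir_freundlich(c, q_m, k, n):
    """Langmuir-Freundlich (Sips) isotherm:
    q(c) = q_m * (K*c)**n / (1 + (K*c)**n),
    where q_m is the maximum capacity, K an affinity constant and n a
    heterogeneity exponent (all illustrative here)."""
    kc_n = (k * c) ** n
    return q_m * kc_n / (1.0 + kc_n)

def mpsd(q_obs, q_calc, n_params):
    """Marquardt's percent standard deviation between observed and
    calculated adsorption capacities, for n_params fitted parameters."""
    n = len(q_obs)
    s = sum(((qo - qc) / qo) ** 2 for qo, qc in zip(q_obs, q_calc))
    return 100.0 * math.sqrt(s / (n - n_params))
```

Because MPSD weights each residual by the observed value, it does not let the high-concentration points dominate the fit, which is one plausible reason this combination extrapolates well toward ng L(-1) concentrations.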
Gajewska, M; Worth, A; Urani, C; Briesen, H; Schramm, K-W
2014-06-16
The application of physiologically based toxicokinetic (PBTK) modelling in route-to-route (RtR) extrapolation of three cosmetic ingredients: coumarin, hydroquinone and caffeine is shown in this study. In particular, the oral no-observed-adverse-effect-level (NOAEL) doses of these chemicals are extrapolated to their corresponding dermal values by comparing the internal concentrations resulting from oral and dermal exposure scenarios. The PBTK model structure has been constructed to give a good simulation performance of biochemical processes within the human body. The model parameters are calibrated based on oral and dermal experimental data for the Caucasian population available in the literature. Particular attention is given to modelling the absorption stage (skin and gastrointestinal tract) in the form of several sub-compartments. This gives better model prediction results when compared to those of a PBTK model with a simpler structure of the absorption barrier. In addition, the role of quantitative structure-property relationships (QSPRs) in predicting skin penetration is evaluated for the three substances with a view to incorporating QSPR-predicted penetration parameters in the PBTK model when experimental values are lacking. Finally, PBTK modelling is used, first to extrapolate oral NOAEL doses derived from rat studies to humans, and then to simulate internal systemic/liver concentrations - Area Under Curve (AUC) and peak concentration - resulting from specified dermal and oral exposure conditions. Based on these simulations, AUC-based dermal thresholds for the three case study compounds are derived and compared with the experimentally obtained oral threshold (NOAEL) values.
Bagust, Adrian; Beale, Sophie
2014-04-01
A recent publication includes a review of survival extrapolation methods used in technology appraisals of treatments for advanced cancers. The author of the article also noted shortcomings and inconsistencies in the analytical methods used in appraisals. He then proposed a survival model selection process algorithm to guide modelers' choice of projective models for use in future appraisals. This article examines the proposed algorithm and highlights various shortcomings that involve questionable assumptions, including researchers' access to patient-level data, the relevance of proportional hazards modeling, and the appropriateness of standard probability functions for characterizing risk, which may mislead practitioners into employing biased structures for projecting limited data in decision models. An alternative paradigm is outlined. This paradigm is based on the primacy of the experimental data and adherence to the scientific method through hypothesis formulation and validation. Drawing on extensive experience of survival modeling and extrapolation in the United Kingdom, practical advice is presented on issues of importance when using data from clinical trials terminated without complete follow-up as a basis for survival extrapolation.
Ground state energy of the δ-Bose and Fermi gas at weak coupling from double extrapolation
NASA Astrophysics Data System (ADS)
Prolhac, Sylvain
2017-04-01
We consider the ground state energy of the Lieb–Liniger gas with δ interaction in the weak coupling regime γ → 0. For bosons with repulsive interaction, previous studies gave the expansion e_B(γ) ≃ γ − 4γ^{3/2}/(3π) + (1/6 − 1/π²)γ². Using a numerical solution of the Lieb–Liniger integral equation discretized with M points and finite strength γ of the interaction, we obtain very accurate numerics for the next orders after extrapolation on M and γ. The coefficient of γ^{5/2} in the expansion is found to be approximately equal to −0.00158769986550594498929, accurate within all digits shown. This value is supported by a numerical solution of the Bethe equations with N particles, followed by extrapolation on N and γ. It was identified as (3ζ(3)/8 − 1/2)/π³ by G Lang. The next two coefficients are also guessed from the numerics. For balanced spin-1/2 fermions with attractive interaction, the best result so far for the ground state energy has been e_F(γ) ≃ π²/12 − γ/2 + γ²/6. An analogous double extrapolation scheme leads to the value −ζ(3)/π⁴ for the coefficient of γ³.
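The expansions quoted in the abstract are easy to evaluate numerically, and the closed form identified for the γ^{5/2} coefficient can be checked against the quoted decimal value. This is a small sketch of those formulas only; it does not reproduce the integral-equation or Bethe-equation extrapolations themselves.

```python
import math

ZETA3 = 1.2020569031595942854  # Apéry's constant ζ(3)

# Coefficient of γ^{5/2}, identified as (3ζ(3)/8 − 1/2)/π³ by G. Lang
C_52 = (3.0 * ZETA3 / 8.0 - 0.5) / math.pi ** 3

def e_bose(gamma):
    """Weak-coupling expansion of the Lieb-Liniger ground state energy
    per particle for repulsive bosons, through order γ^{5/2}."""
    return (gamma
            - 4.0 * gamma ** 1.5 / (3.0 * math.pi)
            + (1.0 / 6.0 - 1.0 / math.pi ** 2) * gamma ** 2
            + C_52 * gamma ** 2.5)

def e_fermi(gamma):
    """Balanced attractive spin-1/2 fermions, through order γ³, using the
    coefficient −ζ(3)/π⁴ obtained from the double extrapolation."""
    return (math.pi ** 2 / 12.0
            - gamma / 2.0
            + gamma ** 2 / 6.0
            - ZETA3 / math.pi ** 4 * gamma ** 3)
```

Evaluating `C_52` in double precision reproduces the quoted −0.00158769986550594... to all representable digits, confirming the identification.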
Croom, Edward L.; Shafer, Timothy J.; Evans, Marina V.; Mundy, William R.; Eklund, Chris R.; Johnstone, Andrew F.M.; Mack, Cina M.; Pegram, Rex A.
2015-02-15
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using “faux” (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC50 for increased firing rates in primary cultures of cortical neurons was 0.6 μg/ml. Media and cell lindane concentrations at the EC50 were 0.4 μg/ml and 7.1 μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7–1.9 μg/ml and 5–11 μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average = 7 μg/ml) by PBPK modeling were very similar to in vitro concentrations detected in cortical cells at the EC50 dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity. - Highlights: • In vitro to in vivo extrapolation for lindane neurotoxicity was performed. • Dosimetry of lindane in a micro-electrode array (MEA) test system was assessed. • Cell concentrations at the MEA EC
Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT
Xia, Y.; Maier, A.; Berger, M.; Hornegger, J.; Bauer, S.
2015-04-15
Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic-based extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear to be ad hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in a position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on
NASA Astrophysics Data System (ADS)
Reeves, J. A.; Knight, R. J.; Zebker, H. A.; Kitanidis, P. K.; Schreuder, W. A.
2013-12-01
A 2004 court decision established that hydraulic head levels within the confined aquifer system of the San Luis Valley (SLV), Colorado be maintained within the range experienced in the years between 1978 and 2000. The current groundwater flow model for this area is not able to predict hydraulic head accurately in the confined aquifer system due to a dearth of calibration points, i.e., hydraulic head measurements, during the time period of interest. The work presented here investigates the extent to which spatially and temporally dense measurements of deformation from Interferometric Synthetic Aperture Radar (InSAR) data could be used to interpolate and extrapolate temporal and spatial gaps in the hydraulic head dataset by performing a calibration at the well locations. We first predicted the magnitude of the seasonal deformation at the confined aquifer well locations by using aquifer thickness/lithology information from well logs and estimates of the aquifer compressibility from the literature. At 11 well locations the seasonal magnitude of the deformation was sufficiently large so as to be reliably measured with InSAR, given the accepted level of uncertainty of the measurement (~ 5 mm). Previous studies in arid or urban areas have shown that high quality InSAR deformation measurements are often collocated with hydraulic head measurements at monitoring wells, making such a calibration approach relatively straightforward. In contrast, the SLV is an agricultural area where many factors, e.g. crop growth, can seriously degrade the quality of the InSAR data. We used InSAR data from the ERS-1 and ERS-2 satellites, which have a temporal sampling of 35 days and a spatial sampling on the order of tens of meters, and found that the InSAR data were not of sufficiently high quality at any of the 11 selected well locations. Hence, we used geostatistical techniques to analyze the high quality InSAR deformation data elsewhere in the scene and to estimate the deformation at the
Softness Correlations Across Length Scales
NASA Astrophysics Data System (ADS)
Ivancic, Robert; Shavit, Amit; Rieser, Jennifer; Schoenholz, Samuel; Cubuk, Ekin; Durian, Douglas; Liu, Andrea; Riggleman, Robert
In disordered systems, it is believed that mechanical failure begins with localized particle rearrangements. Recently, a machine learning method has been introduced to identify how likely a particle is to rearrange given its local structural environment, quantified by softness. We calculate the softness of particles in simulations of atomic Lennard-Jones mixtures, molecular Lennard-Jones oligomers, colloidal systems and granular systems. In each case, we find that the length scale characterizing spatial correlations of softness is approximately a particle diameter. These results provide a rationale for why localized rearrangements, whose size is presumably set by the scale of softness correlations, might occur in disordered systems across many length scales. Supported by DOE DE-FG02-05ER46199.
Welding arc length control system
NASA Technical Reports Server (NTRS)
Iceland, William F. (Inventor)
1993-01-01
The present invention is a welding arc length control system. The system includes, in its broadest aspects, a power source for providing welding current, a power amplification system, a motorized welding torch assembly connected to the power amplification system, a computer, and current pick up means. The computer is connected to the power amplification system for storing and processing arc weld current parameters and non-linear voltage-ampere characteristics. The current pick up means is connected to the power source and to the welding torch assembly for providing weld current data to the computer. Thus, the desired arc length is maintained as the welding current is varied during operation, maintaining consistent weld penetration.
Variable focal length deformable mirror
Headley, Daniel; Ramsey, Marc; Schwarz, Jens
2007-06-12
A variable focal length deformable mirror has an inner ring and an outer ring that simply support and push axially on opposite sides of a mirror plate. The resulting variable clamping force deforms the mirror plate to provide a parabolic mirror shape. The rings are parallel planar sections of a single paraboloid and can provide an on-axis focus, if the rings are circular, or an off-axis focus, if the rings are elliptical. The focal length of the deformable mirror can be varied by changing the variable clamping force. The deformable mirror can generally be used in any application requiring the focusing or defocusing of light, including with both coherent and incoherent light sources.
Critical Length Limiting Superlow Friction
NASA Astrophysics Data System (ADS)
Ma, Ming; Benassi, Andrea; Vanossi, Andrea; Urbakh, Michael
2015-02-01
Since the demonstration of superlow friction (superlubricity) in graphite at nanoscale, one of the main challenges in the field of nano- and micromechanics was to scale this phenomenon up. A key question to be addressed is to what extent superlubricity could persist, and what mechanisms could lead to its failure. Here, using an edge-driven Frenkel-Kontorova model, we establish a connection between the critical length above which superlubricity disappears and both intrinsic material properties and experimental parameters. A striking boost in dissipated energy with chain length emerges abruptly due to a high-friction stick-slip mechanism caused by deformation of the slider leading to a local commensuration with the substrate lattice. We derived a parameter-free analytical model for the critical length that is in excellent agreement with our numerical simulations. Our results provide a new perspective on friction and nanomanipulation and can serve as a theoretical basis for designing nanodevices with superlow friction, such as carbon nanotubes.
Stride parameters and hindlimb length in horses fatigued on a treadmill and at an endurance ride.
Wickler, S J; Greene, H M; Egan, K; Astudillo, A; Dutto, D J; Hoyt, D F
2006-08-01
The relationship between fatigue and stride and/or muscle stiffness requires further study. The aim was to measure stride parameters in horses undergoing fatigue associated with running at submaximal speeds, both on a treadmill and in an endurance ride. We hypothesized that stride frequencies and estimates of hindlimb stiffness would be decreased in fatigued horses. Horses were fatigued using 2 paradigms: a run to exhaustion on a treadmill (4.5 m/sec, 6% incline) and finishing an 80 km endurance ride. Videos were digitised before and after fatigue and analysed for stride parameters: hindlimb length, stride frequency, time of contact, step length, duty factor and stride length. In fatigued horses, stride durations were 5% longer (P = 0.007), resulting in lower stride frequencies (P = 0.016) and longer stride lengths (P = 0.006). The times of contact (tc) for the stance phase were not different (P = 0.108), nor was duty factor (tc/stride period, P = 0.457). Step length (speed x tc) and hindlimb lengths were also not different (P = 0.104, P = 0.8). For endurance horses, stride data for nonfatigued horses were consistent with data extrapolated to 4.5 m/sec from nonfatigued horses on the treadmill. Endurance horses slowed (P = 0.002) during the race from 4.55 to 4.03 m/sec and stride lengths were shorter. Despite the slower speed, other stride parameters were unchanged. Hindlimb length was shorter in fatigued horses. Horses fatigued on a treadmill and during the natural course of an endurance ride responded differently, biomechanically. On the treadmill, where speed is constrained, stride frequencies decreased and stride lengths increased. During one endurance ride, stride frequencies were the same, although speeds were substantially reduced. Limb length was shorter in fatigued endurance horses. It remains to be determined whether these changes in mechanics are advantageous or disadvantageous in terms of energetics or injury. Further examination of endurance rides is also warranted.
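The stride parameters named in the abstract are simple kinematic ratios, and the definitions it gives (step length = speed × tc, duty factor = tc/stride period) can be collected in a few lines. The numeric values in the usage are illustrative only, not data from the study.

```python
def stride_parameters(speed, stride_duration, contact_time):
    """Derived gait measures as defined in the abstract:
    stride frequency (1/stride period), stride length (speed × period),
    step length (speed × tc) and duty factor (tc / stride period).
    Units: speed in m/s, times in s."""
    stride_frequency = 1.0 / stride_duration
    stride_length = speed * stride_duration
    step_length = speed * contact_time
    duty_factor = contact_time / stride_duration
    return stride_frequency, stride_length, step_length, duty_factor
```

For example, at the treadmill speed of 4.5 m/s with a hypothetical 1.0 s stride and 0.3 s contact time, the stride length is 4.5 m and the duty factor 0.3; a 5% longer stride duration at fixed speed lowers the frequency and lengthens the stride, exactly the fatigue pattern reported on the treadmill.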
The P/Halley: Spatial distribution and scale lengths for C2, CN, NH2, and H2O
NASA Technical Reports Server (NTRS)
Fink, Uwe; Combi, Michael; Disanti, Michael A.
1991-01-01
From P/Halley long slit spectroscopic exposures on 12 dates, extending from Oct. 1985 to May 1986, spatial profiles were obtained for emissions by C2, CN, NH2, and O I (1D). Haser model scale lengths were fitted to these data. The extended time coverage allowed checking for consistency between the various dates. The time-varying production rate of P/Halley severely affected the profiles after perihelion, which is shown in two profile sequences on adjacent dates. Because of the time-varying production rate, it was not possible to obtain reliable Haser model scale lengths after perihelion. The pre-perihelion analysis yielded Haser model scale lengths of sufficient consistency that they can be used for production rate determinations, whenever it is necessary to extrapolate from observed column densities within finite observing apertures. Results of scale lengths reduced to 1 AU are given and discussed.
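The Haser model referred to above has a standard closed form for the radial density of parent and daughter species, and scale lengths quoted at 1 AU are commonly rescaled to other heliocentric distances as λ ∝ r_h². The sketch below states those textbook formulas; Q (production rate), v (outflow speed) and the scale lengths in the test are illustrative values, not the paper's fitted results.

```python
import math

def haser_parent_density(r, q, v, lam_p):
    """Haser parent radial number density:
    n_p(r) = Q / (4π v r²) · exp(−r/λ_p),
    with r the cometocentric distance, Q the production rate (s⁻¹),
    v the outflow speed and λ_p the parent scale length."""
    return q / (4.0 * math.pi * v * r ** 2) * math.exp(-r / lam_p)

def haser_daughter_density(r, q, v, lam_p, lam_d):
    """Haser daughter radial number density:
    n_d(r) = Q / (4π v r²) · λ_d/(λ_p − λ_d) · (e^{−r/λ_p} − e^{−r/λ_d})."""
    factor = lam_d / (lam_p - lam_d)
    decay = math.exp(-r / lam_p) - math.exp(-r / lam_d)
    return q / (4.0 * math.pi * v * r ** 2) * factor * decay

def scale_length_at(r_h_au, lam_1au):
    """Rescale a scale length quoted at 1 AU to heliocentric distance
    r_h (in AU), using the common λ ∝ r_h² scaling."""
    return lam_1au * r_h_au ** 2
```

Integrating these densities along lines of sight gives the model column density profiles that are fitted to the observed spatial emission profiles to extract the scale lengths.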
Measurement of scattering lengths using K(π3) decay
NASA Astrophysics Data System (ADS)
Baker, Troy Andrew
2000-10-01
The determination of π-π scattering lengths is of fundamental importance in the study of hadron dynamics. A direct measurement of ππ scattering lengths is impossible due to a lack of processes with just two pions in both the initial and final state; therefore, indirect methods must be used. In the past, πN→ππN and Ke4 decay [1] have been employed. These analyses are complicated by problems of (a) extrapolation to threshold, (b) the contribution of higher multipoles, and (c) inelasticity effects. In this thesis we present a novel analysis of stopped K+π3 decays (K+→π+π0π0) to deduce the scattering lengths (a_0^0 and a_0^2) in a nearly model-independent way. The model of Sawyer and Wali [2], incorporating Chew and Mandelstam's [3] result for ππ scattering, was used to analyze the data. The data set is a kinematically complete determination of Kπ3 decays, a byproduct of the T-violation experiment at KEK [4]. It is fit to an amplitude A(s1, s2, s3) = -
Feng, Edward H.; Crooks, Gavin E.
2008-08-21
An unresolved problem in physics is how the thermodynamic arrow of time arises from an underlying time-reversible dynamics. We contribute to this issue by developing a measure of time-symmetry breaking, and by using the work fluctuation relations, we determine the time asymmetry of recent single-molecule RNA unfolding experiments. We define time asymmetry as the Jensen-Shannon divergence between trajectory probability distributions of an experiment and its time-reversed conjugate. Among other interesting properties, the length of time's arrow bounds the average dissipation and determines the difficulty of accurately estimating free energy differences in nonequilibrium experiments.
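The asymmetry measure used here is the Jensen-Shannon divergence between forward and time-reversed distributions. A minimal sketch for discrete distributions (the experiment itself works with distributions over trajectories; this shows only the divergence):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence (in nats); assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric in p and q, bounded above by ln 2."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

The divergence is zero for identical distributions and saturates at ln 2 for non-overlapping ones, which is why the "length of time's arrow" is a bounded quantity.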
The NIST Length Scale Interferometer
Beers, John S.; Penzes, William B.
1999-01-01
The National Institute of Standards and Technology (NIST) interferometer for measuring graduated length scales has been in use since 1965. It was developed in response to the redefinition of the meter in 1960 from the prototype platinum-iridium bar to the wavelength of light. The history of the interferometer is recalled, and its design and operation described. A continuous program of modernization by making physical modifications, measurement procedure changes and computational revisions is described, and the effects of these changes are evaluated. Results of a long-term measurement assurance program, the primary control on the measurement process, are presented, and improvements in measurement uncertainty are documented.
Ravichandran, R; Binukumar, J P; Sivakumar, S S; Krishnamurthy, K; Davis, C A
2009-01-01
The objective of the present study is to establish radiation standards for absorbed dose for clinical high-energy linear accelerator beams. In the absence of a cobalt-60 beam for arriving at N_D,water values for thimble chambers, we investigated the efficacy of a Perspex-mounted extrapolation chamber (EC) used earlier for low-energy x-ray and beta dosimetry. An extrapolation chamber with a facility for achieving variable electrode separations from 10.5 mm to 0.5 mm using a micrometer screw was used for calibrations. Photon beams of 6 MV and 15 MV and electron beams of 6 MeV and 15 MeV from Varian Clinac linacs were calibrated. Absorbed dose estimates to Perspex were converted into dose to solid water for comparison with FC 65 ionisation chamber measurements in water. Measurements made during the period December 2006 to June 2008 are considered for evaluation. Uncorrected ionization readings of the EC for all the radiation beams over the entire period were within 2%, showing the consistency of measurements. Absorbed doses estimated by the EC were in good agreement with in-water calibrations, within 2% for photon and electron beams. The present results suggest that extrapolation chambers can be considered an independent measuring system for absorbed dose in addition to Farmer-type ion chambers. In the absence of a standard beam quality (cobalt-60 radiation as the reference quality for N_D,water), the possibility of keeping the EC as a primary standard for absorbed dose calibrations in high-energy radiation beams from linacs should be explored. As there are neither standards laboratories nor an SSDL available in our country, we look forward to keeping the EC as a local standard for hospital chamber calibrations. We are also participating in the IAEA mailed TLD intercomparison programme for quality audit of the existing status of radiation dosimetry in high-energy linac beams. The performance of the EC has to be confirmed with cobalt-60 beams in a separate study, as linacs are susceptible to minor variations in dose
Richmond, Orien M W; McEntee, Jay P; Hijmans, Robert J; Brashares, Justin S
2010-09-22
Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial "Pleistocene rewilding" proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate relationships outside of current ranges. Climatic suitability for three proposed proxy species (Asian elephant, African cheetah and African lion) was extrapolated to the American southwest and Great Plains using Maxent, a machine-learning species distribution model. Similar models were fit for Oryx gazella, a species native to Africa that has naturalized in North America, to test model predictions. To overcome biases introduced by contracted modern ranges and limited occurrence data, random pseudo-presence points generated from modern and historical ranges were used for model training. For all species except the oryx, models of climatic suitability fit to training data from historical ranges produced larger areas of predicted suitability in North America than models fit to training data from modern ranges. Four naturalized oryx populations in the American southwest were correctly predicted with a generous model threshold, but none of these locations were predicted with a more stringent threshold. In general, the northern Great Plains had low climatic suitability for all focal species and scenarios considered, while portions of the southern Great Plains and American southwest had low to intermediate suitability for some species in some scenarios. The results suggest that the use of historical, in addition to modern, range information and randomly sampled pseudo-presence points may improve model accuracy. This has implications for modeling range shifts of organisms in response
Barman, Stephen L; Jean, Gary W; Dinsfriend, William M; Gerber, David E
2016-02-01
The treatment of adults who present with rare pediatric tumors is not well characterized in the literature. We report the case of a 40-year-old African American woman with a diagnosis of choroid plexus carcinoma admitted to the intensive care unit for severe sepsis seven days after receiving chemotherapy consisting of carboplatin (350 mg/m(2) on Days 1 and 2) plus etoposide (100 mg/m(2) on Days 1-5). Her laboratory results were significant for an absolute neutrophil count of 0/µL and blood cultures positive for Capnocytophagia species. She was supported with broad-spectrum antibiotics and myeloid growth factors. She eventually recovered and was discharged in stable condition. The management of adults with malignancies most commonly seen in pediatric populations presents substantial challenges. There are multiple age-specific differences in renal and hepatic function that explain the need for higher dosing in pediatric patients without increasing the risk of toxicity. Furthermore, differences in pharmacokinetic parameters such as absorption, distribution, and clearance are present but are less likely to affect patients. The pediatric population is expected to have more bone marrow reserve and, therefore, to be less susceptible to myelosuppression. The extrapolation of pediatric dosing to an adult presents a problematic situation in treating adults with malignancies that primarily affect pediatric patients. We recommend extrapolating from adult treatment regimens with similar agents rather than from pediatric treatment regimens to reduce the risk of toxicity. We also recommend considering the addition of myeloid growth factors. If the treatment is tolerated without significant toxicity, dose escalation can be considered.
Mangold, Stefanie; Gatidis, Sergios; Luz, Oliver; König, Benjamin; Schabel, Christoph; Bongers, Malte N; Flohr, Thomas G; Claussen, Claus D; Thomas, Christoph
2014-12-01
The objective of this study was to retrospectively determine the potential of virtual monoenergetic (ME) reconstructions for a reduction of metal artifacts using a new-generation single-source computed tomographic (CT) scanner. The ethics committee of our institution approved this retrospective study with a waiver of the need for informed consent. A total of 50 consecutive patients (29 men and 21 women; mean [SD] age, 51.3 [16.7] years) with metal implants after osteosynthetic fracture treatment who had been examined using a single-source CT scanner (SOMATOM Definition Edge; Siemens Healthcare, Forchheim, Germany; consecutive dual-energy mode with 140 kV/80 kV) were selected. Using commercially available postprocessing software (syngo Dual Energy; Siemens AG), virtual ME data sets with extrapolated energy of 130 keV were generated (medium smooth convolution kernel D30) and compared with standard polyenergetic images reconstructed with a B30 (medium smooth) and a B70 (sharp) kernel. For quantification of the beam hardening artifacts, CT values were measured on circular lines surrounding bone and the osteosynthetic device, and frequency analyses of these values were performed using discrete Fourier transform. A high proportion of low frequencies to the spectrum indicates a high level of metal artifacts. The measurements in all data sets were compared using the Wilcoxon signed rank test. The virtual ME images with extrapolated energy of 130 keV showed significantly lower contribution of low frequencies after the Fourier transform compared with any polyenergetic data set reconstructed with D30, B70, and B30 kernels (P < 0.001). Sequential single-source dual-energy CT allows an efficient reduction of metal artifacts using high-energy ME extrapolation after osteosynthetic fracture treatment.
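The artifact metric described (Fourier analysis of CT values sampled on a circle around the implant, with a high proportion of low frequencies indicating beam-hardening artifacts) can be sketched as follows; the cutoff k_max is an assumption for illustration, not the authors' choice:

```python
import numpy as np

def low_frequency_fraction(ring_values, k_max=8):
    """Fraction of non-DC spectral power carried by the lowest k_max frequencies
    of CT numbers sampled along a circle; values near 1 suggest strong artifacts."""
    centered = np.asarray(ring_values, dtype=float)
    centered = centered - centered.mean()
    power = np.abs(np.fft.rfft(centered)) ** 2
    power = power[1:]                  # drop the (near-zero) DC term
    return power[:k_max].sum() / power.sum()

theta = 2.0 * np.pi * np.arange(360) / 360.0
streaky = np.sin(2.0 * theta)          # slow variation around the ring (artifact-like)
noisy = np.sin(50.0 * theta)           # rapid variation (noise-like)
```

Comparing this fraction between the ME and polyenergetic reconstructions is the kind of paired test the Wilcoxon analysis above performs.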
Howard, Scarlett R; Avarguès-Weber, Aurore; Garcia, Jair; Dyer, Adrian G
2017-04-03
Learning and applying relational concepts to solve novel tasks is considered an indicator of cognitive-like ability. It requires the abstraction of relational concepts to different objects, independent of the physical nature of the individual objects. Recent research has revealed the honeybee's ability to rapidly learn and manipulate relations between visual stimuli such as 'same/different', 'above/below', or 'larger/smaller' despite having a miniature-sized brain. While honeybees can solve problems using rule-based relative size comparison, it remains unresolved whether bees can apply size rules when stimuli are encountered successively, which requires reliance on working memory for stimuli comparison. Additionally, the potential ability of bees to extrapolate acquired information to novel sizes beyond training sets remains to be investigated. We tested whether individual free-flying honeybees could learn 'larger/smaller' size rules when visual stimuli were presented successively, and whether such rules could then be extrapolated to novel stimulus sizes. Honeybees were individually trained on a set of four sizes such that individual elements might be correct, or incorrect, depending upon the alternative stimulus. In a learning test, bees preferred the correct size relation for their respective learning group. Bees were also able to successfully extrapolate the learnt relation during transfer tests by maintaining the correct size relationships when considering either two smaller, or two larger, novel stimulus sizes. This performance demonstrates that an insect operating in a complex environment has sufficient cognitive capacity to learn rules that can be abstracted to novel problems. We discuss the possible learning mechanisms which allow their success.
Extrapolation of the FOM 1 MW free electron maser to a multi-megawatt millimeter microwave source
Caplan, M.; Valentini, M.; Verhoeven, A.; Urbanus, W.; Tulupov, A.
1996-12-01
A free electron maser is now under test at the FOM Institute (Rijnhuizen), Netherlands, with the goal of producing 1 MW long-pulse to CW microwave output in the range 130 GHz to 250 GHz with wall-plug efficiencies of 60%. An extrapolated version of this device is proposed, which would produce microwave power levels of up to 5 MW CW. This would allow for practical applications in such diverse areas as space power beaming, heating of fusion plasmas and heating of high-Mach-number wind tunnels.
Rong, Lu; Latychevskaia, Tatiana; Wang, Dayong; Zhou, Xun; Huang, Haochong; Li, Zeyu; Wang, Yunxin
2014-07-14
We report here on terahertz (THz) digital holography on a biological specimen. A continuous-wave (CW) THz in-line holographic setup was built based on a 2.52 THz CO(2)-pumped THz laser and a pyroelectric array detector. We introduce a novel statistical method of obtaining true intensity values for the pyroelectric array detector's pixels. Absorption and phase-shifting images of a dragonfly's hindwing were reconstructed simultaneously from a single in-line hologram. Furthermore, we applied phase retrieval routines to eliminate the twin image and enhanced the resolution of the reconstructions by hologram extrapolation beyond the detector area. The finest observed features are cross veins 35 μm in width.
Reynaldo, S R; Benavente, J A; Da Silva, T A
2016-11-01
Beta Secondary Standard 2 (BSS 2) provides beta radiation fields with certified values of absorbed dose to tissue and the derived operational radiation protection quantities. As part of the quality assurance, the reliability of the CDTN BSS2 system was verified through measurements in the (90)Sr/(90)Y and (85)Kr beta radiation fields. Absorbed dose rates and their angular variation were measured with a 23392 model PTW extrapolation chamber and with Gafchromic radiochromic films on a PMMA slab phantom. The feasibility of using both methods was analyzed.
Testable scenario for relativity with minimum length
NASA Astrophysics Data System (ADS)
Amelino-Camelia, G.
2001-06-01
I propose a general class of spacetimes whose structure is governed by observer-independent scales of both velocity (c) and length (Planck length), and I observe that these spacetimes can naturally host a modification of FitzGerald-Lorentz contraction such that lengths which in their inertial rest frame are bigger than a ``minimum length'' are also bigger than the minimum length in all other inertial frames. With an analysis in leading order in the minimum length, I show that this is the case in a specific illustrative example of postulates for relativity with velocity and length observer-independent scales.
NASA Astrophysics Data System (ADS)
Grace, Emily; Butcher, Alistair; Monroe, Jocelyn; Nikkel, James A.
2017-09-01
Large liquid argon detectors have become widely used in low rate experiments, including dark matter and neutrino research. However, the optical properties of liquid argon are not well understood at the large scales relevant for current and near-future detectors. The index of refraction of liquid argon at the scintillation wavelength has not been measured, and current Rayleigh scattering length calculations disagree with measurements. Furthermore, the Rayleigh scattering length and index of refraction of solid argon and solid xenon at their scintillation wavelengths have not been previously measured or calculated. We introduce a new calculation using existing data in liquid and solid argon and xenon to extrapolate the optical properties at the scintillation wavelengths using the Sellmeier dispersion relationship.
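The Sellmeier dispersion relationship expresses the refractive index as a sum of resonance terms. A minimal one-term sketch with hypothetical coefficients (the paper's actual fit uses coefficients derived from existing argon and xenon data, not these values):

```python
import math

# Hypothetical one-term Sellmeier coefficients (illustrative only):
# n^2(lambda) = 1 + B * lambda^2 / (lambda^2 - C), lambda in micrometres.
B = 0.35
C = 0.015  # um^2; places the UV resonance near sqrt(C) ~ 0.12 um

def sellmeier_index(wavelength_um):
    """Refractive index from a single-pole Sellmeier relation (valid above the pole)."""
    lam2 = wavelength_um ** 2
    return math.sqrt(1.0 + B * lam2 / (lam2 - C))
```

Fitting the coefficients to measured indices at longer wavelengths and evaluating the relation near the scintillation wavelength (128 nm for liquid argon) is the kind of extrapolation the abstract describes; the resulting index in turn constrains the Rayleigh scattering length, which falls steeply toward short wavelengths.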
NASA Astrophysics Data System (ADS)
Ekin, Jack W.; Cheggour, Najib; Goodrich, Loren; Splett, Jolene
2017-03-01
In Part 2 of these articles, an extensive analysis of pinning-force curves and raw scaling data was used to derive the Extrapolative Scaling Expression (ESE). This is a parameterization of the Unified Scaling Law (USL) that has the extrapolation capability of fundamental unified scaling, coupled with the application ease of a simple fitting equation. Here in Part 3, the accuracy of the ESE relation to interpolate and extrapolate limited critical-current data to obtain complete I_c(B,T,ε) datasets is evaluated and compared with present fitting equations. Accuracy is analyzed in terms of root mean square (RMS) error and fractional deviation statistics. Highlights from 92 test cases are condensed and summarized, covering most fitting protocols and proposed parameterizations of the USL. The results show that ESE reliably extrapolates critical currents at fields B, temperatures T, and strains ε that are remarkably different from the fitted minimum dataset. Depending on whether the conductor is moderate-J_c or high-J_c, effective RMS extrapolation errors for ESE are in the range 2–5 A at 12 T, which approaches the I_c measurement error (1–2%). The minimum dataset for extrapolating full I_c(B,T,ε) characteristics is also determined from raw scaling data. It consists of one set of I_c(B,ε) data at a fixed temperature (e.g., liquid helium temperature), and one set of I_c(B,T) data at a fixed strain (e.g., zero applied strain). Error analysis of extrapolations from the minimum dataset with different fitting equations shows that ESE reduces the percentage extrapolation errors at individual data points at high fields, temperatures, and compressive strains down to 1/10th to 1/40th the size of those for extrapolations with present fitting equations. Depending on the conductor, percentage fitting errors for interpolations are also reduced to as little as 1/15th the size. The extrapolation accuracy of the ESE relation offers the prospect of straightforward implementation.
Marn, Nina; Klanjscek, Tin; Stokes, Lesley; Jusup, Marko
2015-01-01
Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i) two different regional subsets and (ii) three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications. Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear) model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal.
Ligand chain length conveys thermochromism.
Ganguly, Mainak; Panigrahi, Sudipa; Chandrakumar, K R S; Sasmal, Anup Kumar; Pal, Anjali; Pal, Tarasankar
2014-08-14
Thermochromic properties of a series of non-ionic copper compounds are reported. Herein, we demonstrate that the Cu(II) ion with a straight-chain primary amine (A) and α-linolenic acid (fatty acid, AL) co-jointly exhibits thermochromic properties. In the present case, we determined that thermochromism is ligand chain length-dependent, and at least one of the ligands (A or AL) must be long-chain. Thermochromism is attributed to a balanced competition between the fatty acids and amines for the copper(II) centre. The structure-property relationship of the non-ionic copper compounds Cu(AL)2(A)2 has been substantiated by various physical measurements along with detailed theoretical studies based on time-dependent density functional theory. Our results suggest that the compound would be a useful material for temperature-sensor applications.
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD), and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
Tremblay, Gabriel; Livings, Christopher; Crowe, Lydia; Kapetanakis, Venediktos; Briggs, Andrew
2016-01-01
Background Cost-effectiveness models for the treatment of long-term conditions often require information on survival beyond the period of available data. Objectives This paper aims to identify a robust and reliable method for the extrapolation of overall survival (OS) in patients with radioiodine-refractory differentiated thyroid cancer receiving lenvatinib or placebo. Methods Data from 392 patients (lenvatinib: 261, placebo: 131) from the SELECT trial are used over a 34-month period of follow-up. A previously published criterion-based approach is employed to ascertain credible estimates of OS beyond the trial data. Parametric models with and without a treatment covariate and piecewise models are used to extrapolate OS, and a holistic approach, where a series of statistical and visual tests are considered collectively, is taken in determining the most appropriate extrapolation model. Results A piecewise model, in which the Kaplan–Meier survivor function is used over the trial period and an extrapolated tail is based on the Exponential distribution, is identified as the optimal model. Conclusion In the absence of long-term survival estimates from clinical trials, survival estimates often need to be extrapolated from the available data. The use of a systematic method based on a priori determined selection criteria provides a transparent approach and reduces the risk of bias. The extrapolated OS estimates will be used to investigate the potential long-term benefits of lenvatinib in the treatment of radioiodine-refractory differentiated thyroid cancer patients and populate future cost-effectiveness analyses. PMID:27418847
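The selected piecewise model keeps the Kaplan-Meier estimate over the observed follow-up and attaches an exponential tail at a cutoff. A minimal sketch (the cutoff, step times and rate below are illustrative, not the SELECT trial estimates):

```python
import bisect
import math

def km_step(event_times, surv_values):
    """Build a right-continuous step function S(t) from sorted KM event times
    and the survival estimates just after each event."""
    def S(t):
        i = bisect.bisect_right(event_times, t)
        return 1.0 if i == 0 else surv_values[i - 1]
    return S

def piecewise_survival(S_km, t_star, rate):
    """KM within follow-up; exponential tail S(t_star)*exp(-rate*(t - t_star))
    beyond it, so the extrapolated curve is continuous at the join."""
    s_star = S_km(t_star)
    def S(t):
        return S_km(t) if t <= t_star else s_star * math.exp(-rate * (t - t_star))
    return S

# Illustrative: KM steps at 6, 12, 24 months; exponential tail beyond 34 months
S = piecewise_survival(km_step([6.0, 12.0, 24.0], [0.9, 0.7, 0.5]), 34.0, 0.02)
```

Mean survival for a cost-effectiveness model is then the area under this extrapolated curve, which is why the choice of tail distribution matters so much.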
Jing, Ju; Liu, Chang; Lee, Jeongwoo; Wang, Shuo; Xu, Yan; Wang, Haimin; Wiegelmann, Thomas
2014-03-20
Dynamic phenomena indicative of slipping reconnection and magnetic implosion were found in a time series of nonlinear force-free field (NLFFF) extrapolations for the active region 11515, which underwent significant changes in the photospheric fields and produced five C-class flares and one M-class flare over five hours on 2012 July 2. NLFFF extrapolation was performed for the uninterrupted 5 hour period from the 12 minute cadence vector magnetograms of the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. According to the time-dependent NLFFF model, there was an elongated, highly sheared magnetic flux rope structure that aligned well with an Hα filament. This long filament split sideways into two shorter segments, which further separated from each other over time at a speed of 1-4 km s⁻¹, much faster than that of the footpoint motion of the magnetic field. During the separation, the magnetic arcade arching over the initial flux rope significantly decreased in height from ∼4.5 Mm to less than 0.5 Mm. We discuss the reality of this modeled magnetic restructuring by relating it to the observations of the magnetic cancellation, flares, a filament eruption, a penumbra formation, and magnetic flows around the magnetic polarity inversion line.
Shida, Satomi; Utoh, Masahiro; Murayama, Norie; Shimizu, Makiko; Uno, Yasuhiro; Yamazaki, Hiroshi
2015-01-01
1. Cynomolgus monkeys are widely used in preclinical studies as non-human primate species. Pharmacokinetics of human cytochrome P450 probes determined in cynomolgus monkeys after single oral or intravenous administrations were extrapolated to give human plasma concentrations. 2. Plasma concentrations of slowly eliminated caffeine and R-/S-warfarin and rapidly eliminated omeprazole and midazolam previously observed in cynomolgus monkeys were scaled to human oral biomonitoring equivalents using known species allometric scaling factors and in vitro metabolic clearance data with a simple physiologically based pharmacokinetic (PBPK) model. Results of the simplified human PBPK models were consistent with reported experimental PK data in humans or with values simulated by a fully constructed population-based simulator (Simcyp). 3. Oral administrations of metoprolol and dextromethorphan (human P450 2D probes) in monkeys reportedly yielded plasma concentrations similar to their quantitative detection limits. Consequently, ratios of in vitro hepatic intrinsic clearances of metoprolol and dextromethorphan determined in monkeys and humans were used with simplified PBPK models to extrapolate intravenous PK in monkeys to oral PK in humans. 4. These results suggest that cynomolgus monkeys, despite their rapid clearance of some human P450 substrates, could be a suitable model for humans, especially when used in conjunction with simple PBPK models.
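The species allometric scaling mentioned in this record can be illustrated with the conventional single-species power law; the monkey clearance, body weights, and exponent below are illustrative assumptions, not values from the study.

```python
def allometric_clearance(cl_animal, bw_animal, bw_human=70.0, exponent=0.75):
    """Single-species allometric scaling of clearance (L/h).
    An exponent of 0.75 is the conventional choice for clearance:
    CL_human = CL_animal * (BW_human / BW_animal) ** 0.75."""
    return cl_animal * (bw_human / bw_animal) ** exponent

# hypothetical cynomolgus monkey clearance of 2.0 L/h at 4 kg body weight
cl_h = allometric_clearance(2.0, 4.0)
```

A full PBPK extrapolation, as in the study, would additionally fold in the in vitro metabolic clearance data; this power law is only the interspecies scaling step.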
Leung, Louis; Yang, Xin; Strelevitz, Timothy J; Montgomery, Justin; Brown, Matthew F; Zientek, Michael A; Banfield, Christopher; Gilbert, Adam M; Thorarensen, Atli; Dowty, Martin E
2017-01-01
The concept of target-specific covalent enzyme inhibitors appears attractive from both an efficacy and a selectivity viewpoint considering the potential for enhanced biochemical efficiency associated with an irreversible mechanism. Aside from potential safety concerns, clearance prediction of covalent inhibitors represents a unique challenge due to the inclusion of nontraditional metabolic pathways of direct conjugation with glutathione (GSH) or via GSH S-transferase-mediated processes. In this article, a novel pharmacokinetic algorithm was developed using a series of Pfizer kinase selective acrylamide covalent inhibitors based on their in vitro-in vivo extrapolation of systemic clearance in rats. The algorithm encompasses the use of hepatocytes as an in vitro model for hepatic clearance due to oxidative metabolism and GSH conjugation, and the use of whole blood as an in vitro surrogate for GSH conjugation in extrahepatic tissues. Initial evaluations with clinical covalent inhibitors suggested that the scaling algorithm developed from rats may also be useful for human clearance prediction when species-specific parameters, such as hepatocyte and blood stability and blood binding, were considered. With careful consideration of clearance mechanisms, the described in vitro-in vivo extrapolation approach may be useful to facilitate candidate optimization, selection, and prediction of human pharmacokinetic clearance during the discovery and development of targeted covalent inhibitors.
NASA Astrophysics Data System (ADS)
Liu, Ning; Chen, Xiaohong; Yang, Chao
2016-11-01
During the reconstruction of a digital hologram, the reconstructed image is usually degraded by speckle noise, which makes it hard to observe the original object pattern. In this paper, a new reconstructed image enhancement method is proposed, which first reduces the speckle noise using an adaptive Gaussian filter, then calculates the high frequencies that belong to the object pattern based on a frequency extrapolation strategy. The proposed frequency extrapolation first calculates the frequency spectrum of the Fourier-filtered image, which is originally reconstructed from the +1 order of the hologram, and then gives the initial parameters for an iterative solution. The analytic iteration is implemented by continuous gradient threshold convergence to estimate the image level and vertical gradient information. The predicted spectrum is acquired through the analytical iteration of the original spectrum and gradient spectrum analysis. Finally, the reconstructed spectrum of the restored image is acquired from the synthetic correction of the original spectrum using the predicted gradient spectrum. We conducted our experiment very close to the diffraction limit and used low-quality equipment to prove the feasibility of our method. Detailed analysis and figure demonstrations are presented in the paper.
Sanz-Ruiz, P; Paz, E; Abenojar, J; Del Real, J C; Forriol, F; Vaquero, J
2014-01-01
The use of bone cement is widespread in orthopaedic surgery. Most mechanical tests are performed in a dry medium, making it difficult to extrapolate the results. The objective of this study is to assess whether the mechanical properties of polymethylmethacrylate (PMMA), obtained in previous reports, are preserved in a liquid medium. An experimental study was designed with antibiotic (vancomycin) loaded PMMA. Four groups were defined according to the medium (dry or liquid) and the pre-conditioning time in liquid medium (one week or one month). Wear and flexural strength tests were performed according to ASTM and ISO standards. Volumetric wear, friction coefficient, tensile strength, and Young's modulus were analyzed. All samples were examined by scanning electron microscopy. The samples tested in liquid medium showed lower wear and flexural strength values (P<.05). The wear mechanism changed from abrasive to adhesive in the samples tested in liquid medium. The samples with a pre-conditioning time showed lower wear values (P<.05). Caution is recommended when extrapolating the results of previous PMMA studies performed in dry conditions. The mechanical strength of the cement observed in a saline medium differs, and this condition is much closer to the clinical situation. Copyright © 2013 SECOT. Published by Elsevier España. All rights reserved.
Telomere Length in Elite Athletes.
Muniesa, Carlos A; Verde, Zoraida; Diaz-Ureña, Germán; Santiago, Catalina; Gutiérrez, Fernando; Díaz, Enrique; Gómez-Gallego, Félix; Pareja-Galeano, Helios; Soares-Miranda, Luisa; Lucia, Alejandro
2016-12-05
Growing evidence suggests that regular, moderate-intensity physical activity is associated with an attenuation of leucocyte telomere length (LTL) shortening. However, more controversy exists regarding higher exercise loads, such as those imposed by elite sports participation. We investigated LTL differences between young elite athletes (n=61, 54% men, aged [mean±SD] 27.2±4.9 years) and healthy non-smoker, physically inactive controls (n=64, 52% men, 28.9±6.3 years) using analysis of variance (ANOVA). Elite athletes had, on average, higher LTL than control subjects (0.89±0.26 vs 0.78±0.31, p=0.013 for the group effect, with no significant sex [p=0.995] or age effect [p=0.114]). Our results suggest that young elite athletes have longer telomeres than their inactive peers. Further research might assess the LTL of elite athletes of varying ages compared with both age-matched active and inactive individuals.
NASA Astrophysics Data System (ADS)
Singh, Shailesh Kumar; McMillan, Hilary; Bárdossy, András
2013-01-01
Hydrological models are subject to significant sources of uncertainty including input data, model structure and parameter uncertainty. A key requirement for an operational flow forecasting model is therefore to give accurate estimates of model uncertainty. This estimate is often presented in terms of confidence bounds. The quality and quantity of observed rainfall and flow data available for calibration has a great influence on the identification of hydrological model parameters, and hence the model error distribution and width of the confidence bounds. The information contained in the observed time series is not uniformly distributed, and may not represent all types of behaviour or activation of flow pathways that could occur in the catchment. A model calibrated with data from a given time period could therefore perform well or poorly when evaluated over a new time period, depending on the information content and variability of the calibration data, in relation to the validation period. Our hypothesis is that we can improve the estimate of hydrological predictive uncertainty, based on our knowledge of the range of data available for calibration. If the characteristics of the validation data are similar in information content and variability to those in the calibration period, we term this an "interpolation case", and expect the model errors during calibration to be similar to those in validation. Otherwise, it is an "extrapolation case", where we may expect model errors to be greater. In this study, we developed an algorithm to differentiate cases of 'interpolation' versus 'extrapolation' in the prediction time period. The algorithm is based on the concept of 'data depth', i.e. the location of new data in relation to the convex hull of the calibration data set. Using a case study, we calculated uncertainty bounds for the predictive time period using methods with/without differentiation of interpolation and extrapolation cases. The performance of the
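The 'data depth' idea in this record, classifying prediction-period conditions by their position relative to the convex hull of the calibration data, can be sketched with a Delaunay-based point-in-hull test (toy 2-D data, not the study's catchment):

```python
import numpy as np
from scipy.spatial import Delaunay

def classify_prediction_points(calib, new):
    """Label each new point 'interpolation' if it falls inside the
    convex hull of the calibration data, else 'extrapolation'."""
    tri = Delaunay(calib)                 # triangulation of the calibration set
    inside = tri.find_simplex(new) >= 0   # find_simplex returns -1 outside the hull
    return np.where(inside, "interpolation", "extrapolation")

# calibration data in a 2-D (rainfall, flow) space, hypothetical units
calib = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0], [10.0, 5.0]])
new = np.array([[5.0, 2.5],    # well inside the observed range
                [20.0, 9.0]])  # outside the observed range
labels = classify_prediction_points(calib, new)
```

Real data-depth measures are graded rather than binary, but the in/out hull test captures the core distinction the study draws between the two error regimes.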
Acoustical Measurement Of Mine-Shaft Length
NASA Technical Reports Server (NTRS)
Heyman, Joseph S.
1988-01-01
Acoustical system proposed to measure depth of a "blind" shaft. Acoustic wave guided by shaft and provides estimate of shaft length, from which volume estimated. Acoustic-generator system determines resonant-frequency difference to measure shaft length.
King, A.W.
1986-01-01
Ecological models of the seasonal exchange of carbon dioxide between the atmosphere and the terrestrial biosphere are needed in the study of changes in atmospheric CO₂ concentration. In response to this need, a set of site-specific models of seasonal terrestrial carbon dynamics was assembled from open-literature sources. The collection was chosen as a base for the development of biome-level models for each of the earth's principal terrestrial biomes or vegetation complexes. Two methods of extrapolation were tested. The first approach was a simple extrapolation that assumed relative within-biome homogeneity, and generated CO₂ source functions that differed dramatically from published estimates of CO₂ exchange. The differences were so great that the simple extrapolation was rejected as a means of incorporating site-specific models in a global CO₂ source function. The second extrapolation explicitly incorporated within-biome variability in the abiotic variables that drive seasonal biosphere-atmosphere CO₂ exchange. Simulated site-specific CO₂ dynamics were treated as a function of multiple random variables. The predicted regional CO₂ exchange is the computed expected value of simulated site-specific exchanges for that region times the area of the region. The test involved the regional extrapolation of a tundra and a coniferous forest carbon exchange model. Comparisons between the CO₂ exchange estimated by extrapolation and published estimates of regional exchange for the latitude belt support the appropriateness of extrapolation by expected value.
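The second extrapolation method described above (regional exchange as the expected value of site-specific exchange over the within-biome distribution of abiotic drivers, times the region's area) can be sketched as a Monte Carlo expectation; the site model and temperature distribution below are hypothetical stand-ins:

```python
import numpy as np

def regional_exchange(site_model, driver_samples, region_area):
    """Expected-value extrapolation: evaluate the site-scale model over
    samples of the abiotic driver, average, and multiply by area."""
    fluxes = np.array([site_model(x) for x in driver_samples])
    return fluxes.mean() * region_area

# hypothetical site model: exchange (g C m^-2) as a linear response to temperature
site_model = lambda temp_c: 0.5 * temp_c - 2.0
rng = np.random.default_rng(0)
temps = rng.normal(10.0, 3.0, size=10_000)  # within-biome temperature variability
flux = regional_exchange(site_model, temps, region_area=1.0e6)
```

The rejected first method corresponds to evaluating `site_model` once at a single "representative" driver value, which is why it ignores within-biome variability.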
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service or...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service or...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service...
Study on length distribution of ramie fibers
USDA-ARS?s Scientific Manuscript database
The extra-long length of ramie fibers and the high variation in fiber length have a negative impact on the spinning processes. In order to better study the feature of ramie fiber length, in this research, the probability density function of the mixture model applied in the characterization of cotton...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Length. The linear measurement of cured tobacco leaves from the butt of the midrib to the extreme tip. Length, as an element of quality, does not apply to tobacco in strip form. (See Elements of quality.) [24... 7 Agriculture 2 2010-01-01 2010-01-01 false Length. 29.3037 Section 29.3037 Agriculture...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Length. The linear measurement of cured tobacco leaves from the butt of the midrib to the extreme tip. Length, as an element of quality, does not apply to tobacco in strip form. (See Elements of quality.) ... 7 Agriculture 2 2013-01-01 2013-01-01 false Length. 29.3037 Section 29.3037 Agriculture...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Length. The linear measurement of cured tobacco leaves from the butt of the midrib to the extreme tip. Length, as an element of quality, does not apply to tobacco in strip form. (See Elements of quality.) [24... 7 Agriculture 2 2011-01-01 2011-01-01 false Length. 29.3037 Section 29.3037 Agriculture...
Code of Federal Regulations, 2014 CFR
2014-01-01
... Length. The linear measurement of cured tobacco leaves from the butt of the midrib to the extreme tip. Length, as an element of quality, does not apply to tobacco in strip form. (See Elements of quality.) ... 7 Agriculture 2 2014-01-01 2014-01-01 false Length. 29.3037 Section 29.3037 Agriculture...
NASA Technical Reports Server (NTRS)
Furillo, F. T.; Purushothaman, S.; Tien, J. K.
1977-01-01
The Larson-Miller (L-M) method of extrapolating stress rupture and creep results is based on the contention that the absolute temperature-compensated time function should have a unique value for a given material. This value should depend only on the applied stress level. The L-M method has been found satisfactory in the case of many steels and superalloys. The derivation of the L-M relation is discussed, taking into account a power law creep relationship considered by Dorn (1965) and Barrett et al. (1964), a correlation expression reported by Garofalo et al. (1961), and relations concerning the constant C. Attention is given to a verification of the validity of the considered derivation with the aid of suitable materials.
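The L-M relation discussed here is LMP = T(C + log₁₀ t_r), with T the absolute temperature, t_r the rupture life in hours, and C a material constant (commonly near 20). A minimal sketch of the extrapolation it enables, with hypothetical numbers:

```python
import math

def larson_miller(temp_k, time_h, c=20.0):
    """Larson-Miller parameter LMP = T*(C + log10(t_r)); for a given
    material and stress level it should take a unique value, which is
    what makes the time-temperature extrapolation possible."""
    return temp_k * (c + math.log10(time_h))

def rupture_time(lmp, temp_k, c=20.0):
    """Invert the LMP to predict rupture life at another temperature."""
    return 10.0 ** (lmp / temp_k - c)

# a hypothetical alloy with a 1000 h rupture life at 900 K
lmp = larson_miller(900.0, 1000.0)   # 900 * (20 + 3) = 20700
t_850 = rupture_time(lmp, 850.0)     # predicted (much longer) life at 850 K
```

This is how short, hot tests are used to predict long service lives at lower temperatures; the validity of a constant C is exactly what the derivation in the record examines.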
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; Contributors, JET
2017-06-01
This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted T_e with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good matching of predicted T_i with experimental measurements, allowing for a more accurate prediction of the neutron yields. The impact of input data and assumptions prescribed in the simulations is also investigated in this paper. The statistical validation and the assessment of the uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.
Wang, Zhen; Leung, Kenneth M Y
2015-10-01
Unionised ammonia (NH3) is highly toxic to freshwater organisms. Yet, most of the available toxicity data on NH3 were generated in temperate regions, while toxicity data on NH3 derived from tropical species are limited. To address this issue, we first conducted standard acute toxicity tests on NH3 using ten tropical freshwater species. Subsequently, we constructed a tropical species sensitivity distribution (SSD) using these newly generated toxicity data and available tropical toxicity data of NH3, which was then compared with the corresponding temperate SSD constructed from documented temperate acute toxicity data. Our results showed that tropical species were generally more sensitive to NH3 than their temperate counterparts. Based on the ratio between the temperate and tropical 10% hazardous concentration (HC10) values, we recommend an extrapolation factor of four to be applied when surrogate temperate toxicity data or temperate water quality guidelines for NH3 are used to protect tropical freshwater ecosystems.
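The SSD-based derivation of a hazardous concentration, and the recommended factor-of-four adjustment, can be sketched by fitting a log-normal SSD to acute toxicity values; the LC50 values below are hypothetical, not the paper's data:

```python
import math
import statistics
from statistics import NormalDist

def hc_p(toxicity_values, p=0.10):
    """Hazardous concentration affecting a fraction p of species, from a
    log-normal SSD fitted to acute toxicity values (same units as input)."""
    logs = [math.log10(v) for v in toxicity_values]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    # p-th quantile of the fitted log10-normal distribution
    return 10.0 ** NormalDist(mu, sigma).inv_cdf(p)

# hypothetical temperate LC50 values for NH3 (mg/L); the temperate HC10 is
# divided by the recommended factor of four for tropical protection
temperate_lc50 = [1.2, 2.5, 0.8, 3.0, 1.6, 2.1, 0.9, 4.2]
hc10_temperate = hc_p(temperate_lc50)
tropical_guideline = hc10_temperate / 4.0
```

Regulatory SSD fitting usually also reports confidence limits on the HC value; this sketch shows only the point estimate that the extrapolation factor is applied to.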
Schwahofer, Andrea; Bär, Esther; Kuchenbecker, Stefan; Grossmann, J Günter; Kachelrieß, Marc; Sterzing, Florian
2015-12-01
Metal artifacts in computed tomography (CT) images are one of the main problems in radiation oncology as they introduce uncertainties to target and organ-at-risk delineation as well as dose calculation. This study is devoted to metal artifact reduction (MAR) based on the monoenergetic extrapolation of a dual energy CT (DECT) dataset. In a phantom study the CT artifacts caused by metals with different densities: aluminum (ρ_Al = 2.7 g/cm³), titanium (ρ_Ti = 4.5 g/cm³), steel (ρ_steel = 7.9 g/cm³) and tungsten (ρ_W = 19.3 g/cm³) have been investigated. Data were collected using a clinical dual-source DECT scanner (Siemens Healthcare Sector, Forchheim, Germany) with tube voltages of 100 kV and 140 kV(Sn). For each tube voltage the data set in a given volume was reconstructed. Based on these two data sets a voxel-by-voxel linear combination was performed to obtain the monoenergetic data sets. The results were evaluated regarding the optical properties of the images as well as the CT values (HU) and the dosimetric consequences in computed treatment plans. A data set without metal substitute served as the reference. In addition, a head-and-neck patient with dental fillings (amalgam, ρ = 10 g/cm³) was scanned with a single energy CT (SECT) protocol and a DECT protocol. The monoenergetic extrapolation was performed as described above and evaluated in the same way. Visual assessment of all data shows minor reductions of artifacts in the images with aluminum and titanium at a monoenergy of 105 keV. As expected, the higher the density, the more distinctive the artifacts. For metals with higher densities such as steel or tungsten, no artifact reduction was achieved. Likewise, no improvement in the CT values is detected with the monoenergetic extrapolation. The dose was evaluated at a point 7 cm behind the isocenter of a static field. Small improvements (around 1%) can be seen with 105 keV. However, the dose uncertainty remains of the order of 10
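The voxel-by-voxel linear combination used for the monoenergetic extrapolation can be sketched as a simple weighted blend of the two reconstructions; the weight and HU values below are illustrative, and a real implementation derives energy-dependent weights from a basis-material decomposition rather than using a fixed scalar:

```python
import numpy as np

def monoenergetic_image(img_low, img_high, w):
    """Voxel-by-voxel linear combination of the low-kV and high-kV DECT
    reconstructions; w is the weight chosen so the blend approximates a
    virtual monoenergetic image (e.g. around 105 keV in the study)."""
    return w * np.asarray(img_low) + (1.0 - w) * np.asarray(img_high)

# toy 2x2 HU maps from the 100 kV and 140 kV(Sn) acquisitions (hypothetical)
low = np.array([[60.0, 400.0], [55.0, 390.0]])
high = np.array([[40.0, 300.0], [38.0, 310.0]])
mono = monoenergetic_image(low, high, w=0.3)
```

The blend suppresses beam-hardening streaks only insofar as the two inputs disagree on them, which is consistent with the study's finding that very dense metals (steel, tungsten) are not helped.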
Scott, B.R.; Muggenburg, B.A.; Welsh, C.A.; Angerstein, D.A.
1994-11-01
The alpha emitter plutonium-238 (²³⁸Pu), which is produced in uranium-fueled, light-water reactors, is used as a thermoelectric power source for space applications. Inhalation of a mixed oxide form of Pu is the most likely mode of exposure of workers and the general public. Occupational exposures to ²³⁸PuO₂ have occurred in association with the fabrication of radioisotope thermoelectric generators. Organs and tissue at risk for deterministic and stochastic effects of ²³⁸Pu-alpha irradiation include the lung, liver, skeleton, and lymphatic tissue. Little has been reported about the effects of inhaled ²³⁸PuO₂ on peripheral blood cell counts in humans. The purpose of this study was to investigate hematological responses after a single inhalation exposure of Beagle dogs to alpha-emitting ²³⁸PuO₂ particles and to extrapolate results to humans.
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario
2017-06-01
By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
Lee, Yung-Shan; Lo, Justin C; Otton, S Victoria; Moore, Margo M; Kennedy, Chris J; Gobas, Frank A P C
2017-07-01
Incorporating biotransformation in bioaccumulation assessments of hydrophobic chemicals in both aquatic and terrestrial organisms in a simple, rapid, and cost-effective manner is urgently needed to improve bioaccumulation assessments of potentially bioaccumulative substances. One approach to estimate whole-animal biotransformation rate constants is to combine in vitro measurements of hepatic biotransformation kinetics with in vitro to in vivo extrapolation (IVIVE) and bioaccumulation modeling. An established IVIVE modeling approach exists for pharmaceuticals (referred to in the present study as IVIVE-Ph) and has recently been adapted for chemical bioaccumulation assessments in fish. The present study proposes and tests an alternative IVIVE-B technique to support bioaccumulation assessment of hydrophobic chemicals with a log octanol-water partition coefficient (K_OW) ≥ 4 in mammals. The IVIVE-B approach requires fewer physiological and physiochemical parameters than the IVIVE-Ph approach and does not involve interconversions between clearance and rate constants in the extrapolation. Using in vitro depletion rates, the results show that the IVIVE-B and IVIVE-Ph models yield similar estimates of rat whole-organism biotransformation rate constants for hypothetical chemicals with log K_OW ≥ 4. The IVIVE-B approach generated in vivo biotransformation rate constants and biomagnification factors (BMFs) for benzo[a]pyrene that are within the range of empirical observations. The proposed IVIVE-B technique may be a useful tool for assessing BMFs of hydrophobic organic chemicals in mammals. Environ Toxicol Chem 2017;36:1934-1946. © 2016 SETAC.
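A deliberately simplified sketch of the IVIVE idea, scaling an in vitro hepatocyte depletion rate up to a whole-organism biotransformation rate constant, is shown below; all parameter values are hypothetical placeholders, and the actual IVIVE-B/IVIVE-Ph models include binding corrections and blood-flow limitations omitted here:

```python
def whole_body_biotransform_rate(
        deplete_rate_h,           # first-order in vitro depletion rate (1/h)
        cells_per_g_liver=110e6,  # hepatocellularity (cells per g liver)
        cells_in_vial=0.5e6,      # hepatocytes in the incubation
        liver_g=10.0,             # liver mass (g), roughly rat-sized
        v_dist_l=2.0,             # apparent volume of distribution (L)
        incubation_ml=0.5):
    """Simplified IVIVE: in vitro depletion -> incubation clearance ->
    whole-liver clearance -> whole-body rate constant (1/h)."""
    # intrinsic clearance of the incubation itself (L/h)
    cl_vial = deplete_rate_h * (incubation_ml / 1000.0)
    # scale by cell numbers from the vial up to the whole liver
    cl_liver = cl_vial * (cells_per_g_liver * liver_g) / cells_in_vial
    # convert a clearance into a first-order whole-body rate constant
    return cl_liver / v_dist_l

k_b = whole_body_biotransform_rate(0.2)  # hypothetical 0.2 1/h depletion
```

The study's point that IVIVE-B avoids "interconversions between clearance and rate constants" refers to exactly the last two steps above, which IVIVE-Ph performs explicitly.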
Leal, Cinira; Bean, Kathy; Thomas, Frédérique; Chaix, Basile
2011-09-01
We investigated whether neighborhood socioeconomic characteristics, measured within person-centered areas (i.e., centered on individuals' residences), are associated with body mass index (BMI [kg/m²]) and waist circumference. We used propensity-score matching as a diagnostic and validation tool to examine whether socio-spatial segregation (and related structural confounding) allowed us to estimate neighborhood socioeconomic effects adjusted for individual socioeconomic characteristics without excessive model extrapolations. Using the RECORD (Residential Environment and CORonary heart Disease) Cohort Study, we conducted cross-sectional analyses of 7230 adults from the Paris region. We first estimated the relationships of 3 neighborhood socioeconomic indicators (education, income, real estate prices) with BMI and waist circumference using traditional multilevel regression models adjusted for individual covariates. Second, we examined whether these associations persisted when estimated among participants exchangeable based on their probability of living in low-socioeconomic-status neighborhoods (propensity-score matched samples). After adjustment for covariates, BMI/waist circumference increased with decreasing neighborhood socioeconomic status, especially with neighborhood education measured within 500-m radius buffers around residences; associations were stronger for women. With propensity-score matching techniques, there was some overlap in the odds of exposure between exposed and unexposed populations. As a function of socio-spatial segregation and an indicator of whether the data support inferences, sample size decreased by 17%-59% from the initial to the propensity-score matched samples. Propensity-score matched models confirmed relationships obtained from models in the entire sample. Overall, adjusted associations between neighborhood socioeconomic variables and BMI/waist circumference were empirically estimable in the French context, without excessive model
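The propensity-score matching step used as a diagnostic here can be sketched as greedy nearest-neighbour matching on precomputed scores; the scores and caliper below are hypothetical, and production analyses would fit the scores with a regression model and use optimal rather than greedy matching:

```python
def nn_propensity_match(ps_exposed, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores (here,
    the probability of living in a low-SES neighborhood). Pairs farther
    apart than the caliper are discarded, which is how the matched
    sample shrinks relative to the full cohort."""
    available = list(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_exposed):
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            pairs.append((i, j))
            available.remove(j)  # each control is used at most once
    return pairs

ps_exposed = [0.62, 0.35, 0.90]       # hypothetical exposed-group scores
ps_control = [0.60, 0.40, 0.10, 0.33]  # hypothetical control-group scores
pairs = nn_propensity_match(ps_exposed, ps_control)
```

The unmatched exposed unit (score 0.90) illustrates the 17%-59% sample-size loss the study reports: units without a comparable counterpart are dropped rather than extrapolated over.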
Margiotta-Casaluci, Luigi; Owen, Stewart F.; Cumming, Rob I.; de Polo, Anna; Winter, Matthew J.; Panter, Grace H.; Rand-Weaver, Mariann; Sumpter, John P.
2014-01-01
Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, equal and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of measured internal dose response effect of a pharmaceutical in fish, hence validating the Read-Across hypothesis applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the
NASA Astrophysics Data System (ADS)
Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming
2015-03-01
The primary objective of this study is to improve deterministic high-resolution forecasts of rainfall caused by severe storms by merging an extrapolation-based radar scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model, the Advanced Regional Prediction System (ARPS), for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times under 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected forecasts. Moreover, optimal merging with the hyperbolic tangent weight scheme further improved the forecast accuracy and was more stable.
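The hyperbolic tangent weighting used to merge the radar extrapolation with the corrected NWP forecast can be sketched as follows; the midpoint and steepness parameters are illustrative guesses, not the paper's calibrated values:

```python
import numpy as np

def tanh_weight(lead_min, t_mid=45.0, steep=0.1):
    """Weight on the radar extrapolation, decaying with lead time via a
    hyperbolic tangent so the blend hands over smoothly to the NWP
    forecast around t_mid minutes."""
    return 0.5 * (1.0 - np.tanh(steep * (lead_min - t_mid)))

def merge_forecasts(radar_fx, nwp_fx, lead_min):
    """Convex combination of the two rainfall forecasts at a lead time."""
    w = tanh_weight(lead_min)
    return w * radar_fx + (1.0 - w) * nwp_fx

# hypothetical rain-rate forecasts (mm/h) from the two sources
early = merge_forecasts(10.0, 4.0, lead_min=5.0)   # extrapolation dominates
late = merge_forecasts(10.0, 4.0, lead_min=90.0)   # NWP dominates
```

The tanh shape matches the skill crossover in the record: extrapolation wins at short lead times, NWP beyond about 50 min, with a smooth transition in between.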
Tien, Christopher J; Winslow, James F; Hintenlang, David E
2011-01-31
In helical computed tomography (CT), reconstruction information from volumes adjacent to the clinical volume of interest (VOI) is required for proper reconstruction. Previous studies have relied upon either operator console readings or indirect extrapolation of measurements in order to determine the over-ranging length of a scan. This paper presents a methodology for the direct quantification of over-ranging dose contributions using real-time dosimetry. A Siemens SOMATOM Sensation 16 multislice helical CT scanner is used with a novel real-time "point" fiber-optic dosimeter system with 10 ms temporal resolution to measure over-ranging length, which is also expressed as dose-length product (DLP). Film was used to benchmark the exact length of over-ranging. Over-ranging length varied from 4.38 cm at a pitch of 0.5 to 6.72 cm at a pitch of 1.5, corresponding to DLPs of 131 to 202 mGy-cm. The dose-extrapolation method of Van der Molen et al. yielded results within 3%, while the console-reading method of Tzedakis et al. yielded consistently larger over-ranging lengths. From film measurements, it was determined that Tzedakis et al. overestimated over-ranging lengths by one-half of the beam collimation width. Over-ranging length measured as a function of reconstruction slice thickness produced two linear regions, similar to previous publications. Over-ranging is quantified with both absolute length and DLP, contributing about 60 mGy-cm or about 10% of the DLP for a routine abdominal scan. This paper presents a direct physical measurement of over-ranging length within 10% of previous methodologies. Current uncertainties are less than 1%, compared with 5% for other methodologies. Clinical implementation can be simplified by using only one dosimeter if codependence with console readings is acceptable, with an uncertainty of 1.1%. This methodology will be applied to different vendors, models, and postprocessing methods--which have been shown to produce over-ranging lengths
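A direct estimate of over-ranging from a real-time dose-rate trace can be sketched as follows; the 5% thresholding rule and the synthetic square-pulse trace are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def over_ranging_length(times_s, dose_rate, table_speed_mm_s,
                        planned_len_mm, thresh_frac=0.05):
    """Estimate over-ranging length from a real-time point-dosimeter trace.

    The beam-on interval is taken as the span where the dose rate exceeds
    a small fraction of its peak; irradiated length = beam-on time x table
    speed, and over-ranging is the excess beyond the planned scan length.
    The 5% threshold is an illustrative choice, not from the paper.
    """
    on = times_s[dose_rate > thresh_frac * dose_rate.max()]
    irradiated_mm = (on[-1] - on[0]) * table_speed_mm_s
    return irradiated_mm - planned_len_mm

# Synthetic 10 s trace sampled at 10 ms (the dosimeter's resolution),
# with the beam on from t = 2 s to t = 8 s.
t = np.arange(0.0, 10.0, 0.01)
rate = ((t >= 2.0) & (t < 8.0)).astype(float)
excess_mm = over_ranging_length(t, rate, table_speed_mm_s=30.0,
                                planned_len_mm=150.0)
```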
Inheritance of Telomere Length in a Bird
Horn, Thorsten; Robertson, Bruce C.; Will, Margaret; Eason, Daryl K.; Elliott, Graeme P.; Gemmell, Neil J.
2011-01-01
Telomere dynamics are intensively studied in human ageing research and epidemiology, with many correlations reported between telomere length and age-related diseases, cancer and death. While telomere length is influenced by environmental factors, there is also good evidence for a strong heritable component. In humans, the mode of telomere length inheritance appears to be paternal, and telomere length differs between sexes, with females having longer telomeres than males. Genetic factors, e.g. sex chromosomal inactivation, and non-genetic factors, e.g. antioxidant properties of oestrogen, have been suggested as possible explanations for this sex-specific telomere inheritance and these telomere length differences. To test the influence of sex chromosomes on telomere length, we investigated the inheritance and sex-specificity of telomere length in a bird species, the kakapo (Strigops habroptilus), in which females are the heterogametic (ZW) sex and males are the homogametic (ZZ) sex. We found that, contrary to findings in humans, telomere length was maternally inherited and also longer in males. These results argue against an effect of sex hormones on telomere length and suggest that factors associated with heterogamy may play a role in telomere inheritance and sex-specific differences in telomere length. PMID:21364951
Gulliver, John; de Hoogh, Kees; Hoek, Gerard; Vienneau, Danielle; Fecht, Daniela; Hansell, Anna
2016-01-01
Robust methods to estimate historic population air pollution exposures are important tools for epidemiological studies evaluating long-term health effects. We developed land use regression (LUR) models for NO2 exposure in Great Britain for 1991 and explored whether the choice of a year-specific or back-extrapolated LUR yields 1) similar LUR variables and model performance, and 2) similar national and regional address-level and small-area concentrations. We constructed two LUR models for 1991 using NO2 concentrations from the diffusion tube monitoring network, one using 75% of all available measurement sites (which over-represent industrial areas), and the other using 75% of a subset of sites proportionate to population by region, to study the effects of monitoring site selection bias. We compared, using the remaining (hold-out) 25% of monitoring sites, the performance of the two 1991 models with back-extrapolation of a previously published 2009 model, developed using NO2 concentrations from automatic chemiluminescence monitoring sites and predictor variables from 2006/2007. The 2009 model was back-extrapolated to 1991 using the same predictors (1990 & 1995) used to develop the 1991 models. The 1991 models included industrial land use variables, not present for 2009. The hold-out performance of the 1991 models (mean-squared-error-based R²: 0.62-0.64) was up to 8% higher, and the root mean squared error ~1 µg/m³ lower, than for the back-extrapolated 2009 model, with the best performance from the subset of sites representing population exposures. Year-specific and back-extrapolated exposures for residential addresses (n = 1,338,399) and small areas (n = 10,518) were very highly linearly correlated for Great Britain (r > 0.83). This study suggests that the year-specific model for 1991 and back-extrapolation of the 2009 LUR yield similar exposure assessments.
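The hold-out comparison uses a mean-squared-error-based R² (which, unlike a correlation-based R², penalizes systematic bias) together with RMSE; a minimal sketch of these two metrics, with hypothetical concentration values:

```python
import numpy as np

def holdout_r2_rmse(obs, pred):
    """Mean-squared-error-based R^2 and RMSE against hold-out observations.

    R^2 = 1 - MSE/var(obs), so a constant bias lowers the score even
    when predictions are perfectly correlated with observations.
    """
    mse = np.mean((obs - pred) ** 2)
    return 1.0 - mse / np.var(obs), np.sqrt(mse)

obs = np.array([10.0, 20.0, 30.0, 40.0])    # hypothetical NO2, ug/m3
r2, rmse = holdout_r2_rmse(obs, obs + 5.0)  # biased but perfectly correlated
```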
Ford, William Paul; van Orden, Wally
2013-11-25
In this work, an off-shell extrapolation is proposed for the Regge-model NN amplitudes presented in a paper by Ford and Van Orden [Phys. Rev. C 87, 014004 (2013)] and in an eprint by Ford (arXiv:1310.0871 [nucl-th]). The prescriptions for extrapolating these amplitudes for one nucleon off-shell in the initial state are presented. Applications of these amplitudes to calculations of deuteron electrodisintegration are presented and compared to the limited available precision data in the kinematical region covered by the Regge model.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1982-01-01
The first of the two methods described establishes absolute rain-fade statistics at a given site by means of a sampled radar database. The second extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain-rate statistics at the former. Both methods employ similar conditional fade-statistic concepts and long-term rain-rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
Length scale of interaction in spatiotemporal chaos.
Stahlke, Dan; Wackerbauer, Renate
2011-04-01
Extensive systems have no long-range correlations and behave as a sum of their parts. Various techniques are introduced to determine a characteristic length scale of interaction beyond which spatiotemporal chaos is extensive in reaction-diffusion networks. Information about network size, boundary conditions, or abnormalities in network topology gets scrambled in spatiotemporal chaos, and the attenuation of information provides such characteristic length scales. Space-time information flow associated with the recovery of spatiotemporal chaos from finite perturbations, a concept somewhat opposite to the paradigm of Lyapunov exponents, defines another characteristic length scale. High-precision computational studies of asymptotic spatiotemporal chaos in the complex Ginzburg-Landau system and of transient spatiotemporal chaos in the Gray-Scott network show that these different length scales are comparable and thus suitable to define a length scale of interaction. Preliminary studies demonstrate the relevance of these length scales for stable chaos.
The long persistence length of model tubules.
Stevens, Mark J
2017-07-28
Young's elastic modulus and the persistence length are calculated for a coarse-grained model of tubule-forming polymers. The model uses a wedge-shaped composite of particles that has previously been shown to self-assemble into tubules. These calculations demonstrate that the model yields very large persistence lengths (corresponding to 78-126 μm) that are comparable to those observed in experiments, for the microtubule lengths accessible to the calculations. The source of the stiffness is the restricted rotation of the monomer due to the excluded-volume interactions between bonded macromolecular monomers as well as the binding between monomers. For this reason, large persistence lengths are common in tubule systems with a macromolecule as the monomer. The persistence length increases linearly with increased binding strength in the filament direction. No dependence of the persistence length on the tubule pitch was found for geometries with the protofilaments remaining straight.
NASA Astrophysics Data System (ADS)
Verbovšek, T.
2009-04-01
Fractal dimensions of fracture networks (D) are usually determined from 2-D objects, such as digitized fracture traces in outcrops. Sometimes extrapolations to higher dimensions are required, if the measurements (for example, fracture traces in boreholes or scanlines) are performed in a 1-D environment (D_1-D) and are later upscaled to higher dimensions (D_2-D). For isotropic fractals this relation should be straightforward according to theory: D_2-D = D_1-D + 1, as the intersection of a 2-D fractal with a plane results in a fractal with D_1-D equal to D_2-D minus one. Some authors have questioned this relation and proposed different empirical relationships. Still, there exist very few field studies of natural fracture networks to support or test such a relationship. The study was therefore focused on the analysis of 23 natural fracture networks in Triassic dolomites in Slovenia. The traces of these fractures were analyzed separately in both 1-D and 2-D environments, and the relationships between the obtained fractal dimensions were determined. For the 2-D data, digitized images of fracture traces at 2048 × 2048 pixel resolution were analyzed by the box-counting method, considering truncation and censoring effects (the 'cut-off' method, using only the valid data right of the cut-off points) and also considering the complete data range interval (the 'full' method). These values were consequently compared to the 1-D values, which were obtained by dissecting the images in both the x- and y-directions into 2048 smaller linear images of 1-pixel width, simulating the intersection with a plane. Such line images were then examined by the fracture line-counting method, a 1-D equivalent of the box-counting technique. Results show that the values of all fractal dimensions, regardless of the different fracture networks or the method used, lie in a very narrow data range, and the standard deviations are very small (up to 0.03). The small range can be attributed to a similar fracturing
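The box-counting method used in the study can be sketched generically; the image content and box sizes here are arbitrary illustrations, not the study's data:

```python
import numpy as np

def box_counting_dimension(image, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting fractal dimension of a binary 2-D array.

    Counts occupied boxes N(s) for each box size s, then fits
    log N(s) = -D log s + c; D is the negative slope.
    """
    img = np.asarray(image, dtype=bool)
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then flag boxes holding any pixel.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled plane region has D = 2, a straight line D = 1,
# consistent with the dimension dropping by one under intersection.
filled = np.ones((256, 256), dtype=bool)
line = np.zeros((256, 256), dtype=bool)
line[128, :] = True
d_plane = box_counting_dimension(filled)  # ~2.0
d_line = box_counting_dimension(line)     # ~1.0, i.e. d_plane - 1
```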
NASA Astrophysics Data System (ADS)
Ekin, Jack W.; Cheggour, Najib; Goodrich, Loren; Splett, Jolene; Bordini, Bernardo; Richter, David
2016-12-01
A scaling study of several thousand Nb3Sn critical-current (I_c) measurements is used to derive the Extrapolative Scaling Expression (ESE), a relation that can quickly and accurately extrapolate limited datasets to obtain full three-dimensional dependences of I_c on magnetic field (B), temperature (T), and mechanical strain (ε). The relation has the advantage of being easy to implement, and offers significant savings in sample characterization time and a useful tool for magnet design. Thorough data-based analysis of the general parameterization of the Unified Scaling Law (USL) shows the existence of three universal scaling constants for practical Nb3Sn conductors. The study also identifies the scaling parameters that are conductor specific and need to be fitted to each conductor. This investigation includes two new, rare, and very large I_c(B,T,ε) datasets (each with nearly a thousand I_c measurements spanning magnetic fields from 1 to 16 T, temperatures from ~2.26 to 14 K, and intrinsic strains from -1.1% to +0.3%). The results are summarized in terms of the general USL parameters given in table 3 of Part 1 (Ekin J W 2010 Supercond. Sci. Technol. 23 083001) of this series of articles. The scaling constants determined for practical Nb3Sn conductors are: the upper-critical-field temperature parameter v = 1.50 ± 0.04; the cross-link parameter w = 3.0 ± 0.3; and the strain curvature parameter u = 1.7 ± 0.1 (from equation (29) for b_c2(ε) in Part 1). These constants and the required fitting parameters result in the ESE relation, given by I_c(B,T,ε) B = C [b_c2(ε)]^s (1 - t^1.5)^(η-μ) (1 - t^2)^μ b^p (1 - b)^q, with reduced magnetic field b ≡ B/B_c2*(T,ε) and reduced temperature t ≡ T/T_c*(ε), where B_c2*(T,ε) = B_c2*(0,0) (1 - t^1.5) b_c2(ε) and T_c*(ε) = T_c*(0) [b_c2(ε)]^(1/3), and fitting parameters: C, B_c2*(0,0), T_c*(0), s, either η or μ (but not both), plus the parameters in the strain function b_c2
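The ESE relation above can be evaluated directly once the fitting parameters and the strain function b_c2(ε) are supplied; the parameter values and the parabolic strain function below are placeholders for illustration, not fitted Nb3Sn constants:

```python
import numpy as np

def ese_ic(B, T, eps, C, Bc20, Tc0, s, p, q, eta, mu, bc2):
    """Evaluate the ESE relation for the critical current I_c(B, T, eps).

    bc2 is the dimensionless strain function b_c2(eps). All parameter
    values passed below are placeholders, not fitted Nb3Sn constants.
    """
    t = T / (Tc0 * bc2(eps) ** (1.0 / 3.0))      # reduced temperature
    Bc2 = Bc20 * (1.0 - t ** 1.5) * bc2(eps)     # B_c2*(T, eps)
    b = B / Bc2                                  # reduced magnetic field
    return (C / B) * bc2(eps) ** s * (1.0 - t ** 1.5) ** (eta - mu) \
        * (1.0 - t ** 2) ** mu * b ** p * (1.0 - b) ** q

# Toy parabolic strain function peaking at eps = 0 (an assumption).
bc2 = lambda eps: 1.0 - 900.0 * eps ** 2

params = dict(C=3.0e4, Bc20=28.0, Tc0=17.0, s=1.0,
              p=0.5, q=2.0, eta=2.5, mu=2.0, bc2=bc2)
ic_5T = ese_ic(B=5.0, T=4.2, eps=0.0, **params)
ic_10T = ese_ic(B=10.0, T=4.2, eps=0.0, **params)  # lower: I_c falls with B
```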
Measuring Crack Length in Coarse Grain Ceramics
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Ghosn, Louis J.
2010-01-01
Due to a coarse grain structure, crack lengths in precracked spinel specimens could not be measured optically, so the crack lengths and fracture toughness were estimated from strain gage measurements. An expression was developed via finite element analysis to correlate the measured strain with crack length in four-point flexure. The fracture toughness values estimated from the strain-gaged samples and from another standardized method were in agreement.
Controlling Arc Length in Plasma Welding
NASA Technical Reports Server (NTRS)
Iceland, W. F.
1986-01-01
Circuit maintains arc length on irregularly shaped workpieces. Length of plasma arc continuously adjusted by control circuit to maintain commanded value. After pilot arc is established, contactor closed and transfers arc to workpiece. Control circuit then half-wave rectifies ac arc voltage to produce dc control signal proportional to arc length. Circuit added to plasma arc welding machines with few wiring changes. Welds made with circuit cleaner and require less rework than welds made without it. Beads smooth and free of inclusions.
Axial Globe Length in Congenital Ptosis.
Takahashi, Yasuhiro; Kang, Hyera; Kakizaki, Hirohiko
2015-01-01
To compare axial globe length between the affected and unaffected sides in patients with unilateral congenital ptosis. This prospective observational study included 37 patients (age range: 7 months to 58 years). The axial globe length, margin reflex distance-1 (MRD-1), and refractive power were measured. The axial globe length difference was calculated by subtracting the axial globe length on the unaffected side from that on the affected side. The relationships among axial globe length differences, MRD-1 on the affected sides, and patient ages were analyzed using multiple regression analysis. No significant differences were found in the axial globe length between sides (P = .677). The axial globe length difference was 0.17 ± 0.30 mm (mean ± standard deviation), and two patients (5.4%), aged 32 to 57 years, showed an axial globe length more than 0.67 mm longer (corresponding to a refractive power of 2 diopters) on the affected side compared to the unaffected side. The multiple regression model relating axial globe length difference to patient age and MRD-1 on the affected side showed a poor fit (Y_AGL = 0.003 X_AGE - 0.048 X_MRD-1 + 0.112; r = 0.338; adjusted r² = 0.062; P = .127). The cylindrical power was greater on the affected side (P = .046), although the spherical power did not differ between sides (P = .657). No significant difference was identified in axial globe length between sides, and only 5% of non-pediatric patients showed an axial globe length more than 0.67 mm longer on the affected side. Congenital ptosis may have little effect on axial globe length elongation, and the risk of axial myopia-induced anisometropic amblyopia may be low in patients with unilateral congenital ptosis.