DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparks, R.B.; Aydogan, B.
In the development of new radiopharmaceuticals, animal studies are typically performed to get a first approximation of the expected radiation dose in humans. This study evaluates the performance of some commonly used data extrapolation techniques to predict residence times in humans using data collected from animals. Residence times were calculated using animal and human data, and distributions of ratios of the animal results to human results were constructed for each extrapolation method. Four methods using animal data to predict human residence times were examined: (1) using no extrapolation, (2) using relative organ mass extrapolation, (3) using physiological time extrapolation, and (4) using a combination of the mass and time methods. The residence time ratios were found to be log normally distributed for the nonextrapolated and extrapolated data sets. The use of relative organ mass extrapolation yielded no statistically significant change in the geometric mean or variance of the residence time ratios as compared to using no extrapolation. Physiological time extrapolation yielded a statistically significant improvement (p < 0.01, paired t test) in the geometric mean of the residence time ratio from 0.5 to 0.8. Combining mass and time methods did not significantly improve the results of using time extrapolation alone. 63 refs., 4 figs., 3 tabs.
The forecast for RAC extrapolation: mostly cloudy.
Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie
2011-09-01
The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers not only should understand the statutory and regulatory basis for extrapolation, but also should be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.
Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I
2014-01-01
Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not extrapolate at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
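The A + B/L^α form above admits a closed-form two-point limit, which can be sketched in a few lines. The α = 3 default and the synthetic DZ/TZ energies below are illustrative assumptions, not values from the paper:

```python
# Two-point complete-basis-set (CBS) extrapolation assuming E(L) = A + B/L**alpha.
# Solving the two-point system for the limit A gives the basis-set-limit estimate.

def cbs_two_point(e_small, l_small, e_large, l_large, alpha=3.0):
    """Return the extrapolated limit A from energies at two cardinal numbers."""
    w_small = l_small ** alpha
    w_large = l_large ** alpha
    return (e_large * w_large - e_small * w_small) / (w_large - w_small)

# Synthetic energies generated from A = -1.0, B = 0.5 (alpha = 3):
e_dz = -1.0 + 0.5 / 2 ** 3   # L = 2 (DZ)
e_tz = -1.0 + 0.5 / 3 ** 3   # L = 3 (TZ)
print(cbs_two_point(e_dz, 2, e_tz, 3))  # recovers ~ -1.0
```

The system-dependent variant discussed in the abstract would instead fit α per system from cheaper MP2 energies; the sketch keeps α fixed for simplicity.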
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn; CAEP Software Center for High Performance Numerical Simulation, Beijing
2016-06-28
Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn–Sham density functional theory. Contrary to the intuition that a higher extrapolation order yields better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with the order, so that an optimal extrapolation order, in terms of the minimal number of SCF iterations, always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or stricter SCF convergence criteria. Using example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when the MD time step or the SCF convergence criterion is varied. We therefore suggest that BOMD simulation packages expose the extrapolation order to the user and provide more choices for it. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence of the two existing schemes, so implementing either of them leads to no essential difference in the extrapolation accuracy.
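The order/accuracy trade-off claimed above can be illustrated with plain polynomial (Newton forward-difference) extrapolation of a scalar series, a deliberate simplification standing in for wavefunction extrapolation; the function and the noisy test signal are illustrative, not from the paper:

```python
# Newton forward-difference extrapolation: predict the next value of a series
# from its last (order + 1) points; exact for polynomials of degree <= order.
import random

def poly_extrapolate(history, order):
    pts = list(history[-(order + 1):])
    pred = 0.0
    while pts:                        # sum the trailing entry of each difference row
        pred += pts[-1]
        pts = [b - a for a, b in zip(pts, pts[1:])]
    return pred

assert poly_extrapolate([1.0, 4.0, 9.0, 16.0], 2) == 25.0  # exact on a quadratic

# On noisy data, higher orders amplify the noise, so accuracy need not improve
# monotonically with order -- mirroring the optimum described in the abstract.
random.seed(0)
noisy = [1.0 + random.uniform(-0.01, 0.01) for _ in range(20)]
for order in (1, 3, 6):
    print(order, abs(poly_extrapolate(noisy, order) - 1.0))
```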
Extrapolating bound state data of anions into the metastable domain
NASA Astrophysics Data System (ADS)
Feuerbacher, Sven; Sommerfeld, Thomas; Cederbaum, Lorenz S.
2004-10-01
Computing energies of electronically metastable resonance states is still a great challenge. Both scattering techniques and quantum-chemistry-based L2 methods are very time consuming. Here we investigate two more economical extrapolation methods. Extrapolating bound-state energies into the metastable region using increased nuclear charges was suggested almost 20 years ago. We critically evaluate this attractive technique employing our complex absorbing potential/Green's function method, which allows us to follow a bound state into the continuum. Using the ²Πg resonance of N₂⁻ and the ²Πu resonance of CO₂⁻ as examples, we found that the extrapolation works surprisingly well. The second extrapolation method involves increasing bond lengths until the sought resonance becomes stable. The keystone is to extrapolate the attachment energy and not the total energy of the system. This method has the great advantage that the whole potential energy curve is obtained with quite good accuracy by the extrapolation. Limitations of the two techniques are discussed.
Interspecies extrapolation encompasses two related but distinct topic areas that are germane to quantitative extrapolation and hence computational toxicology-dose scaling and parameter scaling. Dose scaling is the process of converting a dose determined in an experimental animal ...
Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach that uses a minimum weighted norm constraint and minimum variance spectrum estimation to improve synthetic aperture radar (SAR) resolution. The minimum variance method is a robust high-resolution spectrum estimation technique. Based on the theory of SAR imaging, the signal model of SAR imagery is shown to be amenable to data extrapolation methods for improving image resolution. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and traditional imaging, on both simulated and actual measured data.
The Extrapolation of Elementary Sequences
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.
Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?
NASA Astrophysics Data System (ADS)
Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.
2016-02-01
It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C₀ lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that some local regions of austenite lie inside the Hultgren extrapolation, even if the average austenite composition is outside.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time-intensive. Extrapolation methods, derived from Taylor expansions, can provide approximate results with high computational efficiency and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods for solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, demonstrating the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second-order extrapolation approach for the multiphase-field model.
AXES OF EXTRAPOLATION IN RISK ASSESSMENTS
Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...
CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS
Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...
Extrapolation procedures in Mott electron polarimetry
NASA Technical Reports Server (NTRS)
Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.
1992-01-01
In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
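A foil-thickness extrapolation of the kind described can be sketched as an ordinary least-squares fit of asymmetry against thickness, evaluated at zero thickness. The linear fitting function and the data below are illustrative assumptions; as the abstract notes, the choice of fitting function is itself a source of systematic error:

```python
# Extrapolate measured Mott asymmetries A(t) to zero gold-film thickness.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical asymmetries decreasing with foil thickness (nm):
thickness = [50.0, 100.0, 150.0, 200.0]
asymmetry = [0.230, 0.215, 0.200, 0.185]
a0, slope = linear_fit(thickness, asymmetry)
print(a0)  # intercept = extrapolated zero-thickness (single-atom) asymmetry
```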
NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.
Hinrichs, R N; McLean, S P
1995-10-01
This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object large enough to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
Cross-species extrapolation of chemical effects: Challenges and new insights
One of the greatest uncertainties in chemical risk assessment is extrapolation of effects from tested to untested species. While this undoubtedly is a challenge in the human health arena, species extrapolation is a particularly daunting task in ecological assessments, where it is...
Strong, James Asa; Elliott, Michael
2017-03-15
The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly entrain significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and are likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Latychevskaia, T; Chushkin, Y; Fink, H-W
2016-10-01
In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study of achieving super-resolution by post-extrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although other methods could in principle achieve extrapolation, we devote our discussion to iterative phase retrieval methods and demonstrate their limits. We present two numerical studies, namely the extrapolation of diffraction patterns of nonbinary objects and of phase objects, together with a discussion of the optimal extrapolation procedure. © 2016 The Authors. Journal of Microscopy © 2016 Royal Microscopical Society.
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
Notice of Availability of the External Review Draft of the Guidance for Applying Quantitative Data to Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies Extrapolation.
Chiral extrapolation of nucleon axial charge gA in effective field theory
NASA Astrophysics Data System (ADS)
Li, Hong-na; Wang, P.
2016-12-01
The extrapolation of the nucleon axial charge gA is investigated within the framework of heavy baryon chiral effective field theory. The intermediate octet and decuplet baryons are included in the one-loop calculation. Finite-range regularization is applied to improve the convergence in the quark-mass expansion. Lattice data from three different groups are used for the extrapolation. At the physical pion mass, the extrapolated values of gA are all smaller than the experimental value. Supported by National Natural Science Foundation of China (11475186) and Sino-German CRC 110 (NSFC 11621131001)
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil S.
2006-01-01
A study was performed to determine a limiting separation distance for the extrapolation of pressure signatures from cruise altitude to the ground. The study was performed at two wind-tunnel facilities with two research low-boom wind-tunnel models designed to generate ground pressure signatures with "flattop" shapes. Data acquired at the first wind-tunnel facility showed that pressure signatures had not achieved the desired low-boom features for extrapolation purposes at separation distances of 2 to 5 span lengths. However, data acquired at the second wind-tunnel facility at separation distances of 5 to 20 span lengths indicated the "limiting extrapolation distance" had been achieved so pressure signatures could be extrapolated with existing codes to obtain credible predictions of ground overpressures.
Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.
Sakaino, Hidetomo
2016-09-01
Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computational cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) for lower quantization error than the finite difference method used previously in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms state-of-the-art video extrapolation methods in terms of image quality and computational cost.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-ORD-2009-0694; FRL-9442-8] Notice of Availability of the External Review Draft of the Guidance for Applying Quantitative Data to Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies Extrapolation.
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...
NASA Astrophysics Data System (ADS)
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
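For the power velocity distribution law mentioned above, the unmeasured top/bottom discharge has a closed form. A minimal sketch, using the conventional 1/6 exponent as the default and illustrative depths (this is not extrap's actual implementation):

```python
# Fraction of per-unit-width discharge outside the measured ADCP window,
# for a power-law profile u(z) = u0 * (z/depth)**b, z measured up from the bed.

def unmeasured_fraction(z_bottom, z_top, depth, b=1.0 / 6.0):
    def cum(z):                       # normalized integral of (z/depth)**b from 0 to z
        return (z / depth) ** (b + 1.0)
    return 1.0 - (cum(z_top) - cum(z_bottom))

# Instrument blanks the bottom 0.6 m and top 1 m of a 10 m deep vertical:
print(unmeasured_fraction(z_bottom=0.6, z_top=9.0, depth=10.0))
```

In practice the exponent would be fitted from the normalized measured profile, as the abstract describes, rather than fixed at 1/6.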
Can Tauc plot extrapolation be used for direct-band-gap semiconductor nanocrystals?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y., E-mail: yu.feng@unsw.edu.au; Lin, S.; Huang, S.
Although Tauc plot extrapolation has been widely adopted for extracting the bandgap energies of semiconductors, there is a lack of theoretical support for applying it to nanocrystals. In this paper, direct-allowed optical transitions in semiconductor nanocrystals are formulated based on a purely theoretical approach. The result reveals a size-dependent transition of the power factor used in the Tauc plot, increasing from one half in the 3D bulk case to one in the 0D case. This size-dependent intermediate value of the power factor allows a better extrapolation of measured absorption data. As a material characterization technique, the generalized Tauc extrapolation gives a more reasonable and accurate acquisition of the intrinsic bandgap, while the unjustified practice of extrapolating any elevated bandgap caused by quantum confinement is shown to be incorrect.
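In the familiar bulk direct-allowed case (power factor one half, i.e., (αhν)² linear in hν above the edge), Tauc extrapolation amounts to a linear fit whose x-intercept estimates the gap; the size-dependent power factor described above would change the exponent. A sketch on synthetic, illustrative data:

```python
# Tauc extrapolation for a bulk direct-gap material: fit (alpha*E)**2 against
# photon energy E over the (assumed linear) absorption edge and take the
# x-intercept of the fitted line as the band-gap estimate.

def tauc_gap(energies, alphas, exponent=2.0):
    ys = [(a * e) ** exponent for e, a in zip(energies, alphas)]
    n = len(energies)
    mx, my = sum(energies) / n, sum(ys) / n
    slope = sum((e - mx) * (y - my) for e, y in zip(energies, ys)) / \
            sum((e - mx) ** 2 for e in energies)
    intercept = my - slope * mx
    return -intercept / slope         # energy where the fitted line crosses zero

# Synthetic absorption edge with Eg = 1.50 eV: (alpha*E)**2 = 4.0 * (E - Eg)
eg_true = 1.50
energies = [1.6, 1.7, 1.8, 1.9, 2.0]
alphas = [(4.0 * (e - eg_true)) ** 0.5 / e for e in energies]
print(tauc_gap(energies, alphas))  # ~1.50
```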
Cosmogony as an extrapolation of magnetospheric research
NASA Technical Reports Server (NTRS)
Alfven, H.
1984-01-01
A theory of the origin and evolution of the Solar System which considered electromagnetic forces and plasma effects is revised in light of information supplied by space research. In situ measurements in the magnetospheres and solar wind can be extrapolated outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of cloud properties essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation facilitates analysis of the cosmogonic processes by extrapolation of magnetospheric phenomena. Pioneer-Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it is possible to reconstruct events 4 to 5 billion years ago with an accuracy of a few percent.
Measurement accuracies in band-limited extrapolation
NASA Technical Reports Server (NTRS)
Kritikos, H. N.
1982-01-01
The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval L/b ≤ |x| ≤ L, with b > 1, the signal must be known within an error ε_N given by ε_N² ≈ (1/4)(2kL')³ [(e/8b)(L/L')]^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.
Semiempirical Theories of the Affinities of Negative Atomic Ions
NASA Technical Reports Server (NTRS)
Edie, John W.
1961-01-01
The determination of the electron affinities of negative atomic ions by means of direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences. The most recent of these extrapolations is studied by extending the method to include one more member of the isoelectronic sequence. When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p electrons. These functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method is preferable, since it requires only one experimental point for each period. The results agree within experimental error with all data, except with the most recent value of C, which lies 10% lower.
Pion mass dependence of the HVP contribution to muon g - 2
NASA Astrophysics Data System (ADS)
Golterman, Maarten; Maltman, Kim; Peris, Santiago
2018-03-01
One of the systematic errors in some of the current lattice computations of the HVP contribution to the muon anomalous magnetic moment g - 2 is that associated with the extrapolation to the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 220 to 440 MeV with the help of two-loop chiral perturbation theory, and find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various proposed tricks to improve the chiral extrapolation are taken into account.
An extrapolation scheme for solid-state NMR chemical shift calculations
NASA Astrophysics Data System (ADS)
Nakajima, Takahito
2017-06-01
Conventional quantum chemical and solid-state physical approaches suffer from several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. The estimated values depend only weakly on the low-level density functional theory calculation used in the extrapolation scheme. Our approach is thus efficient, because the low-level calculation within the extrapolation scheme can be performed cheaply.
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels, with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals is lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields.
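The worst-case extrapolation step described above can be sketched as follows. This is a minimal illustration, not the study's exact procedure: the extrapolation factor (relating measured reference-signal power to a fully loaded, full-power channel) is an assumed input, and the function name is illustrative.

```python
import math

def worst_case_lte_field(e_rs_vpm, extrapolation_factor):
    """Extrapolate a measured LTE reference-signal (RS) field level, in
    V/m, to a worst-case (fully loaded, full-power) exposure estimate.

    Transmitted power scales roughly linearly with the number of active
    resource elements, and field strength scales with the square root
    of power, hence the square root of the power extrapolation factor.
    """
    return e_rs_vpm * math.sqrt(extrapolation_factor)

# Illustrative numbers only: an RS level of 0.1 V/m with an assumed
# power extrapolation factor of 100 gives a 1.0 V/m worst-case field.
worst_case = worst_case_lte_field(0.1, 100.0)
```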
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD), and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
... is calculated from tumor data of the cancer bioassays using a statistical extrapolation procedure... carcinogenic concern currently set forth in Sec. 500.84 utilizes a statistical extrapolation procedure that... procedures did not rely on a statistical extrapolation of the data to a 1 in 1 million risk of cancer to test...
Mueller, David S.
2013-01-01
profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers’ software.
Conic state extrapolation. [computer program for space shuttle navigation and guidance requirements
NASA Technical Reports Server (NTRS)
Shepperd, S. W.; Robertson, W. M.
1973-01-01
The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.
The Extrapolation of High Altitude Solar Cell I(V) Characteristics to AM0
NASA Technical Reports Server (NTRS)
Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Reinke, William; Blankenship, Kurt; Demers, James
2007-01-01
The high altitude aircraft method has been used at NASA GRC since the early 1960s to calibrate solar cell short circuit current, ISC, to Air Mass Zero (AM0). This method extrapolates ISC to AM0 via the Langley plot method, a logarithmic extrapolation to zero air mass, and includes corrections for the varying Earth-Sun distance to 1.0 AU and compensation for the non-uniform ozone distribution in the atmosphere. However, other characteristics of the solar cell I(V) curve do not extrapolate in the same way. Another approach is needed to extrapolate VOC and the maximum power point (PMAX) to AM0 illumination. As part of the high altitude aircraft method, VOC and PMAX can be obtained as ISC changes during the flight. These values can then be extrapolated, or sometimes interpolated, to the ISC(AM0) value. This approach should be valid as long as the shape of the solar spectrum in the stratosphere does not differ too much from AM0. As a feasibility check, the results are compared to AM0 I(V) curves obtained using the NASA GRC X25-based multi-source simulator. This paper investigates the approach on both multi-junction solar cells and sub-cells.
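The Langley-plot step above can be sketched as a least-squares fit, assuming pure Beer-Lambert attenuation so that ln(ISC) is linear in air mass; function and variable names are illustrative, and the corrections for Earth-Sun distance and ozone are omitted.

```python
import math

def langley_extrapolate_isc(air_masses, isc_values):
    """Langley plot: under Beer-Lambert attenuation, ln(Isc) is linear
    in air mass m, so a least-squares line through (m, ln Isc)
    evaluated at m = 0 gives the extrapolated Isc(AM0)."""
    n = len(air_masses)
    logs = [math.log(i) for i in isc_values]
    mean_m = sum(air_masses) / n
    mean_y = sum(logs) / n
    sxx = sum((m - mean_m) ** 2 for m in air_masses)
    sxy = sum((m - mean_m) * (y - mean_y) for m, y in zip(air_masses, logs))
    slope = sxy / sxx                    # -(optical depth coefficient)
    intercept = mean_y - slope * mean_m  # ln Isc at zero air mass
    return math.exp(intercept)
```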
Toward a Quantitative Comparison of Magnetic Field Extrapolations and Observed Coronal Loops
NASA Astrophysics Data System (ADS)
Warren, Harry P.; Crump, Nicholas A.; Ugarte-Urra, Ignacio; Sun, Xudong; Aschwanden, Markus J.; Wiegelmann, Thomas
2018-06-01
It is widely believed that loops observed in the solar atmosphere trace out magnetic field lines. However, the degree to which magnetic field extrapolations yield field lines that actually do follow loops has yet to be studied systematically. In this paper, we apply three different extrapolation techniques—a simple potential model, a nonlinear force-free (NLFF) model based on photospheric vector data, and an NLFF model based on forward fitting magnetic sources with vertical currents—to 15 active regions that span a wide range of magnetic conditions. We use a distance metric to assess how well each of these models is able to match field lines to the 12202 loops traced in coronal images. These distances are typically 1″–2″. We also compute the misalignment angle between each traced loop and the local magnetic field vector, and find values of 5°–12°. We find that the NLFF models generally outperform the potential extrapolation on these metrics, although the differences between the different extrapolations are relatively small. The methodology that we employ for this study suggests a number of ways that both the extrapolations and loop identification can be improved.
2016-04-01
…incorporated with nonlinear elements to produce a continuous, quasi-nonlinear simulation model, with extrapolation methods within the model stitching architecture. Keywords: Stitched Simulation Model; Quasi-Nonlinear; Piloted Simulation; Flight-Test Implications; System Identification; Off-Nominal Loading Extrapolation; Stability.
NASA Astrophysics Data System (ADS)
Hill, J. Grant; Peterson, Kirk A.; Knizia, Gerald; Werner, Hans-Joachim
2009-11-01
Accurate extrapolation to the complete basis set (CBS) limit of valence correlation energies calculated with explicitly correlated MP2-F12 and CCSD(T)-F12b methods has been investigated using a Schwenke-style approach for molecules containing both first and second row atoms. Extrapolation coefficients that are optimal for molecular systems containing first row elements differ from those optimized for second row analogs, hence values optimized for a combined set of first and second row systems are also presented. The new coefficients are shown to produce excellent results in both Schwenke-style and equivalent power-law-based two-point CBS extrapolations, with the MP2-F12/cc-pV(D,T)Z-F12 extrapolations producing an average error of just 0.17 mEh with a maximum error of 0.49 mEh for a collection of 23 small molecules. The use of larger basis sets, i.e., cc-pV(T,Q)Z-F12 and aug-cc-pV(Q,5)Z, in extrapolations of the MP2-F12 correlation energy leads to average errors that are smaller than the degree of confidence in the reference data (˜0.1 mEh). The latter were obtained through use of very large basis sets in MP2-F12 calculations on small molecules containing both first and second row elements. CBS limits obtained from optimized coefficients for conventional MP2 are only comparable to the accuracy of the MP2-F12/cc-pV(D,T)Z-F12 extrapolation when the aug-cc-pV(5+d)Z and aug-cc-pV(6+d)Z basis sets are used. The CCSD(T)-F12b correlation energy is extrapolated as two distinct parts: CCSD-F12b and (T). While the CCSD-F12b extrapolations with smaller basis sets are statistically less accurate than those of the MP2-F12 correlation energies, this is presumably due to the slower basis set convergence of the CCSD-F12b method compared to MP2-F12. The use of larger basis sets in the CCSD-F12b extrapolations produces correlation energies with accuracies exceeding the confidence in the reference data (also obtained in large basis set F12 calculations).
It is demonstrated that the use of the 3C(D) Ansatz is preferred for MP2-F12 CBS extrapolations. Optimal values of the geminal Slater exponent are presented for the diagonal, fixed amplitude Ansatz in MP2-F12 calculations, and these are also recommended for CCSD-F12b calculations.
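The power-law two-point CBS extrapolation mentioned above can be sketched in its generic form. This is the conventional X**-3 formula, not the Schwenke-style optimized coefficients of this work; the function name is illustrative.

```python
def cbs_two_point(e_small, x_small, e_large, x_large, power=3.0):
    """Two-point complete-basis-set (CBS) extrapolation of a
    correlation energy, assuming E(X) = E_CBS + A / X**power for
    basis-set cardinal number X (power = 3 is the conventional
    choice for correlation energies).  Solving the two equations
    for E_CBS eliminates the unknown coefficient A."""
    w_small = x_small ** power
    w_large = x_large ** power
    return (w_large * e_large - w_small * e_small) / (w_large - w_small)
```

For example, triple- and quadruple-zeta correlation energies (X = 3, 4) are extrapolated with `cbs_two_point(e_tz, 3, e_qz, 4)`.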
High-order Newton-penalty algorithms
NASA Astrophysics Data System (ADS)
Dussault, Jean-Pierre
2005-10-01
Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton corrections. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, and thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
A Comparison of Methods for Computing the Residual Resistivity Ratio of High-Purity Niobium
Splett, J. D.; Vecchia, D. F.; Goodrich, L. F.
2011-01-01
We compare methods for estimating the residual resistivity ratio (RRR) of high-purity niobium and investigate the effects of using different functional models. RRR is typically defined as the ratio of the electrical resistances measured at 273 K (the ice point) and 4.2 K (the boiling point of helium at standard atmospheric pressure). However, pure niobium is superconducting below about 9.3 K, so the low-temperature resistance is defined as the normal-state (i.e., non-superconducting state) resistance extrapolated to 4.2 K and zero magnetic field. Thus, the estimated value of RRR depends significantly on the model used for extrapolation. We examine three models for extrapolation based on temperature versus resistance, two models for extrapolation based on magnetic field versus resistance, and a new model based on the Kohler relationship that can be applied to combined temperature and field data. We also investigate the possibility of re-defining RRR so that the quantity is not dependent on extrapolation. PMID:26989580
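The simplest version of the extrapolation described above can be sketched as follows, assuming a straight-line normal-state R(T) model over the fitted range; the paper's actual temperature and magnetic-field models are more elaborate, and all names here are illustrative.

```python
def rrr_from_extrapolation(temps_k, resistances, r_273k, t_target=4.2):
    """Estimate RRR = R(273 K) / R(4.2 K) for niobium, where R(4.2 K)
    is the normal-state resistance extrapolated to 4.2 K from
    measurements taken above the superconducting transition
    (about 9.3 K for pure Nb).  A linear R(T) model is assumed
    here purely for illustration."""
    n = len(temps_k)
    mean_t = sum(temps_k) / n
    mean_r = sum(resistances) / n
    stt = sum((t - mean_t) ** 2 for t in temps_k)
    srt = sum((t - mean_t) * (r - mean_r)
              for t, r in zip(temps_k, resistances))
    slope = srt / stt
    intercept = mean_r - slope * mean_t
    r_low = intercept + slope * t_target   # extrapolated R(4.2 K)
    return r_273k / r_low
```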
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fry, R.J.M.
The author discusses some examples of how different experimental animal systems have helped to answer questions about the effects of radiation, in particular carcinogenesis, and indicates how the new experimental model systems promise an even more exciting future. Entwined in these themes are observations about susceptibility and extrapolation across species. The hope of developing acceptable methods of extrapolation of estimates of the risk of radiogenic cancer increases as molecular biology reveals a trail of remarkable similarities in the genetic control of many functions common to many species. A major concern about even attempting to extrapolate estimates of risks of radiation-induced cancer across species has been that the mechanisms of carcinogenesis were so different among species that it would negate the validity of extrapolation. The more that has become known about the genes involved in cancer, especially those related to the initial events in carcinogenesis, the more the reasons for considering methods of extrapolation across species have increased.
Correlation energy extrapolation by many-body expansion
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...
2017-01-09
Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
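The traditional mono-exponential back-extrapolation step can be sketched as follows; the authors' improved, physiologically based method is not reproduced here, and the function and unit choices are illustrative assumptions.

```python
import math

def plasma_volume_mono_exponential(times_min, conc_mg_per_ml, dose_mg):
    """Traditional back-extrapolation for the indocyanine green
    dilution method: fit ln C(t) = ln C0 - k*t to the early
    post-injection samples, extrapolate back to t = 0 to obtain C0,
    and estimate plasma volume as V = dose / C0 (mL, if C is mg/mL)."""
    n = len(times_min)
    ys = [math.log(c) for c in conc_mg_per_ml]
    mean_t = sum(times_min) / n
    mean_y = sum(ys) / n
    stt = sum((t - mean_t) ** 2 for t in times_min)
    sty = sum((t - mean_t) * (y - mean_y) for t, y in zip(times_min, ys))
    slope = sty / stt                 # -k, the apparent clearance rate
    intercept = mean_y - slope * mean_t
    c0 = math.exp(intercept)          # back-extrapolated concentration
    return dose_mg / c0
```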
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, J; Culberson, W; DeWerd, L
Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a 90Sr/90Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both the EGSnrc Monte Carlo user code and the Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was −1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: The EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experimental results suggest that an entrance window is not needed in order for an extrapolation chamber to provide accurate dose rate measurements for a planar ophthalmic applicator.
2018-04-01
…Compounds, CMMP, DPMP, DMEP, and DEEP: Extrapolation of High-Temperature Data. ECBC-TR-1507. Ann Brozena; Patrice Abercrombie-Thomas; David E. Tevault (Research and Technology Directorate). Sponsor/Monitor: DTRA. Distribution: Approved for public release.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
NASA Astrophysics Data System (ADS)
Havasi, Ágnes; Kazemi, Ehsan
2018-04-01
In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
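Classical Richardson extrapolation, which the paper combines with optimized schemes, can be sketched in its simplest form: run a p-th order method with step sizes h and h/2 and combine the results to cancel the leading error term. The explicit Euler base method here is chosen only to keep the sketch short.

```python
def euler_final(f, y0, t_end, n_steps):
    """Explicit (first-order) Euler integration of y' = f(y),
    returning the solution at t_end."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = y + h * f(y)
    return y

def richardson(f, y0, t_end, n_steps, order=1):
    """Richardson extrapolation: for a one-step method of order p,
    the combination (2**p * y_{h/2} - y_h) / (2**p - 1) cancels the
    leading error term, raising the order of accuracy by one."""
    coarse = euler_final(f, y0, t_end, n_steps)
    fine = euler_final(f, y0, t_end, 2 * n_steps)
    return (2 ** order * fine - coarse) / (2 ** order - 1)
```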
Magnetic field extrapolation with MHD relaxation using AWSoM
NASA Astrophysics Data System (ADS)
Shi, T.; Manchester, W.; Landi, E.
2017-12-01
Coronal mass ejections are known to be the major source of disturbances in the solar wind capable of affecting geomagnetic environments. Accurate prediction of such space weather events requires a data-driven simulation. The first step towards such a simulation is to extrapolate the magnetic field from the observed field, which is available only at the solar surface. Here we present results from a new code for magnetic field extrapolation with direct magnetohydrodynamic (MHD) relaxation using the Alfvén Wave Solar Model (AWSoM) in the Space Weather Modeling Framework. The obtained field is self-consistent with our model and can be used later in time-dependent simulations without modification of the equations. We use the Low and Lou analytical solution to test our results and find good agreement. We also extrapolate the magnetic field from observed data. We then specify the active region coronal field with this extrapolation result in the AWSoM model and self-consistently calculate the temperature of the active region loops with Alfvén wave dissipation. Multi-wavelength images are also synthesized.
NASA Technical Reports Server (NTRS)
Darden, C. M.
1984-01-01
A method for analyzing shock coalescence which includes three-dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with, and has been combined with, a nonlinear sonic boom extrapolation program based on the method of characteristics. The extrapolation program is able to extrapolate pressure signatures which include embedded shocks from an initial data line in the plane of symmetry at approximately one body length from the axis of the aircraft to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine the spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
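Of the five methods surveyed, minimal polynomial extrapolation (MPE) admits a particularly compact sketch: solve a least-squares problem on the first differences of the iterates and return the normalized weighted combination. This is a generic textbook formulation, not the paper's exact algorithmic presentation.

```python
import numpy as np

def mpe(vectors):
    """Minimal polynomial extrapolation (MPE) of a vector sequence.

    Given k+2 iterates x_0, ..., x_{k+1}, form the difference vectors
    u_j = x_{j+1} - x_j, solve the least-squares problem
    [u_0 ... u_{k-1}] c ~= -u_k, append c_k = 1, normalize the
    coefficients to sum to one, and return the weighted combination
    of x_0, ..., x_k as the extrapolated limit."""
    X = np.asarray(vectors, dtype=float)   # shape (k+2, n)
    U = np.diff(X, axis=0).T               # n x (k+1) difference matrix
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()
    return gamma @ X[:-1]
```

For a linear fixed-point iteration x_{j+1} = A x_j + b, MPE with k equal to the degree of the minimal polynomial of A reproduces the fixed point exactly (in exact arithmetic), which is the "essential degree" property mentioned above.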
Yamamoto, Tetsuya
2007-06-01
A novel test fixture operating at a millimeter-wave band using an extrapolation range measurement technique was developed at the National Metrology Institute of Japan (NMIJ). Here I describe the measurement system using a Q-band test fixture. I measured the relative insertion loss as a function of antenna separation distance and observed the effects of multiple reflections between the antennas. I also evaluated the antenna gain at 33 GHz using the extrapolation technique.
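The extrapolation range technique referenced above can be sketched in simplified form: under the Friis model with near-field corrections, the product of distance and measured transmission coefficient is fitted as a polynomial in 1/d, and its constant term gives the far-field limit from which the gain product follows. The function name, fit order, and omission of the multiple-reflection averaging step are illustrative assumptions, not NMIJ's exact procedure.

```python
import math

import numpy as np

def gain_product_by_extrapolation(distances_m, s21_mag, wavelength_m,
                                  fit_order=2):
    """Antenna-gain extrapolation sketch: in the far field,
    |S21| ~ sqrt(Gt*Gr) * wavelength / (4*pi*d), so d*|S21| tends to
    a constant as d -> infinity.  Fitting d*|S21| as a polynomial in
    1/d and keeping the constant term suppresses the leading
    near-field terms; the Friis relation then yields Gt*Gr."""
    d = np.asarray(distances_m, dtype=float)
    y = d * np.asarray(s21_mag, dtype=float)
    coeffs = np.polyfit(1.0 / d, y, fit_order)
    a0 = coeffs[-1]   # limit of d*|S21| as 1/d -> 0
    return (4.0 * math.pi * a0 / wavelength_m) ** 2
```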
Extrapolation of operators acting into quasi-Banach spaces
NASA Astrophysics Data System (ADS)
Lykov, K. V.
2016-01-01
Linear and sublinear operators acting from the scale of L_p spaces to a certain fixed quasinormed space are considered. It is shown how the extrapolation construction proposed by Jawerth and Milman at the end of the 1980s can be used to extend a bounded action of an operator from the L_p scale to wider spaces. Theorems are proved which generalize Yano's extrapolation theorem to the case of a quasinormed target space. More precise results are obtained under additional conditions on the quasinorm. Bibliography: 35 titles.
An analysis of the nucleon spectrum from lattice partially-quenched QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Armour; Allton, C. R.; Leinweber, Derek B.
2010-09-01
The chiral extrapolation of the nucleon mass, Mn, is investigated using data coming from 2-flavour partially-quenched lattice simulations. The leading one-loop corrections to the nucleon mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite-volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of Mn in agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.
Application of a framework for extrapolating chemical effects ...
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential for chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetics, life-stage, and pathway similarities/differences. Here we propose a framework using a tiered approach for species extrapolation that enables a transparent weight-of-evidence driven evaluation of pathway conservation (or lack thereof) in the context of adverse outcome pathways. Adverse outcome pathways describe the linkages from a molecular initiating event, defined as the chemical-biomolecule interaction, through subsequent key events leading to an adverse outcome of regulatory concern (e.g., mortality, reproductive dysfunction). Tier 1 of the extrapolation framework employs in silico evaluations of sequence and structural conservation of molecules (e.g., receptors, enzymes) associated with molecular initiating events or upstream key events. Such evaluations make use of available empirical and sequence data to assess taxonomic relevance. Tier 2 uses in vitro bioassays, such as enzyme inhibition/activation, competitive receptor binding, and transcriptional activation assays to explore functional conservation of pathways across taxa. Finally, Tier 3 provides a comparative analysis of in vivo responses between species utilizing well-established model organisms to assess departure from
Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)
1998-01-01
The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
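The abstract does not say which six extrapolation procedures were tried, but a widely used representative is the two-point inverse-power formula. The sketch below assumes the common form E(X) = E_CBS + A/X^alpha, where X is the cardinal number of the correlation-consistent basis; it is a generic illustration, not necessarily any of the paper's six schemes.

```python
import numpy as np

def cbs_two_point(e_x, e_y, x, y, alpha=3.0):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A / X**alpha, solved exactly from two energies."""
    return (x**alpha * e_x - y**alpha * e_y) / (x**alpha - y**alpha)

# Synthetic TZ/QZ energies constructed to follow the model exactly:
e_tz = -547.3 + 0.05 / 3**3
e_qz = -547.3 + 0.05 / 4**3
print(cbs_two_point(e_tz, e_qz, 3, 4))  # recovers the CBS limit -547.3
```

Because the model has two unknowns (E_CBS and A), two basis-set energies determine the limit exactly; the "variable alpha" approach mentioned above instead fits alpha from three or more points.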
Properties of infrared extrapolations in a harmonic oscillator basis
Coon, Sidney A.; Kruse, Michael K. G.
2016-02-22
Here, the success and utility of effective field theory (EFT) in explaining the structure and reactions of few-nucleon systems has prompted the initiation of EFT-inspired extrapolations to larger model spaces in ab initio methods such as the no-core shell model (NCSM). In this contribution, we review and continue our studies of infrared (ir) and ultraviolet (uv) regulators of NCSM calculations in which the input is phenomenological NN and NNN interactions fitted to data. We extend our previous findings that an extrapolation in the ir cutoff with the uv cutoff above the intrinsic uv scale of the interaction is quite successful, not only for the eigenstates of the Hamiltonian but also for expectation values of operators, such as r^2, considered long range. The latter results are obtained with Hamiltonians transformed by the similarity renormalization group (SRG) evolution. On the other hand, a possible extrapolation of ground state energies in the uv cutoff when the ir cutoff is below the intrinsic ir scale is not robust and does not agree with the ir extrapolation of the same data or with independent calculations using other methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latychevskaia, Tatiana, E-mail: tatiana@physik.uzh.ch; Fink, Hans-Werner; Chushkin, Yuriy
Coherent diffraction imaging is a high-resolution imaging technique whose potential can be greatly enhanced by applying the extrapolation method presented here. We demonstrate the enhancement in resolution of a non-periodical object reconstructed from an experimental X-ray diffraction record which contains about 10% missing information, including the pixels in the center of the diffraction pattern. A diffraction pattern is extrapolated beyond the detector area and as a result, the object is reconstructed at an enhanced resolution and better agreement with experimental amplitudes is achieved. The optimal parameters for the iterative routine and the limits of the extrapolation procedure are discussed.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)
1999-01-01
The atomization energy of Mg4 is determined using the MP2 and CCSD(T) levels of theory. Basis set incompleteness, basis set extrapolation, and core-valence effects are discussed. Our best atomization energy, including the zero-point energy and scalar relativistic effects, is 24.6 +/- 1.6 kcal/mol. Our computed and extrapolated values are compared with previous results, where it is observed that our extrapolated MP2 value is in good agreement with the MP2-R12 value. The CCSD(T) and MP2 core effects are found to have opposite signs.
NASA Astrophysics Data System (ADS)
Niedzielski, Tomasz; Kosek, Wiesław
2008-02-01
This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ 3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ 3 time-series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ 3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ 3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ 3 time-series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ 3 reveal the best accuracy for LS + AR.
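The LS + AR scheme described above can be sketched in a few lines: fit a deterministic trend-plus-harmonics model by least squares, then predict the stochastic residuals forward. The periods and the AR(1) residual model below are illustrative simplifications (the paper uses additional oscillations and higher-order AR/MAR models), and a uniform sampling interval is assumed.

```python
import numpy as np

def ls_plus_ar1(t, y, periods, horizon):
    """Least-squares trend + harmonic fit, then AR(1) prediction of the
    residuals: a minimal sketch of the LS + AR prediction scheme."""
    # Design matrix: constant, linear trend, sine/cosine pair per period.
    cols = [np.ones_like(t), t]
    for p in periods:
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    resid = y - np.column_stack(cols) @ coef
    # AR(1) coefficient from the lag-1 autocovariance of the residuals.
    phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
    # Extrapolate the deterministic part (uniform sampling assumed)...
    t_fut = t[-1] + np.arange(1, horizon + 1) * (t[1] - t[0])
    cols_f = [np.ones_like(t_fut), t_fut]
    for p in periods:
        cols_f += [np.sin(2 * np.pi * t_fut / p), np.cos(2 * np.pi * t_fut / p)]
    ls_part = np.column_stack(cols_f) @ coef
    # ...and add the decaying AR(1) forecast of the last residual.
    ar_part = resid[-1] * phi ** np.arange(1, horizon + 1)
    return ls_part + ar_part
```

The multivariate (MAR) variant replaces the scalar AR step with a vector autoregression that also ingests the AAM χ3 series as an explanatory variable.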
NASA Astrophysics Data System (ADS)
Ullemeyer, Klaus; Lokajíček, Tomás; Vasin, Roman N.; Keppler, Ruth; Behrmann, Jan H.
2018-02-01
In this study, elastic moduli of three different rock types of simple (calcite marble) and more complex (amphibolite, micaschist) mineralogical composition were determined by modeling using texture (crystallographic preferred orientation; CPO) data, by experimental investigation, and by extrapolation. 3D models were calculated using single crystal elastic moduli and CPO measured by time-of-flight neutron diffraction at the SKAT diffractometer in Dubna (Russia) and subsequently analyzed using Rietveld Texture Analysis. To define extrinsic factors influencing elastic behaviour, P-wave and S-wave velocity anisotropies were experimentally determined at 200, 400 and 600 MPa confining pressure. Functions describing the variation of the elastic moduli with confining pressure were then used to predict elastic properties at 1000 MPa, revealing anisotropies in a supposedly crack-free medium. In the calcite marble, elastic anisotropy is dominated by the CPO. Velocities continuously increase, while anisotropies decrease, from measured over extrapolated to CPO-derived data. Differences in velocity patterns with sample orientation suggest that the foliation forms an important mechanical anisotropy. The amphibolite sample shows similar magnitudes of extrapolated and CPO-derived velocities; however, the pattern of CPO-derived velocity is closer to that measured at 200 MPa. Anisotropy decreases from the extrapolated to the CPO-derived data. In the micaschist, velocities are higher and anisotropies are lower in the extrapolated data than in the data measured at lower pressures. Generally, our results show that predictions of the elastic behavior of rocks at great depths are possible based on experimental data and data computed from CPO. The elastic properties of the lower crust can thus be characterized with an improved degree of confidence using extrapolations. Anisotropically distributed spherical micro-pores are likely to be preserved, affecting seismic velocity distributions. Compositional variations in the polyphase rock samples do not significantly change the velocity patterns, allowing the use of RTA-derived volume percentages for the modeling of elastic moduli.
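As a hedged illustration of the pressure extrapolation step, the sketch below fits a conventional linear-plus-exponential crack-closure form to velocities measured at 200, 400 and 600 MPa and evaluates it at 1000 MPa. The functional form, decay constant, and sample velocities are assumptions for illustration; the paper's fitted functions are not given in the abstract.

```python
import numpy as np

def extrapolate_velocity(p_meas, v_meas, p_target, p_decay=200.0):
    """Fit V(P) = a + b*P - c*exp(-P/p_decay) to measured velocities
    (linear in a, b, c for a fixed decay constant) and evaluate the fit
    at a higher confining pressure p_target."""
    A = np.column_stack([np.ones_like(p_meas), p_meas,
                         -np.exp(-p_meas / p_decay)])
    a, b, c = np.linalg.lstsq(A, v_meas, rcond=None)[0]
    return a + b * p_target - c * np.exp(-p_target / p_decay)

# Hypothetical P-wave velocities (km/s) at the three measured pressures:
print(extrapolate_velocity(np.array([200.0, 400.0, 600.0]),
                           np.array([6.10, 6.25, 6.30]), 1000.0))
```

The exponential term mimics progressive crack closure, so the 1000 MPa value approaches the nearly linear "crack-free" trend discussed above.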
Endangered species toxicity extrapolation using ICE models
The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...
Present constraints on the H-dibaryon at the physical point from Lattice QCD
Beane, S. R.; Chang, E.; Detmold, W.; ...
2011-11-10
The current constraints from Lattice QCD on the existence of the H-dibaryon are discussed. With only two significant Lattice QCD calculations of the H-dibaryon binding energy at approximately the same lattice spacing, the form of the chiral and continuum extrapolations to the physical point are not determined. In this brief report, an extrapolation that is quadratic in the pion mass, motivated by low-energy effective field theory, is considered. An extrapolation that is linear in the pion mass is also considered, a form that has no basis in the effective field theory, but is found to describe the light-quark mass dependencemore » observed in Lattice QCD calculations of the octet baryon masses. In both cases, the extrapolation to the physical pion mass allows for a bound H-dibaryon or a near-threshold scattering state.« less
Bounding species distribution models
Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. ?? 2011 Current Zoology.
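The "clamping" alteration described above amounts to bounding each environmental predictor to the range seen in model development before prediction. A minimal numpy sketch (the CART/Maxent models themselves are not reproduced here; array layout is observations by predictors):

```python
import numpy as np

def clamp_predictors(x_new, x_train):
    """Bound each environmental predictor in x_new to the [min, max]
    range observed in the training data, so the fitted model is never
    evaluated outside its environmental bounds."""
    lo = x_train.min(axis=0)
    hi = x_train.max(axis=0)
    return np.clip(x_new, lo, hi)
```

A fitted model would then be applied as `model.predict(clamp_predictors(x_new, x_train))`, which is the "bounded" extrapolation the study compares against unbounded projections.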
Bounding Species Distribution Models
NASA Technical Reports Server (NTRS)
Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis.
Bojke, Laura; Manca, Andrea; Asaria, Miqdad; Mahon, Ronan; Ren, Shijie; Palmer, Stephen
2017-08-01
Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies the follow-up duration of which is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation-to project short-term evidence over a longer time horizon-is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice and the degree of uncertainty surrounding the validity of any assumptions made are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predict long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.
Controversy on toxicological dose-response relationships and low-dose extrapolation of respective risks is often the consequence of misleading data presentation, lack of differentiation between types of response variables, and diverging mechanistic interpretation. In this chapter...
The chemistry side of AOP: implications for toxicity extrapolation
An adverse outcome pathway (AOP) is a structured representation of the biological events that lead to adverse impacts following a molecular initiating event caused by chemical interaction with a macromolecule. AOPs have been proposed to facilitate toxicity extrapolation across s...
Jaffrin, M Y; Maasrani, M; Le Gourrier, A; Boudailliez, B
1997-05-01
A method is presented for monitoring the relative variation of extracellular and intracellular fluid volumes using a multifrequency impedance meter and the Cole-Cole extrapolation technique. It is found that this extrapolation is necessary to obtain reliable data for the resistance of the intracellular fluid. The extracellular and intracellular resistances can be approached using frequencies of, respectively, 5 kHz and 1000 kHz, but the use of 100 kHz leads to unacceptable errors. In the conventional treatment the overall relative variation of intracellular resistance is found to be relatively small.
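The Cole-Cole extrapolation can be illustrated by fitting a circle to the measured complex impedances and extrapolating the arc to the real axis, which yields the zero-frequency resistance R0 (extracellular) and the infinite-frequency resistance Rinf, and hence the intracellular resistance of the parallel model. The algebraic (Kåsa) circle fit below is one possible implementation, not necessarily the authors'.

```python
import numpy as np

def cole_extrapolate(z):
    """Kasa circle fit to complex impedance samples z; extrapolate the
    arc to the real axis to estimate R0 (f -> 0) and Rinf (f -> inf),
    then Ri = R0*Rinf/(R0 - Rinf) for the parallel Re/Ri model."""
    x, y = z.real, z.imag
    # Circle (x-a)^2 + (y-b)^2 = r^2 rewritten as a linear regression.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, k), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r = np.sqrt(k + a**2 + b**2)
    half = np.sqrt(r**2 - b**2)       # intersections with the real axis
    r0, rinf = a + half, a - half
    return r0, rinf, r0 * rinf / (r0 - rinf)
```

Because the Cole model's impedance locus is a depressed circular arc through R0 and Rinf, points measured between 5 kHz and 1 MHz suffice to recover both limits even though neither is reached experimentally.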
First moments of nucleon generalized parton distributions
Wang, P.; Thomas, A. W.
2010-06-01
We extrapolate the first moments of the generalized parton distributions using heavy baryon chiral perturbation theory. The calculation is based on the one loop level with the finite range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t=0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD inspired models.
Extrapolation of Functions of Many Variables by Means of Metric Analysis
NASA Astrophysics Data System (ADS)
Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David
2018-02-01
The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of m-dimensional space; the task is to restore the value of the function at points outside D. The paper proposes a fundamentally new extrapolation method for functions of several variables, built on the interpolation scheme of metric analysis and consisting of two stages. In the first stage, metric analysis is used to interpolate the function at points of D lying on the straight-line segment connecting the center of D with the point M at which the value of the function is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along this straight-line segment beyond the domain D up to the point M. A numerical example demonstrates the efficiency of the method under consideration.
How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?
NASA Astrophysics Data System (ADS)
Lin, Zesen; Fang, Guanwen; Kong, Xu
2016-12-01
Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities (L_IR) of galaxies. By utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ~ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolation from the longest available PACS band based on the Chary & Elbaz template can be a good estimator. Moreover, if the PACS measurement is unavailable, extrapolation from SPIRE observations based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The emission in the rest-frame 10-100 μm range of the IR SED can be well described by all three templates, but only the Dale & Helou template shows a nearly unbiased estimate of the emission in the rest-frame submillimeter part.
Chiral extrapolation of the leading hadronic contribution to the muon anomalous magnetic moment
NASA Astrophysics Data System (ADS)
Golterman, Maarten; Maltman, Kim; Peris, Santiago
2017-04-01
A lattice computation of the leading-order hadronic contribution to the muon anomalous magnetic moment can potentially help reduce the error on the Standard Model prediction for this quantity, if sufficient control of all systematic errors affecting such a computation can be achieved. One of these systematic errors is that associated with the extrapolation to the physical pion mass from values on the lattice larger than the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 200 to 400 MeV with the help of two-loop chiral perturbation theory, and we find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various tricks to improve the reliability of the chiral extrapolation employed in the literature are taken into account. In addition, while chiral perturbation theory also predicts the dependence on the pion mass of the leading-order hadronic contribution to the muon anomalous magnetic moment as the chiral limit is approached, this prediction turns out to be of no practical use because the physical pion mass is larger than the muon mass that sets the scale for the onset of this behavior.
Risk estimates for CO exposure in man based on behavioral and physiological responses in rodents
NASA Technical Reports Server (NTRS)
Gross, M. K.
1983-01-01
An examination of animal response to CO is studied along with potential models for extrapolating animal test data to humans. The best models for extrapolating data were found to be the Probit and Weibull models.
Calculation methods study on hot spot stress of new girder structure detail
NASA Astrophysics Data System (ADS)
Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing
2017-10-01
To study modeling and calculation methods for the hot spot stress of a new girder structure detail, several finite element models of this welded detail were established in ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local modeling method at the weld toe, and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference in normal stress through the thickness and along the surface among the different models grows as the distance from the weld toe decreases. When the distance from the toe is greater than 0.5t, the normal stress of solid models, shell models with welds, and shell models without welds tends to be consistent along the surface direction. It is therefore recommended that the extrapolation points be selected beyond 0.5t for this new girder welded detail. According to the calculation results, shell models show good mesh stability, and the extrapolated hot spot stress of solid models is smaller than that of shell models, so it is suggested that formula 2 and the solid45 element be used for the hot spot stress extrapolation of this welded detail. For each finite element model under the different shell modeling methods, the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. Under the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of element layers through the main plate thickness increases, with the variation remaining within 7.5%.
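The surface extrapolation step itself reduces to a two-point linear projection of surface stresses back to the weld toe. The sketch below uses the common IIW reading points 0.4t and 1.0t as an illustration; the paper's "formula 2" may use different points and coefficients.

```python
def hot_spot_stress(s1, s2, x1=0.4, x2=1.0):
    """Two-point linear surface extrapolation: stresses s1, s2 read at
    distances x1, x2 (in plate thicknesses t) from the weld toe are
    projected linearly back to the toe (x = 0). With the default IIW
    points this equals 1.67*s1 - 0.67*s2."""
    return s1 + (s2 - s1) * (0.0 - x1) / (x2 - x1)
```

For example, a linearly varying surface stress of 96 MPa at 0.4t and 90 MPa at 1.0t extrapolates to 100 MPa at the toe, which is why reading points outside the weld-notch zone (beyond 0.5t, as recommended above) matter: inside it the through-thickness and surface stresses diverge and the linear projection breaks down.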
NASA Astrophysics Data System (ADS)
Hernández-Pajares, Manuel; Garcia-Fernández, Miquel; Rius, Antonio; Notarpietro, Riccardo; von Engeln, Axel; Olivares-Pulido, Germán.; Aragón-Àngel, Àngela; García-Rigo, Alberto
2017-08-01
The new radio-occultation (RO) instrument on board the future EUMETSAT Polar System-Second Generation (EPS-SG) satellites, flying at a height of 820 km, primarily targets neutral atmospheric profiling. It will also provide an opportunity for RO ionospheric sounding, but only below impact heights of 500 km, in order to guarantee full data gathering of the neutral part. This leaves a gap of 320 km, which impedes the application of direct inversion techniques to retrieve the electron density profile. To overcome this challenge, we have looked for new, accurate and simple ways of extrapolating the electron density (also applicable to other low-Earth orbiting, LEO, missions like CHAMP): a new Vary-Chap Extrapolation Technique (VCET). VCET is based on the scale height behavior, linearly dependent on the altitude above hmF2. This allows extrapolating the electron density profile for impact heights above the peak height (the case for EPS-SG), up to the satellite orbital height. VCET has been assessed with more than 3700 complete electron density profiles obtained in four representative scenarios of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) in the United States and the Formosa Satellite Mission 3 (FORMOSAT-3) in Taiwan, in solar maximum and minimum conditions, and geomagnetically disturbed conditions, by applying an updated Improved Abel Transform Inversion technique to dual-frequency GPS measurements. It is shown that VCET performs much better than classical Chapman models, with 60% of occultations showing relative extrapolation errors below 20%, in contrast with conventional Chapman model extrapolation approaches, where 10% or fewer of the profiles have relative error below 20%.
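The core of a Vary-Chap profile can be sketched as an alpha-Chapman layer whose scale height grows linearly with altitude above the F2 peak. Parameter names and values below are illustrative assumptions, not the VCET production fit:

```python
import numpy as np

def vary_chap(h, nmf2, hmf2, h0, dhdh):
    """alpha-Chapman electron density with a linearly varying scale
    height H(h) = h0 + dhdh*(h - hmf2) above the F2 peak -- the basic
    idea behind a Vary-Chap topside extrapolation. Heights in km,
    densities in el/m^3."""
    H = h0 + dhdh * np.maximum(h - hmf2, 0.0)
    z = (h - hmf2) / H
    return nmf2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))
```

Given NmF2, hmF2, and a scale-height gradient fitted to the profile below 500 km, the same expression evaluated at higher altitudes supplies the missing topside up to the orbital height.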
Brennan, Scott F; Cresswell, Andrew G; Farris, Dominic J; Lichtwark, Glen A
2017-11-07
Ultrasonography is a useful technique to study muscle contractions in vivo; however, larger muscles like vastus lateralis may be difficult to visualise with smaller, commonly used transducers. Fascicle length is often estimated using linear trigonometry to extrapolate fascicle length to regions where the fascicle is not visible. However, this approach has not been compared to measurements made with a larger field of view for dynamic muscle contractions. Here we compared two different single-transducer extrapolation methods for measuring VL muscle fascicle length to a direct measurement made using two synchronised, in-series transducers. The first method used pennation angle and muscle thickness to extrapolate fascicle length outside the image (extrapolate method). The second method determined fascicle length based on the extrapolated intercept between a fascicle and the aponeurosis (intercept method). Nine participants performed maximal effort, isometric, knee extension contractions on a dynamometer at 10° increments from 50 to 100° of knee flexion. Fascicle length and torque were simultaneously recorded for offline analysis. The dual transducer method showed similar patterns of fascicle length change (overall mean coefficient of multiple correlation was 0.76 and 0.71 compared to the extrapolate and intercept methods, respectively), but reached different absolute lengths during the contractions. This had the effect of producing force-length curves of the same shape, but each curve was shifted in terms of absolute length. We concluded that dual transducers are beneficial for studies that examine absolute fascicle lengths, whereas either of the single transducer methods may produce similar results for normalised length changes and repeated measures experimental designs.
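The single-transducer "extrapolate" method amounts to bridging the part of the muscle thickness not spanned in the image at the measured pennation angle. A minimal trigonometric sketch (function and variable names are illustrative, not the authors' code):

```python
import math

def fascicle_length(l_visible, t_missing, pennation_deg):
    """Estimate total fascicle length by linearly extending the visible
    segment: the muscle thickness not captured in the image (t_missing,
    same units as l_visible) is bridged at the measured pennation
    angle, assuming the fascicle remains straight outside the image."""
    return l_visible + t_missing / math.sin(math.radians(pennation_deg))
```

For example, with 8 cm of fascicle visible, 1 cm of thickness outside the field of view, and a 30° pennation angle, the estimate is 10 cm; the straight-fascicle assumption is precisely what the dual-transducer comparison above puts to the test.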
A major challenge in chemical risk assessment is extrapolation of toxicity data from tested to untested species. Successful cross-species extrapolation involves understanding similarities and differences in toxicokinetic and toxicodynamic processes among species. Herein we consi...
Extrapolation of sonic boom pressure signatures by the waveform parameter method
NASA Technical Reports Server (NTRS)
Thomas, C. L.
1972-01-01
The waveform parameter method of sonic boom extrapolation is derived and shown to be equivalent to the F-function method. A computer program based on the waveform parameter method is presented and discussed, with a sample case demonstrating program input and output.
THE MISUSE OF HYDROLOGIC UNIT MAPS FOR EXTRAPOLATION, REPORTING, AND ECOSYSTEM MANAGEMENT
The use of watersheds to conduct research on land-water relationships has expanded recently to include both extrapolation and reporting of water resource information and ecosystem management. More often than not, hydrologic units, and hydrologic unit codes (HUCs) in particular, a...
SeqAPASS v3.0 for Extrapolation of Toxicity Knowledge Across Species
The U.S. Environmental Protection Agency Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS; https://seqapass.epa.gov/seqapass/) tool was initially released to the public in 2016, providing a novel means to begin to address challenges in extrapolating toxicity ...
STRESSOR-RESPONSE RELATIONSHIPS: THE FOUNDATION FOR CHARACTERIZING EFFECTS
This research has 4 main components. The first focuses on developing the scientific information needed to extrapolate data from one or a few tested species to species of primary concern, e.g., the need to extrapolate data from domesticated birds to piscivorous avian species when...
THE ROLE OF MAMMALIAN DATA IN DETERMINING PHARMACEUTICAL RESPONSES IN AQUATIC ORGANISMS
The limitations surrounding application of pharmaceutical data are restricted to extrapolation of the animal and human data across phyla. Experience dictates that mammalian data are most likely to extrapolate predictably to fish and other aquatic vertebrates (e.g. Amphibia), and ...
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
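The risk of tail extrapolation from an unvalidated parametric fit can be shown in a few lines: data that are truly heavy-tailed, fitted with a normal model, yield a tail probability that badly understates the true one. This is a didactic example under stated assumptions, not the project's scaling study.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
# "Measured" data are truly lognormal, but the analyst fits a normal model.
data = rng.lognormal(mean=0.0, sigma=0.5, size=200)
mu, sd = data.mean(), data.std(ddof=1)

x0 = 4.0  # a high-consequence threshold far beyond the observed data
# Tail probability P(X > x0) under the fitted normal model...
p_normal = 0.5 * math.erfc((x0 - mu) / (sd * math.sqrt(2.0)))
# ...versus the true lognormal tail probability.
p_true = 0.5 * math.erfc(math.log(x0) / (0.5 * math.sqrt(2.0)))

print(p_normal, p_true)  # the fitted model understates the tail risk
```

Both models fit the bulk of the 200 observations almost equally well; the orders-of-magnitude disagreement appears only in the extrapolated tail, which is the regime QMU cares about.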
On the assessment of biological life support system operation range
NASA Astrophysics Data System (ADS)
Bartsev, Sergey
Biological life support systems (BLSS) can be used in long-term space missions only if well-thought-out assessment of the allowable operating range is obtained. The range has to account both permissible working parameters of BLSS and the critical level of perturbations of BLSS stationary state. Direct approach to outlining the range by statistical treatment of experimental data on BLSS destruction seems to be not applicable due to ethical, economical, and saving time reasons. Mathematical model is the unique tool for the generalization of experimental data and the extrapolation of the revealed regularities beyond empirical experience. The problem is that the quality of extrapolation depends on the adequacy of corresponding model verification, but good verification requires wide range of experimental data for fitting, which is not achievable for manned experimental BLSS. Possible way to improve the extrapolation quality of inevitably poorly verified models of manned BLSS is to extrapolate general tendency obtained from unmanned LSS theoretical-experiment investigations. Possibilities and limitations of such approach are discussed.
Larsen, Ross E.
2016-04-12
In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
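The fragment-frontier-orbital idea can be sketched with a Hückel-style stand-in (not the authors' actual FFOE Hamiltonian, and with made-up monomer and dimer HOMO energies): parameterize a one-orbital-per-repeat-unit tight-binding chain from the two smallest oligomers, then diagonalize it for any chain length.

```python
import numpy as np

# Hypothetical frontier-orbital energies (eV) from small-oligomer calculations:
homo_monomer = -5.6          # alpha: on-site (repeat-unit) energy
homo_dimer = -5.2            # alpha + |beta| for a two-site chain

alpha = homo_monomer
beta = homo_dimer - homo_monomer   # inter-unit coupling (0.4 eV here)

def homo_energy(n_units):
    """HOMO of an n-site tight-binding chain (one frontier orbital per unit)."""
    h = (np.diag(np.full(n_units, alpha))
         + np.diag(np.full(n_units - 1, beta), 1)
         + np.diag(np.full(n_units - 1, beta), -1))
    return np.linalg.eigvalsh(h).max()

for n in (1, 2, 4, 8, 100):
    print(n, round(homo_energy(n), 3))
# analytic polymer limit of this model: alpha + 2*beta = -4.8 eV
```

With only the monomer and dimer fixed, the model reproduces the 1/(n+1) band-edge convergence toward the polymer limit that the abstract describes.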
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morales, Johnny E., E-mail: johnny.morales@lh.org.
Purpose: An experimental extrapolation technique is presented, which can be used to determine the relative output factors for very small x-ray fields using Gafchromic EBT3 film. Methods: Relative output factors were measured for the Brainlab SRS cones ranging in diameter from 4 to 30 mm on a Novalis Trilogy linear accelerator with 6 MV SRS x-rays. The relative output factor was determined from an experimental reducing circular region of interest (ROI) extrapolation technique developed to remove the effects of volume averaging. This was achieved by scanning the EBT3 film measurements with a high scanning resolution of 1200 dpi. From the high resolution scans, the size of the circular regions of interest was varied to produce a plot of relative output factors versus area of analysis. The plot was then extrapolated to zero to determine the relative output factor corresponding to zero volume. Results: Results have shown that for a 4 mm field size, the extrapolated relative output factor was measured as a value of 0.651 ± 0.018 as compared to 0.639 ± 0.019 and 0.633 ± 0.021 for 0.5 and 1.0 mm diameter of analysis values, respectively. This showed a change in the relative output factors of 1.8% and 2.8% at these comparative regions of interest sizes. In comparison, the 25 mm cone had negligible differences in the measured output factor between the zero-extrapolated, 0.5, and 1.0 mm diameter ROIs. Conclusions: This work shows that for very small fields such as 4.0 mm cone sizes, a measurable difference can be seen in the relative output factor depending on the size of the circular ROI used in radiochromic film dosimetry. The authors recommend scanning Gafchromic EBT3 film at a resolution of 1200 dpi for cone sizes less than 7.5 mm and utilizing an extrapolation technique for output factor measurements in very small field dosimetry.
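The zero-area extrapolation reduces to a straight-line fit of measured output factor against ROI area; the film readings below are fabricated for illustration only and are not the paper's data.

```python
import numpy as np

# Hypothetical mean dose ratios (field / reference) for a small cone,
# evaluated over circular ROIs of decreasing diameter (mm). Volume
# averaging pulls the measured output factor down as the ROI grows.
roi_diam_mm = np.array([1.5, 1.25, 1.0, 0.75, 0.5])
output_factor = np.array([0.622, 0.628, 0.633, 0.636, 0.639])

roi_area = np.pi * (roi_diam_mm / 2) ** 2

# Fit output factor vs. ROI area and extrapolate to zero area (zero volume).
slope, intercept = np.polyfit(roi_area, output_factor, 1)
print(f"extrapolated relative output factor: {intercept:.3f}")
```

The intercept at zero area is the volume-averaging-free output factor; as in the abstract, it sits measurably above the values read from finite ROIs.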
NASA Technical Reports Server (NTRS)
1971-01-01
A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
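A toy version of the extrapolative approach, using invented monthly counts, compares two of the functional forms and estimation techniques mentioned (a linear trend fit by regression, and simple exponential smoothing):

```python
import numpy as np

# Hypothetical monthly incident counts with an upward trend.
y = np.array([120, 124, 123, 130, 135, 133, 140, 144, 147, 150], dtype=float)
t = np.arange(len(y))

# Extrapolative model 1: linear trend fit by ordinary least squares.
b, a = np.polyfit(t, y, 1)          # y ~ a + b*t
linear_forecast = a + b * len(y)    # one step ahead

# Extrapolative model 2: simple exponential smoothing (level only).
alpha = 0.4
level = y[0]
for obs in y[1:]:
    level = alpha * obs + (1 - alpha) * level
smoothing_forecast = level

print(linear_forecast, smoothing_forecast)
```

The trend model extrapolates the slope forward, while the smoothing model lags a trending series; the study's comparison of functional forms turns on exactly this kind of behavioral difference.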
Buckler, Denny R., Foster L. Mayer, Mark R. Ellersieck and Amha Asfaw. 2003. Evaluation of Minimum Data Requirements for Acute Toxicity Value Extrapolation with Aquatic Organisms. EPA/600/R-03/104. U.S. Environmental Protection Agency, National Health and Environmental Effects Re...
Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...
NASA Astrophysics Data System (ADS)
Alam, Md. Mehboob; Deur, Killian; Knecht, Stefan; Fromager, Emmanuel
2017-11-01
The extrapolation technique of Savin [J. Chem. Phys. 140, 18A509 (2014)], which was initially applied to range-separated ground-state-density-functional Hamiltonians, is adapted in this work to ghost-interaction-corrected (GIC) range-separated ensemble density-functional theory (eDFT) for excited states. While standard extrapolations rely on energies that decay as μ-2 in the large range-separation-parameter μ limit, we show analytically that (approximate) range-separated GIC ensemble energies converge more rapidly (as μ-3) towards their pure wavefunction theory values (μ → +∞ limit), thus requiring a different extrapolation correction. The purpose of such a correction is to further improve on the convergence and, consequently, to obtain more accurate excitation energies for a finite (and, in practice, relatively small) μ value. As a proof of concept, we apply the extrapolation method to He and small molecular systems (viz., H2, HeH+, and LiH), thus considering different types of excitations such as Rydberg, charge transfer, and double excitations. Potential energy profiles of the first three and four singlet Σ+ excitation energies in HeH+ and H2, respectively, are studied with a particular focus on avoided crossings for the latter. Finally, the extraction of individual state energies from the ensemble energy is discussed in the context of range-separated eDFT, as a perspective.
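The quoted μ⁻³ decay suggests a simple two-point extrapolation; the sketch below uses invented energies to show how eliminating the leading μ⁻³ term accelerates convergence (this is a generic Richardson-style elimination under the stated decay assumption, not the paper's exact correction):

```python
# Model assumption: range-separated GIC ensemble energies behave as
# E(mu) = E_inf + c * mu**-3 (+ higher-order terms) for large mu.
E_inf, c = -2.9037, 0.15      # hypothetical values (hartree)

def energy(mu):
    return E_inf + c * mu**-3 + 0.02 * mu**-4  # small mu**-4 contamination

mu1, mu2 = 1.0, 2.0
e1, e2 = energy(mu1), energy(mu2)

# Eliminate the mu**-3 term between the two points:
E_extrap = (mu2**3 * e2 - mu1**3 * e1) / (mu2**3 - mu1**3)

print(abs(e2 - E_inf), abs(E_extrap - E_inf))  # extrapolated error is smaller
```

At finite μ the raw energy is still 0.02 hartree from the limit, while the extrapolated value is an order of magnitude closer, which is the practical payoff the abstract describes for small μ.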
Ferris, D Lance; Reb, Jochen; Lian, Huiwen; Sim, Samantha; Ang, Dionysius
2018-03-01
Past research on dynamic workplace performance evaluation has taken as axiomatic that temporal performance trends produce naïve extrapolation effects on performance ratings. That is, we naïvely assume that an individual whose performance has trended upward over time will continue to improve, and rate that individual more positively than an individual whose performance has trended downward over time, even if, on average, the two individuals have performed at an equivalent level. However, we argue that such naïve extrapolation effects are more pronounced in Western countries than in Eastern countries, owing to Eastern countries having a more holistic cognitive style. To test our hypotheses, we examined the effect of performance trend on expectations of future performance and ratings of past performance across two studies: Study 1 compares the magnitude of naïve extrapolation effects among Singaporeans primed with either a more or less holistic cognitive style, and Study 2 examines holistic cognitive style as a mediating mechanism accounting for differences in the magnitude of naïve extrapolation effects between American and Chinese raters. Across both studies, we found support for our predictions that dynamic performance trends have less impact on the ratings of more holistic thinkers. Implications for the dynamic performance and naïve extrapolation literatures are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Power maps and wavefront for progressive addition lenses in eyeglass frames.
Mejía, Yobani; Mora, David A; Díaz, Daniel E
2014-10-01
To evaluate a method for measuring the cylinder, sphere, and wavefront of progressive addition lenses (PALs) in eyeglass frames. We examine the contour maps of cylinder, sphere, and wavefront of a PAL assembled in an eyeglass frame using an optical system based on a Hartmann test. To reduce the data noise, particularly in the border of the eyeglass frame, we implement a method based on the Fourier analysis to extrapolate spots outside the eyeglass frame. The spots are extrapolated up to a circular pupil that circumscribes the eyeglass frame and compared with data obtained from a circular uncut PAL. By using the Fourier analysis to extrapolate spots outside the eyeglass frame, we can remove the edge artifacts of the PAL within its frame and implement the modal method to fit wavefront data with Zernike polynomials within a circular aperture that circumscribes the frame. The extrapolated modal maps from framed PALs accurately reflect maps obtained from uncut PALs and provide smoothed maps for the cylinder and sphere inside the eyeglass frame. The proposed method for extrapolating spots outside the eyeglass frame removes edge artifacts of the contour maps (wavefront, cylinder, and sphere), which may be useful to facilitate measurements such as the length and width of the progressive corridor for a PAL in its frame. The method can be applied to any shape of eyeglass frame.
Biotransformation rates (Vmax) extrapolated from in vitro data are used increasingly in human physiologically based pharmacokinetic (PBPK) models. Extrapolation of Vmax from in vitro data requires use of scaling factors, including mg of microsomal protein/g liver (MPPGL), nmol of...
USDA-ARS?s Scientific Manuscript database
Quantitative microbial risk assessment (QMRA) is a valuable complement to epidemiology for understanding the health impacts of waterborne pathogens. The approach works by extrapolating available data in two ways. First, dose-response data are typically extrapolated from feeding studies, which use ...
In vitro-in vivo extrapolation (IVIVE), or the process of using in vitro data to predict in vivo phenomena, provides key opportunities to bridge the disconnect between high-throughput screening data and real-world human exposures and potential health effects. Strategies utilizing...
A significant challenge in ecotoxicology has been determining chemical hazards to species with limited or no toxicity data. Currently, extrapolation tools like U.S. EPA’s Web-based Interspecies Correlation Estimation (Web-ICE; www3.epa.gov/webice) models categorize toxicity...
MULTIPLE SOLVENT EXPOSURE IN HUMANS: CROSS-SPECIES EXTRAPOLATIONS
(Future Research Plan)
Vernon A. Benignus, Philip J. Bushnell and William K. Boyes
A few solvents can be safely studied in acute experiments in human subjects. Data exist in rats f...
Windtunnel Rebuilding And Extrapolation To Flight At Transsonic Speed For ExoMars
NASA Astrophysics Data System (ADS)
Fertig, Markus; Neeb, Dominik; Gulhan, Ali
2011-05-01
The static as well as the dynamic behaviour of the EXOMARS vehicle in the transonic velocity regime has been investigated experimentally by the Supersonic and Hypersonic Technology Department of DLR in order to investigate the behaviour prior to parachute opening. Since the experimental work was performed in air, a numerical extrapolation to flight by means of CFD is necessary. At low supersonic speed this extrapolation to flight was performed by the Spacecraft Department of the Institute of Flow Technology of DLR employing the CFD code TAU. Numerical as well as experimental results for the wind tunnel test at Mach 1.2 will be compared and discussed for three different angles of attack.
A Method for Extrapolation of Atmospheric Soundings
2014-05-01
Figures: profiles comparing the 00 UTC 14 January 2013 GJT radiosonde to 1-km WRF data from 23 UTC, and comparing 1-km and 3-km WRF data extended from the model surface to the radiosonde surface using the standard extrapolation.
40 CFR 86.435-78 - Extrapolated emission values.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) accumulate distance to the useful life. [42 FR 1126, Jan. 5, 1977, as amended at 49 FR 48139, Dec. 10, 1984] ... Regulations for 1978 and Later New Motorcycles, General Provisions § 86.435-78 Extrapolated emission values... the useful life, or if all points used to generate the lines are below the standards, predicted useful...
40 CFR 86.435-78 - Extrapolated emission values.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) accumulate distance to the useful life. [42 FR 1126, Jan. 5, 1977, as amended at 49 FR 48139, Dec. 10, 1984] ... Regulations for 1978 and Later New Motorcycles, General Provisions § 86.435-78 Extrapolated emission values... the useful life, or if all points used to generate the lines are below the standards, predicted useful...
Using models to extrapolate population-level effects from laboratory toxicity tests in support of population risk assessments. Munns, W.R., Jr.*, Anne Kuhn, Matt G. Mitro, and Timothy R. Gleason, U.S. EPA ORD NHEERL, Narragansett, RI, USA. Driven in large part by management goa...
DOSE-RESPONSE BEHAVIOR OF ANDROGENIC AND ANTIANDROGENIC CHEMICALS: IMPLICATIONS FOR LOW-DOSE EXTRAPOLATION AND CUMULATIVE TOXICITY. LE Gray Jr, C Wolf, J Furr, M Price, C Lambright, VS Wilson and J Ostby. USEPA, ORD, NHEERL, EB, RTD, RTP, NC, USA.
Dose-response behavior of a...
There are a number of risk management decisions, which range from prioritization for testing to quantitative risk assessments. The utility of in vitro studies in these decisions depends on how well the results of such data can be qualitatively and quantitatively extrapolated to i...
An age-classified projection matrix model has been developed to extrapolate the chronic (28-35d) demographic responses of Americamysis bahia (formerly Mysidopsis bahia) to population-level response. This study was conducted to evaluate the efficacy of this model for predicting t...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... is calculated from tumor data of the cancer bioassays using a statistical extrapolation procedure... regulated as a carcinogen, FDA will analyze the data submitted using either a statistical extrapolation... million. * * * * * 0 3. In Sec. 500.84, revise paragraph (c) introductory text to read as follows: Sec...
Sonic Boom Propagation Codes Validated by Flight Test
NASA Technical Reports Server (NTRS)
Poling, Hugh W.
1996-01-01
The sonic boom propagation codes reviewed in this study, SHOCKN and ZEPHYRUS, implement current theory on air absorption using different computational concepts. Review of the codes with a realistic atmosphere model confirms the agreement of propagation results reported by others for idealized propagation conditions. ZEPHYRUS offers greater flexibility in propagation conditions and is thus preferred for practical aircraft analysis. The ZEPHYRUS code was used to propagate sonic boom waveforms, measured approximately 1000 feet away from an SR-71 aircraft flying at Mach 1.25, out to 5000 feet away. These extrapolated signatures were compared to measurements at 5000 feet. Pressure values of the significant shocks (bow, canopy, inlet, and tail) in the waveforms are consistent between extrapolation and measurement. Of particular interest is that four independent measurements taken under the aircraft centerline converge to the same extrapolated result despite differences in measurement conditions. Agreement between extrapolated and measured signature durations is prevented by the measured durations of the 5000-foot signatures being either much longer or shorter than expected. The duration anomalies may be due to signature probing not sufficiently parallel to the aircraft flight direction.
Linear prediction data extrapolation superresolution radar imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing
1993-05-01
Range resolution and cross-range resolution of range-Doppler imaging radars are related to the effective bandwidth of the transmitted signal and to the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, the linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by conventional FFT range-Doppler processing and improving the resolution capability of range-Doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating the observed data beyond the observation windows by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. Live data from a metalized scale-model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and from a flying Boeing 727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT can obtain either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle of the object, or equal-quality images from a smaller bandwidth and total angle.
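A minimal sketch of the linear-prediction-extrapolation idea on a synthetic signal (tone frequencies, window length, and model order are arbitrary choices, not the paper's parameters): fit a forward linear predictor to the observed window by least squares, then recurse it to extend the data before any Fourier processing.

```python
import numpy as np

# Two tones separated by less than the 1/64 Rayleigh limit of the window.
n_obs, n_ext, p = 64, 256, 20
t = np.arange(n_ext)
true = np.cos(2 * np.pi * 0.200 * t) + np.cos(2 * np.pi * 0.212 * t)
x = true[:n_obs]

# Least-squares forward predictor: x[k] ~ sum_i a[i] * x[k-1-i]
rows = np.array([x[k - p:k][::-1] for k in range(p, n_obs)])
a, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)

# Extrapolate beyond the observation window; the longer record can then
# be passed to a conventional (I)DFT for higher-resolution imaging.
ext = list(x)
for _ in range(n_ext - n_obs):
    ext.append(np.dot(a, ext[:-p - 1:-1]))  # newest-first last p samples
ext = np.asarray(ext)

err = np.max(np.abs(ext[n_obs:] - true[n_obs:]))
print("max extrapolation error:", err)
```

Because a sum of undamped sinusoids satisfies an exact linear recursion, the fitted predictor continues the signal faithfully well past the window, which is what lets the subsequent DFT resolve the two tones.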
NASA Astrophysics Data System (ADS)
Ilieva, T.; Iliev, I.; Pashov, A.
2016-12-01
In the traditional description of electronic states of diatomic molecules by means of molecular constants or Dunham coefficients, one of the important fitting parameters is the value of the zero-point energy, i.e., the minimum of the potential curve or the energy of the lowest vibrational-rotational level, E00. Their values are almost always the result of an extrapolation, and it may be difficult to estimate their uncertainties, because they are connected not only with the uncertainty of the experimental data, but also with the distribution of experimentally observed energy levels and the particular realization of the set of Dunham coefficients. This paper presents a comprehensive analysis based on Monte Carlo simulations, which aims to demonstrate the influence of all these factors on the uncertainty of the extrapolated minimum of the potential energy curve U(Re) and the value of E00. The very good extrapolation properties of the Dunham coefficients are quantitatively confirmed, and it is shown that for a proper estimate of the uncertainties, the ambiguity in the composition of the Dunham coefficients should be taken into account.
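The Monte Carlo idea can be sketched in a few lines with a made-up two-constant Dunham-like expansion (real analyses involve many more coefficients and rotational structure): perturb simulated term differences with noise, refit, and inspect the spread of the extrapolated E00.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical term values: G(v) = we*(v+1/2) - wexe*(v+1/2)**2 (cm^-1).
we, wexe = 200.0, 2.0
v = np.arange(1, 11)

# Observable: term differences relative to v = 0, with measurement noise.
dG_true = we * v - wexe * (v**2 + v)
design = np.column_stack([v, -(v**2 + v)])

e00 = []
for _ in range(2000):
    dG = dG_true + rng.normal(0.0, 0.05, size=v.size)  # 0.05 cm^-1 noise
    coef, *_ = np.linalg.lstsq(design, dG, rcond=None)
    e00.append(coef[0] / 2 - coef[1] / 4)              # extrapolated E00

e00 = np.array(e00)
print(f"E00 = {e00.mean():.3f} +/- {e00.std():.3f} cm^-1 (true {we/2 - wexe/4})")
```

The spread of the refitted E00 values is the Monte Carlo uncertainty estimate; it reflects both the measurement noise and, in a full analysis, the distribution of observed levels and the chosen set of coefficients.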
Long-term predictions using natural analogues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, R.C.
1995-09-01
One of the unique and scientifically most challenging aspects of nuclear waste isolation is the extrapolation of short-term laboratory data (hours to years) to the long time periods (10^3-10^5 years) required by regulatory agencies for performance assessment. The direct validation of these extrapolations is not possible, but methods must be developed to demonstrate compliance with government regulations and to satisfy the lay public that there is a demonstrable and reasonable basis for accepting the long-term extrapolations. Natural systems (e.g., "natural analogues") provide perhaps the only means of partial "validation," as well as data that may be used directly in the models that are used in the extrapolation. Natural systems provide data on very large spatial (nm to km) and temporal (10^3-10^8 years) scales and in highly complex terranes in which unknown synergisms may affect radionuclide migration. This paper reviews the application (and most importantly, the limitations) of data from natural analogue systems to the "validation" of performance assessments.
NASA Astrophysics Data System (ADS)
Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.
We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.
Benhaim, Deborah; Grushka, Eli
2010-01-01
This study investigates lipophilicity determination by chromatographic measurements using the polar-embedded Ascentis RP-Amide stationary phase. As a new generation of amide-functionalized silica stationary phase, the Ascentis RP-Amide column is evaluated as a possible substitute for the n-octanol/water partitioning system for lipophilicity measurements. For this evaluation, extrapolated retention factors, log k'w, of a set of diverse compounds were determined using different methanol contents in the mobile phase. The use of an n-octanol-enriched mobile phase enhances the relationship between the slope (S) of the extrapolation lines and the extrapolated log k'w (the intercept of the extrapolation), as well as the correlation between log P values and the extrapolated log k'w (1:1 correlation, r2 = 0.966). In addition, the use of isocratic retention factors, at 40% methanol in the mobile phase, provides a rapid tool for lipophilicity determination. The intermolecular interactions that contribute to the retention process in the Ascentis RP-Amide phase are characterized using the solvation parameter model of Abraham. The LSER system constants for the column are very similar to the LSER constants of the n-octanol/water extraction system. Tanaka radar plots are used for quick visual comparison of the system constants of the Ascentis RP-Amide column and the n-octanol/water extraction system. The results all indicate that the Ascentis RP-Amide stationary phase can provide reliable lipophilicity data. Copyright 2009 Elsevier B.V. All rights reserved.
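The log k'w extrapolation reduces to a linear fit of isocratic retention data against organic-modifier fraction; the retention factors below are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical isocratic retention factors of one solute at several
# methanol volume fractions (phi) in the mobile phase.
phi = np.array([0.40, 0.50, 0.60, 0.70])
log_k = np.array([1.10, 0.72, 0.35, -0.03])

# Linear solvent-strength model: log k' = log k'w - S * phi.
# The intercept at phi = 0 is the extrapolated log k'w.
coeffs = np.polyfit(phi, log_k, 1)
S, log_kw = -coeffs[0], coeffs[1]
print(f"S = {S:.2f}, log k'w = {log_kw:.2f}")
```

The intercept is the retention factor extrapolated to pure water, the quantity correlated with log P in the abstract; the slope S is the solvent-strength parameter.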
Effects of sport expertise on representational momentum during timing control.
Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu
2015-04-01
Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.
Filling gaps in visual motion for target capture
Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637
A nowcasting technique based on application of the particle filter blending algorithm
NASA Astrophysics Data System (ADS)
Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai
2017-10-01
To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. The particle filter blending method is therefore shown to be superior to the traditional forecasting methods and can be used to enhance nowcasting capability in operational weather forecasts.
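The final semi-Lagrangian advection step of such a nowcast can be sketched with a uniform motion field standing in for the particle-filter-blended vectors (grid size, motion, and reflectivity values are invented; nearest-neighbour sampling replaces the interpolation used operationally):

```python
import numpy as np

def semi_lagrangian_step(field, u, v, dt=1.0):
    """Advect a 2D field one step: trace each grid point back along the
    motion vectors (u = columns/step, v = rows/step) and sample the
    upstream value, so new[y, x] = old[y - v*dt, x - u*dt]."""
    ny, nx = field.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    src_y = np.clip(np.rint(yy - v * dt).astype(int), 0, ny - 1)
    src_x = np.clip(np.rint(xx - u * dt).astype(int), 0, nx - 1)
    return field[src_y, src_x]

# Toy radar echo: one bright block moving 2 columns and 1 row per step.
echo = np.zeros((20, 20))
echo[8:11, 4:7] = 40.0                  # dBZ
u = np.full_like(echo, 2.0)
v = np.full_like(echo, 1.0)

forecast = echo.copy()
for _ in range(3):                      # 3-step extrapolation nowcast
    forecast = semi_lagrangian_step(forecast, u, v)

print(np.argwhere(forecast == 40.0))    # block displaced by (3 rows, 6 cols)
```

Backward trajectory tracing keeps the scheme stable for large displacements; in the study's pipeline, the motion field fed to this step is the particle-filter blend of the optical-flow vectors rather than a uniform field.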
Steen, Valerie; Sofaer, Helen R.; Skagen, Susan K.; Ray, Andrea J.; Noon, Barry R
2017-01-01
Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs to represent a unique combination of: future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was that of future climate change. Large uncertainty was also attributed to climate covariate class with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given uncertainty in projected vulnerability and resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. 
For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water availability may be a more proximal driver than precipitation. However, because cross-validation results were correlated with extrapolation results, the use of cross-validation performance metrics to guide modeling choices where knowledge is limited was supported.
Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments
Dittrich, Sandra; Noesselt, Tömme
2018-01-01
Predicting motion is essential for many everyday life activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white-noise sound (congruent or incongruent with the visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using a stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during the detection task. In contrast, for the more complex extrapolation task, group-mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired for the other one. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect.
Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate audiovisual motion prediction and need to be considered in future research.
NASA Astrophysics Data System (ADS)
Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr
2017-02-01
An evaluation of convective cloud forecasts performed with the numerical weather prediction (NWP) model COSMO and extrapolation of cloud fields is presented using observed data derived from the geostationary satellite Meteosat Second Generation (MSG). The present study focuses on the nowcasting range (1-5 h) for five severe convective storms in their developing stage that occurred during the warm season in the years 2012-2013. Radar reflectivity and extrapolated radar reflectivity data were assimilated for at least 6 h depending on the time of occurrence of convection. Synthetic satellite imageries were calculated using radiative transfer model RTTOV v10.2, which was implemented into the COSMO model. NWP model simulations of IR10.8 μm and WV06.2 μm brightness temperatures (BTs) with a horizontal resolution of 2.8 km were interpolated into the satellite projection and objectively verified against observations using Root Mean Square Error (RMSE), correlation coefficient (CORR) and Fractions Skill Score (FSS) values. Naturally, the extrapolation of cloud fields yielded an approximately 25% lower RMSE, 20% higher CORR and 15% higher FSS at the beginning of the second forecasted hour compared to the NWP model forecasts. On the other hand, comparable scores were observed for the third hour, whereas the NWP forecasts outperformed the extrapolation by 10% for RMSE, 15% for CORR and up to 15% for FSS during the fourth forecasted hour and 15% for RMSE, 27% for CORR and up to 15% for FSS during the fifth forecasted hour. The analysis was completed by a verification of the precipitation forecasts yielding approximately 8% higher RMSE, 15% higher CORR and up to 45% higher FSS when the NWP model simulation is used compared to the extrapolation for the first hour. Both the methods yielded unsatisfactory level of precipitation forecast accuracy from the fourth forecasted hour onward.
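The Fractions Skill Score used in the verification above compares neighbourhood fractions of threshold exceedances between forecast and observation. A generic sketch (the threshold and window size are free parameters, not values from the study):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """Fractions Skill Score of a forecast field against observations.

    Both fields are binarised at `threshold`, converted to neighbourhood
    fractions with a `window` x `window` moving average, and compared:
    FSS = 1 - MSE(fractions) / reference MSE.  1 = perfect match, values
    near 0 indicate no skill at that neighbourhood scale.
    """
    f = uniform_filter((forecast >= threshold).astype(float), size=window)
    o = uniform_filter((observed >= threshold).astype(float), size=window)
    num = np.mean((f - o) ** 2)
    den = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - num / den if den > 0 else np.nan
```

Scores such as RMSE and the correlation coefficient compare grid values directly, whereas FSS rewards near-misses within the chosen neighbourhood, which is why it is popular for convective-scale verification.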
Surface dose measurements with commonly used detectors: a consistent thickness correction method.
Reynolds, Tatsiana A; Higgins, Patrick
2015-09-08
The purpose of this study was to review the application of a consistent correction method for solid-state detectors such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films. A further aim was to compare surface dose measured with an extrapolation ionization chamber (PTW 30-360) against other parallel-plate chambers: RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus), and Memorial. Measurements of surface dose for 6 MV photons with parallel-plate chambers were used to establish a baseline. cTLD, OSLs, EDR2, and EBT2 measurements were corrected using a method that involved irradiation of three-dosimeter stacks, followed by linear extrapolation of the individual dosimeter measurements to zero thickness. We determined the magnitude of the correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (-0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel-plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement.
The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not recommended for these types of measurements.
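The three-detector stack correction amounts to fitting a line through the readings at each dosimeter's effective depth and taking the intercept. A minimal sketch of that extrapolation (function name and units are illustrative assumptions):

```python
import numpy as np

def zero_thickness_dose(depths, readings):
    """Linearly extrapolate stacked-dosimeter readings to zero thickness.

    depths   : effective depth of each dosimeter in the stack
               (e.g. in mg/cm^2), one entry per dosimeter.
    readings : the corresponding dose readings.
    Returns the intercept of a least-squares line through the points,
    i.e. the estimated dose at the surface (zero thickness).
    """
    slope, intercept = np.polyfit(depths, readings, 1)
    return intercept
```

With only three points the fit is cheap, and for a response that is genuinely linear over the stack thickness the intercept recovers the surface dose exactly.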
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and True Skill Statistics (TSS), contrasting binary model predictions against temporal-independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal constrained models with no dispersal limited SDMs; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared this between extrapolated and no extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences on range size changes over time, which is the most used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in predictive accuracy of model predictions for those areas. 
Our results highlight (1) incorporating dispersal processes can improve predictive accuracy of temporal transference of SDMs and reduce uncertainties of extinction risk assessments from global change; (2) as geographical areas subjected to novel climates are expected to arise, they must be reported as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. Doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.
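The True Skill Statistic used to score the binary predictions above is computed from the confusion matrix. A minimal sketch (argument order is an assumption):

```python
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1, ranging over [-1, 1].

    tp, fp, fn, tn : counts of true positives, false positives,
    false negatives, and true negatives from binarised predictions.
    Unlike overall accuracy, TSS is insensitive to prevalence, which
    is why it is a common skill measure for SDM evaluation.
    """
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity + specificity - 1.0
```

A score of 1 indicates perfect discrimination, 0 indicates performance no better than random, and negative values indicate systematically inverted predictions.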
Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian
2014-01-01
Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species’ evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals (“MammalDIET”). Diet information was digitized from two global and cladewide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill-in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). 
Internal and external validation showed that: (1) extrapolations were most reliable for primary food items; (2) several diet categories (“Animal”, “Mammal”, “Invertebrate”, “Plant”, “Seed”, “Fruit”, and “Leaf”) had high proportions of correctly predicted diet ranks; and (3) the potential of correctly extrapolating specific diet categories varied both within and among clades. Global maps of species richness and proportion showed congruence among trophic levels, but also substantial discrepancies between dietary guilds. MammalDIET provides a comprehensive, unique and freely available dataset on diet preferences for all terrestrial mammals worldwide. It enables broad-scale analyses for specific trophic levels and dietary guilds, and a first assessment of trait conservatism in mammalian diet preferences at a global scale. The digitalization, extrapolation and validation procedures could be transferable to other trait data and taxa.
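The hierarchical fill-in described above (genus first, then family) can be sketched as a consensus vote among relatives with known diets. This is an illustrative reconstruction, not the MammalDIET code; the data structures and species names are hypothetical.

```python
from collections import Counter

def fill_diet(species_diet, genus_of, family_of):
    """Fill missing species-level diet labels hierarchically.

    species_diet : dict mapping species -> diet label, or None if unknown
    genus_of, family_of : dicts mapping each species to its genus / family
    Missing species receive the most common diet among congeners, then
    among confamilials; species with no scored relatives stay None.
    """
    def consensus(group_of, target):
        votes = Counter(d for s, d in species_diet.items()
                        if d is not None and group_of[s] == group_of[target])
        return votes.most_common(1)[0][0] if votes else None

    filled = {}
    for sp, diet in species_diet.items():
        if diet is None:
            diet = consensus(genus_of, sp) or consensus(family_of, sp)
        filled[sp] = diet
    return filled
```

Tracking which taxonomic level supplied each label (species, genus, or family) is what enables the internal jack-knife validation the abstract describes.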
NASA Astrophysics Data System (ADS)
Huang, Xinchuan; Valeev, Edward F.; Lee, Timothy J.
2010-12-01
One-particle basis set extrapolation is compared with one of the new R12 methods for computing highly accurate quartic force fields (QFFs) and spectroscopic data, including molecular structures, rotational constants, and vibrational frequencies for the H2O, N2H+, NO2+, and C2H2 molecules. In general, agreement between the spectroscopic data computed from the best R12 and basis set extrapolation methods is very good with the exception of a few parameters for N2H+ where it is concluded that basis set extrapolation is still preferred. The differences for H2O and NO2+ are small and it is concluded that the QFFs from both approaches are more or less equivalent in accuracy. For C2H2, however, a known one-particle basis set deficiency for C-C multiple bonds significantly degrades the quality of results obtained from basis set extrapolation and in this case the R12 approach is clearly preferred over one-particle basis set extrapolation. The R12 approach used in the present study was modified in order to obtain high precision electronic energies, which are needed when computing a QFF. We also investigated including core-correlation explicitly in the R12 calculations, but conclude that current approaches are lacking. Hence core-correlation is computed as a correction using conventional methods. Considering the results for all four molecules, it is concluded that R12 methods will soon replace basis set extrapolation approaches for high accuracy electronic structure applications such as computing QFFs and spectroscopic data for comparison to high-resolution laboratory or astronomical observations, provided one uses a robust R12 method as we have done here. The specific R12 method used in the present study, CCSD(T)R12, incorporated a reformulation of one intermediate matrix in order to attain machine precision in the electronic energies. 
Final QFFs for N2H+ and NO2+ were computed, including basis set extrapolation, core-correlation, scalar relativity, and higher-order correlation and then used to compute highly accurate spectroscopic data for all isotopologues. Agreement with high-resolution experiment for 14N2H+ and 14N2D+ was excellent, but for 14N16O2+ agreement for the two stretching fundamentals is outside the expected residual uncertainty in the theoretical values, and it is concluded that there is an error in the experimental quantities. It is hoped that the highly accurate spectroscopic data presented for the minor isotopologues of N2H+ and NO2+ will be useful in the interpretation of future laboratory or astronomical observations.
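One-particle basis set extrapolation of the kind compared above is commonly done with a two-point inverse-cube formula in the cardinal number of the basis. A sketch of the standard X⁻³ scheme (this is the generic textbook form, not necessarily the exact scheme used in the study):

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point complete-basis-set (CBS) extrapolation.

    Assumes the energy converges as E(X) = E_CBS + A * X**-3, where X is
    the cardinal number of the correlation-consistent basis (e.g. 3 for
    cc-pVTZ, 4 for cc-pVQZ).  Solving the two-point system for E_CBS:
    """
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)
```

Because a quartic force field requires many energies converged to sub-microhartree precision, small deviations from the assumed X⁻³ behaviour (as for the C-C multiple bond case noted above) propagate directly into the spectroscopic constants.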
In practice, it is neither feasible nor ethical to conduct toxicity tests with all species that may be impacted by chemical exposures. Therefore, cross-species extrapolation is fundamental to human health and ecological risk assessment. The extensive chemical universe for which w...
Extrapolating intensified forest inventory data to the surrounding landscape using landsat
Evan B. Brooks; John W. Coulston; Valerie A. Thomas; Randolph H. Wynne
2015-01-01
In 2011, a collection of spatially intensified plots was established on three of the Experimental Forests and Ranges (EFRs) sites with the intent of facilitating FIA program objectives for regional extrapolation. Characteristic coefficients from harmonic regression (HR) analysis of associated Landsat stacks are used as inputs into a conditional random forests model to...
Extrapolation of forest community types with a geographic information system
W.K. Clatterbuck; J. Gregory
1991-01-01
A geographic information system (GIS) was used to project eight forest community types from a 1,190-acre (482-ha) intensively sampled area to an unsampled 19,887-acre (8,054-ha) adjacent area with similar environments on the Western Highland Rim of Tennessee. Both physiographic and vegetative parameters were used to distinguish, extrapolate, and map communities.
Species Differences in Androgen and Estrogen Receptor Structure and Function Among Vertebrates and Invertebrates: Interspecies Extrapolations regarding Endocrine Disrupting Chemicals
Wilson, V.S.; Ankley, G.T.; Gooding, M.; Reynolds, P.D.; Noriega, N.C.; Cardon, M.; Hartig, P.; ...
NASA Technical Reports Server (NTRS)
Mendelson, A.; Manson, S. S.
1960-01-01
A method using finite-difference recurrence relations is presented for direct extrapolation of families of curves. The method is illustrated by applications to creep-rupture data for several materials and it is shown that good results can be obtained without the necessity for any of the usual parameter concepts.
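The idea of extrapolating a curve by finite-difference recurrence can be illustrated with the simplest case: assume the differences of some fixed order are constant and fold them back up to obtain the next point. This is a generic sketch of the recurrence principle, not the paper's creep-rupture procedure.

```python
import numpy as np

def extrapolate_next(values, order):
    """Extrapolate the next point of an equally spaced sequence by
    assuming its `order`-th finite differences are constant.  The
    recurrence is exact for polynomial trends of degree `order`."""
    d = np.asarray(values, dtype=float)
    tops = []                  # last element of each difference row
    for _ in range(order):
        tops.append(d[-1])
        d = np.diff(d)
    nxt = d[-1]                # constant highest-order difference
    for t in reversed(tops):   # fold the differences back up
        nxt += t
    return nxt
```

Applied repeatedly, this extends a family of curves without fitting any parametric time-temperature model, which is the advantage the abstract claims over the usual parameter concepts.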
Low-dose extrapolation model selection for evaluating the health effects of environmental pollutants is a key component of the risk assessment process. At a workshop held in Baltimore, MD, on April 23-24, 2007, and sponsored by U.S. Environmental Protection Agency (EPA) and Johns...
New method of extrapolation of the resistance of a model planing boat to full size
NASA Technical Reports Server (NTRS)
Sottorf, W
1942-01-01
The previously employed method of extrapolating the total resistance to full size with λ³ (λ = model scale), thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever-increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is obtained analytically. The friction coefficients are read from Prandtl's curve for a turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.
TLD extrapolation for skin dose determination in vivo.
Kron, T; Butson, M; Hunt, F; Denham, J
1996-11-01
Prediction of skin reactions requires knowledge of the dose at various depths in the human skin. Using thermoluminescence dosimeters of three different thicknesses, the dose can be extrapolated to the surface and interpolated between the different depths. A TLD holder was designed for these TLD extrapolation measurements on patients during treatment which allowed measurements of entrance and exit skin dose with a day to day variability of +/-7% (S.D. of mean reading). In a pilot study on 18 patients undergoing breast irradiation, it was found that the angle of incidence of the radiation beam is the most significant factor influencing skin entrance dose. In most of these measurements the beam exit dose contributed 50% more to the surface dose than the entrance dose.
A regularization method for extrapolation of solar potential magnetic fields
NASA Technical Reports Server (NTRS)
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
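The Fourier-integral character of the solution can be sketched for the simplest case: upward continuation of a photospheric Bz map under the potential (current-free) assumption, where each Fourier mode decays with height as exp(-|k|z), and a Gaussian low-pass on the boundary data stands in for the smoothing of the initial data. This is a heavily simplified illustration, not the paper's Tikhonov formulation; `sigma` and the uniform grid spacing are assumptions.

```python
import numpy as np

def extrapolate_bz(bz0, heights, dx=1.0, sigma=0.0):
    """Potential-field upward continuation of a photospheric Bz map.

    bz0     : 2D array of the boundary (photospheric) Bz values.
    heights : iterable of heights z at which to evaluate the field.
    sigma   : width of a Gaussian smoothing filter applied to the
              boundary data, a simple stand-in for regularization.
    Returns one extrapolated Bz map per height.
    """
    ny, nx = bz0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))
    # Smooth the boundary data in Fourier space, then attenuate each
    # mode according to the harmonic decay exp(-|k| z).
    bz_hat = np.fft.fft2(bz0) * np.exp(-0.5 * (sigma * k) ** 2)
    return [np.real(np.fft.ifft2(bz_hat * np.exp(-k * z))) for z in heights]
```

The smoothing matters because measurement noise concentrates at high |k|, exactly the modes that make the ill-posed Cauchy continuation unstable; bounding the filter width by the magnetograph sensitivity is what yields the error bound described in the abstract.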
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1982-01-01
The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
Effective orthorhombic anisotropic models for wavefield extrapolation
NASA Astrophysics Data System (ADS)
Ibanez-Jacome, Wilson; Alkhalifah, Tariq; Waheed, Umair bin
2014-09-01
Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth order polynomial equation with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.
ERIC Educational Resources Information Center
Wolfe, Mary L.; Martuza, Victor R.
The major purpose of this experiment was to examine the effects of format (bar graphs vs. tables) and organization (by year vs. by brand) on the speed and accuracy of extrapolation and interpolation with multiple, nonlinear trend displays. Fifty-six undergraduates enrolled in the College of Education at the University of Delaware served as the…
ERIC Educational Resources Information Center
LaFlair, Geoffrey T.; Staples, Shelley
2017-01-01
Investigations of the validity of a number of high-stakes language assessments are conducted using an argument-based approach, which requires evidence for inferences that are critical to score interpretation (Chapelle, Enright, & Jamieson, 2008b; Kane, 2013). The current study investigates the extrapolation inference for a high-stakes test of…
Predicting structural properties of fluids by thermodynamic extrapolation
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Jiao, Sally; Hatch, Harold W.; Blanco, Marco A.; Shen, Vincent K.
2018-05-01
We describe a methodology for extrapolating the structural properties of multicomponent fluids from one thermodynamic state to another. These properties generally include features of a system that may be computed from an individual configuration such as radial distribution functions, cluster size distributions, or a polymer's radius of gyration. This approach is based on the principle of using fluctuations in a system's extensive thermodynamic variables, such as energy, to construct an appropriate Taylor series expansion for these structural properties in terms of intensive conjugate variables, such as temperature. Thus, one may extrapolate these properties from one state to another when the series is truncated to some finite order. We demonstrate this extrapolation for simple and coarse-grained fluids in both the canonical and grand canonical ensembles, in terms of both temperatures and the chemical potentials of different components. The results show that this method is able to reasonably approximate structural properties of such fluids over a broad range of conditions. Consequently, this methodology may be employed to increase the computational efficiency of molecular simulations used to measure the structural properties of certain fluid systems, especially those used in high-throughput or data-driven investigations.
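The core fluctuation identity behind this approach is that, in the canonical ensemble, the derivative of an average with respect to inverse temperature is a covariance: d⟨A⟩/dβ = -Cov(A, U). A first-order sketch of the extrapolation (the paper also treats higher orders and grand canonical conjugate variables; function name is illustrative):

```python
import numpy as np

def extrapolate_mean(a_samples, u_samples, beta0, beta1):
    """First-order thermodynamic extrapolation of a canonical average.

    a_samples : per-configuration values of the observable A,
                sampled at inverse temperature beta0.
    u_samples : the corresponding total potential energies U.
    Uses d<A>/d(beta) = -Cov(A, U) to estimate <A> at beta1 from the
    beta0 samples via a truncated Taylor series.
    """
    a = np.asarray(a_samples, float)
    u = np.asarray(u_samples, float)
    cov_au = np.mean(a * u) - np.mean(a) * np.mean(u)
    return np.mean(a) - cov_au * (beta1 - beta0)
```

Because the covariance is estimated from configurations already generated at beta0, the structural property at nearby states comes essentially for free, which is the efficiency gain claimed above.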
Loss tolerant speech decoder for telecommunications
NASA Technical Reports Server (NTRS)
Prieto, Jr., Jaime L. (Inventor)
1999-01-01
A method and device for extrapolating past signal-history data for insertion into missing data segments in order to conceal digital speech frame errors. The extrapolation method uses past-signal history that is stored in a buffer. The method is implemented with a device that utilizes a finite-impulse response (FIR) multi-layer feed-forward artificial neural network that is trained by back-propagation for one-step extrapolation of speech compression algorithm (SCA) parameters. Once a speech connection has been established, the speech compression algorithm device begins sending encoded speech frames. As the speech frames are received, they are decoded and converted back into speech signal voltages. During the normal decoding process, pre-processing of the required SCA parameters will occur and the results stored in the past-history buffer. If a speech frame is detected to be lost or in error, then extrapolation modules are executed and replacement SCA parameters are generated and sent as the parameters required by the SCA. In this way, the information transfer to the SCA is transparent, and the SCA processing continues as usual. The listener will not normally notice that a speech frame has been lost because of the smooth transition between the last-received, lost, and next-received speech frames.
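The patent's extrapolator is a trained FIR feed-forward neural network; as a hedged stand-in, the same one-step prediction from a past-history buffer can be sketched with a linear autoregressive fit (the function name, AR order, and least-squares fit are illustrative assumptions, not the patented network).

```python
import numpy as np

def predict_next(history, order=2):
    """One-step extrapolation of a lost parameter from its past history.

    Fits an order-`order` linear autoregression to the buffered values
    by least squares and predicts the next sample -- a linear stand-in
    for the FIR neural-network extrapolator described in the patent.
    """
    h = np.asarray(history, dtype=float)
    # Design matrix of lagged values: row for time t holds
    # h[t-1], h[t-2], ..., h[t-order].
    rows = [h[t - 1::-1][:order] for t in range(order, len(h))]
    X = np.array(rows)
    y = h[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.dot(coeffs, h[-1:-order - 1:-1]))
```

In the concealment scheme above, the predicted value would replace the lost frame's SCA parameter so the decoder's processing continues uninterrupted, keeping the substitution transparent to the speech compression algorithm.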
Xia, Yan; Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre; Maier, Andreas
2017-01-01
We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method.
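The simplest Helgason-Ludwig condition, the order-zero projection moment, requires the total attenuation (sum over detector bins) to be identical in every view; lateral truncation that removes different amounts of mass per view violates it. A toy sketch with an analytic parallel-beam sinogram of a centred disk (the paper works with the full set of conditions in the 2D Fourier domain; this check is only the lowest-order special case):

```python
import numpy as np

def disk_sinogram(radius, n_angles=90, n_bins=129, s_max=1.0):
    """Analytic parallel-beam sinogram of a centred disk of unit density:
    the projection p(s) = 2*sqrt(r^2 - s^2) is the same at every angle."""
    s = np.linspace(-s_max, s_max, n_bins)
    p = 2.0 * np.sqrt(np.clip(radius**2 - s**2, 0.0, None))
    return np.tile(p, (n_angles, 1))   # rows: angles, cols: detector bins

def moment0_consistent(sino, tol=1e-9):
    """Order-zero Helgason-Ludwig condition: the summed attenuation
    must be identical for every projection angle."""
    mass = sino.sum(axis=1)
    return np.ptp(mass) <= tol * max(mass.mean(), 1.0)
```

Zeroing detector bins in one view (mimicking view-dependent truncation) makes the per-angle masses disagree, and it is exactly this kind of residual inconsistency that the ellipse-fitting extrapolation above minimises.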
Kechagia, Irene-Ariadne; Kalantzi, Lida; Dokoumetzidis, Aristides
2015-11-01
To extrapolate enalapril efficacy to children 0-6 years old, a pharmacokinetic/pharmacodynamic (PKPD) model was built using literature data, with blood pressure as the PD endpoint. A PK model of enalapril was developed from literature paediatric data up to 16 years old. A PD model of enalapril was fitted to adult literature response vs time data with various doses. The final PKPD model was validated with literature paediatric efficacy observations (diastolic blood pressure (DBP) drop after 2 weeks of treatment) in children of age 6 years and higher. The model was used to predict enalapril efficacy for ages 0-6 years. A two-compartment PK model was chosen with weight (indirectly reflecting age) as a covariate on clearance and central volume. An indirect link PD model was chosen to describe drug effect. External validation of the model's capability to predict efficacy in children was successful. Enalapril efficacy was extrapolated to ages 1-11 months and 1-6 years, finding mean DBP drops of 11.2 and 11.79 mmHg, respectively. Mathematical modelling was used to extrapolate enalapril efficacy to young children to support a paediatric investigation plan targeting a paediatric-use marketing authorization application. © 2015 Royal Pharmaceutical Society.
NASA Astrophysics Data System (ADS)
Varandas, António J. C.
2018-04-01
Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
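The simplest member of this family of schemes is a two-point inverse-cube extrapolation of the correlation energy. The sketch below assumes the common E(X) = E_CBS + A/X³ model with made-up energies; it is not the specific protocol reviewed by the author:

```python
import numpy as np

def cbs_two_point(e_x, e_y, x, y):
    """Two-point inverse-cube extrapolation of the correlation energy.
    Assumes E(X) = E_CBS + A / X**3, a common simple model; solving the
    two-equation system for E_CBS gives the closed form below."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Synthetic check: energies that follow the assumed model exactly.
e_cbs, a = -0.3200, 0.45          # hartree (made-up numbers)
e_tz = e_cbs + a / 3**3           # triple-zeta-like point, X = 3
e_qz = e_cbs + a / 4**3           # quadruple-zeta-like point, X = 4
est = cbs_two_point(e_qz, e_tz, 4, 3)
```

On data that obey the model exactly, the closed form recovers the basis set limit to machine precision; real correlation energies deviate from pure X⁻³ behavior, which is what motivates the refined schemes discussed in the review.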
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Two-flavor simulations of ρ(770) and the role of the KK̄ channel
Hu, B.; Molina, R.; Döring, M.; ...
2016-09-15
Here, the ρ(770) meson is the most extensively studied resonance in lattice QCD simulations in two (Nf = 2) and three (Nf = 2 + 1) flavor formulations. We analyze Nf = 2 lattice scattering data using unitarized chiral perturbation theory, allowing not only for the extrapolation in mass but also in flavor, Nf = 2 → Nf = 2 + 1. The flavor extrapolation requires information from a global fit to ππ and πK phase shifts from experiment. While the chiral extrapolation of Nf = 2 lattice data leads to masses of the ρ(770) meson far below the experimental one, we find that the missing KK̄ channel is able to explain this discrepancy.
NASA Astrophysics Data System (ADS)
Sachs, M. K.; Yoder, M. R.; Turcotte, D. L.; Rundle, J. B.; Malamud, B. D.
2012-05-01
Extreme events that change global society have been characterized as black swans. The frequency-size distributions of many natural phenomena are often well approximated by power-law (fractal) distributions. An important question is whether the probability of extreme events can be estimated by extrapolating the power-law distributions. Events that exceed these extrapolations have been characterized as dragon-kings. In this paper we consider extreme events for earthquakes, volcanic eruptions, wildfires, landslides and floods. We also consider the extreme event behavior of three models that exhibit self-organized criticality (SOC): the slider-block, forest-fire, and sand-pile models. Since extrapolations using power-laws are widely used in probabilistic hazard assessment, the occurrence of dragon-king events has important practical implications.
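Extrapolating a fitted power law to extreme sizes can be sketched as follows. The event sizes are synthetic, and the maximum-likelihood (Hill) estimator stands in for whatever fitting method a particular hazard study would use:

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw synthetic "event sizes" from a pure power law (Pareto),
# p(x) ~ x**(-alpha) for x >= xmin, via inverse-CDF sampling.
alpha_true, xmin, n = 2.5, 1.0, 20000
x = xmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood (Hill) estimate of the exponent.
alpha_hat = 1.0 + n / np.sum(np.log(x / xmin))

def exceedance(x0):
    """Extrapolated exceedance probability P(X > x0) = (x0/xmin)**(1-alpha):
    the power-law forecast against which dragon-king outliers are judged."""
    return (x0 / xmin) ** (1.0 - alpha_hat)

p_100 = exceedance(100.0)   # chance of an event 100x the minimum size
```

A "dragon-king" would show up as an observed extreme far more probable than this extrapolated tail predicts.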
Varandas, A J C
2009-02-01
The potential energy surface for the C(20)-He interaction is extrapolated for three representative cuts to the complete basis set limit using second-order Møller-Plesset perturbation calculations with correlation consistent basis sets up to the doubly augmented variety. The results both with and without counterpoise correction show consistency with each other, supporting that extrapolation without such a correction provides a reliable scheme to elude the basis-set-superposition error. Converged attributes are obtained for the C(20)-He interaction, which are used to predict the fullerene dimer ones. Time requirements show that the method can be drastically more economical than the counterpoise procedure and even competitive with Kohn-Sham density functional theory for the title system.
The design of L1-norm visco-acoustic wavefield extrapolators
NASA Astrophysics Data System (ADS)
Salam, Syed Abdul; Mousa, Wail A.
2018-04-01
Explicit depth frequency-space (f-x) prestack imaging is an attractive mechanism for seismic imaging. To date, this method has mainly been applied to migration under the assumption of an acoustic medium; very little work has considered visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f-x extrapolators. To show the accuracy and compensation capability of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation yields practically stable operators and improved image resolution.
NASA Technical Reports Server (NTRS)
Weber, L. A.
1971-01-01
Thermophysical properties data for oxygen at pressures below 5000 psia have been extrapolated to higher pressures (5,000-10,000 psia) in the temperature range 100-600 R. The tables include density, entropy, enthalpy, internal energy, speed of sound, specific heat, thermal conductivity, viscosity, thermal diffusivity, Prandtl number, and dielectric constant.
Recent research on the acute effects of volatile organic compounds (VOCs) suggests that extrapolation from short (~ 1 h) to long durations (up to 4 h) may be improved by using estimates of brain toluene concentration (Br[Tol]) instead of cumulative inhaled dose (C x t) as a metri...
NASA Astrophysics Data System (ADS)
Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.
2018-02-01
Using an off-lattice kinetic Monte Carlo model we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicates that these ices may be less porous than laboratory ices.
cDNA Cloning of Fathead minnow (Pimephales promelas) Estrogen and Androgen Receptors for Use in Steroid Receptor Extrapolation Studies for Endocrine Disrupting Chemicals.
Wilson, V.S.; Korte, J.; Hartig, P.; Ankley, G.T.; Gray, L.E., Jr; Welch, J.E. U.S...
USDA-ARS?s Scientific Manuscript database
Long Chain Free Fatty Acids (LCFFAs) from the hydrolysis of fat, oil and grease (FOG) are major components in the formation of insoluble saponified solids known as FOG deposits that accumulate in sewer pipes and lead to sanitary sewer overflows (SSOs). A Double Wavenumber Extrapolative Technique (DW...
Surface dose measurements with commonly used detectors: a consistent thickness correction method
Higgins, Patrick
2015-01-01
The purpose of this study was to review application of a consistent correction method for solid-state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films, and, in addition, to compare surface dose measured using an extrapolation ionization chamber (PTW 30-360) with that from other parallel plate chambers: RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus) and Memorial. Measurements of surface dose for 6 MV photons with parallel plate chambers were used to establish a baseline. cTLD, OSL, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three-dosimeter stacks, followed by linear extrapolation of individual dosimeter measurements to zero thickness. We determined the magnitude of the correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (−0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not recommended for these types of measurements.
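The three-stack, extrapolate-to-zero-thickness correction reduces to a linear fit whose intercept is the surface-dose estimate. The readings below are made up, but constructed to follow the method's assumption of linearity in detector thickness:

```python
import numpy as np

# Three-dosimeter-stack data: cumulative detector thickness vs. reading.
# Numbers are illustrative only, not taken from the paper.
thickness = np.array([0.038, 0.099, 0.230])   # g/cm^2 (TLD-like values)
reading = np.array([18.18, 24.89, 39.30])     # relative dose, grows with depth

# Linear extrapolation to zero detector thickness: the intercept is the
# surface-dose estimate; the slope is the per-unit-thickness correction.
slope, intercept = np.polyfit(thickness, reading, 1)
surface_dose = intercept
```

The same fit-and-take-the-intercept step applies to each detector type; only the thicknesses and readings change.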
Hall, E J
2001-01-01
The possible risk of induced malignancies in astronauts, as a consequence of the radiation environment in space, is a factor of concern for long term missions. Cancer risk estimates for high doses of low LET radiation are available from the epidemiological studies of the A-bomb survivors. Cancer risks at lower doses cannot be detected in epidemiological studies and must be inferred by extrapolation from the high dose risks. The standard-setting bodies, such as the ICRP, recommend a linear, no-threshold extrapolation of risks from high to low doses, but this is controversial. A study of mechanisms of carcinogenesis may shed some light on the validity of a linear extrapolation. The multi-step nature of carcinogenesis suggests that the role of radiation may be to induce a mutation leading to a mutator phenotype. High energy Fe ions, such as those encountered in space are highly effective in inducing genomic instability. Experiments involving the single particle microbeam have demonstrated a "bystander effect", i.e., a biological effect in cells not themselves hit, but in close proximity to those that are, as well as the induction of mutations in cells where only the cytoplasm, and not the nucleus, have been traversed by a charged particle. These recent experiments cast doubt on the validity of a simple linear extrapolation, but the data are so far fragmentary and conflicting. More studies are necessary. While mechanistic studies cannot replace epidemiology as a source of quantitative risk estimates, they may shed some light on the shape of the dose response relationship and therefore on the limitations of a linear extrapolation to low doses.
Line-of-sight extrapolation noise in dust polarization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poh, Jason; Dodelson, Scott
The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g., 350 GHz) is due solely to dust and then extrapolate the signal down to lower frequency (e.g., 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of about 20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r < 0.0015.
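The decorrelation mechanism can be illustrated with a toy Monte Carlo: each sightline sums a few modified-blackbody clouds with random temperatures and polarization angles, and a single-temperature extrapolation from 353 to 150 GHz is compared with the true 150 GHz signal. All cloud statistics (counts, amplitudes, the 15-25 K temperature range, the emissivity index) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

H_OVER_K = 4.799e-11            # h/k in s*K; constants cancel in ratios

def mbb(nu_ghz, T, beta=1.6):
    """Modified blackbody intensity (arbitrary units): nu**beta * B_nu(T)."""
    nu = nu_ghz * 1e9
    return nu ** (beta + 3) / np.expm1(H_OVER_K * nu / T)

n_los = 5000
q150_true, q150_ext = [], []
scale_20k = mbb(150.0, 20.0) / mbb(353.0, 20.0)   # one global scaling
for _ in range(n_los):
    k = rng.integers(1, 5)                   # 1-4 clouds on this sightline
    amp = rng.random(k)                      # polarized amplitudes
    psi = rng.uniform(0, np.pi, k)           # polarization angles
    temp = rng.uniform(15.0, 25.0, k)        # per-cloud dust temperatures
    q353 = np.sum(amp * mbb(353.0, temp) * np.cos(2 * psi))
    q150 = np.sum(amp * mbb(150.0, temp) * np.cos(2 * psi))
    q150_true.append(q150)
    q150_ext.append(q353 * scale_20k)        # conventional extrapolation

# Correlation < 1 quantifies the line-of-sight extrapolation noise.
r = np.corrcoef(q150_true, q150_ext)[0, 1]
```

Because each cloud's 150/353 GHz scaling depends on its temperature, the single-temperature extrapolation decorrelates slightly from the true signal, which is the effect the paper quantifies.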
Biosimilars in Inflammatory Bowel Disease: Facts and Fears of Extrapolation.
Ben-Horin, Shomron; Vande Casteele, Niels; Schreiber, Stefan; Lakatos, Peter Laszlo
2016-12-01
Biologic drugs such as infliximab and other anti-tumor necrosis factor monoclonal antibodies have transformed the treatment of immune-mediated inflammatory conditions such as Crohn's disease and ulcerative colitis (collectively known as inflammatory bowel disease [IBD]). However, the complex manufacturing processes involved in producing these drugs mean their use in clinical practice is expensive. Recent or impending expiration of patents for several biologics has led to development of biosimilar versions of these drugs, with the aim of providing substantial cost savings and increased accessibility to treatment. Biosimilars undergo an expedited regulatory process. This involves proving structural, functional, and biological biosimilarity to the reference product (RP). It is also expected that clinical equivalency/comparability will be demonstrated in a clinical trial in one (or more) sensitive population. Once these requirements are fulfilled, extrapolation of biosimilar approval to other indications for which the RP is approved is permitted without the need for further clinical trials, as long as this is scientifically justifiable. However, such justification requires that the mechanism(s) of action of the RP in question should be similar across indications and also comparable between the RP and the biosimilar in the clinically tested population(s). Likewise, the pharmacokinetics, immunogenicity, and safety of the RP should be similar across indications and comparable between the RP and biosimilar in the clinically tested population(s). To date, most anti-tumor necrosis factor biosimilars have been tested in trials recruiting patients with rheumatoid arthritis. Concerns have been raised regarding extrapolation of clinical data obtained in rheumatologic populations to IBD indications. In this review, we discuss the issues surrounding indication extrapolation, with a focus on extrapolation to IBD. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Joung, Wukchul; Park, Jihye; Pearce, Jonathan V.
2018-06-01
In this work, the liquidus temperature of tin was determined by melting the sample using the pressure-controlled loop heat pipe. Square wave-type pressure steps generated periodic 0.7 °C temperature steps in the isothermal region in the vicinity of the tin sample, and the tin was melted with controllable heat pulses from the generated temperature changes. The melting temperatures at specific melted fractions were measured, and they were extrapolated to the melted fraction of unity to determine the liquidus temperature of tin. To investigate the influence of the impurity distribution on the melting behavior, a molten tin sample was solidified by an outward slow freezing or by quenching to segregate the impurities inside the sample with concentrations increasing outwards or to spread the impurities uniformly, respectively. The measured melting temperatures followed the local solidus temperature variations well in the case of the segregated sample and stayed near the solidus temperature in the quenched sample due to the microscopic melting behavior. The extrapolated melting temperatures of the segregated and quenched samples were 0.95 mK and 0.49 mK higher than the outside-nucleated freezing temperature of tin (with uncertainties of 0.15 mK and 0.16 mK, at approximately 95% level of confidence), respectively. The extrapolated melting temperature of the segregated sample was supposed to be a closer approximation to the liquidus temperature of tin, whereas the quenched sample yielded the possibility of a misleading extrapolation to the solidus temperature. Therefore, the determination of the liquidus temperature could result in different extrapolated melting temperatures depending on the way the impurities were distributed within the sample, which has implications for the contemporary methodology for realizing temperature fixed points of the International Temperature Scale of 1990 (ITS-90).
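Fixed-point melting analyses of this kind commonly assume Raoult-law behavior, T(F) = T_pure − ΔT/F, where the impurity concentration in the liquid scales as 1/F, and extrapolate a linear fit in 1/F to the fully melted point. The numbers below are illustrative only, not the paper's data:

```python
import numpy as np

# Melting temperatures at selected melted fractions F (made-up values),
# in mK relative to an arbitrary reference. Under a Raoult-law picture,
#   T(F) = T_pure - dT / F,
# so T at F = 1 (i.e., 1/F = 1) estimates the liquidus temperature.
F = np.array([0.2, 0.4, 0.6, 0.8])
T_pure, dT = 5.00, 0.30
T = T_pure - dT / F

# Linear fit against 1/F, evaluated at 1/F = 1 (fully melted sample).
slope, intercept = np.polyfit(1.0 / F, T, 1)
T_liquidus = slope * 1.0 + intercept
```

The paper's point is that this extrapolation is only trustworthy when the impurity distribution supports the assumed 1/F behavior; a quenched, uniformly doped sample can mislead the fit toward the solidus instead.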
Physiologically based pharmacokinetic model for quinocetone in pigs and extrapolation to mequindox.
Zhu, Xudong; Huang, Lingli; Xu, Yamei; Xie, Shuyu; Pan, Yuanhu; Chen, Dongmei; Liu, Zhenli; Yuan, Zonghui
2017-02-01
Physiologically based pharmacokinetic (PBPK) models are scientific methods used to predict veterinary drug residues that may occur in food-producing animals, and they offer powerful extrapolation capability. Quinocetone (QCT) and mequindox (MEQ) are widely used in China for the prevention of bacterial infections and promoting animal growth, but their abuse poses a potential threat to human health. In this study, a flow-limited PBPK model was developed to simulate simultaneously residue depletion of QCT and its marker residue dideoxyquinocetone (DQCT) in pigs. The model included compartments for blood, liver, kidney, muscle and fat and an extra compartment representing the other tissues. Physiological parameters were obtained from the literature. Plasma protein binding rates, renal clearances and tissue/plasma partition coefficients were determined by in vitro and in vivo experiments. The model was calibrated and validated with several pharmacokinetic and residue-depletion datasets from the literature. Sensitivity analysis and Monte Carlo simulations were incorporated into the PBPK model to estimate individual variation of residual concentrations. The PBPK model for MEQ, the congener compound of QCT, was built through cross-compound extrapolation based on the model for QCT. The QCT model accurately predicted the concentrations of QCT and DQCT in various tissues at most time points, especially the later time points. Correlation coefficients between predicted and measured values for all tissues were greater than 0.9. Monte Carlo simulations showed excellent consistency between estimated concentration distributions and measured data points. The extrapolation model also showed good predictive power. The present models contribute to improve the residue monitoring systems of QCT and MEQ, and provide evidence of the usefulness of PBPK model extrapolation for the same kinds of compounds.
Temperature extrapolation of multicomponent grand canonical free energy landscapes
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.
2017-08-01
We derive a method for extrapolating the grand canonical free energy landscape of a multicomponent fluid system from one temperature to another. Previously, we introduced this statistical mechanical framework for the case where kinetic energy contributions to the classical partition function were neglected for simplicity [N. A. Mahynski et al., J. Chem. Phys. 146, 074101 (2017)]. Here, we generalize the derivation to admit these contributions in order to explicitly illustrate the differences that result. Specifically, we show how factoring out kinetic energy effects a priori, in order to consider only the configurational partition function, leads to simpler mathematical expressions that tend to produce more accurate extrapolations than when these effects are included. We demonstrate this by comparing and contrasting these two approaches for the simple cases of an ideal gas and a non-ideal, square-well fluid.
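The mechanics of the extrapolation can be shown on an exactly solvable toy: a lattice gas in which every macrostate N has a single configurational energy, so the first-order Taylor step in β reproduces the target distribution exactly after renormalization (real systems, with fluctuating energies within each N, make the step approximate). All parameters are invented, and this sketch ignores kinetic-energy contributions, as in the configurational-only variant the authors favor:

```python
import numpy as np
from math import comb

# Toy lattice gas: M sites, degeneracy g(N) = C(M, N), and one
# configurational energy per macrostate (an assumption for illustration).
M, mu, eps = 10, -0.2, 0.3
N = np.arange(M + 1)
lng = np.log([comb(M, n) for n in N])
U = -eps * N * (N - 1) / 2.0

def ln_pi(beta):
    """Exact grand-canonical macrostate distribution ln Pi(N) at fixed mu."""
    w = beta * mu * N + lng - beta * U
    return w - np.logaddexp.reduce(w)           # normalize via log-sum-exp

beta0, beta1 = 1.00, 1.05
lnp0 = ln_pi(beta0)

# First-order extrapolation in beta:
#   d lnPi(N)/d beta = mu*N - U(N) - <mu*N - U>.
p0 = np.exp(lnp0)
dldb = mu * N - U - np.sum(p0 * (mu * N - U))
lnp1_extrap = lnp0 + (beta1 - beta0) * dldb
lnp1_extrap -= np.logaddexp.reduce(lnp1_extrap)  # re-normalize

err = np.max(np.abs(lnp1_extrap - ln_pi(beta1)))
```

In this toy the per-macrostate log-weight is linear in β, so the extrapolation error is pure floating-point noise; the derivative expression is the same one a Monte Carlo implementation would estimate from sampled averages.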
SNSEDextend: SuperNova Spectral Energy Distributions extrapolation toolkit
NASA Astrophysics Data System (ADS)
Pierel, Justin D. R.; Rodney, Steven A.; Avelino, Arturo; Bianco, Federica; Foley, Ryan J.; Friedman, Andrew; Hicken, Malcolm; Hounsell, Rebekah; Jha, Saurabh W.; Kessler, Richard; Kirshner, Robert; Mandel, Kaisey; Narayan, Gautham; Filippenko, Alexei V.; Scolnic, Daniel; Strolger, Louis-Gregory
2018-05-01
SNSEDextend extrapolates core-collapse and Type Ia Spectral Energy Distributions (SEDs) into the UV and IR for use in simulations and photometric classifications. The user provides a library of existing SED templates (such as those in the authors' SN SED Repository) along with new photometric constraints in the UV and/or NIR wavelength ranges. The software then extends the existing template SEDs so their colors match the input data at all phases. SNSEDextend can also extend the SALT2 spectral time-series model for Type Ia SN for a "first-order" extrapolation of the SALT2 model components, suitable for use in survey simulations and photometric classification tools; as the code does not do a rigorous re-training of the SALT2 model, the results should not be relied on for precision applications such as light curve fitting for cosmology.
2006-07-01
linearity; (4) determination of polarization as a function of radiographic parameters; and (5) determination of the effect of binding energy on... hydroxyapatite. Type II calcifications are known to be associated with carcinoma, while it is generally accepted that the exclusive finding of type I... concentrate on the extrapolation of the Rh target spectra. The extrapolation was split in two parts. Below 24 keV we used the parameters from Boone's paper
Fallou, Hélène; Cimetière, Nicolas; Giraudet, Sylvain; Wolbert, Dominique; Le Cloirec, Pierre
2016-01-15
Activated carbon fiber cloths (ACFC) have shown promising results when applied to water treatment, especially for removing organic micropollutants such as pharmaceutical compounds. Nevertheless, further investigations are required, especially considering trace concentrations, which are found in current water treatment. Until now, most studies have been carried out at relatively high concentrations (mg L(-1)), since the experimental and analytical methodologies are more difficult and more expensive when dealing with lower concentrations (ng L(-1)). Therefore, the objective of this study was to validate an extrapolation procedure from high to low concentrations for four compounds (Carbamazepine, Diclofenac, Caffeine and Acetaminophen). For this purpose, the reliability of the usual adsorption isotherm models, when extrapolated from high (mg L(-1)) to low concentrations (ng L(-1)), was assessed, as well as the influence of numerous error functions. Some isotherm models (Freundlich, Toth) and error functions (RSS, ARE) show weaknesses when used for adsorption isotherms at low concentrations. However, from these results, the pairing of the Langmuir-Freundlich isotherm model with Marquardt's percent standard deviation was identified as the best combination, enabling the extrapolation of adsorption capacities across orders of magnitude. Copyright © 2015 Elsevier Ltd. All rights reserved.
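The model/error-function pairing can be sketched as follows. The Langmuir-Freundlich form and Marquardt's percent standard deviation (MPSD) are as named in the study, while all parameter values, concentrations, and candidate sets are invented:

```python
import numpy as np

def langmuir_freundlich(C, qm, K, n):
    """Langmuir-Freundlich isotherm: q = qm*(K*C)**n / (1 + (K*C)**n)."""
    kc = (K * C) ** n
    return qm * kc / (1.0 + kc)

def mpsd(q_exp, q_calc, n_params):
    """Marquardt's percent standard deviation: relative (not absolute)
    residuals, so low-concentration points carry real weight in the fit."""
    resid = (q_exp - q_calc) / q_exp
    return 100.0 * np.sqrt(np.sum(resid ** 2) / (len(q_exp) - n_params))

# Synthetic capacities spanning six orders of magnitude in concentration,
# mimicking the ng/L-to-mg/L extrapolation range (values illustrative only).
C = np.logspace(-6, 0, 25)                    # mg/L
true = (120.0, 4.0, 0.8)                      # qm, K, n
q_obs = langmuir_freundlich(C, *true)

# Score candidate parameter sets with MPSD and keep the best.
candidates = [true, (120.0, 4.0, 1.0), (150.0, 2.0, 0.8), (80.0, 8.0, 0.6)]
scores = [mpsd(q_obs, langmuir_freundlich(C, *p), 3) for p in candidates]
best = candidates[int(np.argmin(scores))]
```

Using relative residuals is the design choice that matters here: a sum-of-squares criterion would let the high-concentration points dominate and degrade the extrapolation to trace levels.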
McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O
2004-06-21
The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.
X-ray surface dose measurements using TLD extrapolation.
Kron, T; Elliot, A; Wong, T; Showell, G; Clubb, B; Metcalfe, P
1993-01-01
Surface dose measurements in therapeutic x-ray beams are of importance in determining the dose to the skin of patients undergoing radiotherapy. Measurements were performed in the 6-MV beam of a medical linear accelerator with LiF thermoluminescence dosimeters (TLD) using a solid water phantom. TLD chips (surface area 3.17 x 3.17 mm2) of three different thicknesses (0.230, 0.099, and 0.038 g/cm2) were used to extrapolate dose readings to an infinitesimally thin layer of LiF. This surface dose was measured for field sizes ranging from 1 x 1 cm2 to 40 x 40 cm2. The surface dose relative to maximum dose was found to be 10.0% for a field size of 5 x 5 cm2, 16.3% for 10 x 10 cm2, and 26.9% for 20 x 20 cm2. Using a 6-mm Perspex block tray in the beam increased the surface dose in these fields to 10.7%, 17.7%, and 34.2% respectively. Due to the small size of the TLD chips, TLD extrapolation is applicable also for intracavity and exit dose determinations. The technique used for in vivo dosimetry could provide clinicians information about the build up of dose up to 1-mm depth in addition to an extrapolated surface dose measurement.
Challenges of accelerated aging techniques for elastomer lifetime predictions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillen, Kenneth T.; Bernstein, R.; Celina, M.
Elastomers are often degraded when exposed to air or high humidity for extended times (years to decades). Lifetime estimates normally involve extrapolating accelerated aging results made at higher than ambient environments. Several potential problems associated with such studies are reviewed, and experimental and theoretical methods to address them are provided. The importance of verifying time-temperature superposition of degradation data is emphasized as evidence that the overall nature of the degradation process remains unchanged versus acceleration temperature. The confounding effects that occur when diffusion-limited oxidation (DLO) contributes under accelerated conditions are described, and it is shown that the DLO magnitude can be modeled by measurements or estimates of the oxygen permeability coefficient (P_Ox) and oxygen consumption rate (Φ). P_Ox and Φ measurements can be influenced by DLO, and it is demonstrated how confident values can be derived. In addition, several experimental profiling techniques that screen for DLO effects are discussed. Values of Φ taken from high temperature to temperatures approaching ambient can be used to more confidently extrapolate accelerated aging results for air-aged materials, and many studies now show that Arrhenius extrapolations bend to lower activation energies as aging temperatures are lowered. Furthermore, best approaches for accelerated aging extrapolations of humidity-exposed materials are also offered.
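The basic Arrhenius lifetime extrapolation, before any DLO correction or allowance for a bending plot, is a linear fit of ln(lifetime) against 1/T. The data below are synthetic and single-mechanism by construction, precisely the assumption the abstract warns can fail:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Accelerated-aging lifetimes (days) at elevated temperatures, generated
# here from a single-activation-energy Arrhenius law (illustrative only).
T = np.array([383.15, 398.15, 413.15])        # 110, 125, 140 degC
Ea_true = 90e3                                # J/mol (made-up)
A = 2.0e-10                                   # pre-exponential factor, days
life = A * np.exp(Ea_true / (R * T))

# Fit ln(lifetime) vs 1/T; the slope is Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(life), 1)
Ea_fit = slope * R

# Extrapolate to ambient (25 degC). This is valid only if the mechanism,
# and hence Ea, is unchanged; measured oxygen-consumption rates near
# ambient are the paper's check on exactly this assumption.
life_25C = np.exp(intercept + slope / 298.15)
```

If the real Arrhenius plot bends to a lower activation energy at low temperature, this straight-line extrapolation overestimates the ambient lifetime, which is why the low-temperature Φ measurements matter.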
Gamalo-Siebers, Margaret; Savic, Jasmina; Basu, Cynthia; Zhao, Xin; Gopalakrishnan, Mathangi; Gao, Aijun; Song, Guochen; Baygani, Simin; Thompson, Laura; Xia, H Amy; Price, Karen; Tiwari, Ram; Carlin, Bradley P
2017-07-01
Children represent a large underserved population of "therapeutic orphans," as an estimated 80% of children are treated off-label. However, pediatric drug development often faces substantial challenges, including economic, logistical, technical, and ethical barriers, among others. Among many efforts trying to remove these barriers, increased recent attention has been paid to extrapolation; that is, the leveraging of available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or "borrowing") of information across disparate sources, such as the adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through 2 case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis and extrapolating adult exposure-response information for antiepileptic drugs to pediatrics. Copyright © 2017 John Wiley & Sons, Ltd.
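One simple Bayesian borrowing device, used here as a generic illustration rather than the paper's specific method, is a beta-binomial power prior that downweights adult data by a factor a0 before combining it with pediatric data:

```python
# A beta-binomial power-prior sketch: adult data are downweighted by
# a0 in [0, 1] before being combined with pediatric data. This is a
# generic illustration, not the paper's method; all counts are invented.
def power_prior_posterior(x_adult, n_adult, x_ped, n_ped, a0):
    """Beta(alpha, beta) posterior for the response rate, flat Beta(1,1) prior."""
    alpha = 1 + a0 * x_adult + x_ped
    beta = 1 + a0 * (n_adult - x_adult) + (n_ped - x_ped)
    return alpha, beta

# Full borrowing (a0 = 1) vs no borrowing (a0 = 0):
a_full, b_full = power_prior_posterior(120, 200, 9, 20, 1.0)
a_none, b_none = power_prior_posterior(120, 200, 9, 20, 0.0)
mean_full = a_full / (a_full + b_full)  # pulled toward the adult rate 0.60
mean_none = a_none / (a_none + b_none)
```

Intermediate a0 values interpolate between the two extremes, which is the sense in which borrowing "uses all available information" while controlling its influence.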
Nuclear's role in 21st century Pacific Rim energy use
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, Clifford; Taylor, J'Tia
2007-07-01
Extrapolations contrast the future of nuclear energy use in Japan and the Republic of Korea (ROK) with that of the Association of Southeast Asian Nations (ASEAN). Japan can expect a gradual rise in the nuclear fraction of a nearly constant total energy use rate as the use of fossil fuels declines. ROK nuclear energy rises gradually with total energy use. ASEAN's total nuclear energy use rate can rapidly approach that of the ROK if Indonesia and Vietnam meet their current nuclear energy targets by 2020, but experience elsewhere suggests that nuclear energy growth may be slower than planned. Extrapolations are based on econometric calibration to a utility optimization model of the impact of growth of population, gross domestic product, total energy use, and cumulative fossil carbon use. Fractions of total energy use from fluid fossil fuels, coal, water-driven electrical power production, nuclear energy, and wind and solar electric energy sources are fit to market fraction data. Where historical data are insufficient for extrapolation, plans for non-fossil energy are used as a guide. Extrapolations suggest much more U.S. nuclear energy and spent nuclear fuel generation than for the ROK and ASEAN until beyond the first half of the twenty-first century. (authors)
Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V
2014-01-01
Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or Ixodes scapularis vector using Spearman ranked correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.
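The core comparison step, Spearman ranked correlation between two models' predicted county-level distributions, can be sketched in pure Python with invented predictions:

```python
import math

# Spearman ranked correlation between two models' predicted county-level
# risk values (the predictions below are invented). Ranks use the
# average-rank convention for ties.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(a, b):
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / math.sqrt(va * vb)

model_a = [0.10, 0.40, 0.35, 0.80, 0.20]
model_b = [0.15, 0.30, 0.45, 0.90, 0.10]
rho = spearman(model_a, model_b)
```

A high rho means the two extrapolated models agree on which counties are riskiest, even if their absolute risk values differ.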
Yoshida, Kenta; Zhao, Ping; Zhang, Lei; Abernethy, Darrell R; Rekić, Dinko; Reynolds, Kellie S; Galetin, Aleksandra; Huang, Shiew-Mei
2017-09-01
Evaluation of drug-drug interaction (DDI) risk is vital to establish benefit-risk profiles of investigational new drugs during drug development. In vitro experiments are routinely conducted as an important first step to assess metabolism- and transporter-mediated DDI potential of investigational new drugs. Results from these experiments are interpreted, often with the aid of in vitro-in vivo extrapolation methods, to determine whether and how DDI should be evaluated clinically to provide the basis for proper DDI management strategies, including dosing recommendations, alternative therapies, or contraindications under various DDI scenarios and in different patient populations. This article provides an overview of currently available in vitro experimental systems and basic in vitro-in vivo extrapolation methodologies for metabolism- and transporter-mediated DDIs. Published by Elsevier Inc.
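As a concrete illustration, one of the simplest in vitro-in vivo extrapolation screens for reversible inhibition is the basic static model, which flags DDI risk when the predicted AUC ratio R1 = 1 + Imax,u/Ki,u exceeds a small cut-off. The numbers below are invented:

```python
# Basic static model for reversible enzyme inhibition, a common first-pass
# IVIVE screen: predicted AUC ratio R1 = 1 + Imax,u / Ki,u. A ratio above
# a small cut-off (1.02 is often used) triggers further evaluation.
# The Imax,u and Ki,u values below are invented.
def auc_ratio(imax_u, ki_u):
    return 1.0 + imax_u / ki_u

r1 = auc_ratio(imax_u=0.5, ki_u=2.0)
needs_followup = r1 >= 1.02
```

The deliberately conservative cut-off reflects the screening role of these calculations: mechanistic or clinical follow-up resolves flagged cases.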
Infrared length scale and extrapolations for the no-core shell model
Wendt, K. A.; Forssén, C.; Papenbrock, T.; ...
2015-06-03
In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.
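IR extrapolations of this kind typically use an ansatz E(L) = E_inf + a·exp(−2kL). For three energies at equally spaced effective lengths the correction decays geometrically, so a Shanks-type closed form recovers E_inf; a sketch on synthetic data (not NCSM output):

```python
import math

# IR extrapolation ansatz: E(L) = E_inf + a * exp(-2 * k * L). For three
# energies at equally spaced effective lengths the correction decays
# geometrically, so a Shanks-type closed form recovers E_inf exactly.
# The parameters below are synthetic, not NCSM output.
E_inf, a, k = -28.30, 5.0, 0.5
E1 = E_inf + a * math.exp(-2 * k * 8.0)    # L = 8
E2 = E_inf + a * math.exp(-2 * k * 9.0)    # L = 9
E3 = E_inf + a * math.exp(-2 * k * 10.0)   # L = 10

E_extrap = (E1 * E3 - E2 * E2) / (E1 + E3 - 2 * E2)
```

In practice k is fitted rather than assumed, but the geometric-decay structure is what makes the extrapolation to infinite model space well conditioned.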
NASA Technical Reports Server (NTRS)
Capobianco, Christopher J.; Jones, John H.; Drake, Michael J.
1993-01-01
Low-temperature metal-silicate partition coefficients are extrapolated to magma ocean temperatures. If the low-temperature chemistry data is found to be applicable at high temperatures, an important assumption, then the results indicate that high temperature alone cannot account for the excess siderophile element problem of the upper mantle. For most elements, a rise in temperature will result in a modest increase in siderophile behavior if an iron-wuestite redox buffer is paralleled. However, long-range extrapolation of experimental data is hazardous when the data contains even modest experimental errors. For a given element, extrapolated high-temperature partition coefficients can differ by orders of magnitude, even when data from independent studies is consistent within quoted errors. In order to accurately assess siderophile element behavior in a magma ocean, it will be necessary to obtain direct experimental measurements for at least some of the siderophile elements.
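The hazard of long-range extrapolation can be made concrete with a toy linear fit of ln D against 1/T, carried from ~1500-1700 K up to a magma-ocean temperature. All data below are invented:

```python
import math

# Toy linear extrapolation of a metal-silicate partition coefficient:
# ln D = m / T + c, fitted at 1500-1700 K, extrapolated to 3500 K.
# All data are invented for illustration.
T_data = [1500.0, 1600.0, 1700.0]   # K
lnD_data = [6.0, 5.2, 4.5]          # ln D (metal/silicate)

xs = [1.0 / T for T in T_data]
n = len(xs)
mx = sum(xs) / n
my = sum(lnD_data) / n
m = sum((x - mx) * (y - my) for x, y in zip(xs, lnD_data)) / \
    sum((x - mx) ** 2 for x in xs)
c = my - m * mx

D_3500 = math.exp(m / 3500.0 + c)
# Even a +/-5% error on the fitted slope multiplies into a large spread
# after the long extrapolation, because the error sits in an exponent:
D_hi = math.exp(1.05 * m / 3500.0 + c)
D_lo = math.exp(0.95 * m / 3500.0 + c)
spread = D_hi / D_lo
```

With realistic inter-study scatter in the slope, the extrapolated D values can indeed diverge by orders of magnitude, which is the abstract's central caution.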
NASA Astrophysics Data System (ADS)
Nora, R.; Field, J. E.; Peterson, J. Luc; Spears, B.; Kruse, M.; Humbird, K.; Gaffney, J.; Springer, P. T.; Brandon, S.; Langer, S.
2017-10-01
We present an experimentally corroborated hydrodynamic extrapolation of several recent BigFoot implosions on the National Ignition Facility. An estimate on the value and error of the hydrodynamic scale necessary for ignition (for each individual BigFoot implosion) is found by hydrodynamically scaling a distribution of multi-dimensional HYDRA simulations whose outputs correspond to their experimental observables. The 11-parameter database of simulations, which include arbitrary drive asymmetries, dopant fractions, hydrodynamic scaling parameters, and surface perturbations due to surrogate tent and fill-tube engineering features, was computed on the TRINITY supercomputer at Los Alamos National Laboratory. This simple extrapolation is the first step in providing a rigorous calibration of our workflow to provide an accurate estimate of the efficacy of achieving ignition on the National Ignition Facility. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Bachmann, Talis; Murd, Carolina; Põder, Endel
2012-09-01
One fundamental property of the perceptual and cognitive systems is their capacity for prediction in a dynamic environment; the flash-lag effect has been considered a particularly suggestive example of this capacity (Nijhawan, Nature 370:256-257, 1994; Behav Brain Sci 31:179-239, 2008). On this account, because mechanisms of extrapolation and visual prediction are involved, a moving object is perceived ahead of a simultaneously flashed static object that is objectively aligned with it. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus, approaching it before the flash, does not diminish the flash-lag effect but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.
Kerzel, Dirk
2003-05-01
Observers' judgments of the final position of a moving target are typically shifted in the direction of implied motion ("representational momentum"). The role of attention is unclear: visual attention may be necessary to maintain or halt target displacement. When attention was captured by irrelevant distractors presented during the retention interval, forward displacement after implied target motion disappeared, suggesting that attention may be necessary to maintain mental extrapolation of target motion. In a further corroborative experiment, the deployment of attention was measured after a sequence of implied motion, and faster responses were observed to stimuli appearing in the direction of motion. Thus, attention may guide the mental extrapolation of target motion. Additionally, eye movements were measured during stimulus presentation and retention interval. The results showed that forward displacement with implied motion does not depend on eye movements. Differences between implied and smooth motion are discussed with respect to recent neurophysiological findings.
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer, and extrapolation techniques, based upon previous behavior of iterates, are utilized to speed convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
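The idea of accelerating an iterative eigenvalue scheme with a bounded extrapolation can be illustrated with power iteration on a toy 2x2 matrix and Aitken's delta-squared extrapolation of the eigenvalue estimates (this is the extrapolation idea only, not the report's exact Gauss-Seidel scheme):

```python
# Toy illustration: an iterative eigenvalue scheme (power iteration on a
# 2x2 matrix whose dominant eigenvalue is 5) accelerated by Aitken's
# delta-squared extrapolation of the eigenvalue estimates.
A = [[4.0, 1.0], [2.0, 3.0]]

def step(v):
    w = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    norm = max(abs(w[0]), abs(w[1]))
    return [w[0] / norm, w[1] / norm], norm

v = [1.0, 0.0]
estimates = []
for _ in range(6):
    v, lam = step(v)
    estimates.append(lam)

x0, x1, x2 = estimates[-3:]
# Aitken delta-squared: cancels the leading geometric error term of the
# convergent sequence, giving a large jump toward the limit.
accelerated = x2 - (x2 - x1) ** 2 / ((x2 - x1) - (x1 - x0))
```

The "theoretically sound limits" of the report correspond to capping how far such an extrapolated jump is allowed to move the iterate.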
Monte Carlo based approach to the LS-NaI 4πβ-γ anticoincidence extrapolation and uncertainty
Fitzgerald, R.
2016-01-01
The 4πβ-γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone. PMID:27358944
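In efficiency extrapolation, the apparent activity is fitted against the inefficiency parameter x = (1 − ε)/ε and extrapolated to x = 0 (perfect beta efficiency). A minimal linear-fit sketch on synthetic, noise-free data:

```python
# Efficiency extrapolation sketch: apparent activity vs the inefficiency
# parameter x = (1 - eff) / eff, fitted linearly and extrapolated to
# x = 0 (perfect beta efficiency). Data are synthetic and noise-free,
# so the least-squares fit recovers the true activity exactly.
effs = [0.70, 0.80, 0.90, 0.95]
true_activity, slope = 1000.0, 50.0
xs = [(1.0 - e) / e for e in effs]
ys = [true_activity + slope * x for x in xs]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
activity = my - b * mx   # intercept at x = 0
```

The paper's point is that real decay schemes make the response nonlinear in x, which is why a Monte Carlo model of the detector gives a more realistic extrapolated value and uncertainty than this simple fit.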
Toxicokinetic Model Development for the Insensitive Munitions Component 3-Nitro-1,2,4-Triazol-5-One.
Sweeney, Lisa M; Phillips, Elizabeth A; Goodwin, Michelle R; Bannon, Desmond I
2015-01-01
3-Nitro-1,2,4-triazol-5-one (NTO) is a component of insensitive munitions that are potential replacements for conventional explosives. Toxicokinetic data can aid in the interpretation of toxicity studies and interspecies extrapolation, but only limited data on the toxicokinetics and metabolism of NTO are available. To supplement these limited data, further in vivo studies of NTO in rats were conducted and blood concentrations were measured, tissue distribution of NTO was estimated using an in silico method, and physiologically based pharmacokinetic models of the disposition of NTO in rats and macaques were developed and extrapolated to humans. The model predictions can be used to extrapolate from designated points of departure identified from rat toxicology studies to provide a scientific basis for estimates of acceptable human exposure levels for NTO. © The Author(s) 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croom, Edward L.; Shafer, Timothy J.; Evans, Marina V.
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using "faux" (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC50 for increased firing rates in primary cultures of cortical neurons was 0.6 μg/ml. Media and cell lindane concentrations at the EC50 were 0.4 μg/ml and 7.1 μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7–1.9 μg/ml and 5–11 μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average = 7 μg/ml) by PBPK modeling were very similar to in vitro concentrations detected in cortical cells at the EC50 dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity. - Highlights: • In vitro to in vivo extrapolation for lindane neurotoxicity was performed. • Dosimetry of lindane in a micro-electrode array (MEA) test system was assessed. • Cell concentrations at the MEA EC50 equaled rat brain levels associated with seizure. • PBPK-predicted human brain levels at seizure also equaled EC50 cell concentrations. • In vitro MEA results are predictive of lindane in vivo dose–response in rats/humans.
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Kumarasiri, A; Chetvertkov, M
2014-06-01
Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for dose-of-the-day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2), were selected. Furthermore, a small-FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42 mm beyond the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7 mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior borders were 0.8 ± 0.5 mm and 3.0 ± 1.5 mm respectively, and increased to 2.7 ± 2.2 mm and 5.9 ± 1.9 mm at 4.2 cm away. The mean error within the CBCT borders was 1.16 ± 0.54 mm. The overall errors within the 4.2 cm expansion were 2.0 ± 1.2 mm (sup) and 4.5 ± 1.6 mm (inf). Conclusion: The overall error in the inferior direction is larger due to larger, less predictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
Motion-based prediction explains the role of tracking in motion extrapolation.
Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U
2013-11-01
During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we have recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated.
Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Moreover, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Wu, S. T.; Sun, M. T.; Sakurai, Takashi
1990-01-01
This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz. the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of the two methods are summarized, and their accuracy and numerical stability are discussed. On the basis of this investigation, it is concluded that the two methods resemble each other qualitatively.
Interpolation Method Needed for Numerical Uncertainty
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.
2014-01-01
Using computational fluid dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. Errors in CFD can be estimated via Richardson extrapolation, a method based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or another uncertainty method to approximate errors.
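Richardson extrapolation from three grid solutions proceeds by estimating the observed order of convergence and then extrapolating the fine-grid value. The solution values below are synthetic, chosen to exhibit second-order behavior:

```python
import math

# Richardson extrapolation from three systematically refined grids with
# refinement ratio r: estimate the observed order p, then extrapolate.
# The solution values are synthetic, constructed with second-order error.
f_fine, f_med, f_coarse = 0.9712, 0.9700, 0.9652
r = 2.0

p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
f_extrap = f_fine + (f_fine - f_med) / (r ** p - 1.0)
# Grid convergence index (GCI) on the fine grid, factor of safety 1.25:
gci_fine = 1.25 * abs((f_fine - f_med) / f_fine) / (r ** p - 1.0)
```

When the grids are not uniformly refined, the solutions must first be interpolated onto a common set of points, which is exactly the interpolation-scheme question this paper studies.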
Failure of the straight-line DCS boundary when extrapolated to the hypobaric realm.
Conkin, J; Van Liew, H D
1992-11-01
The lowest pressure (P2) to which a diver can ascend without developing decompression sickness (DCS) after becoming equilibrated at some higher pressure (P1) is described by a straight line with a negative y-intercept. We tested whether extrapolation of such a line also predicts safe decompression to altitude. We substituted tissue nitrogen pressure (P1N2) calculated for a compartment with a 360-min half-time for P1 values; this allows data from hypobaric exposures to be plotted on a P2 vs. P1N2 graph, even if the subject breathes oxygen before ascent. In literature sources, we found 40 reports of human exposures in hypobaric chambers that fell in the region of a P2 vs. P1N2 plot where the extrapolation from hyperbaric data predicted that the decompression should be free of DCS. Of 4,576 exposures, 785 persons suffered decompression sickness (17%), indicating that extrapolation of the diver line to altitude is not valid. Over the pressure range spanned by human hypobaric exposures and hyperbaric air exposures, the best separation between no DCS and DCS on a P2 vs. P1N2 plot seems to be a curve which approximates a straight line in the hyperbaric region but bends toward the origin in the hypobaric region.
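The 360-min half-time compartment used for P1N2 is a single exponential; a sketch (the sea-level N2 pressure below is a standard approximation, and the prebreathe example is illustrative, not from the paper's dataset):

```python
import math

# 360-min half-time compartment: single-exponential equilibration of
# tissue N2 pressure toward the alveolar N2 pressure of the breathing gas.
def tissue_n2(p0, pa, t_min, half_time=360.0):
    k = math.log(2.0) / half_time
    return pa + (p0 - pa) * math.exp(-k * t_min)

# Illustrative prebreathe: 60 min of pure O2 (alveolar N2 ~ 0) starting
# from sea-level equilibrium (~11.6 psia N2); values are approximate.
p1n2 = tissue_n2(11.6, 0.0, 60.0)
```

Plotting the no-DCS/DCS boundary against this P1N2 rather than raw P1 is what lets oxygen-prebreathe altitude exposures be compared with diving data on one graph.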
Natural Hazards characterisation in industrial practice
NASA Astrophysics Data System (ADS)
Bernardara, Pietro
2017-04-01
The definition of rare hydroclimatic extremes (down to 10-4 annual probability of occurrence) is of the utmost importance for the design of high-value industrial infrastructure, such as grids, power plants and offshore platforms. Underestimation as well as overestimation of the risk may lead to huge costs (e.g., expensive mid-life works or overdesign), which may even prevent a project from happening. Nevertheless, the uncertainties associated with extrapolation towards rare frequencies are huge and manifold. They are mainly due to the scarcity of observations, the limited quality of extreme-value records, and the arbitrary choice of the models used for extrapolation. This often puts design engineers in the uncomfortable situation of having to choose the design values to use. Providentially, recent progress in earth observation techniques, information technology, historical data collection, and weather and ocean modelling is making huge datasets available. Careful use of big datasets of observations and modelled data is leading towards a better understanding of the physics of the underlying phenomena, the complex interactions between them, and thus of the extrapolation of extreme-event frequencies. This will move engineering practice from single-site, small-sample application of statistical analysis to a more spatially coherent, physically driven extrapolation of extreme values. A few examples from EDF industrial practice are given to illustrate this progress and its potential impact on design approaches.
Uncertainty factors in screening ecological risk assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, L.D.; Taggart, M.
2000-06-01
The hazard quotient (HQ) method is commonly used in screening ecological risk assessments (ERAs) to estimate risk to wildlife at contaminated sites. Many ERAs use uncertainty factors (UFs) in the HQ calculation to incorporate uncertainty associated with predicting wildlife responses to contaminant exposure using laboratory toxicity data. The overall objective was to evaluate the current UF methodology as applied to screening ERAs in California, USA. Specific objectives included characterizing current UF methodology, evaluating the degree of conservatism in UFs as applied, and identifying limitations to the current approach. Twenty-four of 29 evaluated ERAs used the HQ approach: 23 of these used UFs in the HQ calculation. All 24 made interspecies extrapolations, and 21 compensated for its uncertainty, most using allometric adjustments and some using RFs. Most also incorporated uncertainty for same-species extrapolations. Twenty-one ERAs used UFs extrapolating from lowest observed adverse effect level (LOAEL) to no observed adverse effect level (NOAEL), and 18 used UFs extrapolating from subchronic to chronic exposure. Values and application of all UF types were inconsistent. Maximum cumulative UFs ranged from 10 to 3,000. Results suggest UF methodology is widely used but inconsistently applied and is not uniformly conservative relative to UFs recommended in regulatory guidelines and academic literature. The method is limited by lack of consensus among scientists, regulators, and practitioners about magnitudes, types, and conceptual underpinnings of the UF methodology.
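The HQ calculation with UFs amounts to dividing the toxicity reference value by the product of the applied factors; a sketch with hypothetical numbers:

```python
# Screening-level hazard quotient with uncertainty factors: the toxicity
# reference value (TRV) is divided by the product of the applied UFs
# before computing HQ = dose / adjusted TRV. All numbers are hypothetical.
def hazard_quotient(dose, trv, ufs):
    adjusted = trv
    for uf in ufs:
        adjusted /= uf
    return dose / adjusted

# A LOAEL-to-NOAEL factor of 10 and a subchronic-to-chronic factor of 10
# shrink the TRV by 100x, raising the HQ accordingly:
hq = hazard_quotient(dose=0.5, trv=100.0, ufs=[10.0, 10.0])
```

Because UFs multiply, a stack of individually modest factors can shift the screening threshold by the 10-3,000x range the survey reports.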
Ciambella, J; Paolone, A; Vidoli, S
2014-09-01
We report on the experimental identification of viscoelastic constitutive models for frequencies ranging within 0-10 Hz. Dynamic moduli data are fitted for several materials of interest to medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012) and synthetic elastomer (Osanaiye, 1996). These materials represent a rather wide class of soft viscoelastic materials which are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior at higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are then rated according to the accuracy of the identification and to their robustness to extrapolation. It is shown that all kernels whose decay rate is too fast lead to a poor fit and high errors when the material behavior is extrapolated to broader frequency ranges. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
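A fractional kernel of the kind compared in the paper can be written directly in the frequency domain. Below is a Cole-Cole-type complex modulus with invented parameters (Gr relaxed, Gu unrelaxed; alpha = 1 recovers the exponential Debye kernel):

```python
import math

# Cole-Cole-type fractional relaxation model in the frequency domain:
# G*(w) = Gu + (Gr - Gu) / (1 + (1j * w * tau)**alpha), with relaxed
# modulus Gr (w -> 0), unrelaxed modulus Gu (w -> inf), 0 < alpha <= 1.
# alpha = 1 recovers the single-exponential (Debye) kernel.
# All parameter values are invented for illustration.
def cole_cole(w, Gr=1.0e3, Gu=1.0e5, tau=0.1, alpha=0.6):
    return Gu + (Gr - Gu) / (1.0 + (1j * w * tau) ** alpha)

G = cole_cole(2.0 * math.pi * 5.0)   # complex modulus at 5 Hz
storage, loss = G.real, G.imag
```

The fractional exponent alpha broadens the relaxation spectrum, which is why these kernels extrapolate more robustly beyond the fitted 0-10 Hz window than a single-exponential Debye model.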
Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M
2002-07-21
The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.
Sommerfeld, Thomas; Ehara, Masahiro
2015-01-21
The energy of a temporary anion can be computed by adding a stabilizing potential to the molecular Hamiltonian, increasing the stabilization until the temporary state is turned into a bound state, and then further increasing the stabilization until enough bound state energies have been collected so that these can be extrapolated back to vanishing stabilization. The lifetime can be obtained from the same data, but only if the extrapolation is done through analytic continuation of the momentum as a function of the square root of a shifted stabilizing parameter. This method is known as analytic continuation of the coupling constant, and it requires, at least in principle, that the bound-state input data are computed with a short-range stabilizing potential. In the context of molecules and ab initio packages, long-range Coulomb stabilizing potentials are, however, far more convenient and have been used in the past with some success, although the error introduced by the long-range nature of the stabilizing potential remains unknown. Here, we introduce a soft-Voronoi box potential that can serve as a short-range stabilizing potential. The difference between a Coulomb and the new stabilization is analyzed in detail for a one-dimensional model system as well as for the ²Πu resonance of CO2(-), and in both cases, the extrapolation results are compared to independently computed resonance parameters: from complex scaling for the model, and from complex absorbing potential calculations for CO2(-). It is important to emphasize that for both the model and for CO2(-), all three sets of results have, respectively, been obtained with the same electronic structure method and basis set so that the theoretical description of the continuum can be directly compared. The new soft-Voronoi-box-based extrapolation is then used to study the influence of the size of the diffuse and the valence basis sets on the computed resonance parameters.
Line-of-sight extrapolation noise in dust polarization
NASA Astrophysics Data System (ADS)
Poh, Jason; Dodelson, Scott
2017-05-01
The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g. 350 GHz) is due solely to dust and then extrapolate the signal down to a lower frequency (e.g. 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of ~20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise on a greybody dust model consistent with Planck and Pan-STARRS observations, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments.
Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r ≲0.0015 in the greybody dust models considered in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perko, Z; Bortfeld, T; Hong, T
Purpose: The safe use of radiotherapy requires the knowledge of tolerable organ doses. For experimental fractionation schemes (e.g. hypofractionation) these are typically extrapolated from traditional fractionation schedules using the Biologically Effective Dose (BED) model. This work demonstrates that using the mean dose in the standard BED equation may overestimate tolerances, potentially leading to unsafe treatments. Instead, extrapolation of mean dose tolerances should take the spatial dose distribution into account. Methods: A formula has been derived to extrapolate mean physical dose constraints such that they are mean BED equivalent. This formula constitutes a modified BED equation where the influence of the spatial dose distribution is summarized in a single parameter, the dose shape factor. To quantify effects we analyzed 14 liver cancer patients previously treated with proton therapy in 5 or 15 fractions, for whom also photon IMRT plans were available. Results: Our work has two main implications. First, in typical clinical plans the dose distribution can have significant effects. When mean dose tolerances are extrapolated from standard fractionation towards hypofractionation they can be overestimated by 10–15%. Second, the shape difference between photon and proton dose distributions can cause 30–40% differences in mean physical dose for plans having the same mean BED. The combined effect when extrapolating proton doses to mean BED equivalent photon doses in traditional 35 fraction regimens resulted in up to 7–8 Gy higher doses than when applying the standard BED formula. This can potentially lead to unsafe treatments (in 1 of the 14 analyzed plans the liver mean dose was above its 32 Gy tolerance). Conclusion: The shape effect should be accounted for to avoid unsafe overestimation of mean dose tolerances, particularly when estimating constraints for hypofractionated regimens.
In addition, tolerances established for a given treatment modality cannot necessarily be applied to other modalities with drastically different dose distributions.
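The standard (shape-factor-free) BED equation the abstract starts from can be used to rescale a tolerance dose between fractionation schemes. The sketch below shows this iso-effect conversion; the α/β value of 3 Gy is a commonly assumed late-effect figure, not a number from the abstract, and the paper's dose shape factor correction is deliberately not reproduced here.

```python
import math

# Standard BED iso-effect conversion: find the total dose D2 in n2 fractions
# matching the BED of a known tolerance dose D1 in n1 fractions.
# BED = D * (1 + d / (alpha/beta)), with d the dose per fraction.
# alpha/beta = 3 Gy is an assumed late-effect value, not from the abstract.
def isoeffective_dose(d_total_1, n1, n2, alpha_beta=3.0):
    bed = d_total_1 * (1.0 + d_total_1 / n1 / alpha_beta)
    # Solve D2 * (1 + D2 / (n2 * alpha_beta)) = bed for the positive root D2.
    a = 1.0 / (n2 * alpha_beta)
    return (-1.0 + math.sqrt(1.0 + 4.0 * a * bed)) / (2.0 * a)

# Example: a 32 Gy mean-liver-dose tolerance in 15 fractions rescaled to 5 fractions.
print(round(isoeffective_dose(32.0, 15, 5), 1))   # → 22.1 Gy
```

The abstract's warning is that applying this equation directly to the *mean* dose of a heterogeneous distribution can overestimate the extrapolated tolerance by 10–15%.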
Nonlinear cancer response at ultralow dose: a 40800-animal ED(001) tumor and biomarker study.
Bailey, George S; Reddy, Ashok P; Pereira, Clifford B; Harttig, Ulrich; Baird, William; Spitsbergen, Jan M; Hendricks, Jerry D; Orner, Gayle A; Williams, David E; Swenberg, James A
2009-07-01
Assessment of human cancer risk from animal carcinogen studies is severely limited by inadequate experimental data at environmentally relevant exposures and by procedures requiring modeled extrapolations many orders of magnitude below observable data. We used rainbow trout, an animal model well-suited to ultralow-dose carcinogenesis research, to explore dose-response down to a targeted 10 excess liver tumors per 10000 animals (ED(001)). A total of 40800 trout were fed 0-225 ppm dibenzo[a,l]pyrene (DBP) for 4 weeks, sampled for biomarker analyses, and returned to control diet for 9 months prior to gross and histologic examination. Suspect tumors were confirmed by pathology, and resulting incidences were modeled and compared to the default EPA LED(10) linear extrapolation method. The study provided observed incidence data down to two above-background liver tumors per 10000 animals at the lowest dose (that is, an unmodeled ED(0002) measurement). Among nine statistical models explored, three were determined to fit the liver data well: linear probit, quadratic logit, and Ryzin-Rai. None of these fitted models is compatible with the LED(10) default assumption, and all fell increasingly below the default extrapolation with decreasing DBP dose. Low-dose tumor response was also not predictable from hepatic DBP-DNA adduct biomarkers, which accumulated as a power function of dose (adducts = 100 × DBP^1.31). Two-order extrapolations below the modeled tumor data predicted DBP doses producing one excess cancer per million individuals (ED(10^-6)) that were 500-1500-fold higher than that predicted by the five-order LED(10) extrapolation. These results are considered specific to the animal model, carcinogen, and protocol used. They provide the first experimental estimation in any model of the degree of conservatism that may exist for the EPA default linear assumption for a genotoxic carcinogen.
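The EPA LED(10) default the study tests is a straight line through the origin and the 10% point of departure. A minimal sketch, with an illustrative LED(10) value that is not taken from the trout data, shows why sublinear fitted models predict far higher doses for the same one-in-a-million risk:

```python
import math

# EPA-style LED10 linear default: below the modeled point of departure,
# extra risk is assumed proportional to dose, risk(d) = 0.10 * d / LED10.
# The LED10 value here is illustrative, not taken from the trout study.
led10 = 1.0          # dose (ppm DBP) at 10% extra tumor risk, lower bound
target_risk = 1e-6   # one excess cancer per million individuals

# Linear default extrapolation (five orders of magnitude below LED10).
d_linear = target_risk * led10 / 0.10

# A sublinear (here quadratic) model through the same point of departure
# predicts a much higher dose for the same risk, illustrating the kind of
# gap (500-1500-fold for the study's fitted models) the abstract reports.
d_quad = led10 * math.sqrt(target_risk / 0.10)
print(round(d_quad / d_linear))   # ~316-fold higher for this toy quadratic
```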
NASA Astrophysics Data System (ADS)
Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch
2017-09-01
A new method for the probabilistic nowcasting of instantaneous rain rates (ENS) based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold in a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by the calibration of forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: combined method (COM) and neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble comprised the united ensemble members of ENS and NEI. The results showed that the calibration technique significantly reduces the bias of the probability forecasts by including additional uncertainties that correspond to neglected processes during the extrapolation. In addition, the calibration can also be used for finding the limits of maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable size of the ensemble is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
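The Brier score used above to verify the probability forecasts is just the mean squared difference between the forecast probability and the binary outcome. A minimal sketch with made-up ensemble counts:

```python
import numpy as np

# Brier score for probabilistic threshold-exceedance forecasts:
# BS = mean((p - o)^2), p = forecast probability, o = binary outcome.
# All numbers below are illustrative, not from the paper.
def brier_score(p, o):
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

# 100-member ensemble: forecast probability at each grid point is the
# fraction of members whose extrapolated rain rate exceeds the threshold.
members_exceeding = np.array([80, 10, 55, 0])    # out of 100, per grid point
p = members_exceeding / 100.0
observed = np.array([1, 0, 1, 0])                # threshold actually exceeded?
print(round(brier_score(p, observed), 4))        # → 0.0631
```

Lower is better; the paper's calibration targets the reliability component of this score.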
Reliability of constricted double-heterojunction AlGaAs diode lasers
NASA Technical Reports Server (NTRS)
Botez, D.; Connolly, J. C.; Ettenberg, M.; Gilbert, D. B.; Hughes, J. J.
1983-01-01
Constricted double-heterojunction diode lasers have been life tested at 70 C heatsink temperature and 3-4 mW/facet in CW operation. A median life of 7800 h is obtained at 70 C, which extrapolates to 400,000 h median life at room temperature. The extrapolated mean time to failure at room temperature is in excess of 1,000,000 h. Single-longitudinal-mode CW operation is maintained after 10,000 h of accelerated aging at 70 C.
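Life-test extrapolations of this kind conventionally use an Arrhenius acceleration model. As a hedged sketch, the activation energy implied by the abstract's numbers can be recovered by inverting that model; the exact room temperature is not stated in the abstract and is assumed here to be 22 °C.

```python
import math

# Arrhenius acceleration model commonly used for diode-laser life tests:
# t_median(T) = A * exp(Ea / (k_B * T)).
# Recover the activation energy implied by 7800 h at 70 C extrapolating to
# 400,000 h at room temperature (assumed 22 C; not stated in the abstract).
k_B = 8.617e-5                         # Boltzmann constant, eV/K
t_hot, T_hot = 7800.0, 70.0 + 273.15   # median life at heatsink temperature
t_room, T_room = 400000.0, 22.0 + 273.15

Ea = k_B * math.log(t_room / t_hot) / (1.0 / T_room - 1.0 / T_hot)
print(round(Ea, 2))   # → 0.72 eV
```

An activation energy around 0.7 eV is in the range typically reported for AlGaAs laser degradation, which makes the extrapolation internally plausible.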
Xia, Hong; Luo, Zhendong
2017-01-01
In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model with very few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique. We analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and we validate the correctness and dependability of the SMFEROE model by means of numerical simulations.
Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc
2013-12-01
The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) signals is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those for weekend days.
Predicting river travel time from hydraulic characteristics
Jobson, H.E.
2001-01-01
Predicting the effect of a pollutant spill on downstream water quality depends primarily on the water velocity, longitudinal mixing, and chemical/physical reactions. Of these, velocity is the most important and the most difficult to predict. This paper provides guidance on extrapolating travel-time information from one within-bank discharge to another. In many cases, a time series of discharge (such as provided by a U.S. Geological Survey stream gauge) will provide an excellent basis for this extrapolation. Otherwise, the accuracy of a travel-time extrapolation based on a resistance equation can be greatly improved by assuming the total flow area is composed of two parts, an active and an inactive area. For 60 reaches of 12 rivers with slopes greater than about 0.0002, travel times could be predicted to within about 10% by computing the active flow area using the Manning equation with n = 0.035 and assuming a constant inactive area for each reach. The predicted travel times were not very sensitive to the assumed values of bed slope or channel width.
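The two-area idea above can be sketched numerically: compute the active flow area from the Manning equation (with the paper's n = 0.035 and a wide-channel approximation where hydraulic radius ≈ depth), add a constant inactive area, and divide reach volume by discharge. The channel geometry, discharge, and inactive area below are illustrative assumptions, not values from the paper.

```python
# Two-area travel-time sketch: active area from the Manning equation plus a
# constant inactive area.  All channel numbers are illustrative assumptions.
n = 0.035         # Manning roughness used in the paper
S = 0.0005        # bed slope (> 0.0002, as the paper's reaches require)
W = 30.0          # channel width, m
Q = 20.0          # discharge, m^3/s
A_inactive = 8.0  # assumed constant inactive area for the reach, m^2
L = 10000.0       # reach length, m

# Wide-channel Manning (SI): Q = (1/n) * W * y**(5/3) * S**0.5, solved for depth y.
y = (Q * n / (W * S ** 0.5)) ** 0.6
A_active = W * y
travel_time_h = L * (A_active + A_inactive) / Q / 3600.0
print(round(travel_time_h, 1))   # → 5.4 hours for this hypothetical reach
```

Because travel time scales with total area over discharge, getting the active/inactive split right matters more than the exact slope or width, consistent with the paper's sensitivity remark.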
Vlčková, Klára; Hofman, Jakub
2012-01-01
The close relationship between soil organic matter and the bioavailability of POPs in soils suggests the possibility of using it for extrapolation between different soils. The aim of this study was to show that TOC content is not the sole factor affecting the bioavailability of POPs and that TOC-based extrapolation might be incorrect, especially when comparing natural and artificial soils. Three natural soils with increasing TOC and three artificial soils with TOC comparable to these natural soils were spiked with phenanthrene, pyrene, lindane, p,p'-DDT, and PCB 153 and studied after 0, 14, 28, and 56 days. At each sampling point, total soil concentration and bioaccumulation in the earthworm Eisenia fetida were measured. The results showed different behavior and bioavailability of POPs in natural and artificial soils and apparent effects of aging on these differences. Hence, direct TOC-based extrapolation between various soils seems to be of limited validity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latychevskaia, Tatiana; Fink, Hans-Werner
Previously reported crystalline structures obtained by iterative phase-retrieval reconstruction of their diffraction patterns seem to be free of any irregularities or defects in the lattice, which appears to be unrealistic. We demonstrate here that the structure of a nanocrystal, including its atomic defects, can unambiguously be recovered from its diffraction pattern alone by applying a direct phase retrieval procedure that does not rely on prior information about the object shape. Individual point defects in the atomic lattice are clearly apparent. Conventional phase retrieval routines assume isotropic scattering. We show that when dealing with electrons, the quantitatively correct transmission function of the sample cannot be retrieved due to the anisotropic, strong forward scattering specific to electrons. We summarize the conditions for this phase retrieval method and show that the diffraction pattern can be extrapolated beyond the original record to reveal formerly invisible Bragg peaks. Such an extrapolated wave field leads to enhanced spatial resolution in the reconstruction.
Guided wave tomography in anisotropic media using recursive extrapolation operators
NASA Astrophysics Data System (ADS)
Volker, Arno
2018-04-01
Guided wave tomography is an advanced technology for quantitative wall thickness mapping to image wall loss due to corrosion or erosion. An inversion approach is used to match the measured phase (time) at a specific frequency to a model. The accuracy of the model determines the sizing accuracy. Particularly for seam welded pipes there is a measurable amount of anisotropy. Moreover, for small defects a ray-tracing based modelling approach is no longer accurate. Both issues are solved by applying a recursive wave field extrapolation operator assuming vertical transverse anisotropy. The inversion scheme is extended by not only estimating the wall loss profile but also the anisotropy, local material changes and transducer ring alignment errors. This makes the approach more robust. The approach will be demonstrated experimentally on different defect sizes, and a comparison will be made between this new approach and an isotropic ray-tracing approach. An example is given in Fig. 1 for a 75 mm wide, 5 mm deep defect. The wave field extrapolation based tomography clearly provides superior results.
Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.
Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel
2016-09-08
We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose molecular sizes range from 50 to 1000, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical approximation framework and paves the way toward a theoretical protocol to systematically compute the solvation energies of complex organic ions.
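A common power-law function of cluster size, of the kind referred to above, treats the solvation energy as linear in n^(-1/3) (droplet radius grows as n^(1/3)). The sketch below uses synthetic energies chosen to satisfy that model exactly, purely to show how a three-droplet fit recovers the bulk limit; the numbers are not from the paper.

```python
import numpy as np

# Power-law extrapolation of single-ion solvation energy vs droplet size:
# E(n) = E_bulk + a * n**(-1/3), fitted to three droplet sizes.
# Energies are synthetic (chosen to satisfy the model), not simulation data.
sizes = np.array([100.0, 300.0, 1000.0])
E_bulk_true, a_true = -75.0, 40.0                       # kcal/mol, assumed
energies = E_bulk_true + a_true * sizes ** (-1.0 / 3.0)

# The model is linear in x = n**(-1/3), so ordinary least squares suffices;
# the intercept at x -> 0 (infinite droplet) is the bulk solvation energy.
x = sizes ** (-1.0 / 3.0)
a_fit, E_bulk_fit = np.polyfit(x, energies, 1)
print(round(E_bulk_fit, 3))   # → -75.0
```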
Finite volume effects on the electric polarizability of neutral hadrons in lattice QCD
NASA Astrophysics Data System (ADS)
Lujan, M.; Alexandru, A.; Freeman, W.; Lee, F. X.
2016-10-01
We study the finite volume effects on the electric polarizability for the neutron, neutral pion, and neutral kaon using eight dynamically generated two-flavor nHYP-clover ensembles at two different pion masses: 306(1) and 227(2) MeV. An infinite volume extrapolation is performed for each hadron at both pion masses. For the neutral kaon, finite volume effects are relatively mild. The dependence on the quark mass is also mild, and a reliable chiral extrapolation can be performed along with the infinite volume extrapolation. Our result is α_K0^phys = 0.356(74)(46) × 10^-4 fm^3. In contrast, for the neutron, the electric polarizability depends strongly on the volume. After removing the finite volume corrections, our neutron polarizability results are in good agreement with chiral perturbation theory. For the connected part of the neutral pion polarizability, the negative trend persists; it is not due to finite volume effects but likely to sea quark charging effects.
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three-dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to conveniently obtain the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H3-regular problem, and an anisotropic problem, are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
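Richardson extrapolation, the ingredient used above to build the improved initial guess from two grid levels, cancels the leading error term of a discretization by combining solutions at step sizes h and h/2. A minimal sketch on a second-order central difference (not the paper's FE setting):

```python
import math

# Richardson extrapolation demo: for a method with error O(h^p), the
# combination (2**p * u_{h/2} - u_h) / (2**p - 1) cancels the leading term.
# Shown here for the second-order (p = 2) central difference of f = exp at x = 1.
def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

f, x = math.exp, 1.0
d_h = central_diff(f, x, 0.1)
d_h2 = central_diff(f, x, 0.05)

d_extrap = (4.0 * d_h2 - d_h) / 3.0   # p = 2: (2^2 * d_h2 - d_h) / (2^2 - 1)
exact = math.exp(1.0)
print(abs(d_h - exact) > abs(d_extrap - exact))   # → True: extrapolation is closer
```

The combined value is fourth-order accurate, which is why two cheap coarse-grid solves can seed a much better fine-grid iteration.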
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, the cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
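The regression at the heart of such efficiency-based analysis rests on the fact that, in the exponential phase, fluorescence grows as F(c) = F0 · E^c, so log(F) is linear in cycle number and the slope yields the amplification efficiency E. A minimal sketch on idealized synthetic data (this is the underlying principle, not the full Q-Anal iterative phase-detection algorithm):

```python
import numpy as np

# In the qPCR exponential phase, F(c) = F0 * E**c, so a linear fit of
# log(F) vs cycle number c recovers the amplification efficiency E.
# Data are synthetic and noise-free, with E = 1.95 by construction.
cycles = np.arange(14, 20)
E_true, F0 = 1.95, 1e-6
fluor = F0 * E_true ** cycles

slope, intercept = np.polyfit(cycles, np.log(fluor), 1)
E_fit = np.exp(slope)
print(round(E_fit, 2))   # → 1.95
```

Q-Anal's contribution is to choose the fitted cycle window and the shared fluorescence threshold so as to minimize the regression error for both amplicons before computing Ct and the expression ratio.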
A model for the data extrapolation of greenhouse gas emissions in the Brazilian hydroelectric system
NASA Astrophysics Data System (ADS)
Pinguelli Rosa, Luiz; Aurélio dos Santos, Marco; Gesteira, Claudio; Elias Xavier, Adilson
2016-06-01
Hydropower reservoirs are artificial water systems and comprise a small proportion of the Earth’s continental territory. However, they play an important role in the aquatic biogeochemistry and may affect the environment negatively. Since the 90s, as a result of research on organic matter decay in manmade flooded areas, some reports have associated greenhouse gas emissions with dam construction. Pioneering work carried out in the early period challenged the view that hydroelectric plants generate completely clean energy. Those estimates suggested that GHG emissions into the atmosphere from some hydroelectric dams may be significant when measured per unit of energy generated and should be compared to GHG emissions from fossil fuels used for power generation. The contribution to global warming of greenhouse gases emitted by hydropower reservoirs is currently the subject of various international discussions and debates. One of the most controversial issues is the extrapolation of data from different sites. In this study, the extrapolation from a site sample where measurements were made to the complete set of 251 reservoirs in Brazil, comprising a total flooded area of 32 485 square kilometers, was derived from the theory of self-organized criticality. We employed a power law for its statistical representation. The present article reviews the data generated at that time in order to demonstrate how, with the help of mathematical tools, we can extrapolate values from one reservoir to another without compromising the reliability of the results.
NASA Astrophysics Data System (ADS)
Dalmasse, K.; Pariat, É.; Valori, G.; Jing, J.; Démoulin, P.
2018-01-01
In the solar corona, magnetic helicity slowly and continuously accumulates in response to plasma flows tangential to the photosphere and magnetic flux emergence through it. Analyzing this transfer of magnetic helicity is key for identifying its role in the dynamics of active regions (ARs). The connectivity-based helicity flux density method was recently developed for studying the 2D and 3D transfer of magnetic helicity in ARs. The method takes into account the 3D nature of magnetic helicity by explicitly using knowledge of the magnetic field connectivity, which allows it to faithfully track the photospheric flux of magnetic helicity. Because the magnetic field is not measured in the solar corona, modeled 3D solutions obtained from force-free magnetic field extrapolations must be used to derive the magnetic connectivity. Different extrapolation methods can lead to markedly different 3D magnetic field connectivities, thus questioning the reliability of the connectivity-based approach in observational applications. We address these concerns by applying this method to the isolated and internally complex AR 11158 with different magnetic field extrapolation models. We show that the connectivity-based calculations are robust to different extrapolation methods, in particular with regard to identifying regions of opposite magnetic helicity flux. We conclude that the connectivity-based approach can be reliably used in observational analyses and is a promising tool for studying the transfer of magnetic helicity in ARs and relating it to their flaring activity.
Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa
2017-04-01
International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify technical critical aspects in these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and logistical management of control activity, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and provides a starting point for defining operating procedures.
Maier, B; Reitsamer-Tontsch, S; Weisser, C; Schreiner, B
2011-10-01
Austria still lacks a baby-take-home rate after assisted reproductive technologies (ART) and therefore an adequate quality management of ART. This paper extrapolates data for Austria about births/infants after ART at the University Clinic of Obstetrics and Gynaecology (PMU/SALK) in Salzburg, especially with regard to multiple births/infants, collected between 2000 and 2009. On average, 2,271 infants were born per year during the last 10 years. Among them, 76 infants (3.34% of all children) were born after ART. Of all 759 children conceived by ART and born at the University Clinic of Obstetrics and Gynaecology, 368 are multiples, i.e. 48.5% of all children born after ART. 31.6% of all multiples born were conceived through ART. The extrapolation of the data concerning multiples yields 1,255 multiples/year after ART for Austria. Without a baby-take-home rate, serious quality management of reproductive medicine is impossible. Online registration of deliveries and infants is the only adequate approach. The data of this statistical extrapolation from a single perinatal center not only provide a survey of the situation in Austria, but also support the call for a quantitative (numbers) as well as qualitative (condition of infants) baby-take-home rate after ART.
Frequency Comparison of [Formula: see text] Ion Optical Clocks at PTB and NPL via GPS PPP.
Leute, J; Huntemann, N; Lipphardt, B; Tamm, Christian; Nisbet-Jones, P B R; King, S A; Godun, R M; Jones, J M; Margolis, H S; Whibberley, P B; Wallin, A; Merimaa, M; Gill, P; Peik, E
2016-07-01
We used precise point positioning, a well-established GPS carrier-phase frequency transfer method, to perform a direct remote comparison of two optical frequency standards based on single laser-cooled [Formula: see text] ions operated at the National Physical Laboratory (NPL), U.K., and the Physikalisch-Technische Bundesanstalt (PTB), Germany. At both institutes, an active hydrogen maser serves as a flywheel oscillator, which is connected to a GPS receiver as an external frequency reference and compared simultaneously to a realization of the unperturbed frequency of the ²S1/2(F=0)-²D3/2(F=2) electric quadrupole transition in [Formula: see text] via an optical femtosecond frequency comb. To profit from long coherent GPS-link measurements, we extrapolate the fractional frequency difference over the various data gaps in the optical clock-to-maser comparisons, which introduces maser noise to the frequency comparison but improves the uncertainty from the GPS-link instability. We determined the total statistical uncertainty, consisting of the GPS-link uncertainty and the extrapolation uncertainties, for several extrapolation schemes. Using the extrapolation scheme with the smallest combined uncertainty, we find a fractional frequency difference [Formula: see text] of -1.3×10^-15 with a combined uncertainty of 1.2×10^-15 for a total measurement time of 67 h. This result is consistent with agreement of the frequencies realized by both optical clocks and with recent absolute frequency measurements against caesium fountain clocks within the corresponding uncertainties.
Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Y.; Maier, A.; Berger, M.
2015-04-15
Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear to be ad hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior-posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in the position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation.
The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on nontruncated data, even in the presence of severe truncation, compared to an rRMSE of 8.0% when applying a state-of-the-art heuristic extrapolation technique. Conclusions: The method we proposed in this paper leads to a major improvement in image quality for 3D C-arm based VOI imaging. It involves no additional radiation when using fluoroscopic images that are acquired during the patient isocentering process. The model estimation can be readily integrated into the existing interventional workflow without additional hardware.
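Definitions of rRMSE vary between papers (the normalizer may be the mean, range, or maximum of the reference image); a minimal sketch of one common form, using small illustrative arrays rather than reconstruction data:

```python
import math

def rrmse_percent(recon, reference):
    """Root-mean-square error of recon vs. reference, expressed relative
    to the mean absolute reference value, in percent."""
    mse = sum((r - t) ** 2 for r, t in zip(recon, reference)) / len(reference)
    mean_abs_ref = sum(abs(t) for t in reference) / len(reference)
    return 100.0 * math.sqrt(mse) / mean_abs_ref

value = rrmse_percent([1.0, 2.0, 3.0], [1.0, 2.0, 2.0])
print(f"rRMSE = {value:.2f}%")
```

In practice the sums would run over the voxels of the VOI reconstruction and the nontruncated reference.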
NASA Technical Reports Server (NTRS)
Jaeck, C. L.
1976-01-01
A test was conducted in the Boeing Large Anechoic Chamber to determine static jet noise source locations of six baseline and suppressor nozzle models, and establish a technique for extrapolating near field data into the far field. The test covered nozzle pressure ratios from 1.44 to 2.25 and jet velocities from 412 to 594 m/s at a total temperature of 844 K.
A study of alternative schemes for extrapolation of secular variation at observatories
Alldredge, L.R.
1976-01-01
The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable in worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work.
2016-05-01
Measurements for distilled mustard (HD) and non-welded high-density polyethylene (HDPE) at 120 °F were completed for thicknesses of 20–80 mil for … extrapolation to the ~250 mil container thickness. A Fick's law extrapolation inferred a breakthrough time of 10–11 days for the 250 mil non-welded HDPE.
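The quadratic thickness scaling behind such a Fick's-law extrapolation can be sketched as follows; the 0.67-day thin-membrane breakthrough time below is an invented illustrative value, not the report's measurement:

```python
def lag_time(thickness_mil, D):
    """Diffusive lag (breakthrough) time, t = L^2 / (6 D)."""
    return thickness_mil ** 2 / (6.0 * D)

# Hypothetical thin-membrane datum: breakthrough at 0.67 days for 80 mil.
D = 80.0 ** 2 / (6.0 * 0.67)      # infer an effective diffusivity (mil^2/day)
t_250 = lag_time(250.0, D)        # quadratic extrapolation to 250 mil
print(f"predicted breakthrough at 250 mil: {t_250:.2f} days")
```

Because t scales with L^2, breakthrough times measured on thin coupons extrapolate by the ratio (250/L)^2 to the container wall thickness.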
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's Extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
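The Richardson extrapolation the abstract refers to can be sketched in its textbook form; the three solution values and the refinement ratio below are invented for illustration:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed convergence order from three solutions (fine f1 -> coarse f3)
    on grids related by a constant refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def richardson(f1, f2, r, p):
    """Extrapolated 'grid-independent' value and fine-grid error estimate."""
    f_exact = f1 + (f1 - f2) / (r ** p - 1.0)
    return f_exact, f1 - f_exact

# Hypothetical solution functional on three systematically refined grids.
f1, f2, f3, r = 1.01, 1.04, 1.16, 2.0
p = observed_order(f1, f2, f3, r)
f_exact, err = richardson(f1, f2, r, p)
print(f"order p = {p:.2f}, extrapolated value = {f_exact:.4f}, error = {err:.4f}")
```

On unstructured grids the complication addressed by the paper is that f1, f2, f3 must first be interpolated to common locations before this formula applies.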
Extrapolation to Nonequilibrium from Coarse-Grained Response Theory
NASA Astrophysics Data System (ADS)
Basu, Urna; Helden, Laurent; Krüger, Matthias
2018-05-01
Nonlinear response theory, in contrast to linear cases, involves (dynamical) details, and this makes application to many-body systems challenging. From the microscopic starting point we obtain an exact response theory for a small number of coarse-grained degrees of freedom. With it, an extrapolation scheme uses near-equilibrium measurements to predict far-from-equilibrium properties (here, second order responses). Because it does not involve system details, this approach can be applied to many-body systems. It is illustrated in a four-state model and in the near critical Ising model.
Resolution enhancement in digital holography by self-extrapolation of holograms.
Latychevskaia, Tatiana; Fink, Hans-Werner
2013-03-25
It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
The correlation of fractal structures in the photospheric and the coronal magnetic field
NASA Astrophysics Data System (ADS)
Dimitropoulou, M.; Georgoulis, M.; Isliker, H.; Vlahos, L.; Anastasiadis, A.; Strintzi, D.; Moussas, X.
2009-10-01
Context: This work examines the relation between the fractal properties of the photospheric magnetic patterns and those of the coronal magnetic fields in solar active regions. Aims: We investigate whether there is any correlation between the fractal dimensions of the photospheric structures and the magnetic discontinuities formed in the corona. Methods: To investigate the connection between the photospheric and coronal complexity, we used a nonlinear force-free extrapolation method that reconstructs the 3d magnetic fields using 2d observed vector magnetograms as boundary conditions. We then located the magnetic discontinuities, which are considered as spatial proxies of reconnection-related instabilities. These discontinuities form well-defined volumes, called here unstable volumes. We calculated the fractal dimensions of these unstable volumes and compared them to the fractal dimensions of the boundary vector magnetograms. Results: Our results show no correlation between the fractal dimensions of the observed 2d photospheric structures and the extrapolated unstable volumes in the corona, when nonlinear force-free extrapolation is used. This result is independent of efforts to (1) bring the photospheric magnetic fields closer to a nonlinear force-free equilibrium and (2) omit the lower part of the modeled magnetic field volume that is almost completely filled by unstable volumes. A significant correlation between the fractal dimensions of the photospheric and coronal magnetic features is only observed at the zero level (lower limit) of approximation of a current-free (potential) magnetic field extrapolation. 
Conclusions: We conclude that the complicated transition from photospheric non-force-free fields to coronal force-free ones hampers any direct correlation between the fractal dimensions of the 2d photospheric patterns and their 3d counterparts in the corona at the nonlinear force-free limit, which can be considered as a second level of approximation in this study. Correspondingly, in the zero and first levels of approximation, namely, the potential and linear force-free extrapolation, respectively, we reveal a significant correlation between the fractal dimensions of the photospheric and coronal structures, which can be attributed to the lack of electric currents or to their purely field-aligned orientation.
Approach for extrapolating in vitro metabolism data to refine bioconcentration factor estimates.
Cowan-Ellsberry, Christina E; Dyer, Scott D; Erhardt, Susan; Bernhard, Mary Jo; Roe, Amy L; Dowty, Martin E; Weisbrod, Annie V
2008-02-01
National and international chemical management programs are assessing thousands of chemicals for their persistence, bioaccumulation, and environmental toxicity properties; however, data for evaluating the bioaccumulation potential for fish are limited. Computer-based models that account for the uptake and elimination processes that contribute to bioaccumulation may help to meet the need for reliable estimates. One critical elimination process of chemicals is metabolic transformation. It has been suggested that in vitro metabolic transformation tests using fish liver hepatocytes or S9 fractions can provide rapid and cost-effective measurements of fish metabolic potential, which could be used to refine bioconcentration factor (BCF) computer model estimates. Therefore, recent activity has focused on developing in vitro methods to measure metabolic transformation in cellular and subcellular fish liver fractions. A method to extrapolate in vitro test data to whole-body metabolic transformation rates is presented that could be used to refine BCF computer model estimates. This extrapolation approach is based on concepts used to determine the fate and distribution of drugs within the human body, which have successfully supported the development of new pharmaceuticals for years. In addition, this approach has already been applied in physiologically based toxicokinetic models for fish. The validity of the in vitro to in vivo extrapolation is illustrated using the rate of loss of parent chemical measured in two independent in vitro test systems: (1) a subcellular enzymatic test using the trout liver S9 fraction, and (2) primary hepatocytes isolated from the common carp. The test chemicals evaluated have high-quality in vivo BCF values and a range of log Kow from 3.5 to 6.7.
The results show very good agreement between the measured BCF and estimated BCF values when the extrapolated whole body metabolism rates are included, thus suggesting that in vitro biotransformation data could effectively be used to reduce in vivo BCF testing and refine BCF model estimates. However, additional fish physiological data for parameterization and validation for a wider range of chemicals are needed.
NASA Astrophysics Data System (ADS)
Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan
2013-04-01
Nowcasting of precipitation events, especially thunderstorm events or winter storms, has high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increase of handling capacity, anticipation of avoidance manoeuvres and increase of awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis for generating rapid-update forecasts in a time frame of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. On the basis of tracking radar echoes by correlation, the movement vectors of successive weather radar images are calculated. For every new successive radar image a set of ensemble precipitation fields is collected by using different parameter sets like pattern match size, different time steps, filter methods and an implementation of the history of tracking vectors and plausibility checks. This method considers the uncertainty in rain field displacement and different scales in time and space. By manually validating a set of case studies, the best verification method and skill score are defined and implemented into an online verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To get information about the quality and reliability of the extrapolation process, additional information on data quality (e.g.
shielding in Alpine areas) is extrapolated and combined with an extrapolation-quality-index. Subsequently the probability and quality information of the forecast ensemble is available and flexible blending to numerical prediction model for each subarea is possible. Simultaneously with automatic processing the ensemble nowcasting product is visualized in a new innovative way which combines the intensity, probability and quality information for different subareas in one forecast image.
Development of a primary standard for absorbed dose from unsealed radionuclide solutions
NASA Astrophysics Data System (ADS)
Billas, I.; Shipley, D.; Galer, S.; Bass, G.; Sander, T.; Fenwick, A.; Smyth, V.
2016-12-01
Currently, the determination of the internal absorbed dose to tissue from an administered radionuclide solution relies on Monte Carlo (MC) calculations based on published nuclear decay data, such as emission probabilities and energies. In order to validate these methods with measurements, it is necessary to achieve the required traceability of the internal absorbed dose measurements of a radionuclide solution to a primary standard of absorbed dose. The purpose of this work was to develop a suitable primary standard. A comparison between measurements and calculations of absorbed dose allows the validation of the internal radiation dose assessment methods. The absorbed dose from an yttrium-90 chloride (90YCl) solution was measured with an extrapolation chamber. A phantom was developed at the National Physical Laboratory (NPL), the UK’s National Measurement Institute, to position the extrapolation chamber as closely as possible to the surface of the solution. The performance of the extrapolation chamber was characterised and a full uncertainty budget for the absorbed dose determination was obtained. Absorbed dose to air in the collecting volume of the chamber was converted to absorbed dose at the centre of the radionuclide solution by applying a MC calculated correction factor. This allowed a direct comparison of the analytically calculated and experimentally determined absorbed dose of an 90YCl solution. The relative standard uncertainty in the measurement of absorbed dose at the centre of an 90YCl solution with the extrapolation chamber was found to be 1.6% (k = 1). The calculated 90Y absorbed doses from published medical internal radiation dose (MIRD) and radiation dose assessment resource (RADAR) data agreed with measurements to within 1.5% and 1.4%, respectively. This study has shown that it is feasible to use an extrapolation chamber for performing primary standard absorbed dose measurements of an unsealed radionuclide solution. 
Internal radiation dose assessment methods based on MIRD and RADAR data for 90Y have been validated with experimental absorbed dose determination and they agree within the stated expanded uncertainty (k = 2).
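The core of an extrapolation-chamber measurement is the slope of ionization current versus collecting-electrode gap, taken toward the zero-gap limit; a minimal sketch with invented current readings (the constants converting the slope to absorbed dose, such as W/e, air density, and electrode area, are noted but omitted from the arithmetic):

```python
# Hypothetical (gap in mm, ionization current in pA) pairs from the
# linear response region of an extrapolation chamber.
gaps = [0.5, 1.0, 1.5, 2.0]
currents = [10.2, 20.1, 30.3, 40.0]

n = len(gaps)
mx = sum(gaps) / n
my = sum(currents) / n
# Least-squares slope dI/dx; the absorbed dose rate is proportional to this
# slope in the zero-gap limit (times W/e divided by air density and area).
slope = (sum((x - mx) * (y - my) for x, y in zip(gaps, currents))
         / sum((x - mx) ** 2 for x in gaps))
print(f"dI/dx = {slope:.2f} pA/mm")
```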
Extraterrestrial cold chemistry. A need for a specific database.
NASA Astrophysics Data System (ADS)
Pernot, P.; Carrasco, N.; Dobrijevic, M.; Hébrard, E.; Plessis, S.; Wakelam, V.
2008-09-01
The major resource databases for building chemical models for photochemistry in cold environments are mainly based on those designed for Earth atmospheric chemistry or combustion, in which reaction rates are reported for temperatures typically above 300 K [1,2]. Kinetic data measured at low temperatures are very sparse; for instance, in state-of-the-art photochemical models of Titan's atmosphere, less than 10% of the rates have been measured in the relevant temperature range (100-200 K) [3-5]. In consequence, photochemical models rely mostly on low-T extrapolations by Arrhenius-type laws. There is more and more evidence that this is often inappropriate [6], and low-T extrapolations are hindered by very high uncertainty [3] (Fig. 1). The predictions of models based on those extrapolations are expected to be very inaccurate [4,7]. We argue that there is not much sense in increasing the complexity of the present models as long as this predictivity issue has not been resolved. Fig. 1: Uncertainty of the low-temperature extrapolation for the N(2D) + C2H4 reaction rate, from measurements in the range 225-292 K [10], assuming an Arrhenius law (blue line). The sample of rate laws is generated by Monte Carlo uncertainty propagation after a Bayesian Data reAnalysis (BDA) of experimental data. A dialogue between modellers and experimentalists is necessary to improve this situation. Considering the heavy costs of low-temperature reaction kinetics experiments, the identification of key reactions has to be based on an optimal strategy to improve the predictivity of photochemical models. This can be achieved by global sensitivity analysis, as illustrated on Titan atmospheric chemistry [8]. The main difficulty of this scheme is that it requires a lot of inputs, mainly the evaluation of uncertainty for extrapolated reaction rates. Although a large part has already been achieved by Hébrard et al. [3], extension and validation requires a group of experts.
A new generation of collaborative kinetic databases is needed to implement this scheme efficiently. The KIDA project [9], initiated by V. Wakelam for astrochemistry, has been joined by planetologists with similar prospects. EuroPlaNet will contribute to this effort through the organization of committees of experts on specific processes in atmospheric photochemistry.
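The exponential amplification of rate-coefficient uncertainty under low-T Arrhenius extrapolation, of the kind illustrated in Fig. 1 above, can be sketched with a simple Monte Carlo over an invented activation-energy scatter (illustrative numbers, not the N(2D) + C2H4 data):

```python
import math
import random

random.seed(0)
R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(T, A, Ea):
    """k(T) = A exp(-Ea / (R T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical fit from 225-292 K measurements: A = 1e-10 cm^3 s^-1,
# Ea = 2000 +/- 500 J/mol (Gaussian scatter on the fitted slope).
rates_150K = [arrhenius(150.0, 1e-10, random.gauss(2000.0, 500.0))
              for _ in range(10_000)]
spread = max(rates_150K) / min(rates_150K)
print(f"rate spread factor at 150 K: {spread:.1f}")
```

A modest scatter in Ea at the measured temperatures turns into an order-of-magnitude spread in k once extrapolated to 150 K, which is exactly why Arrhenius low-T extrapolations carry such large uncertainty.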
3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer
NASA Technical Reports Server (NTRS)
Lane, John
2012-01-01
Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has long been a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy due to the limitations of these kinds of research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD and, therefore, a 3D Z-R measurement using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis.
The original software (developed at the University of Central Florida, 1998-2000) has also been modified to read a standardized disdrometer data format (Joss-Waldvogel format). Other modifications to the software involve accounting for vertical ambient wind motion, as well as evaporation of the raindrop during its flight time.
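The disdrometer-side computation of Z and R from a measured drop size distribution, the point-measurement half of the Z-R problem, can be sketched as follows; the exponential DSD and the fall-speed law v(D) ≈ 3.78 D^0.67 m/s are common textbook choices, not the paper's specific algorithm:

```python
import math

dD = 0.1                                      # bin width, mm
D = [0.1 + i * dD for i in range(60)]         # drop diameters, 0.1-6.0 mm
N = [8000.0 * math.exp(-2.3 * d) for d in D]  # exponential DSD, m^-3 mm^-1

# Radar reflectivity factor: Z = sum N(D) D^6 dD   (mm^6 m^-3)
Z = sum(n * d ** 6 * dD for d, n in zip(D, N))

# Rain rate, using an empirical terminal velocity v(D) = 3.78 D^0.67 m/s:
# R = (pi/6) * 3.6e-3 * sum N(D) D^3 v(D) dD   (mm/h)
R = sum(math.pi / 6 * 3.6e-3 * n * d ** 3 * 3.78 * d ** 0.67 * dD
        for d, n in zip(D, N))
print(f"Z = {Z:.0f} mm^6/m^3, R = {R:.1f} mm/h")
```

Fitting Z = a R^b across many such distributions is how a disdrometer-derived Z-R relation is obtained; the 3D-DSD method extends the single-point N(D) into a volume before doing so.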
Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E
2017-12-01
Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
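The Hill numbers compared above (richness, Shannon diversity, and Simpson diversity correspond to orders q = 0, 1, 2) can be sketched with hypothetical quadrat counts:

```python
import math

def hill_number(abundances, q):
    """Hill number of order q: q=0 gives species richness, q=1 the
    exponential of Shannon entropy, q=2 the inverse Simpson index."""
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if q == 1:  # limit case, handled separately
        return math.exp(-sum(x * math.log(x) for x in p))
    return sum(x ** q for x in p) ** (1.0 / (1.0 - q))

counts = [50, 30, 10, 5, 3, 1, 1]    # hypothetical counts for one method
richness = hill_number(counts, 0)    # number of species
shannon = hill_number(counts, 1)     # effective number of common species
simpson = hill_number(counts, 2)     # effective number of dominant species
print(richness, shannon, simpson)
```

Higher q weights dominant species more heavily, so for any uneven community the three values decrease with q, which is why the methods above can rank differently on different indices.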
Fluxes all of the time? A primer on the temporal representativeness of FLUXNET
NASA Astrophysics Data System (ADS)
Chu, Housen; Baldocchi, Dennis D.; John, Ranjeet; Wolf, Sebastian; Reichstein, Markus
2017-02-01
FLUXNET, the global network of eddy covariance flux towers, provides the largest synthesized data set of CO2, H2O, and energy fluxes. To achieve the ultimate goal of providing flux information "everywhere and all of the time," studies have attempted to address the representativeness issue, i.e., whether measurements taken in a set of given locations and measurement periods can be extrapolated to a space- and time-explicit extent (e.g., terrestrial globe, 1982-2013 climatological baseline). This study focuses on the temporal representativeness of FLUXNET and tests whether site-specific measurement periods are sufficient to capture the natural variability of climatological and biological conditions. FLUXNET is unevenly representative across sites in terms of the measurement lengths and potentials of extrapolation in time. Similarity of driver conditions among years generally enables the extrapolation of flux information beyond measurement periods. Yet such extrapolation potentials are further constrained by site-specific variability of driver conditions. Several driver variables such as air temperature, diurnal temperature range, potential evapotranspiration, and normalized difference vegetation index had detectable trends and/or breakpoints within the baseline period, and flux measurements generally covered similar and biased conditions in those drivers. About 38% and 60% of FLUXNET sites adequately sampled the mean conditions and interannual variability of all driver conditions, respectively. For long-record sites (≥15 years) the percentages increased to 59% and 69%, respectively. However, the justification of temporal representativeness should not rely solely on the lengths of measurements. Whenever possible, site-specific consideration (e.g., trend, breakpoint, and interannual variability in drivers) should be taken into account.
Neural Extrapolation of Motion for a Ball Rolling Down an Inclined Plane
La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka
2014-01-01
It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion. PMID:24940874
Hand interception of occluded motion in humans: a test of model-based vs. on-line control
Zago, Myrka; Lacquaniti, Francesco
2015-01-01
Two control schemes have been hypothesized for the manual interception of fast visual targets. In the model-free on-line control, extrapolation of target motion is based on continuous visual information, without resorting to physical models. In the model-based control, instead, a prior model of target motion predicts the future spatiotemporal trajectory. To distinguish between the two hypotheses in the case of projectile motion, we asked participants to hit a ball that rolled down an incline at 0.2 g and then fell in air at 1 g along a parabola. By varying starting position, ball velocity and trajectory differed between trials. Motion on the incline was always visible, whereas parabolic motion was either visible or occluded. We found that participants were equally successful at hitting the falling ball in both visible and occluded conditions. Moreover, in different trials the intersection points were distributed along the parabolic trajectories of the ball, indicating that subjects were able to extrapolate an extended segment of the target trajectory. Remarkably, this trend was observed even at the very first repetition of movements. These results are consistent with the hypothesis of model-based control, but not with on-line control. Indeed, ball path and speed during the occlusion could not be extrapolated solely from the kinematic information obtained during the preceding visible phase. The only way to extrapolate ball motion correctly during the occlusion was to assume that the ball would fall under gravity and air drag when hidden from view. Such an assumption had to be derived from prior experience. PMID:26133803
EXTRAPOLATION OF THE SOLAR CORONAL MAGNETIC FIELD FROM SDO/HMI MAGNETOGRAM BY A CESE-MHD-NLFFF CODE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang Chaowei; Feng Xueshang, E-mail: cwjiang@spaceweather.ac.cn, E-mail: fengx@spaceweather.ac.cn
Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated from the photosphere in a numerical way. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently, we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on the CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to Solar Dynamics Observatory/Helioseismic and Magnetic Imager (SDO/HMI) data with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then inputted into the extrapolation code. Qualitative comparison of the results with the SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. The most important structures of the ARs are reproduced excellently, like the highly sheared field lines that suspend filaments in AR 11158 and the twisted flux rope which corresponds to a sigmoid in AR 11283. Quantitative assessment of the results shows that the force-free constraint is fulfilled very well in the strong-field regions but apparently not that well in the weak-field regions because of data noise and numerical errors in the small currents.
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
Designing a Pediatric Study for an Antimalarial Drug by Using Information from Adults
Jullien, Vincent; Samson, Adeline; Guedj, Jérémie; Kiechel, Jean-René; Zohar, Sarah; Comets, Emmanuelle
2015-01-01
The objectives of this study were to design a pharmacokinetic (PK) study by using information about adults and evaluate the robustness of the recommended design through a case study of mefloquine. PK data about adults and children were available from two different randomized studies of the treatment of malaria with the same artesunate-mefloquine combination regimen. A recommended design for pediatric studies of mefloquine was optimized on the basis of an extrapolated model built from adult data through the following approach. (i) An adult PK model was built, and parameters were estimated by using the stochastic approximation expectation-maximization algorithm. (ii) Pediatric PK parameters were then obtained by adding allometry and maturation to the adult model. (iii) A D-optimal design for children was obtained with PFIM by assuming the extrapolated model. Finally, the robustness of the recommended design was evaluated in terms of the relative bias and relative standard errors (RSE) of the parameters in a simulation study with four different models and was compared to the empirical design used for the pediatric study. Combining PK modeling, extrapolation, and design optimization led to a design for children with five sampling times. PK parameters were well estimated by this design, with small RSE. Although the extrapolated model did not predict the observed mefloquine concentrations in children very accurately, it allowed precise and unbiased estimates across various model assumptions, contrary to the empirical design. Using information from adult studies combined with allometry and maturation can help provide robust designs for pediatric studies. PMID:26711749
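The allometry-plus-maturation step (ii) is commonly written as CL_child = CL_adult · (WT/70)^0.75 · Fmat, where Fmat is a sigmoid function of post-menstrual age. A hedged sketch follows; the maturation parameters (tm50_weeks, hill) are illustrative placeholders, not the values of the mefloquine model:

```python
def pediatric_clearance(cl_adult, weight_kg, pma_weeks,
                        wt_ref=70.0, exponent=0.75,
                        tm50_weeks=47.7, hill=3.4):
    """Scale an adult clearance to a child: allometric size term times a
    sigmoid maturation term in post-menstrual age (PMA).
    tm50_weeks and hill are placeholder values for illustration."""
    size = (weight_kg / wt_ref) ** exponent
    fmat = pma_weeks**hill / (pma_weeks**hill + tm50_weeks**hill)
    return cl_adult * size * fmat
```

A fully mature 70-kg subject recovers the adult clearance; a small, young child gets a proportionally reduced value.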
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2017-11-01
A new scheme is put forward to determine the wetting temperature (Tw) by adapting the arc-length continuation algorithm, used originally by Frink and Salinger, to classical density functional theory (DFT). Its advantages can be summarized in four points: (i) the new scheme is applicable whether wetting occurs near a planar or a non-planar surface, whereas a zero-contact-angle method is applicable only to a perfectly flat solid surface, as demonstrated previously and in this work, and is essentially unfit for non-planar surfaces. (ii) The new scheme is free of an uncertainty that plagues the pre-wetting extrapolation method and originates from the unattainability of an infinitely thick film in the theoretical calculation. (iii) The new scheme can be applied just as easily to extreme cases characterized by lower temperatures and/or a stronger surface attraction force field; these cannot be handled by the pre-wetting extrapolation method, because the pre-wetting transition becomes mixed with many layering transitions and the various surface phase transitions are difficult to distinguish. (iv) The new scheme still works when the wetting transition occurs close to the bulk critical temperature, a case the pre-wetting extrapolation method cannot manage at all, because near the bulk critical temperature the pre-wetting region is extremely narrow and not enough pre-wetting data are available for the extrapolation procedure.
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; ...
2016-02-18
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.
2016-01-01
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
NASA Astrophysics Data System (ADS)
Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno
2018-04-01
The use of phased arrays is growing in the non-destructive testing industry, and the trend is towards large 2D arrays; however, practical limitations currently make it impossible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme `beyond spatial aliasing' to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) into only a few transmit/recording channels. This allows transmission and recording with all elements, in a shorter acquisition time and with fewer channels. On the data processing side, the blended data are deblended (separated) by transforming them to a different domain and applying iterative filtering and thresholding. Two filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. Wavefield extrapolation filtering proves to outperform f-k filtering: the wavefield extrapolation method can deal with groups of up to 24 receivers in a phased array of 48 × 48 elements.
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect, because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method, which can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows one to obtain from the measured detector counts both the dead-time value and the true detector counts.
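For the constant-count-rate case the abstract contrasts with, the standard non-paralyzable dead-time formulas can be stated compactly. The backward extrapolation method generalizes beyond this, so the sketch below covers only the simple analytical case:

```python
def observed_rate(true_rate, tau):
    """Non-paralyzable detector model: measured rate m = n / (1 + n*tau),
    where tau is the dead time per recorded pulse."""
    return true_rate / (1.0 + true_rate * tau)

def dead_time_corrected(measured_rate, tau):
    """Invert the model to recover the true rate: n = m / (1 - m*tau)."""
    return measured_rate / (1.0 - measured_rate * tau)
```

Applying the correction to a rate produced by the forward model recovers the original true rate, which is exactly why the constant-rate case needs no extrapolation machinery.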
NASA Astrophysics Data System (ADS)
Van Zandt, James R.
2012-05-01
Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
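The growth of track error during a coast (the latency between update and use) can be sketched for a simple constant-velocity Kalman model. This is a generic illustration of covariance extrapolation, not the normalized parameterization of the paper:

```python
import numpy as np

def extrapolated_covariance(P, dt, q=0.0):
    """Propagate a [position, velocity] covariance through a coast of length dt
    under a constant-velocity model; q is the white-acceleration spectral density."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                      # state transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                        # process noise
    return F @ P @ F.T + Q
```

Even with no process noise, position uncertainty grows quadratically with coast time through the velocity term, which is why latency degrades extrapolated tracks.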
Application of the backward extrapolation method to pulsed neutron sources
Talamo, Alberto; Gohar, Yousry
2017-09-23
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is important because video is used for applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is often the preferred option for error hiding. In this paper, a block-matching error concealment algorithm is compared with a frequency-selective extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors as input. The metrics used for objective quality measurement were PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index). The original video frames along with the error video frames are compared under both error concealment algorithms. According to the simulation results, frequency-selective extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the block-matching algorithm.
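The PSNR figure of merit used above has a standard definition; a minimal sketch (an 8-bit peak value of 255 is assumed):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames of equal shape."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```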
EXTRAPOLATION METHOD FOR MAXIMAL AND 24-H AVERAGE LTE TDD EXPOSURE ESTIMATION.
Franci, D; Grillo, E; Pavoncello, S; Coltellacci, S; Buccella, C; Aureli, T
2018-01-01
The Long-Term Evolution (LTE) system represents the evolution of the Universal Mobile Telecommunication System technology. This technology introduces two duplex modes: Frequency Division Duplex and Time Division Duplex (TDD). Although LTE TDD has seen limited expansion in European countries since the debut of LTE, renewed commercial interest in the technology has recently emerged. The development of extrapolation procedures optimised for TDD systems therefore becomes crucial, especially for the regulatory authorities. This article presents an extrapolation method aimed at assessing exposure to LTE TDD sources, based on the detection of the Cell-Specific Reference Signal power level. The method introduces a βTDD parameter intended to quantify the fraction of the LTE TDD frame duration reserved for downlink transmission. The method has been validated by experimental measurements performed on signals generated by both a vector signal generator and a test Base Transceiver Station installed at the Linkem S.p.A facility in Rome.
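The role of a βTDD-style duty-cycle parameter can be illustrated with a simplified extrapolation from a measured Reference Signal level to maximum and time-averaged exposure. The formulation below is an assumption for illustration only (fields added in power, no power boosting), not the article's exact procedure:

```python
import math

def lte_tdd_exposure(e_rs_vpm, n_subcarriers, beta_tdd):
    """Extrapolate a measured Cell-Specific Reference Signal field (V/m per
    resource element) to a theoretical-maximum field at full traffic, then
    time-average it over the TDD frame. Simplified illustrative model."""
    e_max = e_rs_vpm * math.sqrt(n_subcarriers)  # all subcarriers active at RS power
    e_avg = e_max * math.sqrt(beta_tdd)          # only beta_tdd of the frame is downlink
    return e_max, e_avg
```

The key point the abstract makes survives the simplification: for a TDD carrier, the time-averaged exposure scales with the downlink fraction of the frame, which FDD-oriented procedures implicitly take as 1.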
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overton, J.H.; Jarabek, A.M.
1989-01-01
The U.S. EPA advocates the assessment of health-effects data and calculation of inhaled reference doses as benchmark values for gauging systemic toxicity of inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no-observed-adverse-effect-level (NOAEL) exposure concentrations in animals to human-equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value of the time-integrated human arterial-blood concentration must be limited to no more than that of experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more conservative exposure concentration values than those that would be obtained using a PB-PK model in which scaling was assumed.
Tran, Van; Little, Mark P
2017-11-01
Murine experiments were conducted at the JANUS reactor in Argonne National Laboratory from 1970 to 1992 to study the effect of acute and protracted radiation dose from gamma rays and fission neutron whole body exposure. The present study reports the reanalysis of the JANUS data on 36,718 mice, of which 16,973 mice were irradiated with neutrons, 13,638 were irradiated with gamma rays, and 6107 were controls. Mice were mostly Mus musculus, but one experiment used Peromyscus leucopus. For both types of radiation exposure, a Cox proportional hazards model was used, using age as timescale, and stratifying on sex and experiment. The optimal model was one with linear and quadratic terms in cumulative lagged dose, with adjustments to both linear and quadratic dose terms for low-dose rate irradiation (<5 mGy/h) and with adjustments to the dose for age at exposure and sex. After gamma ray exposure there is significant non-linearity (generally with upward curvature) for all tumours, lymphoreticular, respiratory, connective tissue and gastrointestinal tumours, also for all non-tumour, other non-tumour, non-malignant pulmonary and non-malignant renal diseases (p < 0.001). Associated with this the low-dose extrapolation factor, measuring the overestimation in low-dose risk resulting from linear extrapolation is significantly elevated for lymphoreticular tumours 1.16 (95% CI 1.06, 1.31), elevated also for a number of non-malignant endpoints, specifically all non-tumour diseases, 1.63 (95% CI 1.43, 2.00), non-malignant pulmonary disease, 1.70 (95% CI 1.17, 2.76) and other non-tumour diseases, 1.47 (95% CI 1.29, 1.82). However, for a rather larger group of malignant endpoints the low-dose extrapolation factor is significantly less than 1 (implying downward curvature), with central estimates generally ranging from 0.2 to 0.8, in particular for tumours of the respiratory system, vasculature, ovary, kidney/urinary bladder and testis. 
For neutron exposure most endpoints, malignant and non-malignant, show downward curvature in the dose response, and for most endpoints this is statistically significant (p < 0.05). Associated with this, the low-dose extrapolation factor associated with neutron exposure is generally statistically significantly less than 1 for most malignant and non-malignant endpoints, with central estimates mostly in the range 0.1-0.9. In contrast to the situation at higher dose rates, there are statistically non-significant decreases of risk per unit dose at gamma dose rates of less than or equal to 5 mGy/h for most malignant endpoints, and generally non-significant increases in risk per unit dose at gamma dose rates ≤5 mGy/h for most non-malignant endpoints. Associated with this, the dose-rate extrapolation factor, the ratio of high dose-rate to low dose-rate (≤5 mGy/h) gamma dose response slopes, for many tumour sites is in the range 1.2-2.3, albeit not statistically significantly elevated from 1, while for most non-malignant endpoints the gamma dose-rate extrapolation factor is less than 1, with most estimates in the range 0.2-0.8. After neutron exposure there are non-significant indications of lower risk per unit dose at dose rates ≤5 mGy/h compared to higher dose rates for most malignant endpoints, and for all tumours (p = 0.001), and respiratory tumours (p = 0.007) this reduction is conventionally statistically significant; for most non-malignant outcomes risks per unit dose non-significantly increase at lower dose rates. Associated with this, the neutron dose-rate extrapolation factor is less than 1 for most malignant and non-malignant endpoints, in many cases statistically significantly so, with central estimates mostly in the range 0.0-0.2.
NASA Astrophysics Data System (ADS)
Panchenko, Yu. N.; De Maré, G. R.; Abramenkov, A. V.; Baird, M. S.; Tverezovsky, V. V.; Nizovtsev, A. V.; Bolesov, I. G.
2004-09-01
The effects of substituting Si or Ge for C as the X atom in X(CH3)3 moieties attached to the formal double bond of 3,3-dimethylcyclopropene are examined. Regular trends, as the atomic mass of X increases, in the observed vibrational frequencies involving the X-containing moieties are extrapolated to X=Sn. The results of this extrapolation made it possible to assign the known experimental vibrational frequencies of 3,3-dimethyl-1-(trimethylstannyl)cyclopropene and 3,3-dimethyl-1,2-bis(trimethylstannyl)cyclopropene.
Fourth order scheme for wavelet based solution of Black-Scholes equation
NASA Astrophysics Data System (ADS)
Finěk, Václav
2017-12-01
The present paper is devoted to the numerical solution of the Black-Scholes equation for pricing European options. We apply the Crank-Nicolson scheme with Richardson extrapolation for time discretization and Hermite cubic spline wavelets with four vanishing moments for space discretization. The scheme is fourth-order accurate in both time and space. Computational results indicate that the Crank-Nicolson scheme with Richardson extrapolation significantly decreases the amount of computational work. We also show numerically that the optimal convergence rate for the scheme is obtained without a startup procedure, despite the data irregularities in the model.
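Richardson extrapolation combines results at two step sizes from a method of known order to cancel the leading error term. A minimal sketch, demonstrated here on a second-order central difference rather than on the Crank-Nicolson time-stepper itself:

```python
import math

def richardson(approx, h, order=2):
    """Combine A(h) and A(h/2) for a method of the given order to cancel the
    leading error term: A ~ (2**order * A(h/2) - A(h)) / (2**order - 1)."""
    return (2**order * approx(h / 2) - approx(h)) / (2**order - 1)

# demo: second-order central difference of exp(x) at x = 0 (true derivative = 1)
central = lambda h: (math.exp(h) - math.exp(-h)) / (2 * h)
```

For a second-order base method this lifts the accuracy to fourth order, which is precisely how the paper obtains fourth-order accuracy in time from Crank-Nicolson.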
The solubility of the noble gases He, Ne, Ar, Kr, and Xe in water up to the critical point
Potter, R.W.; Clynne, M.A.
1978-01-01
The solubility of the noble gases Ar, He, Ne, Kr, and Xe in pure water was measured from 298 to 561 K. These data in turn were extrapolated to the critical point of water, thus providing, when combined with the existing literature data, a complete set of Henry's law constants from 274 to 647 K. Equations describing the behavior of the Henry's law constants over this temperature range are also given. The data do not confirm extrapolations of empirical correlations based on low-temperature solubility data.
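Henry's law constants are often correlated with a form that is linear in 1/T (the van 't Hoff form). The sketch below is a generic illustration of such a fit-and-extrapolate step, not the equations of this study, whose point is precisely that simple low-temperature correlations fail near the critical point:

```python
import numpy as np

def fit_vant_hoff(T, lnK):
    """Least-squares fit of ln K_H = a + b / T over temperatures T; returns (a, b)."""
    b, a = np.polyfit(1.0 / np.asarray(T, float), np.asarray(lnK, float), 1)
    return a, b

def lnk_at(a, b, T):
    """Evaluate (or extrapolate) the fitted form at temperature T."""
    return a + b / T
```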
Computer program for pulsed thermocouples with corrections for radiation effects
NASA Technical Reports Server (NTRS)
Will, H. A.
1981-01-01
A pulsed thermocouple was used for measuring gas temperatures above the melting point of common thermocouples. This was done by allowing the thermocouple to heat until it approached its melting point and then turning on the protective cooling gas. This method required a computer to extrapolate the thermocouple data to the higher gas temperatures. A method that includes the effect of radiation in the extrapolation is described. Computations of gas temperature are provided, along with an estimate of the final thermocouple wire temperature. Results from tests on high-temperature combustor research rigs are presented.
MEGA16 - Computer program for analysis and extrapolation of stress-rupture data
NASA Technical Reports Server (NTRS)
Ensign, C. R.
1981-01-01
MEGA16, the computerized form of the minimum commitment method for interpolating and extrapolating stress versus time-to-failure data, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN IV using IBM plotting subroutines, and it runs on an IBM 370 time-sharing system.
Regionalisation of parameters of a large-scale water quality model in Lithuania using PAIC-SWAT
NASA Astrophysics Data System (ADS)
Zarrineh, Nina; van Griensven, Ann; Sennikovs, Juris; Bekere, Liga; Plunge, Svajunas
2015-04-01
To comply with the EU Water Framework Directive, all water bodies need to achieve good ecological status. To reach these goals, the Environmental Protection Agency (AAA) has to elaborate river basin district management plans and programmes of measures for all catchments in Lithuania. For this purpose, a Soil and Water Assessment Tool (SWAT) model was set up for all Lithuanian catchments using the most recent version, SWAT2012 rev627, implemented and embedded in a Python workflow by the Center of Processes Analysis and Research (PAIC). The model was calibrated and evaluated using all monitoring data on river discharge and nitrogen and phosphorus concentrations and loads. A regionalisation strategy was set up by identifying 13 hydrological regions according to runoff formation and hydrological conditions. In each region, a representative catchment was selected and calibrated using a combination of manual and automated calibration techniques. After final parameterization and fulfilment of the calibration and validation criteria, the same parameter sets were extrapolated to the other catchments within the same hydrological region. A multi-variable calibration/validation strategy was implemented for the following variables: river flow and in-stream NO3, total nitrogen, PO4 and total phosphorus concentrations. The criteria used for calibration, validation and extrapolation are the Nash-Sutcliffe Efficiency (NSE) for flow, R-squared for water quality variables, and PBIAS (percentage bias) for all variables. For the hydrological calibration, NSE values greater than 0.5 should be achieved, while for validation and extrapolation the thresholds are 0.4 and 0.3, respectively. PBIAS errors have to be less than 20% for calibration, and less than 25% and 30% for validation and extrapolation, respectively. For water quality calibration, R-squared should reach 0.5, with thresholds of 0.4 and 0.3 for validation and extrapolation, respectively, for the nitrogen variables.
In addition, the PBIAS error should be less than 40% for calibration and less than 70% for validation and extrapolation for all of the water quality variables mentioned. For the flow calibration, daily discharge data for 62 stations were provided for the period 1997-2012. Water quality data were provided for more than 500 stations, and 135 data-rich stations were pre-processed into a database containing all observations from 1997-2012. Finally, by implementing this regionalisation strategy, the model could satisfactorily predict the selected variables: more than 90% of stations fulfilled the criteria for the hydrological part, and more than 95% of stations fulfilled the criteria for the water quality part. Keywords: Water Quality Modelling, Regionalisation, Parameterization, Nitrogen and Phosphorus Prediction, Calibration, PAIC-SWAT.
Scotcher, Daniel; Billington, Sarah; Brown, Jay; Jones, Christopher R.; Brown, Colin D. A.; Rostami-Hodjegan, Amin
2017-01-01
In vitro-in vivo extrapolation of drug metabolism data obtained in enriched preparations of subcellular fractions rely on robust estimates of physiologically relevant scaling factors for the prediction of clearance in vivo. The purpose of the current study was to measure the microsomal and cytosolic protein per gram of kidney (MPPGK and CPPGK) in dog and human kidney cortex using appropriate protein recovery marker and evaluate functional activity of human cortex microsomes. Cytochrome P450 (CYP) content and glucose-6-phosphatase (G6Pase) activity were used as microsomal protein markers, whereas glutathione-S-transferase activity was a cytosolic marker. Functional activity of human microsomal samples was assessed by measuring mycophenolic acid glucuronidation. MPPGK was 33.9 and 44.0 mg/g in dog kidney cortex, and 41.1 and 63.6 mg/g in dog liver (n = 17), using P450 content and G6Pase activity, respectively. No trends were noted between kidney, liver, and intestinal scalars from the same animals. Species differences were evident, as human MPPGK and CPPGK were 26.2 and 53.3 mg/g in kidney cortex (n = 38), respectively. MPPGK was 2-fold greater than the commonly used in vitro-in vivo extrapolation scalar; this difference was attributed mainly to tissue source (mixed kidney regions versus cortex). Robust human MPPGK and CPPGK scalars were measured for the first time. The work emphasized the importance of regional differences (cortex versus whole kidney–specific MPPGK, tissue weight, and blood flow) and a need to account for these to improve assessment of renal metabolic clearance and its extrapolation to in vivo. PMID:28270564
Thomas, Reuben; Thomas, Russell S.; Auerbach, Scott S.; Portier, Christopher J.
2013-01-01
Background: Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. Objectives: To develop a molecular pathway-based prediction model of long-term hepatocarcinogenicity using 90-day gene expression data and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Methods: Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Results: Using 5-fold cross validation, the developed prediction model had reasonable predictive performance, with the area under the receiver operating characteristic curve (AUC) equal to 0.66. The prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both species (AUC of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Conclusions: Pathway-based prediction models estimated from subchronic data hold promise for predicting long-term carcinogenicity and for extrapolating results across multiple species. PMID:23737943
Wind Tunnel Measurements of Shuttle Orbiter Global Heating with Comparisons to Flight
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Merski, N. Ronald; Blanchard, Robert C.
2002-01-01
An aerothermodynamic database of global heating images of the Shuttle Orbiter was acquired in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel. These results were obtained for comparison to the global infrared images of the Orbiter in flight from the infrared sensing aeroheating flight experiment (ISAFE). The most recent ISAFE results, from STS-103, consisted of port-side images, at hypersonic conditions, of the surface features that result from the strake vortex scrubbing along the side of the vehicle. The wind tunnel results were obtained with the phosphor thermography system, which also provides global information and thus is ideally suited for comparison to the global flight results. The aerothermodynamic database includes both windward and port-side heating images of the Orbiter for a range of angles of attack (20 to 40 deg), freestream unit Reynolds numbers (1 x 10^6/ft to 8 x 10^6/ft), body flap deflections (0, 5, and 10 deg), and speed brake deflections (0 and 45 deg), as well as cases with boundary layer trips for forced transition to turbulent heating. Sample global wind tunnel heat transfer images were extrapolated to flight conditions for comparison to Orbiter flight data. A windward laminar case at an angle of attack of 40 deg was extrapolated to Mach 11.6 flight conditions for comparison to STS-2 flight thermocouple results. A port-side wind tunnel image at an angle of attack of 25 deg was extrapolated to Mach 5 flight conditions for comparison to STS-103 global surface temperatures. The comparisons showed excellent qualitative agreement; however, the extrapolated wind tunnel results over-predicted the flight surface temperatures by on the order of 5% on the windward surface and slightly more on the port side.
Specific heat in KFe2As2 in zero and applied magnetic field
NASA Astrophysics Data System (ADS)
Kim, J. S.; Kim, E. G.; Stewart, G. R.; Chen, X. H.; Wang, X. F.
2011-05-01
The specific heat down to 0.08 K of the iron pnictide superconductor KFe2As2 was measured on a single-crystal sample with a residual resistivity ratio of ~650, with a Tc(onset) of 3.7 K determined from the specific heat. The zero-field normal-state specific heat divided by temperature, C/T, was extrapolated from above Tc to T = 0 by insisting on agreement between the extrapolated normal-state entropy at Tc, Sn,extrap(Tc), and the measured superconducting-state entropy at Tc, Ss,meas(Tc), since for a second-order phase transition the two entropies must be equal. This extrapolation would indicate that this rather clean sample of KFe2As2 exhibits non-Fermi-liquid behavior; i.e., C/T increases at low temperatures, in agreement with the reported non-Fermi-liquid behavior in the resistivity. However, the specific heat as a function of magnetic field shows that the shoulder feature around 0.7 K, which is commonly seen in KFe2As2 samples, is not evidence for a second superconducting gap, as has been previously proposed, but instead is due to an unknown magnetic impurity phase, which can affect the entropy balance and the extrapolation of the normal-state specific heat. This peak (somewhat larger in magnitude), with similar field dependence, is also found in a less pure sample of KFe2As2, with a residual resistivity ratio of only 90 and Tc(onset) = 3.1 K. These data, combined with the measured normal-state specific heat in field to suppress superconductivity, allow the conclusion that an increase in the normal-state specific heat as T → 0 is in fact not seen in KFe2As2; i.e., Fermi-liquid behavior is observed.
Grams, Paul E.; Topping, David J.; Schmidt, John C.; Hazel, Joseph E.; Kaplinski, Matt
2013-01-01
Measurements of morphologic change are often used to infer sediment mass balance. Such measurements may, however, result in gross errors when morphologic changes over short reaches are extrapolated to predict changes in sediment mass balance for long river segments. This issue is investigated by examination of morphologic change and sediment influx and efflux for a 100 km segment of the Colorado River in Grand Canyon, Arizona. For each of four monitoring intervals within a 7 year study period, the direction of sand-storage response within short morphologic monitoring reaches was consistent with the flux-based sand mass balance. Both budgeting methods indicate that sand storage was stable or increased during the 7 year period. Extrapolation of the morphologic measurements outside the monitoring reaches does not, however, provide a reasonable estimate of the magnitude of sand-storage change for the 100 km study area. Extrapolation results in large errors, because there is large local variation in site behavior driven by interactions between the flow and local bed topography. During the same flow regime and reach-average sediment supply, some locations accumulate sand while others evacuate sand. The interaction of local hydraulics with local channel geometry exerts more control on local morphodynamic response than sand supply over an encompassing river segment. Changes in the upstream supply of sand modify bed responses but typically do not completely offset the effect of local hydraulics. Thus, accurate sediment budgets for long river segments inferred from reach-scale morphologic measurements must incorporate the effect of local hydraulics in a sampling design or avoid extrapolation altogether.
Hand interception of occluded motion in humans: a test of model-based vs. on-line control.
La Scaleia, Barbara; Zago, Myrka; Lacquaniti, Francesco
2015-09-01
Two control schemes have been hypothesized for the manual interception of fast visual targets. In the model-free on-line control, extrapolation of target motion is based on continuous visual information, without resorting to physical models. In the model-based control, instead, a prior model of target motion predicts the future spatiotemporal trajectory. To distinguish between the two hypotheses in the case of projectile motion, we asked participants to hit a ball that rolled down an incline at 0.2 g and then fell in air at 1 g along a parabola. By varying starting position, ball velocity and trajectory differed between trials. Motion on the incline was always visible, whereas parabolic motion was either visible or occluded. We found that participants were equally successful at hitting the falling ball in both visible and occluded conditions. Moreover, in different trials the intersection points were distributed along the parabolic trajectories of the ball, indicating that subjects were able to extrapolate an extended segment of the target trajectory. Remarkably, this trend was observed even at the very first repetition of movements. These results are consistent with the hypothesis of model-based control, but not with on-line control. Indeed, ball path and speed during the occlusion could not be extrapolated solely from the kinematic information obtained during the preceding visible phase. The only way to extrapolate ball motion correctly during the occlusion was to assume that the ball would fall under gravity and air drag when hidden from view. Such an assumption had to be derived from prior experience. Copyright © 2015 the American Physiological Society.
De Vore, Karl W; Fatahi, Nadia M; Sass, John E
2016-08-01
Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation, y = B1·x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit by a radical equation of the form y = B1·x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics but to changes in the matrix during storage. Predicting the shelf life of frozen products through extrapolation of early real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than that of a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
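Because the radical model is linear in sqrt(x), it can be fit by ordinary least squares on transformed data. A minimal sketch with hypothetical recovery data (the names B1 and B0 follow the abstract; the numbers are illustrative, not the study's):

```python
import math

def fit_radical(xs, ys):
    """Least-squares fit of y = B1*sqrt(x) + B0, done as a straight-line
    fit in the transformed variable t = sqrt(x)."""
    ts = [math.sqrt(x) for x in xs]
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    b1 = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
         sum((t - tbar) ** 2 for t in ts)
    b0 = ybar - b1 * tbar
    return b1, b0

# Hypothetical percent recovery over months of frozen storage, following a
# decline that slows over time (exactly y = 100 - 2*sqrt(x) here):
months = [1, 4, 9, 16]
recovery = [98.0, 96.0, 94.0, 92.0]
b1, b0 = fit_radical(months, recovery)
print(round(b1, 3), round(b0, 3))  # -2.0 100.0
```

Extrapolating this curve forward flattens out, whereas a straight line fit to the same early data keeps falling, which is the conservatism the abstract notes for linear extrapolation.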
Casting the Coronal Magnetic Field Reconstruction Tools in 3D Using the MHD Bifrost Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleishman, Gregory D.; Loukitcheva, Maria; Anfinogentov, Sergey
Quantifying the coronal magnetic field remains a central problem in solar physics. Nowadays, the coronal magnetic field is often modeled using nonlinear force-free field (NLFFF) reconstructions, whose accuracy has not yet been comprehensively assessed. Here we perform a detailed casting of the NLFFF reconstruction tools, such as π-disambiguation, photospheric field preprocessing, and volume reconstruction methods, using a 3D snapshot of the publicly available full-fledged radiative MHD model. Specifically, from the MHD model, we know the magnetic field vector in the entire 3D domain, which enables us to perform a “voxel-by-voxel” comparison of the restored and the true magnetic fields in the 3D model volume. Our tests show that the available π-disambiguation methods often fail in the quiet-Sun areas dominated by small-scale magnetic elements, while they work well in the active region (AR) photosphere and (even better) chromosphere. The preprocessing of the photospheric magnetic field, although it does produce a more force-free boundary condition, also results in some effective “elevation” of the magnetic field components. This “elevation” height is different for the longitudinal and transverse components, which results in a systematic error in absolute heights in the reconstructed magnetic data cube. The extrapolations performed starting from the actual AR photospheric magnetogram are free from this systematic error, while other metrics are comparable with those for extrapolations from the preprocessed magnetograms. This finding favors the use of extrapolations from the original photospheric magnetogram without preprocessing. Our tests further suggest that extrapolations from a force-free chromospheric boundary produce measurably better results than those from a photospheric boundary.
Estimation of the global burden of mesothelioma deaths from incomplete national mortality data.
Odgerel, Chimed-Ochir; Takahashi, Ken; Sorahan, Tom; Driscoll, Tim; Fitzmaurice, Christina; Yoko-O, Makoto; Sawanyawisuth, Kittisak; Furuya, Sugio; Tanaka, Fumihiro; Horie, Seichi; Zandwijk, Nico van; Takala, Jukka
2017-12-01
Mesothelioma is increasingly recognised as a global health issue and the assessment of its global burden is warranted. To descriptively analyse national mortality data and to use reported and estimated data to calculate the global burden of mesothelioma deaths. For the study period of 1994 to 2014, we grouped 230 countries into 59 countries with quality mesothelioma mortality data suitable to be used for reference rates, 45 countries with poor quality data and 126 countries with no data, based on the availability of data in the WHO Mortality Database. To estimate global deaths, we extrapolated the gender-specific and age-specific mortality rates of the countries with quality data to all other countries. The global numbers and rates of mesothelioma deaths have increased over time. The 59 countries with quality data recorded 15 011 mesothelioma deaths per year over the 3 most recent years with available data (equivalent to 9.9 deaths per million per year). From these reference data, we extrapolated the global mesothelioma deaths to be 38 400 per year, based on extrapolations for asbestos use. Although the validity of our extrapolation method depends on the adequate identification of quality mesothelioma data and appropriate adjustment for other variables, our estimates can be updated, refined and verified because they are based on commonly accessible data and are derived using a straightforward algorithm. Our estimates are within the range of previously reported values but higher than the most recently reported values. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Health effects of gasoline exposure. I. Exposure assessment for U.S. distribution workers.
Smith, T J; Hammond, S K; Wong, O
1993-01-01
Personal exposures were estimated for a large cohort of workers in the U.S. domestic system for distributing gasoline by trucks and marine vessels. This assessment included development of a rationale and methodology for extrapolating vapor exposures prior to the availability of measurement data, analysis of existing measurement data to estimate task and job exposures during 1975-1985, and extrapolation of truck and marine job exposures before 1975. A worker's vapor exposure was extrapolated from three sets of factors: the tasks in his or her job associated with vapor sources, the characteristics of vapor sources (equipment and other facilities) at the work site, and the composition of petroleum products producing vapors. Historical data were collected on the tasks in job definitions, on work-site facilities, and on product composition. These data were used in a model to estimate the overall time-weighted-average vapor exposure for jobs based on estimates of task exposures and their duration. Task exposures were highest during tank filling in trucks and marine vessels. Measured average annual, full-shift exposures during 1975-1985 ranged from 9 to 14 ppm of total hydrocarbon vapor for truck drivers and 2 to 35 ppm for marine workers on inland waterways. Extrapolated past average exposures in truck operations were highest for truck drivers before 1965 (range 140-220 ppm). Other jobs in truck operations resulted in much lower exposures. Because there were few changes in marine operations before 1979, exposures were assumed to be the same as those measured during 1975-1985. Well-defined exposure gradients were found across jobs within time periods, which were suitable for epidemiologic analyses. PMID:8020436
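The job-exposure model described above reduces to a time-weighted average over tasks. A minimal sketch with hypothetical task data (the ppm values and durations are illustrative, not the study's estimates):

```python
def twa_exposure(tasks):
    """Time-weighted-average vapor exposure for a job, from per-task
    exposure levels (ppm) and durations (hours) over a shift."""
    total_time = sum(hours for _, hours in tasks)
    return sum(ppm * hours for ppm, hours in tasks) / total_time

# Hypothetical task profile for a truck driver's 8-hour shift:
# (exposure in ppm, duration in hours)
shift = [(200.0, 1.0),  # tank filling (the highest-exposure task)
         (10.0, 5.0),   # driving
         (5.0, 2.0)]    # paperwork and other duties
print(round(twa_exposure(shift), 1))  # 32.5 ppm
```

This structure is why historical extrapolation was feasible: changing the task list, the source characteristics, or the product composition updates the estimate without new measurements.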
Mangold, Stefanie; Gatidis, Sergios; Luz, Oliver; König, Benjamin; Schabel, Christoph; Bongers, Malte N; Flohr, Thomas G; Claussen, Claus D; Thomas, Christoph
2014-12-01
The objective of this study was to retrospectively determine the potential of virtual monoenergetic (ME) reconstructions for a reduction of metal artifacts using a new-generation single-source computed tomographic (CT) scanner. The ethics committee of our institution approved this retrospective study with a waiver of the need for informed consent. A total of 50 consecutive patients (29 men and 21 women; mean [SD] age, 51.3 [16.7] years) with metal implants after osteosynthetic fracture treatment who had been examined using a single-source CT scanner (SOMATOM Definition Edge; Siemens Healthcare, Forchheim, Germany; consecutive dual-energy mode with 140 kV/80 kV) were selected. Using commercially available postprocessing software (syngo Dual Energy; Siemens AG), virtual ME data sets with extrapolated energy of 130 keV were generated (medium smooth convolution kernel D30) and compared with standard polyenergetic images reconstructed with a B30 (medium smooth) and a B70 (sharp) kernel. For quantification of the beam hardening artifacts, CT values were measured on circular lines surrounding bone and the osteosynthetic device, and frequency analyses of these values were performed using discrete Fourier transform. A high proportion of low frequencies to the spectrum indicates a high level of metal artifacts. The measurements in all data sets were compared using the Wilcoxon signed rank test. The virtual ME images with extrapolated energy of 130 keV showed significantly lower contribution of low frequencies after the Fourier transform compared with any polyenergetic data set reconstructed with D30, B70, and B30 kernels (P < 0.001). Sequential single-source dual-energy CT allows an efficient reduction of metal artifacts using high-energy ME extrapolation after osteosynthetic fracture treatment.
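One way to implement the frequency analysis described above is to take a discrete Fourier transform of the CT values sampled on the circular line and measure the share of spectral power in the lowest frequencies. This is a minimal sketch, not the authors' implementation, run on synthetic data:

```python
import cmath
import math

def low_freq_fraction(samples, k_low):
    """Share of non-DC spectral power in the k_low lowest frequencies of
    CT values sampled on a circle around the implant; a high share of
    low frequencies indicates strong beam-hardening artifacts."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]
    power = []
    for k in range(1, n // 2 + 1):
        coeff = sum(c * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, c in enumerate(centered))
        power.append(abs(coeff) ** 2)
    return sum(power[:k_low]) / sum(power)

# Synthetic circular profile: one slow variation (streak-like) plus a small
# fast ripple; the slow component should dominate the spectrum.
profile = [50 * math.sin(2 * math.pi * i / 64)
           + 5 * math.sin(2 * math.pi * 10 * i / 64) for i in range(64)]
print(round(low_freq_fraction(profile, 3), 2))  # ~0.99
```

Under this metric, the monoenergetic reconstruction's lower low-frequency contribution corresponds to a smaller fraction than the polyenergetic images.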
Acute toxicity value extrapolation with fish and aquatic invertebrates
Buckler, Denny R.; Mayer, Foster L.; Ellersieck, Mark R.; Asfaw, Amha
2005-01-01
Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled “Ecological Risk Analysis” (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive to contaminants than fish are.
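The single-species extrapolation factors quoted above can be applied directly: the acute toxicity value for the tested species is multiplied by the factor to estimate a value protective of more sensitive community members. A minimal sketch (the factors are those reported; the input LC50 is hypothetical):

```python
# Single-species extrapolation factors from the study.
FACTORS = {"rainbow trout": 0.412, "bluegill": 0.331, "scud": 0.041}

def protective_value(acute_value, species):
    """Estimate a toxicity value protective of sensitive aquatic species
    from a single tested species' acute value (e.g., an LC50)."""
    return acute_value * FACTORS[species]

# Hypothetical LC50 of 12.0 ug/L measured in rainbow trout:
print(round(protective_value(12.0, "rainbow trout"), 3))  # 4.944 ug/L
```

The much smaller scud factor (0.041) reflects how far a sensitive invertebrate's threshold can sit below that of an average fish.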
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
Vemer, Pepijn; Rutten-van Mölken, Maureen P M H; Kaper, Janneke; Hoogenveen, Rudolf T; van Schayck, C P; Feenstra, Talitha L
2010-06-01
Smoking cessation can be encouraged by reimbursing the costs of smoking cessation support (SCS). The short-term efficiency of reimbursement has been evaluated previously; however, a thorough estimate of the long-term cost-utility is lacking. To evaluate the long-term effects of reimbursement of SCS, results from a randomized controlled trial were extrapolated to long-term outcomes in terms of health care costs and (quality-adjusted) life years (QALYs) gained, using the Chronic Disease Model. Our first scenario was no reimbursement. In a second scenario, the short-term cessation rates from the trial were extrapolated directly. Sensitivity analyses were based on the trial's confidence intervals. In the third scenario, the additional use of SCS found in the trial was combined with cessation rates from international meta-analyses. Intervention costs per QALY gained, compared to the reference scenario, were approximately €1200 when extrapolating the trial effects directly and €4200 when combining the trial's use of SCS with the cessation rates from the literature. Taking all health care effects into account, even costs in life years gained, resulted in an estimated incremental cost-utility of €4500 and €7400, respectively. In both scenarios, costs per QALY remained below €16 000 in sensitivity analyses using a lifetime horizon. Extrapolating the higher use of SCS due to reimbursement led to more successful quitters and a gain in life years and QALYs. Accounting for overheads, administration costs, and the costs of SCS, these health gains could be obtained at relatively low cost, even when including costs in life years gained. Hence, reimbursement of SCS appears to be cost-effective from a health care perspective.
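The cost-per-QALY figures above are incremental cost-effectiveness ratios. A minimal sketch of the arithmetic with hypothetical cohort totals (not the study's data):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the intervention scenario versus the reference scenario."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical cohort totals (costs in euros, effects in QALYs):
# reimbursement scenario vs. no-reimbursement reference.
print(round(icer(1_150_000.0, 1_100.0, 1_000_000.0, 1_000.0)))  # 1500 euros/QALY
```

A ratio well below a willingness-to-pay threshold (the abstract's €16 000 bound) is what supports the cost-effectiveness conclusion.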
Dosing antibiotics in neonates: review of the pharmacokinetic data.
Rivera-Chaparro, Nazario D; Cohen-Wolkowiez, Michael; Greenberg, Rachel G
2017-09-01
Antibiotics are often used in neonates despite the absence of relevant dosing information in drug labels. For neonatal dosing, clinicians must extrapolate data from studies for adults and older children, who have strikingly different physiologies. As a result, dosing extrapolation can lead to increased toxicity or efficacy failures in neonates. Driven by these differences and recent legislation mandating the study of drugs in children and neonates, an increasing number of pharmacokinetic studies of antibiotics are being performed in neonates. These studies have led to new dosing recommendations with particular consideration for neonate body size and maturation. Herein, we highlight the available pharmacokinetic data for commonly used systemic antibiotics in neonates.
Predicting the future trend of popularity by network diffusion.
Zeng, An; Yeung, Chi Ho
2016-06-01
Conventional approaches to predicting the future popularity of products are mainly based on extrapolation of their current popularity, which overlooks the hidden microscopic information under the macroscopic trend. Here, we study diffusion processes on consumer-product and citation networks to exploit the hidden microscopic information and connect consumers to their potential purchases and publications to their potential citers, obtaining a prediction for future item popularity. Using data from the largest online retailers, including Netflix and Amazon, as well as the American Physical Society citation networks, we found that our method outperforms accurate short-term extrapolation and identifies potentially popular items long before they become prominent.
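A minimal sketch of one common form of such diffusion, probabilistic spreading (mass diffusion) on a user-item bipartite network; this generic process is an assumption for illustration, and the paper's exact dynamics may differ. Toy data only:

```python
def diffusion_scores(user_items, target_user):
    """One round of mass diffusion on a user-item bipartite network:
    unit resource starts on the target user's items, spreads evenly to
    the users who collected them, then back to those users' other items.
    Items receiving the most resource are predicted future picks."""
    item_users = {}
    for u, items in user_items.items():
        for it in items:
            item_users.setdefault(it, set()).add(u)
    # Items -> users (split by item degree) ...
    user_res = {}
    for it in user_items[target_user]:
        share = 1.0 / len(item_users[it])
        for u in item_users[it]:
            user_res[u] = user_res.get(u, 0.0) + share
    # ... then users -> their other items (split by user degree).
    scores = {}
    for u, res in user_res.items():
        share = res / len(user_items[u])
        for it in user_items[u]:
            if it not in user_items[target_user]:
                scores[it] = scores.get(it, 0.0) + share
    return scores

# Toy data: user A shares item "x" with B, who also collected "y".
data = {"A": {"x"}, "B": {"x", "y"}, "C": {"z"}}
print(diffusion_scores(data, "A"))  # {'y': 0.25}
```

The point of the abstract is that such network scores carry microscopic signal (who is connected to what) that a pure time-series extrapolation of popularity counts cannot see.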
Measurements of the Absorption by Auditorium Seating — A Model Study
NASA Astrophysics Data System (ADS)
BARRON, M.; COLEMAN, S.
2001-01-01
One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect on the absorption of row-row spacing as well as absorption by small numbers of seating rows was also investigated with model seats.
Heats of NFn (n = 1-3) and NFn+ (n = 1-3)
NASA Technical Reports Server (NTRS)
Ricca, Alessandra; Arnold, James (Technical Monitor)
1998-01-01
Accurate heats of formation are computed for NFn and NFn+, for n = 1-3. The geometries and vibrational frequencies are determined at the B3LYP level of theory. The energetics are determined at the CCSD(T) level of theory. Basis set limit values are obtained by extrapolation. In those cases where the CCSD(T) calculations become prohibitively large, the basis set extrapolation is performed at the MP2 level. The temperature dependence of the heat of formation, the heat capacity, and the entropy are computed for the temperature range 300 to 4000 K and fit to a polynomial.
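Basis set limit extrapolation is commonly done with a two-point inverse-cube formula, E(n) = E_CBS + A/n^3, where n is the cardinal number of the correlation-consistent basis. This minimal sketch is under that assumption; the abstract does not state the exact scheme used, and the energies below are hypothetical:

```python
def cbs_extrapolate(e_small, n_small, e_large, n_large):
    """Two-point complete-basis-set extrapolation assuming
    E(n) = E_CBS + A / n**3: solve for A from the two points,
    then remove the residual basis set error from the larger one."""
    a = (e_small - e_large) / (1.0 / n_small**3 - 1.0 / n_large**3)
    return e_large - a / n_large**3

# Hypothetical correlation energies (hartree) with triple-zeta (n=3) and
# quadruple-zeta (n=4) basis sets:
print(round(cbs_extrapolate(-0.350, 3, -0.360, 4), 4))  # -0.3673
```

The extrapolated value lies below both finite-basis energies, as expected for correlation energies that converge monotonically from above.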
Application of the Weibull extrapolation to 137Cs geochronology in Tokyo Bay and Ise Bay, Japan.
Lu, Xueqiang
2004-01-01
Considerable doubt surrounds the nature of the processes by which 137Cs is deposited in marine sediments, leading to situations where 137Cs geochronology cannot always be applied reliably. Based on extrapolation with the Weibull distribution, the maximum concentration of 137Cs, derived from asymptotic values of the cumulative specific inventory, was used to re-establish the 137Cs geochronology instead of the original 137Cs profiles. The corresponding dating results for cores in Tokyo Bay and Ise Bay, Japan, obtained by this new method, are in much closer agreement with those calculated from the 210Pb method than are those from the previous method.
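A minimal sketch of a Weibull-type growth curve whose asymptote stands in for the maximum cumulative 137Cs inventory; the parameter values are hypothetical, not fitted to the Tokyo Bay or Ise Bay cores:

```python
import math

def weibull_cumulative(t, c_max, lam, k):
    """Weibull-form curve for cumulative specific inventory versus time:
    C(t) = c_max * (1 - exp(-(t/lam)**k)). It approaches the asymptote
    c_max as t grows, supplying the (unobserved) maximum 137Cs level."""
    return c_max * (1.0 - math.exp(-((t / lam) ** k)))

# Hypothetical parameters: asymptote 120 Bq/kg, scale 10 y, shape 1.5.
print(round(weibull_cumulative(40.0, 120.0, 10.0, 1.5), 2))  # 119.96, near the asymptote
```

Fitting such a curve to a measured cumulative inventory and reading off c_max is one way to implement the extrapolation the abstract describes without relying on the noisy raw 137Cs profile.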
Extrapolation of rotating sound fields.
Carley, Michael
2018-03-01
A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.
The effects of sunspots on solar irradiance
NASA Technical Reports Server (NTRS)
Hudson, H. S.; Silva, S.; Woodard, M.; Willson, R. C.
1982-01-01
It is pointed out that the darkness of a sunspot on the visible hemisphere of the sun will reduce the solar irradiance on the earth. Approaches are discussed for obtaining a crude estimate of the irradiance deficit produced by sunspots and of the total luminosity reduction for the whole global population of sunspots. Attention is given to a photometric sunspot index, a global measure of spot flux deficit, and models for the compensating flux excess. A model is shown for extrapolating visible-hemisphere spot areas to the invisible hemisphere. As an illustration, this extrapolation is used to calculate a very simple model for the reradiation necessary to balance the flux deficit.
Cathode fall measurement in a dielectric barrier discharge in helium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Yanpeng; Zheng, Bin; Liu, Yaoge
2013-11-15
A method based on the “zero-length voltage” extrapolation is proposed to measure cathode fall in a dielectric barrier discharge. Starting, stable, and discharge-maintaining voltages were measured to obtain the extrapolation zero-length voltage. Under our experimental conditions, the “zero-length voltage” gave a cathode fall of about 185 V. Based on the known thickness of the cathode fall region, the spatial distribution of the electric field strength in dielectric barrier discharge in atmospheric helium is determined. The strong cathode fall with a maximum field value of approximately 9.25 kV/cm was typical for the glow mode of the discharge.
Off disk-center potential field calculations using vector magnetograms
NASA Technical Reports Server (NTRS)
Venkatakrishnan, P.; Gary, G. Allen
1989-01-01
A potential field calculation for off-disk-center vector magnetograms that uses all three components of the measured field is investigated. There is no need for interpolation of grid points between the image plane and the heliographic plane, nor for an extension or truncation to a heliographic rectangle. Hence, the method provides the maximum information content from the photospheric field as well as the most consistent potential field independent of the viewing angle. The introduction of polarimetric noise makes the extrapolation procedure less tolerant than line-of-sight extrapolation, but the resultant standard deviation is still small enough for the practical utility of this method.
López-Mondéjar, Rubén; Antón, Anabel; Raidl, Stefan; Ros, Margarita; Pascual, José Antonio
2010-04-01
The species of the genus Trichoderma are used successfully as biocontrol agents against a wide range of phytopathogenic fungi. Among them, Trichoderma harzianum is especially effective. However, to develop more effective fungal biocontrol strategies in organic substrates and soil, tools for monitoring the control agents are required. Real-time PCR is potentially an effective tool for the quantification of fungi in environmental samples. The aim of this study was the development and application of a real-time PCR-based method for the quantification of T. harzianum, and the extrapolation of these data to fungal biomass values. A set of primers and a TaqMan probe for the ITS region of the fungal genome were designed and tested, and amplification was correlated with biomass measurements of the hyphal length of the colony's mycelium, obtained by optical microscopy and image analysis. A correlation of 0.76 between ITS copies and biomass was obtained. Extrapolating the quantity of ITS copies, calculated from real-time PCR data, into quantities of fungal biomass potentially provides a more accurate estimate of the quantity of soil fungi. Copyright 2009 Elsevier Ltd. All rights reserved.
Scientific Objectives of the Critical Viscosity Experiment
NASA Technical Reports Server (NTRS)
Berg, R. F.; Moldover, M. R.
1993-01-01
In microgravity, the Critical Viscosity Experiment will measure the viscosity of xenon 15 times closer to the critical point than is possible on Earth. The results are expected to include the first direct observation of the predicted power-law divergence of viscosity in a pure fluid, and they will test calculations of the value of the exponent associated with the divergence. The results, when combined with Zeno's decay-rate data, will strengthen the test of mode coupling theory. Without microgravity viscosity data, the Zeno test will require an extrapolation of existing 1-g viscosity data by as much as a factor of 100 in reduced temperature. By necessity, the extrapolation would use an incompletely verified theory of viscosity crossover. With the microgravity viscosity data, the reliance on crossover models will be negligible, allowing a more reliable extrapolation. During the past year, new theoretical calculations for the viscosity exponent finally achieved consistency with the best experimental data for pure fluids. This report gives the justification for the proposed microgravity Critical Viscosity Experiment in this new context. It also combines for the first time the best available light-scattering data with our recent viscosity data to demonstrate the current status of tests of mode coupling theory.
Casares, María Victoria; de Cabo, Laura I.; Seoane, Rafael S.; Natale, Oscar E.; Castro Ríos, Milagros; Weigandt, Cristian; de Iorio, Alicia F.
2012-01-01
In order to determine copper toxicity (LC50) to a local species (Cnesterodon decemmaculatus) in South American Pilcomayo River water and evaluate a cross-fish-species extrapolation of the Biotic Ligand Model (BLM), a 96 h acute copper toxicity test was performed. The dissolved copper concentrations tested were 0.05, 0.19, 0.39, 0.61, 0.73, 1.01, and 1.42 mg Cu L−1. The 96 h Cu LC50 calculated was 0.655 mg L−1 (0.823 − 0.488). The 96 h Cu LC50 predicted by the BLM for Pimephales promelas was 0.722 mg L−1. Analysis of the inter-seasonal variation of the main water quality parameters indicates that a higher protective effect of calcium, magnesium, sodium, sulphate, and chloride is expected during the dry season. The very high load of total suspended solids in this river might be a key factor in determining copper distribution between the solid and solution phases. A cross-fish-species extrapolation of the copper BLM is valid within the water quality parameters and experimental conditions of this toxicity test. PMID:22523491
Use of Physiologically Based Pharmacokinetic (PBPK) Models ...
EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. This report describes and demonstrates techniques necessary to extrapolate and incorporate in vitro derived metabolic rate constants in PBPK models. It also includes two case study examples designed to demonstrate the applicability of such data for health risk assessment, and addresses the quantification, extrapolation, and interpretation of advanced biochemical information on human interindividual variability of chemical metabolism for risk assessment application. It comprises five chapters; topics and results covered in the first four chapters have been published in the peer-reviewed scientific literature. Topics covered include: Data Quality Objectives; Experimental Framework; Required Data; and two example case studies that develop and incorporate in vitro metabolic rate constants in PBPK models designed to quantify human interindividual variability, to better direct the choice of uncertainty factors for health risk assessment. This report is intended to serve as a reference document for risk assessors to use when quantifying, extrapolating, and interpreting advanced biochemical information about human interindividual variability of chemical metabolism.
Estimating the global prevalence of transthyretin familial amyloid polyneuropathy
Waddington‐Cruz, Márcia; Botteman, Marc F.; Carter, John A.; Chopra, Avijeet S.; Hopps, Markay; Stewart, Michelle; Fallet, Shari; Amass, Leslie
2018-01-01
ABSTRACT Introduction: This study sought to estimate the global prevalence of transthyretin familial amyloid polyneuropathy (ATTR‐FAP). Methods: Prevalence estimates and information supporting prevalence calculations were extracted from records yielded by reference‐database searches (2005–2016), conference proceedings, and nonpeer‐reviewed sources. Prevalence was calculated as prevalence rate multiplied by general population size, then extrapolated to countries without prevalence estimates but with reported cases. Results: Searches returned 3,006 records; 1,001 were fully assessed and 10 retained, yielding prevalence for 10 “core” countries, then extrapolated to 32 additional countries. ATTR‐FAP prevalence in core countries, extrapolated countries, and globally was 3,762 (range, 3,639–3,884), 6,424 (range, 1,887–34,584), and 10,186 (range, 5,526–38,468) persons, respectively. Discussion: The mid global prevalence estimate (10,186) approximates the maximum commonly accepted estimate (5,000–10,000). The upper limit (38,468) implies potentially higher prevalence. These estimates should be interpreted carefully because the contributing evidence was heterogeneous and carried an overall moderate risk of bias. This highlights the need for increased rare‐disease epidemiological assessment and clinician awareness. Muscle Nerve 57: 829–837, 2018 PMID:29211930
Structural analysis of cylindrical thrust chambers, volume 1
NASA Technical Reports Server (NTRS)
Armstrong, W. H.
1979-01-01
Life predictions of regeneratively cooled rocket thrust chambers are normally derived from classical material fatigue principles. The failures observed in experimental thrust chambers do not appear to be due entirely to material fatigue. The chamber coolant walls in the failed areas exhibit progressive bulging and thinning during cyclic firings until the wall stress finally exceeds the material rupture stress and failure occurs. A preliminary analysis of an oxygen-free high-conductivity (OFHC) copper cylindrical thrust chamber demonstrated that the inclusion of cumulative cyclic plastic effects enables the observed coolant wall thinout to be predicted. The thinout curve constructed from the reference analysis of 10 firing cycles was extrapolated from the tenth cycle to the 200th cycle. The preliminary OFHC copper chamber 10-cycle analysis was extended so that the extrapolated thinout curve could be established by performing cyclic analysis of deformed configurations at 100 and 200 cycles. Thus the original range of extrapolation was reduced and the thinout curve was adjusted by using calculated thinout rates at 100 and 200 cycles. An analysis of the same undeformed chamber model constructed of half-hard Amzirc, to study the effect of material properties on the thinout curve, is included.
Setty, O H; Shrager, R I; Bunow, B; Reynafarje, B; Lehninger, A L; Hendler, R W
1986-01-01
The problem of obtaining very early ratios for the H+/O stoichiometry accompanying succinate oxidation by rat liver mitochondria was attacked using new techniques for direct measurement rather than extrapolations based on data obtained after mixing and the recovery of the electrode from the initial injection of O2. Respiration was quickly initiated in a thoroughly mixed O2-containing suspension of mitochondria under a CO atmosphere by photolysis of the CO-cytochrome c oxidase complex. Fast-responding O2 and pH electrodes were used to collect data every 10 ms. The response time for each electrode was experimentally measured in each experiment and suitable corrections for electrode relaxations were made. With uncorrected data obtained after 0.8 s, the extrapolation back to zero time on the basis of single-exponential curve fitting confirmed values close to 8.0, as previously reported (Costa et al., 1984). The directly obtained data, however, indicate an initial burst in the H+/O ratio that peaked at values of approximately 20 to 30 prior to 50 ms and was no longer evident after 0.3 s. Newer information and considerations that place all extrapolation methods in question are discussed. PMID:3019443
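The single-exponential back-extrapolation the authors compare against can be sketched as a log-linear least-squares fit of y(t) = A·exp(-k·t) + plateau, with the plateau assumed known. This is a generic reconstruction, not the authors' actual fitting procedure:

```python
import math

def extrapolate_to_zero(times, values, plateau):
    """Fit y(t) = A * exp(-k * t) + plateau by log-linear least squares on
    (y - plateau), then return the back-extrapolated value y(0) = A + plateau."""
    ys = [math.log(v - plateau) for v in values]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(x * x for x in times)
    sxy = sum(x * y for x, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope = -k
    intercept = (sy - slope * sx) / n                   # intercept = ln(A)
    return math.exp(intercept) + plateau
```

With data starting only at 0.8 s such a fit reproduces the smooth decay, but, as the abstract notes, it cannot reveal a burst that is already over by 0.3 s.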
Inferring thermodynamic stability relationship of polymorphs from melting data.
Yu, L
1995-08-01
This study investigates the possibility of inferring the thermodynamic stability relationship of polymorphs from their melting data. Thermodynamic formulas are derived for calculating the Gibbs free energy difference (delta G) between two polymorphs and its temperature slope, mainly from the temperatures and heats of melting. This information is then used to estimate delta G, and thus relative stability, at other temperatures by extrapolation. Both linear and nonlinear extrapolations are considered. Extrapolating delta G to zero gives an estimate of the transition (or virtual transition) temperature, from which the presence of monotropy or enantiotropy is inferred. This procedure is analogous to the use of solubility data measured near ambient temperature to estimate a transition point at higher temperature. For several systems examined, the two methods are in good agreement. The qualitative rule introduced this way for inferring the presence of monotropy or enantiotropy is approximately the same as the Heat of Fusion Rule introduced previously on a statistical mechanical basis. The method is applied to 96 pairs of polymorphs from the literature. In most cases, the result agrees with the previous determination. The deviation of the calculated transition temperatures from their previous values (n = 18) is 2% on average and 7% at maximum.
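The linear-extrapolation case described above can be sketched with the standard approximation delta G ≈ delta Hm (Tm − T)/Tm for each polymorph relative to the melt. This is a generic reconstruction under that assumption, not the paper's exact formulas, and the numbers below are hypothetical:

```python
def delta_g_vs_melt(T, Tm, dHm):
    """Gibbs free energy of a polymorph relative to its own melt, linearized:
    dG(T) ~ dHm * (Tm - T) / Tm, taking the entropy of fusion as dHm / Tm."""
    return dHm * (Tm - T) / Tm

def transition_temperature(Tm1, dH1, Tm2, dH2):
    """Temperature at which the linearized dG curves of two polymorphs cross
    (dG1 - dG2 = 0). A crossing below both melting points indicates
    enantiotropy; above both, monotropy (a virtual transition)."""
    dS1, dS2 = dH1 / Tm1, dH2 / Tm2
    return (dH1 - dH2) / (dS1 - dS2)

# Hypothetical pair: the higher-melting form has the lower heat of fusion,
# so the crossing falls below both melting points (enantiotropic system).
Tt = transition_temperature(400.0, 25.0, 380.0, 30.0)
```

The sign pattern here mirrors the Heat of Fusion Rule mentioned in the abstract: when the higher-melting polymorph has the lower heat of fusion, the pair is enantiotropic.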
Incorporating contact angles in the surface tension force with the ACES interface curvature scheme
NASA Astrophysics Data System (ADS)
Owkes, Mark
2017-11-01
In simulations of gas-liquid flows interacting with solid boundaries, the contact line dynamics affect the interface motion and flow field through the surface tension force. The surface tension force is directly proportional to the interface curvature, and the problem of accurately imposing a contact angle must be incorporated into the interface curvature calculation. Many commonly used algorithms for computing interface curvatures (e.g., the height function method) require extrapolating the interface, with a defined contact angle, into the solid to allow the calculation of a curvature near a wall. Extrapolation can be an ill-posed problem, especially in three dimensions or when multiple contact lines are near each other. We have developed an accurate methodology to compute interface curvatures that allows contact angles to be easily incorporated while avoiding extrapolation and its associated challenges. The method, known as Adjustable Curvature Evaluation Scale (ACES), leverages a least-squares fit of a polynomial to points computed on the volume-of-fluid (VOF) representation of the gas-liquid interface. The method is tested on canonical test cases and then applied to simulate the injection and motion of water droplets in a channel (relevant to PEM fuel cells).
An Observationally Constrained Model of a Flux Rope that Formed in the Solar Corona
NASA Astrophysics Data System (ADS)
James, Alexander W.; Valori, Gherardo; Green, Lucie M.; Liu, Yang; Cheung, Mark C. M.; Guo, Yang; van Driel-Gesztelyi, Lidia
2018-03-01
Coronal mass ejections (CMEs) are large-scale eruptions of plasma from the coronae of stars. Understanding the plasma processes involved in CME initiation has applications for space weather forecasting and laboratory plasma experiments. James et al. used extreme-ultraviolet (EUV) observations to conclude that a magnetic flux rope formed in the solar corona above NOAA Active Region 11504 before it erupted on 2012 June 14 (SOL2012-06-14). In this work, we use data from the Solar Dynamics Observatory (SDO) to model the coronal magnetic field of the active region one hour prior to eruption using a nonlinear force-free field extrapolation, and find a flux rope reaching a maximum height of 150 Mm above the photosphere. Estimations of the average twist of the strongly asymmetric extrapolated flux rope are between 1.35 and 1.88 turns, depending on the choice of axis, although the erupting structure was not observed to kink. The decay index near the apex of the axis of the extrapolated flux rope is comparable to typical critical values required for the onset of the torus instability, so we suggest that the torus instability drove the eruption.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, K T
A relatively simple, quantitative approach is proposed to address a specific, important gap in the approach recommended by the USEPA Guidelines for Cancer Risk Assessment to address uncertainty in the carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing-induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual-MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding-factor approach, similar to that used in reference dose procedures for classic toxicity endpoints, can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, the implications of DMOA uncertainty for low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for a likely DMOA rodent carcinogen, naphthalene, specifically for the issue of risk extrapolation from bioassay data on naphthalene-induced nasal tumors in rats.
Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage stochastic carcinogenesis modeling results all clearly indicate that naphthalene is a DMOA carcinogen. Plausibility bounds on rat-tumor-type-specific DMOA-related uncertainty were obtained using a 2-stage model adapted to reflect the empirical link between genotoxic and cytotoxic effects of the most potent identified genotoxic naphthalene metabolites, 1,2- and 1,4-naphthoquinone. Bound-specific 'adjustment' factors were then used to reduce naphthalene risk estimated by linear extrapolation (under the default genotoxic MOA assumption), to account for the DMOA exhibited by this compound.
The Educated Guess: Determining Drug Doses in Exotic Animals Using Evidence-Based Medicine.
Visser, Marike; Oster, Seth C
2018-05-01
Lack of species-specific pharmacokinetic and pharmacodynamic data is a challenge for drug and dose selection in exotic animals. If such data are available, dose extrapolation can be accomplished via basic equations. If they are unavailable, several methods have been described. Linear scaling uses an established milligrams-per-kilogram dose scaled by body weight. This does not allow for differences in drug metabolism between species, sometimes resulting in toxicity. Allometric scaling correlates body weight with metabolic rate, but fails for drugs with significant hepatic metabolism and cannot be extrapolated to avians or reptiles. Evidence-based veterinary medicine for dose design based on species similarity is discussed, considering physiologic differences between classes. Copyright © 2018 Elsevier Inc. All rights reserved.
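The two scaling approaches contrasted above can be sketched in a few lines. This is a generic illustration of linear (mg/kg) versus allometric (weight^0.75) scaling with hypothetical numbers, not a dosing recommendation:

```python
def linear_dose_mg(dose_mg_per_kg, weight_kg):
    """Linear (mg/kg) scaling: the same mg/kg regardless of species or size,
    which ignores metabolic differences and can lead to toxicity."""
    return dose_mg_per_kg * weight_kg

def allometric_dose_mg(known_dose_mg, known_weight_kg, target_weight_kg,
                       exponent=0.75):
    """Allometric scaling: dose tracks body weight ** exponent, a proxy for
    metabolic rate (0.75 is a commonly used exponent for mammals)."""
    return known_dose_mg * (target_weight_kg / known_weight_kg) ** exponent

# A 10 mg dose established in a 1 kg animal scales to 80 mg (not 160 mg)
# in a 16 kg animal under weight^0.75 allometry.
scaled = allometric_dose_mg(10.0, 1.0, 16.0)
```

As the abstract cautions, neither approach is reliable for drugs with significant hepatic metabolism, or for birds and reptiles.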
Mixed-venous oxygen tension by nitrogen rebreathing - A critical, theoretical analysis.
NASA Technical Reports Server (NTRS)
Kelman, G. R.
1972-01-01
There is dispute about the validity of the nitrogen rebreathing technique for determination of mixed-venous oxygen tension. This theoretical analysis examines the circumstances under which the technique is likely to be applicable. When the plateau method is used, the probable error in mixed-venous oxygen tension is plus or minus 2.5 mm Hg at rest, and of the order of plus or minus 1 mm Hg during exercise. Provided that the rebreathing bag size is reasonably chosen, Denison's (1967) extrapolation technique gives results at least as accurate as those obtained by the plateau method. At rest, however, extrapolation should be to 30 rather than to 20 sec.
Lattice Simulations and Infrared Conformality
Appelquist, Thomas; Fleming, George T.; Lin, Meifeng; ...
2011-09-01
We examine several recent lattice-simulation data sets, asking whether they are consistent with infrared conformality. We observe, in particular, that for an SU(3) gauge theory with 12 Dirac fermions in the fundamental representation, recent simulation data can be described assuming infrared conformality. Lattice simulations include a fermion mass m which is then extrapolated to zero, and we note that this data can be fit by a small-m expansion, allowing a controlled extrapolation. We also note that the conformal hypothesis does not work well for two theories that are known or expected to be confining and chirally broken, and that it does work well for another theory expected to be infrared conformal.
NASA Astrophysics Data System (ADS)
Poirier, Marc; Gagnon, Martin; Tahan, Antoine; Coutu, André; Chamberland-lauzon, Joël
2017-01-01
In this paper, we present the application of cyclostationary modelling for the extrapolation of short stationary load strain samples measured in situ on hydraulic turbine blades. Long periods of measurements allow for a wide range of fluctuations representative of long-term reality to be considered. However, sampling over short periods limits the dynamic strain fluctuations available for analysis. The purpose of the technique presented here is therefore to generate a representative signal containing proper long term characteristics and expected spectrum starting with a much shorter signal period. The final objective is to obtain a strain history that can be used to estimate long-term fatigue behaviour of hydroelectric turbine runners.
Correlation of Resonance Charge Exchange Cross-Section Data in the Low-Energy Range
NASA Technical Reports Server (NTRS)
Sheldon, John W.
1962-01-01
During the course of a literature survey concerning resonance charge exchange, an unusual degree of agreement was noted between an extrapolation of the data reported by Kushnir, Palyukh, and Sena and the data reported by Ziegler. The data of Kushnir et al. are for ion-atom relative energies from 10 to 1000 ev, while the data of Ziegler are for a relative energy of about 1 ev. Extrapolation of the data of Kushnir et al. was made in accordance with Holstein's theory, which is a combination of time-dependent perturbation methods and classical orbit theory. The results of this theory may be discussed in terms of a critical impact parameter b(sub c).
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method, which is applicable in situ. It requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.
UV testing of solar cells: Effects of antireflective coating, prior irradiation, and UV source
NASA Technical Reports Server (NTRS)
Meulenberg, A.
1993-01-01
Short-circuit current degradation of electron-irradiated double-layer antireflective-coated cells after 3000 hours of ultraviolet (UV) exposure exceeds 3 percent; extrapolation of the data to 10(exp 5) hours (11.4 yrs.) gives a degradation that exceeds 10 percent. Significant qualitative and quantitative differences in degradation were observed between cells with double- and single-layer antireflective coatings. The effects of UV-source age were observed and corrections were made to the data. An additional degradation mechanism was identified that occurs only in previously electron-irradiated solar cells, since identical unirradiated cells degrade by only 6 +/- 3 percent when extrapolated to 10(exp 5) hours of UV illumination.
Extrapolation of toxic indices among test objects
Tichý, Miloň; Rucki, Marián; Roth, Zdeněk; Hanzlíková, Iveta; Vlková, Alena; Tumová, Jana; Uzlová, Rút
2010-01-01
The oligochaete Tubifex tubifex, the fathead minnow (Pimephales promelas), hepatocytes isolated from rat liver, and a ciliated protozoan are very different organisms, and yet their acute toxicity indices correlate. Correlation equations for special effects were developed for a large heterogeneous series of compounds (QSAR, quantitative structure-activity relationships). Knowing those correlation equations and their statistical evaluation, one can extrapolate the toxic indices. The reason is that a common physicochemical property governs the biological effect, namely the partition coefficient between two immiscible phases, simulated generally by n-octanol and water. This may mean that the transport of chemicals towards a target is responsible for the magnitude of the effect, rather than reactivity, as one would otherwise suppose. PMID:21331180
Extending Supernova Spectral Templates for Next Generation Space Telescope Observations
NASA Astrophysics Data System (ADS)
Roberts-Pierel, Justin; Rodney, Steven A.
2018-01-01
Widely used empirical supernova (SN) Spectral Energy Distributions (SEDs) have not historically extended meaningfully into the ultraviolet (UV) or the infrared (IR). However, both are critical for current and future SN research, including UV spectra as probes of poorly understood SN Ia physical properties and an expanded view of the universe through high-redshift James Webb Space Telescope (JWST) IR observations. We therefore present a comprehensive set of SN SED templates that have been extended into the UV and IR, as well as an open-source software package written in Python that enables a user to generate their own extrapolated SEDs. We have taken a sampling of core-collapse (CC) and Type Ia SNe to get a time-dependent distribution of UV and IR colors (U-B, r'-[JHK]), and the generated color curves are then used to extrapolate SEDs into the UV and IR. The SED extrapolation process is easily duplicated using a user's own data and parameters via our open-source Python package, SNSEDextend. This work develops the tools necessary to explore the JWST's ability to discriminate between CC and Type Ia SNe, and provides a repository of SN SEDs that will be invaluable to future JWST and WFIRST SN studies.
NASA Astrophysics Data System (ADS)
De Niel, J.; Demarée, G.; Willems, P.
2017-10-01
Recent socioeconomic developments such as population growth and increased urbanization, including the occupation of floodplains, push governments, policy makers, and water managers to impose very stringent regulations on the design of hydraulic structures. These structures need to withstand storms with return periods typically ranging between 1,250 and 10,000 years. Such quantification involves extrapolation of systematically measured instrumental data, possibly complemented by quantitative and/or qualitative historical data and paleoflood data. The accuracy of these extrapolations is, however, highly unclear in practice. In order to evaluate extreme river peak flow extrapolation and its accuracy, we studied historical and instrumental data of the past 500 years along the Meuse River. We moreover propose an alternative method for estimating the extreme value distribution of river peak flows, based on weather types derived from sea level pressure reconstructions. This approach results in a more accurate estimation of the tail of the distribution, where current methods underestimate the design levels associated with extremely high return periods. The design flood for a 1,250 year return period is estimated at 4,800 m3 s-1 with the proposed method, compared with 3,450 and 3,900 m3 s-1 for a traditional method and a previous study.
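For context, return-period extrapolation of the kind discussed above is commonly done with an extreme value distribution. A minimal Gumbel (EV1) quantile sketch follows, with hypothetical location and scale parameters; the paper's actual method conditions on weather types, which is not reproduced here:

```python
import math

def gumbel_quantile(mu, beta, return_period):
    """Design peak flow for a given return period T under a Gumbel (EV1)
    distribution: Q(T) = mu - beta * ln(-ln(1 - 1/T))."""
    p_non_exceedance = 1.0 - 1.0 / return_period
    return mu - beta * math.log(-math.log(p_non_exceedance))

# Hypothetical fit: location 1500 m3/s, scale 400 m3/s.
design_1250yr = gumbel_quantile(1500.0, 400.0, 1250.0)
```

The design level grows roughly linearly in ln(T), which is why the estimated tail behaviour dominates results at the 1,250 to 10,000 year return periods quoted above.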
Filograna, Laura; Magarelli, Nicola; Leone, Antonio; Guggenberger, Roman; Winklhofer, Sebastian; Thali, Michael John; Bonomo, Lorenzo
2015-09-01
The aim of this ex vivo study was to assess the performance of monoenergetic dual-energy CT (DECT) reconstructions in reducing metal artefacts in bodies with orthopedic devices, in comparison with standard single-energy CT (SECT) examinations in forensic imaging. The forensic and clinical impacts of this study are also discussed. Thirty metallic implants in 20 consecutive cadavers underwent both SECT and DECT with a clinically suitable scanning protocol. Extrapolated monoenergetic DECT images at 64, 69, 88, 105, 120, and 130 keV, and at an individually adjusted monoenergy for optimized image quality (OPTkeV), were generated. Image quality of the seven monoenergetic images and of the corresponding SECT image was assessed qualitatively and quantitatively by visual rating and by measurements of attenuation changes induced by streak artefact. Qualitative and quantitative analyses showed statistically significant differences between monoenergetic DECT extrapolated images and SECT, with improvements in diagnostic assessment for monoenergetic DECT at higher monoenergies. The mean value of OPTkeV was 137.6 ± 4.9, with a range of 130 to 148 keV. This study demonstrates that monoenergetic DECT images extrapolated at high energy levels significantly reduce metallic artefacts from orthopedic implants and improve image quality compared to SECT examination in forensic imaging.
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
An Examination of the Quality of Wind Observations with Smartphones
NASA Astrophysics Data System (ADS)
Hintz, Kasper; Vedel, Henrik; Muñoz-Gomez, Juan; Woetmann, Niels
2017-04-01
Over the last few years, the number of devices connected to the internet has increased significantly, making it possible for internal and external sensors to communicate via the internet and opening up many possibilities for additional data for use in the atmospheric sciences. Vaavud has manufactured small anemometer devices that measure wind speed and wind direction when connected to a smartphone. This work examines the quality of such crowdsourced Handheld Wind Observations (HWO). To examine the quality of the HWO, multiple idealised measurement sessions were performed at different sites in different atmospheric conditions. In these sessions, a high-precision ultrasonic anemometer was installed to serve as a reference measurement. The HWO are extrapolated to 10 m in order to compare them with the reference observations. This allows us to examine the effect of stability correction in the surface layer and the quality of height-extrapolated HWO. The height extrapolation is done using the logarithmic wind profile law, with and without stability correction. Furthermore, this study examines the optimal ways of using traditional observations and numerical models to validate HWO. To do so, a series of numerical reanalyses has been run for a period of 5 months to quantify the effect of including crowdsourced HWO in a traditional observation dataset.
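The neutral (no stability correction) form of the logarithmic height extrapolation used above can be sketched as follows; the roughness length z0 below is a hypothetical value chosen for illustration:

```python
import math

def extrapolate_wind_neutral(u_measured, z_measured, z_target=10.0, z0=0.03):
    """Neutral logarithmic wind profile: u(z) is proportional to ln(z / z0),
    so u(z_target) = u_measured * ln(z_target / z0) / ln(z_measured / z0)."""
    return u_measured * math.log(z_target / z0) / math.log(z_measured / z0)

# A 5 m/s handheld reading at 2 m height maps to roughly 6.9 m/s at 10 m
# over terrain with a 0.03 m roughness length.
u10 = extrapolate_wind_neutral(5.0, 2.0)
```

Stability corrections add a diabatic term to this profile; the study compares extrapolations with and without that term.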
Integrated Idl Tool For 3d Modeling And Imaging Data Analysis
NASA Astrophysics Data System (ADS)
Nita, Gelu M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A. A.; Kontar, E. P.
2012-05-01
Addressing many key problems in solar physics requires detailed analysis of non-simultaneous imaging data obtained in various wavelength domains with different spatial resolutions, and their comparison with each other and with advanced 3D physical models. To facilitate achieving this goal, we have undertaken major enhancements and improvements of IDL-based simulation tools developed earlier for modeling microwave and X-ray emission. The greatly enhanced object-based architecture provides an interactive graphical user interface that allows the user i) to import photospheric magnetic field maps and perform magnetic field extrapolations to almost instantly generate 3D magnetic field models; ii) to investigate the magnetic topology of these models by interactively creating magnetic field lines and associated magnetic field tubes; iii) to populate them with user-defined nonuniform thermal plasma and anisotropic, nonuniform, nonthermal electron distributions; and iv) to calculate the spatial and spectral properties of radio and X-ray emission. The application integrates DLLs and shared libraries containing fast gyrosynchrotron emission codes developed in FORTRAN and C++, soft and hard X-ray codes developed in IDL, and a potential field extrapolation DLL based on original FORTRAN code developed by V. Abramenko and V. Yurchishin. The interactive interface allows users to add any user-defined IDL or external callable radiation code, as well as user-defined magnetic field extrapolation routines. To illustrate the tool's capabilities, we present a step-by-step live computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data produced by the NoRH and RHESSI instruments.
This work was supported in part by NSF grants AGS-0961867, AST-0908344, AGS-0969761, and NASA grants NNX10AF27G and NNX11AB49G to New Jersey Institute of Technology, by a UK STFC rolling grant, the Leverhulme Trust, UK, and by the European Commission through the Radiosun and HESPE Networks.
Extrapolation of vertical target motion through a brief visual occlusion.
Zago, Myrna; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco
2010-03-01
It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.
NASA Astrophysics Data System (ADS)
Beaufort, Aurélien; Lamouroux, Nicolas; Pella, Hervé; Datry, Thibault; Sauquet, Eric
2018-05-01
Headwater streams represent a substantial proportion of river systems and many of them have intermittent flows due to their upstream position in the network. These intermittent rivers and ephemeral streams have recently seen a marked increase in interest, especially to assess the impact of drying on aquatic ecosystems. The objective of this paper is to quantify how discrete (in space and time) field observations of flow intermittence help to extrapolate over time the daily probability of drying (defined at the regional scale). Two empirical models based on linear or logistic regressions have been developed to predict the daily probability of intermittence at the regional scale across France. Explanatory variables were derived from available daily discharge and groundwater-level data of a dense gauging/piezometer network, and models were calibrated using discrete series of field observations of flow intermittence. The robustness of the models was tested using an independent, dense regional dataset of intermittence observations and observations of the year 2017 excluded from the calibration. The resulting models were used to extrapolate the daily regional probability of drying in France: (i) over the period 2011-2017 to identify the regions most affected by flow intermittence; (ii) over the period 1989-2017, using a reduced input dataset, to analyse temporal variability of flow intermittence at the national level. The two empirical regression models performed equally well between 2011 and 2017. The accuracy of predictions depended on the number of continuous gauging/piezometer stations and intermittence observations available to calibrate the regressions. Regions with the highest performance were located in sedimentary plains, where the monitoring network was dense and where the regional probability of drying was the highest. Conversely, the worst performances were obtained in mountainous regions. 
Finally, temporal projections (1989-2016) suggested the highest probabilities of intermittence (> 35 %) in 1989-1991, 2003 and 2005. A high density of intermittence observations improved the information provided by gauging stations and piezometers to extrapolate the temporal variability of intermittent rivers and ephemeral streams.
Loucas, Bradford D.; Shuryak, Igor; Cornforth, Michael N.
2016-01-01
Whole-chromosome painting (WCP) typically involves the fluorescent staining of a small number of chromosomes. Consequently, it is capable of detecting only a fraction of exchanges that occur among the full complement of chromosomes in a genome. Mathematical corrections are commonly applied to WCP data in order to extrapolate the frequency of exchanges occurring in the entire genome [whole-genome equivalency (WGE)]. However, the reliability of WCP to WGE extrapolations depends on underlying assumptions whose conditions are seldom met in actual experimental situations, in particular the presumed absence of complex exchanges. Using multi-fluor fluorescence in situ hybridization (mFISH), we analyzed the induction of simple exchanges produced by graded doses of 137Cs gamma rays (0–4 Gy), and also 1.1 GeV 56Fe ions (0–1.5 Gy). In order to represent cytogenetic damage as it would have appeared to the observer following standard three-color WCP, all mFISH information pertaining to exchanges that did not specifically involve chromosomes 1, 2, or 4 was ignored. This allowed us to reconstruct dose–responses for three-color apparently simple (AS) exchanges. Using extrapolation methods similar to those derived elsewhere, these were expressed in terms of WGE for comparison to mFISH data. Based on AS events, the extrapolated frequencies systematically overestimated those actually observed by mFISH. For gamma rays, these errors were practically independent of dose. When constrained to a relatively narrow range of doses, the WGE corrections applied to both 56Fe and gamma rays predicted genome-equivalent damage with a level of accuracy likely sufficient for most applications. However, the apparent accuracy associated with WCP to WGE corrections is both fortuitous and misleading. This is because (in normal practice) such corrections can only be applied to AS exchanges, which are known to include complex aberrations in the form of pseudosimple exchanges. 
When WCP to WGE corrections are applied to true simple exchanges, the results are less than satisfactory, leading to extrapolated values that underestimate the true WGE response by unacceptably large margins. Likely explanations for these results are discussed, as well as their implications for radiation protection. Thus, in seeming contradiction to the notion that complex aberrations should be avoided altogether in WGE corrections, and in violation of the assumptions upon which these corrections are based, their inadvertent inclusion in three-color WCP data is actually required in order for them to yield even marginally acceptable results.
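For reference, the single-color correction underlying such WCP-to-WGE extrapolations is the standard Lucas-style formula F_G = F_P / (2.05 f_p (1 - f_p)); a minimal sketch, where the painted genome fraction for chromosomes 1, 2 and 4 is an illustrative figure rather than a value taken from this paper:

```python
def wcp_to_wge(f_obs, painted_fraction):
    """Whole-genome equivalent exchange frequency from a WCP-observed
    frequency, via F_G = F_P / (2.05 * fp * (1 - fp))."""
    fp = painted_fraction
    return f_obs / (2.05 * fp * (1.0 - fp))

# Chromosomes 1, 2 and 4 together span roughly 23% of the human
# genome (illustrative figure):
f_genome = wcp_to_wge(0.10, 0.23)
```

The formula assumes only simple, randomly distributed exchanges, which is precisely the assumption the paper shows is violated by pseudosimple (complex) exchanges.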
Asao, Shinichi; Bedoya-Arrieta, Ricardo; Ryan, Michael G
2015-02-01
As tropical forests respond to environmental change, autotrophic respiration may consume a greater proportion of carbon fixed in photosynthesis at the expense of growth, potentially turning the forests into a carbon source. Predicting such a response requires that we measure and place autotrophic respiration in a complete carbon budget, but extrapolating measurements of autotrophic respiration from chambers to ecosystem remains a challenge. High plant species diversity and complex canopy structure may cause respiration rates to vary and measurements that do not account for this complexity may introduce bias in extrapolation more detrimental than uncertainty. Using experimental plantations of four native tree species with two canopy layers, we examined whether species and canopy layers vary in foliar respiration and wood CO2 efflux and whether the variation relates to commonly used scalars of mass, nitrogen (N), photosynthetic capacity and wood size. Foliar respiration rate varied threefold between canopy layers, ∼0.74 μmol m(-2) s(-1) in the overstory and ∼0.25 μmol m(-2) s(-1) in the understory, but little among species. Leaf mass per area, N and photosynthetic capacity explained some of the variation, but height explained more. Chamber measurements of foliar respiration thus can be extrapolated to the canopy with rates and leaf area specific to each canopy layer or height class. If area-based rates are sampled across canopy layers, the area-based rate may be regressed against leaf mass per area to derive the slope (per mass rate) to extrapolate to the canopy using the total leaf mass. Wood CO2 efflux varied 1.0-1.6 μmol m(-2) s(-1) for overstory trees and 0.6-0.9 μmol m(-2) s(-1) for understory species. The variation in wood CO2 efflux rate was mostly related to wood size, and little to species, canopy layer or height. 
Mean wood CO2 efflux rate per surface area, derived by regressing CO2 efflux per mass against the ratio of surface area to mass, can be extrapolated to the stand using total wood surface area. The temperature response of foliar respiration was similar for three of the four species, and wood CO2 efflux was similar between wet and dry seasons. For these species and this forest, vertical sampling may yield more accurate estimates than would temporal sampling.
Temperature-dependent absorption cross sections for hydrogen peroxide vapor
NASA Technical Reports Server (NTRS)
Nicovich, J. M.; Wine, P. H.
1988-01-01
Relative absorption cross sections for hydrogen peroxide vapor were measured over the temperature ranges 285-381 K for lambda = 230-295 nm and 300-381 K for lambda = 193-350 nm. The well-established 298 K cross sections at 202.6 and 228.8 nm were used as an absolute calibration. A significant temperature dependence was observed at the tropospherically important photolysis wavelengths lambda > 300 nm. Measured cross sections were extrapolated to lower temperatures using a simple model that attributes the observed temperature dependence to enhanced absorption by molecules possessing one quantum of O-O stretch vibrational excitation. Upper-tropospheric photodissociation rates calculated using the extrapolated cross sections are about 25 percent lower than those calculated using the currently recommended 298 K cross sections.
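A model of the type described, in which the temperature dependence comes from the Boltzmann population of one quantum of O-O stretch, can be sketched as a two-level weighted average; the stretch wavenumber and cross-section values below are assumptions for illustration, not the paper's fitted parameters:

```python
import math

HC_OVER_K = 1.4388   # cm K, second radiation constant hc/k
NU_OO = 880.0        # cm^-1, assumed O-O stretch wavenumber of H2O2

def excited_fraction(temp_k):
    """Two-level Boltzmann fraction of molecules carrying one
    quantum of O-O stretch vibrational excitation."""
    boltz = math.exp(-HC_OVER_K * NU_OO / temp_k)
    return boltz / (1.0 + boltz)

def cross_section(temp_k, sigma_ground, sigma_excited):
    """Effective cross section as a population-weighted average of
    ground-state and vibrationally excited absorption."""
    f = excited_fraction(temp_k)
    return (1.0 - f) * sigma_ground + f * sigma_excited

# Extrapolation below the measured range uses the same expression:
sigma_220 = cross_section(220.0, 1.0e-21, 5.0e-21)   # illustrative values
```

Because the excited-state population falls with temperature, the model naturally predicts smaller cross sections, and hence lower photodissociation rates, at upper-tropospheric temperatures.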
[Requirements imposed on model objects in microevolutionary investigations].
Mina, M V
2015-01-01
Extrapolation of the results of investigating a model object is justified only within the limits of a set of objects that share essential properties with the model object. Which properties are essential depends on the aim of a study. Similarity of objects that emerged in the course of their independent evolution does not prove similarity in the ways and mechanisms of their evolution. If the objects differ in their essential properties, then extrapolating the results obtained on one object to another is risky, because it may lead to wrong conclusions and, moreover, to a loss of interest in alternative hypotheses. The positions formulated above are considered with reference to species flocks of fishes, large African Barbus in particular.
Can Monkeys (Macaca mulatta) Represent Invisible Displacement?
NASA Technical Reports Server (NTRS)
Filion, Christine M.; Washburn, David A.; Gulledge, Jonathan P.
1996-01-01
Four experiments were conducted to assess whether or not rhesus macaques (Macaca mulatta) could represent the unperceived movements of a stimulus. Subjects were tested on 2 computerized tasks, HOLE (monkeys) and LASER (humans and monkeys), in which subjects needed to chase or shoot at, respectively, a moving target that either remained visible or became invisible for a portion of its path of movement. Response patterns were analyzed and compared between target-visible and target-invisible conditions. Results of Experiments 1, 2, and 3 demonstrated that the monkeys are capable of extrapolating movement. That this extrapolation involved internal representation of the target's invisible movement was suggested but not confirmed. Experiment 4, however, demonstrated that the monkeys are capable of representing the invisible displacements of a stimulus.
Review of Air Vitiation Effects on Scramjet Ignition and Flameholding Combustion Processes
NASA Technical Reports Server (NTRS)
Pellett, G. L.; Bruno, Claudio; Chinitz, W.
2002-01-01
This paper offers a detailed review and analysis of more than 100 papers on the physics and chemistry of scramjet ignition and flameholding combustion processes, and the known effects of air vitiation on these processes. The paper attempts to explain vitiation effects in terms of known chemical kinetics and flame propagation phenomena. Scaling methodology is also examined, and a highly simplified Damkoehler scaling technique based on OH radical production/destruction is developed to extrapolate ground test results, affected by vitiation, to flight testing conditions. The long term goal of this effort is to help provide effective means for extrapolating ground test data to flight, and thus to reduce the time and expense of both ground and flight testing.
Glycine's radiolytic destruction in ices: first in situ laboratory measurements for Mars.
Gerakines, Perry A; Hudson, Reggie L
2013-07-01
We report new laboratory studies of the radiation-induced destruction of glycine-containing ices for a range of temperatures and compositions that allow extrapolation to martian conditions. In situ infrared spectroscopy was used to study glycine decay rates as a function of temperature (from 15 to 280 K) and initial glycine concentrations in six mixtures whose compositions ranged from dry glycine to H2O+glycine (300:1). Results are presented in several systems of units, with cautions concerning their use. The half-life of glycine under the surface of Mars is estimated as an extrapolation of this data set to martian conditions, and trends in decay rates are described as are applications to Mars' near-surface chemistry.
The next 25 years: Industrialization of space - Rationale for planning
NASA Technical Reports Server (NTRS)
Von Puttkamer, J.
1977-01-01
A methodology for planning the industrialization of space is discussed. The suggested approach combines the extrapolative ('push') approach, in which alternative futures are projected on the basis of past and current trends and tendencies, with the normative ('pull') view, in which an ideal state in the far future is postulated and policies and decisions are directed toward its attainment. Time-reversed vectors of the future are tied to extrapolated, trend-oriented vectors of the quasi-present to identify common plateaus or stepping stones in technological development. Important steps in the industrialization of space to attain the short-range goals of production of space-derived energy, goods, and services and the long-range goal of space colonization are discussed.
Chiral extrapolations of the ρ ( 770 ) meson in N f = 2 + 1 lattice QCD simulations
Hu, B.; Molina, R.; Döring, M.; ...
2017-08-24
Recent $N_f=2+1$ lattice data for meson-meson scattering in $p$-wave and isospin $I=1$ are analyzed using a unitarized model inspired by Chiral Perturbation Theory in the inverse-amplitude formulation for two and three flavors. We perform chiral extrapolations that postdict phase shifts extracted from experiment quite well. Additionally, the low-energy constants are compared to the ones from a recent analysis of $N_f=2$ lattice QCD simulations to check the consistency of the hadronic model used here. Some inconsistencies are detected in the fits to $N_f=2+1$ data, in contrast to the previous analysis of $N_f=2$ data.
Acceleration of convergence of vector sequences
NASA Technical Reports Server (NTRS)
Sidi, A.; Ford, W. F.; Smith, D. A.
1983-01-01
A general approach to the construction of convergence acceleration methods for vector sequences is proposed. Using this approach, one can generate some known methods, such as the minimal polynomial extrapolation, the reduced rank extrapolation, and the topological epsilon algorithm, as well as some new ones. Some of the new methods are easier to implement than the known methods and are observed to have similar numerical properties. The convergence analysis of these new methods is carried out, and it is shown that they are especially suitable for accelerating the convergence of vector sequences obtained when linear systems of equations are solved iteratively. A stability analysis is also given, along with numerical examples. The convergence and stability properties of the topological epsilon algorithm are likewise given.
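As a sketch of one of the methods named above, reduced rank extrapolation (RRE) can be written in a few lines of NumPy; this is a minimal textbook formulation under the usual least-squares definition, not the paper's own implementation:

```python
import numpy as np

def rre(xs):
    """Reduced rank extrapolation: estimate the limit of a vector
    sequence xs[0], ..., xs[k] from its first and second differences,
    via s = x0 - U @ gamma with W @ gamma ~= u0 in least squares."""
    X = np.column_stack(xs)
    U = np.diff(X, axis=1)                    # first differences
    W = np.diff(U, axis=1)                    # second differences
    gamma, *_ = np.linalg.lstsq(W, U[:, 0], rcond=None)
    return X[:, 0] - U[:, :W.shape[1]] @ gamma

# Accelerating the linear fixed-point iteration x <- M x + b,
# whose limit solves (I - M) x = b:
M = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 2.0])
xs = [np.zeros(2)]
for _ in range(4):
    xs.append(M @ xs[-1] + b)
limit = rre(xs)
```

For a linear iteration with enough iterates, RRE recovers the fixed point exactly, which is why these methods pair naturally with iterative linear solvers.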
NASA Technical Reports Server (NTRS)
Reagan, John A.; Pilewskie, Peter A.; Scott-Fleming, Ian C.; Herman, Benjamin M.; Ben-David, Avishai
1987-01-01
Techniques for extrapolating earth-based spectral band measurements of directly transmitted solar irradiance to equivalent exoatmospheric signal levels were used to aid in determining system gain settings of the Halogen Occultation Experiment (HALOE) sunsensor being developed for the NASA Upper Atmosphere Research Satellite and for the Stratospheric Aerosol and Gas (SAGE) 2 instrument on the Earth Radiation Budget Satellite. A band transmittance approach was employed for the HALOE sunsensor which has a broad-band channel determined by the spectral responsivity of a silicon detector. A modified Langley plot approach, assuming a square-root law behavior for the water vapor transmittance, was used for the SAGE-2 940 nm water vapor channel.
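The classic (unmodified) Langley extrapolation underlying both techniques regresses the log of the measured signal against airmass and reads the exoatmospheric signal off the intercept; a minimal sketch with synthetic data (the modified, square-root-law variant used for the water vapor channel is not shown):

```python
import numpy as np

def langley_extrapolate(airmass, signal):
    """Langley plot: ln(V) = ln(V0) - tau * m.  Returns the
    extrapolated exoatmospheric signal V0 and the optical depth tau."""
    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    return np.exp(intercept), -slope

# Synthetic clear-sky measurements with V0 = 2.0 and tau = 0.3:
m = np.linspace(1.0, 5.0, 20)
v = 2.0 * np.exp(-0.3 * m)
v0, tau = langley_extrapolate(m, v)
```

V0 is the equivalent signal the instrument would record above the atmosphere, which is exactly the quantity needed to set system gains before launch.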
NASA Technical Reports Server (NTRS)
Reagan, J. A.; Pilewskie, P. A.; Scott-Fleming, I. C.; Hermann, B. M.
1986-01-01
Techniques for extrapolating Earth-based spectral band measurements of directly transmitted solar irradiance to equivalent exoatmospheric signal levels were used to aid in determining system gain settings of the Halogen Occultation Experiment (HALOE) sunsensor system being developed for the NASA Upper Atmosphere Research Satellite and for the Stratospheric Aerosol and Gas (SAGE) 2 instrument on the Earth Radiation Budget Satellite. A band transmittance approach was employed for the HALOE sunsensor which has a broad-band channel determined by the spectral responsivity of a silicon detector. A modified Langley plot approach, assuming a square-root law behavior for the water vapor transmittance, was used for the SAGE-2 940 nm water vapor channel.
NASA Technical Reports Server (NTRS)
Stevens, N. J.
1979-01-01
Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high-voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data, and extrapolations from the small test samples to space conditions are made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions, and extrapolations from these simulations to performance in space environments are discussed.
ERIC Educational Resources Information Center
Schroder, Peter C.
1994-01-01
Proposes the study of islands to develop a method of integrating sustainable development with sound resource management that can be extrapolated to more complex, highly populated continental coastal areas. (MDH)
Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development.
Burt, T; Yoshida, K; Lappin, G; Vuong, L; John, C; de Wildt, S N; Sugiyama, Y; Rowland, M
2016-04-01
A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in their relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications and design options demonstrate the versatility and flexibility these approaches offer to drug developers including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. All phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.
NASA Astrophysics Data System (ADS)
Sun, M. L.; Peng, H. B.; Duan, B. H.; Liu, F. F.; Du, X.; Yuan, W.; Zhang, B. T.; Zhang, X. Y.; Wang, T. S.
2018-03-01
Borosilicate glass is a candidate material for the vitrification of high-level radioactive waste, which has attracted extensive interest in studying its radiation durability. In this study, sodium borosilicate glass samples were irradiated with 4 MeV Kr17+, 5 MeV Xe26+ and 0.3 MeV P+ ions, respectively. The hardness of the irradiated samples was measured by nanoindentation in continuous stiffness mode and quasi-continuous stiffness mode, separately. The extrapolation method, mean value method, squared extrapolation method and selected point method were used to obtain the hardness of the irradiated glass, and a comparison among these four methods was conducted. The extrapolation method is recommended for analyzing the hardness of ion-irradiated glass. With increasing irradiation dose, the hardness of samples irradiated with Kr, Xe and P ions dropped and then saturated at 0.02 dpa. Moreover, the fact that both the maximum variations and the decay constants are similar for the three kinds of ions with different energies indicates a common mechanism behind the hardness variation in glasses after irradiation. Furthermore, the hardness of samples irradiated with low-energy P ions, whose range is much smaller than those of the high-energy Kr and Xe ions, followed the same trend as that for the Kr and Xe ions, suggesting that electronic energy loss does not play a significant role in the hardness decrease under low-energy ion irradiation.
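An extrapolation method for the hardness of a thin irradiated layer typically fits hardness against indentation depth within the layer and extrapolates to the surface, suppressing the influence of the unirradiated substrate; a hedged sketch with an assumed fitting window and synthetic data, not the paper's specific procedure:

```python
import numpy as np

def extrapolated_hardness(depth_um, hardness_gpa, window=(0.2, 0.6)):
    """Fit hardness vs. indentation depth over a window inside the
    irradiated layer and extrapolate linearly to zero depth.  The
    window bounds (in micrometres) are illustrative."""
    mask = (depth_um >= window[0]) & (depth_um <= window[1])
    slope, intercept = np.polyfit(depth_um[mask], hardness_gpa[mask], 1)
    return intercept

# Synthetic profile: apparent hardness drifts toward the substrate
# value as the indenter probes deeper.
d = np.linspace(0.05, 1.0, 40)
h = 6.0 + 2.0 * d          # GPa, illustrative
h_surface = extrapolated_hardness(d, h)
```

The fitting window matters: it must sit deep enough to avoid surface artifacts but shallow enough to stay within the ion-damaged layer.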
Kinetics of water loss and the likelihood of intracellular freezing in mouse ova
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazur, P.; Rall, W.F.; Leibo, S.P.
To avoid intracellular freezing and its usually lethal consequences, cells must lose their freezable water before reaching their ice-nucleation temperature. One major factor determining the rate of water loss is the temperature dependence of the water permeability L_p (hydraulic conductivity). Because of the paucity of water permeability measurements at subzero temperatures, that temperature dependence has usually been extrapolated from above-zero measurements. The extrapolation has often been based on an exponential dependence of L_p on temperature. This paper compares the kinetics of water loss based on that extrapolation with that based on an Arrhenius relation between L_p and temperature, and finds substantial differences below -20 to -25 °C. Since the ice-nucleation temperature of mouse ova in the cryoprotectants DMSO and glycerol is usually below -30 °C, the Arrhenius form of the water-loss equation was used to compute the extent of supercooling in ova cooled at rates between 1 and 8 °C/min and the consequent likelihood of intracellular freezing. The predicted likelihood agrees well with that previously observed. The water-loss equation was also used to compute the volumes of ova as a function of cooling rate and temperature. The computed cell volumes agree qualitatively with previously observed volumes, but differ quantitatively. 25 references, 5 figures, 3 tables.
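The two temperature dependences compared in the paper can be sketched as follows; the temperature coefficient g and activation energy ea are illustrative assumptions, not the paper's fitted values:

```python
import math

R_GAS = 8.314  # J/(mol K)

def lp_exponential(temp_k, lp_ref, t_ref, g=0.05):
    """Exponential model: Lp changes by a constant factor per kelvin.
    g (per K) is an assumed coefficient."""
    return lp_ref * math.exp(g * (temp_k - t_ref))

def lp_arrhenius(temp_k, lp_ref, t_ref, ea=55e3):
    """Arrhenius model: ln(Lp) is linear in 1/T.  ea (J/mol) is an
    assumed activation energy."""
    return lp_ref * math.exp(-ea / R_GAS * (1.0 / temp_k - 1.0 / t_ref))

# Both models coincide at the reference temperature (20 C, 293 K)
# but diverge when extrapolated to -25 C (248 K):
ratio = lp_exponential(248.0, 1.0, 293.0) / lp_arrhenius(248.0, 1.0, 293.0)
```

With these illustrative parameters the two forms differ by several-fold at -25 °C, mirroring the paper's point that the choice of extrapolation form dominates the predicted subzero water loss.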
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional methods of linear-based imaging limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are made by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, and model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well, and those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based and therefore does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods, with uncertainty estimation.
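Once a representative set of sub-basins has been selected, the scale-extrapolation step reduces to scaling up their combined specific discharge; a minimal sketch with hypothetical numbers:

```python
def scale_extrapolate(subbasin_q, subbasin_area, basin_area):
    """Estimate large-basin discharge from gauged sub-basins by
    scaling their combined specific discharge (discharge per unit
    area) to the full basin area."""
    specific_q = sum(subbasin_q) / sum(subbasin_area)
    return specific_q * basin_area

# Three hypothetical sub-basins (discharge in m^3/s, area in km^2)
# covering roughly 2% of a 1.6e6 km^2 basin:
q_basin = scale_extrapolate([120.0, 85.0, 240.0],
                            [9000.0, 6500.0, 21000.0], 1.6e6)
```

The hard part, per the paper, is not this arithmetic but choosing sub-basin sets whose climate and hydrology resemble the whole basin; repeating the estimate over multiple such sets brackets the uncertainty.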
Progress in extrapolating divertor heat fluxes towards large fusion devices
NASA Astrophysics Data System (ADS)
Sieglin, B.; Faitsch, M.; Eich, T.; Herrmann, A.; Suttrop, W.; Collaborators, JET; the MST1 Team; the ASDEX Upgrade Team
2017-12-01
Heat load to the plasma facing components is one of the major challenges for the development and design of large fusion devices such as ITER. Present-day fusion experiments can operate with heat load mitigation techniques, e.g. sweeping and impurity seeding, but do not generally require them; for large fusion devices, however, heat load mitigation will be essential. This paper presents the current progress of the extrapolation of steady state and transient heat loads towards large fusion devices. Among transient heat loads, so-called edge localized modes (ELMs) are considered a serious issue for the lifetime of divertor components. In this paper, ITER operation at half field (2.65 T) and half current (7.5 MA) is discussed considering the current material limit for the divertor peak energy fluence of 0.5 MJ/m². Recent studies were successful in describing the observed energy fluence in JET, MAST and ASDEX Upgrade using the pedestal pressure prior to the ELM crash; extrapolating this towards ITER results in a more benign heat load compared to previous scalings. In the presence of magnetic perturbations, the axisymmetry is broken and a 2D heat flux pattern is induced on the divertor target, leading to a local increase of the heat flux, which is a concern for ITER. It is shown that for a moderate divertor broadening S/λ_q > 0.5 the toroidal peaking of the heat flux disappears.
NASA Astrophysics Data System (ADS)
Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid
2016-02-01
In this study, experiments were aimed at thermally analyzing the exhaust valve in an air-cooled internal combustion engine and estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using linear extrapolation and inverse solution methods, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of the linear extrapolation and inverse methods have similar trends, and based on the error analysis they are accurate enough to estimate the thermal contact conductance. Moreover, according to the error analysis, the linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance have been investigated. The results show that increasing the contact pressure substantially increases the thermal contact conductance, while increasing the engine speed decreases it. On the other hand, boosting the air speed increases the thermal contact conductance, and raising the heat flux reduces it. The average calculated error equals 12.9 %.
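The linear-extrapolation step the abstract describes can be illustrated as follows: interior temperature readings are fitted linearly and extrapolated to the contact plane, and the conductance follows as h = q / ΔT. This is a hedged sketch; the depths, temperatures, and heat flux are invented, not the paper's data.

```python
# Hedged sketch: estimate surface temperatures by linearly extrapolating
# interior thermocouple readings to the contact plane (depth 0), then compute
# the thermal contact conductance h = q / (T_hot - T_cold). Data illustrative.

def linfit(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def surface_temp(depths_mm, temps_c):
    _, intercept = linfit(depths_mm, temps_c)   # extrapolated T at depth 0
    return intercept

def contact_conductance(q_w_m2, hot_side, cold_side):
    return q_w_m2 / (surface_temp(*hot_side) - surface_temp(*cold_side))

valve = ([2.0, 4.0, 6.0], [380.0, 370.0, 360.0])   # hotter side readings
seat  = ([2.0, 4.0, 6.0], [300.0, 295.0, 290.0])   # cooler side readings
print(contact_conductance(5.0e5, valve, seat))      # W/(m^2 K)
```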
XUV Photometer System (XPS): New Dark-Count Corrections Model and Improved Data Products
NASA Astrophysics Data System (ADS)
Elliott, J. P.; Vanier, B.; Woods, T. N.
2017-12-01
We present newly updated dark-count calibrations for the SORCE XUV Photometer System (XPS) and the resultant improved data products released in March of 2017. The SORCE mission has provided a 14-year solar spectral irradiance record, and the XPS contributes to this record in the 0.1 nm to 40 nm range. The SORCE spacecraft has been operating in what is known as Day-Only Operations (DO-Op) mode since February of 2014. In this mode it is not possible to collect data, including dark counts, when the spacecraft is in eclipse, as was done prior to DO-Op. Instead, we take advantage of the XPS filter wheel and collect these data when the wheel is in a "dark" position. Further, in this mode dark data are not always available for all observations, requiring an extrapolation in order to calibrate data at these times. To extrapolate, we model the dark counts with a piecewise 2D nonlinear least-squares surface fit in the time and temperature dimensions. Our model allows us to calibrate XPS data into the DO-Op phase of the mission by extrapolating along this surface. The XPS version 11 data product release benefits from this new calibration. We present comparisons of the previous and current calibration methods in addition to planned future upgrades of our data products.
SeqAPASS: Sequence alignment to predict across-species ...
Efforts to shift the toxicity testing paradigm from whole organism studies to those focused on the initiation of toxicity and relevant pathways have led to increased utilization of in vitro and in silico methods. Hence the emergence of high-throughput screening (HTS) programs, such as U.S. EPA ToxCast, and application of the adverse outcome pathway (AOP) framework for identifying and defining biological key events triggered upon perturbation of molecular initiating events and leading to adverse outcomes occurring at a level of organization relevant for risk assessment [1]. With these recent initiatives to harness the power of "the pathway" in describing and evaluating toxicity comes the need to extrapolate data beyond the model species. Sequence alignment to predict across-species susceptibility (SeqAPASS) is a web-based tool that allows the user to begin to understand how broadly HTS data or AOP constructs may plausibly be extrapolated across species, while describing the relative intrinsic susceptibility of different taxa to chemicals with known modes of action (e.g., pharmaceuticals and pesticides). The tool rapidly and strategically assesses available molecular target information to describe protein sequence similarity at the primary amino acid sequence, conserved domain, and individual amino acid residue levels. This in silico approach to species extrapolation was designed to automate and streamline the relatively complex and time-consuming process of co...
High speed civil transport: Sonic boom softening and aerodynamic optimization
NASA Technical Reports Server (NTRS)
Cheung, Samson
1994-01-01
An improvement in sonic boom extrapolation techniques has been a goal of aerospace designers for years, because the linear acoustic theory developed in the 1960s is incapable of predicting the nonlinear phenomenon of shock wave propagation, while CFD techniques are too computationally expensive to employ on sonic boom problems. Therefore, this research focused on the development of a fast and accurate sonic boom extrapolation method that solves the Euler equations for axisymmetric flow. This new technique has brought sonic boom extrapolation up to the standards of the 1990s. Parallel computing is a fast-growing subject in the field of computer science because of its promising speed. A new optimizer (IIOWA) for the parallel computing environment has been developed and tested for aerodynamic drag minimization. This is a promising method for CFD optimization, making use of the computational resources of workstations, which, unlike supercomputers, spend most of their time idle. Finally, the OAW concept is attractive because of its overall theoretical performance. In order to fully understand the concept, a wind-tunnel model was built and is currently being tested at NASA Ames Research Center. The CFD calculations performed under this cooperative agreement helped to identify the problem of flow separation, and also aided the design by optimizing the wing deflection for roll trim.
Slaughter, Andrew R; Palmer, Carolyn G; Muller, Wilhelmine J
2007-04-01
In aquatic ecotoxicology, acute to chronic ratios (ACRs) are often used to predict chronic responses from available acute data to derive water quality guidelines, despite many problems associated with this method. This paper explores the comparative protectiveness and accuracy of predicted guideline values derived from the ACR, linear regression analysis (LRA), and multifactor probit analysis (MPA) extrapolation methods applied to acute toxicity data for aquatic macroinvertebrates. Although the authors of the LRA and MPA methods advocate the use of extrapolated lethal effects in the 0.01% to 10% lethal concentration (LC0.01-LC10) range to predict safe chronic exposure levels to toxicants, the use of an extrapolated LC50 value divided by a safety factor of 5 was additionally explored here because of the higher statistical confidence surrounding the LC50 value. The LRA LC50/5 method was found to compare most favorably with available experimental chronic toxicity data and was therefore most likely to be sufficiently protective, although further validation with the use of additional species is needed. Values derived by the ACR method were the least protective. It is suggested that the LRA LC50/5 method could replace ACRs in the development of water quality guidelines.
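The LC50/5 idea described above can be sketched in a few lines: estimate the LC50 from a regression of mortality against log concentration, then divide by the safety factor. This is a simplified illustration (a plain linear fit rather than full probit analysis), with invented concentrations and mortalities.

```python
# Hedged sketch of the LC50/5 approach: estimate LC50 from a linear fit of
# percent mortality against log10(concentration), then divide by a safety
# factor of 5 for a provisional guideline value. Data are illustrative only.
import math

def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def lc50(concs_mg_l, pct_mortality):
    xs = [math.log10(c) for c in concs_mg_l]
    slope, intercept = linfit(xs, pct_mortality)
    return 10.0 ** ((50.0 - intercept) / slope)   # conc at 50% mortality

conc = [1.0, 10.0, 100.0]
mort = [10.0, 50.0, 90.0]
print(lc50(conc, mort) / 5.0)    # guideline = LC50 / safety factor of 5
```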
Interim methods for development of inhalation reference concentrations. Draft report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackburn, K.; Dourson, M.; Erdreich, L.
1990-08-01
An inhalation reference concentration (RfC) is an estimate of continuous inhalation exposure over a human lifetime that is unlikely to pose significant risk of adverse noncancer health effects and serves as a benchmark value for assisting in risk management decisions. Derivation of an RfC involves dose-response assessment of animal data to determine the exposure levels at which no significant increase in the frequency or severity of adverse effects between the exposed population and its appropriate control exists. The assessment requires an interspecies dose extrapolation from a no-observed-adverse-effect level (NOAEL) exposure concentration of an animal to a human equivalent NOAEL (NOAEL(HBC)). The RfC is derived from the NOAEL(HBC) by applying uncertainty factors, generally of order-of-magnitude size. Intermittent exposure scenarios in animals are extrapolated to chronic continuous human exposures. Relationships between external exposures and internal doses depend upon complex simultaneous and consecutive processes of absorption, distribution, metabolism, storage, detoxification, and elimination. To estimate NOAEL(HBC)s when chemical-specific physiologically-based pharmacokinetic models are not available, a dosimetric extrapolation procedure based on anatomical and physiological parameters of the exposed human and animal and the physical parameters of the toxic chemical has been developed which gives equivalent or more conservative exposure concentration values than those that would be obtained with a PB-PK model.
Error analysis regarding the calculation of nonlinear force-free field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.
2012-02-01
Magnetic field extrapolation is an alternative method to study chromospheric and coronal magnetic fields. In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352:343, 1990) have been used to study the errors of nonlinear force-free (NLFF) fields based on the force-free factor α. Three NLFF fields are extrapolated by the approximate vertical integration (AVI; Song et al., Astrophys. J. 649:1084, 2006), boundary integral equation (BIE; Yan and Sakurai, Sol. Phys. 195:89, 2000) and optimization (Opt.; Wiegelmann, Sol. Phys. 219:87, 2004) methods. Compared with the first semi-analytical field, it is found that the mean values of absolute relative standard deviations (RSD) of α along field lines are about 0.96-1.19, 0.63-1.07 and 0.43-0.72 for the AVI, BIE and Opt. fields, respectively. For the second semi-analytical field, they are about 0.80-1.02, 0.67-1.34 and 0.33-0.55 for the AVI, BIE and Opt. fields, respectively. As for the analytical field, the calculation error of <|RSD|> is about 0.1-0.2. It is also found that the RSD does not apparently depend on the length of the field line. These provide a basic estimate of the deviation of the extrapolated fields obtained by the proposed methods from the real force-free field.
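The error metric used above is straightforward to state in code: for a perfectly force-free field, α is constant along each field line, so its relative standard deviation along the line vanishes. A minimal sketch of that diagnostic, with invented sample values:

```python
# Hedged sketch: relative standard deviation (RSD) of the force-free factor
# alpha sampled along one field line. For an exactly force-free field, alpha
# is constant along the line, so RSD -> 0; larger RSD means larger deviation.
from statistics import pstdev, fmean

def rsd(alpha_samples):
    return pstdev(alpha_samples) / abs(fmean(alpha_samples))

print(rsd([1.0, 1.0, 1.0]))     # exactly force-free along this line: 0.0
print(rsd([0.8, 1.0, 1.2]))     # nonzero spread -> nonzero RSD
```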
Composite vibrational spectroscopy of the group 12 difluorides: ZnF2, CdF2, and HgF2.
Solomonik, Victor G; Smirnov, Alexander N; Navarkin, Ilya S
2016-04-14
The vibrational spectra of group 12 difluorides, MF2 (M = Zn, Cd, Hg), were investigated via coupled cluster singles, doubles, and perturbative triples, CCSD(T), including core correlation, with a series of correlation consistent basis sets ranging in size from triple-zeta through quintuple-zeta quality, which were then extrapolated to the complete basis set (CBS) limit using a variety of extrapolation procedures. The explicitly correlated coupled cluster method, CCSD(T)-F12b, was employed as well. Although exhibiting quite different convergence behavior, the F12b method yielded the CBS limit estimates closely matching more computationally expensive conventional CBS extrapolations. The convergence with respect to basis set size was examined for the contributions entering into composite vibrational spectroscopy, including those from higher-order correlation accounted for through the CCSDT(Q) level of theory, second-order spin-orbit coupling effects assessed within four-component and two-component relativistic formalisms, and vibrational anharmonicity evaluated via a perturbative treatment. Overall, the composite results are in excellent agreement with available experimental values, except for the CdF2 bond-stretching frequencies compared to spectral assignments proposed in a matrix isolation infrared and Raman study of cadmium difluoride vapor species [Loewenschuss et al., J. Chem. Phys. 50, 2502 (1969); Givan and Loewenschuss, J. Chem. Phys. 72, 3809 (1980)]. These assignments are called into question in the light of the composite results.
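A common concrete form of the CBS extrapolation mentioned above is the two-point inverse-cube formula, assuming the correlation energy converges as E(n) = E_CBS + A·n⁻³ in the cardinal number n of the basis set. The abstract does not specify which extrapolation procedures were used, so this is one standard possibility with invented energies:

```python
# Hedged sketch of a standard two-point complete-basis-set extrapolation,
# assuming E(n) = E_CBS + A * n**-3 with cardinal numbers n (3 = triple-zeta,
# 4 = quadruple-zeta, ...). Solving the two-point system for E_CBS gives:

def cbs_two_point(e_n, n, e_m, m):
    return (n**3 * e_n - m**3 * e_m) / (n**3 - m**3)

# Illustrative correlation energies (hartree), not values from the paper:
e_tz, e_qz = -0.512, -0.524
print(cbs_two_point(e_qz, 4, e_tz, 3))
```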
NASA Technical Reports Server (NTRS)
Potvin, Jean; Ray, Eric
2017-01-01
We describe a new calculation of the opening shock factor C_k characterizing the inflation performance of NASA's Orion spacecraft main and drogue parachutes opening under a reefing constraint (1st stage reefing), as currently tested in the Capsule Parachute Assembly System (CPAS) program. This calculation is based on an application of the Momentum-Impulse Theorem at low mass ratio (R_m < 10^-1) and on an earlier analysis of the opening performance of drogues decelerating point masses and inflating along horizontal trajectories. Herein we extend the reach of the Theorem to include the effects of payload drag and gravitational impulse during near-vertical motion - both important pre-requisites for CPAS parachute analysis. The result is a family of C_k versus R_m curves which can be used for extrapolating beyond the drop-tested envelope. The paper proves this claim in the case of the CPAS Mains and Drogues opening while trailing either a Parachute Compartment Drop Test Vehicle or a Parachute Test Vehicle (an Orion capsule boilerplate). It is seen that in all cases the values of the opening shock factor can be extrapolated over a range in mass ratio that is at least twice that of the test drop data.
Precise algorithm to generate random sequential adsorption of hard polygons at saturation
NASA Astrophysics Data System (ADS)
Zhang, G.
2018-04-01
Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
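The extrapolation issue the abstract addresses is easy to see in the simplest RSA setting, the one-dimensional "car parking" problem: finite-time simulations only approach the jamming (saturation) density asymptotically, so classical studies extrapolate to infinite time. A toy illustration (not the paper's polygon algorithm):

```python
# Hedged illustration of RSA in one dimension ("car parking"): unit segments
# are placed uniformly at random without overlap. Finite-attempt densities
# approach Renyi's jamming limit (~0.7476) only asymptotically, which is why
# saturation densities were traditionally obtained by extrapolation.
import random

def rsa_1d(line_length, n_attempts, seed=0):
    random.seed(seed)
    placed = []                          # left edges of accepted unit segments
    for _ in range(n_attempts):
        x = random.uniform(0.0, line_length - 1.0)
        if all(abs(x - p) >= 1.0 for p in placed):
            placed.append(x)
    return len(placed) / line_length     # coverage fraction

print(rsa_1d(100.0, 20000))              # close to the jamming limit
```

The algorithm of the paper avoids this extrapolation entirely by tracking the remaining insertable regions and terminating in finite time exactly at saturation.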
Analysis of Impulse Load on VEGA SRM Nozzle During Ignition Transient and Effects on TVC Actuators
NASA Astrophysics Data System (ADS)
Fotino, Domenico; Leofanti, Jose Luis; Serraglia, Ferruccio
2012-07-01
During the VEGA development phase, and in particular during the Zefiro 23 (second stage motor) on-ground firing tests, values of impulse load on the actuators very close to the requirement were experienced. As a consequence, an activity for the extrapolation of these loads to the flight configuration (longer nozzle and vacuum conditions) was carried out, and a mathematical model has been developed with this aim. After providing an overview of the differences between the ground and flight cases from the fluid dynamic point of view, the paper describes the results of the mathematical model both in terms of correlation with respect to ground tests and of extrapolation of the loads to the flight configuration. The main effects of this load on the actuators are also addressed.
Scaling laws for testing of high lift airfoils under heavy rainfall
NASA Technical Reports Server (NTRS)
Bilanin, A. J.
1985-01-01
The results of studies regarding the effect of rainfall about aircraft are briefly reviewed. It is found that performance penalties on airfoils have been identified in subscale tests. For this reason, it is of great importance that scaling laws be developed to aid in the extrapolation of these data to full scale. The present investigation represents an attempt to develop scaling laws for testing subscale airfoils under heavy rain conditions. Attention is given to rain statistics, airfoil operation in heavy rain, scaling laws, thermodynamics of condensation and/or evaporation, rainfall and airfoil scaling, aspects of splash back, film thickness, rivulets, and flap slot blockage. It is concluded that the extrapolation of airfoil performance data taken at subscale under simulated heavy rain conditions to full scale must be undertaken with caution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molina, Raquel; Hu, Bitao; Doering, Michael
Several lattice QCD simulations of meson-meson scattering in p-wave and Isospin = 1 in Nf = 2 + 1 flavours have been carried out recently. Unitarized Chiral Perturbation Theory is used to perform extrapolations to the physical point. In contrast to previous findings on the analyses of Nf = 2 lattice data, where most of the data seems to be in agreement, some discrepancies are detected in the Nf = 2 + 1 lattice data analyses, which could be due to different masses of the strange quark, meson decay constants, initial constraints in the simulation, or other lattice artifacts. In addition, the low-energy constants are compared to the ones from a recent analysis of Nf = 2 lattice data.
Fuzzy logic and causal reasoning with an 'n' of 1 for diagnosis and treatment of the stroke patient.
Helgason, Cathy M; Jobe, Thomas H
2004-03-01
The current scientific model for clinical decision-making is founded on binary or Aristotelian logic, classical set theory, and probability-based statistics. Evidence-based medicine has been established as the basis for clinical recommendations. There is a problem with this scientific model when the physician must diagnose and treat the individual patient. The problem is a paradox: the scientific model of evidence-based medicine is based upon a hypothesis aimed at the group, and therefore its conclusions can be extrapolated to the individual patient only to a degree. This extrapolation depends upon the expertise of the physician. A multivalued, fuzzy logic-based scientific model allows this expertise to be numerically represented and solves the clinical paradox of evidence-based medicine.
NASA Astrophysics Data System (ADS)
Rau, T. H.
1982-07-01
Measured and extrapolated data define the bioacoustic environments produced by a gasoline engine driven cabin leakage tester operating outdoors on a concrete apron at normal rated conditions. Near-field data are presented for 37 locations for a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise level, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 to 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.
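The distance extrapolation underlying far-field contours of this kind can be sketched with the spherical-spreading rule (-6 dB per doubling of distance). This is a simplified illustration; the actual normalization also corrects for atmospheric absorption and meteorological conditions, and the numbers are invented.

```python
# Hedged sketch: extrapolate a measured sound pressure level to another
# distance assuming spherical spreading only (-6 dB per doubling of distance).
# Real far-field normalization adds atmospheric-absorption corrections.
import math

def spl_at(spl_ref_db, r_ref_m, r_m):
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)

print(spl_at(94.0, 10.0, 1600.0))   # a 10 m measurement carried out to 1600 m
```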
Interpolation/extrapolation technique with application to hypervelocity impact of space debris
NASA Technical Reports Server (NTRS)
Rule, William K.
1992-01-01
A new technique for the interpolation/extrapolation of engineering data is described. The technique easily allows for the incorporation of additional independent variables, and the most suitable data in the data base is automatically used for each prediction. The technique provides diagnostics for assessing the reliability of the prediction. Two sets of predictions made for known 5-degree-of-freedom, 15-parameter functions using the new technique produced an average coefficient of determination of 0.949. Here, the technique is applied to the prediction of damage to the Space Station from hypervelocity impact of space debris. A new set of impact data is presented for this purpose. Reasonable predictions for bumper damage were obtained, but predictions of pressure wall and multilayer insulation damage were poor.
NASA Astrophysics Data System (ADS)
Lee, G. H.; Arnold, S. T.; Eaton, J. G.; Sarkas, H. W.; Bowen, K. H.; Ludewigt, C.; Haberland, H.
1991-03-01
The photodetachment spectra of (H2O)_n^- (n = 2-69) and (NH3)_n^- (n = 41-1100) have been recorded, and vertical detachment energies (VDEs) were obtained from the spectra. For both systems, the cluster anion VDEs increase smoothly with increasing size, and most species plot linearly with n^(-1/3), extrapolating to a VDE (n = ∞) value which is very close to the photoelectric threshold energy for the corresponding condensed-phase solvated electron system. The linear extrapolation of these data to the analogous condensed-phase property suggests that these cluster anions are gas-phase counterparts of solvated electrons, i.e. they are embryonic forms of hydrated and ammoniated electrons which mature with increasing cluster size toward condensed-phase solvated electrons.
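The n^(-1/3) extrapolation described above amounts to a linear fit in x = n^(-1/3) whose intercept is the bulk (n = ∞) value. A minimal sketch with synthetic data (the coefficients below are invented, not the measured VDEs):

```python
# Hedged sketch: fit cluster-anion vertical detachment energies against
# n**(-1/3) and read off the intercept, i.e. the n -> infinity limit that the
# abstract compares to the bulk photoelectric threshold. Synthetic data.

def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def bulk_limit(ns, vdes_ev):
    xs = [n ** (-1.0 / 3.0) for n in ns]
    _, intercept = linfit(xs, vdes_ev)
    return intercept                    # VDE at n = infinity

ns = [8, 27, 64, 125]
vdes = [3.2 - 2.0 * n ** (-1.0 / 3.0) for n in ns]   # exactly linear toy data
print(bulk_limit(ns, vdes))
```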
NASA Technical Reports Server (NTRS)
Margaria, Tiziana (Inventor); Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor); Steffen, Bernard (Inventor)
2010-01-01
Systems, methods and apparatus are provided through which, in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements-based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation via automata learning. This may support the systematic completion of the requirements, which are by nature partial, while providing focus on the most prominent scenarios. This may generalize requirement skeletons by extrapolation and may indicate, by way of automatically generated traces, where the requirement specification is too loose and additional information is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirman, C R.; Sweeney, Lisa M.; Corley, Rick A.
2005-04-01
Reference values, including an oral reference dose (RfD) and an inhalation reference concentration (RfC), were derived for propylene glycol methyl ether (PGME), and an oral RfD was derived for its acetate (PGMEA). These values were based upon transient sedation observed in F344 rats and B6C3F1 mice during a two-year inhalation study. The dose-response relationship for sedation was characterized using internal dose measures as predicted by a physiologically based pharmacokinetic (PBPK) model for PGME and its acetate. PBPK modeling was used to account for changes in rodent physiology and metabolism due to aging and adaptation, based on data collected during weeks 1, 2, 26, 52, and 78 of a chronic inhalation study. The peak concentration of PGME in richly perfused tissues was selected as the most appropriate internal dose measure based upon a consideration of the mode of action for sedation and similarities in tissue partitioning between brain and other richly perfused tissues. Internal doses (peak tissue concentrations of PGME) were designated as either no-observed-adverse-effect levels (NOAELs) or lowest-observed-adverse-effect levels (LOAELs) based upon the presence or absence of sedation at each time-point, species, and sex in the two-year study. Distributions of the NOAEL and LOAEL values expressed in terms of internal dose were characterized using an arithmetic mean and standard deviation, with the mean internal NOAEL serving as the basis for the reference values, which was then divided by appropriate uncertainty factors. Where data permitted, chemical-specific adjustment factors were derived to replace default uncertainty factor values of ten. Nonlinear kinetics were predicted by the model in all species at PGME concentrations exceeding 100 ppm, which complicates interspecies and low-dose extrapolations.
To address this complication, reference values were derived using two approaches which differ with respect to the order in which these extrapolations were performed: (1) uncertainty factor application followed by interspecies extrapolation (PBPK modeling); and (2) interspecies extrapolation followed by uncertainty factor application. The resulting reference values for these two approaches are substantially different, with values from the former approach being 7-fold higher than those from the latter approach. Such a striking difference between the two approaches reveals an underlying issue that has received little attention in the literature regarding the application of uncertainty factors and interspecies extrapolations to compounds where saturable kinetics occur in the range of the NOAEL. Until such discussions have taken place, reference values based on the latter approach are recommended for risk assessments involving human exposures to PGME and PGMEA.
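Why the order of operations matters can be seen with a toy saturable (Michaelis-Menten) dose model: dividing by the uncertainty factor before versus after converting between internal dose and exposure concentration gives different answers whenever the NOAEL sits in the nonlinear range. All parameters below are invented for illustration and bear no relation to the PGME assessment.

```python
# Hedged toy model: with a saturable (Michaelis-Menten) relation between
# exposure concentration and internal dose, applying an uncertainty factor
# (UF) before vs. after the dose-to-concentration conversion yields different
# reference values. All parameter values are hypothetical.

def internal_dose(c, vmax, km):
    return vmax * c / (km + c)

def exposure_for_dose(d, vmax, km):
    return km * d / (vmax - d)          # inverse of internal_dose

vmax_h, km_h = 100.0, 50.0              # hypothetical human kinetics
uf = 30.0                               # composite uncertainty factor
noael_internal = 80.0                   # NOAEL internal dose, near saturation

rfc_uf_first = exposure_for_dose(noael_internal / uf, vmax_h, km_h)
rfc_uf_last = exposure_for_dose(noael_internal, vmax_h, km_h) / uf

print(rfc_uf_first, rfc_uf_last)        # the two orders disagree
```

In the linear (low-concentration) regime the two orders coincide; the discrepancy appears only because the NOAEL lies in the saturable range, which is exactly the situation the abstract flags.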
A retrospective evaluation of traffic forecasting techniques.
DOT National Transportation Integrated Search
2016-08-01
Traffic forecasting techniques, such as extrapolation of previous years' traffic volumes, regional travel demand models, or local trip generation rates, help planners determine needed transportation improvements. Thus, knowing the accuracy of t...
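The simplest of the techniques this abstract names, extrapolating previous years' volumes, reduces to a linear trend fit. A minimal sketch with invented counts (not data from the evaluation):

```python
# Hedged sketch of the simplest forecasting technique named in the abstract:
# extrapolate previous years' traffic counts with a linear trend. Counts are
# illustrative annual average daily traffic (AADT) values, not real data.

def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def forecast(years, volumes, target_year):
    slope, intercept = linfit(years, volumes)
    return slope * target_year + intercept

years = [2010, 2011, 2012, 2013]
aadt = [10000, 10400, 10800, 11200]
print(forecast(years, aadt, 2020))
```

Retrospective evaluations like this one compare such extrapolations against the volumes later observed, which is how their accuracy is quantified.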
Extrapolate the Past... or Invent the Future
Vinod Khosla
2017-12-09
Berkeley Lab's Environmental Energy Technologies Division launches its Distinguished Lecturer series with a talk by Vinod Khosla, founder of Khosla Ventures, whose mission is to "assist great entre...
Numerical methods in acoustics
NASA Astrophysics Data System (ADS)
Candel, S. M.
This paper presents a survey of some computational techniques applicable to acoustic wave problems. Recent advances in wave extrapolation methods, spectral methods and boundary integral methods are discussed and illustrated by specific calculations.
Transition probabilities of Br II
NASA Technical Reports Server (NTRS)
Bengtson, R. D.; Miller, M. H.
1976-01-01
Absolute transition probabilities of the three most prominent visible Br II lines are measured in emission. Results compare well with Coulomb approximations and with line strengths extrapolated from trends in homologous atoms.
Mapping the knowledge base for maritime health: 3 illness and injury in seafarers.
Carter, Tim
2011-01-01
Recent studies of illness and injury in seafarers and of disease risk factors have been mapped. There is a good knowledge base on some aspects of health, especially on causes of death. By contrast there are very few studies on aspects of current importance, such as illness at sea, the scope for its prevention, and its treatment and outcome. Results are presented in terms of the settings in which the investigations were conducted: medical fitness examinations at recruitment and periodically, illness and injury at sea, telemedical advice, evacuation and urgent port referrals, repatriations, illness at other times in serving seafarers, health related cessation of work, and illness after cessation of work. Mortality studies were mapped in a similar way, as were population-based surveys of health and of risk factors. The scope for valid extrapolation of the results from studies in other populations to seafarers is discussed. A more immediate problem of extrapolation relates to the current knowledge base, which is largely derived from own nationality seafarers of the traditional developed world maritime nations. It is uncertain whether this can be validly extrapolated to seafarers from the major crewing countries, who come from populations with very different patterns of illness. Existing studies mirror the priorities of those who commissioned them, in that many of the most valid ones relate to the overall lifetime risks of seafaring in developed countries. These enable comparisons to be made with other occupational groups. The major concerns of many interest groups in the maritime sector about health are now focused on the risks within a single contract period and how these can most efficiently be minimized. Studies on this are limited in scope, are of uncertain validity, and are often used for operational purposes rather than entering the scientific literature. 
Gaps in knowledge about health risks over a relatively short timescale in seafarers from the major crewing countries have been identified, and the uncertainties about extrapolating from studies in traditional maritime nations to the majority of the world's seafarers means that a major redirection of effort is needed if maritime health practice is to have a sound knowledge base on illness and injury risks in the future.
Animal models and conserved processes
2012-01-01
Background The concept of conserved processes presents unique opportunities for using nonhuman animal models in biomedical research. However, the concept must be examined in the context that humans and nonhuman animals are evolved, complex, adaptive systems. Given that nonhuman animals are examples of living systems that are differently complex from humans, what does the existence of a conserved gene or process imply for inter-species extrapolation? Methods We surveyed the literature including philosophy of science, biological complexity, conserved processes, evolutionary biology, comparative medicine, anti-neoplastic agents, inhalational anesthetics, and drug development journals in order to determine the value of nonhuman animal models when studying conserved processes. Results Evolution through natural selection has employed components and processes not only to produce the same outcomes among species but also to generate different functions and traits. Many genes and processes are conserved, but new combinations of these processes or different regulation of the genes involved in these processes have resulted in unique organisms. Further, there is a hierarchy of organization in complex living systems. At some levels, the components are simple systems that can be analyzed by mathematics or the physical sciences, while at other levels the system cannot be fully analyzed by reducing it to a physical system. The study of complex living systems must alternate between focusing on the parts and examining the intact whole organism while taking into account the connections between the two. Systems biology aims for this holism. We examined the actions of inhalational anesthetic agents and anti-neoplastic agents in order to address what the characteristics of complex living systems imply for inter-species extrapolation of traits and responses related to conserved processes.
Conclusion We conclude that even the presence of conserved processes is insufficient for inter-species extrapolation when the trait or response being studied is located at higher levels of organization, is in a different module, or is influenced by other modules. However, when the examination of the conserved process occurs at the same level of organization or in the same module, and hence is subject to study solely by reductionism, then extrapolation is possible. PMID:22963674
Lyons, Ronan A.; Kendrick, Denise; Towner, Elizabeth M.; Christie, Nicola; Macey, Steven; Coupland, Carol; Gabbe, Belinda J.
2011-01-01
Background Current methods of measuring the population burden of injuries rely on many assumptions and limited data available to the global burden of diseases (GBD) studies. The aim of this study was to compare the population burden of injuries using different approaches from the UK Burden of Injury (UKBOI) and GBD studies. Methods and Findings The UKBOI was a prospective cohort of 1,517 injured individuals that collected patient-reported outcomes. Extrapolated outcome data were combined with multiple sources of morbidity and mortality data to derive population metrics of the burden of injury in the UK. Participants were injured patients recruited from hospitals in four UK cities and towns: Swansea, Nottingham, Bristol, and Guildford, between September 2005 and April 2007. Patient-reported changes in quality of life using the EQ-5D at baseline, 1, 4, and 12 months after injury provided disability weights used to calculate the years lived with disability (YLDs) component of disability adjusted life years (DALYs). DALYs were calculated for the UK and extrapolated to global estimates using both UKBOI and GBD disability weights. Estimated numbers (and rates per 100,000) for UK population extrapolations were 750,999 (1,240) for hospital admissions, 7,982,947 (13,339) for emergency department (ED) attendances, and 22,185 (36.8) for injury-related deaths in 2005. Nonadmitted ED-treated injuries accounted for 67% of YLDs. Estimates for UK DALYs amounted to 1,771,486 (82% due to YLDs), compared with 669,822 (52% due to YLDs) using the GBD approach. Extrapolating patient-derived disability weights to GBD estimates would increase injury-related DALYs 2.6-fold. Conclusions The use of disability weights derived from patient experiences combined with additional morbidity data on ED-treated patients and inpatients suggests that the absolute burden of injury is higher than previously estimated. 
These findings have substantial implications for improving measurement of the national and global burden of injury. Please see later in the article for the Editors' Summary PMID:22162954
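The burden metrics in this record combine years lived with disability (YLDs) and years of life lost (YLLs). A minimal sketch of the standard arithmetic, with purely illustrative numbers rather than the UKBOI figures:

```python
def ylds(cases, disability_weight, duration_years):
    """Years lived with disability: cases x disability weight x average duration."""
    return cases * disability_weight * duration_years

def ylls(deaths, remaining_life_expectancy):
    """Years of life lost: deaths x standard remaining life expectancy at death."""
    return deaths * remaining_life_expectancy

def dalys(yld, yll):
    """Disability adjusted life years = YLD + YLL."""
    return yld + yll

# Hypothetical inputs, for illustration only:
yld = ylds(cases=100_000, disability_weight=0.2, duration_years=0.5)   # 10,000
yll = ylls(deaths=1_000, remaining_life_expectancy=40.0)               # 40,000
total = dalys(yld, yll)                                                # 50,000 DALYs
```

The UKBOI approach differs from GBD mainly in where the disability weights come from (patient-reported EQ-5D trajectories rather than panel-assigned weights), not in this basic accounting identity.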
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-01-01
Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, besides interpolation, were the determination of the extrapolation performance of an ANN model, which was developed for the prediction of DO content in the Danube River, and the assessment of the relationship between the significance of inputs and prediction error in the presence of values out of the range of training. The applied ANN is a polynomial neural network (PNN) which performs embedded selection of the most important inputs during learning, and provides a model in the form of linear and non-linear polynomial functions, which can then be used for a detailed analysis of the significance of inputs. The available dataset, which contained 1912 monitoring records for 17 water quality parameters, was split into a "regular" subset that contains normally distributed and low-variability data, and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R² = 0.82), but it was not robust in extrapolation (R² = 0.63). The analysis of extrapolation results has shown that the prediction errors are correlated with the significance of inputs. Namely, out-of-training-range values of the inputs with low importance do not significantly affect the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content.
It was observed that DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over pH and BOD. Copyright © 2017 Elsevier B.V. All rights reserved.
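The interpolation/extrapolation gap this record describes is easy to reproduce in miniature. The sketch below uses a plain cubic polynomial fit as a stand-in for the PNN, on synthetic data (not the Danube monitoring records): in-range R² is high, while R² on out-of-training-range inputs collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def r2(y, yhat):
    """Coefficient of determination."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def f(x):
    """Synthetic nonlinear response standing in for DO vs. a driver."""
    return np.exp(-x) * np.sin(3.0 * x)

# Train on x in [0, 1] with a little noise
x_train = rng.uniform(0.0, 1.0, 200)
y_train = f(x_train) + rng.normal(0.0, 0.01, x_train.size)
coeffs = np.polyfit(x_train, y_train, deg=3)   # stand-in for the PNN

x_interp = np.linspace(0.05, 0.95, 50)         # inside the training range
x_extrap = np.linspace(1.2, 1.8, 50)           # outside it ("extreme" subset)

r2_in = r2(f(x_interp), np.polyval(coeffs, x_interp))     # high
r2_out = r2(f(x_extrap), np.polyval(coeffs, x_extrap))    # degrades sharply
```

The polynomial continues its fitted trend outside [0, 1] while the true response turns over, which is the generic failure mode the abstract reports for the non-linear PNN.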
Proton radius from electron scattering data
NASA Astrophysics Data System (ADS)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; Meekins, David; Norum, Blaine; Sawatzky, Brad
2016-05-01
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon, and Stanford. Methods: We make use of stepwise regression techniques using the F test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F test as well as the Akaike information criterion justifies using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q2 data on GE to select functions which extrapolate to high Q2, we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, GE(Q2) = (1 + Q2/0.66 GeV2)^(-2).
Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm; either from linear extrapolation of the extremely-low-Q2 data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering results and the muonic hydrogen results are consistent. It is the atomic hydrogen results that are the outliers.
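The linear-extrapolation step rests on the standard relation between the charge radius and the slope of the form factor at the origin, r_p² = −6 dGE/dQ²|₀. A sketch of that extraction on synthetic data generated from the dipole quoted in the abstract (the data points and fit range below are assumptions, not the Mainz or Saskatoon sets):

```python
import numpy as np

HBARC2 = 0.0389379  # (hbar*c)^2 in GeV^2 * fm^2, to convert GeV^-2 -> fm^2

def dipole_GE(q2, lam2=0.66):
    """Dipole form factor with the 0.66 GeV^2 scale quoted in the abstract."""
    return (1.0 + q2 / lam2) ** -2

# Synthetic "very low Q^2" data (GeV^2), where a linear fit is justified
q2 = np.linspace(1e-4, 4e-3, 40)
slope, intercept = np.polyfit(q2, dipole_GE(q2), 1)  # slope ~ dGE/dQ^2 at 0

# r_p^2 = -6 * dGE/dQ^2 at Q^2 = 0, converted from GeV^-2 to fm^2
r_p = np.sqrt(-6.0 * slope * HBARC2)   # ~0.84 fm, the muonic hydrogen value
```

Restricting the fit to very low Q² keeps the curvature bias of the linear model negligible, which is the point of the stepwise-regression variable selection described above.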
NASA Astrophysics Data System (ADS)
Häberlen, Oliver D.; Chung, Sai-Cheong; Stener, Mauro; Rösch, Notker
1997-03-01
A series of gold clusters spanning the size range from Au6 through Au147 (with diameters from 0.7 to 1.7 nm) in icosahedral, octahedral, and cuboctahedral structure has been theoretically investigated by means of a scalar relativistic all-electron density functional method. One of the main objectives of this work was to analyze the convergence of cluster properties toward the corresponding bulk metal values and to compare the results obtained for the local density approximation (LDA) to those for a generalized gradient approximation (GGA) to the exchange-correlation functional. The average gold-gold distance in the clusters increases with their nuclearity and correlates essentially linearly with the average coordination number in the clusters. An extrapolation to the bulk coordination of 12 yields a gold-gold distance of 289 pm in LDA, very close to the experimental bulk value of 288 pm, while the extrapolated GGA gold-gold distance is 297 pm. The cluster cohesive energy varies linearly with the inverse of the calculated cluster radius, indicating that the surface-to-volume ratio is the primary determinant of the convergence of this quantity toward bulk. The extrapolated LDA binding energy per atom, 4.7 eV, overestimates the experimental bulk value of 3.8 eV, while the GGA value, 3.2 eV, underestimates the experiment by almost the same amount. The calculated ionization potentials and electron affinities of the clusters may be related to the metallic droplet model, although deviations due to the electronic shell structure are noticeable. The GGA extrapolation to bulk values yields 4.8 and 4.9 eV for the ionization potential and the electron affinity, respectively, remarkably close to the experimental polycrystalline work function of bulk gold, 5.1 eV. Gold 4f core level binding energies were calculated for sites with bulk coordination and for different surface sites. 
The core level shifts for the surface sites are all positive and distinguish among the corner, edge, and face-centered sites; sites in the first subsurface layer show still small positive shifts.
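The bulk extrapolations in this record are linear fits against a convergence variable (inverse cluster radius for the cohesive energy, average coordination number for the bond length). A sketch of the coordination-number version with illustrative numbers, not the paper's computed values:

```python
import numpy as np

# Hypothetical cluster data: average Au-Au distance (pm) vs. average
# coordination number; values chosen only to illustrate the fit, not
# taken from the density functional calculations.
coord = np.array([4.8, 6.0, 7.2, 8.5, 9.3])
dist = np.array([273.0, 276.0, 278.5, 281.5, 283.0])

# Linear fit, then extrapolate to the bulk coordination number of 12
slope, intercept = np.polyfit(coord, dist, 1)
d_bulk = slope * 12.0 + intercept   # ~289 pm for these illustrative inputs
```

The same `polyfit`/evaluate pattern applies to the cohesive-energy-vs-1/R extrapolation; only the abscissa changes.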
10 CFR 960.3-1-4-2 - Site nomination for characterization.
Code of Federal Regulations, 2012 CFR
2012-01-01
... testing of core samples for the evaluation of geochemical and engineering rock properties, and chemical... industrial activities; and extrapolations of regional data to estimate site-specific characteristics and...
10 CFR 960.3-1-4-2 - Site nomination for characterization.
Code of Federal Regulations, 2013 CFR
2013-01-01
... testing of core samples for the evaluation of geochemical and engineering rock properties, and chemical... industrial activities; and extrapolations of regional data to estimate site-specific characteristics and...
10 CFR 960.3-1-4-2 - Site nomination for characterization.
Code of Federal Regulations, 2011 CFR
2011-01-01
... testing of core samples for the evaluation of geochemical and engineering rock properties, and chemical... industrial activities; and extrapolations of regional data to estimate site-specific characteristics and...
10 CFR 960.3-1-4-2 - Site nomination for characterization.
Code of Federal Regulations, 2014 CFR
2014-01-01
... testing of core samples for the evaluation of geochemical and engineering rock properties, and chemical... industrial activities; and extrapolations of regional data to estimate site-specific characteristics and...
Surface to Borehole Procedures
There is a progression in both complexity and benefits from check shot and synthetic seismogram to vertical seismic profiles (VSP), three‑component VSP, offset VSP, and extrapolation and description of lithologic parameters into the geologic formations.
Effect of scrape-off-layer current on reconstructed tokamak equilibrium
King, J. R.; Kruger, S. E.; Groebner, R. J.; ...
2017-01-13
Methods are described that extend fields from reconstructed equilibria to include scrape-off-layer current through extrapolated parametrized and experimental fits. The extrapolation includes both the effects of the toroidal-field and pressure gradients, which produce scrape-off-layer current after recomputation of the Grad-Shafranov solution. To quantify the degree to which inclusion of scrape-off-layer current modifies the equilibrium, the χ-squared goodness-of-fit parameter is calculated for cases with and without scrape-off-layer current. The change in χ-squared is found to be minor when scrape-off-layer current is included; however, flux surfaces are shifted by up to 3 cm. The impact of these scrape-off-layer modifications on edge modes is also found to be small, and the importance of these methods to nonlinear computation is discussed.
An efficient approach to imaging underground hydraulic networks
NASA Astrophysics Data System (ADS)
Kumar, Mohi
2012-07-01
To better locate natural resources, treat pollution, and monitor underground networks associated with geothermal plants, nuclear waste repositories, and carbon dioxide sequestration sites, scientists need to be able to accurately characterize and image fluid seepage pathways below ground. With these images, scientists can gain knowledge of soil moisture content, the porosity of geologic formations, concentrations and locations of dissolved pollutants, and the locations of oil fields or buried liquid contaminants. Creating images of the unknown hydraulic environments underfoot is a difficult task that has typically relied on broad extrapolations from characteristics and tests of rock units penetrated by sparsely positioned boreholes. Such methods, however, cannot identify small-scale features and are very expensive to reproduce over a broad area. Further, the techniques through which information is extrapolated rely on clunky and mathematically complex statistical approaches requiring large amounts of computational power.
Solution of the finite Milne problem in stochastic media with RVT Technique
NASA Astrophysics Data System (ADS)
Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.
2017-12-01
This paper presents the solution to the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered as stochastic ones with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To get an explicit form for the radiant energy density, the linear extrapolation distance, reflectivity and transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to be dependent on the optical space variable and the thickness of the medium, which are considered as random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
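The RVT step uses the standard change-of-variables rule: if Y = g(X) with g monotone, then f_Y(y) = f_X(g⁻¹(y)) |d g⁻¹/dy|. A sketch on a simple stand-in transformation (exponential input, Y = X², not the Milne-problem mapping itself), cross-checked against Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 1.5  # rate of the exponential input density f_X(x) = lam * exp(-lam x)

def f_Y(y):
    """RVT: Y = X^2 is monotone for X >= 0, g_inv(y) = sqrt(y),
    so f_Y(y) = f_X(sqrt(y)) * |d sqrt(y)/dy| = f_X(sqrt(y)) / (2 sqrt(y))."""
    x = np.sqrt(y)
    return lam * np.exp(-lam * x) / (2.0 * x)

# Cross-check via the CDF: P(Y < c) = P(X < sqrt(c)) = 1 - exp(-lam sqrt(c))
c = 2.0
cdf_analytic = 1.0 - np.exp(-lam * np.sqrt(c))
samples = rng.exponential(1.0 / lam, 200_000) ** 2
cdf_mc = np.mean(samples < c)   # agrees with cdf_analytic to ~1e-3
```

In the paper the same rule is applied with the optical variables as the random inputs; only the transformation g changes.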
Sources of atmospheric methane - Measurements in rice paddies and a discussion
NASA Technical Reports Server (NTRS)
Cicerone, R. J.; Shetter, J. D.
1981-01-01
Field measurements of methane fluxes from rice paddies, fresh water lakes, and saltwater marshes have been made to infer estimates of the size of these sources of atmospheric methane. The rice-paddy measurements, the first of their kind, show that the principal means of methane escape is through the plants themselves as opposed to transport across the water-air interface via bubbles or molecular diffusion. Nitrogen-fertilized plants release much more methane than unfertilized plants but even these measured rates are only one fourth as large as those inferred earlier by Koyama (1963, 1964) and on which all global extrapolations have been based to date. Measured methane fluxes from lakes and marshes are also compared to similar earlier data and it is found that extant data and flux-measurement methods are insufficient for reliable global extrapolations.
Tests and applications of nonlinear force-free field extrapolations in spherical geometry
NASA Astrophysics Data System (ADS)
Guo, Y.; Ding, M. D.
2013-07-01
We test a nonlinear force-free field (NLFFF) optimization code in spherical geometry with an analytical solution from Low and Lou. The potential field source surface (PFSS) model serves as the initial and boundary conditions where observed data are not available. The analytical solution can be well recovered if the boundary and initial conditions are properly handled. Next, we discuss the preprocessing procedure for the noisy bottom boundary data, and find that preprocessing is necessary for NLFFF extrapolations when we use the observed photospheric magnetic field as the bottom boundary. Finally, we apply the NLFFF model to a solar area in which four active regions interact with each other. An M8.7 flare occurred in one active region. NLFFF modeling in spherical geometry simultaneously constructs the small- and large-scale magnetic field configurations better than the PFSS model does.
Precise algorithm to generate random sequential adsorption of hard polygons at saturation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, G.
Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles, and could thus determine the saturation density of spheres with high accuracy. Here in this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides, and obtain results that are consistent with previous, extrapolation-based studies.
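The idea of reaching saturation exactly in finite time (rather than extrapolating the density in time) is easiest to see in the one-dimensional analogue, Rényi's car-parking problem; the sketch below is that analogue, not the authors' polygon algorithm. Tracking the open gaps lets the simulation stop exactly when no gap can accept another segment:

```python
import random

def saturated_density_1d(L, rng):
    """RSA of unit segments on [0, L], run all the way to saturation by
    processing the set of gaps: a gap shorter than 1 is dead, a longer
    gap receives a segment and splits into two smaller gaps."""
    placed = 0
    gaps = [(0.0, L)]
    while gaps:
        a, b = gaps.pop()
        if b - a < 1.0:
            continue                     # saturated locally: no segment fits
        x = rng.uniform(a, b - 1.0)      # left edge of the new unit segment
        placed += 1
        gaps.append((a, x))
        gaps.append((x + 1.0, b))
    return placed / L

rng = random.Random(7)
rho = saturated_density_1d(50_000, rng)  # approaches Renyi's constant ~0.7476
```

In 2D with polygons the bookkeeping of "where another particle can still land" is far harder, which is the technical content of the paper; the 1D case only illustrates why the finite-time approach removes the need for time extrapolation.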
An improved finite-difference analysis of uncoupled vibrations of tapered cantilever beams
NASA Technical Reports Server (NTRS)
Subrahmanyam, K. B.; Kaza, K. R. V.
1983-01-01
An improved finite difference procedure for determining the natural frequencies and mode shapes of tapered cantilever beams undergoing uncoupled vibrations is presented. Boundary conditions are derived in the form of simple recursive relations involving the second order central differences. Results obtained by using the conventional first order central differences and the present second order central differences are compared, and it is observed that the present second order scheme is more efficient than the conventional approach. An important advantage offered by the present approach is that the results converge to exact values rapidly, and thus extrapolation of the results is not necessary. Consequently, the basic handicap of the classical finite difference method of solution, which requires Richardson's extrapolation procedure, is eliminated. Furthermore, for the cases considered herein, the present approach produces consistent lower bound solutions.
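Richardson's extrapolation, the procedure the improved scheme renders unnecessary, combines estimates at two grid spacings to cancel the leading error term. For a second-order formula, A = (4·A(h/2) − A(h))/3 yields a fourth-order result. A minimal sketch on a second derivative (generic illustration, not the beam equations):

```python
import math

def second_deriv_central(f, x, h):
    """Second-order central difference for f''(x); error ~ O(h^2)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def richardson(f, x, h):
    """Combine the h and h/2 estimates to cancel the O(h^2) term:
    A = (4*A(h/2) - A(h)) / 3, leaving an O(h^4) error."""
    return (4.0 * second_deriv_central(f, x, h / 2.0)
            - second_deriv_central(f, x, h)) / 3.0

x, h = 1.0, 0.1
exact = -math.sin(x)  # f = sin, so f'' = -sin
err_plain = abs(second_deriv_central(math.sin, x, h) - exact)
err_rich = abs(richardson(math.sin, x, h) - exact)
# err_rich is several orders of magnitude smaller than err_plain
```

The abstract's point is that its recursive boundary conditions converge fast enough that this two-grid machinery is not needed at all.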
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy. PMID:25609038
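The extrapolation-based filtering strategy replaces the meaningless zeros beyond the end of a finite sequence with predicted samples. A fixed-gain (alpha-beta) predictor is used below as a simple stand-in for the steady-state Kalman predictor named in the abstract; the gains and the test sequence are illustrative assumptions:

```python
def alpha_beta_extrapolate(seq, n_extra, alpha=0.85, beta=0.35):
    """Fixed-gain predictor: track level and trend over the sequence,
    then extrapolate n_extra samples past its end (instead of zero-padding,
    which weakens a high-order analysis filter's overlap suppression)."""
    level, trend = seq[0], 0.0
    for z in seq[1:]:
        pred = level + trend          # one-step prediction
        resid = z - pred              # innovation
        level = pred + alpha * resid  # correct level
        trend = trend + beta * resid  # correct trend
    return [level + trend * (k + 1) for k in range(n_extra)]

ramp = [0.5 * t for t in range(20)]      # noise-free linear trend
ext = alpha_beta_extrapolate(ramp, 3)    # continues the trend past the end
```

For a noise-free ramp the filter locks on exactly, so the extrapolated samples continue the trend; with noisy data the fixed gains trade tracking speed against smoothing, which is what the steady-state Kalman design optimizes.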
The next 25 years: Industrialization of space: Rationale for planning
NASA Technical Reports Server (NTRS)
Vonputtkamer, J.
1976-01-01
The goals of NASA's space industrialization program include contributing to increased productivity on earth without taxing the environment, generating new values through extraterrestrial productivity, and providing new growth options for the future which include the permanent settlement of space and long-range colonization and exploration projects. In planning the long-range space program based on essentially utilitarian aspects, without losing sight of the more humanistically significant long term, and to forecast associated technology requirements, a realistic approach is obtained by combining extrapolative and normative planning modes so that common stepping stones can be identified. In the extrapolative view, alternative futures are projected on the basis of past and current trends and tendencies. In the normative view, some ideal state in the far future is envisioned or postulated, and policies and decisions are directed toward its attainment.
Nonperturbative study of dynamical SUSY breaking in N = (2,2) Yang-Mills theory
NASA Astrophysics Data System (ADS)
Catterall, Simon; Jha, Raghav G.; Joseph, Anosh
2018-03-01
We examine the possibility of dynamical supersymmetry breaking in two-dimensional N = (2,2) supersymmetric Yang-Mills theory. The theory is discretized on a Euclidean spacetime lattice using a supersymmetric lattice action. We compute the vacuum energy of the theory at finite temperature and take the zero-temperature limit. Supersymmetry will be spontaneously broken in this theory if the measured ground-state energy is nonzero. By performing simulations on a range of lattices up to 96 × 96 we are able to perform a careful extrapolation to the continuum limit for a wide range of temperatures. Subsequent extrapolations to the zero-temperature limit yield an upper bound on the ground-state energy density. We find the energy density to be statistically consistent with zero, in agreement with the absence of dynamical supersymmetry breaking in this theory.
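The double extrapolation described above (continuum limit at each temperature, then zero-temperature limit) can be sketched with two nested linear fits. The data model below is synthetic, with lattice artifacts entering as a² and thermal effects as T², chosen so the true vacuum energy is zero; the coefficients are assumptions for illustration only:

```python
import numpy as np

a_vals = np.array([1 / 24, 1 / 48, 1 / 96])   # lattice spacings
T_vals = np.array([0.4, 0.3, 0.2, 0.1])       # temperatures

def energy_density(a, T):
    """Synthetic data: O(a^2) discretization error + O(T^2) thermal part,
    with a vanishing true ground-state energy."""
    return 0.3 * a**2 + 0.5 * T**2

# Step 1: continuum extrapolation (a -> 0) at each fixed temperature
E0_at_T = []
for T in T_vals:
    _, intercept = np.polyfit(a_vals**2, energy_density(a_vals, T), 1)
    E0_at_T.append(intercept)

# Step 2: zero-temperature extrapolation (T -> 0) of the continuum values
_, E_vacuum = np.polyfit(T_vals**2, E0_at_T, 1)
# E_vacuum ~ 0: consistent with unbroken supersymmetry in this toy model
```

With real Monte Carlo data each fitted point carries a statistical error that must be propagated through both fits; the noise-free sketch shows only the structure of the nested extrapolation.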
Environment applications for ion mobility spectrometry
NASA Technical Reports Server (NTRS)
Ritchie, Robert K.; Rudolph, Andreas
1995-01-01
The detection of environmentally important polychlorinated aromatics by ion mobility spectrometry (IMS) was investigated. Single polychlorinated biphenyl (PCB) isomers (congeners) having five or more chlorine atoms were reliably detected in isooctane solution at levels of 35 ng with a Barringer IONSCAN ion mobility spectrometer operating in negative mode; limits of detection (LOD) were extrapolated to be in the low ng region. Mixtures of up to four PCB congeners, showing characteristic multiple peaks, and complex commercial mixtures of PCBs (Aroclors) were also detected. Detection of Aroclors in transformer oil was suppressed by the presence of the antioxidant BHT (2,6-di-t-butyl-4-methylphenol) in the oil. The wood preservative pentachlorophenol (PCP) was easily detected in recycled wood shavings at levels of 52 ppm with the IONSCAN; the LOD was extrapolated to be in the low ppm region.
Self-organization of the protocell was a forward process
NASA Technical Reports Server (NTRS)
Fox, S. W.; Matsuno, K.
1983-01-01
Yockey's (1981) interpretation of information theory relative to concepts of self-organization in the origin of life is criticized on the ground that it assumes that each amino acid residue type in a given sequence is an unaided information carrier throughout evolution. It is argued that more than one amino acid residue can act as a unit information carrier, and that this was the case in prebiotic protein evolution. Forward-extrapolation should be used to study prebiotic evolution, not backward-extrapolation. Transposing the near-random internal order of modern proteins to primitive proteins, as Yockey has done, is an unsupported assumption and disagrees with the results of experimental models of the primordial type. Studies indicate that early primary information carriers in evolution were mixtures of free alpha amino acids which necessarily had the capability of sequencing themselves.
Predictive data-based exposition of 5s5p ¹,³P₁ lifetimes in the Cd isoelectronic sequence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L. J.; Matulioniene, R.; Ellis, D. G.
2000-11-01
Experimental and theoretical values for the lifetimes of the 5s5p ¹P₁ and ³P₁ levels in the Cd isoelectronic sequence are examined in the context of a data-based isoelectronic systematization. Lifetime and energy-level data are combined to account for the effects of intermediate coupling, thereby reducing the data to a regular and slowly varying parametric mapping. This empirically characterizes small contributions due to spin-other-orbit interaction, spin dependences of the radial wave functions, and configuration interaction, and yields accurate interpolative and extrapolative predictions. Multiconfiguration Dirac-Hartree-Fock calculations are used to verify the regularity of these trends, and to examine the extent to which they can be extrapolated to high nuclear charge.
Precise algorithm to generate random sequential adsorption of hard polygons at saturation
Zhang, G.
2018-04-30
Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles, and could thus determine the saturation density of spheres with high accuracy. Here in this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides, and obtain results that are consistent with previous, extrapolation-based studies.
Radar-acoustic interaction for IFF applications
NASA Astrophysics Data System (ADS)
Saffold, James A.; Williamson, Frank R.; Ahuja, Krishan; Stein, Lawrence R.; Muller, Marjorie
1998-08-01
This paper describes the results of an internal development program (IDP) No. 97-1 conducted from August 1 to October 1, 1996 at the Georgia Tech Research Institute. The IDP program was implemented to establish theoretical relationships and verify the interaction between X-band radar waves and ultrasonic acoustics. Low cost, off-the-shelf components were used for the verification in order to illustrate the cost savings potential of developing and utilizing these systems. The measured data were used to calibrate the developed models of the phenomenology and to support extrapolation for radar systems which can exploit these interactions. One such exploitation is for soldier identification friend-or-foe (IFF) and radar taggant concepts. The described IDP program provided the phenomenological data which are being used to extrapolate concept system performances based on technological limitations and battlefield conditions for low cost IFF and taggant configurations.
Sahneh, Faryad Darabi; Scoglio, Caterina M; Monteiro-Riviere, Nancy A; Riviere, Jim E
2015-01-01
To assess the impact of biocorona kinetics on expected tissue distribution of nanoparticles (NPs) across species. The potential fate of NPs in vivo is described through a simple and descriptive pharmacokinetic model using rate processes dependent upon basal metabolic rate coupled to dynamics of protein corona. Mismatch of time scales between interspecies allometric scaling and the kinetics of corona formation is potentially a fundamental issue with interspecies extrapolations of NP biodistribution. The impact of corona evolution on NP biodistribution across two species is maximal when corona transition half-life is close to the geometric mean of NP half-lives of the two species. While engineered NPs can successfully reach target cells in rodent models, the results may be different in humans due to the fact that the longer circulation time allows for further biocorona evolution.
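The mismatch of time scales described here can be sketched with the standard allometric scaling of a half-life by body weight, t ∝ BW^0.25, plus the paper's geometric-mean condition. All numerical values below are hypothetical illustrations, not the study's parameters:

```python
import math

def allometric_half_life(t_ref_h, bw_ref_kg, bw_kg, exponent=0.25):
    """Scale an elimination half-life across species by body weight,
    using the conventional 0.25 allometric exponent."""
    return t_ref_h * (bw_kg / bw_ref_kg) ** exponent

# Hypothetical NP circulation half-life in a 25 g mouse, scaled to a 70 kg human
t_mouse = 2.0                                              # hours, assumed
t_human = allometric_half_life(t_mouse, 0.025, 70.0)       # ~14.5 hours

# Corona kinetics matter most when the corona transition half-life is near
# the geometric mean of the two species' NP half-lives:
t_critical = math.sqrt(t_mouse * t_human)                  # ~5.4 hours
```

A corona that evolves on a scale near `t_critical` has largely finished transitioning in the long-half-life species but not in the short-half-life one, which is why simple allometric extrapolation of biodistribution breaks down.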
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
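The key property exploited by such schemes is that sparse algebra touches only stored nonzeros, so cost scales with the number of nonzeros rather than the matrix dimension. A generic illustration of that idea (a dict-of-coordinates multiply, not ONETEP's hierarchical-sparsity scheme):

```python
from collections import defaultdict

def sparse_matmul(A, B):
    """Multiply sparse matrices stored as {(row, col): value} dicts.
    Only stored entries are visited, so work scales with the number of
    nonzeros, not with the full matrix dimensions."""
    B_by_row = defaultdict(list)
    for (i, j), v in B.items():
        B_by_row[i].append((j, v))
    C = defaultdict(float)
    for (i, k), a in A.items():
        for j, b in B_by_row.get(k, []):
            C[(i, j)] += a * b
    return dict(C)

A = {(0, 0): 2.0, (0, 2): 1.0, (1, 1): 3.0}
B = {(0, 0): 1.0, (2, 0): 4.0, (1, 1): 5.0}
C = sparse_matmul(A, B)   # {(0, 0): 6.0, (1, 1): 15.0}
```

Production codes use blocked formats and communication-aware distributions rather than hash maps, but the nonzero-only traversal is the same principle that makes linear-scaling DFT feasible.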
Restricted Complexity Framework for Nonlinear Adaptive Control in Complex Systems
NASA Astrophysics Data System (ADS)
Williams, Rube B.
2004-02-01
Control law adaptation that includes implicit or explicit adaptive state estimation can be a fundamental underpinning for the success of intelligent control in complex systems, particularly during subsystem failures, where vital system states and parameters can be impractical or impossible to measure directly. A practical algorithm is proposed for adaptive state filtering and control in nonlinear dynamic systems when the state equations are unknown or are too complex to model analytically. The state equations and inverse plant model are approximated by using neural networks. A framework for a neural-network-based nonlinear dynamic inversion control law is proposed, as an extrapolation of the previously developed restricted-complexity methodology used to formulate the adaptive state filter. Examples of adaptive filter performance are presented for an SSME simulation with high pressure turbine failure to support extrapolations to adaptive control problems.
Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.
Le, Guigao; Oulaid, Othmane; Zhang, Junfeng
2015-03-01
In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including the simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating the steady and unsteady convection-diffusion system with a flat interface and the steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with the respective analytical solutions. A more general system with an unsteady convection-diffusion process and a curved interface, i.e., the cooling process of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that the cylinder with a larger heat capacity can release more heat energy into the fluid and the cylinder temperature cools down more slowly, while the enhanced heat conduction inside the cylinder can facilitate the cooling process of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.
Garcia, Mariano; Saatchi, Sassan; Casas, Angeles; Koltunov, Alexander; Ustin, Susan; Ramirez, Carlos; Garcia-Gutierrez, Jorge; Balzter, Heiko
2017-02-01
Quantifying biomass consumption and carbon release is critical to understanding the role of fires in the carbon cycle and air quality. We present a methodology to estimate the biomass consumed and the carbon released by the California Rim fire by integrating postfire airborne LiDAR and multitemporal Landsat Operational Land Imager (OLI) imagery. First, a support vector regression (SVR) model was trained to estimate the aboveground biomass (AGB) from LiDAR-derived metrics over the unburned area. The selected model estimated AGB with an R² of 0.82 and RMSE of 59.98 Mg/ha. Second, LiDAR-based biomass estimates were extrapolated to the entire area before and after the fire, using Landsat OLI reflectance bands, the Normalized Difference Infrared Index, and the elevation derived from LiDAR data. The extrapolation was performed using SVR models that resulted in R² of 0.73 and 0.79 and RMSE of 87.18 Mg/ha and 75.43 Mg/ha for the postfire and prefire images, respectively. After removing bias from the AGB extrapolations using a linear relationship between estimated and observed values, we estimated the biomass consumption from postfire LiDAR and prefire Landsat maps to be 6.58 ± 0.03 Tg (10¹² g), which translates into 12.06 ± 0.06 Tg CO₂e released to the atmosphere, equivalent to the annual emissions of 2.57 million cars.
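The bias-removal step described above can be sketched in a few lines: fit a linear relationship between estimated and observed AGB on reference plots, then invert it to correct the map-wide extrapolation. The data here are synthetic, with a deliberately injected bias, not values from the Rim fire analysis.

```python
import numpy as np

# Synthetic reference plots: "observed" AGB and biased SVR-style estimates.
rng = np.random.default_rng(0)
observed = rng.uniform(20.0, 400.0, size=50)    # reference AGB, Mg/ha
estimated = 0.8 * observed + 15.0               # estimates with known bias

# Fit estimated = a*observed + b, then invert: corrected = (estimated - b)/a
a, b = np.polyfit(observed, estimated, 1)
corrected = (estimated - b) / a
```

Because the synthetic bias is exactly linear, the correction recovers the observed values; with real data the fit removes only the systematic (linear) component of the error.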
Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development
Burt, T.; Yoshida, K.; Lappin, G.; ...
2016-02-26
A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in their relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications and design options demonstrate the versatility and flexibility these approaches offer to drug developers including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. Lastly, all phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.
NASA Astrophysics Data System (ADS)
Prasad, A.; Bhattacharyya, R.; Hu, Qiang; Kumar, Sanjay; Nayak, Sushree S.
2018-06-01
The magnetohydrodynamics of the solar corona is simulated numerically. The simulation is initialized with an extrapolated non-force-free magnetic field using the vector magnetogram of the active region NOAA 12192, which was obtained from the solar photosphere. In particular, we focus on the magnetic reconnections (MRs) occurring close to a magnetic null point that resulted in the appearance of circular chromospheric flare ribbons on 2014 October 24 around 21:21 UT, after the peak of an X3.1 flare. The extrapolated field lines show the presence of the three-dimensional (3D) null near one of the polarity-inversion lines, where the flare was observed. In the subsequent numerical simulation, we find MRs occurring near the null point, where the magnetic field lines from the fan plane of the 3D null form an X-type configuration with underlying arcade field lines. The footpoints of the dome-shaped field lines, inherent to the 3D null, show high gradients of the squashing factor. We find slipping reconnections at these quasi-separatrix layers, which are co-located with the post-flare circular brightening observed at chromospheric heights. This demonstrates the viability of the initial non-force-free field, along with the dynamics it initiates. Moreover, the initial field and its simulated evolution are found to be devoid of any flux rope, which is congruent with the confined nature of the flare.
Extrapolating Solar Dynamo Models Throughout the Heliosphere
NASA Astrophysics Data System (ADS)
Cox, B. T.; Miesch, M. S.; Augustson, K.; Featherstone, N. A.
2014-12-01
There are multiple theories that aim to explain the behavior of the solar dynamo, and their associated models have been fiercely contested. The two prevailing theories investigated in this project are the Convective Dynamo model, which arises from directly solving the magnetohydrodynamic equations, and the Babcock-Leighton model, which relies on sunspot dissipation and reconnection. Recently, the supercomputer simulations CASH and BASH have modeled the behavior of the Convective and Babcock-Leighton dynamos, respectively, in the convective zone of the Sun; much less is known about the behavior of these models farther from the solar surface. The goal of this work is to investigate any fundamental differences between the Convective and Babcock-Leighton models of the solar dynamo outside of the Sun, extending into the solar system, via potential field source surface extrapolations implemented in Python code that operates on data from CASH and BASH. Real solar data are also used to visualize supergranular flow in the BASH model, to learn more about the behavior of the Babcock-Leighton dynamo. From these extrapolations it has been determined that the Babcock-Leighton model, as represented by BASH, maintains complex magnetic fields much farther into the heliosphere before reverting to a basic dipole field, providing 3D visualisations of the models far from the Sun.
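The potential field source surface (PFSS) construction used for such extrapolations can be sketched in its standard textbook form; the boundary radii and coefficients below are generic, not values from this project.

```latex
% Current-free corona: \mathbf{B} = -\nabla\Phi with \nabla^{2}\Phi = 0, so
\Phi(r,\theta,\phi) = \sum_{\ell=1}^{\infty}\sum_{m=-\ell}^{\ell}
  \left[ a_{\ell m}\, r^{\ell} + b_{\ell m}\, r^{-(\ell+1)} \right]
  Y_{\ell m}(\theta,\phi),
% with a_{\ell m}, b_{\ell m} fixed by matching B_r to the (simulated or
% observed) magnetogram at r = R_\odot and by requiring the field to be
% purely radial at the source surface r = R_{ss} (commonly 2.5 R_\odot).
```

High-ℓ multipoles fall off fastest with radius, so only low-order structure survives to large r; how quickly a given dynamo model's field collapses to the dipole limit is exactly the difference probed here.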
Effective-range function methods for charged particle collisions
NASA Astrophysics Data System (ADS)
Gaspard, David; Sparenberg, Jean-Marc
2018-04-01
Different versions of the effective-range function method for charged particle collisions are studied and compared. In addition, a novel derivation of the standard effective-range function is presented from the analysis of Coulomb wave functions in the complex plane of the energy. The recently proposed effective-range function denoted as Δℓ [Ramírez Suárez and Sparenberg, Phys. Rev. C 96, 034601 (2017), 10.1103/PhysRevC.96.034601] and an earlier variant [Hamilton et al., Nucl. Phys. B 60, 443 (1973), 10.1016/0550-3213(73)90193-4] are related to the standard function. The potential interest of Δℓ for the study of low-energy cross sections and weakly bound states is discussed in the framework of the proton-proton ¹S₀ collision. The resonant state of the proton-proton collision is successfully computed from the extrapolation of Δℓ instead of the standard function. It is shown that interpolating Δℓ can lead to useful extrapolation to negative energies, provided scattering data are known below one nuclear Rydberg energy (12.5 keV for the proton-proton system). This property is due to the connection between Δℓ and the effective-range function by Hamilton et al. that is discussed in detail. Nevertheless, such extrapolations to negative energies should be used with caution because Δℓ is not analytic at zero energy. The expected analytic properties of the main functions are verified in the complex energy plane by graphical color-based representations.
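For reference, the "standard effective-range function" discussed above has, for ℓ = 0 charged-particle scattering, the familiar Coulomb-modified form; these are standard scattering-theory expressions, not formulas quoted from this paper.

```latex
K_0(E) = C_0^2(\eta)\, k \cot\delta_0(E) + 2 k \eta\, h(\eta)
       = -\frac{1}{a} + \frac{1}{2} r_0 k^{2} + \mathcal{O}(k^{4}),
\qquad
C_0^2(\eta) = \frac{2\pi\eta}{e^{2\pi\eta} - 1},
\qquad
h(\eta) = \operatorname{Re}\psi(1 + i\eta) - \ln\eta,
% where \eta = 1/(k a_B) is the Sommerfeld parameter and a_B the nuclear
% Bohr radius; the nuclear Rydberg energy quoted in the abstract is
% E_R = \hbar^{2} / (2 \mu a_B^{2}).
```

The function Δℓ studied in the paper is an alternative to K₀ with different analytic behavior near E = 0, which is why its extrapolation properties differ.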
Wildlife toxicity extrapolations: NOAEL versus LOAEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fairbrother, A.; Berg, M. van den
1995-12-31
Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. One is then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when trying to estimate hazards to wildlife in terrestrial systems, as there is an extreme paucity of data for most chemicals in all but a handful of species. This section continues the debate by six panelists on the "correct" approach for determining wildlife toxicity thresholds by debating which toxicity value should be used for setting threshold criteria. Should the lowest observed adverse effect level (LOAEL) be used, or is it more appropriate to use the no observed adverse effect level (NOAEL)? What are the shortcomings of using either of these point estimates? Should a "benchmark" approach, similar to that proposed for human health risk assessments, be used instead, where an EC₅ or EC₁₀ and associated confidence limits are determined and then divided by a safety factor? How should knowledge of the slope of the dose-response curve be incorporated into the determination of toxicity threshold values?
Image-based optimization of coronal magnetic field models for improved space weather forecasting
NASA Astrophysics Data System (ADS)
Uritsky, V. M.; Davila, J. M.; Jones, S. I.; MacNeice, P. J.
2017-12-01
The existing space weather forecasting frameworks show a significant dependence on the accuracy of the photospheric magnetograms and the extrapolation models used to reconstruct the magnetic field in the solar corona. Minor uncertainties in the magnetic field magnitude and direction near the Sun, when propagated through the heliosphere, can lead to unacceptable prediction errors at 1 AU. We argue that ground-based and satellite coronagraph images can provide valid geometric constraints that could be used for improving coronal magnetic field extrapolation results, enabling more reliable forecasts of extreme space weather events such as major CMEs. In contrast to the previously developed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions up to 1-2 solar radii above the photosphere. By applying the developed image processing techniques to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code developed by S. Jones et al. (ApJ, 2016, 2017). Our tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona, and lead to a more consistent reconstruction of the large-scale coronal magnetic field geometry, and potentially more accurate global heliospheric simulation results. Several upcoming data products for the space weather forecasting community will also be discussed.
Exposure Matching for Extrapolation of Efficacy in Pediatric Drug Development
Mulugeta, Yeruk; Barrett, Jeffrey S.; Nelson, Robert; Eshete, Abel Tilahun; Mushtaq, Alvina; Yao, Lynne; Glasgow, Nicole; Mulberg, Andrew E.; Gonzalez, Daniel; Green, Dionna; Florian, Jeffry; Krudys, Kevin; Seo, Shirley; Kim, Insook; Chilukuri, Dakshina; Burckart, Gilbert J.
2017-01-01
During drug development, matching adult systemic exposures of drugs is a common approach for dose selection in pediatric patients when efficacy is partially or fully extrapolated. This is a systematic review of approaches used for matching adult systemic exposures as the basis for dose selection in pediatric trials submitted to the U.S. Food and Drug Administration (FDA) between 1998 and 2012. The trial design of pediatric pharmacokinetic (PK) studies and the pediatric and adult systemic exposure data were obtained from FDA publicly available databases containing reviews of pediatric trials. Exposure matching approaches that were used as the basis for pediatric dose selection were reviewed. The PK data from the adult and pediatric populations were used to quantify exposure agreement between the two patient populations. The main measures were the trial design elements of the pediatric PK studies and the drug systemic exposures (adult and pediatric). There were 31 products (86 trials) with full or partial extrapolation of efficacy with an available PK assessment. Pediatric exposures had a range of mean Cmax and AUC ratios (pediatric/adult) of 0.63-4.19 and 0.36-3.60, respectively. Seven of the 86 trials (8.1%) had a pre-defined acceptance boundary used to match adult exposures. The key PK parameter was consistently predefined for antiviral and anti-infective products. Approaches to match exposure in children and adults varied across products. A consistent approach for systemic exposure matching and evaluating pediatric PK studies is needed to guide future pediatric trials. PMID:27040726
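An exposure-matching check of the kind reviewed above can be sketched as a geometric-mean ratio tested against a pre-defined acceptance boundary. The AUC values and the 0.6-1.4 boundary below are hypothetical illustrations, not data or criteria from any FDA submission.

```python
import math

def geometric_mean(xs):
    """Geometric mean, the usual summary for log-normally distributed PK data."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical AUC values (ug*h/mL) for matched pediatric and adult cohorts.
pediatric_auc = [95.0, 110.0, 130.0, 88.0]
adult_auc = [100.0, 105.0, 120.0, 98.0]

ratio = geometric_mean(pediatric_auc) / geometric_mean(adult_auc)
matched = 0.6 <= ratio <= 1.4   # hypothetical pre-defined acceptance boundary
```

The review's point is that only 7 of 86 trials pre-specified such a boundary; without one, a ratio like this has no agreed pass/fail interpretation.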
GIS Well Temperature Data from the Roosevelt Hot Springs, Utah FORGE Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gwynn, Mark; Hill, Jay; Allis, Rick
This is a GIS point feature shapefile representing wells, and their temperatures, located in the general Utah FORGE area near Milford, Utah. There are also fields representing interpolated temperature values, in degrees Fahrenheit, at depths of 200 m, 1000 m, 2000 m, 3000 m, and 4000 m. The temperature values at these depths were derived as follows. In cases where the well reached a given depth (200 m and 1, 2, 3, or 4 km), the temperature is the measured temperature. For the shallower wells (and at deeper depths in the wells reaching one or more of the target depths), temperatures were extrapolated from the temperature-depth profiles that appeared to have stable (re-equilibrated after drilling) and linear profiles within the conductive regime (i.e., below the water table or other convective influences such as shallow hydrothermal outflow from the Roosevelt Hydrothermal System). Measured temperatures/gradients from deeper wells (when available and reasonably close to a given well) were used to help constrain the extrapolation to greater depths. Most of the field names in the attribute table are intuitive; however, HF = heat flow, intercept = the temperature at the surface (x-axis of the temperature-depth plots) based on the linear segment of the plot used to extrapolate the temperature profiles to greater depths, and depth_m = the total well depth. This information is also present in the shapefile metadata.
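The extrapolation described above amounts to fitting the stable, linear (conductive) segment of a temperature-depth profile and projecting the line to a target depth. A minimal sketch, with a synthetic profile rather than FORGE well data:

```python
# Fit T = intercept + gradient*z over the conductive segment of a well log,
# then evaluate the line at a target depth below the bottom of the well.

def extrapolate_temperature(depths_m, temps, target_depth_m):
    """Least-squares line through (depth, temperature) pairs, evaluated
    at target_depth_m."""
    n = len(depths_m)
    zbar = sum(depths_m) / n
    tbar = sum(temps) / n
    gradient = sum((z - zbar) * (t - tbar) for z, t in zip(depths_m, temps)) \
        / sum((z - zbar) ** 2 for z in depths_m)
    intercept = tbar - gradient * zbar
    return intercept + gradient * target_depth_m

# Hypothetical conductive segment: 75 C/km gradient, 15 C surface intercept.
depths = [400.0, 600.0, 800.0, 1000.0]
temps = [45.0, 60.0, 75.0, 90.0]
t2km = extrapolate_temperature(depths, temps, 2000.0)   # -> 165.0
```

This is also where the attribute-table fields come from: the fitted intercept is the "intercept" field, and the gradient times conductivity gives the heat flow.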
Miyaguchi, Takamori; Suemizu, Hiroshi; Shimizu, Makiko; Shida, Satomi; Nishiyama, Sayako; Takano, Ryohji; Murayama, Norie; Yamazaki, Hiroshi
2015-06-01
The aim of this study was to extrapolate to humans the pharmacokinetics of the estrogen analog bisphenol A determined in chimeric mice transplanted with human hepatocytes. Higher plasma concentrations and urinary excretions of bisphenol A glucuronide (a primary metabolite of bisphenol A) were observed in chimeric mice than in control mice after oral administration, presumably because of enterohepatic circulation of bisphenol A glucuronide in control mice. Bisphenol A glucuronidation was faster in mouse liver microsomes than in human liver microsomes. These findings suggest a predominantly urinary excretion route of bisphenol A glucuronide in chimeric mice with humanized liver. Reported human plasma and urine data for bisphenol A glucuronide after a single oral administration of 0.1 mg/kg bisphenol A were reasonably estimated using the current semi-physiological pharmacokinetic model extrapolated from humanized mouse data using allometric scaling. The reported geometric mean urinary bisphenol A concentration in the U.S. population of 2.64 μg/L underwent reverse dosimetry modeling with the current human semi-physiological pharmacokinetic model. This yielded an estimated exposure of 0.024 μg/kg/day, which was less than the daily tolerable intake of bisphenol A (50 μg/kg/day), implying little risk to humans. Semi-physiological pharmacokinetic modeling will likely prove useful for determining the species-dependent toxicological risk of bisphenol A.
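The reverse-dosimetry step above can be illustrated with the simplest steady-state mass balance: intake equals urinary excretion divided by the fraction excreted in urine. All parameter values below (urine output, body weight, excretion fraction) are illustrative assumptions, not those of the paper's semi-physiological PK model, so the result differs from the paper's model-based 0.024 μg/kg/day.

```python
# Back-of-envelope reverse dosimetry: back-calculate a daily intake from a
# urinary biomarker concentration under a steady-state mass balance.

def daily_intake_ug_per_kg(c_urine_ug_per_L, urine_L_per_day=1.6,
                           body_weight_kg=70.0, urinary_excretion_fraction=1.0):
    """At steady state: intake * f_ue = urinary excretion rate.
    All default parameter values are illustrative assumptions."""
    excreted_ug_per_day = c_urine_ug_per_L * urine_L_per_day
    return excreted_ug_per_day / (urinary_excretion_fraction * body_weight_kg)

intake = daily_intake_ug_per_kg(2.64)   # using the reported 2.64 ug/L
```

A full semi-physiological model replaces these fixed assumptions with simulated kinetics, which is why the paper's estimate is lower than this crude bound; either way the result sits orders of magnitude below the 50 μg/kg/day tolerable intake.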
Dioxin equivalency: Challenge to dose extrapolation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.F. Jr.; Silkworth, J.B.
1995-12-31
Extensive research has shown that all biological effects of dioxin-like agents are mediated via a single biochemical target, the Ah receptor (AhR), and that the relative biologic potencies of such agents in any given system, coupled with their exposure levels, may be described in terms of toxic equivalents (TEQ). It has also shown that the TEQ sources include not only chlorinated species such as the dioxins (PCDDs), PCDFs, and coplanar PCBs, but also non-chlorinated substances such as the PAHs of wood smoke, the AhR agonists of cooked meat, and the indolocarbazole (ICZ) derived from cruciferous vegetables. Humans have probably had elevated exposures to these non-chlorinated TEQ sources ever since the discoveries of fire, cooking, and the culinary use of Brassica spp. Recent assays of CYP1A2 induction show that these "natural" or "traditional" AhR agonists contribute 50-100 times as much to average human TEQ exposures as do the chlorinated xenobiotics. Currently, the safe doses of the xenobiotic TEQ sources are estimated from their NOAELs and large extrapolation factors derived from arbitrary mathematical models, whereas the NOAELs themselves are regarded as the safe doses for the TEQs of traditional dietary components. Available scientific data can neither support nor refute either approach to assessing the health risk of an individual chemical substance. However, if two substances are toxicologically equivalent, then their TEQ-adjusted health risks must also be equivalent, and the same dose extrapolation procedure should be used for both.
Risk and safety assessments for early life exposures to environmental chemicals or pharmaceuticals based on cross-species extrapolation would greatly benefit from information on chemical dosimetry in the young.
Applications of adenine nucleotide measurements in oceanography
NASA Technical Reports Server (NTRS)
Holm-Hansen, O.; Hodson, R.; Azam, F.
1975-01-01
The methodology involved in nucleotide measurements is outlined, along with data to support the premise that ATP concentrations in microbial cells can be extrapolated to biomass parameters. ATP concentrations in microorganisms and nucleotide analyses are studied.
Technical Progress: Three Ways to Keep Up.
ERIC Educational Resources Information Center
Patterson, J. Wayne; And Others
1988-01-01
The authors analyzed three techniques employed in technological forecasting: (1) brainstorming, (2) extrapolation, and (3) scenario writing. They argue that these techniques have value to practitioners, particularly managers, who are often affected by technological change. (CH)
GENOMIC APPROACHES FOR CROSS-SPECIES EXTRAPOLATION IN TOXICOLOGY
The latest tools for investigating stress in organisms, genomic technologies provide great insight into how different organisms respond to environmental conditions. However, their usefulness needs testing, verification, and codification. Genomic Approaches for Cross-Species Extra...
An explicit predictor-corrector solver with applications to Burgers' equation
NASA Technical Reports Server (NTRS)
Dey, S. K.; Dey, C.
1983-01-01
Forward Euler's explicit finite-difference extrapolation formula is used as a predictor and a convex formula as a corrector to integrate differential equations numerically. An application has been made to Burgers' equation.
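The predictor-corrector idea can be sketched for the viscous Burgers equation u_t + u u_x = ν u_xx: a forward-Euler predictor, then a convex combination of the predictor and an Euler step evaluated at the predicted state. The weight θ and the central-difference discretization are illustrative choices, not Dey's exact formulation.

```python
import numpy as np

def rhs(u, dx, nu):
    """Semi-discrete right-hand side of u_t = -u u_x + nu u_xx (periodic)."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # central u_x
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central u_xx
    return -u * ux + nu * uxx

def step(u, dt, dx, nu, theta=0.5):
    predictor = u + dt * rhs(u, dx, nu)          # forward Euler predictor
    corrector = u + dt * rhs(predictor, dx, nu)  # Euler step at predicted state
    return theta * predictor + (1 - theta) * corrector  # convex combination

nx, nu_visc = 128, 0.05
x = np.linspace(0.0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)                 # classic initial condition: steepens into a shock
dt = 0.2 * dx                 # well inside the explicit stability limits here
for _ in range(200):
    u = step(u, dt, dx, nu_visc)
```

With θ = 1/2 the convex combination reduces to Heun's method; the amplitude decays under viscosity while the wave steepens, and the scheme stays stable at this CFL number.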
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Truster, T. J.; Cochran, K. B.
Advanced reactors designed to operate at higher temperatures than current light water reactors require structural materials with high creep strength and creep-fatigue resistance to achieve long design lives. Grade 91 is a ferritic/martensitic steel designed for long creep life at elevated temperatures. It has been selected as a candidate material for sodium fast reactor intermediate heat exchangers and other advanced reactor structural components. This report focuses on the creep deformation and rupture life of Grade 91 steel. The time required to complete an experiment limits the availability of long-life creep data for Grade 91 and other structural materials. Design methods often extrapolate the available shorter-term experimental data to longer design lives. However, extrapolation methods tacitly assume the underlying material mechanisms causing creep for long-life/low-stress conditions are the same as the mechanisms controlling creep in the short-life/high-stress experiments. A change in mechanism for long-term creep could cause design methods based on extrapolation to be non-conservative. The goal for physically-based microstructural models is to accurately predict material response in experimentally-inaccessible regions of design space. An accurate physically-based model for creep represents all the material mechanisms that contribute to creep deformation and damage and predicts the relative influence of each mechanism, which changes with loading conditions. Ideally, the individual mechanism models adhere to the material physics and not an empirical calibration to experimental data, so the model remains predictive for a wider range of loading conditions. This report describes such a physically-based microstructural model for Grade 91 at 600 °C. The model explicitly represents competing dislocation and diffusional mechanisms in both the grain bulk and grain boundaries.
The model accurately recovers the available experimental creep curves at higher stresses and the limited experimental data at lower stresses, predominately primary creep rates. The current model considers only one temperature. However, because the model parameters are, for the most part, directly related to the physics of fundamental material processes, the temperature dependence of the properties is known. Therefore, temperature dependence can be included in the model with limited additional effort. The model predicts a mechanism shift at 600 °C at approximately 100 MPa from a dislocation-dominated regime at higher stress to a diffusion-dominated regime at lower stress. This mechanism shift impacts the creep life, notch-sensitivity, and, likely, creep ductility of Grade 91. In particular, the model predicts existing extrapolation methods for creep life may be non-conservative when attempting to extrapolate data for higher stress creep tests to low stress, long-life conditions. Furthermore, the model predicts a transition from notch-strengthening behavior at high stress to notch-weakening behavior at lower stresses. Both behaviors may affect the conservatism of existing design methods.
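The stress dependence of such a mechanism shift can be sketched as a power-law (dislocation) creep term plus a linear (diffusional) term. The coefficients below are chosen only so that the crossover lands near 100 MPa; they are not the report's calibrated Grade 91 parameters.

```python
# Illustrative two-mechanism creep-rate model: dislocation (power-law) creep
# dominates at high stress, diffusional (linear) creep at low stress.

def creep_rate(sigma_mpa, a_disl=1.0e-12, n=5.0, b_diff=1.0e-4):
    dislocation = a_disl * sigma_mpa ** n   # power-law term, exponent n
    diffusional = b_diff * sigma_mpa        # linear (Newtonian) term
    return dislocation + diffusional

# The terms are equal where sigma**(n-1) = b_diff/a_disl, i.e. at 100 MPa here.
rate_hi = creep_rate(200.0)   # dislocation term dominates
rate_lo = creep_rate(50.0)    # diffusional term dominates
```

Extrapolating the high-stress power law past the crossover underpredicts the low-stress rate, which is exactly the non-conservatism the report warns about.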
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sylvetsky, Nitai, E-mail: gershom@weizmann.ac.il; Martin, Jan M. L., E-mail: gershom@weizmann.ac.il; Peterson, Kirk A., E-mail: kipeters@wsu.edu
2016-06-07
In the context of high-accuracy computational thermochemistry, the valence coupled cluster with all singles and doubles (CCSD) correlation component of molecular atomization energies presents the most severe basis set convergence problem, followed by the (T) component. In the present paper, we make a detailed comparison, for an expanded version of the W4-11 thermochemistry benchmark, between, on the one hand, orbital-based CCSD/AV{5,6}Z + d and CCSD/ACV{5,6}Z extrapolation, and on the other hand CCSD-F12b calculations with cc-pVQZ-F12 and cc-pV5Z-F12 basis sets. This latter basis set, now available for H–He, B–Ne, and Al–Ar, is shown to be very close to the basis set limit. Apparent differences (which can reach 0.35 kcal/mol for systems like CCl₄) between orbital-based and CCSD-F12b basis set limits disappear if basis sets with additional radial flexibility, such as ACV{5,6}Z, are used for the orbital calculation. Counterpoise calculations reveal that, while total atomization energies with V5Z-F12 basis sets are nearly free of BSSE, orbital calculations have significant BSSE even with AV(6 + d)Z basis sets, leading to non-negligible differences between raw and counterpoise-corrected extrapolated limits. This latter problem is greatly reduced by switching to ACV{5,6}Z core-valence basis sets, or simply adding an additional zeta to just the valence orbitals. Previous reports that all-electron approaches like HEAT (high-accuracy extrapolated ab-initio thermochemistry) lead to different CCSD(T) limits than “valence limit + CV correction” approaches like Feller-Peterson-Dixon and Weizmann-4 (W4) theory can be rationalized in terms of the greater radial flexibility of core-valence basis sets. For (T) corrections, conventional CCSD(T)/AV{Q,5}Z + d calculations are found to be superior to scaled or extrapolated CCSD(T)-F12b calculations of similar cost.
For a W4-F12 protocol, we recommend obtaining the Hartree-Fock and valence CCSD components from CCSD-F12b/cc-pV{Q,5}Z-F12 calculations, but the (T) component from conventional CCSD(T)/aug’-cc-pV{Q,5}Z + d calculations using Schwenke’s extrapolation; post-CCSD(T), core-valence, and relativistic corrections are to be obtained as in the original W4 theory. W4-F12 is found to agree slightly better than W4 with ATcT (active thermochemical tables) data, at a substantial saving in computation time and especially I/O overhead. A W4-F12 calculation on benzene is presented as a proof of concept.
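The extrapolation step these protocols rely on can be sketched with the generic two-point inverse-cubic complete-basis-set formula, E(L) = E_CBS + A/L³ (Schwenke's scheme instead uses an empirically optimized weight; the L⁻³ form here is the textbook variant). The correlation energies below are made up for illustration.

```python
# Two-point CBS extrapolation assuming E(L) = E_CBS + A/L**3:
# eliminate A using energies at two cardinal numbers L.

def cbs_two_point(e_lo, e_hi, l_lo, l_hi):
    w_lo, w_hi = l_lo ** 3, l_hi ** 3
    return (w_hi * e_hi - w_lo * e_lo) / (w_hi - w_lo)

# Hypothetical quadruple/quintuple-zeta correlation energies (hartree):
e_cbs = cbs_two_point(-0.400000, -0.410000, 4, 5)
```

If the energies really follow the assumed L⁻³ law, the formula recovers the basis set limit exactly; real deviations from that law are precisely why the paper compares several extrapolation and F12 strategies.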
Scaling Factor Variability and Toxicokinetic Outcomes in Children
Abstract title: Scaling Factor Variability and Toxicokinetic Outcomes in Children. Background: Biotransformation rates (Vmax) extrapolated from in vitro data are used increasingly in human physiologically based pharmacokinetic (PBPK) models. PBPK models are widely used in human hea...
Peer Review Comments on the IRIS Assessment of Benzene
Attachment to IRIS file for benzene, January 19, 2000, RESPONSE TO THE PEER REVIEW COMMENTS, II. Extrapolation of the Benzene Inhalation Unit Risk Estimate to the Oral Route of Exposure (EPA/NCEA-W-0517, July 1999)
The capacity to perform route-to-route extrapolation of toxicity data is becoming increasingly crucial to the Agency, with a number of strategies suggested and demonstrated. One strategy involves using a combination of existing data and modeling approaches. This strategy propos...
A forecast of bridge engineering, 1980-2000.
DOT National Transportation Integrated Search
1979-01-01
A three-pronged study was undertaken to forecast the nature of bridge engineering and construction for the years 1980 to 2000. First, the history of bridge engineering was explored to extrapolate likely future developments. Second, a detailed questio...
Development of an accelerated creep testing procedure for geosynthetics : technical summary.
DOT National Transportation Integrated Search
1997-09-01
Temperature-creep relationships in geosynthetics vary for each type of geogrid and depend on many factors such as polymer structure, manufacture process, degree of crystallinity, and glass-transition temperature. The extrapolation procedures to predi...
In formulating hypothesis related to extrapolations across species and/or chemicals, the ECOTOX database provides researchers a means of locating high quality ecological effects data for a wide-range of terrestrial and aquatic receptors. Currently the database includes more than ...
Asteroid Studies: A 35-Year Forecast
NASA Astrophysics Data System (ADS)
Rivkin, A. S.; Denevi, B. W.; Klima, R. L.; Ernst, C. M.; Chabot, N. L.; Barnouin, O. S.; Cohen, B. A.
2017-02-01
We are in an active time for asteroid studies, which fall at the intersection of science, planetary defense, human exploration, and in situ resource utilization. We look forward and extrapolate what the future may hold for asteroid science.
Process scales in catchment science: a new synthesis
Concerns surrounding data resolution, choice of spatial and temporal scales in research design, and problems with extrapolation of processes across spatial and temporal scales differ greatly between scientific process-elucidation research and scenario exploration for watershed ma...
Five Year Computer Technology Forecast
DOT National Transportation Integrated Search
1972-12-01
The report delineates the various computer system components and extrapolates past trends in light of industry goals and physical limitations to predict what individual components and entire systems will look like in the second half of this decade. T...
Toxicokinetic Triage for Environmental Chemicals
Toxicokinetic (TK) models are essential for linking administered doses to blood and tissue concentrations. In vitro-to-in vivo extrapolation (IVIVE) methods have been developed to determine TK from limited in vitro measurements and chemical structure-based property predictions, p...
NASA Technical Reports Server (NTRS)
Jenkins, D. W.
1972-01-01
NASA chose the watershed of Rhode River, a small sub-estuary of the Bay, as a representative test area for intensive studies of remote sensing, the results of which could be extrapolated to other estuarine watersheds around the Bay. A broad program of ecological research was already underway within the watershed, conducted by the Smithsonian Institution's Chesapeake Bay Center for Environmental Studies (CBCES) and cooperating universities. This research program offered a unique opportunity to explore potential applications for remote sensing techniques. This led to a joint NASA-CBCES project with two basic objectives: to evaluate remote sensing data for the interpretation of ecological parameters, and to provide essential data for ongoing research at the CBCES. A third objective, dependent upon realization of the first two, was to extrapolate photointerpretive expertise gained at the Rhode River watershed to other portions of the Chesapeake Bay.
Accuracy of topological entanglement entropy on finite cylinders.
Jiang, Hong-Chen; Singh, Rajiv R P; Balents, Leon
2013-09-06
Topological phases are unique states of matter which support nonlocal excitations that behave as particles with fractional statistics. A universal characterization of gapped topological phases is provided by the topological entanglement entropy (TEE). We study the finite size corrections to the TEE by focusing on systems with a Z2 topologically ordered state using density-matrix renormalization group and perturbative series expansions. We find that extrapolations of the TEE based on the Renyi entropies with a Renyi index of n≥2 suffer from much larger finite size corrections than do extrapolations based on the von Neumann entropy. In particular, when the circumference of the cylinder is about ten times the correlation length, the TEE obtained using the von Neumann entropy has an error of order 10^(-3), while for Renyi entropies it can even exceed 40%. We discuss the relevance of these findings to previous and future searches for topologically ordered phases, including quantum spin liquids.
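The extrapolation underlying this kind of study can be illustrated with a minimal sketch on invented data: for a gapped Z2 phase the entanglement entropy of a cylinder bipartition follows an area law, S(Ly) = a·Ly − γ, so a linear fit of S against the circumference Ly yields the TEE γ as the negative intercept (γ = ln 2 for Z2). The values below are illustrative, not the paper's DMRG data.

```python
import numpy as np

# Hypothetical entanglement entropies obeying S = a*Ly - gamma exactly
# (finite-size corrections, the paper's actual subject, are omitted here).
a, gamma = 0.85, np.log(2)
Ly = np.array([6.0, 8.0, 10.0, 12.0])
S = a * Ly - gamma

# Linear fit S = slope*Ly + intercept; the TEE estimate is -intercept.
slope, intercept = np.polyfit(Ly, S, 1)
tee = -intercept
print(f"estimated TEE = {tee:.4f}, ln(2) = {np.log(2):.4f}")
```

With noisy or strongly finite-size-corrected data the same fit is performed, and the paper's point is that the quality of this extrapolation depends on which entropy (von Neumann vs. Renyi) supplies the S values.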
Research on camera on orbit radial calibration based on black body and infrared calibration stars
NASA Astrophysics Data System (ADS)
Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng
2018-05-01
Affected by the launching process and the space environment, the response capability of a space camera is attenuated, so spaceborne radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase the precision of infrared radiation measurement. Because stars can be treated as point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and the design is verified by an on-orbit test. The experimental results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, which satisfies the requirements of on-orbit calibration.
Creatine supplementation and glycemic control: a systematic review.
Pinto, Camila Lemos; Botelho, Patrícia Borges; Pimentel, Gustavo Duarte; Campos-Ferraz, Patrícia Lopes; Mota, João Felipe
2016-09-01
The focus of this review is the effects of creatine supplementation with or without exercise on glucose metabolism. A comprehensive examination of the past 16 years of study within the field provided a distillation of key data. In both animal and human studies, creatine supplementation together with exercise training demonstrated greater beneficial effects on glucose metabolism; creatine supplementation by itself demonstrated positive results in only a few of the studies. In the animal studies, the effects of creatine supplementation on glucose metabolism were even more distinct, and caution is needed in extrapolating these data to other species, especially humans. Regarding the human studies, considering the sample characteristics, the findings cannot be extrapolated to patients who have poorer glycemic control, are older, are on a different pharmacological treatment (e.g., exogenous insulin therapy), or are physically inactive. Thus, creatine supplementation is a possible nutritional therapy adjuvant with hypoglycemic effects, particularly when used in conjunction with exercise.
Past and Future of Astronomy and SETI Cast in Maths
NASA Astrophysics Data System (ADS)
Maccone, C.
Assume that the history of Astronomy and SETI is the leading proof of the evolution of human knowledge on Earth over the last 3000 years. Then human knowledge has increased substantially, although not at a uniform pace. A mathematical description of how much human knowledge has increased, however, is difficult to achieve. In this paper, we cast a mathematical model of the evolution of human knowledge over the last three thousand years that seems to reflect reasonably well both what is known from the past and what might be extrapolated into the future. The model, motivated by two seminal books (by Sagan, and by Finney and Jones), uses two cubic curves representing the evolution of Astronomy and of SETI, respectively. We conclude by extrapolating these curves into the future and reach the conclusion that the "Star Trek" age of humankind might possibly begin by the end of this century.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spence, R.D.; Godbee, H.W.; Tallent, O.K.
1991-01-01
Despite the demonstrated importance of diffusion control in leaching, other mechanisms have been observed to play a role, and leaching from porous solid bodies is not simple diffusion. As yet, only simple diffusion theory has been developed well enough for extrapolation. The well-developed diffusion theory, used in data analysis by ANSI/ANS-16.1 and the NEWBOX program, can help in extrapolating and predicting the performance of solidified waste forms over decades and centuries, but the limitations and increased uncertainty must be understood in doing so. Treating leaching as a semi-infinite medium problem, as done in the Cote model, results in simpler equations but limits application to early leaching behavior, when less than 20% of a given component has been leached. 18 refs., 2 tabs.
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
NASA Astrophysics Data System (ADS)
Coy, R. N.; Cunningham, G.; Pryke, C. L.; Watson, A. A.
1997-03-01
Measurements of the lateral distribution function (ldf) of Extensive Air Showers (EAS) as recorded by the array of water-Čerenkov detectors at Haverah Park are described, and accurate experimental parameterizations expressing the mean ldf for 2 × 10^17 < E < 4 × 10^18 eV, 50 < r < 700 m, and θ < 45° are given. An extrapolation of these relations to the regime E ≥ 10^19 eV and r > 700 m is described: extrapolation in this energy domain appears valid, and an approximate correction term is given for the larger core distances. The results of recent Monte Carlo simulations of shower development and detector behavior are compared to the parameterized ldf. The agreement is good, increasing confidence that these simulations may be trusted as design tools for the Auger project, a proposed 'next generation' detector system.
Numerical solution of open string field theory in Schnabl gauge
NASA Astrophysics Data System (ADS)
Arroyo, E. Aldo; Fernandes-Silva, A.; Szitas, R.
2018-01-01
Using traditional Virasoro L_0 level-truncation computations, we evaluate the open bosonic string field theory action up to level (10, 30). Extremizing this level-truncated potential, we construct a numerical solution for tachyon condensation in Schnabl gauge. We find that the energy associated with the numerical solution overshoots the expected value -1 at level L = 6. Extrapolating the level-truncation data for L ≤ 10 to estimate the vacuum energies for L > 10, we predict that the energy reaches a minimum value at L ≈ 12, and then turns back to approach -1 asymptotically as L → ∞. Furthermore, we analyze the tachyon vacuum expectation value (vev); by extrapolating its level-truncation data, we predict that the tachyon vev reaches a minimum value at L ≈ 26, and then turns back to approach the expected analytical result as L → ∞.
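A generic level-truncation extrapolation of this kind can be sketched as follows: treat the truncated energies as a function of x = 1/L, fit a low-order polynomial in x, and evaluate the fit at x = 0 to estimate the L → ∞ value. The data below are invented to mimic the qualitative behavior (approach to -1), not the paper's numbers.

```python
import numpy as np

# Hypothetical level-truncation energies, modeled as
# E(L) = -1 + c1/L + c2/L^2 for illustration only.
L = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
E = -1.0 + 0.30 / L - 0.80 / L**2

# Fit E as a quadratic in x = 1/L and extrapolate to x = 0 (L -> infinity).
x = 1.0 / L
coeffs = np.polyfit(x, E, 2)
E_inf = np.polyval(coeffs, 0.0)
print(f"extrapolated E(L -> inf) = {E_inf:.4f}")
```

Because the mock data are exactly quadratic in 1/L, the fit recovers the limit -1; with real truncation data the choice of fitting order and fit window is itself a source of systematic uncertainty.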
Core conditions for alpha heating attained in direct-drive inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, A.; Woo, K. M.; Betti, R.
It is shown that direct-drive implosions on the OMEGA laser have achieved core conditions that would lead to significant alpha heating at incident energies available on the National Ignition Facility (NIF) scale. The extrapolation of the experimental results from OMEGA to NIF energy assumes only that the implosion hydrodynamic efficiency is unchanged at higher energies. This approach is independent of the uncertainties in the physical mechanism that degrade implosions on OMEGA, and relies solely on a volumetric scaling of the experimentally observed core conditions. It is estimated that the current best-performing OMEGA implosion [Regan et al., Phys. Rev. Lett. 117, 025001 (2016)] extrapolated to a 1.9 MJ laser driver with the same illumination configuration and laser-target coupling would produce 125 kJ of fusion energy with similar levels of alpha heating observed in current highest performing indirect-drive NIF implosions.
NASA Technical Reports Server (NTRS)
Hunter, H. E.; Amato, R. A.
1972-01-01
The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for the prediction of future sunspot activity. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.
Non-Arrhenius protein aggregation.
Wang, Wei; Roberts, Christopher J
2013-07-01
Protein aggregation presents one of the key challenges in the development of protein biotherapeutics. It affects not only product quality but potentially also safety, as protein aggregates have been shown to be linked with cytotoxicity and patient immunogenicity. Therefore, investigations of protein aggregation remain a major focus in pharmaceutical companies and academic institutions. Due to the complexity of the aggregation process and temperature-dependent conformational stability, temperature-induced protein aggregation is often non-Arrhenius over even the relatively small temperature windows relevant for product development, and this makes low-temperature extrapolation difficult based simply on accelerated stability studies at high temperatures. This review discusses the non-Arrhenius nature of the temperature dependence of protein aggregation, explores possible causes, and considers inherent hurdles for accurately extrapolating aggregation rates from conventional industrial approaches for selecting accelerated conditions and from conventional or more advanced methods of analyzing the resulting rate data.
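The pitfall the review describes can be made concrete with an invented example: if the effective activation energy of aggregation grows as temperature drops (e.g., because conformational unfolding contributes), an Arrhenius line, ln k = ln A − Ea/(RT), fitted at accelerated temperatures will overpredict the rate at storage temperature. All rate parameters below are hypothetical, chosen only to show the mechanism of the error.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def true_rate(T):
    # Hypothetical non-Arrhenius behavior: the effective barrier grows
    # below ~50 C, mimicking unfolding-limited aggregation.
    Ea = 100e3 + 60e3 * max(0.0, (323.0 - T) / 323.0)
    return 1e12 * math.exp(-Ea / (R * T))

# Arrhenius fit ln k = lnA + slope*(1/T) from two accelerated
# temperatures (50 C and 60 C), where the kinetics are still Arrhenius.
T1, T2 = 323.15, 333.15
slope = (math.log(true_rate(T2)) - math.log(true_rate(T1))) / (1 / T2 - 1 / T1)
lnA = math.log(true_rate(T1)) - slope / T1

# Extrapolate down to a 5 C storage temperature and compare.
T_store = 278.15
k_extrap = math.exp(lnA + slope / T_store)
k_true = true_rate(T_store)
print(f"Arrhenius-extrapolated / true rate at 5 C: {k_extrap / k_true:.1f}x")
```

Here the extrapolated rate overshoots the "true" rate by more than an order of magnitude, which is the qualitative failure mode motivating the review.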
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
Determination of Extrapolation Distance with Measured Pressure Signatures from Two Low-Boom Models
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil
2004-01-01
A study to determine a limiting distance-to-span ratio for the extrapolation of near-field pressure signatures is described and discussed. This study was to be done in two wind-tunnel facilities with two wind-tunnel models. At this time, only the first half had been completed, so the scope of this report is limited to the design of the models and to an analysis of the first set of measured pressure signatures. The results from this analysis showed that the pressure signatures measured at separation distances of 2 to 5 span lengths did not show the desired low-boom shapes. However, there were indications that the pressure signature shapes were becoming 'flat-topped'. This trend toward a 'flat-topped' pressure signature shape was seen to be a gradual one at the distance ratios employed in this first series of wind-tunnel tests.
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; Contributors, JET
2017-06-01
This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted T e with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good matching of predicted T i with experimental measurements allowing for a more accurate prediction of the neutron yields. The impact of input data and assumptions prescribed in the simulations are also investigated in this paper. The statistical validation and the assessment of uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.
Cigarette sales in pharmacies in the USA (2005-2009).
Seidenberg, Andrew B; Behm, Ilan; Rees, Vaughan W; Connolly, Gregory N
2012-09-01
Several US jurisdictions have adopted policies prohibiting pharmacies from selling tobacco products. Little is known about how pharmacies contribute to total cigarette sales. Pharmacy and total cigarette sales in the USA were tabulated from AC Nielsen and Euromonitor, respectively, for the years 2005-2009. Linear regression was used to characterise trends over time, with observed trends extrapolated to 2020. Between 2005 and 2009, pharmacy cigarette sales increased 22.72% (p=0.004), while total cigarette sales decreased 17.43% (p=0.015). In 2005, pharmacy cigarette sales represented 3.05% of total cigarette sales, increasing to 4.54% by 2009. Extrapolation of these findings resulted in estimated pharmacy cigarette sales of 14.59% of total US cigarette sales by 2020. Cigarette sales in American pharmacies have risen in recent years, while cigarette sales nationally have declined. If current trends continue, pharmacy cigarette market share will, by 2020, increase to more than four times the 2005 share.
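The trend extrapolation used in this study can be sketched with a linear fit per series: fit pharmacy and total sales against year, project both lines to 2020, and take their ratio as the projected market share. The sales figures below are invented placeholders, not the AC Nielsen/Euromonitor data, so the projected share differs from the paper's 14.59%.

```python
import numpy as np

# Hypothetical annual sales (arbitrary units): pharmacy sales rising,
# total sales falling, mirroring the direction of the reported trends.
years = np.array([2005, 2006, 2007, 2008, 2009], dtype=float)
pharmacy = np.array([10.0, 10.6, 11.1, 11.7, 12.3])
total = np.array([328.0, 314.0, 300.0, 285.0, 271.0])

# Fit each series linearly, then extrapolate both to 2020.
p_fit = np.polyfit(years, pharmacy, 1)
t_fit = np.polyfit(years, total, 1)
share_2020 = np.polyval(p_fit, 2020) / np.polyval(t_fit, 2020) * 100
print(f"extrapolated 2020 pharmacy share: {share_2020:.1f}%")
```

Because the numerator rises while the denominator falls, the projected share grows much faster than either series alone, which is why the authors' 2020 market-share estimate is several times the 2005 value.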
Accounting for measurement error in log regression models with applications to accelerated testing.
Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M
2018-01-01
In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
Cao, Le; Wei, Bing
2014-08-25
A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the radar cross section (RCS) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the calculated data on the output boundary. However, available extrapolation methods have to evaluate the half-space Green function. In this paper, a new method is proposed that avoids using the complex and time-consuming half-space Green function. Numerical results show that this method is in good agreement with the classic algorithm and can be used for fast calculation of scattering and radiation from targets over a layered half space.
Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario
2017-06-07
By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
O'Reilly, J; Vintró, L León; Mitchell, P I; Donohue, I; Leira, M; Hobbs, W; Irvine, K
2011-05-01
The chronologies and sediment accumulation rates for a lake sediment sequence from Lough Carra (Co. Mayo, western Ireland) were established by applying the constant initial concentration (CIC) and constant rate of supply (CRS) hypotheses to the measured excess ²¹⁰Pb profile. The resulting chronologies were validated using the artificial fallout radionuclides ¹³⁷Cs and ²⁴¹Am, which provide independent chronostratigraphic markers for the second half of the 20th century. The validity of extrapolating the derived CIC and CRS dates below the ²¹⁰Pb dating horizon using average sedimentation rates was investigated using supplementary paleolimnological information and historical data. Our data confirm that such an extrapolation is well justified at sites characterised by relatively stable sedimentation conditions.
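The CIC hypothesis used here has a compact closed form worth sketching: if excess ²¹⁰Pb is deposited with a constant initial activity A0 and decays with constant λ, a layer measured at activity A has age t = ln(A0/A)/λ. The activities below are invented; only the 22.3-year ²¹⁰Pb half-life is a standard value.

```python
import math

# CIC dating sketch: A(t) = A0 * exp(-lam * t)  =>  t = ln(A0 / A) / lam
T_HALF = 22.3                  # 210Pb half-life in years
lam = math.log(2) / T_HALF     # decay constant (1/yr)

A0 = 120.0       # excess 210Pb activity at the sediment surface (hypothetical)
A_layer = 30.0   # excess 210Pb activity measured at depth (hypothetical)

age = math.log(A0 / A_layer) / lam
print(f"estimated layer age: {age:.1f} years")
```

A fourfold activity drop corresponds to exactly two half-lives (44.6 yr); extrapolating below the dating horizon, as in the study, amounts to assuming the sedimentation rate implied by such dates stays constant deeper in the core.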
Three-body unitarity in the finite volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mai, M.; Döring, M.
The physical interpretation of lattice QCD simulations, performed in a small volume, requires an extrapolation to the infinite volume. A method is proposed to perform such an extrapolation for three interacting particles at energies above threshold. For this, a recently formulated relativistic $3 \to 3$ amplitude based on the isobar formulation is adapted to the finite volume. The guiding principle is two- and three-body unitarity, which imposes the imaginary parts of the amplitude in the infinite volume. In turn, these imaginary parts dictate the leading power-law finite-volume effects. It is demonstrated that finite-volume poles arising from the singular interaction, from the external two-body sub-amplitudes, and from the disconnected topology cancel exactly, leaving only the genuine three-body eigenvalues. Lastly, the corresponding quantization condition is derived for the case of three identical scalar-isoscalar particles and its numerical implementation is demonstrated.
Dead time corrections using the backward extrapolation method
NASA Astrophysics Data System (ADS)
Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.
2017-05-01
Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create large biases in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled counts per second (CPS), based on backward extrapolation of the losses, created by increasingly large artificially imposed dead times on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
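The backward-extrapolation idea can be sketched under simplifying assumptions (a single non-paralyzing dead time, Poisson events, linear extrapolation); this is an illustration of the general technique, not the authors' implementation. Progressively larger artificial dead times are imposed on a recorded event train, the surviving count rate is tabulated, and the trend is extrapolated back to zero imposed dead time.

```python
import random

def apply_dead_time(times, tau):
    """Non-paralyzing dead time: drop events within tau of the last accepted one."""
    accepted, last = 0, -float("inf")
    for t in times:
        if t - last >= tau:
            accepted += 1
            last = t
    return accepted

# Hypothetical Poisson event train: 100,000 events at a true rate of 1e4 counts/s.
random.seed(0)
rate, n = 1.0e4, 100_000
t, times = 0.0, []
for _ in range(n):
    t += random.expovariate(rate)
    times.append(t)

# Impose increasing artificial dead times and record the surviving CPS.
taus = [2e-6, 4e-6, 6e-6, 8e-6]
duration = times[-1]
cps = [apply_dead_time(times, tau) / duration for tau in taus]

# Least-squares line cps = a*tau + b; the intercept b estimates the
# dead-time-free count rate (linear extrapolation back to tau = 0).
m = len(taus)
tbar, cbar = sum(taus) / m, sum(cps) / m
a = sum((x - tbar) * (y - cbar) for x, y in zip(taus, cps)) / sum((x - tbar) ** 2 for x in taus)
b = cbar - a * tbar
print(f"extrapolated dead-time-free rate: {b:.0f} counts/s")
```

For small loss fractions the non-paralyzing loss curve is nearly linear in tau, so the intercept lands within a fraction of a percent of the true rate; larger imposed dead times would call for a higher-order extrapolation.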
Electron impact cross sections for the 2p ²P state excitation of lithium
NASA Technical Reports Server (NTRS)
Vuskovic, L.; Trajmar, S.; Register, D. F.
1982-01-01
Electron impact excitation of the 2p ²P state of Li was studied at 10, 20, 60, 100, 150 and 200 eV. Relative differential cross sections in the angular range 3-120 deg were measured and then normalized to the absolute scale by using the optical f value. Integral and momentum transfer cross sections were obtained by extrapolating the differential cross sections to 0 deg and to 180 deg. The question of normalizing electron-metal-atom collision cross sections in general was examined and the method of normalization to optical f values in particular was investigated in detail. It has been concluded that the extrapolation of the apparent generalized oscillator strength (obtained from the measured differential cross sections) to the zero momentum transfer limit with an expression using even powers of the momentum transfer and normalization of the limit to the optical f value yields reliable absolute cross sections.
USAF Bioenvironmental Noise Data Handbook. Volume 165: MC-1 heater, duct type, portable
NASA Astrophysics Data System (ADS)
Rau, T. H.
1982-06-01
The MC-1 heater is a gasoline-motor driven, portable ground heater used primarily for cockpit and cabin temperature control. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at normal rated conditions. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise levels, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 - 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.
USAF bioenvironmental noise data handbook. Volume 158: F-106A aircraft, near and far-field noise
NASA Astrophysics Data System (ADS)
Rau, T. H.
1982-05-01
The USAF F-106A is a single seat, all-weather fighter/interceptor aircraft powered by a J75-P-17 turbojet engine. This report provides measured and extrapolated data defining the bioacoustic environments produced by this aircraft operating on a concrete runup pad for five engine-power conditions. Near-field data are reported for five locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise levels, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 19 locations are normalized to standard meteorological conditions and extrapolated from 75 - 8000 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.
USAF bioenvironmental noise data handbook. Volume 163: GPC-28 compressor
NASA Astrophysics Data System (ADS)
Rau, T. H.
1982-05-01
The GPC-28 is a gasoline engine-driven compressor with a 120 volt 60 Hz generator used for general purpose maintenance. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at a normal rated condition. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise level, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 - 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.
USAF bioenvironmental noise data handbook. Volume 161: A/M32A-86 generator set, diesel engine driven
NASA Astrophysics Data System (ADS)
Rau, T. H.
1982-05-01
The A/M32A-86 generator set is a diesel engine driven source of electrical power used for the starting of aircraft, and for ground maintenance. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at normal rated/loaded conditions. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise level, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 - 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.
NASA Astrophysics Data System (ADS)
Blake, Thomas A.; Chackerian, Charles, Jr.; Podolske, James R.
1996-02-01
Mid-infrared magnetic rotation spectroscopy (MRS) experiments on nitric oxide (NO) are quantitatively modeled by theoretical calculations. The verified theory is used to specify an instrument that can make in situ measurements on NO and NO2 in the Earth's atmosphere at a sensitivity level of a few parts in 10^12 by volume per second. The prototype instrument used in the experiments has an extrapolated detection limit for NO of 30 parts in 10^9 for a 1-s integration time over a 12-cm path length. The detection limit is an extrapolation of experimental results to a signal-to-noise ratio of one, where the noise is considered to be one-half the peak-to-peak baseline noise. Also discussed are the various factors that can limit the sensitivity of an MRS spectrometer that uses liquid-nitrogen-cooled lead-salt diode lasers and photovoltaic detectors.
In Vitro Measurements of Metabolism for Application in Pharmacokinetic Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipscomb, John C.; Poet, Torka S.
2008-04-01
Human risk and exposure assessments require dosimetry information. Species-specific tissue dose response will be driven by physiological and biochemical processes. While metabolism and pharmacokinetic data are often not available in humans, they are much more available in laboratory animals; metabolic rate constants can be readily derived in vitro. The physiological differences between laboratory animals and humans are known. Biochemical processes, especially metabolism, can be measured in vitro and extrapolated to account for in vivo metabolism through clearance models or when linked to a physiologically based pharmacokinetic (PBPK) model to describe the physiological processes, such as drug delivery to the metabolic organ. This review focuses on the different organ, cellular, and subcellular systems that can be used to measure in vitro metabolic rate constants and how those data are extrapolated for use in biokinetic modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valoppi, L.; Carlisle, J.; Polisini, J.
1995-12-31
A component of both human health and ecological risk assessments is the evaluation of toxicity values. A comparison is presented between the methodology for developing Reference Doses (RfDs) protective of humans and that developed for vertebrate wildlife species. For all species, a chronic No Observable Adverse Effect Level (NOAEL) is developed by applying uncertainty factors (UFs) to literature-based toxicity values. Uncertainty factors compensate for the length of exposure, the sensitivity of endpoints, and cross-species extrapolations between the test species and the species being assessed. Differences between human and wildlife species could include the toxicological endpoint, the critical study, and the magnitude of the cross-species extrapolation factor. Case studies for select chemicals are presented that contrast RfDs developed for humans with those developed for avian and mammalian wildlife.
Chenal, C; Legue, F; Nourgalieva, K; Brouazin-Jousseaume, V; Durel, S; Guitton, N
2000-01-01
In human radiation protection, the shape of the dose-effect curve for low-dose irradiation (LDI) is assumed to be linear, extrapolated from the clinical consequences of the Hiroshima and Nagasaki nuclear explosions. This extrapolation probably overestimates the risk below 200 mSv. In many circumstances, living species and cells can develop mechanisms of adaptation. Classical epidemiological studies will not be able to answer the question, and there is a need to assess more sensitive biological markers of the effects of LDI. Research should focus on DNA effects (strand breaks) and on the radioinduced expression of new genes and proteins involved in the response to oxidative stress and in DNA repair mechanisms. New experimental biomolecular techniques should be developed in parallel with more conventional ones. Such studies would permit the assessment of new biological markers of radiosensitivity, which could be of great interest in radiation protection and radio-oncology.
NASA Technical Reports Server (NTRS)
SHARDANAND; Rao, A. D. P.
1977-01-01
The laboratory measurements of absolute Rayleigh scattering cross sections as a function of wavelength are reported for the gas molecules He, Ne, Ar, N2, H2, O2, CO2, and CH4, and for vapors of the most commonly used freons CCl2F2, CBrF3, CF4, and CHClF2. These cross sections are determined from measurements of photon scattering at an angle of 54 deg 44 min, which yields absolute values independent of the normal depolarization ratios. The present results show that in the spectral range 6943-3638 A, the Rayleigh scattering cross section can be extrapolated from one wavelength to another using the 1/lambda^4 law without knowing the polarizabilities. However, such an extrapolation cannot be done in the region of shorter wavelengths.
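The 1/lambda^4 extrapolation described above is straightforward to apply; a minimal sketch (with a made-up reference cross section, not the measured values) might look like:

```python
def rayleigh_extrapolate(sigma_ref, lam_ref, lam):
    """Extrapolate a Rayleigh scattering cross section from a reference
    wavelength using the 1/lambda^4 law (valid in the 6943-3638 A range
    per the abstract). Wavelengths must share the same units."""
    return sigma_ref * (lam_ref / lam) ** 4

# Ratio of cross sections between the two ends of the quoted range:
ratio = rayleigh_extrapolate(1.0, 6943.0, 3638.0)   # ~13.3x larger at 3638 A
```

The usefulness of the law is exactly this: one measured value anchors the whole visible range without any polarizability data.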
Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.
Kodell, R L; Gaylor, D W
1999-01-01
Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
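Because the product of independent log-normal factors is itself log-normal, with the log-means and log-variances summed, an upper percentile of the combined factor follows directly. A minimal sketch of this calculation, with purely illustrative parameter values (not those estimated in the paper):

```python
import math

def combined_uf_percentile(mus, sigmas, z=1.645):
    """Upper percentile (default ~95th, z = 1.645) of the product of
    independent log-normal uncertainty factors, each given by the mean
    (mu) and standard deviation (sigma) of its natural log. The product
    of log-normals is log-normal with summed mus and summed variances."""
    mu = sum(mus)
    sd = math.sqrt(sum(s * s for s in sigmas))
    return math.exp(mu + z * sd)

# Four hypothetical factors, each with geometric mean sqrt(10) and sigma
# chosen so each factor's own 95th percentile is the default value of 10:
mu_i = math.log(10) / 2
sigma_i = mu_i / 1.645
uf95 = combined_uf_percentile([mu_i] * 4, [sigma_i] * 4)   # -> 1000.0
```

With these illustrative parameters the combined 95th percentile is 1000, an order of magnitude below the conventional product of defaults (10^4), which is the kind of reduction the paper's approach makes visible.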
Three-body unitarity in the finite volume
Mai, M.; Döring, M.
2017-12-18
The physical interpretation of lattice QCD simulations, performed in a small volume, requires an extrapolation to the infinite volume. A method is proposed to perform such an extrapolation for three interacting particles at energies above threshold. For this, a recently formulated relativistic 3 -> 3 amplitude based on the isobar formulation is adapted to the finite volume. The guiding principle is two- and three-body unitarity, which imposes the imaginary parts of the amplitude in the infinite volume. In turn, these imaginary parts dictate the leading power-law finite-volume effects. It is demonstrated that finite-volume poles arising from the singular interaction, from the external two-body sub-amplitudes, and from the disconnected topology cancel exactly, leaving only the genuine three-body eigenvalues. Lastly, the corresponding quantization condition is derived for the case of three identical scalar-isoscalar particles, and its numerical implementation is demonstrated.
NASA Astrophysics Data System (ADS)
Andriessen, J. H. T. H.; van der Horst-Bruinsma, I. E.; ter Haar Romeny, B. M.
1989-05-01
The present phase of the clinical evaluation within the Dutch PACS project mainly focuses on the development and evaluation of a PACSystem for a few departments in the Utrecht University Hospital (UUH). A report on the first clinical experiences and a detailed cost/savings analysis of the PACSystem in the UUH are presented elsewhere. However, an assessment of the wider financial and organizational implications for hospitals and for the health sector is also needed. To this end, a model for (financial) cost assessment of PACSystems is being developed by BAZIS. Learning from the actual pilot implementation in the UUH, we realized that general Technology Assessment (TA) also calls for an extrapolation of the medical and organizational effects. After a short excursion into the various approaches towards TA, this paper discusses the (inter)organizational dimensions relevant to the development of the necessary extrapolation models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torok, Aaron
The pi+ Sigma+ and pi+ Xi0 scattering lengths were calculated in mixed-action lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy-baryon chiral perturbation theory with two and three flavors of light quarks was used to perform the chiral extrapolations. To NNLO in the three-flavor chiral expansion, the kaon-baryon processes that were investigated show no signs of convergence. Using the two-flavor chiral expansion for the extrapolation, the pion-hyperon scattering lengths are found to be a(pi+ Sigma+) = -0.197 +/- 0.017 fm and a(pi+ Xi0) = -0.098 +/- 0.017 fm, where the comprehensive error includes statistical and systematic uncertainties.
Chemical Evidence for Evolution of galaxies
NASA Astrophysics Data System (ADS)
Dutil, Yvan
I have compiled the best data published on abundance gradients. From this sample of 29 galaxies, some information can be gained on the mechanism of morphological evolution in disk galaxies. I find that early-type galaxies show a trend in the behavior of extrapolated central abundance versus morphological type identical to that shown by late-type galaxies with strong bars, even in the absence of a bar. On a diagram of extrapolated central abundance versus morphological type, two sequences appear: late-type barred galaxies and early-type galaxies (barred or not) fall on a sequence 0.5 dex below that of normal late-type galaxies. This behavior is consistent with a scenario of morphological evolution of disk galaxies by formation and dissolution of a bar over a period of a few 10^9 yr, in which later-type galaxies (Sd, Sc, Sbc) evolve into earlier-type disk galaxies through transitory SBc and SBb phases.
Milford, Utah FORGE Temperature Contours at 200 m
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe Moore
The individual shapefiles in this dataset delineate estimated temperature contours (20, 40, 60, and 80) at a depth of 200 m in the Milford, Utah FORGE area. Contours were derived from 86 geothermal, gradient, and other wells drilled in the area since the mid-1970s with depths greater than 50 m. Conductive temperature profiles for wells less than 200 m were extrapolated to determine the temperature at the desired depth. Because 11 wells in the eastern section of the study area (in and around the Mineral Mountains) are at higher elevations compared to those closer to the center of the basin, temperature profiles were extrapolated to a constant elevation of 200 m below the 1830 m (6000 ft) a.s.l. datum (approximate elevation of alluvial fans at the base of the Mineral Mountains) to smooth the contours across the ridges and valleys.
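A conductive profile that ends short of the target depth can be extended linearly using the measured gradient. The dataset description does not give the exact procedure used, so the sketch below is a generic linear extrapolation with hypothetical well values:

```python
def extrapolate_temperature(t_bottom, z_bottom, gradient, z_target=200.0):
    """Linearly extend a conductive temperature profile to a target depth.
    t_bottom: temperature (degC) at the deepest measurement z_bottom (m);
    gradient: conductive gradient (degC per m). Hypothetical values only;
    valid only where the profile is conductive (linear in depth)."""
    return t_bottom + gradient * (z_target - z_bottom)

# A hypothetical well logged to 120 m reading 28 degC with a
# 0.08 degC/m conductive gradient, extended to 200 m:
t200 = extrapolate_temperature(28.0, 120.0, 0.08)   # -> 34.4 degC
```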
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742
Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem; Jasche, Jens
2016-01-01
This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations, and the matter density fields within a Gaussian statistical approximation. The second step makes a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology, allowing the reconstruction of both the final density field and the initial conditions at z = 1000. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian-based technique to classify structures.
Aquatic effects assessment: needs and tools.
Marchini, Silvia
2002-01-01
In the assessment of the adverse effects that pollutants can produce on exposed ecosystems, different approaches can be followed depending on the quality and quantity of information available; their advantages and limits are discussed with reference to the aquatic compartment. When experimental data are lacking, a predictive approach can be pursued by making use of validated quantitative structure-activity relationships (QSARs), which provide reliable ecotoxicity estimates only if appropriate models are applied. The experimental approach is central to any environmental hazard assessment procedure, although many uncertainties underlying the extrapolation from a limited set of single-species laboratory data to the complexity of the ecosystem (e.g., the limitations of common summary statistics, the variability of species sensitivity, the need to consider alterations at higher levels of integration) make the task difficult. When adequate toxicity information is available, the statistical extrapolation approach can be used to predict environmentally compatible concentrations.
Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.
2018-03-01
We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
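As a toy illustration of the extrapolation step (the paper's actual linear and non-linear extrapolation methods are more elaborate), polynomial extrapolation of a regulated quantity I(eps) to eps -> 0 can be done with Neville's algorithm:

```python
def extrapolate_to_zero(eps, vals):
    """Neville polynomial extrapolation of I(eps) to eps -> 0: a simple
    stand-in for the linear-extrapolation step described in the abstract.
    eps: regulator values (distinct, nonzero); vals: I at those values."""
    n = len(vals)
    p = list(vals)
    for k in range(1, n):
        for i in range(n - k):
            # Neville recursion evaluated at the target point eps = 0
            p[i] = (eps[i + k] * p[i] - eps[i] * p[i + 1]) / (eps[i + k] - eps[i])
    return p[0]

# Toy regulated quantity I(eps) = 2 + 3*eps + 5*eps**2: the finite
# part at eps = 0 is recovered exactly from three samples.
eps = [0.4, 0.2, 0.1]
vals = [2 + 3 * e + 5 * e ** 2 for e in eps]
limit = extrapolate_to_zero(eps, vals)   # -> 2.0
```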
Core conditions for alpha heating attained in direct-drive inertial confinement fusion.
Bose, A; Woo, K M; Betti, R; Campbell, E M; Mangino, D; Christopherson, A R; McCrory, R L; Nora, R; Regan, S P; Goncharov, V N; Sangster, T C; Forrest, C J; Frenje, J; Gatu Johnson, M; Glebov, V Yu; Knauer, J P; Marshall, F J; Stoeckl, C; Theobald, W
2016-07-01
It is shown that direct-drive implosions on the OMEGA laser have achieved core conditions that would lead to significant alpha heating at incident energies available on the National Ignition Facility (NIF) scale. The extrapolation of the experimental results from OMEGA to NIF energy assumes only that the implosion hydrodynamic efficiency is unchanged at higher energies. This approach is independent of the uncertainties in the physical mechanism that degrade implosions on OMEGA, and relies solely on a volumetric scaling of the experimentally observed core conditions. It is estimated that the current best-performing OMEGA implosion [Regan et al., Phys. Rev. Lett. 117, 025001 (2016)10.1103/PhysRevLett.117.025001] extrapolated to a 1.9 MJ laser driver with the same illumination configuration and laser-target coupling would produce 125 kJ of fusion energy with similar levels of alpha heating observed in current highest performing indirect-drive NIF implosions.
Increased spring freezing vulnerability for alpine shrubs under early snowmelt.
Wheeler, J A; Hoch, G; Cortés, A J; Sedlacek, J; Wipf, S; Rixen, C
2014-05-01
Alpine dwarf shrub communities are phenologically linked with snowmelt timing, so early spring exposure may increase risk of freezing damage during early development, and consequently reduce seasonal growth. We examined whether environmental factors (duration of snow cover, elevation) influenced size and the vulnerability of shrubs to spring freezing along elevational gradients and snow microhabitats by modelling the past frequency of spring freezing events. We sampled biomass and measured the size of Salix herbacea, Vaccinium myrtillus, Vaccinium uliginosum and Loiseleuria procumbens in late spring. Leaves were exposed to freezing temperatures to determine the temperature at which 50% of specimens are killed for each species and sampling site. By linking site snowmelt and temperatures to long-term climate measurements, we extrapolated the frequency of spring freezing events at each elevation, snow microhabitat and per species over 37 years. Snowmelt timing was significantly driven by microhabitat effects, but was independent of elevation. Shrub growth was neither enhanced nor reduced by earlier snowmelt, but decreased with elevation. Freezing resistance was strongly species dependent, and did not differ along the elevation or snowmelt gradient. Microclimate extrapolation suggested that potentially lethal freezing events (in May and June) occurred for three of the four species examined. Freezing events never occurred on late snow beds, and increased in frequency with earlier snowmelt and higher elevation. Extrapolated freezing events showed a slight, non-significant increase over the 37-year record. We suggest that earlier snowmelt does not enhance growth in four dominant alpine shrubs, but increases the risk of lethal spring freezing exposure for less freezing-resistant species.
NASA Astrophysics Data System (ADS)
Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry
2017-06-01
We present a methodology to invert seismic data for a localized area by combining a source-side wavefield injection method with a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This is even more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Moreover, changes in structure during time-lapse surveys are likely to occur in a small area rather than the whole region of the seismic experiment, such as an oil and gas reservoir or a CO2 injection well. We thus propose an approach that images, effectively and quantitatively, localized structure changes far from both the source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we look for the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using a correlation-type representation theorem. We present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate errors in the obtained models, in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate, even in the presence of errors in both the initial models and the observed data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgess-Herbert, Sarah L., E-mail: sarah.burgess@alum.mit.edu; Euling, Susan Y.
A critical challenge for environmental chemical risk assessment is the characterization and reduction of uncertainties introduced when extrapolating inferences from one species to another. The purpose of this article is to explore the challenges, opportunities, and research needs surrounding the issue of how genomics data and computational and systems level approaches can be applied to inform differences in response to environmental chemical exposure across species. We propose that the data, tools, and evolutionary framework of comparative genomics be adapted to inform interspecies differences in chemical mechanisms of action. We compare and contrast existing approaches, from disciplines as varied as evolutionary biology, systems biology, mathematics, and computer science, that can be used, modified, and combined in new ways to discover and characterize interspecies differences in chemical mechanism of action which, in turn, can be explored for application to risk assessment. We consider how genetic, protein, pathway, and network information can be interrogated from an evolutionary biology perspective to effectively characterize variations in biological processes of toxicological relevance among organisms. We conclude that comparative genomics approaches show promise for characterizing interspecies differences in mechanisms of action, and further, for improving our understanding of the uncertainties inherent in extrapolating inferences across species in both ecological and human health risk assessment. To achieve long-term relevance and consistent use in environmental chemical risk assessment, improved bioinformatics tools, computational methods robust to data gaps, and quantitative approaches for conducting extrapolations across species are critically needed. Specific areas ripe for research to address these needs are recommended.
NASA Astrophysics Data System (ADS)
Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming
2015-03-01
The primary objective of this study is to improve the performance of deterministic high-resolution forecasts of severe-storm rainfall by merging a radar-based extrapolation scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model, the Advanced Regional Prediction System (ARPS), for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weighting scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times under 20 min, was comparable to ARPS for 20- to 50-min lead times, and was outperformed by ARPS for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected forecasts was similar to that of the uncorrected ones. Optimally merging the results with the hyperbolic tangent weighting scheme further improved forecast accuracy and stability.
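The abstract does not give the exact form of the hyperbolic tangent weighting, so the following is a hedged sketch of one plausible formulation, in which the weight on the radar extrapolation decays smoothly from 1 to 0 with lead time (the crossover time t0 and width tau are hypothetical tuning constants, not values from the study):

```python
import math

def blend_weight(lead_min, t0=60.0, tau=20.0):
    """Hyperbolic-tangent weight on the extrapolation forecast as a
    function of lead time (minutes). t0 is the 50/50 crossover lead
    time and tau controls transition sharpness; both are hypothetical."""
    return 0.5 * (1.0 - math.tanh((lead_min - t0) / tau))

def merged_forecast(extrap, nwp, lead_min):
    """Blend a radar-extrapolation value with an NWP value: short lead
    times favour the extrapolation, long lead times favour the NWP."""
    w = blend_weight(lead_min)
    return w * extrap + (1.0 - w) * nwp

w10, w110 = blend_weight(10.0), blend_weight(110.0)   # ~0.99 and ~0.007
```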
A generalized sound extrapolation method for turbulent flows
NASA Astrophysics Data System (ADS)
Zhong, Siyang; Zhang, Xin
2018-02-01
Sound extrapolation methods are often used to compute acoustic far-field directivities from near-field flow data in aeroacoustics applications. The results may be erroneous if the volume integrals are neglected (to save computational cost) while non-acoustic fluctuations are collected on the integration surfaces. In this work, we develop a new sound extrapolation method based on an acoustic analogy using Taylor's hypothesis (Taylor 1938 Proc. R. Soc. Lond. A 164, 476-490. (doi:10.1098/rspa.1938.0032)). A convection operator is used to filter out the acoustically inefficient components of the turbulent flow, and an acoustics-dominant indirect variable D_c p' is solved. The sound pressure p' in the far field is computed from D_c p' based on the asymptotic properties of the Green's function. Validation results for benchmark problems with well-defined sources match well with the exact solutions. For aeroacoustics applications: the sound predictions for aerofoil-gust interaction are close to those of an earlier method specially developed to remove the effect of vortical fluctuations (Zhong & Zhang 2017 J. Fluid Mech. 820, 424-450. (doi:10.1017/jfm.2017.219)); for vortex-shedding noise from a cylinder, the off-body predictions of the proposed method match well with the on-body Ffowcs Williams and Hawkings result; and different integration surfaces yield close predictions (of both spectra and far-field directivities) for a co-flowing jet case using an established direct numerical simulation database. The results suggest that the method may be a potential candidate for sound projection in aeroacoustics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelack, W.S.; Borsa, J.; Marquardt, R.R.
1991-09-01
The radiation sensitivity and the toxigenic potential of conidiospores of the fungus Aspergillus alutaceus var. alutaceus were determined after irradiation with 60Co gamma rays and high-energy electrons. Over the pH range of 3.6 to 8.8, the doses required for a 1-log10 reduction in viability, based on the exponential portion of the survival curve, ranged from 0.21 to 0.22 kGy for electron irradiation, with extrapolation numbers (extrapolation of the exponential portion of the survival curve to zero dose) of 1.01 to 1.33, and from 0.24 to 0.27 kGy for gamma irradiation, with extrapolation numbers of 2.26 to 5.13. Nonsterile barley that was inoculated with conidia of the fungus, then irradiated with either electrons or gamma rays and incubated for prolonged periods at 28°C and a moisture content of 25%, produced lower ochratoxin levels than unirradiated controls. In these experiments, inoculation with 10^2 spores per g produced greater radiation-induced enhancement than inoculation with 10^5 spores per g. There was no radiation-induced enhancement when the barley was surface-sterilized by chemical means prior to irradiation. These results are consistent with the hypothesis that a reduction in the competing microbial flora by irradiation is responsible for the enhanced mycotoxin production observed when nonsterile barley is inoculated with the toxigenic fungus A. alutaceus var. alutaceus after irradiation.
Lee, Yung-Shan; Lo, Justin C; Otton, S Victoria; Moore, Margo M; Kennedy, Chris J; Gobas, Frank A P C
2017-07-01
Incorporating biotransformation in bioaccumulation assessments of hydrophobic chemicals in both aquatic and terrestrial organisms in a simple, rapid, and cost-effective manner is urgently needed to improve bioaccumulation assessments of potentially bioaccumulative substances. One approach to estimate whole-animal biotransformation rate constants is to combine in vitro measurements of hepatic biotransformation kinetics with in vitro to in vivo extrapolation (IVIVE) and bioaccumulation modeling. An established IVIVE modeling approach exists for pharmaceuticals (referred to in the present study as IVIVE-Ph) and has recently been adapted for chemical bioaccumulation assessments in fish. The present study proposes and tests an alternative IVIVE-B technique to support bioaccumulation assessment of hydrophobic chemicals with a log octanol-water partition coefficient (K_OW) ≥ 4 in mammals. The IVIVE-B approach requires fewer physiological and physicochemical parameters than the IVIVE-Ph approach and does not involve interconversions between clearance and rate constants in the extrapolation. Using in vitro depletion rates, the results show that the IVIVE-B and IVIVE-Ph models yield similar estimates of rat whole-organism biotransformation rate constants for hypothetical chemicals with log K_OW ≥ 4. The IVIVE-B approach generated in vivo biotransformation rate constants and biomagnification factors (BMFs) for benzo[a]pyrene that are within the range of empirical observations. The proposed IVIVE-B technique may be a useful tool for assessing BMFs of hydrophobic organic chemicals in mammals. Environ Toxicol Chem 2017;36:1934-1946. © 2016 SETAC.
Gorokhovich, Yuri; Reid, Matthew; Mignone, Erica; Voros, Andrew
2003-10-01
Coal mine reclamation projects are very expensive and require coordination of local and federal agencies to identify resources for the most economic way of reclaiming mined land. Location of resources for mine reclamation is a spatial problem. This article presents a methodology that allows the combination of spatial data on resources for the coal mine reclamation and uses GIS analysis to develop a priority list of potential mine reclamation sites within contiguous United States using the method of extrapolation. The extrapolation method in this study was based on the Bark Camp reclamation project. The mine reclamation project at Bark Camp, Pennsylvania, USA, provided an example of the beneficial use of fly ash and dredged material to reclaim 402,600 sq mi of a mine abandoned in the 1980s. Railroads provided transportation of dredged material and fly ash to the site. Therefore, four spatial elements contributed to the reclamation project at Bark Camp: dredged material, abandoned mines, fly ash sources, and railroads. Using spatial distribution of these data in the contiguous United States, it was possible to utilize GIS analysis to prioritize areas where reclamation projects similar to Bark Camp are feasible. GIS analysis identified unique occurrences of all four spatial elements used in the Bark Camp case for each 1 km of the United States territory within 20, 40, 60, 80, and 100 km radii from abandoned mines. The results showed the number of abandoned mines for each state and identified their locations. The federal or state governments can use these results in mine reclamation planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Jie; Li, Hui; Feng, Li
By using a new method of forced-field extrapolation, we study the emerging flux region AR11850 observed by the Interface Region Imaging Spectrograph and the Solar Dynamics Observatory. Our results suggest that the bright points (BPs) in this emerging region exhibit responses in lines formed from the upper photosphere to the transition region, with relatively similar morphologies. They show an oscillation of several minutes in the Atmospheric Imaging Assembly data at 1600 and 1700 Å. The ratio between the BP intensities measured in the 1600 and 1700 Å filtergrams reveals that these BPs are heated differently. Our analysis of the Helioseismic and Magnetic Imager vector magnetic field and the corresponding topology in AR11850 indicates that the BPs are located at the polarity inversion line and that most of them are related to magnetic reconnection or cancelation. The heating of the BPs might differ because of differing magnetic topology: we find that heating due to magnetic cancelation would be stronger than in the case of bald-patch reconnection, and that the plasma density, rather than the magnetic field strength, could play the dominant role in this process. Based on physical conditions in the lower atmosphere, our forced-field extrapolation shows consistent results between the bright arcades visible in the 1400 Å slit-jaw images and the extrapolated field lines that pass through the bald patches. This provides reliable observational evidence for testing the magnetic reconnection mechanism for the BPs and arcades in the emerging flux region, as proposed in simulation studies.
Ozone profile measurements at McMurdo Station Antarctica during the spring of 1987
NASA Technical Reports Server (NTRS)
Hofmann, D. J.; Harder, J. W.; Rosen, J. M.; Hereford, J.; Carpenter, J. R.
1988-01-01
During the Antarctic spring of 1986, 33 ozone soundings were conducted from McMurdo Station. These data indicated that the springtime decrease in ozone occurred rapidly between altitudes of 12 and 20 km. During 1987, these measurements were repeated with 50 soundings between 29 August and 9 November. Digital conversions of standard electrochemical-cell ozonesondes were again employed. The ozonesonde pumps were individually calibrated for flow rate, as the high-altitude performance of these pumps has been in question. While these uncertainties are not large in the region of the ozone hole, they are significant at high altitude and apparently resulted in an underestimate of total ozone of about 7 percent (average) compared to the Total Ozone Mapping Spectrometer (TOMS) in 1986, when the flow rate recommended by the manufacturer was used. At the upper altitudes (approx. 30 km) the flow rate may be overestimated by as much as 15 percent using recommended values (see Harder et al., The UW Digital Ozonesonde: Characteristics and Flow Rate Calibration, poster paper, this workshop). These upper-level values are used in the extrapolation, at constant mixing ratio, required to complete the sounding for total ozone. The first sounding was on 29 August, prior to major ozone depletion, when 274 DU total ozone (25 DU extrapolated) was observed. By early October total ozone had decreased to the 150 DU range; it then increased during mid-October owing to motion of the vortex and returned to a value of 148 DU (29 DU extrapolated) on 27 October.
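The constant-mixing-ratio extrapolation amounts to assuming that the ozone volume mixing ratio above balloon burst equals its top-of-profile value, so the residual ozone column is proportional to the remaining air column. A rough sketch with illustrative numbers (not the actual sounding data or the authors' exact procedure):

```python
def residual_ozone_du(chi, p_top, g=9.81, m_air=4.81e-26):
    """Ozone column (Dobson units) above pressure p_top (Pa), assuming a
    constant volume mixing ratio chi above balloon burst. The air column
    above p_top is p_top/(m_air*g) molecules per m^2 (m_air = mean mass
    of an air molecule, kg); 1 DU = 2.687e20 O3 molecules per m^2."""
    column = chi * p_top / (m_air * g)          # O3 molecules per m^2
    return column / 2.687e20

# Hypothetical burst at ~10 hPa with a 5 ppmv top-of-profile mixing ratio:
du = residual_ozone_du(5e-6, 1000.0)            # a few tens of DU
```

The result (roughly 40 DU here) shows why the extrapolated contribution quoted in the abstract (25-29 DU) is a modest but non-negligible part of the total column, and why the high-altitude pump calibration matters.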
Interstellar Pickup Ion Observations to 38 au
NASA Astrophysics Data System (ADS)
McComas, D. J.; Zirnstein, E. J.; Bzowski, M.; Elliott, H. A.; Randol, B.; Schwadron, N. A.; Sokół, J. M.; Szalay, J. R.; Olkin, C.; Spencer, J.; Stern, A.; Weaver, H.
2017-11-01
We provide the first direct observations of interstellar H+ and He+ pickup ions in the solar wind from 22 to 38 au. We use the Vasyliunas and Siscoe model functional form to quantify the pickup ion distributions, and while the fit parameters generally lie outside their physically expected ranges, this form allows fits that quantify variations in the pickup H+ properties with distance. By ˜20 au, the pickup ions already provide the dominant internal pressure in the solar wind. We determine the radial trends and extrapolate them to the termination shock at ˜90 au, where the pickup H+ to core solar wind density ratio reaches ˜0.14. The pickup H+ temperature and thermal pressure increase from 22 to 38 au, indicating additional heating of the pickup ions. This produces very large extrapolated ratios of pickup H+ to solar wind temperature and pressure, and an extrapolated ratio of the pickup ion pressure to the solar wind dynamic pressure at the termination shock of ˜0.16. Such a large ratio has profound implications for moderating the termination shock and the overall outer heliospheric interaction. We also identify suprathermal tails in the H+ spectra and complex features in the He+ spectra, likely indicating variations in the pickup ion history and processing. Finally, we discover enhancements in both H+ and He+ populations just below their cutoff energies, which may be associated with enhanced local pickup. This study documents the release of these pickup ion data and serves as a citable reference for broad community use and analysis.
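Extrapolating radial trends of this kind is typically done by fitting in log-log space and evaluating the fit at the target distance. The sketch below uses invented data, not the New Horizons SWAP measurements, to show the mechanics.

```python
import numpy as np

# Hedged sketch: power-law extrapolation of a radial trend to the termination
# shock (~90 au). Data points are fabricated for illustration only.
r = np.array([22.0, 26.0, 30.0, 34.0, 38.0])   # heliocentric distance (au)
ratio = 0.01 * (r / 10.0)                      # fake pickup/core density ratio

# Fit log(ratio) = intercept + slope * log(r), i.e. ratio ~ r**slope
slope, intercept = np.polyfit(np.log(r), np.log(ratio), 1)

def extrapolate(r_target):
    return np.exp(intercept + slope * np.log(r_target))

print(round(float(extrapolate(90.0)), 3))  # -> 0.09 for this fake linear-in-r trend
```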
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of the thermodynamic properties of petroleum reservoir fluids is of great interest in many applications, especially in petroleum and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU-expensive compared to equation-of-state approaches. We have recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation-of-state approaches. In this paper, the technique is first briefly reviewed and then numerically investigated for its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, the extrapolation technique is extended to predict second-derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing the generated MCMCs in the canonical ensemble for Lennard-Jones particles. The system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochores, isotherms and paths of changing temperature and density were extrapolated from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.
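The reweighting idea behind regenerating a chain at a neighboring temperature can be sketched in a few lines. This is a generic Boltzmann-reweighting illustration under assumed conditions (canonical NVT ensemble, fabricated energies), not the authors' implementation.

```python
import numpy as np

# Minimal sketch: given configurations with potential energies U sampled at
# inverse temperature beta0, averages at a neighboring beta1 follow from
#   <A>_beta1 = sum_i A_i w_i / sum_i w_i,   w_i = exp(-(beta1 - beta0) * U_i)

rng = np.random.default_rng(0)
U = rng.normal(loc=-500.0, scale=5.0, size=10_000)  # fake potential energies

def reweighted_mean(A, U, beta0, beta1):
    dU = U - U.max()                    # constant shift for numerical stability
    w = np.exp(-(beta1 - beta0) * dU)   # the shift cancels in the ratio below
    return float(np.sum(A * w) / np.sum(w))

# Reweight the mean potential energy itself from beta0 = 1.0 to a colder beta1 = 1.02:
# lower-energy configurations gain weight, so the reweighted mean shifts lower.
print(reweighted_mean(U, U, 1.0, 1.02) < U.mean())  # -> True
```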
Ruiz-Angel, M J; Carda-Broch, S; García-Alvarez-Coque, M C; Berthod, A
2004-03-19
The logarithms of the retention factors (log k) of a group of 14 ionizable diuretics were correlated with the molecular (log P(o/w)) and apparent (log P(app)) octanol-water partition coefficients. The compounds were chromatographed using aqueous-organic (reversed-phase liquid chromatography, RPLC) and micellar-organic mobile phases (micellar liquid chromatography, MLC) with the anionic surfactant sodium dodecyl sulfate (SDS), in the pH range 3-7, on a conventional octadecylsilane column. Acetonitrile was used as the organic modifier in both modes. The quality of the correlations obtained for log P(app) at varying degrees of ionization confirms that this correction is required in the aqueous-organic mixtures. The improvement in correlation is smaller with SDS micellar media because the acid-base equilibria are shifted towards higher pH values for acidic compounds. In micellar chromatography, an electrostatic interaction with charged solutes is added to the hydrophobic forces; consequently, different correlations should be established for neutral and acidic compounds and for basic compounds. Correlations between log k and the isocratic descriptors log k(w) and log k(wm) (extrapolated retention in pure water in the aqueous-organic and micellar-organic systems, respectively) and ψ0 (extrapolated mobile phase composition giving a retention factor k = 1, or twice the dead time), and between these descriptors and log P(app), were also satisfactory, although poorer than those between log k and log P(app) because of the extrapolation. The study shows that, in the particular case of the ionizable diuretics studied, classical RPLC gives better results than MLC with SDS in the retention-hydrophobicity correlations.
Nonlinear dynamics support a linear population code in a retinal target-tracking circuit.
Leonardo, Anthony; Meister, Markus
2013-10-23
A basic task faced by the visual system of many organisms is to accurately track the position of moving prey. The retina is the first stage in the processing of such stimuli; the nature of the transformation here, from photons to spike trains, constrains not only the ultimate fidelity of the tracking signal but also the ease with which it can be extracted by other brain regions. Here we demonstrate that a population of fast-OFF ganglion cells in the salamander retina, whose dynamics are governed by a nonlinear circuit, serves to compute the future position of the target over hundreds of milliseconds. The extrapolated position of the target is not found by stimulus reconstruction but is instead computed by a weighted sum of ganglion cell outputs, the population vector average (PVA). The magnitude of PVA extrapolation varies systematically with target size, speed, and acceleration, such that large targets are tracked most accurately at high speeds, and small targets at low speeds, just as is seen in the motion of real prey. Tracking precision reaches the resolution of single photoreceptors, and the PVA algorithm performs more robustly than several alternative algorithms. If the salamander brain uses the fast-OFF cell circuit for target extrapolation as we suggest, the circuit dynamics should leave a microstructure on the behavior that may be measured in future experiments. Our analysis highlights the utility of simple computations that, while not globally optimal, are efficiently implemented and have close to optimal performance over a limited but ethologically relevant range of stimuli.
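The population vector average described above is just a firing-rate-weighted mean of each cell's preferred position. The sketch below is a generic illustration with invented receptive-field centers and rates, not the salamander data.

```python
import numpy as np

# Hedged illustration of a population vector average (PVA): the decoded
# position is the rate-weighted mean of the cells' preferred positions.

centers = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # preferred positions (a.u.)
rates = np.array([1.0, 4.0, 10.0, 4.0, 1.0])     # spike rates of each cell

def pva(centers, rates):
    return float(np.sum(centers * rates) / np.sum(rates))

print(pva(centers, rates))  # -> 0.0 for this symmetric response profile
```

A moving target skews the response profile, shifting the PVA ahead of the instantaneous position, which is the extrapolation effect the paper quantifies.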
Lithospheric Strength and Stress State: Persistent Challenges and New Directions in Geodynamics
NASA Astrophysics Data System (ADS)
Hirth, G.
2017-12-01
The strength of the lithosphere controls a broad array of geodynamic processes, ranging from earthquakes to the formation and evolution of plate boundaries and the thermal evolution of the planet. A combination of laboratory, geologic and geophysical observations provides several independent constraints on the rheological properties of the lithosphere. However, several persistent challenges remain in the interpretation of these data. Problems related to extrapolation in both scale and time (rate) need to be addressed to apply laboratory data. Nonetheless, good agreement between extrapolation of flow laws and the interpretation of microstructures in viscously deformed lithospheric mantle rocks demonstrates a strong foundation to build on to explore the role of scale. Furthermore, agreement between the depth distribution of earthquakes and predictions based on extrapolation of high-temperature friction relationships provides a basis for understanding links between brittle deformation and stress state. In contrast, problems remain in rationalizing larger-scale geodynamic processes with these same rheological constraints. For example, at face value the lab-derived values for the activation energy for creep are too large to explain convective instabilities at the base of the lithosphere, but too low to explain the persistence of dangling slabs in the upper mantle. In this presentation, I will outline these problems (and successes) and provide thoughts on where new progress can be made to resolve remaining inconsistencies, including discussion of the role of the distribution of volatiles and alteration on the strength of the lithosphere, new data on the influence of pressure on friction and fracture strength, and links between the location of earthquakes, thermal structure, and stress state.
MECHANISTIC DOSIMETRY MODELS OF NANOMATERIAL DEPOSITION IN THE RESPIRATORY TRACT
Accurate health risk assessments of inhalation exposure to nanomaterials will require dosimetry models that account for interspecies differences in dose delivered to the respiratory tract. Mechanistic models offer the advantage to interspecies extrapolation that physicochemica...
A Selective Critique of Animal Experiments in Human-Orientated Biological Research.
ERIC Educational Resources Information Center
Webb, G. P.
1990-01-01
The advantages and justifications for using small animals in human-oriented research are reviewed. Some of the pitfalls of extrapolating animal-derived data to humans are discussed. Several specific problems with animal experimentation are highlighted. (CW)
Travtek Evaluation Modeling Study
DOT National Transportation Integrated Search
1996-03-01
The following report describes a modeling study that was performed to extrapolate, from the TravTek operational test data, a set of system-wide benefits and performance values for a wider-scale deployment of a TravTek-like system. In the first part o...
PROBABILISTIC AQUATIC EXPOSURE ASSESSMENT FOR PESTICIDES 1: FOUNDATIONS
Models that capture underlying mechanisms and processes are necessary for reliable extrapolation of laboratory chemical data to field conditions. For validation, these models require a major revision of the conventional model testing paradigm to better recognize the conflict betw...
DEMO: Sequence Alignment to Predict Across Species Susceptibility
The US Environmental Protection Agency Sequence Alignment to Predict Across Species Susceptibility tool (SeqAPASS; https://seqapass.epa.gov/seqapass/) was developed to comparatively evaluate protein sequence and structural similarity across species as a means to extrapolate toxic...
CAPSULE REPORT: HARD CHROME FUME SUPPRESSANTS & CONTROL TECHNOLOGIES
All existing information which includes the information extrapolated from the Hard Chrome Pollution Prevention Demonstration Project(s) and other sources derived from plating facilities and industry contacts, will be condensed and featured in this document. At least five chromium...
Failure rates for accelerated acceptance testing of silicon transistors
NASA Technical Reports Server (NTRS)
Toye, C. R.
1968-01-01
Extrapolation tables for the control of silicon transistor product reliability have been compiled. The tables are based on a version of the Arrhenius statistical relation and are intended to be used for low- and medium-power silicon transistors.
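The Arrhenius relation underlying such accelerated-testing tables can be illustrated with the standard acceleration-factor formula. The activation energy and temperatures below are typical assumed values, not figures from the report.

```python
import math

# Sketch of the Arrhenius extrapolation behind accelerated life testing:
# failure rate ~ exp(-Ea / kT), so the acceleration factor between a stress
# temperature and a use temperature is
#   AF = exp(Ea/k * (1/T_use - 1/T_stress))

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_k, t_stress_k):
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))

# Hypothetical example: Ea = 0.7 eV, use at 55 C (328 K), stress at 125 C (398 K)
af = acceleration_factor(0.7, 328.0, 398.0)
print(round(af, 1))  # roughly 78x: one stress-hour ~ 78 use-hours
```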
Assessing Uncertainty of Interspecies Correlation Estimation Models for Aromatic Compounds
We developed Interspecies Correlation Estimation (ICE) models for aromatic compounds containing 1 to 4 benzene rings to assess uncertainty in toxicity extrapolation in two data compilation approaches. ICE models are mathematical relationships between surrogate and predicted test ...
Higher Throughput Toxicokinetics to Allow Extrapolation (EPA-Japan Bilateral EDSP meeting)
As part of "Ongoing EDSP Directions & Activities" I will present CSS research on high throughput toxicokinetics, including in vitro data and models to allow rapid determination of the real world doses that may cause endocrine disruption.
INTER-SPECIES STEROID RECEPTOR EXTRAPOLATION STUDIES FOR ENDOCRINE DISRUPTING CHEMICALS
In: Environmental Sciences in the 21st Century: Paradigms, Opportunities, and Challenges: Abstract Book: SETAC 21st Annual Meeting, 12-16 November 2000, Nashville, TN. Society of Environmental Toxicology and Chemistry, Pensacola, FL. Pp. 117-118.
Predicting Maternal Rat and Pup Exposures: How Different Are They?
Risk and safety assessments for early life exposures to environmental chemicals or pharmaceuticals based on cross-species extrapolation would greatly benefit from information on chemical dosimetry in the young. Although relevant toxicity studies involve exposures during multiple ...
Proton radius from electron scattering data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon and Stanford. Methods: We make use of stepwise regression techniques using the F-test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q^2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F-test as well as the Akaike information criterion justifies using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on G_E from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q^2 data on G_E to select functions which extrapolate to high Q^2, we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, G_E(Q^2) = (1 + Q^2/0.66 GeV^2)^-2.
Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm, either from linear extrapolation of the extreme low-Q^2 data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering result and the muonic hydrogen result are consistent. Lastly, it is the atomic hydrogen results that are the outliers.
Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco
2017-09-01
The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity effects may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ | hMT/V5+ | IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with the same timing as in experiment 1. tpTMS of TPJ, hMT/V5+, and IPS affected interceptive timing differently: TPJ stimulation preferentially affected responses to 1-g motion, hMT/V5+ stimulation affected all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses while being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying this area might elaborate memory representations of occluded target motion. Results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing. NEW & NOTEWORTHY Visual extrapolation represents a potential neural solution to afford motor interactions with the environment in the face of missing information.
We investigated the relative contributions by temporoparietal junction (TPJ), hMT/V5+, and intraparietal cortex (IPS), cortical areas potentially involved in these processes. A parallel organization of visual extrapolation processes emerged with respect to the causal nature of the target's motion: TPJ was primarily involved for visual motion congruent with gravity effects, IPS for arbitrary visual motion, whereas hMT/V5+ contributed at earlier processing stages. Copyright © 2017 the American Physiological Society.
Poppe, L.J.; Eliason, A.H.; Hastings, M.E.
2004-01-01
Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next-to-last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.6–0.1 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and nearshore marine sediments, samples from many deeper water marine environments (e.g. rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution.
It is written in Microsoft Visual Basic 6.0 and provides a window to facilitate program execution. The input for the sediment fractions is weight percentages in whole-phi notation (Krumbein, 1934; Inman, 1952), and the program permits the user to select output in either method-of-moments or inclusive-graphics statistics (Fig. 1). Users select options primarily with mouse-click events or through interactive dialogue boxes.
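The method-of-moments statistics that GSSTAT produces can be sketched from binned weight percentages. The phi classes and weights below are invented for illustration; this is not GSSTAT itself, and it omits the tail extrapolation step.

```python
# Minimal sketch of method-of-moments grain-size statistics from weight
# percentages binned in whole-phi classes (assumed example data).

phi_midpoints = [0.5, 1.5, 2.5, 3.5, 4.5]   # phi class midpoints
weight_pct = [10.0, 25.0, 40.0, 20.0, 5.0]  # weight % per class, sums to 100

def moment_stats(mids, freqs):
    total = sum(freqs)
    mean = sum(f * m for f, m in zip(freqs, mids)) / total
    var = sum(f * (m - mean) ** 2 for f, m in zip(freqs, mids)) / total
    return mean, var ** 0.5                 # mean and sorting (std dev), in phi

mean_phi, sorting = moment_stats(phi_midpoints, weight_pct)
print(round(mean_phi, 2), round(sorting, 2))  # -> 2.35 1.01
```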
Proton radius from electron scattering data
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; ...
2016-05-31
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon and Stanford. Methods: We make use of stepwise regression techniques using the F-test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q^2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F-test as well as the Akaike information criterion justifies using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on G_E from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q^2 data on G_E to select functions which extrapolate to high Q^2, we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, G_E(Q^2) = (1 + Q^2/0.66 GeV^2)^-2.
Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm, either from linear extrapolation of the extreme low-Q^2 data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering result and the muonic hydrogen result are consistent. Lastly, it is the atomic hydrogen results that are the outliers.
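The link between the dipole parameter quoted in the abstract and the 0.84 fm radius follows from the standard relation r_p^2 = -6 dG_E/dQ^2 at Q^2 = 0, which for a dipole G_E(Q^2) = (1 + Q^2/Λ^2)^-2 gives r_p^2 = 12/Λ^2. A quick check:

```python
import math

# r_p^2 = -6 dG_E/dQ^2 |_{Q^2=0}; for the dipole form this is 12 / Lambda^2.
HBARC_GEV_FM = 0.19733  # conversion: 1 GeV^-1 = 0.19733 fm

lambda2 = 0.66  # GeV^2, dipole parameter quoted in the abstract
r_p = math.sqrt(12.0 / lambda2) * HBARC_GEV_FM
print(round(r_p, 2))  # -> 0.84 fm, consistent with the muonic hydrogen radius
```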
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motwani, Hitesh V., E-mail: hitesh.motwani@mmk.su.se; Törnqvist, Margareta
2014-12-15
1,3-Butadiene (BD) is a rodent and human carcinogen. In the cancer tests, mice have been much more susceptible than rats with regard to BD-induced carcinogenicity. The species differences depend on the metabolic formation/disappearance of the genotoxic BD epoxy-metabolites, which lead to variations in the respective in vivo doses, i.e. the "area under the concentration-time curve" (AUC). Differences in the AUC of the most genotoxic BD epoxy-metabolite, diepoxybutane (DEB), are considered important with regard to cancer susceptibility. The present work describes: the application of cob(I)alamin for accurate measurements of in vitro enzyme kinetic parameters associated with BD epoxy-metabolites in human, mouse and rat; the use of published data on hemoglobin (Hb) adduct levels of BD epoxides from BD exposure studies on the three species to calculate the corresponding AUCs in blood; and a parallelogram approach for extrapolation of the AUC of DEB based on the in vitro metabolism studies and adduct data from in vivo measurements. The predicted value of the AUC of DEB for humans from the parallelogram approach was 0.078 nM·h per 1 ppm·h of BD exposure, compared to 0.023 nM·h/ppm·h as calculated from Hb adduct levels observed in occupational exposure. The corresponding values in nM·h/ppm·h were 41 vs. 38 for mice and 1.26 vs. 1.37 for rats from the parallelogram approach vs. experimental exposures, respectively, showing good agreement. This quantitative inter-species extrapolation approach will be further explored for the clarification of metabolic rates/pharmacokinetics and the AUC of other genotoxic electrophilic compounds/metabolites, and has a potential to reduce and refine animal experiments. - Highlights: • In vitro metabolism to in vivo dose extrapolation of butadiene metabolites was proposed. • A parallelogram approach was introduced to estimate dose (AUC) in humans and rodents.
• The AUC of diepoxybutane predicted in humans was 0.078 nM·h/ppm·h butadiene exposure. • The presented approach has a potential to reduce, refine and replace animal experiments.
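The parallelogram logic can be sketched as a simple scaling: the human in vivo dose is estimated from a human in vitro value scaled by the rodent in vivo / in vitro ratio. The numbers below are illustrative placeholders, not the paper's measured values, and the exact scaling the authors used is an assumption here.

```python
# Hedged sketch of a parallelogram extrapolation: human in vivo AUC estimated
# by scaling the human in vitro estimate with the rodent in vivo / in vitro ratio.

def parallelogram_auc(human_in_vitro, rodent_in_vitro, rodent_in_vivo):
    return human_in_vitro * (rodent_in_vivo / rodent_in_vitro)

# e.g. if the rodent in vivo AUC is 4x its in vitro-based estimate, apply the
# same factor to the human in vitro-based estimate (all numbers hypothetical)
print(parallelogram_auc(0.02, 1.0, 4.0))  # -> 0.08
```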
Dormitory Solar-Energy-System Economics
NASA Technical Reports Server (NTRS)
1982-01-01
102-page report analyzes the long-term economic performance of a prepackaged solar energy assembly system at a dormitory installation and extrapolates the results to four additional sites around the U.S. The method of evaluation is the f-chart procedure for solar-heating and domestic hot-water systems.
MOMENTARY BRAIN CONCENTRATION OF TRICHLOROETHYLENE PREDICTS THE EFFECTS ON RAT VISUAL FUNCTION.
This manuscript demonstrates that the level of neurological impairment following acute reversible exposure to trichloroethylene, a volatile organic compound, is more accurately described when extrapolations across exposure conditions are based on the target-tissue (brain) dose level, th...
High-Throughput Screening in ToxCast/Tox21 (FutureToxII)
Addressing safety aspects of drugs and environmental chemicals relies extensively on animal testing. However, the quantity of chemicals needing assessment and challenges of species extrapolation require development of alternative approaches. The EPA’s ToxCast program addresses th...
Titan is to Earth's Hydrological Cycle what Venus is to its Greenhouse Effect
NASA Astrophysics Data System (ADS)
Lorenz, R. D.
2012-06-01
Titan serves as an extreme extrapolation of Earth's possible present trend toward more violent rainstorms interspersed by long droughts, much as Venus has acted as a bogeyman to illustrate the perils of enhanced greenhouse warming.
A quantitative comparison of leading-edge vortices in incompressible and supersonic flows
DOT National Transportation Integrated Search
2002-01-14
When requiring quantitative data on delta-wing vortices for design purposes, low-speed results have often been extrapolated to configurations intended for supersonic operation. This practice stems from a lack of database owing to difficulties that pl...
Biologically-based pharmacokinetic models are being increasingly used in the risk assessment of environmental chemicals. These models are based on biological, mathematical, statistical and engineering principles. Their potential uses in risk assessment include extrapolation betwe...
Adverse Outcome Pathways and Extrapolation Tools to Advance the Three Rs in Ecotoxicology
Adverse outcome pathways (AOPs) are conceptual frameworks for identifying and organizing predictive and causal linkages between cellular-level responses and endpoints conventionally considered in ecological risk assessment (e.g., effects on survival, growth/development, and repro...
This study focuses on the application of electrochemical approaches to drinking water copper corrosion problems. Applying electrochemical approaches combined with copper solubility measurements, and solid surface analysis approaches were discussed. Tafel extrapolation and Electro...
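Tafel extrapolation, named in the truncated record above, estimates the corrosion current by extrapolating the linear (Tafel) regions of log|i| vs. overpotential back to the corrosion potential. The sketch below uses synthetic data with assumed Tafel slopes; it is a generic illustration, not the study's analysis.

```python
import numpy as np

# Illustrative Tafel extrapolation: fit the linear regions of log10|i| vs.
# overpotential eta and read the corrosion current at eta = 0 (E_corr).

beta_a, beta_c = 0.06, 0.12  # assumed anodic/cathodic Tafel slopes, V/decade
i_corr_true = 1e-6           # A/cm^2, used only to synthesize the data

eta_a = np.linspace(0.05, 0.15, 20)    # anodic branch, well away from E_corr
eta_c = np.linspace(-0.15, -0.05, 20)  # cathodic branch
log_i_a = np.log10(i_corr_true) + eta_a / beta_a
log_i_c = np.log10(i_corr_true) - eta_c / beta_c

# Extrapolate each branch back to eta = 0; the intercepts give log10(i_corr)
a_fit = np.polyfit(eta_a, log_i_a, 1)
c_fit = np.polyfit(eta_c, log_i_c, 1)
i_corr_est = 10 ** ((a_fit[1] + c_fit[1]) / 2.0)
print(f"{i_corr_est:.2e}")  # -> 1.00e-06, recovering the synthesized value
```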
Developing integral projection models for aquatic ecotoxicology
Extrapolating laboratory measured effects of chemicals to ecologically relevant scales is a fundamental challenge in ecotoxicology. In addition to influencing survival in the wild (e.g., over-winter survival) size has been shown to control onset of reproduction for the toxicologi...
An assessment of the impact of radiofrequency interference on microwave SETI searches
NASA Technical Reports Server (NTRS)
Klein, M. J.; Gulkis, S.; Olsen, E. T.; Armstrong, E. F.; Jackson, E. B.
1992-01-01
Investigations are carried out at JPL on radio-frequency interference at very low levels (-130 to -180 dBm) in various bands, especially the 1-2 GHz band. Extrapolation of interference levels in the years to come is attempted.