Sample records for Tafel extrapolation method

  1. Effects of scan rate on the corrosion behavior of SS 304 stainless steel in the nanofluid measured by Tafel polarization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajitno, Djoko Hadi

    The effect of scan rate on the Tafel polarization curves used to determine corrosion rate is investigated. Tafel polarization curves were obtained at different scan rates for Stainless Steel 304 in a nanofluid containing 0.01 gpl ZrO2 nanoparticles (adm + 0.01 gpl ZrO2). Examination of the corrosion potential shows that the stainless steel was actively corroding in the 0.01 gpl ZrO2 nanofluid. The cathodic Tafel slope of the stainless steel remained relatively unchanged after polarization testing at the different scan rates, while the anodic Tafel slope increased with scan rate. The Tafel polarization results show that the corrosion rate of the stainless steel in the nanofluid increases with increasing scan rate. X-ray diffraction of the stainless steel after Tafel polarization shows that γ-Fe is the major phase at the alloy surface.
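
    A compact numerical illustration of the Tafel extrapolation used in records like this one is sketched below. The polarization data are synthetic (generated from an assumed Butler-Volmer/Tafel relation), and the corrosion current, corrosion potential, and Tafel slopes are hypothetical values chosen only to show how the anodic and cathodic Tafel lines are fitted and intersected; this is not an analysis of the study's data.

    ```python
    # Minimal sketch of Tafel extrapolation on synthetic polarization data.
    # The corrosion current, corrosion potential, and Tafel slopes are hypothetical.
    import numpy as np

    icorr_true = 1e-6       # A/cm^2, assumed corrosion current density
    Ecorr = -0.45           # V, assumed corrosion potential
    ba, bc = 0.060, 0.120   # V/decade, assumed anodic/cathodic Tafel slopes

    E = np.linspace(Ecorr - 0.25, Ecorr + 0.25, 501)          # potential scan
    eta = E - Ecorr                                           # overpotential
    i_net = icorr_true * (10**(eta / ba) - 10**(-eta / bc))   # Tafel (Butler-Volmer) form

    # Fit the linear Tafel regions (|eta| roughly 60-200 mV) in log10|i| vs. E.
    an = (eta > 0.06) & (eta < 0.20)
    ca = (eta < -0.06) & (eta > -0.20)
    pa = np.polyfit(E[an], np.log10(np.abs(i_net[an])), 1)    # anodic branch
    pc = np.polyfit(E[ca], np.log10(np.abs(i_net[ca])), 1)    # cathodic branch

    # The intersection of the two extrapolated lines estimates Ecorr and log10(icorr).
    E_int = (pc[1] - pa[1]) / (pa[0] - pc[0])
    icorr_est = 10**(pa[0] * E_int + pa[1])
    print(f"estimated Ecorr = {E_int:.3f} V, icorr = {icorr_est:.2e} A/cm^2")
    ```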

  2. Extrapolation methods for vector sequences

    NASA Technical Reports Server (NTRS)

    Smith, David A.; Ford, William F.; Sidi, Avram

    1987-01-01

    This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
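
    Of the five methods compared in this paper, minimal polynomial extrapolation (MPE) is perhaps the easiest to sketch. The NumPy fragment below follows the usual least-squares formulation of MPE (differences of iterates, least-squares coefficients, normalized weights); the linear fixed-point iteration, matrix size, and number of iterates are arbitrary choices for demonstration rather than anything taken from the paper.

    ```python
    # Sketch of minimal polynomial extrapolation (MPE) for a vector fixed-point
    # iteration x_{j+1} = g(x_j).  The iteration and sizes are illustrative only.
    import numpy as np

    def mpe(X):
        """X holds iterates x_0 ... x_{k+1} as columns; returns the MPE extrapolation."""
        U = np.diff(X, axis=1)                  # u_j = x_{j+1} - x_j, columns u_0 .. u_k
        # Least-squares solve  U[:, :-1] c ~ -u_k,  then set c_k = 1.
        c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
        c = np.append(c, 1.0)
        gamma = c / c.sum()                     # normalized weights
        return X[:, :-1] @ gamma                # s = sum_j gamma_j x_j

    # Example: convergent linear iteration x_{j+1} = A x_j + b.
    rng = np.random.default_rng(0)
    A = 0.5 * rng.standard_normal((20, 20)) / np.sqrt(20)
    b = rng.standard_normal(20)
    x, iterates = np.zeros(20), [np.zeros(20)]
    for _ in range(8):
        x = A @ x + b
        iterates.append(x)

    s = mpe(np.column_stack(iterates))
    x_exact = np.linalg.solve(np.eye(20) - A, b)
    print("plain iterate error:", np.linalg.norm(iterates[-1] - x_exact))
    print("MPE error:          ", np.linalg.norm(s - x_exact))
    ```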

  3. Evaluation of Electrochemical Methods for Electrolyte Characterization

    NASA Technical Reports Server (NTRS)

    Heidersbach, Robert H.

    2001-01-01

    This report documents summer research efforts in an attempt to develop an electrochemical method of characterizing electrolytes. The ultimate objective of the characterization would be to determine the composition and corrosivity of Martian soil. Results are presented using potentiodynamic scans, Tafel extrapolations, and resistivity tests in a variety of water-based electrolytes.

  4. Corrosion evaluation of heat recovery steam generator superheater tube in two methods of testing: Tafel polarization and electrochemical impedance spectroscopy (EIS)

    NASA Astrophysics Data System (ADS)

    Santoso, Rio Pudjidarma; Riastuti, Rini

    2018-05-01

    The purpose of this research is to evaluate the corrosion process occurring on the water side of a Heat Recovery Steam Generator (HRSG) superheater tube. The tube material was 13CrMo44, divided into three types of specimen: new tube, used tube (with an oxide layer on the surface), and cleaned used tube (without an oxide layer on the surface). Corrosion parameters were evaluated in deaerated ultra-high-purity water (boiler feed water) using two test methods: Tafel polarization and Electrochemical Impedance Spectroscopy (EIS). Tafel polarization is valuable for its ability to give the corrosion current and corrosion rate explicitly, whereas EIS is valuable for its ability to explain the corrosion mechanism at the metal interface in detail. Both methods showed that increasing the electrolyte temperature from 25°C to 55°C increases the corrosion rate by decreasing the polarization resistance, thinning the passive film and enlarging the area available for the cathodic reduction reaction. The magnetite oxide scale on the surface of the used-tube specimen is protective and reduces the corrosion rate; removing this oxide raises the corrosion rate back to that of the new tube.

  5. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computational cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.

  6. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
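
    For context, the traditional mono-exponential back-extrapolation that the proposed method is compared against can be sketched in a few lines: fit ln C(t) over the early sampling window, back-extrapolate to the injection time, and divide the dose by the extrapolated concentration. The dose, sampling times, and concentrations below are invented for illustration; the authors' optimal back-extrapolation itself is not reproduced here.

    ```python
    # Sketch of the traditional mono-exponential back-extrapolation for plasma volume
    # from indocyanine green (ICG) dilution.  All numbers are illustrative only.
    import numpy as np

    dose_mg = 25.0                                        # assumed injected ICG dose
    t_min = np.array([2.0, 3.0, 4.0, 5.0, 6.0])           # sampling times, minutes
    conc_mg_per_L = np.array([8.9, 8.1, 7.4, 6.8, 6.2])   # measured plasma ICG concentration

    # Fit ln(C) = ln(C0) - k*t over the early sampling window ...
    slope, intercept = np.polyfit(t_min, np.log(conc_mg_per_L), 1)

    # ... and back-extrapolate to t = 0 to estimate the initial mixed concentration.
    C0 = np.exp(intercept)                                # mg/L at the injection time
    plasma_volume_L = dose_mg / C0
    print(f"C0 = {C0:.2f} mg/L, estimated plasma volume = {plasma_volume_L:.2f} L")
    ```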

  7. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018

  8. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    NASA Astrophysics Data System (ADS)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time intensive. Extrapolation methods, which are derived from Taylor expansion, can provide approximate results with high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force of phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrates the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second order extrapolation approach for the multiphase-field model.

  9. Determination of Tafel Constants in Nonlinear Polarization Curves.

    DTIC Science & Technology

    1987-12-01

    The presence of non-linear polarization curves resulted in difficulty in determining the Tafel constants from such plots. A FORTRAN-based program involving numerical differentiation techniques was... (Master of Science thesis in Mechanical Engineering, Naval Postgraduate School, December 1987; only fragments of the abstract survive in this record.)

  10. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. By introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound on the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.

  11. extrap: Software to assist the selection of extrapolation methods for moving-boat ADCP streamflow measurements

    NASA Astrophysics Data System (ADS)

    Mueller, David S.

    2013-04-01

    Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
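
    The power velocity distribution law mentioned here underlies the top/bottom extrapolation. The sketch below is a schematic version only (it is not the extrap source code): it assumes a 1/6 power-law exponent, made-up bin depths and velocities, and a unit-width profile, and integrates the fitted profile over the unmeasured ranges.

    ```python
    # Schematic power-law extrapolation of the unmeasured top/bottom parts of an
    # ADCP velocity profile (unit-width discharge).  Not the extrap source code;
    # depths, velocities, and the 1/6 exponent are illustrative assumptions.
    import numpy as np

    depth = 10.0                                                     # water depth, m
    z = np.array([1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5])           # bin heights above bed, m
    u = np.array([0.62, 0.70, 0.75, 0.79, 0.82, 0.85, 0.87, 0.89])   # bin velocities, m/s

    m = 1.0 / 6.0                               # power-law exponent (commonly assumed default)
    # Least-squares fit of u(z) = a * z**m to the measured bins.
    a = np.sum(u * z**m) / np.sum(z**(2 * m))

    z_bottom, z_top = 1.0, 9.0                  # approximate edges of the measured portion
    q_bottom = a * z_bottom**(m + 1) / (m + 1)                   # unmeasured bottom: 0 .. z_bottom
    q_top = a * (depth**(m + 1) - z_top**(m + 1)) / (m + 1)      # unmeasured top: z_top .. depth
    q_meas = np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(z))         # measured part, trapezoid rule
    print(f"unit-width discharge: bottom {q_bottom:.3f}, measured {q_meas:.3f}, top {q_top:.3f} m^2/s")
    ```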

  12. Extrapolation of sonic boom pressure signatures by the waveform parameter method

    NASA Technical Reports Server (NTRS)

    Thomas, C. L.

    1972-01-01

    The waveform parameter method of sonic boom extrapolation is derived and shown to be equivalent to the F-function method. A computer program based on the waveform parameter method is presented and discussed, with a sample case demonstrating program input and output.

  13. A generalized sound extrapolation method for turbulent flows

    NASA Astrophysics Data System (ADS)

    Zhong, Siyang; Zhang, Xin

    2018-02-01

    Sound extrapolation methods are often used to compute acoustic far-field directivities using near-field flow data in aeroacoustics applications. The results may be erroneous if the volume integrals are neglected (to save computational cost) while non-acoustic fluctuations are collected on the integration surfaces. In this work, we develop a new sound extrapolation method based on an acoustic analogy using Taylor's hypothesis (Taylor 1938 Proc. R. Soc. Lond. A 164, 476-490. (doi:10.1098/rspa.1938.0032)). Typically, a convection operator is used to filter out the acoustically inefficient components in the turbulent flows, and an acoustically dominant indirect variable Dc p' is solved for. The sound pressure p' at the far field is computed from Dc p' based on the asymptotic properties of the Green's function. Validation results for benchmark problems with well-defined sources match well with the exact solutions. For aeroacoustics applications, the sound predictions for aerofoil-gust interaction are close to those of an earlier method specially developed to remove the effect of vortical fluctuations (Zhong & Zhang 2017 J. Fluid Mech. 820, 424-450. (doi:10.1017/jfm.2017.219)); for the case of vortex shedding noise from a cylinder, the off-body predictions by the proposed method match well with the on-body Ffowcs Williams and Hawkings result; and different integration surfaces yield close predictions (of both spectra and far-field directivities) for a co-flowing jet case using an established direct numerical simulation database. The results suggest that the method may be a potential candidate for sound projection in aeroacoustics applications.

  14. extrap: Software to assist the selection of extrapolation methods for moving-boat ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.

    2013-01-01

    Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers’ software.

  15. Extrapolation techniques applied to matrix methods in neutron diffusion problems

    NASA Technical Reports Server (NTRS)

    Mccready, Robert R

    1956-01-01

    A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.

  16. Application of the backward extrapolation method to pulsed neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, Alberto; Gohar, Yousry

    Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method, which can be applied not only to a continuous (e.g., californium) external neutron source but also to a pulsed external neutron source (e.g., from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method makes it possible to obtain both the dead-time value and the real detector counts from the measured detector counts.

  17. Application of the backward extrapolation method to pulsed neutron sources

    DOE PAGES

    Talamo, Alberto; Gohar, Yousry

    2017-09-23

    Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method, which can be applied not only to a continuous (e.g., californium) external neutron source but also to a pulsed external neutron source (e.g., from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method makes it possible to obtain both the dead-time value and the real detector counts from the measured detector counts.

  18. MMOC- MODIFIED METHOD OF CHARACTERISTICS SONIC BOOM EXTRAPOLATION

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1994-01-01

    The Modified Method of Characteristics Sonic Boom Extrapolation program (MMOC) is a sonic boom propagation method which includes shock coalescence and incorporates the effects of asymmetry due to volume and lift. MMOC numerically integrates nonlinear equations from data at a finite distance from an airplane configuration at flight altitude to yield the sonic boom pressure signature at ground level. MMOC accounts for variations in entropy, enthalpy, and gravity for nonlinear effects near the aircraft, allowing extrapolation to begin nearer the body than in previous methods. This feature permits wind tunnel sonic boom models of up to three feet in length, enabling more detailed, realistic models than the previous six-inch sizes. It has been shown that elongated airplanes flying at high altitude and high Mach numbers can produce an acceptably low sonic boom. Shock coalescence in MMOC includes three-dimensional effects. The method is based on an axisymmetric solution with asymmetric effects determined by circumferential derivatives of the standard shock equations. Bow shocks and embedded shocks can be included in the near-field. The method of characteristics approach in MMOC allows large computational steps in the radial direction without loss of accuracy. MMOC is a propagation method rather than a predictive program. Thus input data (the flow field on a cylindrical surface at approximately one body length from the axis) must be supplied from calculations or experimental results. The MMOC package contains a uniform atmosphere pressure field program and interpolation routines for computing the required flow field data. Other user supplied input to MMOC includes Mach number, flow angles, and temperature. MMOC output tabulates locations of bow shocks and embedded shocks. When the calculations reach ground level, the overpressure and distance are printed, allowing the user to plot the pressure signature. MMOC is written in FORTRAN IV for batch execution and has been

  19. A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems

    NASA Astrophysics Data System (ADS)

    Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong

    2017-09-01

    In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two-level of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. And the resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H3-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
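
    The Richardson extrapolation that the authors combine with quadratic FE interpolation follows the familiar error-cancellation pattern sketched below. The example uses a generic second-order approximation (a central difference) rather than the paper's finite-element solver, so the formula (4*A(h/2) - A(h))/3 and the test function are illustrative only.

    ```python
    # Generic Richardson extrapolation for a second-order approximation A(h):
    # combining A(h) and A(h/2) cancels the leading O(h^2) error term,
    #   A_rich = (4*A(h/2) - A(h)) / 3  =  A_exact + O(h^4).
    # The central-difference test below is illustrative, not the paper's FE solver.
    import numpy as np

    def central_diff(f, x, h):
        return (f(x + h) - f(x - h)) / (2 * h)

    x0, h = 1.0, 0.1
    d_h = central_diff(np.sin, x0, h)
    d_h2 = central_diff(np.sin, x0, h / 2)
    d_rich = (4 * d_h2 - d_h) / 3

    exact = np.cos(x0)
    print(abs(d_h - exact), abs(d_h2 - exact), abs(d_rich - exact))
    ```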

  20. Heterogeneous Molecular Catalysis of Electrochemical Reactions: Volcano Plots and Catalytic Tafel Plots.

    PubMed

    Costentin, Cyrille; Savéant, Jean-Michel

    2017-06-14

    We analyze here, in the framework of heterogeneous molecular catalysis, the reasons for the occurrence or nonoccurrence of volcanoes upon plotting the kinetics of the catalytic reaction versus the stabilization free energy of the primary intermediate of the catalytic process. As in the case of homogeneous molecular catalysis or catalysis by surface-active metallic sites, a strong motivation of such studies relates to modern energy challenges, particularly those involving small molecules, such as water, hydrogen, oxygen, proton, and carbon dioxide. This motivation is particularly pertinent for what concerns heterogeneous molecular catalysis, since it is commonly preferred to homogeneous molecular catalysis by the same molecules if only for chemical separation purposes and electrolytic cell architecture. As with the two other catalysis modes, the main drawback of the volcano plot approach is the basic assumption that the kinetic responses depend on a single descriptor, viz., the stabilization free energy of the primary intermediate. More comprehensive approaches, investigating the responses to the maximal number of experimental factors, and conveniently expressed as catalytic Tafel plots, should clearly be preferred. This is more so in the case of heterogeneous molecular catalysis in that additional transport factors in the supporting film may additionally affect the current-potential responses. This is attested by the noteworthy presence of maxima in catalytic Tafel plots as well as their dependence upon the cyclic voltammetric scan rate.

  1. EXTRAPOLATION METHOD FOR MAXIMAL AND 24-H AVERAGE LTE TDD EXPOSURE ESTIMATION.

    PubMed

    Franci, D; Grillo, E; Pavoncello, S; Coltellacci, S; Buccella, C; Aureli, T

    2018-01-01

    The Long-Term Evolution (LTE) system represents the evolution of the Universal Mobile Telecommunication System technology. This technology introduces two duplex modes: Frequency Division Duplex and Time Division Duplex (TDD). Although LTE TDD has seen limited expansion in European countries since the debut of the LTE technology, renewed commercial interest in LTE TDD has recently been shown. Therefore, the development of extrapolation procedures optimised for TDD systems becomes crucial, especially for the regulatory authorities. This article presents an extrapolation method aimed at assessing the exposure to LTE TDD sources, based on the detection of the Cell-Specific Reference Signal power level. The method introduces a βTDD parameter intended to quantify the fraction of the LTE TDD frame duration reserved for downlink transmission. The method has been validated by experimental measurements performed on signals generated by both a vector signal generator and a test Base Transceiver Station installed at the Linkem S.p.A facility in Rome.

  2. Extrapolating bound state data of anions into the metastable domain

    NASA Astrophysics Data System (ADS)

    Feuerbacher, Sven; Sommerfeld, Thomas; Cederbaum, Lorenz S.

    2004-10-01

    Computing energies of electronically metastable resonance states is still a great challenge. Both scattering techniques and quantum-chemistry-based L2 methods are very time consuming. Here we investigate two more economical extrapolation methods. Extrapolating bound-state energies into the metastable region using increased nuclear charges was suggested almost 20 years ago. We critically evaluate this attractive technique employing our complex absorbing potential/Green's function method, which allows us to follow a bound state into the continuum. Using the 2Πg resonance of N2- and the 2Πu resonance of CO2- as examples, we find that the extrapolation works surprisingly well. The second extrapolation method involves increasing bond lengths until the sought resonance becomes stable. The keystone is to extrapolate the attachment energy and not the total energy of the system. This method has the great advantage that the whole potential energy curve is obtained with quite good accuracy by the extrapolation. Limitations of the two techniques are discussed.

  3. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.

  4. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.

  5. A Method for Extrapolation of Atmospheric Soundings

    DTIC Science & Technology

    2014-05-01

    (Only report front matter is reproduced in this record: a contents entry for WRF inter-comparisons and figure captions describing profiles that compare the 00 UTC 14 January 2013 GJT radiosonde with 1-km and 3-km WRF data extended from the model surface to the radiosonde surface using the standard extrapolation.)

  6. New method of extrapolation of the resistance of a model planing boat to full size

    NASA Technical Reports Server (NTRS)

    Sottorf, W

    1942-01-01

    The previously employed method of extrapolating the total resistance to full size with λ³ (λ = model scale), thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is analytically obtained. The friction coefficients are read from Prandtl's curve for turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.

  7. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  8. Interspecies Extrapolation

    EPA Science Inventory

    Interspecies extrapolation encompasses two related but distinct topic areas that are germane to quantitative extrapolation and hence computational toxicology-dose scaling and parameter scaling. Dose scaling is the process of converting a dose determined in an experimental animal ...
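
    Dose scaling of the kind described here is often carried out allometrically on body weight. The fragment below uses the common body-weight-to-the-3/4-power convention and invented body weights and dose; it is a generic illustration of the arithmetic, not an EPA-endorsed calculation.

    ```python
    # Generic allometric dose scaling (body-weight^(3/4) convention) from an
    # experimental animal to humans.  Body weights and the rat dose are illustrative.
    bw_rat, bw_human = 0.25, 70.0    # kg
    dose_rat = 10.0                  # mg/kg/day observed in the animal study

    # Scaling total dose by BW^(3/4) is equivalent to scaling a mg/kg dose by BW^(-1/4).
    human_equiv_dose = dose_rat * (bw_rat / bw_human) ** 0.25
    print(f"human-equivalent dose ~ {human_equiv_dose:.2f} mg/kg/day")
    ```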

  9. Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method is a robust high-resolution method for spectrum estimation. Based on the theory of SAR imaging, the signal model of SAR imagery is analyzed and shown to be amenable to data extrapolation methods for improving the resolution of SAR images. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated data and actual measured data.

  10. Dead time corrections using the backward extrapolation method

    NASA Astrophysics Data System (ADS)

    Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.

    2017-05-01

    Dead-time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead-time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead-time corrections on the sampled counts per second (CPS), based on backward extrapolation of the losses created by artificially imposing increasingly long dead times on the data, back to zero added dead time. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero-power reactor, demonstrating high accuracy (of 1-2%) in restoring the corrected count rate.
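
    The backward-extrapolation idea can be prototyped directly on a list of event time stamps: impose an artificial non-paralyzing dead time, record the surviving count rate, repeat for several dead-time values, and extrapolate the trend back to zero added dead time. The sketch below uses synthetic Poisson arrivals and a plain linear extrapolation; it illustrates the mechanics only and is not the analysis applied to the MINERVE measurements.

    ```python
    # Sketch of dead-time correction by backward extrapolation: apply increasingly
    # long artificial (non-paralyzing) dead times to event time stamps and extrapolate
    # the surviving count rate back to zero added dead time.  Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(1)
    true_rate, T = 2.0e4, 5.0                              # counts/s and measurement time (assumed)
    t = np.cumsum(rng.exponential(1.0 / true_rate, int(true_rate * T * 1.2)))
    t = t[t < T]                                           # Poisson event times in [0, T)

    def rate_with_dead_time(times, tau):
        """Count rate after keeping only events >= tau after the last accepted event."""
        kept, last = 0, -np.inf
        for ti in times:
            if ti - last >= tau:
                kept += 1
                last = ti
        return kept / T

    taus = np.array([1e-6, 2e-6, 3e-6, 4e-6, 5e-6])        # imposed dead times, s
    rates = np.array([rate_with_dead_time(t, tau) for tau in taus])

    # Extrapolate the trend back to tau = 0 (a plain linear fit, for brevity).
    slope, intercept = np.polyfit(taus, rates, 1)
    print(f"extrapolated rate at tau=0: {intercept:.4e} cps (true rate {true_rate:.4e} cps)")
    ```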

  11. Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.

    PubMed

    Le, Guigao; Oulaid, Othmane; Zhang, Junfeng

    2015-03-01

    In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including a simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating steady and unsteady convection-diffusion systems with a flat interface and a steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with the respective analytical solutions. A more general system with an unsteady convection-diffusion process and a curved interface, i.e., the cooling of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that a cylinder with a larger heat capacity can release more heat energy into the fluid and cools down more slowly, while enhanced heat conduction inside the cylinder facilitates the cooling of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.

  12. Partition resampling and extrapolation averaging: approximation methods for quantifying gene expression in large numbers of short oligonucleotide arrays.

    PubMed

    Goldstein, Darlene R

    2006-10-01

    Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.

  13. Short-range stabilizing potential for computing energies and lifetimes of temporary anions with extrapolation methods.

    PubMed

    Sommerfeld, Thomas; Ehara, Masahiro

    2015-01-21

    The energy of a temporary anion can be computed by adding a stabilizing potential to the molecular Hamiltonian, increasing the stabilization until the temporary state is turned into a bound state, and then further increasing the stabilization until enough bound-state energies have been collected so that these can be extrapolated back to vanishing stabilization. The lifetime can be obtained from the same data, but only if the extrapolation is done through analytic continuation of the momentum as a function of the square root of a shifted stabilizing parameter. This method is known as analytic continuation of the coupling constant, and it requires--at least in principle--that the bound-state input data are computed with a short-range stabilizing potential. In the context of molecules and ab initio packages, long-range Coulomb stabilizing potentials are, however, far more convenient and have been used in the past with some success, although the error introduced by the long-range nature of the stabilizing potential remains unknown. Here, we introduce a soft-Voronoi box potential that can serve as a short-range stabilizing potential. The difference between a Coulomb and the new stabilization is analyzed in detail for a one-dimensional model system as well as for the 2Πu resonance of CO2-, and in both cases, the extrapolation results are compared to independently computed resonance parameters, from complex scaling for the model, and from complex absorbing potential calculations for CO2-. It is important to emphasize that for both the model and for CO2-, all three sets of results have, respectively, been obtained with the same electronic structure method and basis set so that the theoretical description of the continuum can be directly compared. The new soft-Voronoi-box-based extrapolation is then used to study the influence of the size of the diffuse and valence basis sets on the computed resonance parameters.

  14. In situ LTE exposure of the general public: Characterization and extrapolation.

    PubMed

    Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc

    2012-09-01

    In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals is lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields.

  15. Linear prediction data extrapolation superresolution radar imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing

    1993-05-01

    Range resolution and cross-range resolution of range-doppler imaging radars are related to the effective bandwidth of the transmitted signal and to the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, the linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by conventional FFT range-doppler processing and improving the resolution capability of range-doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating the observed data beyond the observation window by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. Live data from a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and from a flying Boeing-727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT yields either higher resolution for the same effective transmitted-signal bandwidth and total rotation angle of the object, or images of equal quality from a smaller bandwidth and total angle.
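
    The LPDEDFT idea (fit a linear-prediction model to the observed samples, extrapolate beyond the observation window, then take the DFT) can be sketched as follows. The two-tone test signal, prediction order, and extrapolation length are arbitrary choices for illustration, not parameters from the paper.

    ```python
    # Sketch of linear-prediction data extrapolation followed by a DFT (the LPDEDFT
    # idea).  The two-tone signal, prediction order, and lengths are illustrative.
    import numpy as np

    N, p, N_ext = 64, 12, 256                      # observed samples, LP order, extended length
    n = np.arange(N)
    x = np.exp(2j * np.pi * 0.200 * n) + np.exp(2j * np.pi * 0.215 * n)   # two closely spaced tones

    # Forward linear prediction: x[k] ~ sum_i a[i] * x[k-1-i], coefficients by least squares.
    A = np.array([x[k - p:k][::-1] for k in range(p, N)])
    a, *_ = np.linalg.lstsq(A, x[p:N], rcond=None)

    # Extrapolate beyond the observation window using the fitted predictor.
    y = list(x)
    for k in range(N, N_ext):
        y.append(np.dot(a, np.array(y[k - p:k][::-1])))
    y = np.array(y)

    spec_short = np.abs(np.fft.fft(x, N_ext))      # conventional (zero-padded) DFT of observed data
    spec_ext = np.abs(np.fft.fft(y))               # DFT of extrapolated data: sharper, better-separated peaks
    print("peak bins:", np.argmax(spec_short), np.argmax(spec_ext))
    ```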

  16. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    PubMed

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method, and the method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  17. An extrapolation method for compressive strength prediction of hydraulic cement products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siqueira Tango, C.E. de

    1998-07-01

    The basis for the AMEBA Method is presented. A strength-time function is used to extrapolate the predicted cementitious material strength at a late (ALTA) age from two earlier-age strengths--medium (MEDIA) and low (BAIXA) ages. The experimental basis for the method is data from the IPT-Brazil laboratory and the field, including a long-term study on concrete, research on limestone, slag, and fly-ash additions, and quality control data from a cement factory, a shotcrete tunnel lining, and a grout for structural repair. The applicability of the method was also verified for high-performance concrete with silica fume. The formula for predicting late-age (e.g., 28 days) strength, for a given set of involved ages (e.g., 28, 7, and 2 days), is normally a function only of the two earlier ages' (e.g., 7 and 2 days) strengths. This equation has been shown to be independent of materials variations, including cement brand, and is also easy to use graphically. Using the AMEBA method, and needing to know only the type of cement used, it has been possible to predict strengths satisfactorily, even without the preliminary tests that are required in other methods.
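
    As a rough illustration of extrapolating a late-age strength from two earlier ages, the fragment below assumes a logarithmic strength-time law fitted through the two early-age strengths. This is only a stand-in for the idea; it is not the published AMEBA formula, and the strengths and ages are hypothetical.

    ```python
    # Illustrative extrapolation of a late-age strength from two earlier ages,
    # assuming a logarithmic strength-time law f(t) = a + b*ln(t).  This is a
    # stand-in for the idea only; it is not the published AMEBA formula.
    import math

    def late_strength(t_low, f_low, t_med, f_med, t_late):
        b = (f_med - f_low) / math.log(t_med / t_low)
        a = f_med - b * math.log(t_med)
        return a + b * math.log(t_late)

    # Hypothetical strengths (MPa) at 2 and 7 days, extrapolated to 28 days.
    print(late_strength(2, 18.0, 7, 30.0, 28))
    ```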

  18. Properties of infrared extrapolations in a harmonic oscillator basis

    DOE PAGES

    Coon, Sidney A.; Kruse, Michael K. G.

    2016-02-22

    Here, the success and utility of effective field theory (EFT) in explaining the structure and reactions of few-nucleon systems has prompted the initiation of EFT-inspired extrapolations to larger model spaces in ab initio methods such as the no-core shell model (NCSM). In this contribution, we review and continue our studies of infrared (ir) and ultraviolet (uv) regulators of NCSM calculations in which the input is phenomenological NN and NNN interactions fitted to data. We extend our previous findings that an extrapolation in the ir cutoff with the uv cutoff above the intrinsic uv scale of the interaction is quite successful, not only for the eigenstates of the Hamiltonian but also for expectation values of operators, such as r², considered long range. The latter results are obtained with Hamiltonians transformed by the similarity renormalization group (SRG) evolution. On the other hand, a possible extrapolation of ground state energies in the uv cutoff when the ir cutoff is below the intrinsic ir scale is not robust and does not agree with the ir extrapolation of the same data or with independent calculations using other methods.
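
    A commonly quoted form for such infrared extrapolations is E(L) = E_inf + A*exp(-2*k*L). With energies at three equally spaced effective box sizes the limit has a closed form, as sketched below; the energies used are fabricated for illustration and are not results from this paper.

    ```python
    # Sketch of an infrared extrapolation of the commonly quoted form
    #   E(L) = E_inf + A * exp(-2*k*L).
    # With energies at three equally spaced effective box sizes L, L+d, L+2d the
    # limit E_inf has a closed form.  The energies below are fabricated.
    import math

    d = 2.0                                  # spacing in L, fm
    E1, E2, E3 = -27.08, -27.88, -28.11      # energies at L = 8, 10, 12 fm (made up)

    r = (E3 - E2) / (E2 - E1)                # equals exp(-2*k*d)
    k = -math.log(r) / (2.0 * d)
    E_inf = E3 + (E3 - E2) * r / (1.0 - r)
    print(f"k ~ {k:.2f} fm^-1, extrapolated E_inf ~ {E_inf:.2f} MeV")
    ```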

  19. The Extrapolation of Elementary Sequences

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Saul, Ronald

    1992-01-01

    We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.

  20. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  1. Extrapolation of Functions of Many Variables by Means of Metric Analysis

    NASA Astrophysics Data System (ADS)

    Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David

    2018-02-01

    The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables at a finite number of points in some domain D of the m-dimensional space are given, and it is required to restore the value of the function at points outside the domain D. The paper proposes a fundamentally new method for extrapolating functions of several variables, built on the interpolation scheme of metric analysis. To solve the extrapolation problem, a two-stage scheme based on metric analysis methods is proposed. In the first stage, using metric analysis, the function is interpolated at the points of the domain D belonging to the segment of the straight line connecting the center of the domain D with the point M at which the value of the function is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along this straight-line segment beyond the domain D up to the point M. A numerical example demonstrates the efficiency of the method under consideration.

  2. The Extrapolation of High Altitude Solar Cell I(V) Characteristics to AM0

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Reinke, William; Blankenship, Kurt; Demers, James

    2007-01-01

    The high altitude aircraft method has been used at NASA GRC since the early 1960's to calibrate solar cell short circuit current, ISC, to Air Mass Zero (AM0). This method extrapolates ISC to AM0 via the Langley plot method, a logarithmic extrapolation to zero air mass, and includes corrections for the varying Earth-Sun distance to 1.0 AU and compensation for the non-uniform ozone distribution in the atmosphere. However, other characteristics of the solar cell I(V) curve do not extrapolate in the same way. Another approach is needed to extrapolate VOC and the maximum power point (PMAX) to AM0 illumination. As part of the high altitude aircraft method, VOC and PMAX can be obtained as ISC changes during the flight. These values can then be extrapolated, sometimes interpolated, to the ISC(AM0) value. This approach should be valid as long as the shape of the solar spectrum in the stratosphere does not change too much from AM0. As a feasibility check, the results are compared to AM0 I(V) curves obtained using the NASA GRC X25-based multi-source simulator. This paper investigates the approach on both multi-junction solar cells and sub-cells.
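
    The Langley-plot step of the high altitude aircraft method (a logarithmic extrapolation of short-circuit current to zero air mass) is easy to sketch. The air-mass values and currents below are fabricated, and the ozone and Earth-Sun distance corrections mentioned in the abstract are omitted, so this is an illustration of the extrapolation only.

    ```python
    # Sketch of a Langley plot: fit ln(Isc) versus relative air mass and extrapolate
    # to zero air mass (AM0).  The data are fabricated, and the ozone and Earth-Sun
    # distance corrections described in the abstract are omitted.
    import numpy as np

    air_mass = np.array([0.35, 0.30, 0.25, 0.22, 0.20])     # along the flight profile
    isc_mA = np.array([130.5, 131.6, 132.8, 133.5, 134.0])  # measured short-circuit current

    slope, intercept = np.polyfit(air_mass, np.log(isc_mA), 1)
    isc_am0 = np.exp(intercept)                             # extrapolated to air mass 0
    print(f"Isc(AM0) ~ {isc_am0:.1f} mA")
    ```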

  3. NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.

    PubMed

    Hinrichs, R N; McLean, S P

    1995-10-01

    This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object sufficiently large to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.

  4. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    NASA Astrophysics Data System (ADS)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes with extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.

  5. The forecast for RAC extrapolation: mostly cloudy.

    PubMed

    Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie

    2011-09-01

    The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers should not only understand the statutory and regulatory basis for extrapolation, but also be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.

  6. Extrapolating Survival from Randomized Trials Using External Data: A Review of Methods

    PubMed Central

    Jackson, Christopher; Stevens, John; Ren, Shijie; Latimer, Nick; Bojke, Laura; Manca, Andrea; Sharples, Linda

    2016-01-01

    This article describes methods used to estimate parameters governing long-term survival, or times to other events, for health economic models. Specifically, the focus is on methods that combine shorter-term individual-level survival data from randomized trials with longer-term external data, thus using the longer-term data to aid extrapolation of the short-term data. This requires assumptions about how trends in survival for each treatment arm will continue after the follow-up period of the trial. Furthermore, using external data requires assumptions about how survival differs between the populations represented by the trial and external data. Study reports from a national health technology assessment program in the United Kingdom were searched, and the findings were combined with “pearl-growing” searches of the academic literature. We categorized the methods that have been used according to the assumptions they made about how the hazards of death vary between the external and internal data and through time, and we discuss the appropriateness of the assumptions in different circumstances. Modeling choices, parameter estimation, and characterization of uncertainty are discussed, and some suggestions for future research priorities in this area are given. PMID:27005519

  7. Extrapolation procedures in Mott electron polarimetry

    NASA Technical Reports Server (NTRS)

    Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.

    1992-01-01

    In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
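
    The thickness extrapolation mentioned above can be pictured as a fit of the measured asymmetry against foil thickness, evaluated at zero thickness. The record notes that several fitting functions are in use; the plain linear form and the data values below are assumptions chosen only for this example.

        # Minimal sketch of a foil-thickness extrapolation of Mott asymmetries.
        import numpy as np

        def extrapolate_asymmetry(thickness_nm, asymmetry):
            """Fit A(t) linearly in foil thickness t and return the t -> 0 intercept."""
            slope, intercept = np.polyfit(thickness_nm, asymmetry, 1)
            return intercept

        t = np.array([50.0, 100.0, 200.0, 400.0])   # gold film thicknesses (nm), illustrative
        A = np.array([0.245, 0.232, 0.208, 0.166])  # measured asymmetries, illustrative
        print(f"Single-atom asymmetry estimate A(t=0) ~ {extrapolate_asymmetry(t, A):.3f}")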

  8. Characterization of Copper Corrosion Products in Drinking Water by Combining Electrochemical and Surface Analyses

    EPA Science Inventory

    This study focuses on the application of electrochemical approaches to drinking water copper corrosion problems. Applying electrochemical approaches combined with copper solubility measurements, and solid surface analysis approaches were discussed. Tafel extrapolation and Electro...

  9. Characterization of Copper Corrosion Products Formed in Drinking Water by Combining Electrochemical and Surface Analyses

    EPA Science Inventory

    This study focuses on the application of electrochemical approaches to drinking water copper corrosion problems. Applying electrochemical approaches combined with copper solubility measurements, and solid surface analysis approaches were discussed. Tafel extrapolation and Electro...

  10. The impact of surface composition on Tafel kinetics leading to enhanced electrochemical insertion of hydrogen in palladium

    NASA Astrophysics Data System (ADS)

    Dmitriyeva, Olga; Hamm, Steven C.; Knies, David L.; Cantwell, Richard; McConnell, Matt

    2018-05-01

    Our previous work experimentally demonstrated the enhancement of electrochemical hydrogen insertion into palladium by modifying the chemical composition of the cathode surface with Pb, Pt and Bi, referred to as surface promoters. The experiment demonstrated that an optimal combination of the surface promoters led to an increase in hydrogen fugacity of more than three orders of magnitude, while maintaining the same current density. This manuscript discusses the application of Density Functional Theory (DFT) to elucidate the thermodynamics and kinetics of observed enhancement of electrochemical hydrogen insertion into palladium. We present theoretical simulations that: (1) establish the elevation of hydrogen's chemical potential on Pb and Bi surfaces to enhance hydrogen insertion, (2) confirm the increase of a Tafel activation barrier that results in a decrease of the reaction rate at the given hydrogen overpotential, and (3) explain why the surface promoter's coverage needs to be non-uniform, namely to allow hydrogen insertion into palladium bulk while simultaneously locking hydrogen below the surface (the corking effect). The discussed DFT-based method can be used for efficient scanning of different material configurations to design a highly effective hydrogen storage system.

  11. Simple extrapolation method to predict the electronic structure of conjugated polymers from calculations on oligomers

    DOE PAGES

    Larsen, Ross E.

    2016-04-12

    In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.

  12. Measurement accuracies in band-limited extrapolation

    NASA Technical Reports Server (NTRS)

    Kritikos, H. N.

    1982-01-01

    The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval L/b ≤ |x| ≤ L, b ≥ 1, the signal must be known within an error ε_N given by ε_N^2 ≈ (1/4)(2kL')^3 [e/(8b) · L/L']^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.

  13. Conic state extrapolation. [computer program for space shuttle navigation and guidance requirements

    NASA Technical Reports Server (NTRS)

    Shepperd, S. W.; Robertson, W. M.

    1973-01-01

    The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.

  14. AXES OF EXTRAPOLATION IN RISK ASSESSMENTS

    EPA Science Inventory

    Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...

  15. How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?

    NASA Astrophysics Data System (ADS)

    Lin, Zesen; Fang, Guanwen; Kong, Xu

    2016-12-01

    Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities (L_IR) of galaxies. By utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ~ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolations from the available longest wavelength PACS band based on the Chary & Elbaz template can be a good estimator. Moreover, if the PACS measurement is unavailable, extrapolations from SPIRE observations but based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The emission within the rest-frame 10-100 μm range of the IR SED can be well described by all three templates, but only the Dale & Helou template shows a nearly unbiased estimate of the emission of the rest-frame submillimeter part.

  16. Effective orthorhombic anisotropic models for wavefield extrapolation

    NASA Astrophysics Data System (ADS)

    Ibanez-Jacome, Wilson; Alkhalifah, Tariq; Waheed, Umair bin

    2014-09-01

    Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborate formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such a representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth-order polynomial equation, with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.

  17. Predicting structural properties of fluids by thermodynamic extrapolation

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Jiao, Sally; Hatch, Harold W.; Blanco, Marco A.; Shen, Vincent K.

    2018-05-01

    We describe a methodology for extrapolating the structural properties of multicomponent fluids from one thermodynamic state to another. These properties generally include features of a system that may be computed from an individual configuration such as radial distribution functions, cluster size distributions, or a polymer's radius of gyration. This approach is based on the principle of using fluctuations in a system's extensive thermodynamic variables, such as energy, to construct an appropriate Taylor series expansion for these structural properties in terms of intensive conjugate variables, such as temperature. Thus, one may extrapolate these properties from one state to another when the series is truncated to some finite order. We demonstrate this extrapolation for simple and coarse-grained fluids in both the canonical and grand canonical ensembles, in terms of both temperatures and the chemical potentials of different components. The results show that this method is able to reasonably approximate structural properties of such fluids over a broad range of conditions. Consequently, this methodology may be employed to increase the computational efficiency of molecular simulations used to measure the structural properties of certain fluid systems, especially those used in high-throughput or data-driven investigations.
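
    The fluctuation idea described above can be illustrated with the simplest case: a first-order Taylor extrapolation of a canonical-ensemble average in inverse temperature, using d<X>/dβ = -(<XU> - <X><U>). The sketch below is a generic illustration with synthetic samples under that assumption; it is not the authors' implementation, and all variable names and values are invented.

        # Minimal sketch of first-order thermodynamic extrapolation of an observable
        # using the canonical-ensemble fluctuation formula.
        import numpy as np

        def extrapolate_observable(x_samples, u_samples, beta0, beta1):
            """Estimate <X> at beta1 from samples of X and potential energy U at beta0."""
            x = np.asarray(x_samples, dtype=float)
            u = np.asarray(u_samples, dtype=float)
            dX_dbeta = -(np.mean(x * u) - np.mean(x) * np.mean(u))  # fluctuation formula
            return np.mean(x) + (beta1 - beta0) * dX_dbeta

        # Synthetic, correlated samples standing in for simulation output.
        rng = np.random.default_rng(0)
        u = rng.normal(-500.0, 10.0, size=10_000)                       # potential energies
        x = 1.2 - 0.002 * (u + 500.0) + rng.normal(0.0, 0.01, 10_000)   # some structural metric
        print(extrapolate_observable(x, u, beta0=1.0, beta1=1.05))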

  18. In Vitro-In Vivo Extrapolation of Metabolism- and Transporter-Mediated Drug-Drug Interactions-Overview of Basic Prediction Methods.

    PubMed

    Yoshida, Kenta; Zhao, Ping; Zhang, Lei; Abernethy, Darrell R; Rekić, Dinko; Reynolds, Kellie S; Galetin, Aleksandra; Huang, Shiew-Mei

    2017-09-01

    Evaluation of drug-drug interaction (DDI) risk is vital to establish benefit-risk profiles of investigational new drugs during drug development. In vitro experiments are routinely conducted as an important first step to assess the metabolism- and transporter-mediated DDI potential of investigational new drugs. Results from these experiments are interpreted, often with the aid of in vitro-in vivo extrapolation methods, to determine whether and how DDI should be evaluated clinically to provide the basis for proper DDI management strategies, including dosing recommendations, alternative therapies, or contraindications under various DDI scenarios and in different patient populations. This article provides an overview of currently available in vitro experimental systems and basic in vitro-in vivo extrapolation methodologies for metabolism- and transporter-mediated DDIs. Published by Elsevier Inc.

  19. A thermal extrapolation method for the effective temperatures and internal energies of activated ions

    NASA Astrophysics Data System (ADS)

    Meot-Ner (Mautner), Michael; Somogyi, Árpád

    2007-11-01

    The internal energies of dissociating ions, activated chemically or collisionally, can be estimated using the kinetics of thermal dissociation. The thermal Arrhenius parameters can be combined with the observed dissociation rate of the activated ions using k_diss = A_thermal exp(-E_a,thermal/RT_eff). This Arrhenius-type relation yields the effective temperature, T_eff, at which the ions would dissociate thermally at the same rate, or yield the same product distributions, as the activated ions. In turn, T_eff is used to calculate the internal energy of the ions and the energy deposited by the activation process. The method yields an energy deposition efficiency of 10% for a chemical ionization proton transfer reaction and 8-26% for the surface collisions of various peptide ions. Internal energies of ions activated by chemical ionization or by gas phase collisions, and of ions produced by desorption methods such as fast atom bombardment, can also be evaluated. Thermal extrapolation is especially useful for ion-molecule reaction products and for biological ions, where other methods to evaluate internal energies are laborious or unavailable.
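
    Since the record gives the working relation explicitly, a short numerical example may help: solving k_diss = A exp(-E_a/(R T_eff)) for T_eff. The Arrhenius parameters and the observed rate below are placeholders, not values from the paper.

        # Minimal sketch: effective temperature from the Arrhenius-type relation above.
        import math

        R = 8.314462618e-3  # gas constant, kJ mol^-1 K^-1

        def effective_temperature(k_diss, a_thermal, ea_thermal_kj):
            """T_eff at which thermal dissociation would proceed at rate k_diss."""
            return ea_thermal_kj / (R * math.log(a_thermal / k_diss))

        # Placeholder values: A = 1e14 s^-1, Ea = 120 kJ/mol, observed k_diss = 1e3 s^-1.
        print(f"T_eff ~ {effective_temperature(1e3, 1e14, 120.0):.0f} K")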

  20. Cosmogony as an extrapolation of magnetospheric research

    NASA Technical Reports Server (NTRS)

    Alfven, H.

    1984-01-01

    A theory of the origin and evolution of the Solar System which considered electromagnetic forces and plasma effects is revised in light of information supplied by space research. In situ measurements in the magnetospheres and solar wind can be extrapolated outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of cloud properties essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation facilitates analysis of the cosmogonic processes by extrapolation of magnetospheric phenomena. Pioneer-Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it is possible to reconstruct events 4 to 5 billion years ago with an accuracy of a few percent.

  1. CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS

    EPA Science Inventory

    Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...

  2. Extrapolation of rotating sound fields.

    PubMed

    Carley, Michael

    2018-03-01

    A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.

  3. Extrapolating target tracks

    NASA Astrophysics Data System (ADS)

    Van Zandt, James R.

    2012-05-01

    Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
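
    As a generic illustration of extrapolating a track past its last update, the sketch below applies the standard Kalman prediction step (x -> Fx, P -> FPF^T + Q) to a one-dimensional constant-velocity model with continuous white acceleration noise. It shows only the common mechanics, not the Singer-model or reduced-state filters compared in the record, and all numbers are invented.

        # Minimal sketch of Kalman-style track extrapolation past the last update.
        import numpy as np

        def extrapolate_track(x, P, dt, q):
            """Propagate the state estimate and covariance dt seconds forward."""
            F = np.array([[1.0, dt],
                          [0.0, 1.0]])                        # constant-velocity transition
            Q = q * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                              [dt**2 / 2.0, dt]])             # continuous white-acceleration noise
            return F @ x, F @ P @ F.T + Q

        x = np.array([1000.0, 50.0])   # position (m), velocity (m/s) at last update
        P = np.diag([25.0, 4.0])       # post-update covariance (illustrative)
        x_pred, P_pred = extrapolate_track(x, P, dt=2.0, q=1.0)
        print(x_pred, np.sqrt(np.diag(P_pred)))  # extrapolated state and 1-sigma errors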

  4. Line-of-sight extrapolation noise in dust polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poh, Jason; Dodelson, Scott

    The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g., 350 GHz) is due solely to dust and then extrapolate the signal down to lower frequency (e.g., 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of about 20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r < 0.0015.
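
    The temperature dependence at the heart of this effect can be seen with a simple modified-blackbody (greybody) scaling factor between the two frequencies mentioned above. The spectral index and the cloud temperatures in the sketch are assumed values used only for illustration.

        # Minimal sketch: greybody scaling factor from 350 GHz down to 150 GHz for
        # dust clouds of different temperatures (amplitude ~ nu^beta * B_nu(T)).
        import numpy as np

        H = 6.62607015e-34    # Planck constant (J s)
        K_B = 1.380649e-23    # Boltzmann constant (J/K)
        C = 2.99792458e8      # speed of light (m/s)

        def planck(nu_ghz, temp_k):
            nu = nu_ghz * 1e9
            return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * temp_k))

        def greybody_scaling(nu_from_ghz, nu_to_ghz, temp_k, beta=1.6):
            """Factor to scale a dust amplitude measured at nu_from down to nu_to."""
            return ((nu_to_ghz / nu_from_ghz) ** beta
                    * planck(nu_to_ghz, temp_k) / planck(nu_from_ghz, temp_k))

        for temp in (15.0, 20.0, 25.0):   # illustrative cloud temperatures (K)
            print(temp, greybody_scaling(350.0, 150.0, temp))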

  5. Extrapolation-Based References Improve Motion and Eddy-Current Correction of High B-Value DWI Data: Application in Parkinson's Disease Dementia.

    PubMed

    Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar

    2015-01-01

    Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.

  6. Temperature extrapolation of multicomponent grand canonical free energy landscapes

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-08-01

    We derive a method for extrapolating the grand canonical free energy landscape of a multicomponent fluid system from one temperature to another. Previously, we introduced this statistical mechanical framework for the case where kinetic energy contributions to the classical partition function were neglected for simplicity [N. A. Mahynski et al., J. Chem. Phys. 146, 074101 (2017)]. Here, we generalize the derivation to admit these contributions in order to explicitly illustrate the differences that result. Specifically, we show how factoring out kinetic energy effects a priori, in order to consider only the configurational partition function, leads to simpler mathematical expressions that tend to produce more accurate extrapolations than when these effects are included. We demonstrate this by comparing and contrasting these two approaches for the simple cases of an ideal gas and a non-ideal, square-well fluid.

  7. The design of L1-norm visco-acoustic wavefield extrapolators

    NASA Astrophysics Data System (ADS)

    Salam, Syed Abdul; Mousa, Wail A.

    2018-04-01

    Explicit depth frequency-space (f-x) prestack imaging is an attractive mechanism for seismic imaging. To date, the main focus of this method has been data migration assuming an acoustic medium, and very little work has assumed visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f-x extrapolators. To show the accuracy and compensation of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation is practically stable and improves the resolution of the images.

  8. An analysis of shock coalescence including three-dimensional effects with application to sonic boom extrapolation. Ph.D. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1984-01-01

    A method for analyzing shock coalescence which includes three-dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with, and has been combined with, a nonlinear sonic boom extrapolation program which is based on the method of characteristics. The extrapolation program is able to extrapolate pressure signatures which include embedded shocks from an initial data line in the plane of symmetry at approximately one body length from the axis of the aircraft to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.

  9. How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis.

    PubMed

    Bojke, Laura; Manca, Andrea; Asaria, Miqdad; Mahon, Ronan; Ren, Shijie; Palmer, Stephen

    2017-08-01

    Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies, the follow-up duration of which is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation, projecting short-term evidence over a longer time horizon, is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice, and the degree of uncertainty surrounding the validity of any assumptions made, are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predict long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.

  10. Community assessment techniques and the implications for rarefaction and extrapolation with Hill numbers.

    PubMed

    Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E

    2017-12-01

    Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
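
    Because the record compares richness, Shannon diversity, and Simpson diversity, which correspond to the Hill numbers of order q = 0, 1 and 2, a small sketch of their computation from an abundance vector may be useful. The abundance values are invented, and this is a generic calculation, not the rarefaction and extrapolation estimators used in the study.

        # Minimal sketch of Hill numbers from a species abundance vector.
        import numpy as np

        def hill_number(abundances, q):
            """Hill number of order q: q=0 richness, q=1 exp(Shannon), q=2 inverse Simpson."""
            p = np.asarray(abundances, dtype=float)
            p = p[p > 0] / p.sum()
            if q == 1:                                   # limit case: exponential of Shannon entropy
                return float(np.exp(-np.sum(p * np.log(p))))
            return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

        counts = [120, 45, 30, 8, 3, 1]                  # abundances from one quadrat (made up)
        for q in (0, 1, 2):
            print(f"q={q}: {hill_number(counts, q):.2f}")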

  11. Line-of-sight extrapolation noise in dust polarization

    NASA Astrophysics Data System (ADS)

    Poh, Jason; Dodelson, Scott

    2017-05-01

    The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g. 350 GHz) is due solely to dust and then extrapolate the signal down to a lower frequency (e.g. 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of ˜20 K , these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise on a greybody dust model consistent with Planck and Pan-STARRS observations, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r ≲0.0015 in the greybody dust models considered in this

  12. Extra- and intracellular volume monitoring by impedance during haemodialysis using Cole-Cole extrapolation.

    PubMed

    Jaffrin, M Y; Maasrani, M; Le Gourrier, A; Boudailliez, B

    1997-05-01

    A method is presented for monitoring the relative variation of extracellular and intracellular fluid volumes using a multifrequency impedance meter and the Cole-Cole extrapolation technique. It is found that this extrapolation is necessary to obtain reliable data for the resistance of the intracellular fluid. The extracellular and intracellular resistances can be approached using frequencies of, respectively, 5 kHz and 1000 kHz, but the use of 100 kHz leads to unacceptable errors. In the conventional treatment the overall relative variation of intracellular resistance is found to be relatively small.
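
    One way to picture the Cole-Cole extrapolation referred to above is as a fit of a Cole impedance model to the multifrequency data, from which the zero- and infinite-frequency resistances, and hence the extracellular and intracellular resistances, are read off. The sketch below uses the common bioimpedance convention Re = R0 and Ri = R0*Rinf/(R0 - Rinf), with synthetic data over roughly the 5 kHz to 1 MHz range mentioned in the record; it is an illustration under those assumptions, not the authors' procedure.

        # Minimal sketch of a Cole-model fit used to extrapolate impedance data
        # to zero and infinite frequency.
        import numpy as np
        from scipy.optimize import least_squares

        def cole_model(freq_hz, r0, rinf, tau, alpha):
            jw_tau = (1j * 2.0 * np.pi * freq_hz * tau) ** alpha
            return rinf + (r0 - rinf) / (1.0 + jw_tau)

        def fit_cole(freq_hz, z_measured):
            def residuals(p):
                z = cole_model(freq_hz, *p)
                return np.concatenate([(z - z_measured).real, (z - z_measured).imag])
            p0 = [np.max(z_measured.real), np.min(z_measured.real), 1e-7, 0.8]
            return least_squares(residuals, p0).x

        f = np.logspace(3.7, 6, 30)                                   # ~5 kHz to 1 MHz
        z = cole_model(f, r0=700.0, rinf=450.0, tau=2e-7, alpha=0.75) # synthetic "measurements"
        r0, rinf, tau, alpha = fit_cole(f, z)
        re, ri = r0, r0 * rinf / (r0 - rinf)                          # extracellular, intracellular
        print(f"Re ~ {re:.0f} ohm, Ri ~ {ri:.0f} ohm")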

  13. Free magnetic energy and relative magnetic helicity diagnostics for the quality of NLFF field extrapolations

    NASA Astrophysics Data System (ADS)

    Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.

    We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.

  14. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    NASA Astrophysics Data System (ADS)

    Spackman, Peter R.; Karton, Amir

    2015-05-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.
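
    The two-point form quoted above, E(L) = E_CBS + B/L^α, can be solved in closed form from a DZ/TZ pair of correlation energies. The sketch below does exactly that; the α value and the energies are placeholders, not numbers from the paper.

        # Minimal sketch of a two-point complete-basis-set extrapolation
        # using E(L) = E_CBS + B / L^alpha with DZ (L=2) and TZ (L=3) energies.
        def two_point_cbs(e_dz, e_tz, alpha=3.0, l_dz=2, l_tz=3):
            """Return E_CBS from two correlation energies at cardinal numbers L."""
            w_dz, w_tz = l_dz**alpha, l_tz**alpha
            return (e_tz * w_tz - e_dz * w_dz) / (w_tz - w_dz)

        # Example with made-up CCSD correlation energies (hartree).
        print(two_point_cbs(-0.2750, -0.3050, alpha=3.0))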

  15. Acute toxicity value extrapolation with fish and aquatic invertebrates

    USGS Publications Warehouse

    Buckler, Denny R.; Mayer, Foster L.; Ellersieck, Mark R.; Asfaw, Amha

    2005-01-01

    Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled “Ecological Risk Analysis” (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more

  16. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.

  17. Probabilistic precipitation nowcasting based on an extrapolation of radar reflectivity and an ensemble approach

    NASA Astrophysics Data System (ADS)

    Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch

    2017-09-01

    A new method for the probabilistic nowcasting of instantaneous rain rates (ENS) based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold in a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by the calibration of forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: combined method (COM) and neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble was comprised of united ensemble members of ENS and NEI. The results showed that the calibration technique significantly improves bias of the probability forecasts by including additional uncertainties that correspond to neglected processes during the extrapolation. In addition, the calibration can also be used for finding the limits of maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable size of the ensemble is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
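
    The probability product of the ensemble approach described above is simply the fraction of members exceeding a threshold at each grid point; a compact sketch of that step is given below with a synthetic ensemble. The grid size, member count, and rain-rate distribution are assumptions for illustration, and no calibration step is included.

        # Minimal sketch: exceedance probability from an ensemble of extrapolated
        # rain-rate fields.
        import numpy as np

        def exceedance_probability(ensemble, threshold):
            """ensemble: (n_members, ny, nx) forecast rain rates (mm/h)."""
            return (ensemble > threshold).mean(axis=0)

        rng = np.random.default_rng(1)
        members = rng.gamma(shape=0.8, scale=1.5, size=(100, 50, 50))  # 100-member ensemble
        p_1mm = exceedance_probability(members, 1.0)                   # P(rain rate > 1 mm/h)
        print(p_1mm.shape, float(p_1mm.mean()))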

  18. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because video is used for various applications such as video-telephone, video-conference, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames are compared using both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.

  19. Application of a framework for extrapolating chemical effects ...

    EPA Pesticide Factsheets

    Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetics, life-stage, and pathway similarities/differences. Here we propose a framework using a tiered approach for species extrapolation that enables a transparent weight-of-evidence driven evaluation of pathway conservation (or lack thereof) in the context of adverse outcome pathways. Adverse outcome pathways describe the linkages from a molecular initiating event, defined as the chemical-biomolecule interaction, through subsequent key events leading to an adverse outcome of regulatory concern (e.g., mortality, reproductive dysfunction). Tier 1 of the extrapolation framework employs in silico evaluations of sequence and structural conservation of molecules (e.g., receptors, enzymes) associated with molecular initiating events or upstream key events. Such evaluations make use of available empirical and sequence data to assess taxonomic relevance. Tier 2 uses in vitro bioassays, such as enzyme inhibition/activation, competitive receptor binding, and transcriptional activation assays to explore functional conservation of pathways across taxa. Finally, Tier 3 provides a comparative analysis of in vivo responses between species utilizing well-established model organisms to assess departure from

  20. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.

  1. An extrapolation scheme for solid-state NMR chemical shift calculations

    NASA Astrophysics Data System (ADS)

    Nakajima, Takahito

    2017-06-01

    Conventional quantum chemical and solid-state physical approaches suffer from several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches but reduces their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. With the extrapolation scheme, the estimated values have only a small dependence on the low-level density functional theory calculation. Thus, our approach is efficient because the rough calculation can be used within the extrapolation scheme.

  2. Resolution enhancement by extrapolation of coherent diffraction images: a quantitative study on the limits and a numerical study of nonbinary and phase objects.

    PubMed

    Latychevskaia, T; Chushkin, Y; Fink, H-W

    2016-10-01

    In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by postextrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although there could be in principle other methods to achieve extrapolation, we devote our discussion to employing iterative phase retrieval methods and demonstrate their limits. We present two numerical studies; namely, the extrapolation of diffraction patterns of nonbinary and that of phase objects together with a discussion of the optimal extrapolation procedure. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  3. Cross-species extrapolation of chemical effects: Challenges and new insights

    EPA Science Inventory

    One of the greatest uncertainties in chemical risk assessment is extrapolation of effects from tested to untested species. While this undoubtedly is a challenge in the human health arena, species extrapolation is a particularly daunting task in ecological assessments, where it is...

  4. Visualization and Nowcasting for Aviation using online verified ensemble weather radar extrapolation.

    NASA Astrophysics Data System (ADS)

    Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan

    2013-04-01

    Nowcasting of precipitation events, especially thunderstorm events or winter storms, has a high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increase of handling capacity, anticipation of avoidance manoeuvres, and increase of awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of the location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis for generating rapid-update forecasts in a time frame of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. On the basis of tracking radar echoes by correlation, the movement vectors of successive weather radar images are calculated. For every new successive radar image a set of ensemble precipitation fields is collected by using different parameter sets like pattern match size, different time steps, filter methods and an implementation of the history of tracking vectors and plausibility checks. This method considers the uncertainty in rain field displacement and different scales in time and space. By manually validating a set of case studies, the best verification method and skill score is defined and implemented into an online verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To get information about the quality and reliability of the extrapolation process, additional information on data quality (e.g. shielding in Alpine areas) is extrapolated and combined with an extrapolation

  5. Magnetic field extrapolation with MHD relaxation using AWSoM

    NASA Astrophysics Data System (ADS)

    Shi, T.; Manchester, W.; Landi, E.

    2017-12-01

    Coronal mass ejections are known to be the major source of disturbances in the solar wind capable of affecting geomagnetic environments. In order for accurate predictions of such space weather events, a data-driven simulation is needed. The first step towards such a simulation is to extrapolate the magnetic field from the observed field that is only at the solar surface. Here we present results of a new code of magnetic field extrapolation with direct magnetohydrodynamics (MHD) relaxation using the Alfvén Wave Solar Model (AWSoM) in the Space Weather Modeling Framework. The obtained field is self-consistent with our model and can be used later in time-dependent simulations without modifications of the equations. We use the Low and Lou analytical solution to test our results and they reach a good agreement. We also extrapolate the magnetic field from the observed data. We then specify the active region corona field with this extrapolation result in the AWSoM model and self-consistently calculate the temperature of the active region loops with Alfvén wave dissipation. Multi-wavelength images are also synthesized.

  6. Approach for extrapolating in vitro metabolism data to refine bioconcentration factor estimates.

    PubMed

    Cowan-Ellsberry, Christina E; Dyer, Scott D; Erhardt, Susan; Bernhard, Mary Jo; Roe, Amy L; Dowty, Martin E; Weisbrod, Annie V

    2008-02-01

    National and international chemical management programs are assessing thousands of chemicals for their persistence, bioaccumulation and environmental toxicity properties; however, data for evaluating the bioaccumulation potential in fish are limited. Computer-based models that account for the uptake and elimination processes that contribute to bioaccumulation may help to meet the need for reliable estimates. One critical elimination process for chemicals is metabolic transformation. It has been suggested that in vitro metabolic transformation tests using fish liver hepatocytes or S9 fractions can provide rapid and cost-effective measurements of fish metabolic potential, which could be used to refine bioconcentration factor (BCF) computer model estimates. Therefore, recent activity has focused on developing in vitro methods to measure metabolic transformation in cellular and subcellular fish liver fractions. A method to extrapolate in vitro test data to whole-body metabolic transformation rates is presented that could be used to refine BCF computer model estimates. This extrapolation approach is based on concepts used to determine the fate and distribution of drugs within the human body, which have successfully supported the development of new pharmaceuticals for years. In addition, this approach has already been applied in physiologically based toxicokinetic models for fish. The validity of the in vitro to in vivo extrapolation is illustrated using the rate of loss of parent chemical measured in two independent in vitro test systems: (1) a subcellular enzymatic test using the trout liver S9 fraction, and (2) primary hepatocytes isolated from the common carp. The test chemicals evaluated have high quality in vivo BCF values and a range of logK(ow) from 3.5 to 6.7. The results show very good agreement between the measured BCF and estimated BCF values when the extrapolated whole-body metabolism rates are included, thus suggesting that in vitro biotransformation data

  7. Extrapolation of operators acting into quasi-Banach spaces

    NASA Astrophysics Data System (ADS)

    Lykov, K. V.

    2016-01-01

    Linear and sublinear operators acting from the scale of L_p spaces to a certain fixed quasinormed space are considered. It is shown how the extrapolation construction proposed by Jawerth and Milman at the end of 1980s can be used to extend a bounded action of an operator from the L_p scale to wider spaces. Theorems are proved which generalize Yano's extrapolation theorem to the case of a quasinormed target space. More precise results are obtained under additional conditions on the quasinorm. Bibliography: 35 titles.
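
    For orientation, the classical endpoint result that such schemes generalize is Yano's extrapolation theorem; the statement below (for a finite measure space and a normed target, not the quasinormed setting treated in the paper) is recalled only as a hedged reminder of the standard form.

```latex
% Classical Yano extrapolation theorem (standard normed-target form, finite measure space):
% if a sublinear operator T satisfies, for some \alpha > 0 and all p sufficiently close to 1,
%   \|Tf\|_{L_p} \le \frac{C}{(p-1)^{\alpha}} \|f\|_{L_p},
% then T extends boundedly to the Zygmund class L(\log L)^{\alpha}:
%   \|Tf\|_{L_1} \le C_1 \int_{\Omega} |f|\,\bigl(1 + \log^{+}|f|\bigr)^{\alpha}\, d\mu + C_2 .
```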

  8. 3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John

    2012-01-01

    Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has been and is a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy due to the limitations of these kinds of research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD and therefore, a 3D Z-R measurement using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis. The original software (developed at the University of Central Florida, 1998-2000) has
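
    As background to the Z-R discussion, the sketch below computes the reflectivity factor Z and rain rate R from a binned drop-size distribution measured by a single disdrometer. The Atlas et al. (1973) terminal-velocity approximation and the example DSD values are assumptions for illustration only and are not part of the 3D-DSD algorithm itself.

```python
import numpy as np

# Bin centres D (mm), bin widths dD (mm), and a hypothetical measured
# drop-size distribution N(D) in m^-3 mm^-1 (illustrative values only).
D = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
dD = np.full_like(D, 0.5)
N = np.array([800.0, 400.0, 150.0, 50.0, 15.0, 4.0])

# Atlas, Srivastava & Sekhon (1973) terminal-velocity approximation (m/s, D in mm).
v = 9.65 - 10.3 * np.exp(-0.6 * D)

# Radar reflectivity factor Z (mm^6 m^-3) and rain rate R (mm/h).
Z = np.sum(N * D**6 * dD)
R = 6.0 * np.pi * 1e-4 * np.sum(N * v * D**3 * dD)

print(f"Z = {Z:.1f} mm^6 m^-3 ({10 * np.log10(Z):.1f} dBZ), R = {R:.2f} mm/h")
```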

  9. The Extrapolation of Families of Curves by Recurrence Relations, with Application to Creep-Rupture Data

    NASA Technical Reports Server (NTRS)

    Mendelson, A.; Manson, S. S.

    1960-01-01

    A method using finite-difference recurrence relations is presented for direct extrapolation of families of curves. The method is illustrated by applications to creep-rupture data for several materials and it is shown that good results can be obtained without the necessity for any of the usual parameter concepts.
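
    The abstract does not reproduce the recurrence relations themselves; the sketch below illustrates the generic idea of extrapolating a single curve with a finite-difference recurrence (assuming higher-order forward differences vanish), and is only an assumption-laden stand-in for the authors' treatment of whole families of curves.

```python
from math import comb

import numpy as np

def recurrence_extrapolate(y, order, steps):
    """Extrapolate equally spaced samples y by assuming forward differences
    of order (order + 1) vanish, i.e. y locally follows a polynomial of
    degree `order`. Returns the extrapolated values."""
    y = list(map(float, y))
    for _ in range(steps):
        # Newton forward-difference recurrence:
        # y_{n+1} = sum_{m=1}^{order+1} (-1)^(m+1) * C(order+1, m) * y_{n+1-m}
        y.append(sum((-1) ** (m + 1) * comb(order + 1, m) * y[-m]
                     for m in range(1, order + 2)))
    return y[-steps:]

# A quadratic curve is reproduced exactly by a second-order recurrence.
t = np.arange(6)
curve = 3.0 + 2.0 * t + 0.5 * t ** 2
print(recurrence_extrapolate(curve, order=2, steps=3))  # values at t = 6, 7, 8
```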

  10. Physiologically based pharmacokinetic model for quinocetone in pigs and extrapolation to mequindox.

    PubMed

    Zhu, Xudong; Huang, Lingli; Xu, Yamei; Xie, Shuyu; Pan, Yuanhu; Chen, Dongmei; Liu, Zhenli; Yuan, Zonghui

    2017-02-01

    Physiologically based pharmacokinetic (PBPK) models are scientific methods used to predict veterinary drug residues that may occur in food-producing animals, and which have powerful extrapolation ability. Quinocetone (QCT) and mequindox (MEQ) are widely used in China for the prevention of bacterial infections and promoting animal growth, but their abuse causes a potential threat to human health. In this study, a flow-limited PBPK model was developed to simulate simultaneously residue depletion of QCT and its marker residue dideoxyquinocetone (DQCT) in pigs. The model included compartments for blood, liver, kidney, muscle and fat and an extra compartment representing the other tissues. Physiological parameters were obtained from the literature. Plasma protein binding rates, renal clearances and tissue/plasma partition coefficients were determined by in vitro and in vivo experiments. The model was calibrated and validated with several pharmacokinetic and residue-depletion datasets from the literature. Sensitivity analysis and Monte Carlo simulations were incorporated into the PBPK model to estimate individual variation of residual concentrations. The PBPK model for MEQ, the congener compound of QCT, was built through cross-compound extrapolation based on the model for QCT. The QCT model accurately predicted the concentrations of QCT and DQCT in various tissues at most time points, especially the later time points. Correlation coefficients between predicted and measured values for all tissues were greater than 0.9. Monte Carlo simulations showed excellent consistency between estimated concentration distributions and measured data points. The extrapolation model also showed good predictive power. The present models contribute to improve the residue monitoring systems of QCT and MEQ, and provide evidence of the usefulness of PBPK model extrapolation for the same kinds of compounds.
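
    The paper's model has compartments for blood, liver, kidney, muscle, fat, and the remaining tissues; the minimal flow-limited sketch below, with only a liver and a lumped "rest-of-body" compartment and entirely hypothetical parameter values, is meant solely to show how such flow-limited mass balances are coded, not to reproduce the QCT/DQCT model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only, not the published QCT values).
V_b, V_li, V_rb = 2.0, 1.2, 50.0   # compartment volumes (L)
Q_li, Q_rb = 40.0, 160.0           # blood flows (L/h)
P_li, P_rb = 4.0, 1.5              # tissue:blood partition coefficients
CL_int = 25.0                      # hepatic intrinsic clearance (L/h)
Q_tot = Q_li + Q_rb

def rhs(t, y):
    C_b, C_li, C_rb = y
    # Flow-limited tissue balances: uptake from blood, return at C_t / P_t.
    dC_li = (Q_li * (C_b - C_li / P_li) - CL_int * C_li / P_li) / V_li
    dC_rb = Q_rb * (C_b - C_rb / P_rb) / V_rb
    # Blood: venous return from the tissues minus arterial outflow.
    dC_b = (Q_li * C_li / P_li + Q_rb * C_rb / P_rb - Q_tot * C_b) / V_b
    return [dC_b, dC_li, dC_rb]

# IV bolus approximated by an initial blood concentration of 5 mg/L.
sol = solve_ivp(rhs, (0.0, 24.0), [5.0, 0.0, 0.0], max_step=0.05)
print("liver concentration at 24 h:", sol.y[1, -1])
```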

  11. Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Y.; Maier, A.; Berger, M.

    2015-04-15

    Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic based extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear to be ad-hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in a position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on
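
    A minimal illustration of the final step, enforcing that an extrapolated detector-row profile terminates at the known patient boundary, is sketched below; the cosine taper, the row values, and the boundary distances are assumptions of this sketch and do not reproduce the authors' shape-model-based completion.

```python
import numpy as np

def extend_row(row, n_left, n_right):
    """Pad a truncated detector row so the extrapolated profile decays
    smoothly from the edge value to zero exactly at the known patient
    boundary, located n_left / n_right samples outside the measured FOV."""
    def taper(edge_value, n):
        # Half-cosine ramp from edge_value down to zero over n samples.
        x = np.arange(1, n + 1) / n
        return edge_value * 0.5 * (1.0 + np.cos(np.pi * x))
    left = taper(row[0], n_left)[::-1]
    right = taper(row[-1], n_right)
    return np.concatenate([left, row, right])

# Hypothetical truncated line integrals and boundary distances (in samples).
row = np.array([3.2, 3.4, 3.5, 3.3, 3.1])
print(extend_row(row, n_left=4, n_right=6))
```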

  12. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    USGS Publications Warehouse

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.

    2016-01-01

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  13. SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C; Kumarasiri, A; Chetvertkov, M

    2014-06-01

    Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for the dose of the day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken, were selected. Furthermore, a small FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42 mm beyond the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7 mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior borders were 0.8 ± 0.5 mm and 3.0 ± 1.5 mm respectively, and increased to 2.7 ± 2.2 mm and 5.9 ± 1.9 mm at 4.2 cm away. The mean error within CBCT borders was 1.16 ± 0.54 mm. The overall errors within the 4.2 cm expansion were 2.0 ± 1.2 mm (sup) and 4.5 ± 1.6 mm (inf). Conclusion: The overall error in the inf direction is larger due to larger unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.

  14. Comparison of one-particle basis set extrapolation to explicitly correlated methods for the calculation of accurate quartic force fields, vibrational frequencies, and spectroscopic constants: Application to H2O, N2H+, NO2+, and C2H2

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Valeev, Edward F.; Lee, Timothy J.

    2010-12-01

    One-particle basis set extrapolation is compared with one of the new R12 methods for computing highly accurate quartic force fields (QFFs) and spectroscopic data, including molecular structures, rotational constants, and vibrational frequencies for the H2O, N2H+, NO2+, and C2H2 molecules. In general, agreement between the spectroscopic data computed from the best R12 and basis set extrapolation methods is very good with the exception of a few parameters for N2H+ where it is concluded that basis set extrapolation is still preferred. The differences for H2O and NO2+ are small and it is concluded that the QFFs from both approaches are more or less equivalent in accuracy. For C2H2, however, a known one-particle basis set deficiency for C-C multiple bonds significantly degrades the quality of results obtained from basis set extrapolation and in this case the R12 approach is clearly preferred over one-particle basis set extrapolation. The R12 approach used in the present study was modified in order to obtain high precision electronic energies, which are needed when computing a QFF. We also investigated including core-correlation explicitly in the R12 calculations, but conclude that current approaches are lacking. Hence core-correlation is computed as a correction using conventional methods. Considering the results for all four molecules, it is concluded that R12 methods will soon replace basis set extrapolation approaches for high accuracy electronic structure applications such as computing QFFs and spectroscopic data for comparison to high-resolution laboratory or astronomical observations, provided one uses a robust R12 method as we have done here. The specific R12 method used in the present study, CCSD(T)R12, incorporated a reformulation of one intermediate matrix in order to attain machine precision in the electronic energies. Final QFFs for N2H+ and NO2+ were computed, including basis set extrapolation, core-correlation, scalar relativity, and higher
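
    For reference, a commonly used two-point form of one-particle basis set extrapolation for correlation energies (the X⁻³ formula of Helgaker and co-workers) is recalled below; the abstract does not state which extrapolation formula was employed, so this is only representative.

```latex
% Assuming E^{\mathrm{corr}}_X = E^{\mathrm{corr}}_{\mathrm{CBS}} + A\,X^{-3}
% for cardinal numbers X, two consecutive basis sets (X-1, X) give
%   E^{\mathrm{corr}}_{\mathrm{CBS}} \approx
%     \frac{X^{3} E^{\mathrm{corr}}_{X} - (X-1)^{3} E^{\mathrm{corr}}_{X-1}}
%          {X^{3} - (X-1)^{3}} .
```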

  15. Endangered species toxicity extrapolation using ICE models

    EPA Science Inventory

    The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...

  16. Toward a Quantitative Comparison of Magnetic Field Extrapolations and Observed Coronal Loops

    NASA Astrophysics Data System (ADS)

    Warren, Harry P.; Crump, Nicholas A.; Ugarte-Urra, Ignacio; Sun, Xudong; Aschwanden, Markus J.; Wiegelmann, Thomas

    2018-06-01

    It is widely believed that loops observed in the solar atmosphere trace out magnetic field lines. However, the degree to which magnetic field extrapolations yield field lines that actually do follow loops has yet to be studied systematically. In this paper, we apply three different extrapolation techniques—a simple potential model, a nonlinear force-free (NLFF) model based on photospheric vector data, and an NLFF model based on forward fitting magnetic sources with vertical currents—to 15 active regions that span a wide range of magnetic conditions. We use a distance metric to assess how well each of these models is able to match field lines to the 12202 loops traced in coronal images. These distances are typically 1″–2″. We also compute the misalignment angle between each traced loop and the local magnetic field vector, and find values of 5°–12°. We find that the NLFF models generally outperform the potential extrapolation on these metrics, although the differences between the different extrapolations are relatively small. The methodology that we employ for this study suggests a number of ways that both the extrapolations and loop identification can be improved.
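
    The misalignment-angle metric quoted in the abstract reduces to the angle between the local loop tangent and the extrapolated field vector; a minimal sketch with made-up vectors is given below.

```python
import numpy as np

def misalignment_angle_deg(tangent, b_field):
    """Angle between a loop tangent vector and the local magnetic field,
    folded into [0, 90] degrees since the field polarity is irrelevant."""
    t = tangent / np.linalg.norm(tangent)
    b = b_field / np.linalg.norm(b_field)
    return np.degrees(np.arccos(np.clip(abs(np.dot(t, b)), -1.0, 1.0)))

# Hypothetical example vectors.
print(misalignment_angle_deg(np.array([1.0, 0.2, 0.0]),
                             np.array([1.0, 0.0, 0.1])))
```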

  17. Can Tauc plot extrapolation be used for direct-band-gap semiconductor nanocrystals?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Y., E-mail: yu.feng@unsw.edu.au; Lin, S.; Huang, S.

    Although Tauc plot extrapolation has been widely adopted for extracting bandgap energies of semiconductors, there is a lack of theoretical support for applying it to nanocrystals. In this paper, direct-allowed optical transitions in semiconductor nanocrystals are formulated based on a purely theoretical approach. The result reveals a size-dependent transition of the power factor used in the Tauc plot, increasing from one half in the 3D bulk case to one in the 0D case. This size-dependent intermediate value of the power factor allows a better extrapolation of measured absorption data. As a material characterization technique, the generalized Tauc extrapolation gives a more reasonable and accurate determination of the intrinsic bandgap, while using the extrapolation to extract any bandgap elevated by quantum confinement is shown to be unjustified.
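
    For context, the generic Tauc relation that the extrapolation relies on is recalled below; the value n = 1/2 is the textbook exponent for direct-allowed transitions in the 3D bulk limit, and the abstract's result is that n rises toward 1 in the 0D limit.

```latex
% Tauc relation for the absorption coefficient near the band edge:
%   \alpha\, h\nu \;\propto\; (h\nu - E_g)^{\,n},
% so one plots (\alpha h\nu)^{1/n} against h\nu and extrapolates the linear
% region to (\alpha h\nu)^{1/n} = 0 to read off E_g.
% n = 1/2 for direct-allowed transitions in 3D bulk; the paper finds a
% size-dependent n increasing toward 1 for 0D nanocrystals.
```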

  18. Chiral extrapolation of nucleon axial charge gA in effective field theory

    NASA Astrophysics Data System (ADS)

    Li, Hong-na; Wang, P.

    2016-12-01

    The extrapolation of nucleon axial charge gA is investigated within the framework of heavy baryon chiral effective field theory. The intermediate octet and decuplet baryons are included in the one loop calculation. Finite range regularization is applied to improve the convergence in the quark-mass expansion. The lattice data from three different groups are used for the extrapolation. At physical pion mass, the extrapolated gA are all smaller than the experimental value. Supported by National Natural Science Foundation of China (11475186) and Sino-German CRC 110 (NSFC 11621131001)

  19. An Improved Extrapolation Scheme for Truncated CT Data Using 2D Fourier-Based Helgason-Ludwig Consistency Conditions.

    PubMed

    Xia, Yan; Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre; Maier, Andreas

    2017-01-01

    We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method.

  20. An Improved Extrapolation Scheme for Truncated CT Data Using 2D Fourier-Based Helgason-Ludwig Consistency Conditions

    PubMed Central

    Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre

    2017-01-01

    We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method. PMID:28808441

  1. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.

  2. Resolution enhancement in digital holography by self-extrapolation of holograms.

    PubMed

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2013-03-25

    It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
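
    The self-extrapolation idea is a Gerchberg-Papoulis-type alternating projection. The 1D band-limited toy example below (my own construction, using a plain FFT band-limit constraint in place of the actual wavefront propagation and object-plane constraints) shows the mechanics of padding followed by iterative reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, band = 256, 12

def bandlimit(x, band):
    """Project a real signal onto signals whose rFFT vanishes above `band`
    (the stand-in here for the hologram's physical constraint)."""
    S = np.fft.rfft(x)
    S[band:] = 0.0
    return np.fft.irfft(S, len(x))

# Band-limited "ground truth"; only a central window of it is recorded.
truth = bandlimit(rng.normal(size=n), band)
known = np.zeros(n, bool)
known[96:160] = True

# Pad the unknown surroundings with zeros, then iterate.
estimate = np.where(known, truth, 0.0)
for _ in range(2000):
    estimate = bandlimit(estimate, band)   # constraint in the transform domain
    estimate[known] = truth[known]         # re-impose the measured samples

err = np.max(np.abs(estimate - truth)[~known])
print(f"max extrapolation error outside the recorded window: {err:.2e}")
```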

  3. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE PAGES

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; ...

    2016-02-18

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  4. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  5. TLD extrapolation for skin dose determination in vivo.

    PubMed

    Kron, T; Butson, M; Hunt, F; Denham, J

    1996-11-01

    Prediction of skin reactions requires knowledge of the dose at various depths in the human skin. Using thermoluminescence dosimeters of three different thicknesses, the dose can be extrapolated to the surface and interpolated between the different depths. A TLD holder was designed for these TLD extrapolation measurements on patients during treatment, which allowed measurements of entrance and exit skin dose with a day-to-day variability of ±7% (S.D. of mean reading). In a pilot study on 18 patients undergoing breast irradiation, it was found that the angle of incidence of the radiation beam is the most significant factor influencing skin entrance dose. In most of these measurements the beam exit dose contributed 50% more to the surface dose than the entrance dose.
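
    The extrapolation step itself is just a fit of dose versus dosimeter thickness evaluated at zero thickness; a minimal sketch with hypothetical readings is shown below.

```python
import numpy as np

# Hypothetical TLD readings (relative dose) for three chip thicknesses
# (g/cm^2); the real study also interpolates between depths.
thickness = np.array([0.230, 0.099, 0.038])
dose = np.array([0.52, 0.38, 0.30])

# Linear fit evaluated at zero thickness gives the surface-dose estimate.
slope, intercept = np.polyfit(thickness, dose, 1)
print(f"extrapolated surface dose: {intercept:.3f} (relative units)")
```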

  6. On the existence of the optimal order for wavefunction extrapolation in Born-Oppenheimer molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn; CAEP Software Center for High Performance Numerical Simulation, Beijing

    2016-06-28

    Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) that is based on the Kohn–Sham density functional theory. Going against the intuition that a higher order of extrapolation possesses better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and an optimal extrapolation order in terms of minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or more strict SCF convergence criteria. Using example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choices on the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, thus the implementation of either of them does not lead to essential difference in the extrapolation accuracy.
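
    For concreteness, the polynomial extrapolation of the previously converged SCF solutions that this analysis concerns can be written as below for equally spaced MD steps; whether the paper counts the order in exactly this way is an assumption of this note.

```latex
% Polynomial (Lagrange/Newton) extrapolation of the converged wavefunction
% (or density matrix) from the k+1 previous MD steps, for equal time steps:
%   \Psi^{\mathrm{guess}}_{n+1}
%     = \sum_{m=1}^{k+1} (-1)^{m+1} \binom{k+1}{m}\, \Psi_{n+1-m},
% e.g. k = 1:  \Psi^{\mathrm{guess}}_{n+1} = 2\Psi_{n} - \Psi_{n-1},
%      k = 2:  \Psi^{\mathrm{guess}}_{n+1} = 3\Psi_{n} - 3\Psi_{n-1} + \Psi_{n-2}.
```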

  7. Why do people appear not to extrapolate trajectories during multiple object tracking? A computational investigation

    PubMed Central

    Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I

    2014-01-01

    Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
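
    A toy version of the comparison the authors describe, weighting an extrapolated prediction against a noisy position observation versus relying on the observation alone, might look like the following; the noise levels, the fixed weights, and the single-target setting are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps, obs_noise, vel_noise = 1.0, 200, 0.5, 0.3

true_pos, true_vel = 0.0, 1.0
errors = {0.0: [], 0.5: []}                   # weight given to the extrapolation
estimates = {w: (0.0, 1.0) for w in errors}   # (position, velocity) per model

for _ in range(steps):
    true_vel += rng.normal(0.0, vel_noise)    # slowly wandering velocity
    true_pos += true_vel * dt
    obs = true_pos + rng.normal(0.0, obs_noise)
    for w, (p, v) in estimates.items():
        pred = p + v * dt                     # extrapolated position
        new_p = w * pred + (1.0 - w) * obs    # weighted combination
        estimates[w] = (new_p, (new_p - p) / dt)
        errors[w].append(abs(new_p - true_pos))

for w, e in errors.items():
    print(f"weight on extrapolation {w:.1f}: mean error {np.mean(e):.3f}")
```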

  8. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  9. Combining extrapolation with ghost interaction correction in range-separated ensemble density functional theory for excited states

    NASA Astrophysics Data System (ADS)

    Alam, Md. Mehboob; Deur, Killian; Knecht, Stefan; Fromager, Emmanuel

    2017-11-01

    The extrapolation technique of Savin [J. Chem. Phys. 140, 18A509 (2014)], which was initially applied to range-separated ground-state-density-functional Hamiltonians, is adapted in this work to ghost-interaction-corrected (GIC) range-separated ensemble density-functional theory (eDFT) for excited states. While standard extrapolations rely on energies that decay as μ-2 in the large range-separation-parameter μ limit, we show analytically that (approximate) range-separated GIC ensemble energies converge more rapidly (as μ-3) towards their pure wavefunction theory values (μ → +∞ limit), thus requiring a different extrapolation correction. The purpose of such a correction is to further improve on the convergence and, consequently, to obtain more accurate excitation energies for a finite (and, in practice, relatively small) μ value. As a proof of concept, we apply the extrapolation method to He and small molecular systems (viz., H2, HeH+, and LiH), thus considering different types of excitations such as Rydberg, charge transfer, and double excitations. Potential energy profiles of the first three and four singlet Σ+ excitation energies in HeH+ and H2, respectively, are studied with a particular focus on avoided crossings for the latter. Finally, the extraction of individual state energies from the ensemble energy is discussed in the context of range-separated eDFT, as a perspective.
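
    The extrapolation correction at issue follows from the assumed large-μ decay of the energy; a generic hedged form is recalled below.

```latex
% If the (ghost-interaction-corrected) range-separated energy approaches its
% \mu \to \infty limit as  E(\mu) \simeq E_\infty + C\,\mu^{-n},
% then differentiating and eliminating C gives the extrapolation estimate
%   E_\infty \;\approx\; E(\mu) + \frac{\mu}{n}\,\frac{dE(\mu)}{d\mu},
% with n = 2 in the ground-state scheme of Savin and n = 3 for the
% GIC ensemble energies discussed in the abstract.
```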

  10. Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)

    1998-01-01

    The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bockris, J.O.; Devanathan, M.A.V.

    The galvanostatic double charging method was applied to determine the coverage of Ni cathodes with adsorbed atomic H in 2 N NaOH solutions. Anodic current densities were varied from 0.05 to 1.8 amp/sq cm. The plateau indicating absence of readsorption was between 0.6 and 1.8 amp/sq cm, for a constant cathodic c.d. of 1/10,000 amp/sq cm. The variation of the adsorbed H over cathodic c.d.'s ranging from 10⁻⁶ to 10⁻¹ amp/sq cm at a constant anodic c.d. of 1 amp/sq cm was calculated, and the coverage determined. The mechanism of the H evolution reaction was elucidated. The rate-determining step is discharge from a water molecule, followed by rapid Tafel recombination. The rate constants for these processes, and the rate constant for ionisation calculated with the extrapolated value of coverage for the reversible H electrode, were determined. A modification of the Tafel equation which takes into account both coverage and ionisation is in harmony with the results. A new method for the determination of coverage, suitable for corrodible metals, is described; it involves measuring the rate of permeation of H by electrochemical techniques, which enhances the sensitivity of the method. (Author)
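
    The mechanism summarized here (rate-determining discharge from water followed by fast recombination) corresponds to the standard Volmer-Tafel sequence in alkaline solution, recalled below together with the classical Tafel relation; the 120 mV/decade slope is the textbook value for this limiting case, not a number taken from the record.

```latex
% Volmer (discharge, rate-determining) and Tafel (recombination) steps in alkaline solution:
%   H_2O + M + e^- \;\rightarrow\; M\!-\!H_{ads} + OH^-     (Volmer)
%   2\, M\!-\!H_{ads} \;\rightarrow\; H_2 + 2\,M            (Tafel)
% Classical Tafel relation between overpotential and current density:
%   \eta = a + b \log_{10} |i|, \qquad
%   b = \frac{2.303\,RT}{\beta F} \approx 120~\mathrm{mV/decade}
% at 25\,^{\circ}\mathrm{C} for \beta \approx 0.5 when the Volmer step is rate-determining.
```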

  12. X-ray surface dose measurements using TLD extrapolation.

    PubMed

    Kron, T; Elliot, A; Wong, T; Showell, G; Clubb, B; Metcalfe, P

    1993-01-01

    Surface dose measurements in therapeutic x-ray beams are of importance in determining the dose to the skin of patients undergoing radiotherapy. Measurements were performed in the 6-MV beam of a medical linear accelerator with LiF thermoluminescence dosimeters (TLD) using a solid water phantom. TLD chips (surface area 3.17 x 3.17 cm2) of three different thicknesses (0.230, 0.099, and 0.038 g/cm2) were used to extrapolate dose readings to an infinitesimally thin layer of LiF. This surface dose was measured for field sizes ranging from 1 x 1 cm2 to 40 x 40 cm2. The surface dose relative to maximum dose was found to be 10.0% for a field size of 5 x 5 cm2, 16.3% for 10 x 10 cm2, and 26.9% for 20 x 20 cm2. Using a 6-mm Perspex block tray in the beam increased the surface dose in these fields to 10.7%, 17.7%, and 34.2% respectively. Due to the small size of the TLD chips, TLD extrapolation is applicable also for intracavity and exit dose determinations. The technique used for in vivo dosimetry could provide clinicians information about the build up of dose up to 1-mm depth in addition to an extrapolated surface dose measurement.

  13. Statistical modeling for Bayesian extrapolation of adult clinical trial information in pediatric drug evaluation.

    PubMed

    Gamalo-Siebers, Margaret; Savic, Jasmina; Basu, Cynthia; Zhao, Xin; Gopalakrishnan, Mathangi; Gao, Aijun; Song, Guochen; Baygani, Simin; Thompson, Laura; Xia, H Amy; Price, Karen; Tiwari, Ram; Carlin, Bradley P

    2017-07-01

    Children represent a large underserved population of "therapeutic orphans," as an estimated 80% of children are treated off-label. However, pediatric drug development often faces substantial challenges, including economic, logistical, technical, and ethical barriers, among others. Among many efforts trying to remove these barriers, increased recent attention has been paid to extrapolation; that is, the leveraging of available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or "borrowing") of information across disparate sources, such as the adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through 2 case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis and extrapolating adult exposure-response information for antiepileptic drugs to pediatrics. Copyright © 2017 John Wiley & Sons, Ltd.
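
    One of the simplest borrowing devices in this family is the power prior, recalled below in generic form; the models used in the two case studies are richer (hierarchical and exposure-response models), so this is only an orienting example.

```latex
% Power prior: discount the adult-trial likelihood by a_0 \in [0, 1] before
% combining it with the pediatric data D:
%   \pi(\theta \mid D_{\mathrm{adult}}, a_0) \;\propto\;
%       L(\theta \mid D_{\mathrm{adult}})^{a_0}\, \pi_0(\theta),
%   p(\theta \mid D, D_{\mathrm{adult}}, a_0) \;\propto\;
%       L(\theta \mid D)\, L(\theta \mid D_{\mathrm{adult}})^{a_0}\, \pi_0(\theta).
% a_0 = 0 ignores the adult data entirely; a_0 = 1 pools it fully.
```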

  14. Molecular Target Homology as a Basis for Species Extrapolation to Assess the Ecological Risk of Veterinary Drugs

    EPA Science Inventory

    Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...

  15. Nowcasting of deep convective clouds and heavy precipitation: Comparison study between NWP model simulation and extrapolation

    NASA Astrophysics Data System (ADS)

    Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr

    2017-02-01

    An evaluation of convective cloud forecasts performed with the numerical weather prediction (NWP) model COSMO and extrapolation of cloud fields is presented using observed data derived from the geostationary satellite Meteosat Second Generation (MSG). The present study focuses on the nowcasting range (1-5 h) for five severe convective storms in their developing stage that occurred during the warm season in the years 2012-2013. Radar reflectivity and extrapolated radar reflectivity data were assimilated for at least 6 h depending on the time of occurrence of convection. Synthetic satellite imageries were calculated using radiative transfer model RTTOV v10.2, which was implemented into the COSMO model. NWP model simulations of IR10.8 μm and WV06.2 μm brightness temperatures (BTs) with a horizontal resolution of 2.8 km were interpolated into the satellite projection and objectively verified against observations using Root Mean Square Error (RMSE), correlation coefficient (CORR) and Fractions Skill Score (FSS) values. Naturally, the extrapolation of cloud fields yielded an approximately 25% lower RMSE, 20% higher CORR and 15% higher FSS at the beginning of the second forecasted hour compared to the NWP model forecasts. On the other hand, comparable scores were observed for the third hour, whereas the NWP forecasts outperformed the extrapolation by 10% for RMSE, 15% for CORR and up to 15% for FSS during the fourth forecasted hour and 15% for RMSE, 27% for CORR and up to 15% for FSS during the fifth forecasted hour. The analysis was completed by a verification of the precipitation forecasts yielding approximately 8% higher RMSE, 15% higher CORR and up to 45% higher FSS when the NWP model simulation is used compared to the extrapolation for the first hour. Both the methods yielded unsatisfactory level of precipitation forecast accuracy from the fourth forecasted hour onward.
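
    The three verification scores can be computed directly from forecast and observed fields; the sketch below uses a hypothetical pair of precipitation fields and a simple neighborhood fraction for FSS, and is not tied to the COSMO/MSG setup.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rmse(f, o):
    return float(np.sqrt(np.mean((f - o) ** 2)))

def corr(f, o):
    return float(np.corrcoef(f.ravel(), o.ravel())[0, 1])

def fss(f, o, threshold, window):
    """Fractions Skill Score: compare neighborhood exceedance fractions."""
    pf = uniform_filter((f > threshold).astype(float), size=window)
    po = uniform_filter((o > threshold).astype(float), size=window)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return float(1.0 - mse / mse_ref) if mse_ref > 0 else np.nan

# Hypothetical precipitation fields (mm/h) on a small grid.
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 1.0, size=(64, 64))
fcst = obs + rng.normal(0.0, 0.8, size=obs.shape)

print(rmse(fcst, obs), corr(fcst, obs), fss(fcst, obs, threshold=2.0, window=9))
```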

  16. SNSEDextend: SuperNova Spectral Energy Distributions extrapolation toolkit

    NASA Astrophysics Data System (ADS)

    Pierel, Justin D. R.; Rodney, Steven A.; Avelino, Arturo; Bianco, Federica; Foley, Ryan J.; Friedman, Andrew; Hicken, Malcolm; Hounsell, Rebekah; Jha, Saurabh W.; Kessler, Richard; Kirshner, Robert; Mandel, Kaisey; Narayan, Gautham; Filippenko, Alexei V.; Scolnic, Daniel; Strolger, Louis-Gregory

    2018-05-01

    SNSEDextend extrapolates core-collapse and Type Ia Spectral Energy Distributions (SEDs) into the UV and IR for use in simulations and photometric classifications. The user provides a library of existing SED templates (such as those in the authors' SN SED Repository) along with new photometric constraints in the UV and/or NIR wavelength ranges. The software then extends the existing template SEDs so their colors match the input data at all phases. SNSEDextend can also extend the SALT2 spectral time-series model for Type Ia SN for a "first-order" extrapolation of the SALT2 model components, suitable for use in survey simulations and photometric classification tools; as the code does not do a rigorous re-training of the SALT2 model, the results should not be relied on for precision applications such as light curve fitting for cosmology.

  17. Efficient numerical methods for the random-field Ising model: Finite-size scaling, reweighting extrapolation, and computation of response functions.

    PubMed

    Fytas, Nikolaos G; Martín-Mayor, Víctor

    2016-06-01

    It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques, coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size limit extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.

  18. Monte Carlo based approach to the LS-NaI 4πβ-γ anticoincidence extrapolation and uncertainty

    PubMed Central

    Fitzgerald, R.

    2016-01-01

    The 4πβ-γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone. PMID:27358944
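
    The efficiency extrapolation that the Monte Carlo analysis refines is, in its simplest form, a low-order polynomial fit of the observed rate against an inefficiency parameter, evaluated at zero inefficiency; the sketch below uses invented data points purely to show that step and assumes the conventional (1 − ε)/ε parameterization.

```python
import numpy as np

# Hypothetical anticoincidence data: gamma-gated inefficiency parameter
# x = (1 - eff) / eff and the corresponding beta-channel rate (1/s).
x = np.array([0.05, 0.10, 0.18, 0.27, 0.35])
rate = np.array([1012.0, 1031.0, 1060.0, 1093.0, 1121.0])

# Low-order polynomial fit; the intercept at x = 0 estimates the rate
# at 100 % beta efficiency, which is proportional to the activity.
coeffs = np.polyfit(x, rate, 1)
print(f"extrapolated rate at zero inefficiency: {np.polyval(coeffs, 0.0):.1f} 1/s")
```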

  19. The value of remote sensing techniques in supporting effective extrapolation across multiple marine spatial scales.

    PubMed

    Strong, James Asa; Elliott, Michael

    2017-03-15

    The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly entrain significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. COMPARISON OF CORONAL EXTRAPOLATION METHODS FOR CYCLE 24 USING HMI DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arden, William M.; Norton, Aimee A.; Sun, Xudong

    2016-05-20

    Two extrapolation models of the solar coronal magnetic field are compared using magnetogram data from the Solar Dynamics Observatory/Helioseismic and Magnetic Imager instrument. The two models, a horizontal current–current sheet–source surface (HCCSSS) model and a potential field–source surface (PFSS) model, differ in their treatment of coronal currents. Each model has its own critical variable, respectively, the radius of a cusp surface and a source surface, and it is found that adjusting these heights over the period studied allows for a better fit between the models and the solar open flux at 1 au as calculated from the Interplanetary Magnetic Field (IMF). The HCCSSS model provides the better fit for the overall period from 2010 November to 2015 May as well as for two subsets of the period: the minimum/rising part of the solar cycle and the recently identified peak in the IMF from mid-2014 to mid-2015 just after solar maximum. It is found that an HCCSSS cusp surface height of 1.7 R⊙ provides the best fit to the IMF for the overall period, while 1.7 and 1.9 R⊙ give the best fits for the two subsets. The corresponding values for the PFSS source surface height are 2.1, 2.2, and 2.0 R⊙, respectively. This means that the HCCSSS cusp surface rises as the solar cycle progresses while the PFSS source surface falls.

  1. MULTIPLE SOLVENT EXPOSURE IN HUMANS: CROSS-SPECIES EXTRAPOLATIONS

    EPA Science Inventory

    Multiple Solvent Exposures in Humans: Cross-Species Extrapolations (Future Research Plan)

    Vernon A. Benignus, Philip J. Bushnell, and William K. Boyes

    A few solvents can be safely studied in acute experiments in human subjects. Data exist in rats f...

  2. Approximations to complete basis set-extrapolated, highly correlated non-covalent interaction energies.

    PubMed

    Mackie, Iain D; DiLabio, Gino A

    2011-10-07

    The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise and non-counterpoise-corrected values than by using either counterpoise or non-counterpoise methods alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)/aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)/aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies. With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent

  3. Extrapolating MP2 and CCSD explicitly correlated correlation energies to the complete basis set limit with first and second row correlation consistent basis sets

    NASA Astrophysics Data System (ADS)

    Hill, J. Grant; Peterson, Kirk A.; Knizia, Gerald; Werner, Hans-Joachim

    2009-11-01

    Accurate extrapolation to the complete basis set (CBS) limit of valence correlation energies calculated with explicitly correlated MP2-F12 and CCSD(T)-F12b methods has been investigated using a Schwenke-style approach for molecules containing both first and second row atoms. Extrapolation coefficients that are optimal for molecular systems containing first row elements differ from those optimized for second row analogs, hence values optimized for a combined set of first and second row systems are also presented. The new coefficients are shown to produce excellent results in both Schwenke-style and equivalent power-law-based two-point CBS extrapolations, with the MP2-F12/cc-pV(D,T)Z-F12 extrapolations producing an average error of just 0.17 mEh with a maximum error of 0.49 mEh for a collection of 23 small molecules. The use of larger basis sets, i.e., cc-pV(T,Q)Z-F12 and aug-cc-pV(Q,5)Z, in extrapolations of the MP2-F12 correlation energy leads to average errors that are smaller than the degree of confidence in the reference data (~0.1 mEh). The latter were obtained through use of very large basis sets in MP2-F12 calculations on small molecules containing both first and second row elements. CBS limits obtained from optimized coefficients for conventional MP2 are only comparable to the accuracy of the MP2-F12/cc-pV(D,T)Z-F12 extrapolation when the aug-cc-pV(5+d)Z and aug-cc-pV(6+d)Z basis sets are used. The CCSD(T)-F12b correlation energy is extrapolated as two distinct parts: CCSD-F12b and (T). While the CCSD-F12b extrapolations with smaller basis sets are statistically less accurate than those of the MP2-F12 correlation energies, this is presumably due to the slower basis set convergence of the CCSD-F12b method compared to MP2-F12. The use of larger basis sets in the CCSD-F12b extrapolations produces correlation energies with accuracies exceeding the confidence in the reference data (also obtained in large basis set F12 calculations). It is demonstrated that the use

  4. Flash-lag effect: complicating motion extrapolation of the moving reference-stimulus paradoxically augments the effect.

    PubMed

    Bachmann, Talis; Murd, Carolina; Põder, Endel

    2012-09-01

    One fundamental property of the perceptual and cognitive systems is their capacity for prediction in the dynamic environment; the flash-lag effect has been considered as a particularly suggestive example of this capacity (Nijhawan, Nature 370:256-257, 1994; Behav Brain Sci 31:179-239, 2008). Thus, because of the involvement of the mechanisms of extrapolation and visual prediction, the moving object is perceived ahead of the simultaneously flashed static object objectively aligned with the moving one. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus by approaching it before the flash does not diminish the flash-lag effect, but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.

  5. Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?

    NASA Astrophysics Data System (ADS)

    Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.

    2016-02-01

    It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C₀ lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren Extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren Extrapolation, even if the average austenite composition is outside.

  6. The use of extrapolation concepts to augment the Frequency Separation Technique

    NASA Astrophysics Data System (ADS)

    Alexiou, Spiros

    2015-03-01

    The Frequency Separation Technique (FST) is a general method formulated to improve the speed and/or accuracy of lineshape calculations, including strong overlapping collisions, as is the case for ion dynamics. It should be most useful when combined with ultrafast methods, which, however, have significant difficulties when the impact regime is approached. These difficulties are addressed by the Frequency Separation Technique, in which the impact limit is correctly recovered. The present work examines the possibility of combining the Frequency Separation Technique with extrapolation to improve results and minimize errors resulting from the neglect of fast-slow coupling, and thus obtain the exact result with a minimum of extra effort. To this end, the adequacy of one such ultrafast method, the Frequency Fluctuation Method (FFM), for treating the nonimpact part is examined. It is found that although the FFM is unable to reproduce the nonimpact profile correctly, its coupling with the FST correctly reproduces the total profile.

  7. Biosimilars in Inflammatory Bowel Disease: Facts and Fears of Extrapolation.

    PubMed

    Ben-Horin, Shomron; Vande Casteele, Niels; Schreiber, Stefan; Lakatos, Peter Laszlo

    2016-12-01

    Biologic drugs such as infliximab and other anti-tumor necrosis factor monoclonal antibodies have transformed the treatment of immune-mediated inflammatory conditions such as Crohn's disease and ulcerative colitis (collectively known as inflammatory bowel disease [IBD]). However, the complex manufacturing processes involved in producing these drugs mean their use in clinical practice is expensive. Recent or impending expiration of patents for several biologics has led to development of biosimilar versions of these drugs, with the aim of providing substantial cost savings and increased accessibility to treatment. Biosimilars undergo an expedited regulatory process. This involves proving structural, functional, and biological biosimilarity to the reference product (RP). It is also expected that clinical equivalency/comparability will be demonstrated in a clinical trial in one (or more) sensitive population. Once these requirements are fulfilled, extrapolation of biosimilar approval to other indications for which the RP is approved is permitted without the need for further clinical trials, as long as this is scientifically justifiable. However, such justification requires that the mechanism(s) of action of the RP in question should be similar across indications and also comparable between the RP and the biosimilar in the clinically tested population(s). Likewise, the pharmacokinetics, immunogenicity, and safety of the RP should be similar across indications and comparable between the RP and biosimilar in the clinically tested population(s). To date, most anti-tumor necrosis factor biosimilars have been tested in trials recruiting patients with rheumatoid arthritis. Concerns have been raised regarding extrapolation of clinical data obtained in rheumatologic populations to IBD indications. In this review, we discuss the issues surrounding indication extrapolation, with a focus on extrapolation to IBD. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All

  8. Møller-Plesset perturbation energies and distances for HeC(20) extrapolated to the complete basis set limit.

    PubMed

    Varandas, A J C

    2009-02-01

    The potential energy surface for the C(20)-He interaction is extrapolated for three representative cuts to the complete basis set limit using second-order Møller-Plesset perturbation calculations with correlation consistent basis sets up to the doubly augmented variety. The results both with and without counterpoise correction show consistency with each other, supporting that extrapolation without such a correction provides a reliable scheme to elude the basis-set-superposition error. Converged attributes are obtained for the C(20)-He interaction, which are used to predict the fullerene dimer ones. Time requirements show that the method can be drastically more economical than the counterpoise procedure and even competitive with Kohn-Sham density functional theory for the title system.

  9. The chemistry side of AOP: implications for toxicity extrapolation

    EPA Science Inventory

    An adverse outcome pathway (AOP) is a structured representation of the biological events that lead to adverse impacts following a molecular initiating event caused by chemical interaction with a macromolecule. AOPs have been proposed to facilitate toxicity extrapolation across s...

  10. Height extrapolation of wind data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikhail, A.S.

    1982-11-01

    Hourly average data for a period of 1 year from three tall meteorological towers - the Erie tower in Colorado, the Goodnoe Hills tower in Washington and the WKY-TV tower in Oklahoma - were used to analyze the wind shear exponent variability with various parameters such as thermal stability, anemometer level wind speed, projection height and surface roughness. Different proposed models for prediction of height variability of short-term average wind speeds were discussed. Other models that predict the height dependence of Weibull distribution parameters were tested. The observed power law exponent for all three towers showed strong dependence on the anemometer level wind speed and stability (nighttime and daytime). It also exhibited a high degree of dependence on extrapolation height with respect to anemometer height. These dependences became less severe as the anemometer level wind speeds were increased due to the turbulent mixing of the atmospheric boundary layer. The three models used for Weibull distribution parameter extrapolation were the velocity-dependent power law model (Justus), the velocity, surface roughness, and height-dependent model (Mikhail) and the velocity and surface roughness-dependent model (NASA). The models projected the scale parameter C fairly accurately for the Goodnoe Hills and WKY-TV towers and were less accurate for the Erie tower. However, all models overestimated the C value. The maximum error for the Mikhail model was less than 2% for Goodnoe Hills, 6% for WKY-TV and 28% for Erie. The error associated with the prediction of the shape factor (K) was similar for the NASA, Mikhail and Justus models. It ranged from 20 to 25%. The effect of the misestimation of hub-height distribution parameters (C and K) on average power output is briefly discussed.
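
    As a minimal illustration of the power-law height extrapolation underlying the models compared above, the Python sketch below extrapolates a wind speed with V(z) = V(za)·(z/za)^alpha and computes the mean speed implied by Weibull parameters C and k. All numbers (heights, speeds, exponent, C, k) are illustrative placeholders, not values from the tower data.

      import math

      def power_law_speed(v_anemometer, z_anemometer, z_target, alpha):
          """Extrapolate a short-term average wind speed from anemometer height."""
          return v_anemometer * (z_target / z_anemometer) ** alpha

      def weibull_mean_speed(c_scale, k_shape):
          """Mean of a Weibull(C, k) wind-speed distribution."""
          return c_scale * math.gamma(1.0 + 1.0 / k_shape)

      # Illustrative numbers only: 10 m anemometer, 60 m hub, alpha = 1/7.
      v_hub = power_law_speed(6.0, 10.0, 60.0, alpha=1.0 / 7.0)
      print(round(v_hub, 2), "m/s at hub height")
      print(round(weibull_mean_speed(c_scale=7.5, k_shape=2.0), 2), "m/s Weibull mean")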

  11. Simulation-extrapolation method to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates, 1950-2003.

    PubMed

    Allodji, Rodrigue S; Schwartz, Boris; Diallo, Ibrahima; Agbovon, Césaire; Laurier, Dominique; de Vathaire, Florent

    2015-08-01

    Analyses of the Life Span Study (LSS) of Japanese atomic bombing survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates. For instance, it is shown that using the SIMEX method, the ERR/Gy is increased by about 29% for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR (10^-4 person-years at 1 Gy; the linear term) is decreased by about 8%, while the corrected quadratic term (EAR 10^-4 person-years/Gy^2) is increased by about 65% for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences were probably due to the fact that with the RCAL method the dosimetric data were partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to use several correction techniques in order to obtain a range of corrected estimates rather than to rely on a single technique. This work will help improve the risk estimates derived from LSS data and support the development of more reliable radiation protection standards.
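
    A minimal sketch of the SIMEX idea referenced above is given below, using an ordinary linear-regression slope as a stand-in naive estimator (the actual LSS analyses use excess relative/absolute risk models); the dose data, outcome data, and error standard deviation are all invented.

      import numpy as np

      rng = np.random.default_rng(0)

      def naive_slope(dose, outcome):
          # Naive estimator: slope of a straight-line fit, ignoring dose error.
          return np.polyfit(dose, outcome, 1)[0]

      def simex_slope(dose_obs, outcome, sigma_u,
                      lambdas=(0.0, 0.5, 1.0, 1.5, 2.0), n_sim=200):
          # Inflate the measurement error by factors lambda, average the naive
          # estimates, then extrapolate the trend back to lambda = -1 (no error).
          means = []
          for lam in lambdas:
              est = [naive_slope(dose_obs
                                 + np.sqrt(lam) * sigma_u
                                 * rng.standard_normal(dose_obs.size),
                                 outcome)
                     for _ in range(n_sim)]
              means.append(np.mean(est))
          coeffs = np.polyfit(lambdas, means, 2)   # quadratic extrapolant
          return np.polyval(coeffs, -1.0)

      # Toy data: true slope 2, dose measured with additive noise of known sd 0.3.
      true_dose = rng.uniform(0.0, 4.0, 500)
      outcome = 2.0 * true_dose + rng.standard_normal(500)
      dose_obs = true_dose + 0.3 * rng.standard_normal(500)
      print(naive_slope(dose_obs, outcome), simex_slope(dose_obs, outcome, 0.3))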

  12. 40 CFR 86.435-78 - Extrapolated emission values.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) accumulate distance to the useful life. [42 FR 1126, Jan. 5, 1977, as amended at 49 FR 48139, Dec. 10, 1984] ... Regulations for 1978 and Later New Motorcycles, General Provisions § 86.435-78 Extrapolated emission values... the useful life, or if all points used to generate the lines are below the standards, predicted useful...

  13. 40 CFR 86.435-78 - Extrapolated emission values.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) accumulate distance to the useful life. [42 FR 1126, Jan. 5, 1977, as amended at 49 FR 48139, Dec. 10, 1984] ... Regulations for 1978 and Later New Motorcycles, General Provisions § 86.435-78 Extrapolated emission values... the useful life, or if all points used to generate the lines are below the standards, predicted useful...

  14. Atomically resolved structural determination of graphene and its point defects via extrapolation assisted phase retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latychevskaia, Tatiana; Fink, Hans-Werner

    Previously reported crystalline structures obtained by an iterative phase retrieval reconstruction of their diffraction patterns seem to be free from displaying any irregularities or defects in the lattice, which appears to be unrealistic. We demonstrate here that the structure of a nanocrystal including its atomic defects can unambiguously be recovered from its diffraction pattern alone by applying a direct phase retrieval procedure not relying on prior information of the object shape. Individual point defects in the atomic lattice are clearly apparent. Conventional phase retrieval routines assume isotropic scattering. We show that when dealing with electrons, the quantitatively correct transmission function of the sample cannot be retrieved due to anisotropic, strong forward scattering specific to electrons. We summarize the conditions for this phase retrieval method and show that the diffraction pattern can be extrapolated beyond the original record to reveal formerly invisible Bragg peaks. Such an extrapolated wave field pattern leads to enhanced spatial resolution in the reconstruction.

  15. A visual basic program to generate sediment grain-size statistics and to extrapolate particle distributions

    USGS Publications Warehouse

    Poppe, L.J.; Eliason, A.H.; Hastings, M.E.

    2004-01-01

    Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next to last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.6–0.1 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and near shore marine sediments, samples from many deeper water marine environments (e.g. rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution. It is written in Microsoft
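
    For readers unfamiliar with the method of moments mentioned above, the short Python sketch below computes moment statistics from weight percentages per phi-size class. The class midpoints and weights are invented, and the snippet illustrates the general approach rather than the GSSTAT program itself.

      import numpy as np

      def moment_statistics(midpoints_phi, weight_percent):
          """Mean, sorting, skewness, and kurtosis by the method of moments."""
          m = np.asarray(midpoints_phi, dtype=float)
          w = np.asarray(weight_percent, dtype=float)
          w = w / w.sum()
          mean = np.sum(w * m)
          sigma = np.sqrt(np.sum(w * (m - mean) ** 2))
          skew = np.sum(w * (m - mean) ** 3) / sigma ** 3
          kurt = np.sum(w * (m - mean) ** 4) / sigma ** 4
          return mean, sigma, skew, kurt

      # Invented distribution truncated at 11 phi, as a Coulter-type analysis might be.
      midpoints = [4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 10.5]
      weights = [5.0, 10.0, 20.0, 25.0, 20.0, 12.0, 8.0]
      print(moment_statistics(midpoints, weights))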

  16. Motion-based prediction explains the role of tracking in motion extrapolation.

    PubMed

    Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U

    2013-11-01

    During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we have recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability fail to hold longer. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated

  17. MEGA16 - Computer program for analysis and extrapolation of stress-rupture data

    NASA Technical Reports Server (NTRS)

    Ensign, C. R.

    1981-01-01

    The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time to failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN IV using IBM plotting subroutines, and it runs on an IBM 370 time-sharing system.

  18. Chiral extrapolation of the leading hadronic contribution to the muon anomalous magnetic moment

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Maltman, Kim; Peris, Santiago

    2017-04-01

    A lattice computation of the leading-order hadronic contribution to the muon anomalous magnetic moment can potentially help reduce the error on the Standard Model prediction for this quantity, if sufficient control of all systematic errors affecting such a computation can be achieved. One of these systematic errors is that associated with the extrapolation to the physical pion mass from values on the lattice larger than the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 200 to 400 MeV with the help of two-loop chiral perturbation theory, and we find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various tricks to improve the reliability of the chiral extrapolation employed in the literature are taken into account. In addition, while chiral perturbation theory also predicts the dependence on the pion mass of the leading-order hadronic contribution to the muon anomalous magnetic moment as the chiral limit is approached, this prediction turns out to be of no practical use because the physical pion mass is larger than the muon mass that sets the scale for the onset of this behavior.

  19. Wildlife toxicity extrapolations: NOAEL versus LOAEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fairbrother, A.; Berg, M. van den

    1995-12-31

    Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. One is then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when trying to estimate hazards to wildlife in terrestrial systems, as there is an extreme paucity of data for most chemicals in all but a handful of species. This section continues the debate among six panelists over the "correct" approach for determining wildlife toxicity thresholds, focusing on which toxicity value should be used for setting threshold criteria. Should the lowest observed adverse effect level (LOAEL) be used, or is it more appropriate to use the no observed adverse effect level (NOAEL)? What are the shortcomings of using either of these point estimates? Should a "benchmark" approach, similar to that proposed for human health risk assessments, be used instead, where an EC5 or EC10 and associated confidence limits are determined and then divided by a safety factor? How should knowledge of the slope of the dose-response curve be incorporated into determination of toxicity threshold values?

  20. Dioxin equivalency: Challenge to dose extrapolation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.F. Jr.; Silkworth, J.B.

    1995-12-31

    Extensive research has shown that all biological effects of dioxin-like agents are mediated via a single biochemical target, the Ah receptor (AhR), and that the relative biologic potencies of such agents in any given system, coupled with their exposure levels, may be described in terms of toxic equivalents (TEQ). It has also shown that the TEQ sources include not only chlorinated species such as the dioxins (PCDDs), PCDFs, and coplanar PCBs, but also non-chlorinated substances such as the PAHs of wood smoke, the AhR agonists of cooked meat, and the indolocarbazole (ICZ) derived from cruciferous vegetables. Humans have probably had elevated exposures to these non-chlorinated TEQ sources ever since the discoveries of fire, cooking, and the culinary use of Brassica spp. Recent assays of CYP1A2 induction show that these "natural" or "traditional" AhR agonists are contributing 50-100 times as much to average human TEQ exposures as do the chlorinated xenobiotics. Currently, the safe doses of the xenobiotic TEQ sources are estimated from their NOAELs and large extrapolation factors, derived from arbitrary mathematical models, whereas the NOAELs themselves are regarded as the safe doses for the TEQs of traditional dietary components. Available scientific data can neither support nor refute either approach to assessing the health risk of an individual chemical substance. However, if two substances are toxicologically equivalent, then their TEQ-adjusted health risks must also be equivalent, and the same dose extrapolation procedure should be used for both.

  1. Application of the Weibull extrapolation to 137Cs geochronology in Tokyo Bay and Ise Bay, Japan.

    PubMed

    Lu, Xueqiang

    2004-01-01

    Considerable doubt surrounds the nature of the processes by which 137Cs is deposited in marine sediments, leading to a situation where 137Cs geochronology cannot always be applied suitably. Based on extrapolation with the Weibull distribution, the maximum 137Cs concentration derived from asymptotic values of the cumulative specific inventory was used to re-establish the 137Cs geochronology, instead of the original 137Cs profiles. Corresponding dating results for cores in Tokyo Bay and Ise Bay, Japan, obtained by means of this new method are in much closer agreement with those calculated from the 210Pb method than are the results of the previous method.
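
    A minimal sketch of the Weibull-extrapolation step described above: the cumulative specific 137Cs inventory is fit to a Weibull-type saturation curve whose asymptote is taken as the maximum inventory. The depths, inventories, and starting guesses are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_cumulative(depth, i_max, scale, shape):
          # Saturating Weibull-type curve; i_max is the asymptotic inventory.
          return i_max * (1.0 - np.exp(-(depth / scale) ** shape))

      depth_cm = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
      cum_inventory = np.array([0.8, 2.1, 3.6, 4.8, 5.6, 6.0, 6.2, 6.3])  # arbitrary units

      popt, _ = curve_fit(weibull_cumulative, depth_cm, cum_inventory, p0=[6.5, 6.0, 2.0])
      i_max, scale, shape = popt
      print("asymptotic (maximum) cumulative inventory:", round(i_max, 2))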

  2. WE-DE-201-05: Evaluation of a Windowless Extrapolation Chamber Design and Monte Carlo Based Corrections for the Calibration of Ophthalmic Applicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, J; Culberson, W; DeWerd, L

    Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a 90Sr/90Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both EGSnrc Monte Carlo user code and Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was −1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experiment results suggest that an entrance window is not needed in order for an
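
    As a hedged sketch of how an extrapolation-chamber current-versus-gap series is commonly reduced to a surface dose rate via a Bragg-Gray-type conversion, the Python snippet below fits current against air-gap width and converts the limiting slope to a dose rate. The electrode area, stopping-power ratio, current readings, and the omission of the various correction factors are all simplifying assumptions, not values from this record.

      import numpy as np

      W_OVER_E = 33.97       # J/C, mean energy per ion pair in dry air
      RHO_AIR = 1.197        # kg/m^3, air density at reference conditions (approx.)
      AREA = 7.9e-5          # m^2, collecting-electrode area (placeholder)
      S_WATER_AIR = 1.12     # water/air mass stopping-power ratio (placeholder)

      gap_m = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])        # air-gap widths
      current = np.array([2.1e-12, 4.0e-12, 5.8e-12, 7.5e-12])  # amperes (invented)

      slope, _ = np.polyfit(gap_m, current, 1)                  # dI/dd as the gap -> 0
      dose_rate = (W_OVER_E / (RHO_AIR * AREA)) * slope * S_WATER_AIR   # Gy/s
      print(round(dose_rate * 3600.0, 2), "Gy/h (illustrative only)")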

  3. Localized time-lapse elastic waveform inversion using wavefield injection and extrapolation: 2-D parametric studies

    NASA Astrophysics Data System (ADS)

    Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry

    2017-06-01

    We present a methodology to invert seismic data for a localized area by combining a source-side wavefield injection method and a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This can be much more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Besides, changes in the structure during time-lapse surveys are likely to occur in a small area rather than the whole region of a seismic experiment, such as an oil and gas reservoir or a CO2 injection well. We thus propose an approach that allows us to image effectively and quantitatively the localized structure changes located far from both source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we look for the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using a correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate errors in the obtained models, in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate even under the existence of errors in both initial models and observed data.

  4. Extrapolating Solar Dynamo Models Throughout the Heliosphere

    NASA Astrophysics Data System (ADS)

    Cox, B. T.; Miesch, M. S.; Augustson, K.; Featherstone, N. A.

    2014-12-01

    There are multiple theories that aim to explain the behavior of the solar dynamo, and their associated models have been fiercely contested. The two prevailing theories investigated in this project are the Convective Dynamo model, which arises from directly solving the magnetohydrodynamic equations, and the Babcock-Leighton model, which relies on sunspot dissipation and reconnection. Recently, the supercomputer simulations CASH and BASH have formed models of the behavior of the Convective and Babcock-Leighton models, respectively, in the convective zone of the sun. These simulations describe the behavior of the models within the sun, while much less is known about the effects these models may have further away from the solar surface. The goal of this work is to investigate any fundamental differences between the Convective and Babcock-Leighton models of the solar dynamo outside of the sun and extending into the solar system via the use of potential field source surface extrapolations implemented via Python code that operates on data from CASH and BASH. The use of real solar data to visualize supergranular flow data in the BASH model is also used to learn more about the behavior of the Babcock-Leighton dynamo. From the process of these extrapolations it has been determined that the Babcock-Leighton model, as represented by BASH, maintains complex magnetic fields much further into the heliosphere before reverting to a basic dipole field, providing 3D visualisations of the models far from the sun.

  5. Narrowing the error in electron correlation calculations by basis set re-hierarchization and use of the unified singlet and triplet electron-pair extrapolation scheme: Application to a test set of 106 systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.

    2014-12-14

    A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.

  6. Application of a framework for extrapolating chemical effects across species in pathways controlled by estrogen receptor-α

    EPA Science Inventory

    Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...

  7. Projecting species’ vulnerability to climate change: Which uncertainty sources matter most and extrapolate best?

    USGS Publications Warehouse

    Steen, Valerie; Sofaer, Helen R.; Skagen, Susan K.; Ray, Andrea J.; Noon, Barry R

    2017-01-01

    Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs to represent a unique combination of: future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was that of future climate change. Large uncertainty was also attributed to climate covariate class with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given uncertainty in projected vulnerability and resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water

  8. Projecting species' vulnerability to climate change: Which uncertainty sources matter most and extrapolate best?

    PubMed

    Steen, Valerie; Sofaer, Helen R; Skagen, Susan K; Ray, Andrea J; Noon, Barry R

    2017-11-01

    Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs to represent a unique combination of: future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was that of future climate change. Large uncertainty was also attributed to climate covariate class with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given uncertainty in projected vulnerability and resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water

  9. An experimental extrapolation technique using the Gafchromic EBT3 film for relative output factor measurements in small x-ray fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morales, Johnny E., E-mail: johnny.morales@lh.org.

    Purpose: An experimental extrapolation technique is presented, which can be used to determine the relative output factors for very small x-ray fields using the Gafchromic EBT3 film. Methods: Relative output factors were measured for the Brainlab SRS cones ranging in diameter from 4 to 30 mm on a Novalis Trilogy linear accelerator with 6 MV SRS x-rays. The relative output factor was determined from an experimental reducing circular region of interest (ROI) extrapolation technique developed to remove the effects of volume averaging. This was achieved by scanning the EBT3 film measurements with a high scanning resolution of 1200 dpi. From the high resolution scans, the size of the circular regions of interest was varied to produce a plot of relative output factors versus area of analysis. The plot was then extrapolated to zero to determine the relative output factor corresponding to zero volume. Results: Results have shown that for a 4 mm field size, the extrapolated relative output factor was measured as a value of 0.651 ± 0.018 as compared to 0.639 ± 0.019 and 0.633 ± 0.021 for 0.5 and 1.0 mm diameter of analysis values, respectively. This showed a change in the relative output factors of 1.8% and 2.8% at these comparative regions of interest sizes. In comparison, the 25 mm cone had negligible differences in the measured output factor between zero extrapolation, 0.5 and 1.0 mm diameter ROIs, respectively. Conclusions: This work shows that for very small fields such as 4.0 mm cone sizes, a measurable difference can be seen in the relative output factor based on the circular ROI and the size of the area of analysis using radiochromic film dosimetry. The authors recommend scanning the Gafchromic EBT3 film at a resolution of 1200 dpi for cone sizes less than 7.5 mm and utilizing an extrapolation technique for the output factor measurements of very small field dosimetry.
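
    A minimal sketch of the reducing-ROI extrapolation described above: the output factor measured with circular regions of interest of decreasing area is fit against ROI area and extrapolated to zero area to remove volume averaging. The ROI diameters and measured values below are invented for illustration.

      import numpy as np

      roi_diameter_mm = np.array([2.0, 1.5, 1.0, 0.5])
      roi_area_mm2 = np.pi * (roi_diameter_mm / 2.0) ** 2
      measured_rof = np.array([0.615, 0.625, 0.633, 0.639])   # invented readings

      # Linear fit of output factor versus ROI area; the intercept is the
      # zero-area (zero-volume) relative output factor.
      slope, intercept = np.polyfit(roi_area_mm2, measured_rof, 1)
      print("relative output factor extrapolated to zero area:", round(intercept, 3))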

  10. An Extrapolation of a Radical Equation More Accurately Predicts Shelf Life of Frozen Biological Matrices.

    PubMed

    De Vore, Karl W; Fatahi, Nadia M; Sass, John E

    2016-08-01

    Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1·x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was fit best by a radical equation of the form y = B1·x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
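
    A minimal sketch comparing the two extrapolation forms discussed above, an ordinary least-squares line versus the radical equation y = B1·sqrt(x) + B0, both fit to invented early real-time recovery data and extrapolated to an assumed 20-month shelf life.

      import numpy as np

      months = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # ~20% of shelf life
      recovery = np.array([100.0, 98.4, 97.8, 97.2, 96.8])    # percent recovery (invented)

      # Linear model: y = a*x + b
      a_lin, b_lin = np.polyfit(months, recovery, 1)

      # Radical model: y = B1*sqrt(x) + B0, i.e. linear in sqrt(x)
      b1, b0 = np.polyfit(np.sqrt(months), recovery, 1)

      shelf_life = 20.0
      print("linear prediction at 20 months: ", round(a_lin * shelf_life + b_lin, 1), "%")
      print("radical prediction at 20 months:", round(b1 * np.sqrt(shelf_life) + b0, 1), "%")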

  11. Evaluating In Vitro-In Vivo Extrapolation of Toxicokinetics

    PubMed Central

    MacMillan, Denise K; Ford, Jermaine; Fennell, Timothy R; Black, Sherry R; Snyder, Rodney W; Sipes, Nisha S; Westerhout, Joost; Setzer, R Woodrow; Pearce, Robert G; Simmons, Jane Ellen; Thomas, Russell S

    2018-01-01

    Prioritizing the risk posed by thousands of chemicals potentially present in the environment requires exposure, toxicity, and toxicokinetic (TK) data, which are often unavailable. Relatively high throughput, in vitro TK (HTTK) assays and in vitro-to-in vivo extrapolation (IVIVE) methods have been developed to predict TK, but most of the in vivo TK data available to benchmark these methods are from pharmaceuticals. Here we report on new, in vivo rat TK experiments for 26 non-pharmaceutical chemicals with environmental relevance. Both intravenous and oral dosing were used to calculate bioavailability. These chemicals, and an additional 19 chemicals (including some pharmaceuticals) from previously published in vivo rat studies, were systematically analyzed to estimate in vivo TK parameters (e.g., volume of distribution [Vd], elimination rate). For each of the chemicals, rat-specific HTTK data were available and key TK predictions were examined: oral bioavailability, clearance, Vd, and uncertainty. For the non-pharmaceutical chemicals, predictions for bioavailability were not effective. While no pharmaceutical was absorbed at less than 10%, the fraction bioavailable for non-pharmaceutical chemicals was as low as 0.3%. Total clearance was generally more under-estimated for non-pharmaceuticals, and Vd methods calibrated to pharmaceuticals may not be appropriate for other chemicals. However, the steady-state, peak, and time-integrated plasma concentrations of non-pharmaceuticals were predicted with reasonable accuracy. The plasma concentration predictions improved when experimental measurements of bioavailability were incorporated. In summary, HTTK and IVIVE methods are adequately robust to be applied to high throughput in vitro toxicity screening data of environmentally relevant chemicals for prioritizing based on human health risks. PMID:29385628

  12. Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.

    PubMed

    Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel

    2016-09-08

    We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose sizes range from 50 to 1000 water molecules, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 water molecules using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical framework and paves the way toward a theoretical protocol to systematically compute the solvation energies of complex organic ions.
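
    A minimal sketch of a basic power-law extrapolation to bulk of the kind described above, assuming an N^(-1/3) dependence of the single-ion solvation energy on droplet size; the droplet sizes and energies are invented placeholders, not the simulation results.

      import numpy as np

      n_waters = np.array([100.0, 300.0, 1000.0])
      e_solv = np.array([-72.5, -76.0, -78.4])   # kcal/mol, invented

      # E(N) = E_bulk + a * N**(-1/3); the intercept at N -> infinity is E_bulk.
      x = n_waters ** (-1.0 / 3.0)
      a_coeff, e_bulk = np.polyfit(x, e_solv, 1)
      print("extrapolated bulk solvation energy:", round(e_bulk, 1), "kcal/mol")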

  13. Dose-response relationships and extrapolation in toxicology - Mechanistic and statistical considerations

    EPA Science Inventory

    Controversy on toxicological dose-response relationships and low-dose extrapolation of respective risks is often the consequence of misleading data presentation, lack of differentiation between types of response variables, and diverging mechanistic interpretation. In this chapter...

  14. Windtunnel Rebuilding And Extrapolation To Flight At Transsonic Speed For ExoMars

    NASA Astrophysics Data System (ADS)

    Fertig, Markus; Neeb, Dominik; Gulhan, Ali

    2011-05-01

    The static as well as the dynamic behaviour of the ExoMars vehicle in the transonic velocity regime has been investigated experimentally by the Supersonic and Hypersonic Technology Department of DLR in order to characterize the behaviour prior to parachute opening. Since the experimental work was performed in air, a numerical extrapolation to flight by means of CFD is necessary. At low supersonic speed this extrapolation to flight was performed by the Spacecraft Department of the Institute of Flow Technology of DLR employing the CFD code TAU. Numerical as well as experimental results for the wind tunnel test at Mach 1.2 will be compared and discussed for three different angles of attack.

  15. Extrapolation of enalapril efficacy from adults to children using pharmacokinetic/pharmacodynamic modelling.

    PubMed

    Kechagia, Irene-Ariadne; Kalantzi, Lida; Dokoumetzidis, Aristides

    2015-11-01

    To extrapolate enalapril efficacy to children 0-6 years old, a pharmacokinetic/pharmacodynamic (PKPD) model was built using literature data, with blood pressure as the PD endpoint. A PK model of enalapril was developed from literature paediatric data up to 16 years old. A PD model of enalapril was fitted to adult literature response vs time data at various doses. The final PKPD model was validated with literature paediatric efficacy observations (diastolic blood pressure (DBP) drop after 2 weeks of treatment) in children aged 6 years and older. The model was used to predict enalapril efficacy for ages 0-6 years. A two-compartment PK model was chosen with weight, indirectly reflecting age, as a covariate on clearance and central volume. An indirect link PD model was chosen to describe drug effect. External validation of the model's capability to predict efficacy in children was successful. Enalapril efficacy was extrapolated to ages 1-11 months and 1-6 years, finding mean DBP drops of 11.2 and 11.79 mmHg, respectively. Mathematical modelling was used to extrapolate enalapril efficacy to young children to support a paediatric investigation plan targeting a paediatric-use marketing authorization application. © 2015 Royal Pharmaceutical Society.

  16. Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.

    2018-03-01

    We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.

  17. Establishing macroecological trait datasets: digitalization, extrapolation, and validation of diet preferences in terrestrial mammals worldwide

    PubMed Central

    Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian

    2014-01-01

    Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species’ evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals (“MammalDIET”). Diet information was digitized from two global and cladewide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external
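
    A minimal sketch of the hierarchical fill-in step described above: a species with missing diet data inherits, in order of preference, a genus-level value, the consensus of congeneric species, or a family-level value. The taxa and diet labels are example inputs, not MammalDIET records.

      from collections import Counter

      def fill_diet(species, species_diet, genus_diet, family_diet, genus_of, family_of):
          if species in species_diet:                  # observed, nothing to extrapolate
              return species_diet[species], "species"
          genus, family = genus_of[species], family_of[species]
          if genus in genus_diet:                      # genus-level information
              return genus_diet[genus], "genus"
          congeners = [d for s, d in species_diet.items() if genus_of.get(s) == genus]
          if congeners:                                # majority vote among congeners
              return Counter(congeners).most_common(1)[0][0], "congeners"
          return family_diet.get(family), "family"     # fall back to family level

      species_diet = {"Apodemus sylvaticus": "omnivore"}
      genus_of = {"Apodemus sylvaticus": "Apodemus", "Apodemus flavicollis": "Apodemus"}
      family_of = {"Apodemus sylvaticus": "Muridae", "Apodemus flavicollis": "Muridae"}
      print(fill_diet("Apodemus flavicollis", species_diet, {}, {"Muridae": "omnivore"},
                      genus_of, family_of))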

  18. Extrapolation of vertical target motion through a brief visual occlusion.

    PubMed

    Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco

    2010-03-01

    It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.

  19. Determination of Extrapolation Distance With Pressure Signatures Measured at Two to Twenty Span Lengths From Two Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Kuhn, Neil S.

    2006-01-01

    A study was performed to determine a limiting separation distance for the extrapolation of pressure signatures from cruise altitude to the ground. The study was performed at two wind-tunnel facilities with two research low-boom wind-tunnel models designed to generate ground pressure signatures with "flattop" shapes. Data acquired at the first wind-tunnel facility showed that pressure signatures had not achieved the desired low-boom features for extrapolation purposes at separation distances of 2 to 5 span lengths. However, data acquired at the second wind-tunnel facility at separation distances of 5 to 20 span lengths indicated the "limiting extrapolation distance" had been achieved so pressure signatures could be extrapolated with existing codes to obtain credible predictions of ground overpressures.

  20. Infrared length scale and extrapolations for the no-core shell model

    DOE PAGES

    Wendt, K. A.; Forssén, C.; Papenbrock, T.; ...

    2015-06-03

    In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.

  1. Guided wave tomography in anisotropic media using recursive extrapolation operators

    NASA Astrophysics Data System (ADS)

    Volker, Arno

    2018-04-01

    Guided wave tomography is an advanced technology for quantitative wall thickness mapping to image wall loss due to corrosion or erosion. An inversion approach is used to match the measured phase (time) at a specific frequency to a model. The accuracy of the model determines the sizing accuracy. Particularly for seam welded pipes there is a measurable amount of anisotropy. Moreover, for small defects a ray-tracing based modelling approach is no longer accurate. Both issues are solved by applying a recursive wave field extrapolation operator assuming vertical transverse anisotropy. The inversion scheme is extended by not only estimating the wall loss profile but also the anisotropy, local material changes and transducer ring alignment errors. This makes the approach more robust. The approach will be demonstrated experimentally on different defect sizes, and a comparison will be made between this new approach and an isotropic ray-tracing approach. An example is given in Fig. 1 for a 75 mm wide, 5 mm deep defect. The wave field extrapolation based tomography clearly provides superior results.

  2. Extrapolation of forest community types with a geographic information system

    Treesearch

    W.K. Clatterbuck; J. Gregory

    1991-01-01

    A geographic information system (GIS) was used to project eight forest community types from a 1,190-acre (482-ha) intensively sampled area to an unsampled 19,887-acre (8,054-ha) adjacent area with similar environments on the Western Highland Rim of Tennessee. Both physiographic and vegetative parameters were used to distinguish, extrapolate, and map communities.

  3. Uncertainty of the potential curve minimum for diatomic molecules extrapolated from Dunham type coefficients

    NASA Astrophysics Data System (ADS)

    Ilieva, T.; Iliev, I.; Pashov, A.

    2016-12-01

    In the traditional description of electronic states of diatomic molecules by means of molecular constants or Dunham coefficients, one of the important fitting parameters is the value of the zero point energy - the minimum of the potential curve or the energy of the lowest vibrational-rotational level - E00. Their values are almost always the result of an extrapolation and it may be difficult to estimate their uncertainties, because they are connected not only with the uncertainty of the experimental data, but also with the distribution of experimentally observed energy levels and the particular realization of the set of Dunham coefficients. This paper presents a comprehensive analysis based on Monte Carlo simulations, which aims to demonstrate the influence of all these factors on the uncertainty of the extrapolated minimum of the potential energy curve U(Re) and the value of E00. The very good extrapolation properties of the Dunham coefficients are quantitatively confirmed and it is shown that for a proper estimate of the uncertainties, the ambiguity in the composition of the Dunham coefficients should be taken into account.
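
    For concreteness, the short Python sketch below evaluates the Dunham-type energy expression that such fits rest on, E(v, J) = sum over (k, l) of Y_kl (v + 1/2)^k [J(J+1)]^l, so the lowest level E00 = E(0, 0) is an extrapolation of the fitted coefficients to v = J = 0. The Y_kl values are arbitrary placeholders.

      def dunham_energy(v, j, y_kl):
          """y_kl: dict mapping (k, l) to the Dunham coefficient Y_kl (same energy units)."""
          return sum(c * (v + 0.5) ** k * (j * (j + 1)) ** l
                     for (k, l), c in y_kl.items())

      # Placeholder coefficients (cm^-1): harmonic term Y10, anharmonicity Y20,
      # rotational constant Y01.
      y = {(1, 0): 150.0, (2, 0): -0.5, (0, 1): 0.05}
      print("E(0,0) relative to the potential minimum:", dunham_energy(0, 0, y), "cm^-1")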

  4. EXTRAPOLATION OF THE SOLAR CORONAL MAGNETIC FIELD FROM SDO/HMI MAGNETOGRAM BY A CESE-MHD-NLFFF CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang Chaowei; Feng Xueshang, E-mail: cwjiang@spaceweather.ac.cn, E-mail: fengx@spaceweather.ac.cn

    Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated from the photosphere in a numerical way. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently, we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on the CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to Solar Dynamics Observatory/Helioseismic and Magnetic Imager (SDO/HMI) data with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then inputted into the extrapolation code. Qualitative comparison of the results with the SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. The most important structures of the ARs are reproduced excellently, like the highly sheared field lines that suspend filaments in AR 11158 and the twisted flux rope which corresponds to a sigmoid in AR 11283. Quantitative assessment of the results shows that the force-free constraint is fulfilled very well in the strong-field regions but apparently not that well in the weak-field regions because of data noise and numerical errors in the small currents.

  5. Improving In Vitro to In Vivo Extrapolation by Incorporating Toxicokinetic Measurements: A Case Study of Lindane-Induced Neurotoxicity

    EPA Science Inventory

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...

  6. Extrapolation to Nonequilibrium from Coarse-Grained Response Theory

    NASA Astrophysics Data System (ADS)

    Basu, Urna; Helden, Laurent; Krüger, Matthias

    2018-05-01

    Nonlinear response theory, in contrast to linear cases, involves (dynamical) details, and this makes application to many-body systems challenging. From the microscopic starting point we obtain an exact response theory for a small number of coarse-grained degrees of freedom. With it, an extrapolation scheme uses near-equilibrium measurements to predict far-from-equilibrium properties (here, second order responses). Because it does not involve system details, this approach can be applied to many-body systems. It is illustrated in a four-state model and in the near critical Ising model.

  7. Extrapolation of toxic indices among test objects

    PubMed Central

    Tichý, Miloň; Rucki, Marián; Roth, Zdeněk; Hanzlíková, Iveta; Vlková, Alena; Tumová, Jana; Uzlová, Rút

    2010-01-01

    The oligochaete Tubifex tubifex, the fathead minnow (Pimephales promelas), hepatocytes isolated from rat liver and a ciliated protozoan are very different test organisms, and yet their acute toxicity indices correlate. Correlation equations for special effects were developed for a large heterogeneous series of compounds (QSAR, quantitative structure-activity relationships). Knowing those correlation equations and their statistical evaluation, one can extrapolate the toxic indices. The reason is that a common physicochemical property governs the biological effect, namely the partition coefficient between two immiscible phases, generally simulated by n-octanol and water. This may mean that the transport of chemicals towards a target is responsible for the magnitude of the effect, rather than reactivity, as one might assume. PMID:21331180
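
    A minimal sketch of this kind of cross-species extrapolation via a log Kow correlation is given below; the data points, species labels and the new compound's measured index are entirely hypothetical and serve only to show the mechanics of the approach.

        # Sketch of extrapolating a toxicity index between test organisms via a
        # QSAR-type correlation with log Kow; all data points are hypothetical.
        import numpy as np

        log_kow    = np.array([1.2, 2.0, 2.8, 3.5, 4.1, 4.9])        # n-octanol/water partition coefficients
        log_ec50_a = np.array([-0.4, -1.0, -1.7, -2.3, -2.8, -3.5])  # species A indices, hypothetical
        log_ec50_b = np.array([-0.2, -0.9, -1.5, -2.2, -2.7, -3.3])  # species B indices, hypothetical

        # fit log EC50 = a * log Kow + b for each species
        a_a, b_a = np.polyfit(log_kow, log_ec50_a, 1)
        a_b, b_b = np.polyfit(log_kow, log_ec50_b, 1)

        # extrapolate: predict the species B index from a species A measurement
        log_ec50_a_new = -2.0                      # measured index for a new compound in species A
        log_kow_est = (log_ec50_a_new - b_a) / a_a
        log_ec50_b_pred = a_b * log_kow_est + b_b
        print(f"predicted log EC50 for species B: {log_ec50_b_pred:.2f}")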

  8. Multivariable extrapolation of grand canonical free energy landscapes

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-12-01

    We derive an approach for extrapolating the free energy landscape of multicomponent systems in the grand canonical ensemble, obtained from flat-histogram Monte Carlo simulations, from one set of temperature and chemical potentials to another. This is accomplished by expanding the landscape in a Taylor series at each value of the order parameter which defines its macrostate phase space. The coefficients in each Taylor polynomial are known exactly from fluctuation formulas, which may be computed by measuring the appropriate moments of extensive variables that fluctuate in this ensemble. Here we derive the expressions necessary to define these coefficients up to arbitrary order. In principle, this enables a single flat-histogram simulation to provide complete thermodynamic information over a broad range of temperatures and chemical potentials. Using this, we also show how to combine a small number of simulations, each performed at different conditions, in a thermodynamically consistent fashion to accurately compute properties at arbitrary temperatures and chemical potentials. This method may significantly increase the computational efficiency of biased grand canonical Monte Carlo simulations, especially for multicomponent mixtures. Although approximate, this approach is amenable to high-throughput and data-intensive investigations where it is preferable to have a large quantity of reasonably accurate simulation data, rather than a smaller amount with a higher accuracy.
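
    A first-order sketch of this kind of extrapolation, for a single-component system and a synthetic macrostate distribution, is shown below; the paper derives the higher-order coefficients, which are omitted here, and the reference landscape, temperature and chemical-potential shift are invented for illustration.

        # First-order sketch of extrapolating a grand canonical macrostate
        # distribution ln Pi(N) from chemical potential mu0 to mu1 at fixed
        # temperature, using the fluctuation relation d lnPi(N)/d(beta*mu) = N - <N>.
        import numpy as np

        beta = 1.0                       # 1/kT in reduced units (assumed)
        N = np.arange(0, 201)            # order parameter: particle number
        lnPi0 = -0.002 * (N - 80.0)**2   # hypothetical landscape at the reference beta*mu0

        def normalize(lnPi):
            """Shift so that sum(exp(lnPi)) = 1, working in log space for stability."""
            return lnPi - np.log(np.sum(np.exp(lnPi - lnPi.max()))) - lnPi.max()

        lnPi0 = normalize(lnPi0)
        avgN = np.sum(N * np.exp(lnPi0))             # <N> at the reference conditions

        dmu = 0.05                                   # extrapolate to mu1 = mu0 + dmu
        lnPi1 = normalize(lnPi0 + beta * dmu * (N - avgN))
        print(f"<N> shifts from {avgN:.1f} to {np.sum(N * np.exp(lnPi1)):.1f}")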

  9. Data-based discharge extrapolation: estimating annual discharge for a partially gauged large river basin from its small sub-basins

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2013-12-01

    Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimates for the gauged area were achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well; those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based and therefore does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both ungauged basins and ungauged periods, with uncertainty estimation.
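
    In its most reduced form the idea can be sketched as scaling the specific discharge of a representative sample of sub-basins up to the full basin area, as below; the areas and discharges are hypothetical, and the paper's selection of climatically similar sub-basin sets is not reproduced here.

        # Simplified sketch of data-based scale extrapolation: estimate annual
        # discharge of a large basin from the specific discharge (discharge per
        # unit area) of a few small, representative sub-basins.  All numbers are
        # hypothetical.
        import numpy as np

        large_basin_area_km2 = 1.6e6                   # gauged area to be estimated

        # selected representative sub-basins: (area in km2, annual discharge in km3/yr)
        sub_basins = np.array([
            [12_000.0, 3.1],
            [18_500.0, 4.6],
            [ 9_700.0, 2.2],
        ])

        areas, discharges = sub_basins[:, 0], sub_basins[:, 1]
        specific_q = discharges.sum() / areas.sum()    # km3/yr per km2 for the sample
        estimate = specific_q * large_basin_area_km2
        print(f"estimated annual discharge: {estimate:.0f} km3/yr")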

  10. THE MISUSE OF HYDROLOGIC UNIT MAPS FOR EXTRAPOLATION, REPORTING, AND ECOSYSTEM MANAGEMENT

    EPA Science Inventory

    The use of watersheds to conduct research on land-water relationships has expanded recently to include both extrapolation and reporting of water resource information and ecosystem management. More often than not, hydrologic units, and hydrologic unit codes (HUCs) in particular, a...

  11. SeqAPASS v3.0 for Extrapolation of Toxicity Knowledge Across Species

    EPA Science Inventory

    The U.S. Environmental Protection Agency Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS; https://seqapass.epa.gov/seqapass/) tool was initially released to the public in 2016, providing a novel means to begin to address challenges in extrapolating toxicity ...

  12. Dispersal and extrapolation on the accuracy of temporal predictions from distribution models for the Darwin's frog.

    PubMed

    Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio

    2017-07-01

    Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and the True Skill Statistic (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with SDMs without dispersal limitation; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared accuracy between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences for range size changes over time, which is the most widely used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in predictive accuracy of model predictions for those areas. Our results highlight (1) incorporating dispersal processes can improve predictive accuracy of temporal transference of SDMs and reduce uncertainties of extinction risk assessments from global change; (2) as
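
    For reference, the two accuracy measures named above can be computed from a validation set as in the short sketch below; the presence/absence observations and model suitabilities are made up, and a fixed 0.5 threshold is assumed for the binary predictions entering the TSS.

        # Sketch of the AUC and TSS accuracy measures on hypothetical validation data.
        import numpy as np

        obs  = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])                        # presences/absences
        prob = np.array([0.9, 0.7, 0.4, 0.2, 0.8, 0.3, 0.6, 0.5, 0.1, 0.75])   # SDM suitability

        # AUC via the rank-sum (Mann-Whitney) formulation
        pos, neg = prob[obs == 1], prob[obs == 0]
        auc = np.mean(pos[:, None] > neg[None, :]) + 0.5 * np.mean(pos[:, None] == neg[None, :])

        # TSS = sensitivity + specificity - 1 for binary predictions at a 0.5 threshold
        pred = (prob >= 0.5).astype(int)
        sens = np.sum((pred == 1) & (obs == 1)) / np.sum(obs == 1)
        spec = np.sum((pred == 0) & (obs == 0)) / np.sum(obs == 0)
        tss = sens + spec - 1

        print(f"AUC = {auc:.2f}, TSS = {tss:.2f}")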

  13. Extrapolating intensified forest inventory data to the surrounding landscape using landsat

    Treesearch

    Evan B. Brooks; John W. Coulston; Valerie A. Thomas; Randolph H. Wynne

    2015-01-01

    In 2011, a collection of spatially intensified plots was established on three of the Experimental Forests and Ranges (EFRs) sites with the intent of facilitating FIA program objectives for regional extrapolation. Characteristic coefficients from harmonic regression (HR) analysis of associated Landsat stacks are used as inputs into a conditional random forests model to...
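
    The harmonic regression step mentioned above can be illustrated with a minimal least-squares fit of a mean plus one annual sine/cosine pair to a single pixel's time series; the synthetic vegetation-index values and the single-harmonic design are assumptions, not details from the study.

        # Sketch of harmonic regression (HR) on a synthetic Landsat-like time series:
        # the fitted coefficients would serve as predictor variables.
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0.0, 3.0, size=60))           # acquisition times in years
        ndvi = (0.5 + 0.2 * np.cos(2 * np.pi * t) + 0.1 * np.sin(2 * np.pi * t)
                + rng.normal(0.0, 0.03, size=t.size))         # synthetic vegetation index

        # design matrix: intercept, annual cosine, annual sine
        X = np.column_stack([np.ones_like(t), np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
        coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
        mean, c1, s1 = coef
        amplitude = np.hypot(c1, s1)
        phase = np.arctan2(s1, c1)
        print(f"mean={mean:.3f}, annual amplitude={amplitude:.3f}, phase={phase:.2f} rad")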

  14. Failure of the straight-line DCS boundary when extrapolated to the hypobaric realm.

    PubMed

    Conkin, J; Van Liew, H D

    1992-11-01

    The lowest pressure (P2) to which a diver can ascend without developing decompression sickness (DCS) after becoming equilibrated at some higher pressure (P1) is described by a straight line with a negative y-intercept. We tested whether extrapolation of such a line also predicts safe decompression to altitude. We substituted tissue nitrogen pressure (P1N2) calculated for a compartment with a 360-min half-time for P1 values; this allows data from hypobaric exposures to be plotted on a P2 vs. P1N2 graph, even if the subject breathes oxygen before ascent. In literature sources, we found 40 reports of human exposures in hypobaric chambers that fell in the region of a P2 vs. P1N2 plot where the extrapolation from hyperbaric data predicted that the decompression should be free of DCS. Of 4,576 exposures, 785 persons suffered decompression sickness (17%), indicating that extrapolation of the diver line to altitude is not valid. Over the pressure range spanned by human hypobaric exposures and hyperbaric air exposures, the best separation between no DCS and DCS on a P2 vs. P1N2 plot seems to be a curve which approximates a straight line in the hyperbaric region but bends toward the origin in the hypobaric region.
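
    A minimal sketch of the 360-min compartment used for P1N2 is given below; the exposure profile (sea-level equilibration followed by a 60 min oxygen prebreathe) and the pressure units are illustrative assumptions.

        # Single-compartment tissue nitrogen pressure with a 360 min half-time,
        # as used for the P1N2 axis of the P2 vs. P1N2 plot.  Values are illustrative.
        import math

        HALF_TIME_MIN = 360.0
        K = math.log(2.0) / HALF_TIME_MIN          # rate constant (1/min)

        def tissue_n2(p_start, p_ambient_n2, minutes):
            """Haldane-type exponential update of tissue N2 pressure."""
            return p_ambient_n2 + (p_start - p_ambient_n2) * math.exp(-K * minutes)

        # equilibrated on sea-level air (~0.79 atm N2), then 60 min of 100% O2 prebreathe
        p1n2 = tissue_n2(p_start=0.79, p_ambient_n2=0.0, minutes=60.0)
        print(f"P1N2 after prebreathe: {p1n2:.3f} atm")
        # compare against the ambient pressure P2 at the target altitude to place
        # the exposure on the P2 vs. P1N2 plot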

  15. Prioritizing abandoned coal mine reclamation projects within the contiguous United States using geographic information system extrapolation.

    PubMed

    Gorokhovich, Yuri; Reid, Matthew; Mignone, Erica; Voros, Andrew

    2003-10-01

    Coal mine reclamation projects are very expensive and require coordination of local and federal agencies to identify resources for the most economical way of reclaiming mined land. Locating resources for mine reclamation is a spatial problem. This article presents a methodology that combines spatial data on resources for coal mine reclamation and uses GIS analysis to develop a priority list of potential mine reclamation sites within the contiguous United States using the method of extrapolation. The extrapolation method in this study was based on the Bark Camp reclamation project. The mine reclamation project at Bark Camp, Pennsylvania, USA, provided an example of the beneficial use of fly ash and dredged material to reclaim 402,600 sq mi of a mine abandoned in the 1980s. Railroads provided transportation of dredged material and fly ash to the site. Therefore, four spatial elements contributed to the reclamation project at Bark Camp: dredged material, abandoned mines, fly ash sources, and railroads. Using the spatial distribution of these data in the contiguous United States, it was possible to use GIS analysis to prioritize areas where reclamation projects similar to Bark Camp are feasible. GIS analysis identified unique occurrences of all four spatial elements used in the Bark Camp case for each 1 km of United States territory within 20, 40, 60, 80, and 100 km radii from abandoned mines. The results showed the number of abandoned mines for each state and identified their locations. The federal or state governments can use these results in mine reclamation planning.

  16. Neural Extrapolation of Motion for a Ball Rolling Down an Inclined Plane

    PubMed Central

    La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka

    2014-01-01

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion. PMID:24940874

  17. Neural extrapolation of motion for a ball rolling down an inclined plane.

    PubMed

    La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka

    2014-01-01

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.

  18. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
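
    As context for the grid-refinement step, a bare-bones Richardson extrapolation from three grids looks like the sketch below; the three solution values and the refinement ratio are invented, and the observed-order formula assumes a constant refinement ratio between grids.

        # Richardson extrapolation with an observed order of accuracy from three
        # systematically refined grids (coarse -> fine); sample values are made up.
        import math

        f3, f2, f1 = 1.0320, 1.0205, 1.0151   # solution functional on coarse, medium, fine grids
        r = 2.0                                # constant grid refinement ratio

        # observed order of accuracy p from the three solutions
        p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

        # Richardson-extrapolated estimate of the grid-independent value
        f_exact = f1 + (f1 - f2) / (r**p - 1.0)
        print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.5f}")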

  19. EVALUATION OF MINIMUM DATA REQUIREMENTS FOR ACUTE TOXICITY VALUE EXTRAPOLATION WITH AQUATIC ORGANISMS

    EPA Science Inventory

    Buckler, Denny R., Foster L. Mayer, Mark R. Ellersieck and Amha Asfaw. 2003. Evaluation of Minimum Data Requirements for Acute Toxicity Value Extrapolation with Aquatic Organisms. EPA/600/R-03/104. U.S. Environmental Protection Agency, National Health and Environmental Effects Re...

  20. A novel evaluation method for extrapolated retention factor in determination of n-octanol/water partition coefficient of halogenated organic pollutants by reversed-phase high performance liquid chromatography.

    PubMed

    Han, Shu-ying; Liang, Chao; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin; Chen, Hong-yuan

    2012-02-03

    The retention factor corresponding to pure water in reversed-phase high performance liquid chromatography (RP-HPLC), k(w), has commonly been obtained by tedious extrapolation of retention factors (k) measured with mixtures of organic modifier and water as the mobile phase. In this paper, a relationship between logk(w) and logk for directly determining k(w) is proposed for the first time. With satisfactory validation, the approach was confirmed to enable easy and accurate evaluation of k(w) for compounds of interest with structures similar to the model compounds. Eight PCB congeners with different degrees of chlorination were selected as a training set for modeling the logk(w)-logk correlation on both silica-based C(8) and C(18) stationary phases, to evaluate logk(w) of sample compounds including seven PCB, six PBB and eight PBDE congeners. These eight model PCBs were subsequently combined with seven structurally similar benzene derivatives possessing reliable experimental K(ow) values as a whole training set for logK(ow)-logk(w) regressions on the two stationary phases. Consequently, the evaluated logk(w) values of the sample compounds were used to determine their logK(ow) by the derived logK(ow)-logk(w) models. The logK(ow) values obtained from these evaluated logk(w) were well comparable with those obtained from experimentally extrapolated logk(w), demonstrating that the proposed method for logk(w) evaluation could be an effective means in lipophilicity studies of environmental contaminants with numerous congeners. As a result, logK(ow) data for many PCBs, PBBs and PBDEs could be provided. These contaminants are considered to be widespread in the environment, but no reliable experimental K(ow) data have been available for them. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Basic antenna transmitting characteristics using an extrapolation range measurement technique at a millimeter-wave band at NMIJ/AIST.

    PubMed

    Yamamoto, Tetsuya

    2007-06-01

    A novel test fixture operating at a millimeter-wave band using an extrapolation range measurement technique was developed at the National Metrology Institute of Japan (NMIJ). Here I describe the measurement system using a Q-band test fixture. I measured the relative insertion loss as a function of antenna separation distance and observed the effects of multiple reflections between the antennas. I also evaluated the antenna gain at 33 GHz using the extrapolation technique.

  2. Numerical methods in acoustics

    NASA Astrophysics Data System (ADS)

    Candel, S. M.

    This paper presents a survey of some computational techniques applicable to acoustic wave problems. Recent advances in wave extrapolation methods, spectral methods and boundary integral methods are discussed and illustrated by specific calculations.

  3. Vapor Pressure Data and Analysis for Selected Organophosphorus Compounds, CMMP, DPMP, DMEP, and DEEP: Extrapolation of High-Temperature Data

    DTIC Science & Technology

    2018-04-01

    Report documentation page only: ECBC-TR-1507, Research and Technology Directorate (Ann Brozena, Patrice Abercrombie-Thomas, David E. Tevault); subject: vapor pressure data and analysis for the organophosphorus compounds CMMP, DPMP, DMEP, and DEEP, with extrapolation of high-temperature data; sponsor/monitor: DTRA; distribution statement: approved.

  4. Initial Findings on Hydrodynamic Scaling Extrapolations of National Ignition Facility BigFoot Implosions

    NASA Astrophysics Data System (ADS)

    Nora, R.; Field, J. E.; Peterson, J. Luc; Spears, B.; Kruse, M.; Humbird, K.; Gaffney, J.; Springer, P. T.; Brandon, S.; Langer, S.

    2017-10-01

    We present an experimentally corroborated hydrodynamic extrapolation of several recent BigFoot implosions on the National Ignition Facility. An estimate of the value and error of the hydrodynamic scale necessary for ignition (for each individual BigFoot implosion) is found by hydrodynamically scaling a distribution of multi-dimensional HYDRA simulations whose outputs correspond to their experimental observables. The 11-parameter database of simulations, which includes arbitrary drive asymmetries, dopant fractions, hydrodynamic scaling parameters, and surface perturbations due to surrogate tent and fill-tube engineering features, was computed on the TRINITY supercomputer at Los Alamos National Laboratory. This simple extrapolation is the first step in a rigorous calibration of our workflow to provide an accurate estimate of the efficacy of achieving ignition on the National Ignition Facility. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  5. Application Electrochemical Impedance Spectroscopy Methods to Evaluation Corrosion Behavior of Stainless steels 304 in Nanofluids Media

    NASA Astrophysics Data System (ADS)

    Hadi Prajitno, Djoko; Umar, Efrizon; Gustaman Syarif, Dani

    2017-01-01

    Corrosion is a common problem for many engineering metals and alloys, and electrochemical methods are commonly used as tools to study their corrosion behavior by examining the interaction between the metal surface and a corrosive medium. In the present paper, the effect of ZrO2 nanoparticles added to demineralized water on the corrosion behavior of stainless steel was investigated. Electrochemical impedance spectroscopy (EIS) testing was performed in both demineralized water and demineralized water containing 0.01% ZrO2 nanoparticles as a nanofluid. Surface morphology examination of the specimens showed that the microstructure of the stainless steel 304 alloy was relatively unchanged after corrosion and EIS testing. Corrosion potential measurements showed that stainless steel 304 corroded actively in the nanofluid medium. The anodic Tafel slope of stainless steel 304 in the nanofluid was higher than in demineralized water, and Tafel polarization examination showed that the corrosion rate of stainless steel 304 in the nanofluid was higher than the corrosion rate in demineralized water. The EIS technique showed that the impedance of stainless steel 304 in the nanofluid was lower than in demineralized water, consistent with the increased corrosion rates of these stainless steel 304 specimens in the nanofluid.
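
    Since Tafel extrapolation is the recurring technique in these records, a small self-contained sketch of the procedure on a synthetic polarization curve is given below; the kinetic parameters (Ecorr, icorr, Tafel slopes) and the choice of fitting windows at |overpotential| > 0.1 V are illustrative assumptions, not values from this study.

        # Tafel extrapolation on a synthetic polarization curve: fit the linear
        # (Tafel) regions of log|i| vs. E on the anodic and cathodic branches and
        # intersect them to estimate Ecorr and icorr.
        import numpy as np

        # synthetic Butler-Volmer-type curve (A/cm2) with invented kinetic parameters
        E = np.linspace(-0.70, -0.20, 400)
        icorr_true, Ecorr_true, ba, bc = 2e-6, -0.45, 0.060, 0.120   # Tafel slopes in V/decade
        i_net = icorr_true * (10 ** ((E - Ecorr_true) / ba) - 10 ** (-(E - Ecorr_true) / bc))

        # pick points well away from Ecorr (|eta| > 0.1 V) where log|i| vs. E is linear
        anodic = E > Ecorr_true + 0.10
        cathodic = E < Ecorr_true - 0.10
        sa, ca = np.polyfit(E[anodic], np.log10(np.abs(i_net[anodic])), 1)
        sc, cc = np.polyfit(E[cathodic], np.log10(np.abs(i_net[cathodic])), 1)

        # intersection of the two Tafel lines gives Ecorr and log10(icorr)
        E_corr = (cc - ca) / (sa - sc)
        i_corr = 10 ** (sa * E_corr + ca)
        print(f"Ecorr = {E_corr:.3f} V, icorr = {i_corr:.2e} A/cm2, "
              f"ba = {1/sa*1000:.0f} mV/dec, bc = {abs(1/sc)*1000:.0f} mV/dec")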

  6. Progress in extrapolating divertor heat fluxes towards large fusion devices

    NASA Astrophysics Data System (ADS)

    Sieglin, B.; Faitsch, M.; Eich, T.; Herrmann, A.; Suttrop, W.; Collaborators, JET; the MST1 Team; the ASDEX Upgrade Team

    2017-12-01

    Heat load to the plasma facing components is one of the major challenges for the development and design of large fusion devices such as ITER. Present-day fusion experiments can operate with heat load mitigation techniques, e.g. sweeping and impurity seeding, but do not generally require them. For large fusion devices, however, heat load mitigation will be essential. This paper presents the current progress of the extrapolation of steady state and transient heat loads towards large fusion devices. Among transient heat loads, so-called edge localized modes (ELMs) are considered a serious issue for the lifetime of divertor components. In this paper, ITER operation at half field (2.65 T) and half current (7.5 MA) is discussed considering the current material limit for the divertor peak energy fluence of 0.5 MJ/m². Recent studies were successful in describing the observed energy fluence in JET, MAST and ASDEX Upgrade using the pedestal pressure prior to the ELM crash. Extrapolating this towards ITER results in a more benign heat load compared to previous scalings. In the presence of magnetic perturbations, the axisymmetry is broken and a 2D heat flux pattern is induced on the divertor target, leading to a local increase of the heat flux, which is a concern for ITER. It is shown that for a moderate divertor broadening S/λq > 0.5 the toroidal peaking of the heat flux disappears.

  7. Exposure Matching for Extrapolation of Efficacy in Pediatric Drug Development

    PubMed Central

    Mulugeta, Yeruk; Barrett, Jeffrey S.; Nelson, Robert; Eshete, Abel Tilahun; Mushtaq, Alvina; Yao, Lynne; Glasgow, Nicole; Mulberg, Andrew E.; Gonzalez, Daniel; Green, Dionna; Florian, Jeffry; Krudys, Kevin; Seo, Shirley; Kim, Insook; Chilukuri, Dakshina; Burckart, Gilbert J.

    2017-01-01

    During drug development, matching adult systemic exposures of drugs is a common approach for dose selection in pediatric patients when efficacy is partially or fully extrapolated. This is a systematic review of approaches used for matching adult systemic exposures as the basis for dose selection in pediatric trials submitted to the U.S. Food and Drug Administration (FDA) between 1998 and 2012. The trial design of pediatric pharmacokinetic (PK) studies and the pediatric and adult systemic exposure data were obtained from FDA publicly available databases containing reviews of pediatric trials. Exposure matching approaches that were used as the basis for pediatric dose selection were reviewed. The PK data from the adult and pediatric populations were used to quantify exposure agreement between the two patient populations. The main measures were the trial design elements of the pediatric PK studies and the drug systemic exposures (adult and pediatric). There were 31 products (86 trials) with full or partial extrapolation of efficacy and an available PK assessment. Pediatric exposures had ranges of mean Cmax and AUC ratios (pediatric/adult) of 0.63-4.19 and 0.36-3.60, respectively. Seven of the 86 trials (8.1%) had a pre-defined acceptance boundary used to match adult exposures. The key PK parameter was consistently predefined for antiviral and anti-infective products. Approaches to match exposure in children and adults varied across products. A consistent approach for systemic exposure matching and evaluating pediatric PK studies is needed to guide future pediatric trials. PMID:27040726

  8. A model for the data extrapolation of greenhouse gas emissions in the Brazilian hydroelectric system

    NASA Astrophysics Data System (ADS)

    Pinguelli Rosa, Luiz; Aurélio dos Santos, Marco; Gesteira, Claudio; Elias Xavier, Adilson

    2016-06-01

    Hydropower reservoirs are artificial water systems and comprise a small proportion of the Earth’s continental territory. However, they play an important role in the aquatic biogeochemistry and may affect the environment negatively. Since the 90s, as a result of research on organic matter decay in manmade flooded areas, some reports have associated greenhouse gas emissions with dam construction. Pioneering work carried out in the early period challenged the view that hydroelectric plants generate completely clean energy. Those estimates suggested that GHG emissions into the atmosphere from some hydroelectric dams may be significant when measured per unit of energy generated and should be compared to GHG emissions from fossil fuels used for power generation. The contribution to global warming of greenhouse gases emitted by hydropower reservoirs is currently the subject of various international discussions and debates. One of the most controversial issues is the extrapolation of data from different sites. In this study, the extrapolation from a site sample where measurements were made to the complete set of 251 reservoirs in Brazil, comprising a total flooded area of 32 485 square kilometers, was derived from the theory of self-organized criticality. We employed a power law for its statistical representation. The present article reviews the data generated at that time in order to demonstrate how, with the help of mathematical tools, we can extrapolate values from one reservoir to another without compromising the reliability of the results.
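
    The power-law extrapolation invoked above can be sketched as a straight-line fit in log-log space that is then applied to the areas of unmeasured reservoirs; the sample areas, emission values and units below are illustrative stand-ins, and only the count of 251 reservoirs is borrowed from the abstract.

        # Illustrative power-law extrapolation: fit log(emission) vs. log(flooded
        # area) on a measured sample and apply the fit to unmeasured reservoirs.
        import numpy as np

        rng = np.random.default_rng(2)

        # measured sample: flooded area (km2) and annual emission (kt CO2-eq/yr), hypothetical
        area_meas = np.array([15.0, 40.0, 120.0, 300.0, 950.0, 2400.0])
        emis_meas = np.array([6.0, 14.0, 38.0, 80.0, 260.0, 590.0])

        # power law E = a * A^b  <=>  log E = log a + b log A
        b, log_a = np.polyfit(np.log(area_meas), np.log(emis_meas), 1)

        # extrapolate to the full (here: synthetic) set of 251 reservoir areas
        area_all = rng.lognormal(mean=4.0, sigma=1.5, size=251)
        emis_all = np.exp(log_a) * area_all ** b
        print(f"exponent b = {b:.2f}, extrapolated total = {emis_all.sum():.0f} kt CO2-eq/yr")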

  9. Calculation methods study on hot spot stress of new girder structure detail

    NASA Astrophysics Data System (ADS)

    Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing

    2017-10-01

    To study modeling and calculation methods for the hot spot stress of a new girder structural detail, several finite element models of this welded detail were established in the finite element software ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, different local modeling methods for the weld toe, and extrapolation methods on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference between the normal stress in the thickness direction and in the surface direction among the different models is larger when the distance from the weld toe is smaller. When the distance from the toe is greater than 0.5t, the normal stress of solid models, shell models with welds and non-weld shell models tends to be consistent along the surface direction. Therefore, it is recommended that the extrapolation points be selected beyond 0.5t for this new girder welded detail. According to the results of the calculation and analysis, shell models have good grid stability, and the extrapolated hot spot stress of solid models is smaller than that of shell models, so it is suggested that formula 2 and SOLID45 elements be used in the hot spot stress extrapolation calculation of this welded detail. For each finite element model under the different shell modeling methods, the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. Under the same local mesh density, the extrapolated hot spot stress decreases gradually with an increasing number of element layers in the thickness direction of the main plate, and the variation range is within 7.5%.
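
    For orientation, surface extrapolation of the hot spot stress amounts to a linear extrapolation of surface stresses from two reference distances to the weld toe, as sketched below; the 0.4t/1.0t reference points, plate thickness and stress values are assumptions for illustration (the paper itself only recommends placing the points beyond 0.5t).

        # Generic surface extrapolation of the hot spot stress to the weld toe.
        t = 16.0                               # main plate thickness (mm), assumed
        x1, x2 = 0.4 * t, 1.0 * t              # reference distances from the weld toe (mm), assumed
        s1, s2 = 182.0, 164.0                  # surface stresses at x1 and x2 (MPa), assumed

        # linear extrapolation to x = 0 (the weld toe)
        slope = (s2 - s1) / (x2 - x1)
        hot_spot_stress = s1 - slope * x1      # equivalently 1.67*s1 - 0.67*s2 for 0.4t/1.0t points
        print(f"hot spot stress = {hot_spot_stress:.1f} MPa")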

  10. Extrapolation of bulk rock elastic moduli of different rock types to high pressure conditions and comparison with texture-derived elastic moduli

    NASA Astrophysics Data System (ADS)

    Ullemeyer, Klaus; Lokajíček, Tomás; Vasin, Roman N.; Keppler, Ruth; Behrmann, Jan H.

    2018-02-01

    In this study elastic moduli of three different rock types of simple (calcite marble) and more complex (amphibolite, micaschist) mineralogical compositions were determined by modeling of elastic moduli using texture (crystallographic preferred orientation; CPO) data, experimental investigation and extrapolation. 3D models were calculated using single crystal elastic moduli, and CPO measured using time-of-flight neutron diffraction at the SKAT diffractometer in Dubna (Russia) and subsequently analyzed using Rietveld Texture Analysis. To define extrinsic factors influencing elastic behaviour, P-wave and S-wave velocity anisotropies were experimentally determined at 200, 400 and 600 MPa confining pressure. Functions describing variations of the elastic moduli with confining pressure were then used to predict elastic properties at 1000 MPa, revealing anisotropies in a supposedly crack-free medium. In the calcite marble elastic anisotropy is dominated by the CPO. Velocities continuously increase, while anisotropies decrease from measured, over extrapolated to CPO derived data. Differences in velocity patterns with sample orientation suggest that the foliation forms an important mechanical anisotropy. The amphibolite sample shows similar magnitudes of extrapolated and CPO derived velocities, however the pattern of CPO derived velocity is closer to that measured at 200 MPa. Anisotropy decreases from the extrapolated to the CPO derived data. In the micaschist, velocities are higher and anisotropies are lower in the extrapolated data, in comparison to the data from measurements at lower pressures. Generally our results show that predictions for the elastic behavior of rocks at great depths are possible based on experimental data and those computed from CPO. The elastic properties of the lower crust can, thus, be characterized with an improved degree of confidence using extrapolations. Anisotropically distributed spherical micro-pores are likely to be preserved, affecting

  11. Attention maintains mental extrapolation of target position: irrelevant distractors eliminate forward displacement after implied motion.

    PubMed

    Kerzel, Dirk

    2003-05-01

    Observers' judgments of the final position of a moving target are typically shifted in the direction of implied motion ("representational momentum"). The role of attention is unclear: visual attention may be necessary to maintain or halt target displacement. When attention was captured by irrelevant distractors presented during the retention interval, forward displacement after implied target motion disappeared, suggesting that attention may be necessary to maintain mental extrapolation of target motion. In a further corroborative experiment, the deployment of attention was measured after a sequence of implied motion, and faster responses were observed to stimuli appearing in the direction of motion. Thus, attention may guide the mental extrapolation of target motion. Additionally, eye movements were measured during stimulus presentation and retention interval. The results showed that forward displacement with implied motion does not depend on eye movements. Differences between implied and smooth motion are discussed with respect to recent neurophysiological findings.

  12. Quantification of the biocontrol agent Trichoderma harzianum with real-time TaqMan PCR and its potential extrapolation to the hyphal biomass.

    PubMed

    López-Mondéjar, Rubén; Antón, Anabel; Raidl, Stefan; Ros, Margarita; Pascual, José Antonio

    2010-04-01

    The species of the genus Trichoderma are used successfully as biocontrol agents against a wide range of phytopathogenic fungi. Among them, Trichoderma harzianum is especially effective. However, to develop more effective fungal biocontrol strategies in organic substrates and soil, tools for monitoring the control agents are required. Real-time PCR is potentially an effective tool for the quantification of fungi in environmental samples. The aim of this study was the development and application of a real-time PCR-based method for the quantification of T. harzianum, and the extrapolation of these data to fungal biomass values. A set of primers and a TaqMan probe for the ITS region of the fungal genome were designed and tested, and amplification was correlated with biomass measurements obtained by optical microscopy and image analysis of the hyphal length of the colony mycelium. A correlation of 0.76 between ITS copies and biomass was obtained. The extrapolation of the quantity of ITS copies, calculated from real-time PCR data, into quantities of fungal biomass potentially provides a more accurate estimate of the quantity of soil fungi. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Improving in vitro to in vivo extrapolation by incorporating toxicokinetic measurements: A case study of lindane-induced neurotoxicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croom, Edward L.; Shafer, Timothy J.; Evans, Marina V.

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using “faux” (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC50 for increased firing rates in primary cultures of cortical neurons was 0.6 μg/ml. Media and cell lindane concentrations at the EC50 were 0.4 μg/ml and 7.1 μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7–1.9 μg/ml and 5–11 μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average = 7 μg/ml) by PBPK modeling were very similar to in vitro concentrations detected in cortical cells at the EC50 dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity. - Highlights: • In vitro to in vivo extrapolation for lindane neurotoxicity was performed. • Dosimetry of lindane in a micro-electrode array (MEA) test system was assessed. • Cell concentrations at the

  14. The Effect of Format and Organization on Extrapolation and Interpolation with Multiple Trend Displays.

    ERIC Educational Resources Information Center

    Wolfe, Mary L.; Martuza, Victor R.

    The major purpose of this experiment was to examine the effects of format (bar graphs vs. tables) and organization (by year vs. by brand) on the speed and accuracy of extrapolation and interpolation with multiple, nonlinear trend displays. Fifty-six undergraduates enrolled in the College of Education at the University of Delaware served as the…

  15. Spatial extrapolation of lysimeter results using thermal infrared imaging

    NASA Astrophysics Data System (ADS)

    Voortman, B. R.; Bosveld, F. C.; Bartholomeus, R. P.; Witte, J. P. M.

    2016-12-01

    Measuring evaporation (E) with lysimeters is costly and prone to numerous errors. By comparing the energy balance and the remotely sensed surface temperature of lysimeters with those of the undisturbed surroundings, we were able to assess the representativeness of lysimeter measurements and to quantify differences in evaporation caused by spatial variations in soil moisture content. We used an algorithm (the so-called 3T model) to spatially extrapolate the measured E of a reference lysimeter based on differences in surface temperature, net radiation and soil heat flux. We tested the performance of the 3T model on measurements with multiple lysimeters (47.5 cm inner diameter) and micro lysimeters (19.2 cm inner diameter) installed in bare sand, moss and natural dry grass. We developed different scaling procedures using in situ measurements and remotely sensed surface temperatures to derive spatially distributed estimates of Rn and G and explored the physical soundness of the 3T model. Scaling of Rn and G considerably improved the performance of the 3T model for the bare sand and moss experiments (Nash-Sutcliffe efficiency (NSE) increasing from 0.45 to 0.89 and from 0.81 to 0.94, respectively). For the grass surface, the scaling procedures resulted in a poorer performance of the 3T model (NSE decreasing from 0.74 to 0.70), which was attributed to effects of shading and the difficulty of correcting for differences in emissivity between dead and living biomass. The 3T model is physically unsound if the field-scale average air temperature, measured at an arbitrarily chosen reference height, is used as input to the model. The proposed measurement system is relatively cheap, since it uses a zero-tension (freely draining) lysimeter whose results are extrapolated by the 3T model to the unaffected surroundings. The system is promising for bridging the gap between ground observations and satellite-based estimates of E.

  16. Adsorption of pharmaceuticals onto activated carbon fiber cloths - Modeling and extrapolation of adsorption isotherms at very low concentrations.

    PubMed

    Fallou, Hélène; Cimetière, Nicolas; Giraudet, Sylvain; Wolbert, Dominique; Le Cloirec, Pierre

    2016-01-15

    Activated carbon fiber cloths (ACFC) have shown promising results when applied to water treatment, especially for removing organic micropollutants such as pharmaceutical compounds. Nevertheless, further investigations are required, especially at the trace concentrations found in current water treatment. Until now, most studies have been carried out at relatively high concentrations (mg L(-1)), since the experimental and analytical methodologies are more difficult and more expensive when dealing with lower concentrations (ng L(-1)). Therefore, the objective of this study was to validate an extrapolation procedure from high to low concentrations for four compounds (Carbamazepine, Diclofenac, Caffeine and Acetaminophen). For this purpose, the reliability of the usual adsorption isotherm models, when extrapolated from high (mg L(-1)) to low concentrations (ng L(-1)), was assessed, as well as the influence of numerous error functions. Some isotherm models (Freundlich, Toth) and error functions (RSS, ARE) show weaknesses when used to describe adsorption at low concentrations. However, from these results, the pairing of the Langmuir-Freundlich isotherm model with Marquardt's percent standard deviation was identified as the best combination, enabling the extrapolation of adsorption capacities over orders of magnitude. Copyright © 2015 Elsevier Ltd. All rights reserved.
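
    A sketch of that best combination - a Langmuir-Freundlich isotherm fitted by minimizing Marquardt's percent standard deviation (MPSD) and then evaluated at a trace concentration - is given below; the isotherm data, initial parameter guesses and the 100 ng/L query point are hypothetical.

        # Langmuir-Freundlich isotherm fitted with the MPSD error function, then
        # extrapolated to a trace concentration.  Data are illustrative.
        import numpy as np
        from scipy.optimize import minimize

        # measured isotherm at mg/L levels: equilibrium concentration Ce, capacity qe
        Ce = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])         # mg/L
        qe = np.array([18.0, 29.0, 74.0, 103.0, 176.0, 205.0])  # mg/g

        def langmuir_freundlich(c, qm, k, n):
            return qm * (k * c) ** n / (1.0 + (k * c) ** n)

        def mpsd(params):
            qm, k, n = params
            resid = (qe - langmuir_freundlich(Ce, qm, k, n)) / qe
            return 100.0 * np.sqrt(np.sum(resid ** 2) / (len(qe) - 3))

        fit = minimize(mpsd, x0=[250.0, 1.0, 0.7], method="Nelder-Mead")
        qm, k, n = fit.x

        # extrapolate adsorption capacity down to a trace (ng/L) concentration
        c_trace = 100e-6   # 100 ng/L expressed in mg/L
        print(f"qe at 100 ng/L ~ {langmuir_freundlich(c_trace, qm, k, n):.3f} mg/g, "
              f"MPSD = {fit.fun:.1f}%")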

  17. Opportunities and Challenges in Employing In Vitro-In Vivo Extrapolation (IVIVE) to the Tox21 Dataset

    EPA Science Inventory

    In vitro-in vivo extrapolation (IVIVE), or the process of using in vitro data to predict in vivo phenomena, provides key opportunities to bridge the disconnect between high-throughput screening data and real-world human exposures and potential health effects. Strategies utilizing...

  18. A study of alternative schemes for extrapolation of secular variation at observatories

    USGS Publications Warehouse

    Alldredge, L.R.

    1976-01-01

    The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable in worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work. © 1976.

  19. Electron density extrapolation above F2 peak by the linear Vary-Chap model supporting new Global Navigation Satellite Systems-LEO occultation missions

    NASA Astrophysics Data System (ADS)

    Hernández-Pajares, Manuel; Garcia-Fernández, Miquel; Rius, Antonio; Notarpietro, Riccardo; von Engeln, Axel; Olivares-Pulido, Germán.; Aragón-Àngel, Àngela; García-Rigo, Alberto

    2017-08-01

    The new radio-occultation (RO) instrument on board the future EUMETSAT Polar System-Second Generation (EPS-SG) satellites, flying at a height of 820 km, is primarily focusing on neutral atmospheric profiling. It will also provide an opportunity for RO ionospheric sounding, but only below impact heights of 500 km, in order to guarantee a full data gathering of the neutral part. This will leave a gap of 320 km, which impedes the application of the direct inversion techniques to retrieve the electron density profile. To overcome this challenge, we have looked for new ways (accurate and simple) of extrapolating the electron density (also applicable to other low-Earth orbiting, LEO, missions like CHAMP): a new Vary-Chap Extrapolation Technique (VCET). VCET is based on the scale height behavior, linearly dependent on the altitude above hmF2. This allows extrapolating the electron density profile for impact heights above its peak height (this is the case for EPS-SG), up to the satellite orbital height. VCET has been assessed with more than 3700 complete electron density profiles obtained in four representative scenarios of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) in the United States and the Formosa Satellite Mission 3 (FORMOSAT-3) in Taiwan, in solar maximum and minimum conditions, and geomagnetically disturbed conditions, by applying an updated Improved Abel Transform Inversion technique to dual-frequency GPS measurements. It is shown that VCET performs much better than other classical Chapman models, with 60% of occultations showing relative extrapolation errors below 20%, in contrast with conventional Chapman model extrapolation approaches with 10% or less of the profiles with relative error below 20%.
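
    A minimal sketch of a Chapman-type topside extrapolation with a linearly increasing scale height, in the spirit of the linear Vary-Chap model described above, is given below; the peak density, peak height and scale-height coefficients are invented, and the exact parameterization used by VCET may differ.

        # Alpha-Chapman topside profile with an altitude-dependent scale height,
        # extrapolated from the F2 peak up to an 820 km orbit height.
        import numpy as np

        NmF2 = 1.0e12           # peak electron density (el/m^3), assumed
        hmF2 = 300.0            # peak height (km), assumed
        H0, dHdh = 45.0, 0.15   # scale height at the peak and its linear gradient, assumed

        def ne_vary_chap(h):
            """Chapman profile with scale height H(h) growing linearly above hmF2."""
            H = H0 + dHdh * (h - hmF2)
            z = (h - hmF2) / H
            return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

        h = np.arange(hmF2, 821.0, 20.0)
        for hk, ne in zip(h, ne_vary_chap(h)):
            print(f"{hk:5.0f} km  {ne:10.3e} el/m^3")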

  20. Communication: A novel implementation to compute MP2 correlation energies without basis set superposition errors and complete basis set extrapolation.

    PubMed

    Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario

    2017-06-07

    By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.

  1. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.

  2. Extrapolating regional probability of drying of headwater streams using discrete observations and gauging networks

    NASA Astrophysics Data System (ADS)

    Beaufort, Aurélien; Lamouroux, Nicolas; Pella, Hervé; Datry, Thibault; Sauquet, Eric

    2018-05-01

    Headwater streams represent a substantial proportion of river systems and many of them have intermittent flows due to their upstream position in the network. These intermittent rivers and ephemeral streams have recently seen a marked increase in interest, especially to assess the impact of drying on aquatic ecosystems. The objective of this paper is to quantify how discrete (in space and time) field observations of flow intermittence help to extrapolate over time the daily probability of drying (defined at the regional scale). Two empirical models based on linear or logistic regressions have been developed to predict the daily probability of intermittence at the regional scale across France. Explanatory variables were derived from available daily discharge and groundwater-level data of a dense gauging/piezometer network, and models were calibrated using discrete series of field observations of flow intermittence. The robustness of the models was tested using an independent, dense regional dataset of intermittence observations and observations of the year 2017 excluded from the calibration. The resulting models were used to extrapolate the daily regional probability of drying in France: (i) over the period 2011-2017 to identify the regions most affected by flow intermittence; (ii) over the period 1989-2017, using a reduced input dataset, to analyse temporal variability of flow intermittence at the national level. The two empirical regression models performed equally well between 2011 and 2017. The accuracy of predictions depended on the number of continuous gauging/piezometer stations and intermittence observations available to calibrate the regressions. Regions with the highest performance were located in sedimentary plains, where the monitoring network was dense and where the regional probability of drying was the highest. Conversely, the worst performances were obtained in mountainous regions. Finally, temporal projections (1989-2016) suggested the highest
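
    The logistic-regression variant described above can be sketched, in very reduced form, as a single-predictor model linking a regional low-flow indicator to dry/flowing field observations; the indicator, the synthetic observations and the use of scikit-learn are assumptions made for illustration.

        # Logistic regression of discrete drying observations (dry = 1, flowing = 0)
        # on a regional low-flow indicator, then extrapolated to daily probabilities.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)

        # regional indicator on observation days, e.g. a standardized low-flow anomaly
        indicator_obs = rng.normal(0.0, 1.0, size=200)
        # synthetic field observations: drying more likely when the indicator is low
        p_true = 1.0 / (1.0 + np.exp(-(-0.5 - 2.0 * indicator_obs)))
        dry_obs = rng.binomial(1, p_true)

        model = LogisticRegression().fit(indicator_obs.reshape(-1, 1), dry_obs)

        # extrapolate the daily probability of drying over a continuous period
        indicator_daily = rng.normal(0.0, 1.0, size=365)
        p_daily = model.predict_proba(indicator_daily.reshape(-1, 1))[:, 1]
        print(f"mean annual probability of drying: {p_daily.mean():.2f}")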

  3. DOSE-RESPONSE BEHAVIOR OF ANDROGENIC AND ANTIANDROGENIC CHEMICALS: IMPLICATIONS FOR LOW-DOSE EXTRAPOLATION AND CUMULATIVE TOXICITY

    EPA Science Inventory

    DOSE-RESPONSE BEHAVIOR OF ANDROGENIC AND ANTIANDROGENIC CHEMICALS: IMPLICATIONS FOR LOW-DOSE EXTRAPOLATION AND CUMULATIVE TOXICITY. LE Gray Jr, C Wolf, J Furr, M Price, C Lambright, VS Wilson and J Ostby. USEPA, ORD, NHEERL, EB, RTD, RTP, NC, USA.
    Dose-response behavior of a...

  4. Dose measurement in heterogeneous phantoms with an extrapolation chamber

    NASA Astrophysics Data System (ADS)

    Deblois, Francois

    A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water(TM) and bone-equivalent material was used for determining absolute dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x-rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The air gaps used were between 2 and 3 mm and the sensitive air volume of the extrapolation chamber was remotely controlled through the motion of the motorized piston with a precision of +/-0.0025 mm. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain dose data for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC from 0.7 to ˜2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water(TM) PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). The collecting electrode material in comparison with the polarizing electrode material has a larger effect on the electrode correction factor; the thickness of thin

  5. Improving toxicity extrapolation using molecular sequence similarity: A case study of pyrethroids and the sodium ion channel

    EPA Science Inventory

    A significant challenge in ecotoxicology has been determining chemical hazards to species with limited or no toxicity data. Currently, extrapolation tools like U.S. EPA’s Web-based Interspecies Correlation Estimation (Web-ICE; www3.epa.gov/webice) models categorize toxicity...

  6. Surface dose measurements with commonly used detectors: a consistent thickness correction method

    PubMed Central

    Higgins, Patrick

    2015-01-01

    The purpose of this study was to review application of a consistent correction method for the solid state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films. In addition, to compare measured surface dose using an extrapolation ionization chamber (PTW 30‐360) with other parallel plate chambers RMI‐449 (Attix), Capintec PS‐033, PTW 30‐329 (Markus) and Memorial. Measurements of surface dose for 6 MV photons with parallel plate chambers were used to establish a baseline. cTLD, OSLs, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three dosimeter stacks, followed by linear extrapolation of individual dosimeter measurements to zero thickness. We determined the magnitude of correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse, compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (−0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid‐state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three‐detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth‐dose curves
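
    One plausible reading of the three-dosimeter-stack correction is a linear fit of the layer readings against their mid-layer depths, extrapolated to zero thickness, as sketched below; the detector thickness, readings and the mid-layer depth assignment are illustrative assumptions rather than values or details from the study.

        # Linear extrapolation of stacked-dosimeter readings to zero thickness.
        import numpy as np

        thickness_mm = 0.89                         # single detector thickness, assumed
        readings = np.array([31.5, 42.0, 50.5])     # dose (cGy) in layers 1-3, assumed

        # effective depth of each layer taken at its centre
        depths = (np.arange(3) + 0.5) * thickness_mm

        slope, intercept = np.polyfit(depths, readings, 1)
        surface_dose = intercept                    # linear extrapolation to zero thickness
        print(f"extrapolated surface dose = {surface_dose:.1f} cGy")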

  7. Surface dose measurements with commonly used detectors: a consistent thickness correction method.

    PubMed

    Reynolds, Tatsiana A; Higgins, Patrick

    2015-09-08

    The purpose of this study was to review application of a consistent correction method for the solid state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films. In addition, to compare measured surface dose using an extrapolation ionization chamber (PTW 30-360) with other parallel plate chambers RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus) and Memorial. Measurements of surface dose for 6MV photons with parallel plate chambers were used to establish a baseline. cTLD, OSLs, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three dosimeter stacks, followed by linear extrapolation of individual dosimeter measurements to zero thickness. We determined the magnitude of correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse, compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (-0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not

  8. Estimation of low-level neutron dose-equivalent rate by using extrapolation method for a curie level Am-Be neutron source.

    PubMed

    Li, Gang; Xu, Jiayun; Zhang, Jie

    2015-01-01

    Neutron radiation protection is an important research area because of the strong radiobiological effect of the neutron field. The radiation dose from neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. the Am-Be source), the commonly used commercial neutron dosimeter cannot always reflect the low-level dose rate, being restricted by its own sensitivity limit and measuring range. In this paper, the intensity distribution of the neutron field caused by a curie-level Am-Be neutron source was investigated by measuring the count rates obtained through a 3He proportional counter at different locations around the source. The results indicate that the count rates outside of the source room are negligible compared with the count rates measured in the source room. In the source room, the 3He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances to the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by the Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose-equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from the shield with different radiation intensities. Based on this relationship, as well as the count rates measured at larger distances to the source, the dose rates can be calculated approximately by the extrapolation method. This principle can be used to estimate low-level neutron dose values in the source room which cannot be measured directly by a commercial dosimeter. Copyright © 2014 Elsevier Ltd. All rights reserved.
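
    A minimal sketch of the extrapolation idea, assuming (as described above) an exponential relation between the low-energy count rate and the dose-equivalent rate. The "calibration" points are synthetic placeholders, not the paper's measurements, and scipy is used only for the curve fit.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def dose_model(count_rate, a, b):
    # assumed empirical form: dose-equivalent rate rises exponentially with count rate
    return a * np.exp(b * count_rate)

# Synthetic calibration points taken near the source, where both the 3He counter
# and a commercial dosimeter give usable readings (placeholder numbers).
count_rate = np.array([300.0, 800.0, 1500.0, 2500.0, 4000.0])            # counts/s
dose_rate = dose_model(count_rate, 0.8, 6e-4) * rng.normal(1, 0.03, 5)   # uSv/h

popt, _ = curve_fit(dose_model, count_rate, dose_rate, p0=(1.0, 1e-3))

# Far from the source the dosimeter is below its sensitivity limit, but the
# counter still registers events, so the fitted relation supplies the dose rate.
print(f"estimated dose rate at 50 counts/s: {dose_model(50.0, *popt):.2f} uSv/h")
```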

  9. J-85 jet engine noise measured in the ONERA S1 wind tunnel and extrapolated to far field

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Julienne, Alain; Atencio, Adolph, Jr.

    1991-01-01

    Noise from a J-85 turbojet with a conical, convergent nozzle was measured in simulated flight in the ONERA S1 Wind Tunnel. Data are presented for several flight speeds up to 130 m/sec and for radiation angles of 40 to 160 degrees relative to the upstream direction. The jet was operated with subsonic and sonic exhaust speeds. A moving microphone on a 2 m sideline was used to survey the radiated sound field in the acoustically treated, closed test section. The data were extrapolated to a 122 m sideline by means of a multiple-sideline source-location method, which was used to identify the acoustic source regions, directivity patterns, and near field effects. The source-location method is described along with its advantages and disadvantages. Results indicate that the effects of simulated flight on J-85 noise are significant. At the maximum forward speed of 130 m/sec, the peak overall sound levels in the aft quadrant were attenuated approximately 10 dB relative to sound levels of the engine operated statically. As expected, the simulated flight and static data tended to merge in the forward quadrant as the radiation angle approached 40 degrees. There is evidence that internal engine or shock noise was important in the forward quadrant. The data are compared with published predictions for flight effects on pure jet noise and internal engine noise. A new empirical prediction is presented that relates the variation of internally generated engine noise or broadband shock noise to forward speed. Measured near field noise extrapolated to far field agrees reasonably well with data from similar engines tested statically outdoors, in flyover, in a wind tunnel, and on the Bertin Aerotrain. Anomalies in the results for the forward quadrant and for angles above 140 degrees are discussed. The multiple-sideline method proved to be cumbersome in this application, and it did not resolve all of the uncertainties associated with measurements of jet noise close to the jet. The
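
    The multiple-sideline source-location extrapolation used above is not reproduced here; the snippet below only illustrates the simplest ingredient of any far-field projection, free-field spherical spreading from the 2 m to the 122 m sideline (6 dB per doubling of distance). Levels and angles are placeholders, and atmospheric absorption and near-field corrections are deliberately omitted.

```python
import numpy as np

# Illustrative overall levels on a 2 m sideline, one per radiation angle (dB).
angles_deg = np.array([40, 60, 80, 100, 120, 140, 160])
spl_2m = np.array([118.0, 121.0, 124.0, 127.0, 131.0, 134.0, 130.0])

r_near, r_far = 2.0, 122.0
# Spherical spreading only: SPL(r2) = SPL(r1) - 20*log10(r2/r1).
spl_122m = spl_2m - 20.0 * np.log10(r_far / r_near)

for angle, level in zip(angles_deg, spl_122m):
    print(f"{angle:3d} deg: {level:5.1f} dB")
```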

  10. USING MODELS TO EXTRAPOLATE POPULATION-LEVEL EFFECTS FROM LABORATORY TOXICITY TESTS IN SUPPORT OF POPULATION RISK ASSESSMENTS

    EPA Science Inventory

    Using models to extrapolate population-level effects from laboratory toxicity tests in support of population risk assessments. Munns, W.R., Jr.*, Anne Kuhn, Matt G. Mitro, and Timothy R. Gleason, U.S. EPA ORD NHEERL, Narragansett, RI, USA. Driven in large part by management goa...

  11. EVALUATING TOOLS AND MODELS USED FOR QUANTITATIVE EXTRAPOLATION OF IN VITRO TO IN VIVO DATA FOR NEUROTOXICANTS*

    EPA Science Inventory

    There are a number of risk management decisions, which range from prioritization for testing to quantitative risk assessments. The utility of in vitro studies in these decisions depends on how well the results of such data can be qualitatively and quantitatively extrapolated to i...

  12. Single-source dual-energy computed tomography: use of monoenergetic extrapolation for a reduction of metal artifacts.

    PubMed

    Mangold, Stefanie; Gatidis, Sergios; Luz, Oliver; König, Benjamin; Schabel, Christoph; Bongers, Malte N; Flohr, Thomas G; Claussen, Claus D; Thomas, Christoph

    2014-12-01

    The objective of this study was to retrospectively determine the potential of virtual monoenergetic (ME) reconstructions for a reduction of metal artifacts using a new-generation single-source computed tomographic (CT) scanner. The ethics committee of our institution approved this retrospective study with a waiver of the need for informed consent. A total of 50 consecutive patients (29 men and 21 women; mean [SD] age, 51.3 [16.7] years) with metal implants after osteosynthetic fracture treatment who had been examined using a single-source CT scanner (SOMATOM Definition Edge; Siemens Healthcare, Forchheim, Germany; consecutive dual-energy mode with 140 kV/80 kV) were selected. Using commercially available postprocessing software (syngo Dual Energy; Siemens AG), virtual ME data sets with extrapolated energy of 130 keV were generated (medium smooth convolution kernel D30) and compared with standard polyenergetic images reconstructed with a B30 (medium smooth) and a B70 (sharp) kernel. For quantification of the beam hardening artifacts, CT values were measured on circular lines surrounding bone and the osteosynthetic device, and frequency analyses of these values were performed using discrete Fourier transform. A high proportion of low frequencies to the spectrum indicates a high level of metal artifacts. The measurements in all data sets were compared using the Wilcoxon signed rank test. The virtual ME images with extrapolated energy of 130 keV showed significantly lower contribution of low frequencies after the Fourier transform compared with any polyenergetic data set reconstructed with D30, B70, and B30 kernels (P < 0.001). Sequential single-source dual-energy CT allows an efficient reduction of metal artifacts using high-energy ME extrapolation after osteosynthetic fracture treatment.
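
    The artifact metric described above reduces to a short computation: sample CT numbers on a circle around the implant, Fourier-transform them, and report the share of spectral power sitting in the lowest non-zero frequencies. The sketch below applies that idea to a toy image with a simulated two-lobed streak; the image, radius and frequency cut-off are arbitrary stand-ins, not the study's data or code.

```python
import numpy as np

def low_frequency_share(image, centre, radius, n_samples=360, n_low=8):
    """Fraction of non-DC spectral power in the lowest Fourier bins of the CT
    values sampled on a circle; larger values suggest stronger streak artifacts."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    rows = np.round(centre[0] + radius * np.sin(theta)).astype(int)
    cols = np.round(centre[1] + radius * np.cos(theta)).astype(int)
    profile = image[rows, cols].astype(float)
    power = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
    return power[1:1 + n_low].sum() / power[1:].sum()

# Toy image: uniform background plus a two-lobed bright/dark "streak" pattern.
yy, xx = np.mgrid[0:256, 0:256]
angle = np.arctan2(yy - 128, xx - 128)
toy = 40.0 + 300.0 * np.cos(2 * angle)
print(f"low-frequency share: {low_frequency_share(toy, (128, 128), 60):.2f}")
```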

  13. Measured and Modeled Toxicokinetics in Cultured Fish Cells and Application to In Vitro - In Vivo Toxicity Extrapolation

    PubMed Central

    Stadnicka-Michalak, Julita; Tanneberger, Katrin; Schirmer, Kristin; Ashauer, Roman

    2014-01-01

    Effect concentrations in the toxicity assessment of chemicals with fish and fish cells are generally based on external exposure concentrations. External concentrations as dose metrics, may, however, hamper interpretation and extrapolation of toxicological effects because it is the internal concentration that gives rise to the biological effective dose. Thus, we need to understand the relationship between the external and internal concentrations of chemicals. The objectives of this study were to: (i) elucidate the time-course of the concentration of chemicals with a wide range of physicochemical properties in the compartments of an in vitro test system, (ii) derive a predictive model for toxicokinetics in the in vitro test system, (iii) test the hypothesis that internal effect concentrations in fish (in vivo) and fish cell lines (in vitro) correlate, and (iv) develop a quantitative in vitro to in vivo toxicity extrapolation method for fish acute toxicity. To achieve these goals, time-dependent amounts of organic chemicals were measured in medium, cells (RTgill-W1) and the plastic of exposure wells. Then, the relation between uptake, elimination rate constants, and log KOW was investigated for cells in order to develop a toxicokinetic model. This model was used to predict internal effect concentrations in cells, which were compared with internal effect concentrations in fish gills predicted by a Physiologically Based Toxicokinetic model. Our model could predict concentrations of non-volatile organic chemicals with log KOW between 0.5 and 7 in cells. The correlation of the log ratio of internal effect concentrations in fish gills and the fish gill cell line with the log KOW was significant (r>0.85, p = 0.0008, F-test). This ratio can be predicted from the log KOW of the chemical (77% of variance explained), comprising a promising model to predict lethal effects on fish based on in vitro data. PMID:24647349

  14. Evidence that Arrhenius high-temperature aging behavior for an EPDM o-ring does not extrapolate to lower temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Wise, J.; Celina, M.

    1997-09-01

    Because of the need to significantly extend the lifetimes of weapons, and because of potential implications of environmental O-ring failure on degradation of critical internal weapon components, the authors have been working on improved methods of predicting and verifying O-ring lifetimes. In this report, they highlight the successful testing of a new predictive method for deriving more confident lifetime extrapolations. This method involves ultrasensitive oxygen consumption measurements. The material studied is an EPDM formulation used for the environmental O-ring in the W88. Conventional oven aging (155 C to 111 C) was done on compression molded sheet material; periodically, samples were removed from the ovens and subjected to various measurements, including ultimate tensile elongation, density and modulus profiles. Compression stress relaxation (CSR) measurements were made at 125 C and 111 C on disc shaped samples (12.7 mm diameter by 6 mm thick) using a Shawbury Wallace Compression Stress Relaxometer MK 2. Oxygen consumption measurements were made versus time, at temperatures ranging from 160 C to 52 C, using chromatographic quantification of the change in oxygen content caused by reaction with the EPDM material in sealed containers.
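
    The conventional lifetime extrapolation that this work puts to the test rests on Arrhenius behaviour of a degradation rate, here oxygen consumption: fit ln(rate) against 1/T and project the rate to the service temperature. The sketch below uses invented rates and an assumed single activation energy; the paper's point is precisely that such an extrapolation can break down at lower temperatures.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Placeholder oxygen-consumption rates at accelerated-aging temperatures.
T_C = np.array([160.0, 140.0, 125.0, 111.0, 96.0, 80.0])
rate = np.array([3.1e-9, 8.9e-10, 3.0e-10, 9.5e-11, 2.6e-11, 6.0e-12])  # mol/(g s)

# Arrhenius fit: ln(rate) = ln(A) - Ea / (R * T)
T_K = T_C + 273.15
slope, lnA = np.polyfit(1.0 / T_K, np.log(rate), 1)
Ea = -slope * R
print(f"apparent activation energy: {Ea / 1000:.0f} kJ/mol")

# Extrapolate the rate to a service temperature well below the aging ovens.
T_service = 40.0 + 273.15
rate_service = np.exp(lnA + slope / T_service)
print(f"predicted rate at 40 C: {rate_service:.2e} mol/(g s)")
print(f"implied acceleration factor, 111 C vs 40 C: {rate[3] / rate_service:.0f}x")
```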

  15. Interpolation/extrapolation technique with application to hypervelocity impact of space debris

    NASA Technical Reports Server (NTRS)

    Rule, William K.

    1992-01-01

    A new technique for the interpolation/extrapolation of engineering data is described. The technique easily allows for the incorporation of additional independent variables, and the most suitable data in the data base is automatically used for each prediction. The technique provides diagnostics for assessing the reliability of the prediction. Two sets of predictions made for known 5-degree-of-freedom, 15-parameter functions using the new technique produced an average coefficient of determination of 0.949. Here, the technique is applied to the prediction of damage to the Space Station from hypervelocity impact of space debris. A new set of impact data is presented for this purpose. Reasonable predictions for bumper damage were obtained, but predictions of pressure wall and multilayer insulation damage were poor.

  16. Extrapolation of dynamic load behaviour on hydroelectric turbine blades with cyclostationary modelling

    NASA Astrophysics Data System (ADS)

    Poirier, Marc; Gagnon, Martin; Tahan, Antoine; Coutu, André; Chamberland-Lauzon, Joël

    2017-01-01

    In this paper, we present the application of cyclostationary modelling for the extrapolation of short stationary load strain samples measured in situ on hydraulic turbine blades. Long periods of measurements allow for a wide range of fluctuations representative of long-term reality to be considered. However, sampling over short periods limits the dynamic strain fluctuations available for analysis. The purpose of the technique presented here is therefore to generate a representative signal containing proper long term characteristics and expected spectrum starting with a much shorter signal period. The final objective is to obtain a strain history that can be used to estimate long-term fatigue behaviour of hydroelectric turbine runners.

  17. High- to low-dose extrapolation: critical determinants involved in the dose response of carcinogenic substances.

    PubMed

    Swenberg, J A; Richardson, F C; Boucheron, J A; Deal, F H; Belinsky, S A; Charbonneau, M; Short, B G

    1987-12-01

    Recent investigations on mechanisms of carcinogenesis have demonstrated important quantitative relationships between the induction of neoplasia, the molecular dose of promutagenic DNA adducts and their efficiency for causing base-pair mismatch, and the extent of cell proliferation in the target organ. These factors are involved in the multistage process of carcinogenesis, including initiation, promotion, and progression. The molecular dose of DNA adducts can exhibit supralinear, linear, or sublinear relationships to external dose due to differences in absorption, biotransformation, and DNA repair at high versus low doses. In contrast, increased cell proliferation is a common phenomenon that is associated with exposures to relatively high doses of toxic chemicals. As such, it enhances the carcinogenic response at high doses, but has little effect at low doses. Since data on cell proliferation can be obtained for any exposure scenario and molecular dosimetry studies are beginning to emerge on selected chemical carcinogens, methods are needed so that these critical factors can be utilized in extrapolation from high to low doses and across species. The use of such information may provide a scientific basis for quantitative risk assessment.

  18. High Temperature Corrosion and Characterization Studies in Flux Cored Arc Welded 2.25Cr-1Mo Power Plant Steel

    NASA Astrophysics Data System (ADS)

    Kumaresh Babu, S. P.; Natarajan, S.

    2010-07-01

    Higher productivity is registered with the flux cored arc welding (FCAW) process in many applications. Further, it combines the characteristics of the shielded metal arc welding (SMAW), gas metal arc welding (GMAW), and submerged arc welding (SAW) processes. This article describes the experimental work carried out to evaluate and compare corrosion and its inhibition in SA 387 Gr.22 (2.25Cr-1Mo) steel weldments prepared by the FCAW process with four different heat inputs, exposed to hydrochloric acid medium at 0.1, 0.5, and 1.0 M concentrations. The parent metal, weld metal, and heat-affected zone are chosen as regions of exposure for the study carried out at 100 °C. Electrochemical polarization techniques such as Tafel line extrapolation (Tafel) and linear polarization resistance (LPR) have been used to measure the corrosion current. The roles of hexamine and a mixed inhibitor (thiourea + hexamine in 0.5 M HCl), each at 100 ppm concentration, are studied in these experiments. Microstructural observation, hardness survey, surface characterization, and morphology studies using scanning electron microscopy (SEM) and x-ray diffraction (XRD) have been made on samples to highlight the nature and extent of film formation. The film is found to contain Fe2Si, FeSi2, FeMn3, Fe7Mo3, Fe3O4, FeO, FeCr, AlO7Fe3SiO3, and KFe4Mn77Si19.
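
    Because Tafel line extrapolation recurs throughout these records, a schematic version of the fit is sketched below: straight lines are fitted to the anodic and cathodic branches of a polarization curve on a log-current scale, well away from the corrosion potential, and their intersection gives estimates of Ecorr and icorr. The curve here is generated from a Butler-Volmer-type expression with invented parameters rather than measured data, and the ±80 mV fitting windows are an arbitrary choice.

```python
import numpy as np

# Synthetic polarization curve from a Butler-Volmer-type (two-Tafel-line) expression.
i_corr_true, E_corr_true = 2.0e-5, -0.45      # A/cm^2, V (placeholders)
beta_a, beta_c = 0.060, 0.120                 # anodic/cathodic Tafel slopes, V/decade
E = np.linspace(E_corr_true - 0.25, E_corr_true + 0.25, 501)
eta = E - E_corr_true
i_net = i_corr_true * (10 ** (eta / beta_a) - 10 ** (-eta / beta_c))

# Tafel extrapolation: fit log10|i| vs E on each branch beyond |eta| = 80 mV,
# then intersect the two straight lines.
anodic, cathodic = eta > 0.08, eta < -0.08
pa = np.polyfit(E[anodic], np.log10(np.abs(i_net[anodic])), 1)
pc = np.polyfit(E[cathodic], np.log10(np.abs(i_net[cathodic])), 1)

E_corr_est = (pc[1] - pa[1]) / (pa[0] - pc[0])
i_corr_est = 10 ** np.polyval(pa, E_corr_est)
print(f"estimated E_corr = {E_corr_est:.3f} V, i_corr = {i_corr_est:.2e} A/cm^2")
```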

  19. Cell cultures in drug discovery and development: The need of reliable in vitro-in vivo extrapolation for pharmacodynamics and pharmacokinetics assessment.

    PubMed

    Jaroch, Karol; Jaroch, Alina; Bojko, Barbara

    2018-01-05

    For ethical and cost-related reasons, use of animals for the assessment of mode of action, metabolism and/or toxicity of new drug candidates has been increasingly scrutinized in research and industrial applications. Implementation of the 3 Rs rule (Reduction, Replacement, Refinement) through development of in silico or in vitro assays has become an essential element of risk assessment. Physiologically based pharmacokinetic (PBPK) modeling is the most potent in silico tool used for extrapolation of pharmacokinetic parameters to animal or human models from results obtained in vitro. Although many types of in vitro assays are conducted during drug development, use of cell cultures is the most reliable one. Two-dimensional (2D) cell cultures have been a part of drug development for many years. Nowadays, their role is decreasing in favor of three-dimensional (3D) cell cultures and co-cultures. 3D cultures exhibit protein expression patterns and intercellular junctions that are closer to in vivo states in comparison to classical monolayer cultures. Co-cultures allow for examinations of the mutual influence of different cell lines. However, the complexity and high costs of co-cultures and 3D equipment exclude such methods from high-throughput screening (HTS). In vitro absorption, distribution, metabolism, and excretion assessment, as well as drug-drug interaction (DDI), are usually performed with the use of various cell culture based assays. Progress in in silico and in vitro methods can lead to better in vitro-in vivo extrapolation (IVIVE) outcomes and has the potential to contribute towards a significant reduction in the number of laboratory animals needed for drug research. As such, concentrated efforts need to be spent towards the development of an HTS in vitro platform with satisfactory IVIVE features. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Electrochemical characteristics of calcium-phosphatized AZ31 magnesium alloy in 0.9 % NaCl solution.

    PubMed

    Hadzima, Branislav; Mhaede, Mansour; Pastorek, Filip

    2014-05-01

    Magnesium alloys suffer from their high reactivity in common environments. Protective layers are widely created on the surface of magnesium alloys to improve their corrosion resistance. This article evaluates the influence of a calcium-phosphate layer on the electrochemical characteristics of AZ31 magnesium alloy in 0.9 % NaCl solution. The calcium phosphate (CaP) layer was electrochemically deposited in a solution containing 0.1 M Ca(NO3)2, 0.06 M NH4H2PO4 and 10 ml l(-1) of H2O2. The formed surface layer was composed mainly of brushite [dicalcium phosphate dihydrate (DCPD)] as proved by energy-dispersive X-ray analysis. The surface morphology was observed by scanning electron microscopy. An immersion test was performed in order to observe degradation of the calcium-phosphatized surfaces. The influence of the phosphate layer on the electrochemical characteristics of AZ31, in 0.9 % NaCl solution, was evaluated by potentiodynamic measurements and electrochemical impedance spectroscopy. The obtained results were analysed by the Tafel-extrapolation method and the equivalent-circuits method. The results showed that the polarization resistance of the DCPD-coated surface is about 25 times higher than that of the non-coated surface. The CaP electro-deposition process increased the activation energy of the corrosion process.
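
    The polarization-resistance comparison above can be tied back to Tafel parameters through the Stern-Geary relation, i_corr = B / Rp with B = (beta_a * beta_c) / (2.303 * (beta_a + beta_c)). The sketch below inverts that relation for two hypothetical surfaces; the slopes and resistances are invented, chosen only to mimic a roughly 25-fold Rp ratio, and are not the paper's fitted values.

```python
# Stern-Geary conversion between polarization resistance and corrosion current density.
def stern_geary_icorr(beta_a, beta_c, Rp):
    B = beta_a * beta_c / (2.303 * (beta_a + beta_c))   # V
    return B / Rp                                       # A/cm^2 if Rp is in ohm*cm^2

surfaces = {
    "bare AZ31":        {"beta_a": 0.060, "beta_c": 0.150, "Rp": 120.0},
    "DCPD-coated AZ31": {"beta_a": 0.060, "beta_c": 0.150, "Rp": 3000.0},  # ~25x larger Rp
}

for name, p in surfaces.items():
    i_corr = stern_geary_icorr(p["beta_a"], p["beta_c"], p["Rp"])
    print(f"{name:18s} i_corr ~ {i_corr:.2e} A/cm^2")
```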

  1. A Comparison of Methods for Computing the Residual Resistivity Ratio of High-Purity Niobium

    PubMed Central

    Splett, J. D.; Vecchia, D. F.; Goodrich, L. F.

    2011-01-01

    We compare methods for estimating the residual resistivity ratio (RRR) of high-purity niobium and investigate the effects of using different functional models. RRR is typically defined as the ratio of the electrical resistances measured at 273 K (the ice point) and 4.2 K (the boiling point of helium at standard atmospheric pressure). However, pure niobium is superconducting below about 9.3 K, so the low-temperature resistance is defined as the normal-state (i.e., non-superconducting state) resistance extrapolated to 4.2 K and zero magnetic field. Thus, the estimated value of RRR depends significantly on the model used for extrapolation. We examine three models for extrapolation based on temperature versus resistance, two models for extrapolation based on magnetic field versus resistance, and a new model based on the Kohler relationship that can be applied to combined temperature and field data. We also investigate the possibility of re-defining RRR so that the quantity is not dependent on extrapolation. PMID:26989580
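
    The sensitivity to the extrapolation model is easy to reproduce in miniature: fit the normal-state resistance above Tc to some assumed temperature dependence and read it off at 4.2 K. The sketch below uses a generic residual-plus-power-law form and invented data in arbitrary resistance units; it is only a stand-in for the several candidate models the paper compares, and the resulting RRR shifts when the model changes.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder normal-state resistances above Tc (~9.3 K), arbitrary units.
T = np.array([10.0, 11.0, 12.0, 14.0, 16.0, 18.0, 20.0])
R = np.array([51.2, 51.6, 52.1, 53.3, 54.9, 57.0, 59.6])

def model(T, R0, a, n):
    # generic residual-plus-power-law form, not necessarily one of the paper's models
    return R0 + a * T ** n

popt, _ = curve_fit(model, T, R, p0=(50.0, 1e-3, 3.0))
R_42 = model(4.2, *popt)        # normal-state resistance extrapolated to 4.2 K
R_273 = 14000.0                 # placeholder ice-point resistance, same units
print(f"extrapolated R(4.2 K) = {R_42:.1f},  RRR = {R_273 / R_42:.0f}")
```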

  2. Dose and dose rate extrapolation factors for malignant and non-malignant health endpoints after exposure to gamma and neutron radiation.

    PubMed

    Tran, Van; Little, Mark P

    2017-11-01

    Murine experiments were conducted at the JANUS reactor in Argonne National Laboratory from 1970 to 1992 to study the effect of acute and protracted radiation dose from gamma rays and fission neutron whole body exposure. The present study reports the reanalysis of the JANUS data on 36,718 mice, of which 16,973 mice were irradiated with neutrons, 13,638 were irradiated with gamma rays, and 6107 were controls. Mice were mostly Mus musculus, but one experiment used Peromyscus leucopus. For both types of radiation exposure, a Cox proportional hazards model was used, using age as timescale, and stratifying on sex and experiment. The optimal model was one with linear and quadratic terms in cumulative lagged dose, with adjustments to both linear and quadratic dose terms for low-dose rate irradiation (<5 mGy/h) and with adjustments to the dose for age at exposure and sex. After gamma ray exposure there is significant non-linearity (generally with upward curvature) for all tumours, lymphoreticular, respiratory, connective tissue and gastrointestinal tumours, also for all non-tumour, other non-tumour, non-malignant pulmonary and non-malignant renal diseases (p < 0.001). Associated with this the low-dose extrapolation factor, measuring the overestimation in low-dose risk resulting from linear extrapolation is significantly elevated for lymphoreticular tumours 1.16 (95% CI 1.06, 1.31), elevated also for a number of non-malignant endpoints, specifically all non-tumour diseases, 1.63 (95% CI 1.43, 2.00), non-malignant pulmonary disease, 1.70 (95% CI 1.17, 2.76) and other non-tumour diseases, 1.47 (95% CI 1.29, 1.82). However, for a rather larger group of malignant endpoints the low-dose extrapolation factor is significantly less than 1 (implying downward curvature), with central estimates generally ranging from 0.2 to 0.8, in particular for tumours of the respiratory system, vasculature, ovary, kidney/urinary bladder and testis. For neutron exposure most endpoints, malignant
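
    One common way to formalize the low-dose extrapolation factor quoted above is through a linear-quadratic excess hazard h(D) = alpha*D + beta*D^2: anchoring a straight line over the full dose range and reading it off at low dose overestimates the true low-dose slope whenever beta > 0. A hedged numerical sketch with invented coefficients (this illustrates the general idea, not the paper's fitted Cox model):

```python
# Linear-quadratic excess hazard h(D) = alpha*D + beta*D^2 (invented coefficients).
alpha, beta = 0.02, 0.004        # per Gy and per Gy^2
D_high = 3.0                     # dose (Gy) anchoring the "linear" fit

# Chord slope over 0..D_high versus the true low-dose slope alpha.
chord_slope = (alpha * D_high + beta * D_high ** 2) / D_high   # = alpha + beta * D_high
ldef = chord_slope / alpha
print(f"low-dose extrapolation factor: {ldef:.2f}   (>1 means upward curvature)")
```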

  3. EVALUATION OF THE EFFICACY OF EXTRAPOLATION POPULATION MODELING TO PREDICT THE DYNAMICS OF AMERICAMYSIS BAHIA POPULATIONS IN THE LABORATORY

    EPA Science Inventory

    An age-classified projection matrix model has been developed to extrapolate the chronic (28-35d) demographic responses of Americamysis bahia (formerly Mysidopsis bahia) to population-level response. This study was conducted to evaluate the efficacy of this model for predicting t...

  4. Using Corpus Linguistics to Examine the Extrapolation Inference in the Validity Argument for a High-Stakes Speaking Assessment

    ERIC Educational Resources Information Center

    LaFlair, Geoffrey T.; Staples, Shelley

    2017-01-01

    Investigations of the validity of a number of high-stakes language assessments are conducted using an argument-based approach, which requires evidence for inferences that are critical to score interpretation (Chapelle, Enright, & Jamieson, 2008b; Kane, 2013). The current study investigates the extrapolation inference for a high-stakes test of…

  5. State-of-the-Science Workshop Report: Issues and Approaches in Low Dose–Response Extrapolation for Environmental Health Risk Assessment

    EPA Science Inventory

    Low-dose extrapolation model selection for evaluating the health effects of environmental pollutants is a key component of the risk assessment process. At a workshop held in Baltimore, MD, on April 23-24, 2007, and sponsored by U.S. Environmental Protection Agency (EPA) and Johns...

  6. High- to low-dose extrapolation: critical determinants involved in the dose response of carcinogenic substances.

    PubMed Central

    Swenberg, J A; Richardson, F C; Boucheron, J A; Deal, F H; Belinsky, S A; Charbonneau, M; Short, B G

    1987-01-01

    Recent investigations on mechanisms of carcinogenesis have demonstrated important quantitative relationships between the induction of neoplasia, the molecular dose of promutagenic DNA adducts and their efficiency for causing base-pair mismatch, and the extent of cell proliferation in the target organ. These factors are involved in the multistage process of carcinogenesis, including initiation, promotion, and progression. The molecular dose of DNA adducts can exhibit supralinear, linear, or sublinear relationships to external dose due to differences in absorption, biotransformation, and DNA repair at high versus low doses. In contrast, increased cell proliferation is a common phenomenon that is associated with exposures to relatively high doses of toxic chemicals. As such, it enhances the carcinogenic response at high doses, but has little effect at low doses. Since data on cell proliferation can be obtained for any exposure scenario and molecular dosimetry studies are beginning to emerge on selected chemical carcinogens, methods are needed so that these critical factors can be utilized in extrapolation from high to low doses and across species. The use of such information may provide a scientific basis for quantitative risk assessment. PMID:3447904

  7. Investigation of hexagonal boron nitride as an atomically thin corrosion passivation coating in aqueous solution.

    PubMed

    Zhang, Jing; Yang, Yingchao; Lou, Jun

    2016-09-09

    Hexagonal boron nitride (h-BN) atomic layers were utilized as a passivation coating in this study. A large-area continuous h-BN thin film was grown on nickel foil using a chemical vapor deposition method and then transferred onto sputtered copper as a corrosion passivation coating. The corrosion passivation performance in a Na2SO4 solution of bare and coated copper was investigated by electrochemical methods including cyclic voltammetry (CV), Tafel polarization and electrochemical impedance spectroscopy (EIS). CV and Tafel analysis indicate that the h-BN coating could effectively suppress the anodic dissolution of copper. The EIS fitting result suggests that defects are the dominant leakage source on h-BN films, and improved anti-corrosion performances could be achieved by further passivating these defects.

  8. Identification of the viscoelastic properties of soft materials at low frequency: performance, ill-conditioning and extrapolation capabilities of fractional and exponential models.

    PubMed

    Ciambella, J; Paolone, A; Vidoli, S

    2014-09-01

    We report on the experimental identification of viscoelastic constitutive models for frequencies within 0-10 Hz. Dynamic moduli data are fitted for several materials of interest to medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012) and synthetic elastomer (Osanaiye, 1996). These materials actually represent a rather wide class of soft viscoelastic materials which are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior at higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are hence rated according to the accuracy of the identification and to their robustness to extrapolation. It is shown that all kernels whose decay rate is too fast lead to a poor fitting and high errors when the material behavior is extrapolated to broader frequency ranges. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
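
    Of the fractional kernels listed above, the Cole-Cole form is the most compact to write down: G*(omega) = G_inf + (G_0 - G_inf) / (1 + (i*omega*tau)^alpha), which reduces to the exponentially decaying Debye kernel when alpha = 1 and relaxes more slowly when alpha < 1. The sketch below merely evaluates this expression over the 0-10 Hz band with placeholder parameters; it is not a fit to any of the cited data sets.

```python
import numpy as np

def cole_cole(omega, G0, Ginf, tau, alpha):
    # complex modulus of the Cole-Cole model; alpha = 1 recovers the Debye kernel
    return Ginf + (G0 - Ginf) / (1.0 + (1j * omega * tau) ** alpha)

freq_hz = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
omega = 2.0 * np.pi * freq_hz
G = cole_cole(omega, G0=2.0e3, Ginf=6.0e3, tau=0.5, alpha=0.6)   # placeholder Pa, s

for f, g in zip(freq_hz, G):
    print(f"{f:4.1f} Hz: storage {g.real:7.1f} Pa, loss {abs(g.imag):6.1f} Pa")
```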

  9. Ground signature extrapolation of three-dimensional near-field CFD predictions for several HSCT configurations

    NASA Technical Reports Server (NTRS)

    Siclari, M. J.

    1992-01-01

    A CFD analysis of the near-field sonic boom environment of several low boom High Speed Civilian Transport (HSCT) concepts is presented. The CFD method utilizes a multi-block Euler marching code within the context of an innovative mesh topology that allows for the resolution of shock waves several body lengths from the aircraft. Three-dimensional pressure footprints at one body length below three-different low boom aircraft concepts are presented. Models of two concepts designed by NASA to cruise at Mach 2 and Mach 3 were built and tested in the wind tunnel. The third concept was designed by Boeing to cruise at Mach 1.7. Centerline and sideline samples of these footprints are then extrapolated to the ground using a linear waveform parameter method to estimate the ground signatures or sonic boom ground overpressure levels. The Mach 2 concept achieved its centerline design signature but indicated higher sideline booms due to the outboard wing crank of the configuration. Nacelles are also included on two of NASA's low boom concepts. Computations are carried out for both flow-through nacelles and nacelles with engine exhaust simulation. The flow-through nacelles with the assumption of zero spillage and zero inlet lip radius showed very little effect on the sonic boom signatures. On the other hand, it was shown that the engine exhaust plumes can have an effect on the levels of overpressure reaching the ground depending on the engine operating conditions. The results of this study indicate that engine integration into a low boom design should be given some attention.

  10. Kinetic Monte Carlo simulations of water ice porosity: extrapolations of deposition parameters from the laboratory to interstellar space

    NASA Astrophysics Data System (ADS)

    Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.

    2018-02-01

    Using an off-lattice kinetic Monte Carlo model we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicates that these ices may be less porous than laboratory ices.

  11. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.

  12. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
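
    A minimal sketch of the idea behind AAJ, under the assumption of a weighted Jacobi smoother with a standard Anderson mixing update applied every p iterations over a sliding window; the test matrix, window sizes and tolerances below are placeholders, and this is not the authors' implementation.

```python
import numpy as np

def jacobi_anderson(A, b, x0, omega=1.0, p=4, m=4, tol=1e-10, maxit=5000):
    """Weighted Jacobi iteration with Anderson extrapolation every p steps over a
    sliding window of m previous iterates (a sketch of the alternating scheme)."""
    Dinv = 1.0 / np.diag(A)
    x = x0.copy()
    xs, fs = [], []                                  # iterate / residual history
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        f = Dinv * r                                 # preconditioned residual
        xs.append(x.copy()); fs.append(f.copy())
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]
        if (k + 1) % p == 0 and len(xs) > 1:
            dX = np.column_stack([xs[i + 1] - xs[i] for i in range(len(xs) - 1)])
            dF = np.column_stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = x + omega * f - (dX + omega * dF) @ gamma    # Anderson update
        else:
            x = x + omega * f                                # plain Jacobi update
    return x, maxit

# Small diagonally dominant test system (placeholder, not the paper's benchmarks).
n = 300
A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
_, it_plain = jacobi_anderson(A, b, np.zeros(n), p=10**9)   # Anderson never triggers
_, it_aaj = jacobi_anderson(A, b, np.zeros(n))
print(f"plain Jacobi: {it_plain} iterations,  Anderson-Jacobi: {it_aaj} iterations")
```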

  13. SPECIES DIFFERENCES IN ANDROGEN AND ESTROGEN RECEPTOR STRUCTURE AND FUNCTION AMONG VERTEBRATES AND INVERTEBRATES: INTERSPECIES EXTRAPOLATIONS REGARDING ENDOCRINE DISRUPTING CHEMICALS

    EPA Science Inventory

    Species Differences in Androgen and Estrogen Receptor Structure and Function Among Vertebrates and Invertebrates: Interspecies Extrapolations regarding Endocrine Disrupting Chemicals
    VS Wilson1, GT Ankley2, M Gooding 1,3, PD Reynolds 1,4, NC Noriega 1, M Cardon 1, P Hartig1,...

  14. Beyond the plot: technology extrapolation domains for scaling out agronomic science

    NASA Astrophysics Data System (ADS)

    Rattalino Edreira, Juan I.; Cassman, Kenneth G.; Hochman, Zvi; van Ittersum, Martin K.; van Bussel, Lenny; Claessens, Lieven; Grassini, Patricio

    2018-05-01

    Ensuring an adequate food supply in systems that protect environmental quality and conserve natural resources requires productive and resource-efficient cropping systems on existing farmland. Meeting this challenge will be difficult without a robust spatial framework that facilitates rapid evaluation and scaling-out of currently available and emerging technologies. Here we develop a global spatial framework to delineate ‘technology extrapolation domains’ based on key climate and soil factors that govern crop yields and yield stability in rainfed crop production. The proposed framework adequately represents the spatial pattern of crop yields and stability when evaluated over the data-rich US Corn Belt. It also facilitates evaluation of cropping system performance across continents, which can improve efficiency of agricultural research that seeks to intensify production on existing farmland. Populating this biophysical spatial framework with appropriate socio-economic attributes provides the potential to amplify the return on investments in agricultural research and development by improving the effectiveness of research prioritization and impact assessment.

  15. Corrosion of Titanium Matrix Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covino, B.S., Jr.; Alman, D.E.

    2002-09-22

    The corrosion behavior of unalloyed Ti and titanium matrix composites containing up to 20 vol% of TiC or TiB{sub 2} was determined in deaerated 2 wt% HCl at 50, 70, and 90 degrees C. Corrosion rates were calculated from corrosion currents determined by extrapolation of the Tafel slopes. All curves exhibited active-passive behavior but no transpassive region. Corrosion rates for Ti + TiC composites were similar to those for unalloyed Ti except at 90 degrees C where the composites were slightly higher. Corrosion rates for Ti + TiB{sub 2} composites were generally higher than those for unalloyed Ti and increased with higher concentrations of TiB{sub 2}. XRD and SEM-EDS analyses showed that the TiC reinforcement did not react with the Ti matrix during fabrication while the TiB{sub 2} reacted to form a TiB phase.
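
    Turning a Tafel-extrapolated corrosion current density into a penetration rate is a one-line application of Faraday's law (the ASTM G102 form is used below). The current density, the equivalent weight for Ti dissolving as Ti(3+), and the density are illustrative inputs, not values from this report.

```python
# Faraday's-law conversion of corrosion current density to a penetration rate.
def corrosion_rate_mm_per_year(i_corr_A_cm2, equivalent_weight_g, density_g_cm3):
    K = 3.27e3  # mm*g/(A*cm*yr), the ASTM G102 constant for these units
    return K * i_corr_A_cm2 * equivalent_weight_g / density_g_cm3

# Illustrative numbers: 50 uA/cm^2 for Ti dissolving as Ti(3+), EW = 47.87/3, rho = 4.51.
rate = corrosion_rate_mm_per_year(5.0e-5, 47.87 / 3.0, 4.51)
print(f"corrosion rate ~ {rate:.2f} mm/yr")
```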

  16. Electrochemical screening of organic and inorganic inhibitors for the corrosion of ASTM A-470 steel in concentrated sodium hydroxide solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moccari, A.; MacDonald, D.D.

    The corrosion of ASTM A-470 turbine disk steel in concentrated sodium hydroxide solution (10 mol/kg) containing sodium silicate, sodium dihydrogen phosphate, sodium chromate, aniline and some of its derivatives, tannic acid, L-(-)-phenylalanine (aminopropionic acid) and octadecylamine as potential inhibitors has been studied using the potentiodynamic, AC impedance, and Tafel extrapolation techniques. All tests were performed at 115 ± 2 °C. The anodic and cathodic polarization data show that aniline and its derivatives, L-(-)-phenylalanine, NaH2PO4, Na2SiO3, and Na2CrO4 inhibit the anodic process, whereas tannic acid inhibits the cathodic reaction. Octadecylamine was found to inhibit both the anodic and cathodic processes. The mechanisms of inhibition for some of these compounds have been inferred from the wide-bandwidth frequency dispersions of the interfacial impedance.

  17. Tweede Serie Ergonomietests Lichtgewicht Bommenpakken (Second Series of Ergonomic Tests on Lightweight Bomb Disposal Suits)

    DTIC Science & Technology

    2007-12-01

    ... to establish heat-strain tests and to determine reference values for the following movement-restriction tests: sit-and-reach, stand-and-reach, abduction of the arms, anteflexion of the arms, and restriction of vision. In the sit-and-reach test the subject sits bent at the edge of a table (tafel) and holds the arms stretched forward on the table as far as possible; the distance from the edge of the table to

  18. Unified approach for extrapolation and bridging of adult information in early-phase dose-finding paediatric studies.

    PubMed

    Petit, Caroline; Samson, Adeline; Morita, Satoshi; Ursino, Moreno; Guedj, Jérémie; Jullien, Vincent; Comets, Emmanuelle; Zohar, Sarah

    2018-06-01

    The number of trials conducted and the number of patients per trial are typically small in paediatric clinical studies. This is due to ethical constraints and the complexity of the medical process for treating children. While incorporating prior knowledge from adults may be extremely valuable, this must be done carefully. In this paper, we propose a unified method for designing and analysing dose-finding trials in paediatrics, while bridging information from adults. The dose-range is calculated under three extrapolation options, linear, allometry and maturation adjustment, using adult pharmacokinetic data. To do this, it is assumed that target exposures are the same in both populations. The working model and prior distribution parameters of the dose-toxicity and dose-efficacy relationships are obtained using early-phase adult toxicity and efficacy data at several dose levels. Priors are integrated into the dose-finding process through Bayesian model selection or adaptive priors. This calibrates the model to adjust for misspecification, if the adult and pediatric data are very different. We performed a simulation study which indicates that incorporating prior adult information in this way may improve dose selection in children.
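
    Of the three dose-range extrapolation options named above, allometric adjustment is the simplest to make concrete: clearance is scaled by body weight to the 0.75 power and the dose is rescaled so that the target exposure (AUC) matches the adult value. The weights, clearance, dose, and flat maturation multiplier below are placeholders, not the paper's pharmacokinetic model.

```python
# Bridging an adult dose to a child while matching the target exposure (AUC = dose / CL).
adult_weight, child_weight = 70.0, 20.0      # kg (placeholders)
adult_clearance = 10.0                       # L/h (placeholder)
adult_dose = 200.0                           # mg, assumed to give the target AUC in adults

child_clearance = adult_clearance * (child_weight / adult_weight) ** 0.75
linear_dose = adult_dose * child_weight / adult_weight              # option 1: linear
allometric_dose = adult_dose * child_clearance / adult_clearance    # option 2: allometry
maturation_dose = allometric_dose * 0.8                             # option 3: placeholder maturation factor

print(f"linear {linear_dose:.0f} mg, allometric {allometric_dose:.0f} mg, "
      f"maturation-adjusted {maturation_dose:.0f} mg")
```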

  19. Tests and applications of nonlinear force-free field extrapolations in spherical geometry

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Ding, M. D.

    2013-07-01

    We test a nonlinear force-free field (NLFFF) optimization code in spherical geometry with an analytical solution from Low and Lou. The potential field source surface (PFSS) model serves as the initial and boundary conditions where observed data are not available. The analytical solution can be well recovered if the boundary and initial conditions are properly handled. Next, we discuss the preprocessing procedure for the noisy bottom boundary data, and find that preprocessing is necessary for NLFFF extrapolations when we use the observed photospheric magnetic field as bottom boundaries. Finally, we apply the NLFFF model to a solar area where four active regions interact with each other. An M8.7 flare occurred in one active region. NLFFF modeling in spherical geometry simultaneously constructs the small- and large-scale magnetic field configurations better than the PFSS model does.

  20. Studying the Transfer of Magnetic Helicity in Solar Active Regions with the Connectivity-based Helicity Flux Density Method

    NASA Astrophysics Data System (ADS)

    Dalmasse, K.; Pariat, É.; Valori, G.; Jing, J.; Démoulin, P.

    2018-01-01

    In the solar corona, magnetic helicity slowly and continuously accumulates in response to plasma flows tangential to the photosphere and magnetic flux emergence through it. Analyzing this transfer of magnetic helicity is key for identifying its role in the dynamics of active regions (ARs). The connectivity-based helicity flux density method was recently developed for studying the 2D and 3D transfer of magnetic helicity in ARs. The method takes into account the 3D nature of magnetic helicity by explicitly using knowledge of the magnetic field connectivity, which allows it to faithfully track the photospheric flux of magnetic helicity. Because the magnetic field is not measured in the solar corona, modeled 3D solutions obtained from force-free magnetic field extrapolations must be used to derive the magnetic connectivity. Different extrapolation methods can lead to markedly different 3D magnetic field connectivities, thus questioning the reliability of the connectivity-based approach in observational applications. We address these concerns by applying this method to the isolated and internally complex AR 11158 with different magnetic field extrapolation models. We show that the connectivity-based calculations are robust to different extrapolation methods, in particular with regard to identifying regions of opposite magnetic helicity flux. We conclude that the connectivity-based approach can be reliably used in observational analyses and is a promising tool for studying the transfer of magnetic helicity in ARs and relating it to their flaring activity.

  1. Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization

    NASA Astrophysics Data System (ADS)

    More, Sushant N.

    New insights into the inter-nucleon interactions, developments in many-body technology, and the surge in computational capabilities has led to phenomenal progress in low-energy nuclear physics in the past few years. Nonetheless, many calculations still lack a robust uncertainty quantification which is essential for making reliable predictions. In this work we investigate two distinct sources of uncertainty and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab-initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. It has been demonstrated recently that errors introduced from basis truncation can be taken into account by focusing on the infrared and ultraviolet cutoffs induced by a truncated basis. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition in coordinate space. We accurately determine the position of the hard-wall as a function of oscillator space parameters, derive infrared extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum and to other localized bases. We exploit the duality of the harmonic oscillator to account for the errors introduced by a finite ultraviolet cutoff. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the renormalization scale and scheme, and has not been well understood. But it is potentially critical for interpreting experiments and for extracting process-independent nuclear properties. We use a class of unitary transformations called the similarity renormalization group (SRG) transformations to
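
    The infrared extrapolation discussed above is usually applied through a correction of the form E(L) = E_inf + A * exp(-2 * k_inf * L), where L is the effective hard-wall radius induced by the truncated oscillator basis. The sketch below fits that form to invented ground-state energies; the numbers, and the restriction to the energy observable, are placeholders rather than results from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_energy(L, E_inf, A, k_inf):
    # infrared extrapolation formula for energies in a truncated oscillator basis
    return E_inf + A * np.exp(-2.0 * k_inf * L)

# Placeholder effective box sizes L (fm) and ground-state energies (MeV).
L = np.array([6.0, 7.0, 8.0, 9.0, 10.0, 11.0])
E = np.array([-27.11, -27.89, -28.37, -28.66, -28.83, -28.94])

popt, _ = curve_fit(ir_energy, L, E, p0=(-29.0, 30.0, 0.3))
print(f"E_infinity ~ {popt[0]:.2f} MeV,  k_infinity ~ {popt[2]:.2f} fm^-1")
```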

  2. Methods for converging correlation energies within the dielectric matrix formalism

    NASA Astrophysics Data System (ADS)

    Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario

    2018-03-01

    Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
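
    The "more traditional complete basis set extrapolation" mentioned above is often the two-point inverse-cube formula for correlation energies, E_X = E_CBS + A * X^(-3). A short sketch with invented correlation energies at cardinal numbers X = 3 and X = 4 (this is the generic formula, not necessarily the exact scheme used in the paper):

```python
# Two-point X^-3 extrapolation of correlation energies (values invented, in Hartree).
def cbs_two_point(E_small, E_large, X_small, X_large):
    return (E_large * X_large**3 - E_small * X_small**3) / (X_large**3 - X_small**3)

E_X3, E_X4 = -0.30150, -0.30855      # triple- and quadruple-zeta-like correlation energies
print(f"E_CBS ~ {cbs_two_point(E_X3, E_X4, 3, 4):.5f} Hartree")
```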

  3. Subsonic panel method for designing wing surfaces from pressure distribution

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Hawk, J. D.

    1983-01-01

    An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
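
    The heart of the iteration described above is a first-order step: with a baseline potential phi0 and a sensitivity matrix J = d(phi)/d(geometry) from the baseline panel solution, the geometry perturbation comes from a linear solve and the new potential is estimated by extrapolation instead of a fresh analysis. The schematic below uses random placeholder matrices of arbitrary size, not an actual panel code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ctrl, n_geom = 40, 10                       # control points, geometry parameters
phi0 = rng.normal(size=n_ctrl)                # baseline potential (placeholder)
J = rng.normal(size=(n_ctrl, n_geom))         # d(phi)/d(geometry) from the baseline solve
phi_target = phi0 + J @ rng.normal(scale=0.1, size=n_geom)   # reachable target for the demo

# One design cycle: least-squares geometry perturbation, then linear extrapolation
# of the potential on the perturbed geometry instead of a full re-analysis.
dg = np.linalg.lstsq(J, phi_target - phi0, rcond=None)[0]
phi_new = phi0 + J @ dg
print(f"residual after one cycle: {np.linalg.norm(phi_target - phi_new):.2e}")
```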

  4. Adaptation to implied tilt: extensive spatial extrapolation of orientation gradients

    PubMed Central

    Roach, Neil W.; Webb, Ben S.

    2013-01-01

    To extract the global structure of an image, the visual system must integrate local orientation estimates across space. Progress is being made toward understanding this integration process, but very little is known about whether the presence of structure exerts a reciprocal influence on local orientation coding. We have previously shown that adaptation to patterns containing circular or radial structure induces tilt-aftereffects (TAEs), even in locations where the adapting pattern was occluded. These spatially “remote” TAEs have novel tuning properties and behave in a manner consistent with adaptation to the local orientation implied by the circular structure (but not physically present) at a given test location. Here, by manipulating the spatial distribution of local elements in noisy circular textures, we demonstrate that remote TAEs are driven by the extrapolation of orientation structure over remarkably large regions of visual space (more than 20°). We further show that these effects are not specific to adapting stimuli with polar orientation structure, but require a gradient of orientation change across space. Our results suggest that mechanisms of visual adaptation exploit orientation gradients to predict the local pattern content of unfilled regions of space. PMID:23882243

  5. EXTRAPOLATION TECHNIQUES EVALUATING 24 HOURS OF AVERAGE ELECTROMAGNETIC FIELD EMITTED BY RADIO BASE STATION INSTALLATIONS: SPECTRUM ANALYZER MEASUREMENTS OF LTE AND UMTS SIGNALS.

    PubMed

    Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa

    2017-04-01

    International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify technical critical aspects in these extrapolation techniques, when applied to UMTS and LTE signals. We focused also on finding a good balance between statistically significant values and logistic managements in control activity, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and to have a starting point for defining operating procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Extrapolating cetacean densities to quantitatively assess human impacts on populations in the high seas.

    PubMed

    Mannocci, Laura; Roberts, Jason J; Miller, David L; Halpin, Patrick N

    2017-06-01

    As human activities expand beyond national jurisdictions to the high seas, there is an increasing need to consider anthropogenic impacts to species inhabiting these waters. The current scarcity of scientific observations of cetaceans in the high seas impedes the assessment of population-level impacts of these activities. We developed plausible density estimates to facilitate a quantitative assessment of anthropogenic impacts on cetacean populations in these waters. Our study region extended from a well-surveyed region within the U.S. Exclusive Economic Zone into a large region of the western North Atlantic sparsely surveyed for cetaceans. We modeled densities of 15 cetacean taxa with available line transect survey data and habitat covariates and extrapolated predictions to sparsely surveyed regions. We formulated models to reduce the extent of extrapolation beyond covariate ranges, and constrained them to model simple and generalizable relationships. To evaluate confidence in the predictions, we mapped where predictions were made outside sampled covariate ranges, examined alternate models, and compared predicted densities with maps of sightings from sources that could not be integrated into our models. Confidence levels in model results depended on the taxon and geographic area and highlighted the need for additional surveying in environmentally distinct areas. With application of necessary caution, our density estimates can inform management needs in the high seas, such as the quantification of potential cetacean interactions with military training exercises, shipping, fisheries, and deep-sea mining and be used to delineate areas of special biological significance in international waters. Our approach is generally applicable to other marine taxa and geographic regions for which management will be implemented but data are sparse. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  7. A stabilized MFE reduced-order extrapolation model based on POD for the 2D unsteady conduction-convection problem.

    PubMed

    Xia, Hong; Luo, Zhendong

    2017-01-01

    In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model with very few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique, analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and dependability of the SMFEROE model by means of numerical simulations.

  8. Creep behavior of bone cement: a method for time extrapolation using time-temperature equivalence.

    PubMed

    Morgan, R L; Farrar, D F; Rose, J; Forster, H; Morgan, I

    2003-04-01

    The clinical lifetime of poly(methyl methacrylate) (PMMA) bone cement is considerably longer than the time over which it is convenient to perform creep testing. Consequently, it is desirable to be able to predict the long term creep behavior of bone cement from the results of short term testing. A simple method is described for prediction of long term creep using the principle of time-temperature equivalence in polymers. The use of the method is illustrated using a commercial acrylic bone cement. A creep strain of approximately 0.6% is predicted after 400 days under a constant flexural stress of 2 MPa. The temperature range and stress levels over which it is appropriate to perform testing are described. Finally, the effects of physical aging on the accuracy of the method are discussed and creep data from aged cement are reported.
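
    A hedged sketch of the time-temperature equivalence step: short-term creep curves measured at elevated temperatures are shifted along the log-time axis by a shift factor a_T and overlaid to form a master curve at the service temperature. The activation energy, temperatures, and creep readings below are invented, and the use of an Arrhenius (rather than WLF) shift is an assumption, not the paper's prescription.

```python
import numpy as np

R, Ea = 8.314, 90e3          # J/(mol K); assumed activation energy
T_ref = 310.0                # reference (roughly body) temperature, K

def log_shift_factor(T):
    # Arrhenius horizontal shift: log10(a_T)
    return (Ea / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

# Short-term creep tests at elevated temperatures: time (h) and strain (%) placeholders.
tests = {
    330.0: (np.array([1.0, 3.0, 10.0, 30.0, 100.0]), np.array([0.22, 0.26, 0.31, 0.36, 0.42])),
    350.0: (np.array([1.0, 3.0, 10.0, 30.0, 100.0]), np.array([0.30, 0.36, 0.43, 0.50, 0.58])),
}

# Shifting each test onto the reference time axis shows how a few hundred hours of
# hot testing can cover years of predicted service time at T_ref.
for T, (t, strain) in tests.items():
    t_equiv = t * 10.0 ** (-log_shift_factor(T))
    print(f"{T:.0f} K test spans {t_equiv.min():,.0f}-{t_equiv.max():,.0f} h at {T_ref:.0f} K")
```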

  9. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    NASA Astrophysics Data System (ADS)

    Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models were proposed for methane, nitrogen and carbon monoxide.
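
    The reweighting step underlying the technique above is compact for the canonical ensemble: configurations sampled at beta0 receive Boltzmann weights exp[-(beta1 - beta0) * U_i] to estimate averages at a nearby beta1. The sketch below uses synthetic Gaussian energies in reduced units, not regenerated Lennard-Jones chains, and only illustrates the primary-quantity (energy) case.

```python
import numpy as np

rng = np.random.default_rng(3)
kB = 1.0                                   # reduced units

# Pretend these are potential energies sampled by an MC chain at T0 (placeholders).
T0, T1 = 1.30, 1.25
U = rng.normal(loc=-500.0, scale=15.0, size=20000)

beta0, beta1 = 1.0 / (kB * T0), 1.0 / (kB * T1)
w = np.exp(-(beta1 - beta0) * (U - U.mean()))   # mean-shifted for numerical safety
w /= w.sum()

U_reweighted = np.sum(w * U)                    # estimate of <U> at T1 from the T0 chain
print(f"<U> at T0: {U.mean():.1f}   reweighted <U> at T1: {U_reweighted:.1f}")
```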

  10. Creation of Abdominal Aortic Aneurysms in Sheep by Extrapolation of Rodent Models: Is It Feasible?

    PubMed

    Verbrugghe, Peter; Verhoeven, Jelle; Clijsters, Marnick; Vervoort, Dominique; Coudyzer, Walter; Verbeken, Eric; Meuris, Bart; Herijgers, Paul

    2018-06-07

    Abdominal aortic aneurysms (AAAs) are a potentially lethal disease, needing surgical or endovascular treatment. To evaluate potentially new diagnostic tools and treatments, a large animal model, which resembles not only the morphological characteristics but also the pathophysiological background, would be useful. Rodent aneurysm models were extrapolated to sheep. Four groups were created: intraluminal infusion with an elastase-collagenase solution (n = 4), infusion with elastase-collagenase solution combined with proximal stenosis (n = 7), aortic xenograft (n = 3), and elastase-collagenase-treated xenograft (n = 4). At fixed time intervals (6, 12, and 24 weeks), computed tomography and autopsy with histological evaluation were performed. The described models had a high perioperative mortality (45%), due to acute aortic thrombosis or fatal hemorrhage. A maximum aortic diameter increase of 30% was obtained in the protease-stenosis group. In the protease-treated groups, some histological features of human AAAs, such as inflammation, thinning of the media, and loss of elastin, could be reproduced. In the xenotransplant groups, a pronounced inflammatory reaction was visible at the start. In all models, inflammation decreased and fibrosis occurred at long follow-up, 24 weeks postoperatively. None of the extrapolated small animal aneurysm models could produce an AAA in sheep with morphological features similar to the human disease. Some histological findings of human surgical specimens could be reproduced in the elastase-collagenase-treated groups. Long-term histological evaluation indicated stabilization and healing of the aortic wall months after the initial stimulus. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  11. Molecular target sequence similarity as a basis for species extrapolation to assess the ecological risk of chemicals with known modes of action

    EPA Science Inventory

    In practice, it is neither feasible nor ethical to conduct toxicity tests with all species that may be impacted by chemical exposures. Therefore, cross-species extrapolation is fundamental to human health and ecological risk assessment. The extensive chemical universe for which w...

  12. Extrapolation of thermophysical properties data for oxygen to high pressures (5000 to 10,000 psia) at low temperatures (100-600 R)

    NASA Technical Reports Server (NTRS)

    Weber, L. A.

    1971-01-01

    Thermophysical properties data for oxygen at pressures below 5000 psia have been extrapolated to higher pressures (5,000-10,000 psia) in the temperature range 100-600 R. The tables include density, entropy, enthalpy, internal energy, speed of sound, specific heat, thermal conductivity, viscosity, thermal diffusivity, Prandtl number, and dielectric constant.

  13. Chiral extrapolations of the ρ(770) meson in N_f = 2 + 1 lattice QCD simulations

    DOE PAGES

    Hu, B.; Molina, R.; Döring, M.; ...

    2017-08-24

    Recent $N_f=2+1$ lattice data for meson-meson scattering in $p$-wave and isospin $I=1$ are analyzed using a unitarized model inspired by Chiral Perturbation Theory in the inverse-amplitude formulation for two and three flavors. We perform chiral extrapolations that postdict phase shifts extracted from experiment quite well. Additionally, the low-energy constants are compared to the ones from a recent analysis of $N_f=2$ lattice QCD simulations to check for the consistency of the hadronic model used here. Some inconsistencies are detected in the fits to $N_f=2+1$ data, in contrast to the previous analysis of $N_f=2$ data.

  14. Determination of Extrapolation Distance with Measured Pressure Signatures from Two Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Kuhn, Neil

    2004-01-01

    A study to determine a limiting distance-to-span ratio for the extrapolation of near-field pressure signatures is described and discussed. This study was to be done in two wind-tunnel facilities with two wind-tunnel models. At this time, only the first half had been completed, so the scope of this report is limited to the design of the models and to an analysis of the first set of measured pressure signatures. The results from this analysis showed that the pressure signatures measured at separation distances of 2 to 5 span lengths did not show the desired low-boom shapes. However, there were indications that the pressure signature shapes were becoming 'flat-topped'. This trend toward a 'flat-topped' pressure signature shape was seen to be a gradual one at the distance ratios employed in this first series of wind-tunnel tests.

  15. Predicting the impact of biocorona formation kinetics on interspecies extrapolations of nanoparticle biodistribution modeling.

    PubMed

    Sahneh, Faryad Darabi; Scoglio, Caterina M; Monteiro-Riviere, Nancy A; Riviere, Jim E

    2015-01-01

    The aim of this work is to assess the impact of biocorona kinetics on the expected tissue distribution of nanoparticles (NPs) across species. The potential fate of NPs in vivo is described through a simple and descriptive pharmacokinetic model using rate processes dependent upon basal metabolic rate coupled to the dynamics of the protein corona. The mismatch of time scales between interspecies allometric scaling and the kinetics of corona formation is potentially a fundamental issue with interspecies extrapolations of NP biodistribution. The impact of corona evolution on NP biodistribution across two species is maximal when the corona transition half-life is close to the geometric mean of the NP half-lives of the two species. While engineered NPs can successfully reach target cells in rodent models, the results may be different in humans because the longer circulation time allows for further biocorona evolution.

  16. Hartree-Fock mass formulas and extrapolation to new mass data

    NASA Astrophysics Data System (ADS)

    Goriely, S.; Samyn, M.; Heenen, P.-H.; Pearson, J. M.; Tondeur, F.

    2002-08-01

    The two previously published Hartree-Fock (HF) mass formulas, HFBCS-1 and HFB-1 (HF-Bogoliubov), are shown to be in poor agreement with new Audi-Wapstra mass data. The problem lies first with the prescription adopted for the cutoff of the single-particle spectrum used with the δ-function pairing force, and second with the Wigner term. We find an optimal mass fit if the spectrum is cut off both above E_F + 15 MeV and below E_F - 15 MeV, E_F being the Fermi energy of the nucleus in question. In addition to the Wigner term of the form V_W exp(-λ|N-Z|/A) already included in the two earlier HF mass formulas, we find that a second Wigner term linear in |N-Z| leads to a significant improvement in lighter nuclei. These two features are incorporated into our new Hartree-Fock-Bogoliubov model, which leads to much improved extrapolations. The 18 parameters of the model are fitted to the 2135 measured masses for N, Z ≥ 8 with an rms error of 0.674 MeV. With this parameter set a complete mass table, labeled HFB-2, has been constructed, going from one drip line to the other, up to Z = 120. The new pairing-cutoff prescription favored by the new mass data leads to weaker neutron-shell gaps in neutron-rich nuclei.
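    For reference, the two Wigner contributions described in the abstract can be written schematically as below; the coefficients V_W, V'_W and λ are fit parameters of the mass formula, and any additional A-dependence attached to the second (linear) term in the published model is omitted here.

```latex
E_W \;=\; V_W \exp\!\left(-\lambda\,\frac{|N-Z|}{A}\right) \;+\; V_W'\,|N-Z|
```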

  17. CDNA CLONING OF FATHEAD MINNOW (PIMEPHALES PROMELAS) ESTROGEN AND ANDROGEN RECEPTORS FOR USE IN STEROID RECEPTOR EXTRAPOLATION STUDIES FOR ENDOCRINE DISRUPTING CHEMICALS

    EPA Science Inventory

    cDNA Cloning of Fathead minnow (Pimephales promelas) Estrogen and Androgen Receptors for Use in Steroid Receptor Extrapolation Studies for Endocrine Disrupting Chemicals.

    Wilson, V.S., Korte, J., Hartig, P., Ankley, G.T., Gray, L.E., Jr., and Welch, J.E. U.S...

  18. Scattering of targets over layered half space using a semi-analytic method in conjunction with FDTD algorithm.

    PubMed

    Cao, Le; Wei, Bing

    2014-08-25

    A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the RCS (Radar Cross Section) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly reduced. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, the methods available for extrapolation have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and that it can be used for fast calculation of scattering and radiation from targets over a layered half space.

  19. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    ERIC Educational Resources Information Center

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  20. Complete basis set extrapolations for low-lying triplet electronic states of acetylene and vinylidene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherrill, C. David; Byrd, Edward F. C.; Head-Gordon, Martin

    2000-07-22

    A recent study by Ahmed, Peterka, and Suits [J. Chem. Phys. 110, 4248 (1999)] has presented the first experimentally derived estimate of the singlet-triplet gap in the simplest alkyne, acetylene. Their value, T₀(ã ³B₂) = 28 900 cm⁻¹, does not agree with previous theoretical predictions using the coupled-cluster singles, doubles, and perturbative triples [CCSD(T)] method and a triple-ζ plus double polarization plus f-function basis set (TZ2Pf), which yields 30 500 ± 1000 cm⁻¹. This discrepancy has prompted us to investigate possible deficiencies in this usually accurate theoretical approach. Employing extrapolations to the complete basis set limit along with corrections for full connected triple excitations, core correlation, and even relativistic effects, we obtain a value of 30 900 cm⁻¹ (estimated uncertainty ±230 cm⁻¹), demonstrating that the experimental value is underestimated. To assist in the interpretation of anticipated future experiments, we also present highly accurate excitation energies for the other three low-lying triplet states of acetylene, ã ³Bᵤ (33 570 ± 230 cm⁻¹), b̃ ³Aᵤ (36 040 ± 260 cm⁻¹), and b̃ ³A₂ (38 380 ± 260 cm⁻¹), and the three lowest-lying states of vinylidene, X̃ ¹A₁ (15 150 ± 230 cm⁻¹), ã ³B₂ (31 870 ± 230 cm⁻¹), and b̃ ³A₂ (36 840 ± 350 cm⁻¹). Finally, we assess the ability of density functional theory (DFT) and the Gaussian-3 method to match our benchmark results for adiabatic excitation energies of C₂H₂. (c) 2000 American Institute of Physics.
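    Complete-basis-set extrapolations of the kind mentioned above are commonly done with a two-point inverse-cubic formula in the basis-set cardinal number X; this generic form is shown for orientation only and is not necessarily the exact scheme used by the authors.

```latex
E_{\mathrm{corr}}^{\mathrm{CBS}} \;\approx\; \frac{X^{3}\,E_{\mathrm{corr}}(X) - (X-1)^{3}\,E_{\mathrm{corr}}(X-1)}{X^{3} - (X-1)^{3}}
```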

  1. In vitro to in vivo extrapolation of biotransformation rates for assessing bioaccumulation of hydrophobic organic chemicals in mammals.

    PubMed

    Lee, Yung-Shan; Lo, Justin C; Otton, S Victoria; Moore, Margo M; Kennedy, Chris J; Gobas, Frank A P C

    2017-07-01

    Incorporating biotransformation in bioaccumulation assessments of hydrophobic chemicals in both aquatic and terrestrial organisms in a simple, rapid, and cost-effective manner is urgently needed to improve bioaccumulation assessments of potentially bioaccumulative substances. One approach to estimate whole-animal biotransformation rate constants is to combine in vitro measurements of hepatic biotransformation kinetics with in vitro to in vivo extrapolation (IVIVE) and bioaccumulation modeling. An established IVIVE modeling approach exists for pharmaceuticals (referred to in the present study as IVIVE-Ph) and has recently been adapted for chemical bioaccumulation assessments in fish. The present study proposes and tests an alternative IVIVE-B technique to support bioaccumulation assessment of hydrophobic chemicals with a log octanol-water partition coefficient (K_OW) ≥ 4 in mammals. The IVIVE-B approach requires fewer physiological and physiochemical parameters than the IVIVE-Ph approach and does not involve interconversions between clearance and rate constants in the extrapolation. Using in vitro depletion rates, the results show that the IVIVE-B and IVIVE-Ph models yield similar estimates of rat whole-organism biotransformation rate constants for hypothetical chemicals with log K_OW ≥ 4. The IVIVE-B approach generated in vivo biotransformation rate constants and biomagnification factors (BMFs) for benzo[a]pyrene that are within the range of empirical observations. The proposed IVIVE-B technique may be a useful tool for assessing BMFs of hydrophobic organic chemicals in mammals. Environ Toxicol Chem 2017;36:1934-1946. © 2016 SETAC.

  2. Microsomal and Cytosolic Scaling Factors in Dog and Human Kidney Cortex and Application for In Vitro-In Vivo Extrapolation of Renal Metabolic Clearance

    PubMed Central

    Scotcher, Daniel; Billington, Sarah; Brown, Jay; Jones, Christopher R.; Brown, Colin D. A.; Rostami-Hodjegan, Amin

    2017-01-01

    In vitro-in vivo extrapolation of drug metabolism data obtained in enriched preparations of subcellular fractions relies on robust estimates of physiologically relevant scaling factors for the prediction of clearance in vivo. The purpose of the current study was to measure the microsomal and cytosolic protein per gram of kidney (MPPGK and CPPGK) in dog and human kidney cortex using appropriate protein recovery markers and to evaluate the functional activity of human cortex microsomes. Cytochrome P450 (CYP) content and glucose-6-phosphatase (G6Pase) activity were used as microsomal protein markers, whereas glutathione-S-transferase activity was a cytosolic marker. Functional activity of human microsomal samples was assessed by measuring mycophenolic acid glucuronidation. MPPGK was 33.9 and 44.0 mg/g in dog kidney cortex, and 41.1 and 63.6 mg/g in dog liver (n = 17), using P450 content and G6Pase activity, respectively. No trends were noted between kidney, liver, and intestinal scalars from the same animals. Species differences were evident, as human MPPGK and CPPGK were 26.2 and 53.3 mg/g in kidney cortex (n = 38), respectively. MPPGK was 2-fold greater than the commonly used in vitro-in vivo extrapolation scalar; this difference was attributed mainly to tissue source (mixed kidney regions versus cortex). Robust human MPPGK and CPPGK scalars were measured for the first time. The work emphasized the importance of regional differences (cortex versus whole kidney-specific MPPGK, tissue weight, and blood flow) and the need to account for these to improve assessment of renal metabolic clearance and its extrapolation to in vivo. PMID:28270564
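    A hedged sketch of how a microsomal scalar such as the cortex MPPGK reported here (26.2 mg/g) might enter an IVIVE calculation is shown below; the in vitro intrinsic clearance and cortex mass are hypothetical placeholders, and no binding or blood-flow corrections are applied.

```python
# Hedged sketch: scale an in vitro intrinsic clearance to whole-kidney-cortex
# metabolic clearance with a microsomal protein scalar (MPPGK).
# Only MPPGK (26.2 mg/g cortex) comes from the abstract; the other inputs
# are hypothetical placeholders for illustration.

clint_in_vitro = 5.0      # uL/min per mg microsomal protein (hypothetical)
mppgk = 26.2              # mg microsomal protein per g kidney cortex (abstract)
cortex_weight_g = 200.0   # g, hypothetical human kidney cortex mass

# Organ-level intrinsic clearance, converted from uL/min to L/h
clint_organ_L_per_h = clint_in_vitro * mppgk * cortex_weight_g * 60.0 / 1e6
print(f"Scaled renal intrinsic clearance: {clint_organ_L_per_h:.2f} L/h")
```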

  3. Influence of shot peening on corrosion properties of biocompatible magnesium alloy AZ31 coated by dicalcium phosphate dihydrate (DCPD).

    PubMed

    Mhaede, Mansour; Pastorek, Filip; Hadzima, Branislav

    2014-06-01

    Magnesium alloys are promising materials for biomedical applications because of many outstanding properties such as biodegradability and bioactivity, and because their specific density and Young's modulus are closer to bone than those of commonly used metallic implant materials. Unfortunately, their fatigue properties and low corrosion resistance negatively influence their application possibilities in the field of biomedicine. These problems could be diminished through appropriate surface treatments. This study evaluates the influence of a surface pre-treatment by shot peening and by shot peening plus coating on the corrosion properties of magnesium alloy AZ31. The dicalcium phosphate dihydrate (DCPD) coating was electrochemically deposited in a solution containing 0.1 M Ca(NO3)2, 0.06 M NH4H2PO4 and 10 mL/L of H2O2. The effect of shot peening on the surface properties of the magnesium alloy was evaluated by microhardness and surface roughness measurements. The influence of the shot peening and the dicalcium phosphate dihydrate layer on the electrochemical characteristics of AZ31 magnesium alloy was evaluated by potentiodynamic measurements and electrochemical impedance spectroscopy in 0.9% NaCl solution at a temperature of 22±1°C. The obtained results were analyzed by the Tafel-extrapolation method and the equivalent circuit method. The results showed that the application of the shot peening process followed by DCPD coating improves the corrosion and mechanical properties of the AZ31 surface. Copyright © 2014 Elsevier B.V. All rights reserved.
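    Since the Tafel-extrapolation step itself is central to this collection, a minimal sketch is given below: the linear anodic and cathodic branches of E versus log10|i| are fitted away from E_corr and intersected to obtain i_corr. The ±(50-250) mV fitting window is a common rule of thumb, not a value reported in the study above, and E, i stand for measured polarization data supplied by the user.

```python
import numpy as np

def tafel_extrapolation(E, i, E_corr, lo=0.05, hi=0.25):
    """Estimate i_corr and Tafel slopes from polarization data.

    E, i   : numpy arrays of potential (V) and current density (A/cm^2)
    E_corr : corrosion potential (V)
    lo, hi : overpotential window (V) used for the linear Tafel fits
    """
    logi = np.log10(np.abs(i))
    eta = E - E_corr

    def fit(mask):
        # straight line E = a + b * log10|i| over the selected branch
        b, a = np.polyfit(logi[mask], E[mask], 1)
        return a, b

    a_an, b_an = fit((eta > lo) & (eta < hi))     # anodic branch
    a_ca, b_ca = fit((eta < -lo) & (eta > -hi))   # cathodic branch

    # intersection of the two Tafel lines gives log10(i_corr)
    log_icorr = (a_ca - a_an) / (b_an - b_ca)
    return 10.0 ** log_icorr, b_an, abs(b_ca)     # i_corr, beta_a, beta_c
```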

  4. Predicting treatment effect from surrogate endpoints and historical trials: an extrapolation involving probabilities of a binary outcome or survival to a specific time

    PubMed Central

    Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz

    2011-01-01

    Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732

  5. Hybrid superconducting a.c. current limiter extrapolation 63 kV-1 250 A

    NASA Astrophysics Data System (ADS)

    Tixador, P.; Levêque, J.; Brunet, Y.; Pham, V. D.

    1994-04-01

    Following the development of a.c. superconducting wires, a.c. superconducting current limiters have emerged. These limiters limit fault currents nearly instantaneously, without fault detection or an external trigger, and may be suitable for high voltages. They are based on the natural transition from the superconducting state to the normal, resistive state when the critical current of a superconducting coil is exceeded, which limits the current or triggers the limitation. Our limiter device consists essentially of two copper windings coupled through a saturable magnetic circuit and of a non-inductively wound superconducting coil carrying a reduced current compared to the line current. This design allows a simple superconducting cable and reduced cryogenic losses, but the dielectric stresses are high during faults. A small model (150 V/50 A) has experimentally validated our design. An industrial-scale current limiter is designed, and comparisons between this design and other superconducting current limiters are given.

  6. Quantitative Cross-Species Extrapolation between Humans and Fish: The Case of the Anti-Depressant Fluoxetine

    PubMed Central

    Margiotta-Casaluci, Luigi; Owen, Stewart F.; Cumming, Rob I.; de Polo, Anna; Winter, Matthew J.; Panter, Grace H.; Rand-Weaver, Mariann; Sumpter, John P.

    2014-01-01

    Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at a similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as a test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, equal to and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of a measured internal dose-response effect of a pharmaceutical in fish, hence validating the Read-Across hypothesis applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the

  7. Novel electrochemical method for the characterization of the degree of chirality in chiral polyaniline.

    PubMed

    Feng, Zhang; Li, Ma; Yan, Yang; Jihai, Tang; Xiao, Li; Wanglin, Li

    2013-01-01

    A novel method to indicate the degree of chirality in polyaniline (PANI) was developed. The (D-camphorsulfonic acid)- and (HCl)-PANI-based electrodes exhibited significantly different electrochemical performances in D- and L-alanine (Ala) aqueous solutions, respectively, which can be used for the characterization of the optical activity of chiral PANI. Cyclic voltammetry, Tafel, and open circuit potential measurements of the PANI-based electrodes were carried out in D- and L-Ala electrolyte solutions, respectively. The open circuit potentials under different reaction conditions were analyzed with the Doblhofer model formula, in which [C(+)](poly1)/[C(+)](poly2) was used as a parameter to characterize the degree of chirality in chiral PANI. The results showed that [C(+)](poly1)/[C(+)](poly2) increased with increasing concentrations of (1S)-(+)- and (1R)-(-)-10-camphorsulfonic acid. In addition, we found that an appropriate response time and a lower temperature are necessary to improve the degree of chirality. Copyright © 2012 Wiley Periodicals, Inc.

  8. Method and apparatus for determining minority carrier diffusion length in semiconductors

    DOEpatents

    Goldstein, Bernard; Dresner, Joseph; Szostak, Daniel J.

    1983-07-12

    Method and apparatus are provided for determining the diffusion length of minority carriers in semiconductor material, particularly amorphous silicon, which has a significantly small minority carrier diffusion length, using the constant-magnitude surface-photovoltage (SPV) method. Unmodulated illumination provides the light excitation on the surface of the material to generate the SPV. A manually controlled or automatic servo system maintains a constant predetermined value of the SPV. A vibrating Kelvin-method-type probe electrode couples the SPV to a measurement system. The operating wavelength of an adjustable monochromator is selected, compensating for the wavelength-dependent sensitivity of the photodetector, to measure the illumination intensity (photon flux) on the silicon. Measurements of the relative photon flux for a plurality of wavelengths are plotted against the reciprocal of the optical absorption coefficient of the material. A linear plot of the data points is extrapolated to zero intensity. The negative intercept value on the reciprocal optical absorption coefficient axis of the extrapolated linear plot is the diffusion length of the minority carriers.
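    The extrapolation step of the constant-SPV method reads directly as a linear fit; the sketch below uses synthetic flux and 1/α values purely to illustrate how the diffusion length is taken from the intercept on the 1/α axis.

```python
import numpy as np

# Sketch of the constant-SPV diffusion-length extraction described above:
# plot relative photon flux against 1/alpha, fit a straight line, and take
# the diffusion length from the (negative) intercept on the 1/alpha axis.
# The data points below are synthetic, for illustration only.

inv_alpha = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # 1/alpha, micrometres
flux      = np.array([0.30, 0.40, 0.50, 0.60, 0.70])   # relative photon flux

slope, intercept = np.polyfit(inv_alpha, flux, 1)
# flux = 0 at 1/alpha = -intercept/slope; L is the negative of that intercept
L_diffusion = intercept / slope
print(f"Estimated minority-carrier diffusion length: {L_diffusion:.2f} um")
```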

  9. Improvement of forecast skill for severe weather by merging radar-based extrapolation and storm-scale NWP corrected forecast

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming

    2015-03-01

    The primary objective of this study is to improve the performance of deterministic high resolution rainfall forecasts caused by severe storms by merging an extrapolation radar-based scheme with a storm-scale Numerical Weather Prediction (NWP) model. Effectiveness of Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model named Advanced Regional Prediction System (ARPS) for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Then the bias corrections were performed to improve the forecast accuracy of ARPS forecasts. Finally, the corrected ARPS forecast and radar-based extrapolation were optimally merged by using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS in high spatial resolution of 0.01° × 0.01° and high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for less than 20-min lead times and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for more than 50-min lead times. Bias correction significantly improved ARPS forecasts in terms of MAE and index of agreement, although the CSI of corrected ARPS forecasts was similar to that of the uncorrected ARPS forecasts. Moreover, optimally merging results using hyperbolic tangent weight scheme further improved the forecast accuracy and became more stable.
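    A hyperbolic-tangent weighting of the two forecasts can be sketched as below; the crossover lead time and steepness are assumed values chosen only to illustrate the blending, not parameters quoted in the study.

```python
import numpy as np

# Illustrative hyperbolic-tangent blending of a radar-extrapolation nowcast
# with a bias-corrected NWP forecast. The crossover time and steepness are
# assumed values, not parameters reported in the study.

def blend(nowcast, nwp, lead_min, t_cross=40.0, steep=15.0):
    """Weight drifts from the nowcast toward the NWP field as lead time grows."""
    w_nwp = 0.5 * (1.0 + np.tanh((lead_min - t_cross) / steep))
    return (1.0 - w_nwp) * nowcast + w_nwp * nwp

# e.g. gridded rain-rate fields (mm/h) at a 30-minute lead time
nowcast = np.full((4, 4), 6.0)
nwp     = np.full((4, 4), 3.0)
print(blend(nowcast, nwp, lead_min=30.0))
```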

  10. A new method suitable for calculating accurately wetting temperature over a wide range of conditions: Based on the adaptation of continuation algorithm to classical DFT

    NASA Astrophysics Data System (ADS)

    Zhou, Shiqi

    2017-11-01

    A new scheme is put forward to determine the wetting temperature (Tw) by utilizing the adaptation of the arc-length continuation algorithm to classical density functional theory (DFT) used originally by Frink and Salinger, and its advantages are summarized in four points: (i) the new scheme is applicable whether the wetting occurs near a planar or a non-planar surface, whereas a zero contact angle method is considered applicable only to a perfectly flat solid surface, as demonstrated previously and in this work, and is essentially not fit for a non-planar surface. (ii) The new scheme is devoid of an uncertainty that plagues the pre-wetting extrapolation method and originates from the unattainability of an infinitely thick film in the theoretical calculation. (iii) The new scheme can be similarly and easily applied to extreme instances characterized by lower temperatures and/or a stronger surface attraction force field, which, however, cannot be dealt with by the pre-wetting extrapolation method because the pre-wetting transition is mixed with many layering transitions and it is difficult to differentiate the varieties of surface phase transitions. (iv) The new scheme still works in the instance wherein the wetting transition occurs close to the bulk critical temperature; this case cannot be managed at all by the pre-wetting extrapolation method because near the bulk critical temperature the pre-wetting region is extremely narrow, and not enough pre-wetting data are available for use of the extrapolation procedure.

  11. Heuristic method of fabricating counter electrodes in dye-sensitized solar cells based on a PEDOT:PSS layer as a catalytic material

    NASA Astrophysics Data System (ADS)

    Edalati, Sh; Houshangi far, A.; Torabi, N.; Baneshi, Z.; Behjat, A.

    2017-02-01

    Poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) was deposited on a fluorine-doped tin oxide glass substrate using a heuristic method to fabricate platinum-free counter electrodes for dye-sensitized solar cells (DSSCs). In this heuristic method a thin layer of PEDOT:PSS is obtained by spin coating the PEDOT:PSS on a Cu substrate and then removing the substrate with FeCl3. The characteristics of the deposited PEDOT:PSS were studied by energy dispersive x-ray analysis and scanning electron microscopy, which revealed the micro-electronic specifications of the cathode. The aforementioned DSSCs exhibited a solar conversion efficiency of 3.90%, which is far higher than that of DSSCs with pure PEDOT:PSS (1.89%). This enhancement is attributed not only to the micro-electronic specifications but also to the HNO3 treatment in our heuristic method. The results of cyclic voltammetry, electrochemical impedance spectroscopy (EIS) and Tafel polarization plots show that the modified cathode has a dual function, combining excellent conductivity and electrocatalytic activity for iodine reduction.

  12. A Magnetohydrodynamic Simulation of Magnetic Null-point Reconnections in NOAA AR 12192, Initiated with an Extrapolated Non-force-free Field

    NASA Astrophysics Data System (ADS)

    Prasad, A.; Bhattacharyya, R.; Hu, Qiang; Kumar, Sanjay; Nayak, Sushree S.

    2018-06-01

    The magnetohydrodynamics of the solar corona is simulated numerically. The simulation is initialized with an extrapolated non-force-free magnetic field using the vector magnetogram of the active region NOAA 12192, which was obtained from the solar photosphere. Particularly, we focus on the magnetic reconnections (MRs) occurring close to a magnetic null point that resulted in the appearance of circular chromospheric flare ribbons on 2014 October 24 around 21:21 UT, after the peak of an X3.1 flare. The extrapolated field lines show the presence of the three-dimensional (3D) null near one of the polarity-inversion lines—where the flare was observed. In the subsequent numerical simulation, we find MRs occurring near the null point, where the magnetic field lines from the fan plane of the 3D null form a X-type configuration with underlying arcade field lines. The footpoints of the dome-shaped field lines, inherent to the 3D null, show high gradients of the squashing factor. We find slipping reconnections at these quasi-separatrix layers, which are co-located with the post-flare circular brightening observed at chromospheric heights. This demonstrates the viability of the initial non-force-free field, along with the dynamics it initiates. Moreover, the initial field and its simulated evolution are found to be devoid of any flux rope, which is congruent with the confined nature of the flare.

  13. Attenuated Total Reflectance Fourier transform infrared spectroscopy for determination of long chain free fatty acid concentration in oily wastewater using the double wavenumber extrapolation technique

    USDA-ARS?s Scientific Manuscript database

    Long Chain Free Fatty Acids (LCFFAs) from the hydrolysis of fat, oil and grease (FOG) are major components in the formation of insoluble saponified solids known as FOG deposits that accumulate in sewer pipes and lead to sanitary sewer overflows (SSOs). A Double Wavenumber Extrapolative Technique (DW...

  14. Evolution of the Active Region NOAA 12443 based on magnetic field extrapolations: preliminary results

    NASA Astrophysics Data System (ADS)

    Chicrala, André; Dallaqua, Renato Sergio; Antunes Vieira, Luis Eduardo; Dal Lago, Alisson; Rodríguez Gómez, Jenny Marcela; Palacios, Judith; Coelho Stekel, Tardelli Ronan; Rezende Costa, Joaquim Eduardo; da Silva Rockenbach, Marlos

    2017-10-01

    The behavior of Active Regions (ARs) is directly related to the occurrence of some remarkable phenomena on the Sun, such as solar flares or coronal mass ejections (CMEs). In this sense, changes in the magnetic field of the region can be used to uncover other relevant features, like the evolution of the AR magnetic structure and the plasma flow related to it. In this work we describe the evolution of the magnetic structure of the active region AR NOAA 12443, observed from 2015/10/30 to 2015/11/10, which may be associated with several X-ray flares of classes C and M. The analysis is based on observations of the solar surface and atmosphere provided by the HMI and AIA instruments on board the SDO spacecraft. In order to investigate the magnetic energy buildup and release of the ARs, we employ potential and linear force-free extrapolations based on the solar surface magnetic field distribution and the photospheric velocity fields.

  15. Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method

    NASA Astrophysics Data System (ADS)

    Taitano, William; Knoll, Dana; Chacon, Luis

    2009-11-01

    The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO
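    As an orientation to the Jacobian-free Newton-Krylov idea, the sketch below solves a small nonlinear boundary-value problem with SciPy's newton_krylov, which approximates Jacobian-vector products by finite differences; the toy residual stands in for, and is far simpler than, the coupled Vlasov-field system discussed above.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Minimal Jacobian-free Newton-Krylov usage: only the nonlinear residual is
# supplied; Jacobian-vector products are approximated internally, so the full
# Jacobian is never formed.

def residual(u):
    # discrete -u'' + u^3 = 1 on a uniform grid with u = 0 at both ends
    n = u.size
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0

u0 = np.zeros(50)                       # initial guess
sol = newton_krylov(residual, u0, f_tol=1e-10)
print(sol.max())
```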

  16. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  17. Modeling an exhumed basin: A method for estimating eroded overburden

    USGS Publications Warehouse

    Poelchau, H.S.

    2001-01-01

    The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and an estimate of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated against maximum depth of burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or the amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
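    A Magara-style version of the compaction-trend extrapolation can be sketched as follows; the depths, transit times, and assumed uncompacted shale value are illustrative placeholders, not data from the Alberta Deep Basin study.

```python
import numpy as np

# Sketch of a Magara-style erosion estimate: fit an exponential shale
# compaction trend ln(dt) = ln(dt_surf) + c*z to sonic transit times from a
# well, extrapolate the trend upward to an assumed uncompacted value dt0,
# and read the eroded overburden from the depth shift needed to reach it.

depth = np.array([800., 1200., 1600., 2000., 2400.])    # m below present surface
dt    = np.array([135., 118., 104., 92., 81.])           # shale sonic transit time, us/ft

c, ln_dt_surf = np.polyfit(depth, np.log(dt), 1)          # slope c < 0, intercept at z = 0
dt0 = 200.0                                               # assumed uncompacted shale value, us/ft

# depth (negative means above the present surface) where the trend reaches dt0
z_dt0 = (np.log(dt0) - ln_dt_surf) / c
erosion = -z_dt0 if z_dt0 < 0 else 0.0
print(f"Estimated eroded overburden: {erosion:.0f} m")
```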

  18. Extrapolating the Acute Behavioral Effects of Toluene from 1-Hour to 24-Hour Exposures in Rats: Roles of Dose Metric, and Metabolic and Behavioral Tolerance.

    EPA Science Inventory

    Recent research on the acute effects of volatile organic compounds (VOCs) suggests that extrapolation from short (~1 h) to long durations (up to 4 h) may be improved by using estimates of brain toluene concentration (Br[Tol]) instead of cumulative inhaled dose (C x t) as a metri...

  19. Introduction to Numerical Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoonover, Joseph A.

    2016-06-14

    These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. They give an introduction to numerical methods. Repetitive algorithms are used to obtain approximate solutions to mathematical problems, covering sorting, searching, root finding, optimization, interpolation, extrapolation, least squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm, and they introduce errors that can lead to numerical instabilities if we are not careful.
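    As one concrete instance of the extrapolation topic listed above, the sketch below applies Richardson extrapolation to a centred finite-difference derivative, cancelling the leading O(h²) error term; the test function is arbitrary.

```python
import math

# Richardson extrapolation: combine two centred-difference estimates with
# step sizes h and h/2 to cancel the leading O(h^2) error term.

def dfdx_centred(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def dfdx_richardson(f, x, h):
    d_h  = dfdx_centred(f, x, h)
    d_h2 = dfdx_centred(f, x, h / 2.0)
    return (4.0 * d_h2 - d_h) / 3.0      # O(h^4) accurate

print(dfdx_centred(math.sin, 1.0, 0.1),
      dfdx_richardson(math.sin, 1.0, 0.1),
      math.cos(1.0))                     # exact value for comparison
```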

  20. A Study on the Copper Effect on gold leaching in copper-ethanediamine-thiosulphate solutions

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Xiang, Pengzhi; Huang, Yao

    2018-01-01

    Simple, fast and sensitive square-wave voltammetry (SWV), cyclic voltammetry (CV) and Tafel methods are used in this paper for the determination of the various factors affecting gold leaching in thiosulphate solution. We present our study on the effect of copper(II) on the leaching of gold in thiosulphate solutions. The current study aims to establish the interaction of copper in the leaching process by electrochemical methods.

  1. Extrapolation of earth-based solar irradiance measurements to exoatmospheric levels for broad-band and selected absorption-band observations

    NASA Technical Reports Server (NTRS)

    Reagan, John A.; Pilewskie, Peter A.; Scott-Fleming, Ian C.; Herman, Benjamin M.; Ben-David, Avishai

    1987-01-01

    Techniques for extrapolating earth-based spectral band measurements of directly transmitted solar irradiance to equivalent exoatmospheric signal levels were used to aid in determining system gain settings of the Halogen Occultation Experiment (HALOE) sunsensor being developed for the NASA Upper Atmosphere Research Satellite and for the Stratospheric Aerosol and Gas (SAGE) 2 instrument on the Earth Radiation Budget Satellite. A band transmittance approach was employed for the HALOE sunsensor which has a broad-band channel determined by the spectral responsivity of a silicon detector. A modified Langley plot approach, assuming a square-root law behavior for the water vapor transmittance, was used for the SAGE-2 940 nm water vapor channel.
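    The Langley extrapolation underlying these techniques can be sketched as a regression of the logarithm of the measured signal on air mass, extrapolated to zero air mass; the signal values below are synthetic, and the modified (square-root) variant used for the water-vapour channel is only noted in a comment.

```python
import numpy as np

# Standard Langley extrapolation: regress ln(signal) on air mass and
# extrapolate to zero air mass to recover the exoatmospheric signal V0.
# The modified Langley approach mentioned above would use sqrt(air mass)
# for the water-vapour transmittance term instead.

airmass = np.array([2.0, 3.0, 4.0, 5.0, 6.0])          # relative air mass
V       = np.array([0.82, 0.74, 0.67, 0.61, 0.55])      # synthetic detector signal

slope, lnV0 = np.polyfit(airmass, np.log(V), 1)
V0 = np.exp(lnV0)          # extrapolated exoatmospheric signal
tau = -slope               # total optical depth at this wavelength
print(f"V0 = {V0:.3f}, optical depth = {tau:.3f}")
```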

  2. Studying the Transient Thermal Contact Conductance Between the Exhaust Valve and Its Seat Using the Inverse Method

    NASA Astrophysics Data System (ADS)

    Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid

    2016-02-01

    In this study, the experiments aimed at thermally analyzing the exhaust valve in an air-cooled internal combustion engine and at estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using the methods of linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of the linear extrapolation and inverse methods have similar trends, and based on the error analysis, they are accurate enough to estimate the thermal contact conductance. Moreover, according to the error analysis, a linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on the thermal contact conductance have been investigated. The results show that increasing the contact pressure increases the thermal contact conductance substantially. In addition, increasing the engine speed decreases the thermal contact conductance. On the other hand, boosting the air speed increases the thermal contact conductance, while raising the heat flux reduces it. The average calculated error equals 12.9%.

  3. The importance of inclusion of kinetic information in the extrapolation of high-to-low concentrations for human limit setting.

    PubMed

    Geraets, Liesbeth; Zeilmaker, Marco J; Bos, Peter M J

    2018-01-05

    Human health risk assessment of inhalation exposures generally includes a high-to-low concentration extrapolation. Although this is a common step in human risk assessment, it introduces various uncertainties. One of these uncertainties is related to the toxicokinetics. Many kinetic processes such as absorption, metabolism or excretion can be subject to saturation at high concentration levels. In the presence of saturable kinetic processes of the parent compound or metabolites, disproportionate increases in internal blood or tissue concentration relative to the external concentration administered may occur, resulting in nonlinear kinetics. The present paper critically reviews human health risk assessment of inhalation exposure. More specifically, it emphasizes the importance of kinetic information for the determination of a safe exposure in human risk assessment of inhalation exposures assessed by conversion from a high animal exposure to a low exposure in humans. For two selected chemicals, i.e. methyl tert-butyl ether and 1,2-dichloroethane, PBTK modelling was used, for illustrative purposes, to follow the extrapolation and conversion steps as performed in existing risk assessments for these chemicals. Human health-based limit values based on an external dose metric without sufficient knowledge of kinetics might be too high to be sufficiently protective. Insight into the actual internal exposure, the toxic agent, the appropriate dose metric, and whether an effect is related to internal concentration or dose is important. Without this, application of assessment factors to an external dose metric and the conversion to continuous exposure results in an uncertain human health risk assessment of inhalation exposures. Copyright © 2017 Elsevier B.V. All rights reserved.
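    The nonlinearity the authors warn about can be illustrated with a toy one-compartment model with Michaelis-Menten elimination, shown below; all parameters are hypothetical, and the sketch is not the PBTK model used for methyl tert-butyl ether or 1,2-dichloroethane in the paper.

```python
import numpy as np

# Toy illustration of why saturable metabolism breaks linear high-to-low
# extrapolation: steady-state internal concentration for a one-compartment
# model with Michaelis-Menten elimination. Parameters are hypothetical.

Vmax, Km, k_in = 10.0, 1.0, 1.0    # umol/h, umol/L, L/h (hypothetical)

def css(c_ext):
    """Solve k_in*c_ext = Vmax*C/(Km+C) for the internal steady state C."""
    uptake = k_in * c_ext
    if uptake >= Vmax:
        return np.inf                  # elimination saturates: no steady state
    return Km * uptake / (Vmax - uptake)

# internal concentration rises disproportionately as uptake approaches Vmax
for c_ext in (0.5, 1.0, 2.0, 5.0, 9.0):
    print(c_ext, round(css(c_ext), 3))
```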

  4. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.

  5. Accurate bond energies of hydrocarbons from complete basis set extrapolated multi-reference singles and doubles configuration interaction.

    PubMed

    Oyeyemi, Victor B; Pavone, Michele; Carter, Emily A

    2011-12-09

    Quantum chemistry has become one of the most reliable tools for characterizing the thermochemical underpinnings of reactions, such as bond dissociation energies (BDEs). The accurate prediction of these particular properties (BDEs) is challenging for ab initio methods based on perturbative corrections or coupled cluster expansions of the single-determinant Hartree-Fock wave function: the processes of bond breaking and forming are inherently multi-configurational and require an accurate description of non-dynamical electron correlation. To this end, we present a systematic ab initio approach for computing BDEs that is based on three components: 1) multi-reference single and double excitation configuration interaction (MRSDCI) for the electronic energies; 2) a two-parameter scheme for extrapolating MRSDCI energies to the complete basis set limit; and 3) DFT-B3LYP calculations of minimum-energy structures and vibrational frequencies to account for zero point energy and thermal corrections. We validated our methodology against a set of reliable experimental BDE values of CC and CH bonds of hydrocarbons. The goal of chemical accuracy is achieved, on average, without applying any empirical corrections to the MRSDCI electronic energies. We then use this composite scheme to make predictions of BDEs in a large number of hydrocarbon molecules for which there are no experimental data, so as to provide needed thermochemical estimates for fuel molecules. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals

    NASA Astrophysics Data System (ADS)

    Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling

    2018-04-01

    An efficient numerical algorithm is the key to accurate evaluation of the density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method, but it was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to arbitrary Bravais lattices and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals with topological degeneracies.

  7. Chiral Extrapolations of the ρ(770) Meson in N_f = 2+1 Lattice QCD Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina, Raquel; Hu, Bitao; Doering, Michael

    Several lattice QCD simulations of meson-meson scattering in p-wave and isospin I = 1 with Nf = 2 + 1 flavours have been carried out recently. Unitarized Chiral Perturbation Theory is used to perform extrapolations to the physical point. In contrast to previous findings from analyses of Nf = 2 lattice data, where most of the data seem to be in agreement, some discrepancies are detected in the Nf = 2 + 1 lattice data analyses, which could be due to different masses of the strange quark, meson decay constants, initial constraints in the simulation, or other lattice artifacts. In addition, the low-energy constants are compared to the ones from a recent analysis of Nf = 2 lattice data.

  8. Possibilities and limitations of the kinetic plot method in supercritical fluid chromatography.

    PubMed

    De Pauw, Ruben; Desmet, Gert; Broeckhoven, Ken

    2013-08-30

    Although supercritical fluid chromatography (SFC) is becoming a technique of increasing importance in the field of analytical chromatography, methods to compare the performance of SFC columns and separations in an unbiased way are not fully developed. The present study uses mathematical models to investigate the possibilities and limitations of the kinetic plot method in SFC, as this easily allows investigation of a wide range of operating pressures, retention and mobile phase conditions. The variable column length (L) kinetic plot method was further investigated in this work. Since the pressure history is identical for each measurement, this method gives the true kinetic performance limit in SFC. The deviations of the traditional way of measuring the performance as a function of flow rate (fixed back pressure and column length) and of the isopycnic method with respect to this variable column length method were investigated under a wide range of operational conditions. It is found that, using the variable L method, extrapolations towards other pressure drops are not valid in SFC (deviation of ~15% for extrapolation from 50 to 200 bar pressure drop). The isopycnic method provides the best prediction, but its use is limited when operating closer to critical point conditions. When an organic modifier is used, the predictions are improved for both methods with respect to the variable L method (e.g. deviations decrease from 20% to 2% when 20 mol% of methanol is added). Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Kinetic Monte Carlo simulations of water ice porosity: extrapolations of deposition parameters from the laboratory to interstellar space.

    PubMed

    Clements, Aspen R; Berk, Brandon; Cooke, Ilsa R; Garrod, Robin T

    2018-02-21

    Dust grains in cold, dense interstellar clouds build up appreciable ice mantles through the accretion and subsequent surface chemistry of atoms and molecules from the gas. These mantles, of thicknesses on the order of 100 monolayers, are primarily composed of H2O, CO, and CO2. Laboratory experiments using interstellar ice analogues have shown that porosity could be present and can facilitate diffusion of molecules along the inner pore surfaces. However, the movement of molecules within and upon the ice is poorly described by current chemical kinetics models, making it difficult either to reproduce the formation of experimental porous ice structures or to extrapolate generalized laboratory results to interstellar conditions. Here we use the off-lattice Monte Carlo kinetics model MIMICK to investigate the effects that various deposition parameters have on laboratory ice structures. The model treats molecules as isotropic spheres of a uniform size, using a Lennard-Jones potential. We reproduce experimental trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature; ice density decreases when the incident angle or deposition rate is increased, while increasing temperature results in a more-compact water ice. The models indicate that the density behaviour at higher temperatures (≥80 K) is dependent on molecular rearrangement resulting from thermal diffusion. To reproduce trends at lower temperatures, it is necessary to take account of non-thermal diffusion by newly-adsorbed molecules, which bring kinetic energy both from the gas phase and from their acceleration into a surface binding site. Extrapolation of the model to conditions appropriate to protoplanetary disks, in which direct accretion of water from the gas phase may be the dominant ice formation mechanism, indicates that these ices may be less porous than laboratory ices.

  10. Methodology Of PACS Effectiveness Evaluation As Part Of A Technology Assessment. The Dutch PACS Project Extrapolated.

    NASA Astrophysics Data System (ADS)

    Andriessen, J. H. T. H.; van der Horst-Bruinsma, I. E.; ter Haar Romeny, B. M.

    1989-05-01

    The present phase of the clinical evaluation within the Dutch PACS project mainly focuses on the development and evaluation of a PACSystem for a few departments in the Utrecht University Hospital (UUH). A report on the first clinical experiences and a detailed cost/savings analysis of the PACSystem in the UUH are presented elsewhere. However, an assessment of the wider financial and organizational implications for hospitals and for the health sector is also needed. To this end a model for (financial) cost assessment of PACSystems is being developed by BAZIS. Learning from the actual pilot implementation in UUH, we realized that general Technology Assessment (TA) also calls for an extrapolation of the medical and organizational effects. After a short excursion into the various approaches towards TA, this paper discusses the (inter)organizational dimensions relevant to the development of the necessary extrapolation models.

  11. Extrapolation of Earth-based solar irradiance measurements to exoatmospheric levels for broad-band and selected absorption-band observations

    NASA Technical Reports Server (NTRS)

    Reagan, J. A.; Pilewskie, P. A.; Scott-Fleming, I. C.; Hermann, B. M.

    1986-01-01

    Techniques for extrapolating Earth-based spectral band measurements of directly transmitted solar irradiance to equivalent exoatmospheric signal levels were used to aid in determining system gain settings of the Halogen Occultation Experiment (HALOE) sunsensor system being developed for the NASA Upper Atmosphere Research Satellite and for the Stratospheric Aerosol and Gas (SAGE) 2 instrument on the Earth Radiation Budget Satellite. A band transmittance approach was employed for the HALOE sunsensor which has a broad-band channel determined by the spectral responsivity of a silicon detector. A modified Langley plot approach, assuming a square-root law behavior for the water vapor transmittance, was used for the SAGE-2 940 nm water vapor channel.

  12. Comparison between amperometric and true potentiometric end-point detection in the determination of water by the Karl Fischer method.

    PubMed

    Cedergren, A

    1974-06-01

    A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. By using kinetic data it was possible to calculate the course of titrations and make comparisons with those found experimentally. The results prove that the main reaction is the slow step, both in the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml, the amperometric method using extrapolation 0.002(4) mg of water per ml and the amperometric titration to a pre-selected diffusion current 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric and 4-5 min for the amperometric method using extrapolation.

  13. Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-02-01

    We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.
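
    The extrapolation idea can be illustrated, in a heavily simplified form, by a first-order Taylor expansion of the logarithm of the particle-number distribution in inverse temperature, using energy averages collected at the simulated temperature. The expression and variable names below are an illustrative sketch, not the authors' working equations:

```python
import numpy as np

def extrapolate_lnPi(lnPi, avg_U, N, beta0, beta1, mu):
    """First-order extrapolation of ln Pi(N; beta0) to beta1.

    Assumes d lnPi/d beta ~ mu*N - <U>_N up to an N-independent constant,
    which is an illustrative form only.
    """
    dlnPi = mu * N - avg_U
    lnPi1 = lnPi + (beta1 - beta0) * dlnPi
    return lnPi1 - np.max(lnPi1)        # shift for numerical convenience

# Toy inputs: a small macrostate distribution and its average energies
N = np.arange(0, 5)
lnPi0 = -0.1 * (N - 2.0) ** 2
avg_U = -1.5 * N
print(extrapolate_lnPi(lnPi0, avg_U, N, beta0=1.0, beta1=1.2, mu=-2.0))
```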

  14. Measurement of absorbed dose with a bone-equivalent extrapolation chamber.

    PubMed

    DeBlois, François; Abdel-Rahman, Wamied; Seuntjens, Jan P; Podgorsak, Ervin B

    2002-03-01

    A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC by 0.7% to approximately 2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.

  15. [Scenario analysis--a method for long-term planning].

    PubMed

    Stavem, K

    2000-01-10

    Scenarios are known from the film industry as detailed descriptions of films. This has given its name to scenario analysis, a method for long-term planning that uses descriptions of composite pictures of the future. This article is an introduction to the scenario method. Scenarios describe plausible, not necessarily probable, developments. They focus on problems and questions that decision makers must be aware of and prepare to deal with, and on the consequences of alternative decisions. Scenarios are used in corporate and governmental planning, and they can be a useful complement to traditional planning and extrapolation of past experience. The method is particularly useful in a rapidly changing world with shifting external conditions.

  16. A linear and non-linear polynomial neural network modeling of dissolved oxygen content in surface water: Inter- and extrapolation performance with inputs' significance analysis.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-01-01

    Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, beside interpolation, were the determination of the extrapolation performance of an ANN model, which was developed for the prediction of DO content in the Danube River, and the assessment of the relationship between the significance of inputs and prediction error in the presence of values which were out of the range of training. The applied ANN is a polynomial neural network (PNN) which performs embedded selection of the most important inputs during learning, and provides a model in the form of linear and non-linear polynomial functions, which can then be used for a detailed analysis of the significance of inputs. The available dataset that contained 1912 monitoring records for 17 water quality parameters was split into a "regular" subset that contains normally distributed and low-variability data, and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R2 = 0.82), but it was not robust in extrapolation (R2 = 0.63). The analysis of extrapolation results has shown that the prediction errors are correlated with the significance of inputs. Namely, the out-of-training-range values of the inputs with low importance do not affect significantly the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. It was observed that DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over p

  17. On the equivalence of LIST and DIIS methods for convergence acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garza, Alejandro J.; Scuseria, Gustavo E.

    2015-04-28

    Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. We, here, demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay’s DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
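
    Pulay's DIIS, to which the LIST family is shown to be equivalent, extrapolates a new trial vector as a linear combination of previous iterates whose coefficients minimize the norm of the combined error vectors subject to the constraint that they sum to one. A generic, self-contained sketch of that least-squares step (the error vectors here are arbitrary arrays, not tied to any particular SCF code):

```python
import numpy as np

def diis_coefficients(error_vectors):
    """Solve the DIIS system: minimise |sum_i c_i e_i|^2 subject to sum_i c_i = 1."""
    m = len(error_vectors)
    B = np.empty((m + 1, m + 1))
    B[-1, :] = -1.0
    B[:, -1] = -1.0
    B[-1, -1] = 0.0
    for i, ei in enumerate(error_vectors):
        for j, ej in enumerate(error_vectors):
            B[i, j] = np.dot(ei, ej)
    rhs = np.zeros(m + 1)
    rhs[-1] = -1.0
    return np.linalg.solve(B, rhs)[:m]

def diis_extrapolate(trial_vectors, error_vectors):
    """Return the DIIS-extrapolated trial vector (e.g. a flattened Fock matrix)."""
    c = diis_coefficients(error_vectors)
    return sum(ci * ti for ci, ti in zip(c, trial_vectors))

# Toy usage with made-up iterates and error vectors
trials = [np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.3, 0.1]), np.array([0.9, 0.1, 0.05])]
errors = [np.array([0.4, -0.2, 0.1]), np.array([-0.3, 0.25, -0.05]), np.array([0.1, -0.05, 0.02])]
print(diis_extrapolate(trials, errors))
```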

  18. Extrapolation of plasma clearance to understand species differences in toxicokinetics of bisphenol A.

    PubMed

    Poet, Torka; Hays, Sean

    2017-10-13

    1. Understanding species differences in the toxicokinetics of bisphenol A (BPA) is central to setting acceptable exposure limits for human exposures to BPA. BPA toxicokinetics have been well studied, with controlled oral dosing studies in several species and across a wide dose range. 2. We analyzed the available toxicokinetic data for BPA following oral dosing to assess potential species differences and dose dependencies. BPA is rapidly conjugated and detoxified in all species. The toxicokinetics of BPA can be well described using non-compartmental analyses. 3. Several studies measured free (unconjugated) BPA in blood and reported area under the curve (AUC) of free BPA in blood of mice, rats, monkeys, chimpanzees and humans following controlled oral doses. Extrinsic clearance was calculated and analyzed across species and dose using allometric scaling. 4. The results indicate free BPA clearance is well described using allometric scaling with high correlation coefficients across all species and doses up to 10 mg/kg. The results indicate a human equivalent dose factor (HEDf) of 0.9 is appropriate for extrapolating a point of departure from mice and rats to a human equivalent dose (HED), thereby replacing default uncertainty factors for animal to human toxicokinetics.
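
    The allometric scaling referred to above has the power-law form CL = a·BW^b, so fitting log clearance against log body weight across species and evaluating the fit at a human body weight is a one-line regression. The numbers below are placeholders for illustration, not the study's data:

```python
import numpy as np

# Hypothetical (body weight in kg, clearance in L/h) pairs for several species
bw = np.array([0.03, 0.25, 5.0, 60.0])       # mouse, rat, monkey, chimpanzee (illustrative)
cl = np.array([0.09, 0.55, 7.5, 70.0])       # illustrative clearance values

# Allometric fit: log CL = log a + b * log BW
b, log_a = np.polyfit(np.log(bw), np.log(cl), 1)
print("allometric exponent b ~", round(b, 2))

# Clearance predicted for a 70 kg human from the cross-species fit
print("predicted human CL ~", round(np.exp(log_a) * 70.0 ** b, 1), "L/h")
```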

  19. Direct optical band gap measurement in polycrystalline semiconductors: A critical look at the Tauc method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolgonos, Alex; Mason, Thomas O.; Poeppelmeier, Kenneth R., E-mail: krp@northwestern.edu

    2016-08-15

    The direct optical band gap of semiconductors is traditionally measured by extrapolating the linear region of the square of the absorption curve to the x-axis, and a variation of this method, developed by Tauc, has also been widely used. The application of the Tauc method to crystalline materials is rooted in misconception, and traditional linear extrapolation methods are inappropriate for use on degenerate semiconductors, where the occupation of conduction-band energy states cannot be ignored. A new method is proposed for extracting a direct optical band gap from absorption spectra of degenerately-doped bulk semiconductors. This method was applied to pseudo-absorption spectra of Sn-doped In{sub 2}O{sub 3} (ITO), converted from diffuse-reflectance measurements on bulk specimens. The results of this analysis were corroborated by room-temperature photoluminescence excitation measurements, which yielded values of optical band gap and Burstein-Moss shift that are consistent with previous studies on In{sub 2}O{sub 3} single crystals and thin films. Highlights: • The Tauc method of band gap measurement is re-evaluated for crystalline materials. • Graphical method proposed for extracting optical band gaps from absorption spectra. • The proposed method incorporates an energy broadening term for energy transitions. • Values for ITO were self-consistent between two different measurement methods.
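
    The traditional construction that this record critiques plots (αhν)² against photon energy and extrapolates its linear region to the energy axis; below is a minimal sketch of that baseline extrapolation on synthetic data (it illustrates the method under discussion, not the broadening-corrected approach proposed in the paper):

```python
import numpy as np

def direct_gap_extrapolation(hv, alpha, fit_window):
    """Extrapolate the linear part of (alpha*hv)^2 versus hv to the energy axis."""
    y = (alpha * hv) ** 2
    mask = (hv >= fit_window[0]) & (hv <= fit_window[1])
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope            # photon energy where the fitted line crosses zero

# Synthetic direct-gap absorption edge with Eg = 3.6 eV (illustrative)
hv = np.linspace(3.3, 4.2, 200)
alpha = np.sqrt(np.clip(hv - 3.6, 0.0, None)) / hv
print("extracted gap ~", round(direct_gap_extrapolation(hv, alpha, (3.8, 4.1)), 2), "eV")
```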

  20. High throughput method to characterize acid-base properties of insoluble drug candidates in water.

    PubMed

    Benito, D E; Acquaviva, A; Castells, C B; Gagliardi, L G

    2018-05-30

    In drug design, experimental characterization of acidic groups in candidate molecules is one of the more important steps prior to in-vivo studies. Potentiometry combined with Yasuda-Shedlovsky extrapolation is one of the more important strategies for studying drug candidates with low solubility in water, although it requires a large number of sequences to determine pKa values at different solvent-mixture compositions and finally obtain the pKa in water (pwwKa) by extrapolation. We have recently proposed a method which requires only two sequences of additions to study the effect of organic solvent content in liquid chromatography mobile phases on the acidity of the buffer compounds usually dissolved in them over wide ranges of compositions. In this work we propose to apply this method to study the thermodynamic pwwKa of drug candidates with low solubilities in pure water. Using methanol/water solvent mixtures we study six pharmaceutical drugs at 25 °C. Four of them, ibuprofen, salicylic acid, atenolol and labetalol, were chosen as members of the carboxylic acid, amine and phenol families. Since these compounds have known pwwKa values, they were used to validate the procedure, to assess the accuracy of the Yasuda-Shedlovsky and other empirical models in fitting the behaviors, and to obtain pwwKa by extrapolation. Finally, the method is applied to determine the unknown thermodynamic pwwKa values of two pharmaceutical drugs: atorvastatin calcium and the two dissociation constants of ethambutol. The procedure proved to be simple, very fast and accurate in all of the studied cases. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Investigation of anticorrosion properties of nanocomposites of spray coated zinc oxide and titanium dioxide thin films on stainless steel (304L SS) in saline environment

    NASA Astrophysics Data System (ADS)

    P, Muhamed Shajudheen V.; S, Saravana Kumar; V, Senthil Kumar; Maheswari A, Uma; M, Sivakumar; Rani K, Anitha

    2018-01-01

    The present study reports the anticorrosive nature of nanocomposite thin films of zinc oxide and titanium dioxide on a steel substrate (304L SS) prepared using a spray coating method. The morphology and chemical constituents of the nanocomposite thin film were characterized by field emission scanning electron microscopy and energy dispersive analysis of x-rays (EDAX). From the EDAX studies, it was observed that nanocomposite coatings of the desired stoichiometry can be synthesized using the present coating technique. Electrochemical techniques, namely Tafel analysis and electrochemical impedance spectroscopy (EIS), were used to study the anticorrosion properties of the coatings. The Ecorr values obtained from Tafel polarization curves of the samples coated with nanocomposites of ZnO and TiO2 in different ratios (5:1, 1:1 and 1:5) indicated that the corrosion resistance was improved compared to bare steel. The coating resistance values obtained from the Nyquist plot after fitting with an equivalent circuit confirmed the improved anticorrosion performance of the coated samples. The sample coated with ZnO:TiO2 in the ratio 1:5 showed better corrosion resistance compared to the other ratios. The Tafel and EIS studies were repeated after exposure to 5% NaCl for 390 h and the results indicated the anticorrosive nature of the coating in the aggressive environment. The root mean square deviation of surface roughness values calculated from the AFM images before and after salt spray indicated the stability of the coating in the saline environment.
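
    Tafel analysis of polarization data such as that described here extrapolates the linear semi-logarithmic anodic and cathodic branches back to the corrosion potential to estimate the corrosion current density. A generic sketch on synthetic Butler-Volmer-like data follows (all parameter values are illustrative, not measurements from this study):

```python
import numpy as np

# Synthetic polarization curve (illustrative parameters)
E_corr, i_corr = -0.45, 1e-6              # corrosion potential (V) and current density (A/cm^2)
ba, bc = 0.060, 0.120                     # anodic and cathodic Tafel slopes (V/decade)
E = np.linspace(E_corr - 0.25, E_corr + 0.25, 500)
eta = E - E_corr
i_net = i_corr * (10 ** (eta / ba) - 10 ** (-eta / bc))

# Fit each branch in its Tafel region (|eta| > ~50 mV) and read off slope and i_corr
an, ca = eta > 0.05, eta < -0.05
sa, ka = np.polyfit(eta[an], np.log10(i_net[an]), 1)
sc, kc = np.polyfit(eta[ca], np.log10(-i_net[ca]), 1)
print("anodic slope ~", round(1 / sa, 3), "V/decade, i_corr ~", 10 ** ka, "A/cm^2")
print("cathodic slope ~", round(-1 / sc, 3), "V/decade, i_corr ~", 10 ** kc, "A/cm^2")
```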

  2. Effective-range function methods for charged particle collisions

    NASA Astrophysics Data System (ADS)

    Gaspard, David; Sparenberg, Jean-Marc

    2018-04-01

    Different versions of the effective-range function method for charged particle collisions are studied and compared. In addition, a novel derivation of the standard effective-range function is presented from the analysis of Coulomb wave functions in the complex plane of the energy. The recently proposed effective-range function denoted as Δℓ [Ramírez Suárez and Sparenberg, Phys. Rev. C 96, 034601 (2017), 10.1103/PhysRevC.96.034601] and an earlier variant [Hamilton et al., Nucl. Phys. B 60, 443 (1973), 10.1016/0550-3213(73)90193-4] are related to the standard function. The potential interest of Δℓ for the study of low-energy cross sections and weakly bound states is discussed in the framework of the proton-proton 1S0 collision. The resonant state of the proton-proton collision is successfully computed from the extrapolation of Δℓ instead of the standard function. It is shown that interpolating Δℓ can lead to useful extrapolation to negative energies, provided scattering data are known below one nuclear Rydberg energy (12.5 keV for the proton-proton system). This property is due to the connection between Δℓ and the effective-range function by Hamilton et al. that is discussed in detail. Nevertheless, such extrapolations to negative energies should be used with caution because Δℓ is not analytic at zero energy. The expected analytic properties of the main functions are verified in the complex energy plane by graphical color-based representations.

  3. Investigative and extrapolative role of microRNAs' genetic expression in breast carcinoma.

    PubMed

    Usmani, Ambreen; Shoro, Amir Ali; Shirazi, Bushra; Memon, Zahida

    2016-01-01

    MicroRNAs (miRs) are non-coding ribonucleic acids consisting of about 18-22 nucleotide bases. Expression of several miRs can be altered in breast carcinomas in comparison to healthy breast tissue, or between various subtypes of breast cancer. They are regulated as either oncogenes or tumor suppressors, which shows that their expression is misrepresented in cancers. Some miRs are specifically associated with breast cancer and are affected by cancer-restricted signaling pathways, e.g. downstream of estrogen receptor-α or HER2/neu. The connection of multiple miRs with breast cancer, and the fact that most of these post-transcript structures may transform complex functional networks of mRNAs, identify them as potential investigative, extrapolative and predictive tumor markers, as well as possible targets for treatment. The investigative tools that are currently available are RNA-based molecular techniques. An additional advantage related to miRs in oncology is that they are remarkably stable and are notably detectable in serum and plasma. A literature search was performed using the PubMed database; the keywords used were microRNA (52 searches) AND breast cancer (169 searches). PERN was accessed through the Bahria University database; this included literature and articles from international sources, and 2 articles from Pakistan on this topic were consulted (one in an international journal and one in a local journal). Of these, 49 articles were shortlisted which discussed the relation of microRNA genetic expression to breast cancer. These articles were consulted for this review.

  4. 26 CFR 1.263A-7 - Changing a method of accounting under section 263A.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... extrapolation, rather than based on the facts and circumstances of a particular year's data. All three methods... analyze the production and resale data for that particular year and apply the rules and principles of... books and records, actual financial and accounting data which is required to apply the capitalization...

  5. 26 CFR 1.263A-7 - Changing a method of accounting under section 263A.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... extrapolation, rather than based on the facts and circumstances of a particular year's data. All three methods... analyze the production and resale data for that particular year and apply the rules and principles of... books and records, actual financial and accounting data which is required to apply the capitalization...

  6. Standard electrode potential, Tafel equation, and the solvation thermodynamics.

    PubMed

    Matyushov, Dmitry V

    2009-06-21

    Equilibrium in the electronic subsystem across the solution-metal interface is considered to connect the standard electrode potential to the statistics of localized electronic states in solution. We argue that a correct derivation of the Nernst equation for the electrode potential requires a careful separation of the relevant time scales. An equation for the standard metal potential is derived linking it to the thermodynamics of solvation. The Anderson-Newns model for electronic delocalization between the solution and the electrode is combined with a bilinear model of solute-solvent coupling introducing nonlinear solvation into the theory of heterogeneous electron transfer. We therefore are capable of addressing the question of how nonlinear solvation affects electrochemical observables. The transfer coefficient of electrode kinetics is shown to be equal to the derivative of the free energy, or generalized force, required to shift the unoccupied electronic level in the bulk. The transfer coefficient thus directly quantifies the extent of nonlinear solvation of the redox couple. The current model allows the transfer coefficient to deviate from the value of 0.5 of the linear solvation models at zero electrode overpotential. The electrode current curves become asymmetric in respect to the change in the sign of the electrode overpotential.
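
    Operationally, the transfer coefficient discussed above is the quantity that sets the Tafel slope in the Butler-Volmer description of electrode kinetics; a brief numerical illustration of that relationship is given below (one-electron transfer, illustrative parameters, unrelated to the specific model of the paper):

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.15

def bv_current(eta, i0=1e-3, alpha=0.5):
    """Butler-Volmer current density for a one-electron transfer (A/cm^2)."""
    f = F / (R * T)
    return i0 * (np.exp(alpha * f * eta) - np.exp(-(1.0 - alpha) * f * eta))

print("i at +0.1 V overpotential ~", bv_current(0.1), "A/cm^2")

# Anodic Tafel slope implied by a given transfer coefficient: 2.303*R*T/(alpha*F)
for alpha in (0.3, 0.5, 0.7):
    print("alpha =", alpha, "->", round(2.303 * R * T / (alpha * F) * 1000, 1), "mV/decade")
```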

  7. Measured Copper Toxicity to Cnesterodon decemmaculatus (Pisces: Poeciliidae) and Predicted by Biotic Ligand Model in Pilcomayo River Water: A Step for a Cross-Fish-Species Extrapolation

    PubMed Central

    Casares, María Victoria; de Cabo, Laura I.; Seoane, Rafael S.; Natale, Oscar E.; Castro Ríos, Milagros; Weigandt, Cristian; de Iorio, Alicia F.

    2012-01-01

    In order to determine copper toxicity (LC50) to a local species (Cnesterodon decemmaculatus) in the South American Pilcomayo River water and evaluate a cross-fish-species extrapolation of Biotic Ligand Model, a 96 h acute copper toxicity test was performed. The dissolved copper concentrations tested were 0.05, 0.19, 0.39, 0.61, 0.73, 1.01, and 1.42 mg Cu L−1. The 96 h Cu LC50 calculated was 0.655 mg L−1 (0.823 − 0.488). 96-h Cu LC50 predicted by BLM for Pimephales promelas was 0.722 mg L−1. Analysis of the inter-seasonal variation of the main water quality parameters indicates that a higher protective effect of calcium, magnesium, sodium, sulphate, and chloride is expected during the dry season. The very high load of total suspended solids in this river might be a key factor in determining copper distribution between solid and solution phases. A cross-fish-species extrapolation of copper BLM is valid within the water quality parameters and experimental conditions of this toxicity test. PMID:22523491

  8. Improved performance of CdSe/CdS co-sensitized solar cells adopting efficient CuS counter electrode modified by PbS film using SILAR method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolong; Lin, Yu; Wu, Jihuai; Fang, Biaopeng; Zeng, Jiali

    2018-04-01

    In this paper, CuS film was deposited onto fluorine-doped tin oxide (FTO) substrate using a facile chemical bath deposition method, and then modified by PbS using the simple successive ionic layer adsorption and reaction (SILAR) method with different numbers of cycles. These CuS/PbS films were utilized as counter electrodes (CEs) for CdSe/CdS co-sensitized solar cells. Field-emission scanning electron microscopy equipped with an energy-dispersive X-ray spectrometer was used to characterize the CuS/PbS films. The results show that the CuS/PbS (10 cycles) CE exhibits an improved power conversion efficiency of 5.54% under the illumination of one sun (100 mW cm-2), which is higher than those of the CuS/PbS (0 cycles), CuS/PbS (5 cycles), and CuS/PbS (15 cycles) CEs. This enhancement is mainly attributed to good catalytic activity and lower charge-transfer and series resistances, which have been proved by electrochemical impedance spectroscopy and Tafel polarization measurements.

  9. Solubility of crystalline organic compounds in high and low molecular weight amorphous matrices above and below the glass transition by zero enthalpy extrapolation.

    PubMed

    Amharar, Youness; Curtin, Vincent; Gallagher, Kieran H; Healy, Anne Marie

    2014-09-10

    Pharmaceutical applications which require knowledge of the solubility of a crystalline compound in an amorphous matrix are abundant in the literature. Several methods that allow the determination of such data have been reported, but so far have only been applicable to amorphous polymers above the glass transition of the resulting composites. The current work presents, for the first time, a reliable method for the determination of the solubility of crystalline pharmaceutical compounds in high and low molecular weight amorphous matrices at the glass transition and at room temperature (i.e. below the glass transition temperature), respectively. The solubilities of mannitol and indomethacin in polyvinyl pyrrolidone (PVP) K15 and PVP K25, respectively were measured at different temperatures. Mixtures of undissolved crystalline solute and saturated amorphous phase were obtained by annealing at a given temperature. The solubility at this temperature was then obtained by measuring the melting enthalpy of the crystalline phase, plotting it as a function of composition and extrapolating to zero enthalpy. This new method yielded results in accordance with the predictions reported in the literature. The method was also adapted for the measurement of the solubility of crystalline low molecular weight excipients in amorphous active pharmaceutical ingredients (APIs). The solubility of mannitol, glutaric acid and adipic acid in both indomethacin and sulfadimidine was experimentally determined and successfully compared with the difference between their respective calculated Hildebrand solubility parameters. As expected from the calculations, the dicarboxylic acids exhibited a high solubility in both amorphous indomethacin and sulfadimidine, whereas mannitol was almost insoluble in the same amorphous phases at room temperature. This work constitutes the first report of the methodology for determining an experimentally measured solubility for a low molecular weight crystalline solute
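
    The zero-enthalpy extrapolation described here fits the residual melting enthalpy of the undissolved crystalline phase against composition and takes the x-intercept (zero enthalpy) as the solubility in the amorphous matrix. A minimal sketch with invented DSC values, purely to show the extrapolation step:

```python
import numpy as np

# Hypothetical melting enthalpies (J/g) of the crystalline excess measured after annealing
# mixtures at several drug loadings (weight fraction); values are illustrative only.
loading = np.array([0.30, 0.40, 0.50, 0.60])
delta_H = np.array([2.1, 11.8, 21.5, 31.0])

# Linear fit; the composition at zero enthalpy is the estimated solubility
slope, intercept = np.polyfit(loading, delta_H, 1)
print("estimated solubility ~", round(-intercept / slope, 3), "weight fraction")
```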

  10. Through-Space Charge Interaction Substituent Effects in Molecular Catalysis Leading to the Design of the Most Efficient Catalyst of CO2-to-CO Electrochemical Conversion.

    PubMed

    Azcarate, Iban; Costentin, Cyrille; Robert, Marc; Savéant, Jean-Michel

    2016-12-28

    The starting point of this study of through-space substituent effects on the catalysis of the electrochemical CO2-to-CO conversion by iron(0) tetraphenylporphyrins is the linear free energy correlation between through-structure electronic effects and the iron(I/0) standard potential that we established separately. The introduction of four positively charged trimethylanilinium groups at the para positions of the tetraphenylporphyrin (TPP) phenyls results in an important positive deviation from the correlation and a parallel improvement of the catalytic Tafel plot. The assignment of this catalysis-boosting effect to the Coulombic interaction of these positive charges with the negative charge borne by the initial Fe0-CO2 adduct is confirmed by the negative deviation observed when the four positive charges are replaced by four negative charges borne by sulfonate groups also installed in the para positions of the TPP phenyls. The climax of this strategy of catalysis boosting by means of Coulombic stabilization of the initial Fe0-CO2 adduct is reached when four positively charged trimethylanilinium groups are introduced at the ortho positions of the TPP phenyls. The addition of a large concentration of a weak acid (phenol) helps by cleaving one of the C-O bonds of CO2. The efficiency of the resulting catalyst is unprecedented, as can be judged by catalytic Tafel plot benchmarking with all presently available catalysts of the electrochemical CO2-to-CO conversion. The maximal turnover frequency (TOF) is as high as 10^6 s^-1 and is reached at an overpotential of only 220 mV; the extrapolated TOF at zero overpotential is larger than 300 s^-1. This catalyst leads to a highly selective formation of CO (practically 100%) in spite of the presence of a high concentration of phenol, which could have favored H2 evolution. It is also very stable, showing no significant alteration after more than 80 h of electrolysis.

  11. A Method for Estimating Zero-Flow Pressure and Intracranial Pressure

    PubMed Central

    Marzban, Caren; Illian, Paul Raymond; Morison, David; Moore, Anne; Kliot, Michel; Czosnyka, Marek; Mourad, Pierre

    2012-01-01

    Background It has been hypothesized that critical closing pressure of cerebral circulation, or zero-flow pressure (ZFP), can estimate intracranial pressure (ICP). One ZFP estimation method employs extrapolation of arterial blood pressure versus blood-flow velocity. The aim of this study is to improve ICP predictions. Methods Two revisions are considered: 1) The linear model employed for extrapolation is extended to a nonlinear equation, and 2) the parameters of the model are estimated by an alternative criterion (not least-squares). The method is applied to data on transcranial Doppler measurements of blood-flow velocity, arterial blood pressure, and ICP, from 104 patients suffering from closed traumatic brain injury, sampled across the United States and England. Results The revisions lead to qualitative (e.g., precluding negative ICP) and quantitative improvements in ICP prediction. In going from the original to the revised method, the ±2 standard deviation of error is reduced from 33 to 24 mm Hg; the root-mean-squared error (RMSE) is reduced from 11 to 8.2 mm Hg. The distribution of RMSE is tighter as well; for the revised method the 25th and 75th percentiles are 4.1 and 13.7 mm Hg, respectively, as compared to 5.1 and 18.8 mm Hg for the original method. Conclusions Proposed alterations to a procedure for estimating ZFP lead to more accurate and more precise estimates of ICP, thereby offering improved means of estimating it noninvasively. The quality of the estimates is inadequate for many applications, but further work is proposed which may lead to clinically useful results. PMID:22824923
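
    The baseline ZFP estimate that the paper revises comes from a straight-line regression of arterial blood pressure against blood-flow velocity, with the pressure intercept at zero flow taken as the zero-flow pressure. The sketch below shows only that linear version (synthetic samples, not patient data; the paper's contribution is precisely to replace the linear form and the least-squares criterion):

```python
import numpy as np

# Synthetic beat: arterial blood pressure (mm Hg) versus flow velocity (cm/s)
rng = np.random.default_rng(1)
fv = np.linspace(20, 80, 50)
abp = 12.0 + 1.1 * fv + rng.normal(0.0, 2.0, fv.size)   # illustrative linear relation

# Extrapolate ABP to zero flow velocity
slope, intercept = np.polyfit(fv, abp, 1)
print("estimated zero-flow pressure ~", round(intercept, 1), "mm Hg")
```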

  12. Statistical validation of predictive TRANSP simulations of baseline discharges in preparation for extrapolation to JET D-T

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; Contributors, JET

    2017-06-01

    This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted Te with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good matching of predicted Ti with experimental measurements, allowing for a more accurate prediction of the neutron yields. The impact of input data and assumptions prescribed in the simulations is also investigated in this paper. The statistical validation and the assessment of uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.

  13. ZrB2-HfB2 solid solutions as electrode materials for hydrogen reaction in acidic and basic solutions

    DOE PAGES

    Sitler, Steven J.; Raja, Krishnan S.; Charit, Indrajit

    2016-11-09

    Spark plasma sintered transition metal diborides such as HfB2, ZrB2 and their solid solutions were investigated as electrode materials for electrochemical hydrogen evolution reactions (HER) in 1 M H2SO4 and 1 M NaOH electrolytes. HfB2 and ZrB2 formed complete solid solutions when mixed in 1:1, 1:4, and 4:1 ratios, and they were stable in both electrolytes. The HER kinetics of the diborides were slower in the basic solution than in the acidic solution. The Tafel slopes in 1 M H2SO4 were in the range of 0.15 - 0.18 V/decade except for pure HfB2, which showed a Tafel slope of 0.38 V/decade. In 1 M NaOH the Tafel slopes were in the range of 0.12 - 0.27 V/decade. The composition of HfxZr1-xB2 solid solutions with x = 0.2 - 0.8 influenced the exchange current densities, overpotentials and Tafel slopes of the HER. The EIS data were fitted with a porous-film equivalent circuit model in order to better understand the HER behavior. In addition, modeling calculations, using a density functional theory approach, were carried out to estimate the density of states and band structure of the boride solid solutions.

  15. Efficacy and Safety Extrapolation Analyses for Atomoxetine in Young Children with Attention-Deficit/Hyperactivity Disorder

    PubMed Central

    Kratochvil, Christopher; Ghuman, Jaswinder; Camporeale, Angelo; Lipsius, Sarah; D'Souza, Deborah; Tanaka, Yoko

    2015-01-01

    Abstract Objectives: This extrapolation analysis qualitatively compared the efficacy and safety profile of atomoxetine from Lilly clinical trial data in 6–7-year-old patients with attention-deficit/hyperactivity disorder (ADHD) with that of published literature in 4–5-year-old patients with ADHD (two open-label [4–5-year-old patients] and one placebo-controlled study [5-year-old patients]). Methods: The main efficacy analyses included placebo-controlled Lilly data and the placebo-controlled external study (5-year-old patients) data. The primary efficacy variables used in these studies were the ADHD Rating Scale-IV Parent Version, Investigator Administered (ADHD-RS-IV-Parent:Inv) total score, or the Swanson, Nolan and Pelham (SNAP-IV) scale score. Safety analyses included treatment-emergent adverse events (TEAEs) and vital signs. Descriptive statistics (means, percentages) are presented. Results: Acute atomoxetine treatment improved core ADHD symptoms in both 6–7-year-old patients (n=565) and 5-year-old patients (n=37) (treatment effect: −10.16 and −7.42). In an analysis of placebo-controlled groups, the mean duration of exposure to atomoxetine was ∼7 weeks for 6–7-year-old patients and 9 weeks for 5-year-old patients. Decreased appetite was the most common TEAE in atomoxetine-treated patients. The TEAEs observed at a higher rate in 5-year-old versus 6–7-year-old patients were irritability (36.8% vs. 3.6%) and other mood-related events (6.9% each vs. <3.0%). Blood pressure and pulse increased in both 4–5-year-old patients and 6–7-year-old patients, whereas a weight increase was seen only in the 6–7-year-old patients. Conclusions: Although limited by the small sample size of the external studies, these analyses suggest that in 5-year-old patients with ADHD, atomoxetine may improve ADHD symptoms, but possibly to a lesser extent than in older children, with some adverse events occurring at a higher rate in 5-year-old patients. PMID:25265343

  16. Critical study of higher order numerical methods for solving the boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1978-01-01

    A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.
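
    Richardson extrapolation, used above to raise the second-order Keller box scheme to higher order, combines solutions on two grids to cancel the leading error term; a generic sketch for a quantity whose error is O(h^p):

```python
def richardson(coarse, fine, p, ratio=2.0):
    """Combine A(h) and A(h/ratio) to cancel the leading O(h^p) error term."""
    return fine + (fine - coarse) / (ratio ** p - 1.0)

# Example: second-order (p = 2) approximations on grids h and h/2 (illustrative values)
A_h, A_h2 = 1.020, 1.005
print(richardson(A_h, A_h2, p=2))   # higher-order estimate, here 1.000
```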

  17. Neural Network Model for Survival and Growth of Salmonella enterica Serotype 8,20:-:z6 in Ground Chicken Thigh Meat during Cold Storage: Extrapolation to Other Serotypes.

    PubMed

    Oscar, T P

    2015-10-01

    Mathematical models that predict the behavior of human bacterial pathogens in food are valuable tools for assessing and managing this risk to public health. A study was undertaken to develop a model for predicting the behavior of Salmonella enterica serotype 8,20:-:z6 in chicken meat during cold storage and to determine how well the model would predict the behavior of other serotypes of Salmonella stored under the same conditions. To develop the model, ground chicken thigh meat (0.75 cm(3)) was inoculated with 1.7 log Salmonella 8,20:-:z6 and then stored for 0 to 8 days at -8 to 16°C. An automated miniaturized most-probable-number (MPN) method was developed and used for the enumeration of Salmonella. Commercial software (Excel and the add-in program NeuralTools) was used to develop a multilayer feedforward neural network model with one hidden layer of two nodes. The performance of the model was evaluated using the acceptable prediction zone (APZ) method. The number of Salmonella in ground chicken thigh meat stayed the same (P > 0.05) during 8 days of storage at -8 to 8°C but increased (P < 0.05) during storage at 9°C (+0.6 log) to 16°C (+5.1 log). The proportion of residual values (observed minus predicted values) in an APZ (pAPZ) from -1 log (fail-safe) to 0.5 log (fail-dangerous) was 0.939 for the data (n = 426 log MPN values) used in the development of the model. The model had a pAPZ of 0.944 or 0.954 when it was extrapolated to test data (n = 108 log MPN per serotype) for other serotypes (S. enterica serotype Typhimurium var 5-, Kentucky, Typhimurium, and Thompson) of Salmonella in ground chicken thigh meat stored for 0 to 8 days at -4, 4, 12, or 16°C under the same experimental conditions. A pAPZ of ≥0.7 indicates that a model provides predictions with acceptable bias and accuracy. Thus, the results indicated that the model provided valid predictions of the survival and growth of Salmonella 8,20:-:z6 in ground chicken thigh meat stored for 0 to 8 days at -8 to 16°C.

  18. Electrode kinetics of a water vapor electrolysis cell

    NASA Technical Reports Server (NTRS)

    Jacobs, G.

    1974-01-01

    The anodic electrochemical behavior of the water vapor electrolysis cell was investigated. A theoretical review of various aspects of cell overvoltage is presented with special emphasis on concentration overvoltage and activation overvoltage. Other sources of overvoltage are described. The experimental apparatus controlled and measured anode potential and cell current. Potentials between 1.10 and 2.60 V (vs NHE) and currents between 0.1 and 3000 mA were investigated. Different behavior was observed between the standard cell and the free electrolyte cell. The free electrolyte cell followed typical Tafel behavior (i.e. activation overvoltage) with Tafel slopes of about 0.15 and exchange current densities of 10^-9 A/sq cm, both in good agreement with literature values. The standard cell exhibited this same Tafel behavior at lower current densities but deviated toward lower than expected current densities at higher potentials. This behavior and other results were examined to determine their origin.

  19. In vitro methods for the determination of test chemicals metabolism utilizing fish liver subcellular fractions and hepatocytes

    EPA Science Inventory

    The purpose of this one-day short course is to train students on methods used to measure in vitro metabolism in fish and extrapolate this information to the intact animal. This talk is one of four presentations given by course instructors. The first part of this talk provides a...

  20. Attenuated Total Reflectance Fourier transform infrared spectroscopy for determination of Long Chain Free Fatty Acid concentration in oily wastewater using the double wavenumber extrapolation technique.

    PubMed

    Hao, Zisu; Malyala, Divya; Dean, Lisa; Ducoste, Joel

    2017-04-01

    Long Chain Free Fatty Acids (LCFFAs) from the hydrolysis of fat, oil and grease (FOG) are major components in the formation of insoluble saponified solids known as FOG deposits that accumulate in sewer pipes and lead to sanitary sewer overflows (SSOs). A Double Wavenumber Extrapolative Technique (DWET) was developed to simultaneously measure LCFFA and FOG concentrations in oily wastewater suspensions. This method is based on the analysis of the Attenuated Total Reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) spectrum, in which the absorbance of the carboxyl bond (1710 cm^-1) and the triglyceride bond (1745 cm^-1) were selected as the characteristic wavenumbers for total LCFFAs and FOG, respectively. A series of experiments using pure organic samples (Oleic acid/Palmitic acid in Canola oil) were performed that showed a linear relationship between the absorption at these two wavenumbers and the total LCFFA. In addition, the DWET method was validated using GC analyses, which displayed a high degree of agreement between the two methods for simulated oily wastewater suspensions (1-35% Oleic acid in Canola oil/Peanut oil). The average determination error of the DWET approach was ~5% when the LCFFA fraction was above 10 wt%, indicating that the DWET could be applied as an experimental method for the determination of both LCFFA and FOG concentrations in oily wastewater suspensions. Potential applications of this DWET approach include: (1) monitoring the LCFFA and FOG concentrations in grease interceptor (GI) effluents for regulatory compliance; (2) evaluating alternative LCFFA/FOG removal technologies; and (3) quantifying potential FOG deposit high accumulation zones in the sewer collection system. Published by Elsevier B.V.

  1. [Births and children after assisted reproductive technologies. A retrospective analysis with special regard to multiple pregnancies at the Department of Obstetrics and Gynecology, Paracelsus Medical University Salzburg (2000-2009) with an extrapolation for Austria].

    PubMed

    Maier, B; Reitsamer-Tontsch, S; Weisser, C; Schreiner, B

    2011-10-01

    Austria still lacks a baby-take-home rate after assisted reproductive technologies (ART) and therefore an adequate quality management of ART. This paper extrapolates data about births/infants after ART at the University Clinic of Obstetrics and Gynaecology (PMU/SALK) in Salzburg for Austria, especially in regard to multiple births/infants collected between 2000 and 2009. On average 2 271 infants were born per year during the last 10 years. Among them, 76 infants (3.34% of all children) were born after ART. Of all children conceived by ART and born (759) at the University Clinic of Obstetrics and Gynaecology 368 are multiples. This is 48.5% of all children born after ART. 31.6% of all multiples born were conceived through ART. The extrapolation of data concerning multiples results in 1 255 multiples/year after ART for Austria. Without a baby-take-home rate, serious quality management of reproductive medicine is impossible. Online registration of deliveries and infants is the only adequate approach. The data of this statistical extrapolation from a single perinatal center not only provide a survey about the situation in Austria, but also support the claim of a quantitative (numbers) as well as qualitative (condition of infants) baby-take-home rate after ART. © Georg Thieme Verlag KG Stuttgart · New York.

  2. A Simple Method for Assessing Upper-Limb Force-Velocity Profile in Bench Press.

    PubMed

    Rahmani, Abderrahmane; Samozino, Pierre; Morin, Jean-Benoit; Morel, Baptiste

    2018-02-01

    To analyze the reliability and validity of a field computation method based on easy-to-measure data to assess the mean force (F) and velocity (v) produced during a ballistic bench-press movement and to verify that the force-velocity profile (F-v) obtained with multiple loaded trials is accurately described. Twelve participants performed ballistic bench presses against various lifted masses from 30% to 70% of their body mass. For each trial, F and v were determined from an accelerometer (sampling rate 500 Hz; reference method) and from a simple computation method based on upper-limb mass, barbell flight height, and push-off distance. These F and v data were used to establish the F-v relationship for each individual and method. A strong to almost perfect reliability was observed between the 2 trials (ICC > .90 for F and .80 for v, CV% < 10%), whatever the considered method. The mechanical variables (F, v) measured with the 2 methods and all the variables extrapolated from the F-v relationships were strongly correlated (r2 > .80, P < .001). The practical differences between the methods for the extrapolated mechanical parameters were all <5%, indicating very probably no differences. The findings suggest that the simple computation method used here provides valid and reliable information on force and velocity produced during ballistic bench press, in line with that observed in laboratory conditions. This simple method is thus a practical tool, requiring only 3 simple parameters (upper-limb mass, barbell flight height, and push-off distance).
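
    The paper's exact equations are not reproduced in this record, but the class of "simple method" it describes infers velocity from the barbell flight height and force from the push-off distance using elementary ballistics. A hedged sketch of that kind of computation (generic projectile and work-energy relations, not necessarily the authors' exact expressions):

```python
import math

def simple_force_velocity(moving_mass, flight_height, push_off_distance, g=9.81):
    """Estimate mean force (N) and mean velocity (m/s) for a ballistic push-off.

    Assumes projectile flight after release and uniform acceleration over the
    push-off distance; these are generic mechanics assumptions.
    """
    v_release = math.sqrt(2.0 * g * flight_height)                 # from flight height
    v_mean = v_release / 2.0                                       # uniform-acceleration mean
    f_mean = moving_mass * (g + v_release ** 2 / (2.0 * push_off_distance))
    return f_mean, v_mean

# Illustrative input: 35 kg moving mass, 0.15 m flight height, 0.45 m push-off distance
print(simple_force_velocity(35.0, 0.15, 0.45))
```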

  3. Extrapolation of radiation thermometry scales for determining the transition temperature of metal-carbon points. Experiments with the Co-C

    NASA Astrophysics Data System (ADS)

    Battuello, M.; Girard, F.; Florio, M.

    2009-02-01

    Four independent radiation temperature scales approximating the ITS-90 at 900 nm, 950 nm and 1.6 µm have been realized from the indium point (429.7485 K) to the copper point (1357.77 K) which were used to derive by extrapolation the transition temperature T90(Co-C) of the cobalt-carbon eutectic fixed point. An INRIM cell was investigated and an average value T90(Co-C) = 1597.20 K was found with the four values lying within 0.25 K. Alternatively, thermodynamic approximated scales were realized by assigning to the fixed points the best presently available thermodynamic values and deriving T(Co-C). An average value of 1597.27 K was found (four values lying within 0.25 K). The standard uncertainties associated with T90(Co-C) and T(Co-C) were 0.16 K and 0.17 K, respectively. INRIM determinations are compatible with recent thermodynamic determinations on three different cells (values lying between 1597.11 K and 1597.25 K) and with the result of a comparison on the same cell by an absolute radiation thermometer and an irradiance measurement with filter radiometers which give values of 1597.11 K and 1597.43 K, respectively (Anhalt et al 2006 Metrologia 43 S78-83). The INRIM approach allows the determination of both ITS-90 and thermodynamic temperature of a fixed point in a simple way and can provide valuable support to absolute radiometric methods in defining the transition temperature of new high-temperature fixed points.

  4. Comparison of soil sampling and analytical methods for asbestos at the Sumas Mountain Asbestos Site—Working towards a toolbox for better assessment

    EPA Science Inventory

    Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. En...

  5. SU-E-T-91: Correction Method to Determine Surface Dose for OSL Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, T; Higgins, P

    Purpose: OSL detectors are commonly used in the clinic for verification of doses beyond dmax, owing to their numerous advantages, such as linear response and negligible energy, angle and temperature dependence in the clinical range. However, due to the bulky shielding envelope, this type of detector fails to measure skin dose, which is an important assessment of a patient's ability to finish the treatment on time and of the possibility of acute side effects. This study aims to optimize the methodology for determination of skin dose for conventional accelerators and a flattening-filter-free Tomotherapy unit. Methods: Measurements were done for x-ray beams: 6 MV (Varian Clinac 2300, 10×10 cm{sup 2} open field, SSD = 100 cm) and 5.5 MV (Tomotherapy, 15×40 cm{sup 2} field, SAD = 85 cm). The detectors were placed at the surface of the solid water phantom and at the reference depth (dref = 1.7 cm for the Varian 2300, dref = 1.0 cm for Tomotherapy). The OSL measurements were related to measurements with externally exposed OSLs, and were further corrected to surface dose using an extrapolation method indexed to baseline Attix ion chamber measurements. A consistent use of the extrapolation method involved: 1) irradiation of three OSLs stacked on top of each other on the surface of the phantom; 2) measurement of the relative dose value for each layer; and 3) extrapolation of these values to zero thickness. Results: OSL measurements overestimated surface doses by a factor of 2.31 for the Varian 2300 and 2.65 for Tomotherapy. The relationships SD{sup 2300} = 0.68 × M{sup 2300} - 12.7 and SD{sup Tomo} = 0.73 × M{sup Tomo} - 13.1 were found to correct the single-OSL measurements to surface doses in agreement with the Attix measurements to within 0.1% for both machines. Conclusion: This work provides simple empirical relationships for surface dose measurements using single OSL detectors.
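
    The extrapolation step itemized above amounts to fitting the stacked-detector readings against their effective depth and reading off the zero-thickness intercept as the surface dose. A minimal sketch with invented numbers follows (the effective layer depths and readings are assumptions for illustration, not the study's measurements):

```python
import numpy as np

# Relative readings of three OSLs stacked on the phantom surface (illustrative)
layer_depth = np.array([1, 2, 3]) * 0.04    # assumed effective depth of each layer, cm
reading = np.array([1.62, 1.95, 2.21])      # relative dose reported by each layer

# Linear extrapolation of reading versus depth back to zero thickness
slope, intercept = np.polyfit(layer_depth, reading, 1)
print("zero-thickness (surface) estimate ~", round(intercept, 2))
```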

  6. Comparison of the effectiveness of some common animal data scaling techniques in estimating human radiation dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sparks, R.B.; Aydogan, B.

    In the development of new radiopharmaceuticals, animal studies are typically performed to get a first approximation of the expected radiation dose in humans. This study evaluates the performance of some commonly used data extrapolation techniques to predict residence times in humans using data collected from animals. Residence times were calculated using animal and human data, and distributions of ratios of the animal results to human results were constructed for each extrapolation method. Four methods using animal data to predict human residence times were examined: (1) using no extrapolation, (2) using relative organ mass extrapolation, (3) using physiological time extrapolation, and (4) using a combination of the mass and time methods. The residence time ratios were found to be log normally distributed for the nonextrapolated and extrapolated data sets. The use of relative organ mass extrapolation yielded no statistically significant change in the geometric mean or variance of the residence time ratios as compared to using no extrapolation. Physiologic time extrapolation yielded a statistically significant improvement (p < 0.01, paired t test) in the geometric mean of the residence time ratio from 0.5 to 0.8. Combining mass and time methods did not significantly improve the results of using time extrapolation alone. 63 refs., 4 figs., 3 tabs.

  7. Accurate potential energy surface for the 1(2)A' state of NH(2): scaling of external correlation versus extrapolation to the complete basis set limit.

    PubMed

    Li, Y Q; Varandas, A J C

    2010-09-16

    An accurate single-sheeted double many-body expansion potential energy surface is reported for the title system which is suitable for dynamics and kinetics studies of the reactions N(2D) + H2(X1Sigmag+) → NH(a1Delta) + H(2S) and their isotopomeric variants. It is obtained by fitting ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pVQZ basis set, after slightly correcting semiempirically the dynamical correlation using the double many-body expansion-scaled external correlation method. The function so obtained is compared in detail with a potential energy surface of the same family obtained by extrapolating the calculated raw energies to the complete basis set limit. The topographical features of the novel global potential energy surface are examined in detail and found to be in general good agreement with those calculated directly from the raw ab initio energies, as well as previous calculations available in the literature. The novel function has been built so as to become degenerate at linear geometries with the ground-state potential energy surface of A'' symmetry reported by our group, where both form a Renner-Teller pair.

  8. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
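
    For smooth periodic integrands the plain trapezoidal rule over one full period already converges extremely rapidly, which is the starting point for the corrected Euler-Maclaurin rules discussed here; a quick generic illustration (test integrand chosen arbitrarily):

```python
import numpy as np

def periodic_trapezoid(f, a, b, n):
    """Trapezoidal rule over one full period [a, b) with n equispaced nodes."""
    x = a + (b - a) * np.arange(n) / n
    return (b - a) * np.mean(f(x))

# Smooth periodic test integrand; the exact integral over [0, 2*pi) is 2*pi*I0(1) ~ 7.95493
f = lambda x: np.exp(np.cos(x))
for n in (4, 8, 16):
    print(n, periodic_trapezoid(f, 0.0, 2.0 * np.pi, n))
```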

  9. Developing a Theory of Digitally-Enabled Trial-Based Problem Solving through Simulation Methods: The Case of Direct-Response Marketing

    ERIC Educational Resources Information Center

    Clark, Joseph Warren

    2012-01-01

    In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…

  10. An evaluation of rise time characterization and prediction methods

    NASA Technical Reports Server (NTRS)

    Robinson, Leick D.

    1994-01-01

    One common method of extrapolating sonic boom waveforms from aircraft to ground is to calculate the nonlinear distortion, and then add a rise time to each shock by a simple empirical rule. One common rule is the '3 over P' rule which calculates the rise time in milliseconds as three divided by the shock amplitude in psf. This rule was compared with the results of ZEPHYRUS, a comprehensive algorithm which calculates sonic boom propagation and extrapolation with the combined effects of nonlinearity, attenuation, dispersion, geometric spreading, and refraction in a stratified atmosphere. It is shown there that the simple empirical rule considerably overestimates the rise time estimate. In addition, the empirical rule does not account for variations in the rise time due to humidity variation or propagation history. It is also demonstrated that the rise time is only an approximate indicator of perceived loudness. Three waveforms with identical characteristics (shock placement, amplitude, and rise time), but with different shock shapes, are shown to give different calculated loudness. This paper is based in part on work performed at the Applied Research Laboratories, the University of Texas at Austin, and supported by NASA Langley.
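
    The empirical rule being tested is simply rise time (ms) = 3 divided by shock overpressure (psf); it is reproduced below only because the abstract evaluates it (the ZEPHYRUS propagation algorithm itself is far more involved and is not sketched here):

```python
def rise_time_3_over_p(shock_overpressure_psf):
    """'3 over P' rule: rise time in milliseconds for a shock amplitude given in psf."""
    return 3.0 / shock_overpressure_psf

for p in (0.5, 1.0, 2.0):
    print(p, "psf ->", rise_time_3_over_p(p), "ms")
```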

  11. Does the Brain Extrapolate the Position of a Transient Moving Target?

    PubMed

    Quinet, Julie; Goffart, Laurent

    2015-08-26

    , we provide results that are critical for investigating and understanding the neural basis of motion extrapolation and prediction. Copyright © 2015 the authors.

  12. Development of MCAERO wing design panel method with interactive graphics module

    NASA Technical Reports Server (NTRS)

    Hawk, J. D.; Bristow, D. R.

    1984-01-01

    A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
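
    The design cycle summarized above linearizes the potential about a baseline solution, so each iteration is essentially a matrix-vector update followed by numerical differentiation and Bernoulli's equation. A schematic sketch with made-up arrays (the sensitivity matrix and spacing are placeholders for illustration, not MCAERO/DMCAERO data structures):

```python
import numpy as np

phi_base = np.array([1.00, 1.02, 1.05, 1.04])   # baseline potential at control points
J = np.array([[0.10, 0.02],
              [0.08, 0.03],
              [0.05, 0.06],
              [0.02, 0.09]])                    # d(potential)/d(geometry parameter), computed once
dg = np.array([0.01, -0.02])                    # geometry perturbation for this cycle

phi_new = phi_base + J @ dg                     # linear extrapolation from the baseline solution

ds = 0.1                                        # control-point spacing along the surface
v = np.gradient(phi_new, ds)                    # numerical differentiation: surface velocity
cp = 1.0 - (v / 1.0) ** 2                       # Bernoulli: pressure coefficient with V_inf = 1
print(cp)
```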

  13. Extrapolating the Trends of Test Drop Data with Opening Shock Factor Calculations: the Case of the Orion Main and Drogue Parachutes Inflating to 1st Reefed Stage

    NASA Technical Reports Server (NTRS)

    Potvin, Jean; Ray, Eric

    2017-01-01

    We describe a new calculation of the opening shock factor C (sub k) characterizing the inflation performance of NASA's Orion spacecraft main and drogue parachutes opening under a reefing constraint (1st stage reefing), as currently tested in the Capsule Parachute Assembly System (CPAS) program. This calculation is based on an application of the Momentum-Impulse Theorem at low mass ratio (R (sub m) is less than 10 (sup -1)) and on an earlier analysis of the opening performance of drogues decelerating point masses and inflating along horizontal trajectories. Herein we extend the reach of the Theorem to include the effects of payload drag and gravitational impulse during near-vertical motion - both important pre-requisites for CPAS parachute analysis. The result is a family of C (sub k) versus R (sub m) curves which can be used for extrapolating beyond the drop-tested envelope. The paper proves this claim in the case of the CPAS Mains and Drogues opening while trailing either a Parachute Compartment Drop Test Vehicle or a Parachute Test Vehicle (an Orion capsule boiler plate). It is seen that in all cases the values of the opening shock factor can be extrapolated over a range in mass ratio that is at least twice that of the test drop data.

  14. Addressing Early Life Sensitivity Using Physiologically Based Pharmacokinetic Modeling and In Vitro to In Vivo Extrapolation

    PubMed Central

    Yoon, Miyoung; Clewell, Harvey J.

    2016-01-01

    Physiologically based pharmacokinetic (PBPK) modeling can provide an effective way to utilize in vitro and in silico based information in modern risk assessment for children and other potentially sensitive populations. In this review, we describe the process of in vitro to in vivo extrapolation (IVIVE) to develop PBPK models for a chemical in different ages in order to predict the target tissue exposure at the age of concern in humans. We present our on-going studies on pyrethroids as a proof of concept to guide the readers through the IVIVE steps using the metabolism data collected either from age-specific liver donors or expressed enzymes in conjunction with enzyme ontogeny information to provide age-appropriate metabolism parameters in the PBPK model in the rat and human, respectively. The approach we present here is readily applicable to not just to other pyrethroids, but also to other environmental chemicals and drugs. Establishment of an in vitro and in silico-based evaluation strategy in conjunction with relevant exposure information in humans is of great importance in risk assessment for potentially vulnerable populations like early ages where the necessary information for decision making is limited. PMID:26977255

  15. Addressing Early Life Sensitivity Using Physiologically Based Pharmacokinetic Modeling and In Vitro to In Vivo Extrapolation.

    PubMed

    Yoon, Miyoung; Clewell, Harvey J

    2016-01-01

    Physiologically based pharmacokinetic (PBPK) modeling can provide an effective way to utilize in vitro and in silico based information in modern risk assessment for children and other potentially sensitive populations. In this review, we describe the process of in vitro to in vivo extrapolation (IVIVE) to develop PBPK models for a chemical in different ages in order to predict the target tissue exposure at the age of concern in humans. We present our on-going studies on pyrethroids as a proof of concept to guide the readers through the IVIVE steps using the metabolism data collected either from age-specific liver donors or expressed enzymes in conjunction with enzyme ontogeny information to provide age-appropriate metabolism parameters in the PBPK model in the rat and human, respectively. The approach we present here is readily applicable to not just to other pyrethroids, but also to other environmental chemicals and drugs. Establishment of an in vitro and in silico-based evaluation strategy in conjunction with relevant exposure information in humans is of great importance in risk assessment for potentially vulnerable populations like early ages where the necessary information for decision making is limited.

  16. Computationally efficient finite-difference modal method for the solution of Maxwell's equations.

    PubMed

    Semenikhin, Igor; Zanuccoli, Mauro

    2013-12-01

    In this work, a new implementation of the finite-difference (FD) modal method (FDMM) based on an iterative approach to calculate the eigenvalues and corresponding eigenfunctions of the Helmholtz equation is presented. Two relevant enhancements that significantly increase the speed and accuracy of the method are introduced. First of all, the solution of the complete eigenvalue problem is avoided in favor of finding only the meaningful part of eigenmodes by using iterative methods. Second, a multigrid algorithm and Richardson extrapolation are implemented. Simultaneous use of these techniques leads to an enhancement in terms of accuracy, which allows a simple method such as the FDMM with a typical three-point difference scheme to be significantly competitive with an analytical modal method.
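
    As an illustration of the Richardson-extrapolation ingredient mentioned above (not the authors' FDMM code), a minimal sketch: combine two estimates of a grid-converging quantity obtained with spacings h and h/2, assuming second-order convergence as for a typical three-point difference scheme.

      def richardson(f_h, f_h2, order=2):
          """Richardson extrapolation of a grid-converging quantity.

          f_h  : estimate computed with grid spacing h
          f_h2 : estimate computed with grid spacing h/2
          order: assumed convergence order of the discretization
          """
          return f_h2 + (f_h2 - f_h) / (2 ** order - 1)

      # Toy check: two second-order approximations of the exact value 1.0.
      exact = 1.0
      f_h, f_h2 = exact + 4e-2, exact + 1e-2     # errors ~ h^2 and (h/2)^2
      print(richardson(f_h, f_h2))               # much closer to 1.0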

  17. Alternative Method to Simulate a Sub-idle Engine Operation in Order to Synthesize Its Control System

    NASA Astrophysics Data System (ADS)

    Sukhovii, Sergii I.; Sirenko, Feliks F.; Yepifanov, Sergiy V.; Loboda, Igor

    2016-09-01

    The steady-state and transient engine performances in control systems are usually evaluated by applying thermodynamic engine models. Most models operate between the idle and maximum power points; only recently have they begun to address the sub-idle operating range. The lack of information about the component maps at sub-idle modes presents a challenging problem. A common method to cope with the problem is to extrapolate the component performances to the sub-idle range. Precise extrapolation is also a challenge. As a rule, many scientists address only particular aspects of the problem, such as light-up of the combustion chamber or turbine operation when the combustion chamber is turned off. However, there are no reports of a model that considers all of these aspects and simulates engine starting. The proposed paper addresses a new method to simulate starting. The method replaces the non-linear thermodynamic model with a linear dynamic model, which is supplemented with a simplified static model. The latter is a set of direct relations between parameters that are used in the control algorithms in place of the commonly used component performances. Specifically, this model consists of simplified relations between the gas path parameters and the corrected rotational speed.

  18. MTS-MD of Biomolecules Steered with 3D-RISM-KH Mean Solvation Forces Accelerated with Generalized Solvation Force Extrapolation.

    PubMed

    Omelyan, Igor; Kovalenko, Andriy

    2015-04-14

    We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD

  19. Extrapolation of systemic bioavailability assessing skin absorption and epidermal and hepatic metabolism of aromatic amine hair dyes in vitro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manwaring, John, E-mail: manwaring.jd@pg.com; Rothe, Helga; Obringer, Cindy

    Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as this offers an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and the epidermal and systemic metabolic conversion of the parent compound were assessed in human skin explants and in human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin, the following toxicokinetically relevant parameters were applied: a) Michaelis–Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In a next step, in vitro hepatocyte K{sub m} and V{sub max} values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and the fraction of compound unbound in the blood to give the hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using the internal dose and hepatic clearance, and C{sub max} was extrapolated (conservative overestimation) using the internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. - Highlights: • An entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes • Skin penetration and epidermal conversion assessed in
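
    A hedged numerical sketch of the scaling chain described above, using the standard well-stirred liver model (my assumption; the authors may have used a different clearance model) and entirely hypothetical parameter values:

      # Illustrative IVIVE scaling; parameter values are placeholders, not the paper's.
      vmax = 200.0                # pmol/min per 10^6 hepatocytes (hypothetical)
      km = 20.0                   # uM (hypothetical)
      hepatocellularity = 120e6   # cells per g liver (typical literature value)
      liver_mass = 1800.0         # g, adult human liver
      q_h = 90.0                  # L/h, hepatic blood flow
      fu_b = 0.3                  # fraction unbound in blood (hypothetical)

      # Scale intrinsic clearance (Vmax/Km) from per 10^6 cells to the whole liver.
      cells_total = hepatocellularity * liver_mass
      clint_ul_min = (vmax / km) * (cells_total / 1e6)      # uL/min
      clint_l_per_h = clint_ul_min * 60.0 / 1e6             # convert uL/min -> L/h

      # Well-stirred model: hepatic clearance from blood flow, fu and scaled CLint.
      cl_h = q_h * fu_b * clint_l_per_h / (q_h + fu_b * clint_l_per_h)

      internal_dose = 0.05        # mg reaching the circulation unchanged (hypothetical)
      auc = internal_dose / cl_h  # mg*h/L
      print(f"CLint = {clint_l_per_h:.1f} L/h, CLh = {cl_h:.1f} L/h, AUC = {auc:.4f} mg*h/L")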

  20. In vivo doses of butadiene epoxides as estimated from in vitro enzyme kinetics by using cob(I)alamin and measured hemoglobin adducts: An inter-species extrapolation approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motwani, Hitesh V., E-mail: hitesh.motwani@mmk.su.se; Törnqvist, Margareta

    2014-12-15

    1,3-Butadiene (BD) is a rodent and human carcinogen. In the cancer tests, mice have been much more susceptible than rats with regard to BD-induced carcinogenicity. The species differences are dependent on metabolic formation/disappearance of the genotoxic BD epoxy-metabolites, which leads to variations in the respective in vivo doses, i.e. the “area under the concentration-time curve” (AUC). Differences in the AUC of the most genotoxic BD epoxy-metabolite, diepoxybutane (DEB), are considered important with regard to cancer susceptibility. The present work describes: the application of cob(I)alamin for accurate measurements of in vitro enzyme kinetic parameters associated with BD epoxy-metabolites in human, mouse and rat; the use of published data on hemoglobin (Hb) adduct levels of BD epoxides from BD exposure studies on the three species to calculate the corresponding AUCs in blood; and a parallelogram approach for extrapolation of the AUC of DEB based on the in vitro metabolism studies and adduct data from in vivo measurements. The predicted value of the AUC of DEB for humans from the parallelogram approach was 0.078 nM · h for 1 ppm · h of BD exposure, compared to 0.023 nM · h/ppm · h as calculated from Hb adduct levels observed in occupational exposure. The corresponding values in nM · h/ppm · h were, for mice, 41 vs. 38 and, for rats, 1.26 vs. 1.37 from the parallelogram approach vs. experimental exposures, respectively, showing good agreement. This quantitative inter-species extrapolation approach will be further explored for the clarification of metabolic rates/pharmacokinetics and the AUC of other genotoxic electrophilic compounds/metabolites, and has the potential to reduce and refine animal experiments. - Highlights: • In vitro metabolism to in vivo dose extrapolation of butadiene metabolites was proposed. • A parallelogram approach was introduced to estimate dose (AUC) in humans and rodents. • AUC of diepoxybutane predicted in humans was 0.078 nM h/ppm h
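
    A hedged sketch of the parallelogram logic only: a measured rodent in vivo AUC is scaled by the ratio of species-specific in vitro metrics. The in vitro ratio below is purely illustrative (the paper's actual kinetic values are not reproduced here); only the rat in vivo AUC of ~1.3 nM·h per ppm·h comes from the abstract.

      def parallelogram_auc(auc_animal_in_vivo, metric_animal_in_vitro, metric_human_in_vitro):
          """Parallelogram extrapolation: scale a measured in vivo AUC by the ratio of
          in vitro metrics (e.g. epoxide formation/clearance rates) between species."""
          return auc_animal_in_vivo * metric_human_in_vitro / metric_animal_in_vitro

      # Hypothetical human/rat in vitro ratio of 0.06 applied to the rat in vivo AUC:
      print(parallelogram_auc(1.3, 1.0, 0.06))   # ~0.08 nM*h per ppm*h, same order as the paper's 0.078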

  1. Density functional Theory Based Generalized Effective Fragment Potential Method (Postprint)

    DTIC Science & Technology

    2014-07-01

    is acceptable for other applications) leads to induced dipole moments within 10−6 to 10−7 au of the precise values. Thus, the applied field of 10−4 ... noncovalent interactions. The water-benzene clusters [17] and WATER27 [11] reference values were also obtained at the CCSD(T)/CBS level, except for the clusters ... with n = 20, 42, where MP2/CBS was used. The n-alkane dimers [18] benchmark values were CCSD(T)/CBS for ethane to butane and a linear extrapolation method

  2. Electrochemical water splitting using nano-zeolite Y supported tungsten oxide electrocatalysts

    NASA Astrophysics Data System (ADS)

    Anis, Shaheen Fatima; Hashaikeh, Raed

    2018-02-01

    Zeolites are often used as supports for metals and metal oxides because of their well-defined microporous structure and high surface area. In this study, nano-zeolite Y (50-150 nm range) and micro-zeolite Y (500-800 nm range) were loaded with WO3 by impregnating the zeolite support with ammonium metatungstate and thermally decomposing the salt thereafter. Two different loadings of WO3 were studied, 3 wt.% and 5 wt.% with respect to the overall catalyst. The prepared catalysts were characterized for their morphology, structure, and surface areas through scanning electron microscopy (SEM), XRD, and BET. They were further compared for their electrocatalytic activity for the hydrogen evolution reaction (HER) in 0.5 M H2SO4. On comparing the bare micro-zeolite particles with the nano-form, the nano-zeolite Y showed higher currents with comparable overpotentials and a lower Tafel slope of 62.36 mV/dec. WO3 loading brought about a change in the electrocatalytic properties of the catalyst. The overpotentials and Tafel slopes were observed to decrease with zeolite-3 wt.% WO3. The smallest overpotential of 60 mV and Tafel slope of 31.9 mV/dec were registered for the nano-zeolite with 3 wt.% WO3, while the micro-zeolite gave an overpotential of 370 mV and a Tafel slope of 98.1 mV/dec. It was concluded that even with the same metal oxide loading, the nano-zeolite showed superior performance, which is attributed to its size and hence easier escape of hydrogen bubbles from the catalyst.
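
    A short sketch of how a Tafel slope like the 62.36 or 31.9 mV/dec values above is typically extracted from polarization data: a generic least-squares fit of overpotential against log current density over the linear Tafel region (not the authors' exact procedure; the data below are synthetic).

      import numpy as np

      def tafel_slope(eta_v, j_a_cm2):
          """Fit eta = a + b*log10(|j|) over the supplied (linear) Tafel region.

          Returns the slope b in mV per decade and the exchange current density
          implied by the intercept (j0 = 10**(-a/b), same units as j).
          """
          log_j = np.log10(np.abs(j_a_cm2))
          b, a = np.polyfit(log_j, np.asarray(eta_v) * 1e3, 1)   # slope in mV/dec
          j0 = 10 ** (-a / b)
          return b, j0

      # Synthetic data with a 60 mV/dec slope and j0 = 1e-5 A/cm^2 for illustration.
      j = np.logspace(-4, -2, 20)
      eta = 0.060 * np.log10(j / 1e-5)
      print(tafel_slope(eta, j))   # ~ (60.0, 1e-05)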

  3. Interplay of oxygen-evolution kinetics and photovoltaic power curves on the construction of artificial leaves

    PubMed Central

    Surendranath, Yogesh; Bediako, D. Kwabena; Nocera, Daniel G.

    2012-01-01

    An artificial leaf can perform direct solar-to-fuels conversion. The construction of an efficient artificial leaf or other photovoltaic (PV)-photoelectrochemical device requires that the power curve of the PV material and load curve of water splitting, composed of the catalyst Tafel behavior and cell resistances, be well-matched near the thermodynamic potential for water splitting. For such a condition, we show here that the current density-voltage characteristic of the catalyst is a key determinant of the solar-to-fuels efficiency (SFE). Oxidic Co and Ni borate (Co-Bi and Ni-Bi) thin films electrodeposited from solution yield oxygen-evolving catalysts with Tafel slopes of 52 mV/decade and 30 mV/decade, respectively. The consequence of the disparate Tafel behavior on the SFE is modeled using the idealized behavior of a triple-junction Si PV cell. For PV cells exhibiting similar solar power-conversion efficiencies, those displaying low open circuit voltages are better matched to catalysts with low Tafel slopes and high exchange current densities. In contrast, PV cells possessing high open circuit voltages are largely insensitive to the catalyst’s current density-voltage characteristics but sacrifice overall SFE because of less efficient utilization of the solar spectrum. The analysis presented herein highlights the importance of matching the electrochemical load of water-splitting to the onset of maximum current of the PV component, drawing a clear link between the kinetic profile of the water-splitting catalyst and the SFE efficiency of devices such as the artificial leaf. PMID:22689962
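
    A minimal sketch of the coupling argument above: intersect an idealized PV current-voltage curve with a water-splitting load curve built from a Tafel law plus a series resistance to find the operating point (which sets the SFE trend). All parameter values below are illustrative, not those of the paper.

      import numpy as np

      # Idealized single-diode PV curve (illustrative parameters).
      def pv_current(v, j_sc=10e-3, v_oc=1.8, n_vt=0.15):
          return j_sc * (1.0 - np.exp((v - v_oc) / n_vt))

      # Electrolysis load: thermodynamic potential + Tafel overpotential + ohmic drop.
      def load_voltage(j, e_thermo=1.23, tafel_mv_dec=52.0, j0=1e-6, r_ohm=5.0):
          return e_thermo + (tafel_mv_dec / 1e3) * np.log10(j / j0) + r_ohm * j

      # Operating point: the voltage where the PV supply meets the electrolysis demand.
      v_grid = np.linspace(1.3, 1.79, 2000)
      j_pv = pv_current(v_grid)
      v_load = load_voltage(j_pv)
      i_op = np.argmin(np.abs(v_load - v_grid))
      print(f"operating point: {v_grid[i_op]:.3f} V, {j_pv[i_op]*1e3:.2f} mA/cm^2")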

  4. New flowmetric measurement methods of power dissipated by an ultrasonic generator in an aqueous medium.

    PubMed

    Mancier, Valérie; Leclercq, Didier

    2007-02-01

    Two new methods for determining the power dissipated in an aqueous medium by an ultrasound generator were developed. They are based on the use of a heat flow sensor inserted between a tank and a heat sink, which allows the power coming through the sensor to be measured directly. To be exploitable, the first method requires waiting for stationary flow. The second, on the other hand, extrapolated from the first, makes it possible to determine the dissipated power in only five minutes. Finally, the results obtained with the flowmetric method are compared to the classical calorimetric ones.

  5. [Development and prospect on skeletal age evaluation methods of X-ray film].

    PubMed

    Wang, Ya-hui; Zhu, Guang-you; Qiao, Ke; Bian, Shi-zhong; Fan, Li-hua; Cheng, Yi-bin; Ying, Chong-liang; Shen, Yan

    2007-10-01

    The traditional methods of skeletal age estimation mainly include the numeration, atlas, and counting-scores methods. In recent years, other new methods have been proposed by several scholars. Utilizing the imaging characteristics of X-ray films to extrapolate skeletal age is a key means used by present-day forensic medicine workers in evaluating skeletal age. However, some variation exists when we present the conclusion on skeletal age as "evidence" directly to the Justice Trial Authority. In order to enhance the accuracy of skeletal age determination, further investigation of appropriate methodology should be undertaken. After a collective study of the pertinent domestic and international literature, we present this review of the research on, and advancement of, skeletal age evaluation methods based on X-ray films.

  6. Human urine and plasma concentrations of bisphenol A extrapolated from pharmacokinetics established in in vivo experiments with chimeric mice with humanized liver and semi-physiological pharmacokinetic modeling.

    PubMed

    Miyaguchi, Takamori; Suemizu, Hiroshi; Shimizu, Makiko; Shida, Satomi; Nishiyama, Sayako; Takano, Ryohji; Murayama, Norie; Yamazaki, Hiroshi

    2015-06-01

    The aim of this study was to extrapolate to humans the pharmacokinetics of the estrogen analog bisphenol A determined in chimeric mice transplanted with human hepatocytes. Higher plasma concentrations and urinary excretions of bisphenol A glucuronide (a primary metabolite of bisphenol A) were observed in chimeric mice than in control mice after oral administration, presumably because of enterohepatic circulation of bisphenol A glucuronide in control mice. Bisphenol A glucuronidation was faster in mouse liver microsomes than in human liver microsomes. These findings suggest a predominantly urinary excretion route of bisphenol A glucuronide in chimeric mice with humanized liver. Reported human plasma and urine data for bisphenol A glucuronide after a single oral administration of 0.1 mg/kg bisphenol A were reasonably estimated using the current semi-physiological pharmacokinetic model extrapolated from humanized mice data using allometric scaling. The reported geometric mean urinary bisphenol A concentration in the U.S. population of 2.64 μg/L underwent reverse dosimetry modeling with the current human semi-physiological pharmacokinetic model. This yielded an estimated exposure of 0.024 μg/kg/day, which was less than the daily tolerable intake of bisphenol A (50 μg/kg/day), implying little risk to humans. Semi-physiological pharmacokinetic modeling will likely prove useful for determining the species-dependent toxicological risk of bisphenol A. Copyright © 2015 Elsevier Inc. All rights reserved.
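
    A hedged back-of-the-envelope version of the reverse-dosimetry step, replacing the semi-physiological PK model with the simple steady-state mass balance intake = (urinary concentration × daily urine volume) / (fraction excreted in urine × body weight); the urine-volume and fraction-excreted values below are assumptions, not the paper's.

      urinary_conc_ug_per_l = 2.64   # geometric mean urinary BPA, from the abstract
      urine_volume_l_per_day = 1.6   # assumed typical daily urine output
      fraction_excreted = 1.0        # assume near-complete urinary excretion as glucuronide
      body_weight_kg = 70.0          # assumed adult body weight

      daily_intake = (urinary_conc_ug_per_l * urine_volume_l_per_day /
                      (fraction_excreted * body_weight_kg))
      print(f"estimated exposure ~ {daily_intake:.3f} ug/kg/day")   # ~0.06, same order as the paper's 0.024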

  7. Electrochemical behavior of 45S5 bioactive ceramic coating on Ti6Al4V alloy for dental applications

    NASA Astrophysics Data System (ADS)

    Machado López, M. M.; Espitia Cabrera, M. I.; Faure, J.; Contreras García, M. E.

    2016-04-01

    Titanium and its alloys are widely used as implant materials because of their mechanical properties and non-toxic behavior. Unfortunately, they are not bioinert, which means that they can release ions and can only fix to the bone by mechanical anchorage; this can lead to encapsulation by dense fibrous tissue in the body. Bone fixation is required in clinical conditions treated by orthopedic and dental medicine. The proposal is to coat metallic implants with bioactive materials to establish good interfacial bonds between the metal substrate and bone by increasing bioactivity. Bioactive glass ceramics, specifically 45S5 Bioglass, have drawn attention as a functional biomaterial because of their osseointegration capacity. The EPD method applied to a bioglass gel precursor was proposed in the present work as a new route to obtain 45S5/Ti6Al4V coatings for dental applications. The coatings were thermally treated at 700 and 800°C and presented the characteristic 45S5 bioglass phases, showing uniform morphology with no defects; EDS quantification of the Si, Ca, Na, P and O contents in powders scratched from the coatings showed a good proportional relationship, demonstrating that 45S5 bioglass was obtained. The corrosion tests were carried out in Hank's solution. By Tafel extrapolation, the Ti6Al4V alloy showed good corrosion resistance in Hank's solution through the formation of a passivation layer on the metal surface; however, in the 45S5/Ti6Al4V system there was a further increase in corrosion resistance: icorr, Ecorr and the corrosion rate decreased, and the mass loss and the rate of ion release were lower in this system than in the uncoated titanium alloy.
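
    For readers unfamiliar with the Tafel extrapolation step used above, a generic sketch: extrapolate the linear anodic and cathodic branches back toward the corrosion potential, read off icorr at their intersection, and convert icorr to a penetration rate with Faraday's law. The polarization data and alloy constants below are illustrative, not the values measured for 45S5/Ti6Al4V.

      import numpy as np

      def tafel_extrapolation(e_v, i_a_cm2, window=(0.05, 0.20)):
          """Estimate Ecorr and icorr from a polarization curve.

          Straight lines are fitted to log10|i| vs E on the anodic and cathodic
          branches, using points whose overpotential magnitude lies in `window`
          (V); icorr is taken at the intersection of the two fitted lines.
          """
          e = np.asarray(e_v); i = np.asarray(i_a_cm2)
          e_corr = e[np.argmin(np.abs(i))]                  # potential of minimum |i|
          eta = e - e_corr
          fits = {}
          for name, mask in (("an", (eta > window[0]) & (eta < window[1])),
                             ("cat", (eta < -window[0]) & (eta > -window[1]))):
              slope, intercept = np.polyfit(e[mask], np.log10(np.abs(i[mask])), 1)
              fits[name] = (slope, intercept)
          (sa, ba), (sc, bc) = fits["an"], fits["cat"]
          e_int = (bc - ba) / (sa - sc)                      # intersection potential
          i_corr = 10 ** (sa * e_int + ba)
          return e_corr, i_corr

      def corrosion_rate_mm_per_year(i_corr_a_cm2, eq_weight_g=11.8, density_g_cm3=4.43):
          """Faraday's-law conversion (constants here are rough Ti-alloy values)."""
          return 3.27e3 * i_corr_a_cm2 * eq_weight_g / density_g_cm3

      # Synthetic Butler-Volmer curve for illustration (icorr = 1e-6 A/cm^2, Ecorr = -0.30 V).
      e = np.linspace(-0.55, -0.05, 400)
      i = 1e-6 * (np.exp(2.303 * (e + 0.30) / 0.12) - np.exp(-2.303 * (e + 0.30) / 0.12))
      print(tafel_extrapolation(e, i))
      print(corrosion_rate_mm_per_year(1e-6), "mm/year")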

  8. An automated leaching method for the determination of opal in sediments and particulate matter

    NASA Astrophysics Data System (ADS)

    Müller, Peter J.; Schneider, Ralph

    1993-03-01

    An automated leaching method for the analysis of biogenic silica (opal) in sediments and particulate matter is described. The opaline material is extracted with 1 M NaOH at 85°C in a stainless steel vessel under constant stirring, and the increase in dissolved silica is continuously monitored. For this purpose, a minor portion of the leaching solution is cycled to an autoanalyzer and analyzed for dissolved silicon by molybdate-blue spectrophotometry. The resulting absorbance versus time plot is then evaluated according to the extrapolation procedure of DeMaster (1981). The method has been tested on sponge spicules, radiolarian tests, Recent and Pliocene diatomaceous ooze samples, clay minerals and quartz, artificial sediment mixtures, and on various plankton, sediment trap and sediment samples. The results show that the relevant forms of biogenic opal in Quaternary sediments are quantitatively recovered. The time required for an analysis depends on the sample type, ranging from 10 to 20 min for plankton and sediment trap material and up to 40-60 min for Quaternary sediments. The silica co-extracted from silicate minerals is largely compensated for by the applied extrapolation technique. The remaining degree of uncertainty is on the order of 0.4 wt% SiO2 or less, depending on the clay mineral composition and content.
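
    A minimal sketch of the DeMaster (1981) extrapolation step referred to above: fit a straight line to the slowly rising, mineral-dominated tail of the dissolved-silica vs time curve and take its intercept at t = 0 as the biogenic opal contribution. The leaching data below are synthetic.

      import numpy as np

      def demaster_opal(time_min, si_wt_pct, tail_start_min=30.0):
          """Intercept of a linear fit to the long-time portion of the leaching curve."""
          t = np.asarray(time_min); si = np.asarray(si_wt_pct)
          tail = t >= tail_start_min
          slope, intercept = np.polyfit(t[tail], si[tail], 1)
          return intercept          # wt% SiO2 attributed to biogenic opal

      # Synthetic leaching curve: 5 wt% opal dissolving quickly plus a slow mineral release.
      t = np.linspace(0, 60, 61)
      si = 5.0 * (1.0 - np.exp(-t / 5.0)) + 0.01 * t
      print(f"biogenic opal ~ {demaster_opal(t, si):.2f} wt% SiO2")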

  9. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Progress in parallel/vector computers has driven us to develop suitable numerical integrators that utilize their computational power to the full extent while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta type integrators are known to be not very efficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) can lead to an acceleration factor on the order of 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) seems to highlight Milankar's so-called "pipelined predictor corrector method", which is expected to give an acceleration factor of 3-4. We will review these directions and discuss future prospects.

  10. An effective method for terrestrial arthropod euthanasia.

    PubMed

    Bennie, Neil A C; Loaring, Christopher D; Bennie, Mikaella M G; Trim, Steven A

    2012-12-15

    As scientific understanding of invertebrate life increases, so does the concern for how to end that life in an effective way that minimises (potential) suffering and is also safe for those carrying out the procedure. There is increasing debate on the most appropriate euthanasia methods for invertebrates as their use in experimental research and zoological institutions grows. Their popularity as pet species has also led to an increase in the need for greater veterinary understanding. Through the use of a local injection of potassium chloride (KCl) initially developed for use in American lobsters, this paper describes a safe and effective method for euthanasia in terrestrial invertebrates. Initial work focused on empirically determining the dose for cockroaches, which was then extrapolated to other arthropod species. For this method of euthanasia, we propose the term 'targeted hyperkalosis' to describe death through terminal depolarisation of the thoracic ganglia as a result of high potassium concentration.

  11. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    PubMed

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation, psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, bearing the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus

  12. A hybrid method for the computation of quasi-3D seismograms.

    NASA Astrophysics Data System (ADS)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global-scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal-mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution. They provide us with images of the crust and the upper part of the mantle. In an attempt to teleport such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method in which SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these

  13. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
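
    A compact sketch of the periodic Pulay idea described above (not the authors' implementation): plain linear mixing on most iterations, with a Pulay/DIIS extrapolation over the stored history every k-th iteration. The fixed-point map g below is a toy stand-in for an SCF cycle.

      import numpy as np

      def periodic_pulay(g, x0, beta=0.2, k=4, history=6, tol=1e-10, maxiter=500):
          """Solve the fixed point x = g(x): linear mixing on every iteration,
          plus a Pulay (DIIS) extrapolation over the stored history every k-th one."""
          x = np.asarray(x0, dtype=float)
          xs, fs = [], []                       # iterate and residual history
          for it in range(1, maxiter + 1):
              f = g(x) - x                      # residual of the fixed-point map
              if np.linalg.norm(f) < tol:
                  return x, it
              xs.append(x.copy()); fs.append(f.copy())
              xs, fs = xs[-history:], fs[-history:]
              if it % k == 0 and len(fs) > 1:
                  # Pulay step: minimize |sum_i c_i f_i| subject to sum_i c_i = 1.
                  n = len(fs)
                  B = np.zeros((n + 1, n + 1))
                  B[:n, :n] = [[fi @ fj for fj in fs] for fi in fs]
                  B[:n, n] = B[n, :n] = 1.0
                  rhs = np.zeros(n + 1); rhs[n] = 1.0
                  c = np.linalg.lstsq(B, rhs, rcond=None)[0][:n]
                  x = sum(ci * (xi + beta * fi) for ci, xi, fi in zip(c, xs, fs))
              else:
                  x = x + beta * f              # plain linear mixing
          return x, maxiter

      # Toy stand-in for an SCF cycle: a contractive map whose fixed point solves A x = b.
      rng = np.random.default_rng(0)
      M = rng.standard_normal((5, 5))
      A = M @ M.T / 5.0 + np.eye(5)
      b = np.ones(5)
      g = lambda x: x - 0.1 * (A @ x - b)
      x_star, n_iter = periodic_pulay(g, np.zeros(5))
      print(np.allclose(A @ x_star, b), n_iter)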

  14. WC Nanocrystals Grown on Vertically Aligned Carbon Nanotubes: An Efficient and Stable Electrocatalyst for Hydrogen Evolution Reaction.

    PubMed

    Fan, Xiujun; Zhou, Haiqing; Guo, Xia

    2015-05-26

    Single-crystalline tungsten carbide (WC) nanocrystals were first synthesized on the tips of vertically aligned carbon nanotubes (VA-CNTs) with a hot filament chemical vapor deposition (HF-CVD) method through the direct reaction of tungsten metal with a carbon source. The VA-CNTs, with their vertical structural integrity and alignment preserved, play an important role in supporting the growth of the nanocrystalline WC. With the high crystallinity, small size, and uniform distribution of the WC particles on the carbon support, the formed WC-CNTs material exhibited excellent catalytic activity for the hydrogen evolution reaction (HER), giving a η10 (the overpotential for driving a current of 10 mA cm(-2)) of 145 mV, an onset potential of 15 mV, an exchange current density @ 300 mV of 117.6 mV, and a Tafel slope of 72 mV dec(-1) in acid solution, and a η10 of 137 mV, an onset potential of 16 mV, an exchange current density @ 300 mV of 33.1 mV, and a Tafel slope of 106 mV dec(-1) in alkaline media, respectively. An electrochemical stability test further confirms the long-term operation of the catalyst in both acidic and alkaline media.

  15. Graphene: corrosion-inhibiting coating.

    PubMed

    Prasai, Dhiraj; Tuberquia, Juan Carlos; Harl, Robert R; Jennings, G Kane; Rogers, Bridget R; Bolotin, Kirill I

    2012-02-28

    We report the use of atomically thin layers of graphene as a protective coating that inhibits corrosion of underlying metals. Here, we employ electrochemical methods to study the corrosion inhibition of copper and nickel by either growing graphene on these metals, or by mechanically transferring multilayer graphene onto them. Cyclic voltammetry measurements reveal that the graphene coating effectively suppresses metal oxidation and oxygen reduction. Electrochemical impedance spectroscopy measurements suggest that while graphene itself is not damaged, the metal under it is corroded at cracks in the graphene film. Finally, we use Tafel analysis to quantify the corrosion rates of samples with and without graphene coatings. These results indicate that copper films coated with graphene grown via chemical vapor deposition are corroded 7 times slower in an aerated Na(2)SO(4) solution as compared to the corrosion rate of bare copper. Tafel analysis reveals that nickel with a multilayer graphene film grown on it corrodes 20 times slower while nickel surfaces coated with four layers of mechanically transferred graphene corrode 4 times slower than bare nickel. These findings establish graphene as the thinnest known corrosion-protecting coating.

  16. Unified Scaling Law for flux pinning in practical superconductors: II. Parameter testing, scaling constants, and the Extrapolative Scaling Expression

    NASA Astrophysics Data System (ADS)

    Ekin, Jack W.; Cheggour, Najib; Goodrich, Loren; Splett, Jolene; Bordini, Bernardo; Richter, David

    2016-12-01

    A scaling study of several thousand Nb3Sn critical-current (I_c) measurements is used to derive the Extrapolative Scaling Expression (ESE), a relation that can quickly and accurately extrapolate limited datasets to obtain full three-dimensional dependences of I_c on magnetic field (B), temperature (T), and mechanical strain (ε). The relation has the advantage of being easy to implement, and offers significant savings in sample characterization time and a useful tool for magnet design. Thorough data-based analysis of the general parameterization of the Unified Scaling Law (USL) shows the existence of three universal scaling constants for practical Nb3Sn conductors. The study also identifies the scaling parameters that are conductor specific and need to be fitted to each conductor. This investigation includes two new, rare, and very large I_c(B,T,ε) datasets (each with nearly a thousand I_c measurements spanning magnetic fields from 1 to 16 T, temperatures from ~2.26 to 14 K, and intrinsic strains from -1.1% to +0.3%). The results are summarized in terms of the general USL parameters given in table 3 of Part 1 (Ekin J W 2010 Supercond. Sci. Technol. 23 083001) of this series of articles. The scaling constants determined for practical Nb3Sn conductors are: the upper-critical-field temperature parameter v = 1.50 ± 0.04, the cross-link parameter w = 3.0 ± 0.3, and the strain curvature parameter u = 1.7 ± 0.1 (from equation (29) for b_{c2}(ε) in Part 1). These constants and the required fitting parameters result in the ESE relation, given by I_c(B,T,ε) B = C [b_{c2}(ε)]^s (1 - t^{1.5})^{η-μ} (1 - t^2)^μ b^p (1 - b)^q, with reduced magnetic field b ≡ B/B_{c2}*(T,ε) and reduced temperature t ≡ T/T_c*(ε), where B_{c2}*(T,ε) = B_{c2}*(0,0) (1 - t^{1.5}) b_{c2}(ε) and T_c*(ε) = T_c*(0) [b_{c2}(ε)]^{1/3}, and fitting parameters C, B_{c2}*(0,0), T_c*(0), s, either η or μ (but not both), plus the parameters in the strain function b_{c2}(ε)
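
    A small sketch that evaluates the ESE relation as reconstructed above. Every fitting parameter and the strain function b_{c2}(ε) below are placeholders: the real values must come from fits to a specific conductor, and the simple strain function used here is an assumption for illustration only (it is not equation (29) of Part 1).

      def ic_ese(B, T, eps, C, Bc2_00, Tc_0, s, p, q, eta, mu, u=1.7, v=1.5):
          """Critical current from the ESE relation; all parameters are placeholders."""
          b_c2 = 1.0 - u * eps**2             # toy strain function, NOT eq. (29) of Part 1
          Tc = Tc_0 * b_c2 ** (1.0 / 3.0)
          t = T / Tc
          Bc2 = Bc2_00 * (1.0 - t**v) * b_c2
          b = B / Bc2
          return (C / B) * b_c2**s * (1 - t**v)**(eta - mu) * (1 - t**2)**mu * b**p * (1 - b)**q

      # Illustrative numbers only (roughly Nb3Sn-like magnitudes, not fitted values).
      print(ic_ese(B=12.0, T=4.2, eps=-0.002, C=30000.0, Bc2_00=30.0, Tc_0=18.0,
                   s=1.0, p=0.5, q=2.0, eta=2.5, mu=2.0))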

  17. Use of a probabilistic PBPK/PD model to calculate Data Derived Extrapolation Factors for chlorpyrifos.

    PubMed

    Poet, Torka S; Timchalk, Charles; Bartels, Michael J; Smith, Jordan N; McDougal, Robin; Juberg, Daland R; Price, Paul S

    2017-06-01

    A physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model combined with Monte Carlo analysis of inter-individual variation was used to assess the effects of the insecticide, chlorpyrifos and its active metabolite, chlorpyrifos oxon in humans. The PBPK/PD model has previously been validated and used to describe physiological changes in typical individuals as they grow from birth to adulthood. This model was updated to include physiological and metabolic changes that occur with pregnancy. The model was then used to assess the impact of inter-individual variability in physiology and biochemistry on predictions of internal dose metrics and quantitatively assess the impact of major sources of parameter uncertainty and biological diversity on the pharmacodynamics of red blood cell acetylcholinesterase inhibition. These metrics were determined in potentially sensitive populations of infants, adult women, pregnant women, and a combined population of adult men and women. The parameters primarily responsible for inter-individual variation in RBC acetylcholinesterase inhibition were related to metabolic clearance of CPF and CPF-oxon. Data Derived Extrapolation Factors that address intra-species physiology and biochemistry to replace uncertainty factors with quantitative differences in metrics were developed in these same populations. The DDEFs were less than 4 for all populations. These data and modeling approach will be useful in ongoing and future human health risk assessments for CPF and could be used for other chemicals with potential human exposure. Copyright © 2017 Elsevier Inc. All rights reserved.
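
    A hedged sketch of how a data-derived extrapolation factor for intra-species variability can be computed from Monte Carlo output: for example, the ratio of an upper percentile of the simulated internal dose metric to its median. The lognormal population below is synthetic (not the paper's PBPK/PD output), and the choice of the 99th percentile is an assumption.

      import numpy as np

      rng = np.random.default_rng(1)
      # Synthetic population distribution of an internal dose metric (a stand-in for
      # e.g. the AUC of chlorpyrifos-oxon or peak RBC AChE inhibition) from a Monte Carlo run.
      dose_metric = rng.lognormal(mean=0.0, sigma=0.35, size=100_000)

      ddef = np.percentile(dose_metric, 99) / np.median(dose_metric)
      print(f"DDEF (99th percentile / median) ~ {ddef:.2f}")   # below 4 here, as in the abstract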

  18. Quality of the log-geometric distribution extrapolation for smaller undiscovered oil and gas pool size

    USGS Publications Warehouse

    Chenglin, L.; Charpentier, R.R.

    2010-01-01

    The U.S. Geological Survey procedure for the estimation of the general form of the parent distribution requires that the parameters of the log-geometric distribution be calculated and analyzed for the sensitivity of these parameters to different conditions. In this study, we derive the shape factor of a log-geometric distribution from the ratio of frequencies between adjacent bins. The shape factor has a log straight-line relationship with the ratio of frequencies. Additionally, the calculation equations of a ratio of the mean size to the lower size-class boundary are deduced. For a specific log-geometric distribution, we find that the ratio of the mean size to the lower size-class boundary is the same. We apply our analysis to simulations based on oil and gas pool distributions from four petroleum systems of Alberta, Canada and four generated distributions. Each petroleum system in Alberta has a different shape factor. Generally, the shape factors in the four petroleum systems stabilize with the increase of discovered pool numbers. For a log-geometric distribution, the shape factor becomes stable when discovered pool numbers exceed 50 and the shape factor is influenced by the exploration efficiency when the exploration efficiency is less than 1. The simulation results show that calculated shape factors increase with those of the parent distributions, and undiscovered oil and gas resources estimated through the log-geometric distribution extrapolation are smaller than the actual values. ?? 2010 International Association for Mathematical Geology.
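
    A small sketch of the adjacent-bin relation described above: for a log-geometric parent distribution the ratio of frequencies between adjacent size classes is constant, so that ratio can be estimated from a log-linear fit of the binned counts. The pool counts below are synthetic, and the final conversion from the ratio to the USGS shape factor (the paper's log straight-line relationship) is not reproduced here.

      import numpy as np

      # Synthetic counts of pools per log size class (largest class last).
      counts = np.array([120, 65, 30, 14, 8, 4])

      # For a log-geometric parent, counts fall off by a constant factor per class,
      # so log(count) is linear in the class index; the fitted slope gives the ratio.
      k = np.arange(len(counts))
      slope, _ = np.polyfit(k, np.log(counts), 1)
      ratio = np.exp(slope)
      print(f"adjacent-bin frequency ratio ~ {ratio:.2f}")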

  19. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    NASA Astrophysics Data System (ADS)

    Cui, Jie; Li, Zhiying; Krems, Roman V.

    2015-10-01

    We consider a problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties for another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H →-X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He — C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
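
    A generic sketch of the Gaussian Process step, using scikit-learn rather than the authors' code: train a GP on observables computed at sampled values of the unknown potential parameters, then predict, with uncertainty, over the physically reasonable parameter range. The "observable" below is a synthetic function standing in for trajectory-computed cross sections.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)

      # Stand-in for cross sections computed by classical trajectories at sampled
      # values of an unknown He-CN interaction parameter (arbitrary units).
      def observable(theta):
          return 50.0 + 8.0 * np.sin(2.0 * theta) + 3.0 * theta

      theta_train = rng.uniform(0.0, 2.0, size=12)[:, None]
      sigma_train = observable(theta_train).ravel()

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                    normalize_y=True)
      gp.fit(theta_train, sigma_train)

      # Predict over the physically reasonable parameter range, with uncertainty.
      theta_grid = np.linspace(0.0, 2.0, 50)[:, None]
      mean, std = gp.predict(theta_grid, return_std=True)
      print(f"average prediction interval width (2*std): {2 * std.mean():.3f}")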

  20. A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete Point Linear Models

    DTIC Science & Technology

    2016-04-01

    incorporated with nonlinear elements to produce a continuous, quasi-nonlinear simulation model. Extrapolation methods within the model stitching architecture... Keywords: Simulation Model, Quasi-Nonlinear, Piloted Simulation, Flight-Test Implications, System Identification, Off-Nominal Loading Extrapolation, Stability...

  1. Prediction of sonic boom from experimental near-field overpressure data. Volume 1: Method and results

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Reiners, S. J.

    1975-01-01

    A computerized procedure for predicting sonic boom from experimental near-field overpressure data has been developed. The procedure extrapolates near-field pressure signatures for a specified flight condition to the ground by the Thomas method. Near-field pressure signatures are interpolated from a data base of experimental pressure signatures. The program is an independently operated ODIN (Optimal Design Integration) program which obtains flight path information from other ODIN programs or from input.

  2. In vitro to In vivo extrapolation of hepatic metabolism in fish: An inter-laboratory comparison of In vitro methods

    EPA Science Inventory

    Chemical biotransformation represents the single largest source of uncertainty in chemical bioaccumulation assessments for fish. In vitro methods employing isolated hepatocytes and liver subcellular fractions (S9) can be used to estimate whole-body rates of chemical metabolism, ...

  3. Numerical methods for stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kloeden, Peter; Platen, Eckhard

    1991-06-01

    The numerical analysis of stochastic differential equations differs significantly from that of ordinary differential equations due to the peculiarities of stochastic calculus. This book provides an introduction to stochastic calculus and stochastic differential equations, both theory and applications. The main emphasis is placed on the numerical methods needed to solve such equations. It assumes an undergraduate background in mathematical methods typical of engineers and physicists, though many chapters begin with a descriptive summary which may be accessible to others who only require numerical recipes. To help the reader develop an intuitive understanding of the underlying mathematics and hands-on numerical skills, exercises and over 100 PC Exercises (PC: personal computer) are included. The stochastic Taylor expansion provides the key tool for the systematic derivation and investigation of discrete time numerical methods for stochastic differential equations. The book presents many new results on higher order methods for strong sample path approximations and for weak functional approximations, including implicit, predictor-corrector, extrapolation and variance-reduction methods. Besides serving as a basic text on such methods, the book offers the reader ready access to a large number of potential research problems in a field that is just beginning to expand rapidly and is widely applicable.
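
    As a minimal, generic illustration of the kind of scheme this subject covers (the simplest strong scheme, Euler-Maruyama, applied to geometric Brownian motion; not an example taken from the book):

      import numpy as np

      def euler_maruyama(a, b, x0, t_end, n_steps, rng):
          """Strong order-0.5 Euler-Maruyama scheme for dX = a(X) dt + b(X) dW."""
          dt = t_end / n_steps
          x = np.empty(n_steps + 1); x[0] = x0
          dw = rng.normal(0.0, np.sqrt(dt), size=n_steps)
          for k in range(n_steps):
              x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dw[k]
          return x

      # Geometric Brownian motion dX = mu*X dt + sigma*X dW.
      mu, sigma = 0.5, 0.2
      path = euler_maruyama(lambda x: mu * x, lambda x: sigma * x,
                            x0=1.0, t_end=1.0, n_steps=1000, rng=np.random.default_rng(42))
      print(path[-1])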

  4. A citizen science based survey method for estimating the density of urban carnivores.

    PubMed

    Scott, Dawn M; Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W; Mill, Aileen C; Smith, Graham C; Tolhurst, Bryony A

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine relative density of two urban carnivores in England, Great Britain. We determined the density of: red fox (Vulpes vulpes) social groups in 14, approximately 1km2 suburban areas in 8 different towns and cities; and Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, which was double the estimates for cities with resident foxes in the 1980's. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating unreliability of the national data to determine actual densities or to extrapolate a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density, however publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on

  5. A citizen science based survey method for estimating the density of urban carnivores

    PubMed Central

    Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine relative density of two urban carnivores in England, Great Britain. We determined the density of: red fox (Vulpes vulpes) social groups in 14, approximately 1km2 suburban areas in 8 different towns and cities; and Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, which was double the estimates for cities with resident foxes in the 1980’s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating unreliability of the national data to determine actual densities or to extrapolate a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density, however publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on

  6. Highly efficient molecular simulation methods for evaluation of thermodynamic properties of crystalline phases

    NASA Astrophysics Data System (ADS)

    Moustafa, Sabry Gad Al-Hak Mohammad

    Molecular simulation (MS) methods (e.g. Monte Carlo (MC) and molecular dynamics (MD)) provide a reliable tool (especially at extreme conditions) for measuring solid properties. However, measuring them accurately and efficiently (smallest uncertainty for a given time) using MS can be a big challenge, especially with ab initio-type models. In addition, comparing with experimental results by extrapolating properties from finite size to the thermodynamic limit can be a critical obstacle. We first estimate the free energy (FE) of a crystalline system of a simple discontinuous potential, hard spheres (HS), at its melting condition. Several approaches are explored to determine the most efficient route. The comparison study shows a considerable improvement in efficiency over the standard MS methods that are known for solid phases. In addition, we were able to accurately extrapolate to the thermodynamic limit using relatively small system sizes. Although the method is applied to the HS model, it is readily extended to more complex hard-body potentials, such as hard tetrahedra. The harmonic approximation of the potential energy surface is usually an accurate model (especially at low temperature and high density) to describe many realistic solid phases. In addition, since the analysis is done numerically, the method is relatively cheap. Here, we apply lattice dynamics (LD) techniques to get the FE of clathrate hydrate structures. A rigid-bond model is assumed to describe the water molecules; this, however, requires additional orientational degrees of freedom in order to specify each molecule. However, we were able to efficiently avoid using those degrees of freedom through a mathematical transformation that uses only the atomic coordinates of the water molecules. In addition, the proton-disorder nature of the hydrate water networks adds extra complexity to the problem, especially when extrapolation to the thermodynamic limit is needed. The finite-size effects of the proton disorder contribution is

  7. Extrapolation of the dna fragment-size distribution after high-dose irradiation to predict effects at low doses

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Cucinotta, F. A.; Sachs, R. K.; Brenner, D. J.; Peterson, L. E.

    2001-01-01

    The patterns of DSBs induced in the genome are different for sparsely and densely ionizing radiations: In the former case, the patterns are well described by a random-breakage model; in the latter, a more sophisticated tool is needed. We used a Monte Carlo algorithm with a random-walk geometry of chromatin, and a track structure defined by the radial distribution of energy deposition from an incident ion, to fit the PFGE data for fragment-size distribution after high-dose irradiation. These fits determined the unknown parameters of the model, enabling the extrapolation of data for high-dose irradiation to the low doses that are relevant for NASA space radiation research. The randomly-located-clusters formalism was used to speed the simulations. It was shown that only one adjustable parameter, Q, the track efficiency parameter, was necessary to predict DNA fragment sizes for wide ranges of doses. This parameter was determined for a variety of radiations and LETs and was used to predict the DSB patterns at the HPRT locus of the human X chromosome after low-dose irradiation. It was found that high-LET radiation would be more likely than low-LET radiation to induce additional DSBs within the HPRT gene if this gene already contained one DSB.

  8. A facile method to synthesize boron-doped Ni/Fe alloy nano-chains as electrocatalyst for water oxidation

    NASA Astrophysics Data System (ADS)

    Yang, Yisu; Zhuang, Linzhou; Lin, Rijia; Li, Mengran; Xu, Xiaoyong; Rufford, Thomas E.; Zhu, Zhonghua

    2017-05-01

    We report a novel magnetic field assisted chemical reduction method for the synthesis of boron-doped Ni/Fe nano-chains as promising catalysts for the oxygen evolution reaction (OER). The boron-doped Ni/Fe nano-chains were synthesised in a one step process at room temperature using NaBH4 as a reducing agent. The addition of boron reduced the magnetic moment of the intermediate synthesis products and produced nano-chains with a high specific surface area of 73.4 m2 g-1. The boron-doped Ni/Fe nano-chains exhibited catalytic performance superior to state-of-the-art Ba0.5Sr0.5Co0.8Fe0.2O3-δ perovskite and RuO2 noble metal oxide catalysts. The mass normalized activity of the boron-doped Ni/Fe nano-chains measured at an overpotential of 0.35 V was 64.0 A g-1, with a Tafel slope of only 40 mV dec-1. The excellent performance of the boron-doped Ni/Fe nano-chains can be attributed to the uniform elemental distribution and highly amorphous structure of the B-doped nano-chains. These results provide new insights into the effect of doping transition-metal based OER catalysts with non-metallic elements. The study demonstrates a facile approach to prepare transition metal nano-chains using magnetic field assisted chemical reduction method as cheap and highly active catalysts for electrochemical water oxidation.

  9. A study of numerical methods of solution of the equations of motion of a controlled satellite under the influence of gravity gradient torque

    NASA Technical Reports Server (NTRS)

    Thompson, J. F.; Mcwhorter, J. C.; Siddiqi, S. A.; Shanks, S. P.

    1973-01-01

    Numerical methods of integration of the equations of motion of a controlled satellite under the influence of gravity-gradient torque are considered. The results of computer experimentation using a number of Runge-Kutta, multi-step, and extrapolation methods for the numerical integration of this differential system are presented, and particularly efficient methods are noted. A large bibliography of numerical methods for initial value problems for ordinary differential equations is presented, and a compilation of Runge-Kutta and multistep formulas is given. Less common numerical integration techniques from the literature are noted for further consideration.

  10. Flight-Test Evaluation of Flutter-Prediction Methods

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    2003-01-01

    The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.

  11. Comparison of Cliff-Lorimer-Based Methods of Scanning Transmission Electron Microscopy (STEM) Quantitative X-Ray Microanalysis for Application to Silicon Oxycarbides Thin Films.

    PubMed

    Parisini, Andrea; Frabboni, Stefano; Gazzadi, Gian Carlo; Rosa, Rodolfo; Armigliato, Aldo

    2018-06-01

    In this work, we compare the results of different Cliff-Lorimer (Cliff & Lorimer 1975) based methods in the case of a quantitative energy dispersive spectrometry investigation of light elements in ternary C-O-Si thin films. To determine the Cliff-Lorimer (C-L) k-factors, we fabricated, by focused ion beam, a standard consisting of a wedge lamella with a truncated tip, composed of two parallel SiO2 and 4H-SiC stripes. In 4H-SiC, it was not possible to obtain reliable k-factors from standard extrapolation methods owing to the strong C K-photon absorption. To overcome this problem, an extrapolation method exploiting the shape of the truncated tip of the lamella is proposed herein. The k-factors thus determined were then used in an application of the C-L quantification procedure to a defect found at the SiO2/4H-SiC interface in the channel region of a metal-oxide field-effect-transistor device. As the sample thickness is required in this procedure, a method to determine this quantity from the averaged and normalized scanning transmission electron microscopy intensity is also detailed. Monte Carlo simulations were used to investigate the discrepancy between experimental and theoretical k-factors and to bridge the gap between the k-factor and the Watanabe and Williams ζ-factor methods (Watanabe & Williams, 2006).
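
    For readers unfamiliar with the ratio equation that the k-factors feed into, the sketch below applies the basic Cliff-Lorimer relation C_A/C_B = k_AB*(I_A/I_B) with normalization to 100 wt%; the intensities and k-factors are hypothetical, and the absorption correction that is central to the paper is deliberately omitted.

```python
# Minimal sketch of Cliff-Lorimer thin-film quantification (hypothetical values).
# C_A / C_B = k_AB * (I_A / I_B), with concentrations normalized to 100 wt%.
# No absorption correction is applied in this simplified version.
def cliff_lorimer(intensities, k_factors_vs_ref):
    """Return weight fractions from X-ray intensities.

    intensities      -- dict element -> background-subtracted peak intensity
    k_factors_vs_ref -- dict element -> k-factor relative to a common reference
                        element (k = 1 for the reference itself)
    """
    raw = {el: k_factors_vs_ref[el] * I for el, I in intensities.items()}
    total = sum(raw.values())
    return {el: v / total for el, v in raw.items()}

# Hypothetical C-O-Si measurement (counts) and k-factors relative to Si
I = {"C": 1200.0, "O": 5400.0, "Si": 9000.0}
k = {"C": 2.1, "O": 1.8, "Si": 1.0}          # illustrative numbers only

for el, w in cliff_lorimer(I, k).items():
    print(f"{el}: {100*w:.1f} wt%")
```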

  12. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Jie; Krems, Roman V.; Li, Zhiying

    2015-10-21

    We consider a problem of extrapolating the collision properties of a large polyatomic molecule A–H to make predictions of the dynamical properties for another molecule related to A–H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A–X. We assume that the effect of the −H → −X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C{sub 6}H{sub 5}CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He — C{sub 6}H{sub 6} collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C{sub 6}H{sub 5}CN with He.
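
    The following sketch illustrates the general Gaussian Process regression step described above using scikit-learn; the training set is a synthetic stand-in for trajectory-calculation results, and the two-parameter input space is an assumption made purely for illustration.

```python
# Minimal sketch: Gaussian Process regression of a scattering observable
# (e.g., a cross section) as a function of unknown potential parameters.
# Training data here are placeholders for classical-trajectory results.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(25, 2))              # 2 scaled potential parameters
y_train = np.sin(3 * X_train[:, 0]) + 0.5 * X_train[:, 1]   # stand-in observable

gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Vary the parameters over a physically reasonable range and collect the
# spread of predicted observables -> a prediction interval for the observable.
X_query = rng.uniform(-1.0, 1.0, size=(500, 2))
y_pred, y_std = gp.predict(X_query, return_std=True)
print(f"predicted observable range: {y_pred.min():.2f} .. {y_pred.max():.2f}")
```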

  13. An automatic multigrid method for the solution of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

    An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDEs is presented. This version is based solely on the structure of the algebraic system, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is preserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found to be better than known strategies.

  14. Interpolation and Extrapolation of Creep Rupture Data by the Minimum Commitment Method. Part 3: Analysis of Multiheats

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Ensign, C. R.

    1978-01-01

    The Minimum Commitment Method was applied to two sets of data for which multiple heat information was available. For one alloy, a 304 stainless steel studied in Japan, data on nine well characterized heats were used, while for a proprietary low alloy carbon steel studied in the United Kingdom, data were available on seven heats, in many cases to very long rupture times. For this preliminary study no instability factors were used. It was discovered that heat-to-heat variations could be accounted for by introducing heat identifiers in the form A + B log sigma, where sigma is the stress and the constants A and B depend only on the heat. With these identifiers all the data could be collapsed onto a single master curve, even though there was considerable scatter among heats. Using these identifiers together with the average behavior of all heats made possible the determination of an accurate constitutive equation for each individual heat. Two basic approaches are discussed for applying the results of the analysis.

  15. Facile Synthesis of Single Crystal Vanadium Disulfide Nanosheets by Chemical Vapor Deposition for Efficient Hydrogen Evolution Reaction.

    PubMed

    Yuan, Jiangtan; Wu, Jingjie; Hardy, Will J; Loya, Philip; Lou, Minhan; Yang, Yingchao; Najmaei, Sina; Jiang, Menglei; Qin, Fan; Keyshar, Kunttal; Ji, Heng; Gao, Weilu; Bao, Jiming; Kono, Junichiro; Natelson, Douglas; Ajayan, Pulickel M; Lou, Jun

    2015-10-07

    A facile chemical vapor deposition method to prepare single-crystalline VS2 nanosheets for the hydrogen evolution reaction is reported. The electrocatalytic hydrogen evolution reaction (HER) activities of VS2 show an extremely low overpotential of -68 mV at 10 mA cm(-2), small Tafel slopes of ≈34 mV decade(-1), as well as high stability, demonstrating its potential as a candidate non-noble-metal catalyst for the HER. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Preparation and Electrochemical Properties of Graphene/Epoxy Resin Composite Coating

    NASA Astrophysics Data System (ADS)

    Liao, Zijun; Zhang, Tianchi; Qiao, Sen; Zhang, Luyihang

    2017-11-01

    Using multilayer graphene powder as the filler and an epoxy-modified silicone resin as the film-forming agent, an anticorrosion composite coating was prepared by the sand dispersion method, and its electrochemical performance was compared for coatings with different graphene contents and for a pure epoxy resin coating. The open circuit potential (OCP), potentiodynamic polarization curves (Tafel plots) and electrochemical impedance spectroscopy (EIS) were measured. The results showed that adding multilayer graphene greatly improved the anticorrosion performance, with the coating containing 5% graphene showing the best corrosion resistance.

  17. Extrapolative capability of two models for estimating the soil water retention curve between saturation and oven dryness.

    PubMed

    Lu, Sen; Ren, Tusheng; Lu, Yili; Meng, Ping; Sun, Shiyou

    2014-01-01

    Accurate estimation of the soil water retention curve (SWRC) in the dry region is required to describe the relation between soil water content and matric suction from saturation to oven dryness. In this study, the extrapolative capability of two models for predicting the complete SWRC from limited ranges of soil water retention data was evaluated. When the model parameters were obtained from SWRC data in the 0-1500 kPa range, the FX model (Fredlund and Xing, 1994) estimations agreed well with measurements from saturation to oven dryness with RMSEs less than 0.01. The GG model (Groenevelt and Grant, 2004) produced larger errors at the dry region, with significantly larger RMSEs and MEs than the FX model. Further evaluations indicated that when SWRC measurements in the 0-100 kPa suction range were used for model establishment, the FX model was capable of producing acceptable SWRCs across the entire water content range. For a higher accuracy, the FX model requires soil water retention data at least in the 0- to 300-kPa range to extend the SWRC to oven dryness. Compared with the Khlosi et al. (2006) model, which requires measurements in the 0-500 kPa range to reproduce the complete SWRCs, the FX model has the advantage of requiring fewer SWRC measurements. Thus the FX modeling approach has the potential to eliminate the processes for measuring soil water retention in the dry range.
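
    As a rough illustration of fitting a limited suction range and then extrapolating to oven dryness, the sketch below fits the Fredlund-Xing (1994) equation (with its standard correction factor) to invented retention data in the 0-300 kPa range; the data, the assumed residual suction and the parameter bounds are illustrative choices, not values from the study.

```python
# Minimal sketch: fit the Fredlund-Xing (1994) SWRC model to retention data
# measured over a limited suction range, then extrapolate toward oven dryness.
import numpy as np
from scipy.optimize import curve_fit

PSI_DRY = 1.0e6   # suction at oven dryness, kPa
PSI_R = 1500.0    # assumed residual suction used in the correction factor, kPa

def fredlund_xing(psi, theta_s, a, n, m):
    # Correction factor forces water content to zero at PSI_DRY
    corr = 1.0 - np.log(1.0 + psi / PSI_R) / np.log(1.0 + PSI_DRY / PSI_R)
    return corr * theta_s / np.log(np.e + (psi / a) ** n) ** m

# Hypothetical measurements in the 0-300 kPa range: suction (kPa), water content
psi_obs = np.array([1, 3, 10, 30, 100, 300], dtype=float)
theta_obs = np.array([0.42, 0.40, 0.36, 0.30, 0.24, 0.19])

popt, _ = curve_fit(fredlund_xing, psi_obs, theta_obs,
                    p0=[0.45, 50.0, 1.5, 1.0],
                    bounds=([0.2, 1.0, 0.3, 0.3], [0.6, 1e4, 10.0, 10.0]))

# Extrapolate the fitted curve toward the dry end
for psi in (1e3, 1e4, 1e5, 1e6):
    print(f"psi = {psi:8.0f} kPa -> theta = {fredlund_xing(psi, *popt):.3f}")
```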

  18. Are there differences in the catalytic activity per unit enzyme of recombinantly expressed and human liver microsomal cytochrome P450 2C9? A systematic investigation into inter-system extrapolation factors.

    PubMed

    Crewe, H K; Barter, Z E; Yeo, K Rowland; Rostami-Hodjegan, A

    2011-09-01

    The 'relative activity factor' (RAF) compares the activity per unit of microsomal protein in recombinantly expressed cytochrome P450 enzymes (rhCYP) and human liver without separating the potential sources of variation (i.e. abundance of enzyme per mg of protein or variation of activity per unit enzyme). The dimensionless 'inter-system extrapolation factor' (ISEF) dissects differences in activity from those in CYP abundance. Detailed protocols for the determination of this scalar, which is used in population in vitro-in vivo extrapolation (IVIVE), are currently lacking. The present study determined an ISEF for CYP2C9 and, for the first time, systematically evaluated the effects of probe substrate, cytochrome b5 and methods for assessing the intrinsic clearance (CL(int) ). Values of ISEF for S-warfarin, tolbutamide and diclofenac were 0.75 ± 0.18, 0.57 ± 0.07 and 0.37 ± 0.07, respectively, using CL(int) values derived from the kinetic values V(max) and K(m) of metabolite formation in rhCYP2C9 + reductase + b5 BD Supersomes™. The ISEF values obtained using rhCYP2C9 + reductase BD Supersomes™ were more variable, with values of 7.16 ± 1.25, 0.89 ± 0.52 and 0.50 ± 0.05 for S-warfarin, tolbutamide and diclofenac, respectively. Although the ISEF values obtained from rhCYP2C9 + reductase + b5 for the three probe substrates were statistically different (p < 0.001), the use of the mean value of 0.54 resulted in predicted oral clearance values for all three substrates within 1.4 fold of the observed literature values. For consistency in the relative activity across substrates, use of a b5 expressing recombinant system, with the intrinsic clearance calculated from full kinetic data is recommended for generation of the CYP2C9 ISEF. Furthermore, as ISEFs have been found to be sensitive to differences in accessory proteins, rhCYP system specific ISEFs are recommended. Copyright © 2011 John Wiley & Sons, Ltd.

  19. Overview of Drug Delivery Methods in Exotics, Including Their Anatomic and Physiologic Considerations.

    PubMed

    Coutant, Thomas; Vergneau-Grosset, Claire; Langlois, Isabelle

    2018-05-01

    Drug delivery to exotic animals may be extrapolated from domestic animals, but some physiologic and anatomic differences complicate treatment administration. Knowing these differences enables one to choose optimal routes for drug delivery. This review provides practitioners with a detailed review of the currently reported methods used for drug delivery of various medications in the most common exotic animal species. Exotic animal peculiarities that are relevant for drug administration are discussed in the text and outlined in tables and boxes to help the reader easily find targeted information. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Estimation of Heat Transfer Coefficient in Squeeze Casting of Magnesium Alloy AM60 by Experimental Polynomial Extrapolation Method

    NASA Astrophysics Data System (ADS)

    Sun, Zhizhong; Niu, Xiaoping; Hu, Henry

    In this work, a 5-step casting mold with different wall thicknesses (3, 5, 8, 12, and 20 mm) was designed, and squeeze casting of magnesium alloy AM60 was performed in a hydraulic press. The casting-die interfacial heat transfer coefficients (IHTC) in the 5-step casting were determined based on experimental thermal history data throughout the die and inside the casting, which were recorded by fine type-K thermocouples. With the measured temperatures, heat flux and IHTC were evaluated using the polynomial curve fitting method. The results show that the wall thickness affects IHTC peak values significantly. The IHTC value for the thick step is higher than that for the thin steps.
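
    The polynomial curve fitting idea can be illustrated as follows: fit the measured die temperatures versus depth at one instant, extrapolate to the die surface, and form the surface heat flux and IHTC from the fitted curve. The conductivity, temperatures and depths below are invented for illustration and are not the experimental values.

```python
# Minimal sketch of the polynomial fitting / extrapolation step for an IHTC estimate.
import numpy as np

k_die = 30.0                                     # assumed die thermal conductivity, W/(m K)
x = np.array([2.0, 4.0, 6.0, 8.0]) * 1e-3        # thermocouple depths below die surface, m
T = np.array([402.0, 386.0, 372.0, 360.0])       # die temperatures at one instant, deg C
T_cast_surf = 560.0                              # casting surface temperature, deg C

# Second-order polynomial fit T(x), then evaluate at the die surface (x = 0)
coeff = np.polyfit(x, T, 2)
T_surf = np.polyval(coeff, 0.0)
dTdx_surf = np.polyval(np.polyder(coeff), 0.0)

q = -k_die * dTdx_surf                           # surface heat flux, W/m^2
h = q / (T_cast_surf - T_surf)                   # interfacial heat transfer coefficient
print(f"T_surface = {T_surf:.1f} C, q = {q/1e3:.1f} kW/m^2, IHTC = {h:.0f} W/(m^2 K)")
```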

  1. Extrapolating subsurface geometry by surface expressions in transpressional strike slip fault, deduced from analogue experiments with settings of rheology and convergence angle

    NASA Astrophysics Data System (ADS)

    Hsieh, Shang Yu; Neubauer, Franz

    2015-04-01

    The internal structure of major strike-slip faults is still poorly understood, particularly how to extrapolate subsurface structures from surface expressions. A series of brittle analogue experiments by Leever et al. (2011) showed that the convergence angle is the most influential factor for surface structures. Further analogue models with different ductile settings allow a better understanding of how to extrapolate surface structures to the subsurface geometry of strike-slip faults. Fifteen analogue experiments were constructed to represent natural strike-slip faults in different geological settings. Key parameters investigated in this study include: (a) the angle of convergence, (b) the thickness of the brittle layer, (c) the influence of a rheologically weak layer within the crust, and (d) the influence of a thick and rheologically weak layer at the base of the crust. The experiments are aimed at explaining first-order structures along major transcurrent strike-slip faults such as the Altyn, Kunlun, San Andreas and Greendale (Darfield earthquake 2010) faults. The preliminary results show that the convergence angle significantly influences the overall geometry of the transpressional system, with greater convergence angles resulting in wider fault zones and higher elevation. Different positions, densities and viscosities of weak rheological layers not only produce different surface expressions but also affect the fault geometry in the subsurface. For instance, rheologically weak material in the bottom layer results in stretching when the experiment reaches a certain displacement and a buildup of a less segmented, wide positive flower structure. At the surface, a wide fault valley in the middle of the fault zone reflects stretching along the velocity discontinuity at depth. In models with a thin and rheologically weaker layer in the middle of the brittle layer, deformation is distributed over more faults and the geometry of the fault zone below and above the weak zone shows significant

  2. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration and the simulation extrapolation methods. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. Both in 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended in contrast to our implementation of this method, although the 'problematic' implementation of regression calibration improved substantially with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. Both in 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Methods of Technological Forecasting,

    DTIC Science & Technology

    1977-05-01

    Trend Extrapolation; Progress Curve; Analogy; Trend Correlation; Substitution Analysis or Substitution Growth Curves; Envelope Curve; Advances in the State of the Art; Technological Mapping; Contextual Mapping; Matrix Input-Output Analysis; Mathematical Models; Simulation Models; Dynamic Modelling. Chapter IV: ...Generation; Interaction between Needs and Possibilities; Map of the Technological Future; Cross-Impact Matrix; Discovery Matrix; Morphological Analysis.

  4. A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND

    PubMed Central

    Bessonova, O.V.; Khokhlova, V.A.; Canney, M.S.; Bailey, M.R.; Crum, L.A.

    2010-01-01

    Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue. PMID:20582159

  5. A derating method for therapeutic applications of high intensity focused ultrasound

    NASA Astrophysics Data System (ADS)

    Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.

    2010-05-01

    Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.

  6. A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND.

    PubMed

    Bessonova, O V; Khokhlova, V A; Canney, M S; Bailey, M R; Crum, L A

    2010-01-01

    Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.

  7. Static and wind tunnel near-field/far-field jet noise measurements from model scale single-flow baseline and suppressor nozzles. Volume 1: Noise source locations and extrapolation of static free-field jet noise data

    NASA Technical Reports Server (NTRS)

    Jaeck, C. L.

    1976-01-01

    A test was conducted in the Boeing Large Anechoic Chamber to determine static jet noise source locations of six baseline and suppressor nozzle models, and establish a technique for extrapolating near field data into the far field. The test covered nozzle pressure ratios from 1.44 to 2.25 and jet velocities from 412 to 594 m/s at a total temperature of 844 K.

  8. Enhanced Hydrogen Evolution Reactions on Nanostructured Cu2ZnSnS4 (CZTS) Electrocatalyst

    NASA Astrophysics Data System (ADS)

    Digraskar, Renuka V.; Mulik, Balaji B.; Walke, Pravin S.; Ghule, Anil V.; Sathe, Bhaskar R.

    2017-08-01

    A novel and facile one-step sonochemical method is used to synthesize Cu2ZnSnS4 (CZTS) nanoparticles (2.6 ± 0.4 nm) as a cathode electrocatalyst for the hydrogen evolution reaction. The detailed morphology, crystal and surface structure, and composition of the CZTS nanostructures were characterized by high-resolution transmission electron microscopy (HR-TEM), selected area electron diffraction (SAED), X-ray diffraction, Raman spectroscopy, FTIR analysis, Brunauer-Emmett-Teller (BET) surface area measurements, energy dispersive analysis, and X-ray photoelectron spectroscopy. Electrocatalytic abilities of the nanoparticles toward the hydrogen evolution reaction (HER) were verified through cyclic voltammetry (CV), linear sweep voltammetry (LSV), electrochemical impedance spectroscopy (EIS), and Tafel polarization measurements. The material shows enhanced activity at a low onset potential of 300 mV vs. RHE, reaching an exceptionally high current density of -130 mA/cm2, which exceeds that of existing non-noble-metal-based cathodes. Further results show a Tafel slope of 85 mV/dec, an exchange current density of 882 mA/cm2, excellent stability (> 500 cycles) and a low charge transfer resistance. These sonochemically fabricated CZTS nanoparticles could significantly reduce cell cost and simplify the preparation process relative to existing high-efficiency Pt and other noble-metal cathode electrocatalysts.

  9. Methods to determine the growth domain in a multidimensional environmental space.

    PubMed

    Le Marc, Yvan; Pin, Carmen; Baranyi, József

    2005-04-15

    Data from a database on microbial responses to the food environment (ComBase, see www.combase.cc) were used to study the boundary of growth of several pathogens (Aeromonas hydrophila, Escherichia coli, Listeria monocytogenes, Yersinia enterocolitica). Two methods were used to evaluate the growth/no growth interface. The first one is an application of the Minimum Convex Polyhedron (MCP) introduced by Baranyi et al. [Baranyi, J., Ross, T., McMeekin, T., Roberts, T.A., 1996. The effect of parameterisation on the performance of empirical models used in Predictive Microbiology. Food Microbiol. 13, 83-91.]. The second method applies logistic regression to define the boundary of growth. The combination of these two different techniques can be a useful tool to handle the problem of extrapolation of predictive models at the growth limits.
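
    A minimal sketch of the second (logistic regression) approach is given below, assuming a toy data set of environmental conditions with observed growth/no growth outcomes; the records are invented for illustration and are not ComBase entries.

```python
# Minimal sketch: logistic regression for a growth/no-growth boundary.
# Fit P(growth) against environmental factors; the P = 0.5 contour of the
# fitted probability surface is the estimated growth boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: temperature (C), pH, water activity; target: 1 = growth observed
X = np.array([[10, 6.5, 0.99], [4, 6.0, 0.98], [25, 5.0, 0.97],
              [8, 4.5, 0.95], [30, 6.8, 0.99], [5, 5.5, 0.96],
              [15, 4.2, 0.94], [35, 7.0, 0.99], [3, 4.8, 0.95],
              [20, 6.0, 0.98]])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Probability of growth under a new combination of conditions
new = np.array([[12.0, 5.2, 0.96]])
print(f"P(growth) = {model.predict_proba(new)[0, 1]:.2f}")
```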

  10. Prediction of UT1-UTC, LOD and AAM χ3 by combination of least-squares and multivariate stochastic methods

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Kosek, Wiesław

    2008-02-01

    This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ3 time-series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ3 time-series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ3 reveal the best accuracy for LS + AR.
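
    The univariate LS + AR variant of the scheme can be sketched as follows, using a synthetic series in place of LOD or UT1-UTC data; the harmonic terms, AR order and forecast horizon are illustrative choices, and the multivariate (MAR) extension with AAM χ3 as an explanatory variable is not shown.

```python
# Minimal sketch of LS + AR prediction: least-squares fit of trend + harmonics,
# then autoregressive prediction of the LS residuals, summed for the forecast.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

t = np.arange(2000.0)                                   # days
annual, semi = 365.25, 182.625
series = (0.5 + 1e-4 * t
          + 0.3 * np.sin(2 * np.pi * t / annual)
          + 0.1 * np.sin(2 * np.pi * t / semi)
          + np.random.default_rng(1).normal(0, 0.02, t.size))

def design(tt):
    return np.column_stack([np.ones_like(tt), tt,
                            np.sin(2 * np.pi * tt / annual), np.cos(2 * np.pi * tt / annual),
                            np.sin(2 * np.pi * tt / semi),   np.cos(2 * np.pi * tt / semi)])

# 1) LS extrapolation: trend + annual + semiannual terms
coef, *_ = np.linalg.lstsq(design(t), series, rcond=None)
resid = series - design(t) @ coef

# 2) AR prediction of the LS residuals, added back to the extrapolated LS model
horizon = 100
ar_fit = AutoReg(resid, lags=30).fit()
resid_pred = ar_fit.predict(start=len(resid), end=len(resid) + horizon - 1)

t_fut = np.arange(t[-1] + 1, t[-1] + 1 + horizon)
forecast = design(t_fut) @ coef + resid_pred
print(forecast[:5])
```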

  11. Method and system for non-linear motion estimation

    NASA Technical Reports Server (NTRS)

    Lu, Ligang (Inventor)

    2011-01-01

    A method and system for extrapolating and interpolating a visual signal including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image using a non-linear model, and determining a position of a fourth pixel in a fourth image based upon the third motion vector.
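
    A heavily simplified illustration of non-linear motion extrapolation (not the patent's exact formulation) is to fit a quadratic through a pixel's positions in three frames and evaluate it at a fourth; the positions below are invented.

```python
# Minimal sketch: quadratic (non-linear) extrapolation of a pixel position.
import numpy as np

frames = np.array([0.0, 1.0, 2.0])
xs = np.array([10.0, 14.0, 20.0])     # pixel x-coordinates in frames 0..2
ys = np.array([5.0, 5.5, 6.5])        # pixel y-coordinates in frames 0..2

# Fit x(t) and y(t) with second-order polynomials in time
px = np.polyfit(frames, xs, 2)
py = np.polyfit(frames, ys, 2)

# Extrapolate to frame 3
x3, y3 = np.polyval(px, 3.0), np.polyval(py, 3.0)
print(f"predicted position in frame 3: ({x3:.1f}, {y3:.1f})")
```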

  12. Corrosion resistance of steel materials in LiCl-KCl melts

    NASA Astrophysics Data System (ADS)

    Wang, Le; Li, Bing; Shen, Miao; Li, Shi-yan; Yu, Jian-guo

    2012-10-01

    The corrosion behaviors of 304SS, 316LSS, and Q235A in LiCl-KCl melts were investigated at 450°C by Tafel curves and electrochemical impedance spectroscopy (EIS). 316LSS shows the best corrosion resistance of the three materials, with the most positive corrosion potential and the smallest corrosion current from the Tafel curves and the largest electron transfer resistance from the Nyquist plots. The results are in good agreement with the weight losses measured in 45 h static corrosion experiments. This may be attributed to the presence of Mo and Ni as alloying elements in 316LSS, which exhibit lower corrosion current densities and more positive corrosion potentials than 316LSS in the same melts.
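
    For reference, the Tafel extrapolation underlying such corrosion measurements can be sketched as follows: fit the linear anodic and cathodic branches of E versus log|i| and intersect them to estimate Ecorr and icorr. The polarization data below are synthetic, not measurements from the melts study.

```python
# Minimal sketch of Tafel extrapolation from a polarization curve.
import numpy as np

# Synthetic polarization data around Ecorr = -0.40 V, icorr = 1e-5 A/cm^2
E = np.linspace(-0.55, -0.25, 121)
ba, bc = 0.060, 0.120                     # assumed anodic / cathodic Tafel slopes, V/dec
i_corr_true, E_corr_true = 1e-5, -0.40
i_net = i_corr_true * (10 ** ((E - E_corr_true) / ba)
                       - 10 ** (-(E - E_corr_true) / bc))

# Use points well away from Ecorr (|overpotential| > ~70 mV) for the linear fits
anodic = E > E_corr_true + 0.07
cathodic = E < E_corr_true - 0.07
pa = np.polyfit(np.log10(np.abs(i_net[anodic])), E[anodic], 1)
pc = np.polyfit(np.log10(np.abs(i_net[cathodic])), E[cathodic], 1)

# Intersection of the two fitted lines gives (log10 icorr, Ecorr)
log_icorr = (pc[1] - pa[1]) / (pa[0] - pc[0])
E_corr = np.polyval(pa, log_icorr)
print(f"Ecorr ~ {E_corr:.3f} V, icorr ~ {10**log_icorr:.2e} A/cm^2")
```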

  13. Evaluation of absolute measurement using a 4π plastic scintillator for the 4πβ-γ coincidence counting method.

    PubMed

    Unno, Y; Sanami, T; Sasaki, S; Hagiwara, M; Yunoki, A

    2018-04-01

    Absolute measurement by the 4πβ-γ coincidence counting method was conducted with two photomultipliers facing each other across a plastic scintillator, with the focus on the β-ray counting efficiency. The detector was held with a through-hole-type NaI(Tl) detector. The results include the absolutely determined activity and its uncertainty, especially the component associated with the extrapolation. A comparison between the obtained and known activities showed agreement within their uncertainties. Copyright © 2017 Elsevier Ltd. All rights reserved.
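
    The efficiency-extrapolation step central to such 4πβ-γ measurements can be sketched as a simple linear extrapolation to 100% β efficiency; the count rates below are invented for illustration.

```python
# Minimal sketch: efficiency extrapolation in 4pi beta-gamma coincidence counting.
# Plot N_beta*N_gamma/N_c against (1 - eff)/eff, with eff = N_c/N_gamma, and
# extrapolate linearly to (1 - eff)/eff = 0 to estimate the source activity.
import numpy as np

# Background-corrected count rates (s^-1) at several discrimination settings
N_beta  = np.array([950.0, 930.0, 905.0, 880.0, 850.0])
N_gamma = np.array([400.0, 400.0, 400.0, 400.0, 400.0])
N_c     = np.array([378.0, 368.0, 356.0, 344.0, 330.0])

eff = N_c / N_gamma                       # beta-channel efficiency estimate
x = (1.0 - eff) / eff                     # efficiency parameter
y = N_beta * N_gamma / N_c                # tends to the activity as x -> 0

slope, intercept = np.polyfit(x, y, 1)
print(f"extrapolated activity ~ {intercept:.0f} Bq")
```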

  14. Semiempirical Theories of the Affinities of Negative Atomic Ions

    NASA Technical Reports Server (NTRS)

    Edie, John W.

    1961-01-01

    The determination of the electron affinities of negative atomic ions by means of direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences. The most recent of these extrapolations is studied by extending the method to include one more member of the isoelectronic sequence. When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p electrons. These functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method is preferable, since it requires only one experimental point for each period. The results agree within

  15. SU-G-201-05: Comparison of Different Methods for Output Verification of Eleckta Nucletron’s Valencia Skin Applicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J; Yudelev, M

    2016-06-15

    Purpose: The provided output factors for Elekta Nucletron's skin applicators are based on Monte Carlo simulations. These outputs have not been independently verified, and there is no recognized method for output verification of the vendor's applicators. The purpose of this work is to validate the outputs provided by the vendor experimentally. Methods: Using a Flexitron Ir-192 HDR unit, three experimental methods were employed to determine dose with the 30 mm diameter Valencia applicator: first, a gradient method using extrapolation ionization chamber (Far West Technology, EIC-1) measurements in solid water phantom at 3 mm SCD was used. The dose was derived based on first principles. Secondly, a combination of a parallel plate chamber (Exradin A-10) and the EIC-1 was used to determine air kerma at 3 mm SCD. The air kerma was converted to dose to water in line with TG-61 formalism by using a muen ratio and a scatter factor measured with the skin applicators. Similarly, a combination of the A-10 parallel plate chamber and gafchromic film (EBT 3) was also used. The Nk factor for the A-10 chamber was obtained through linear interpolation between ADCL supplied Nk factors for Cs-137 and M250. Results: EIC-1 measurements in solid water defined the output factor at 3 mm as 0.1343 cGy/U hr. The combination of A-10/EIC-1 and A-10/EBT3 led to output factors of 0.1383 and 0.1568 cGy/U hr, respectively. For comparison, the output recommended by the vendor is 0.1659 cGy/U hr. Conclusion: All determined dose rates were lower than the vendor supplied values. The observed discrepancy between extrapolation chamber and film methods can be ascribed to extracameral gradient effects that may not be fully accounted for by the former method.

  16. Durability predictions of adhesively bonded composite structures using accelerated characterization methods

    NASA Technical Reports Server (NTRS)

    Brinson, H. F.

    1985-01-01

    The utilization of adhesive bonding for composite structures is briefly assessed. The need for a method to determine damage initiation and propagation for such joints is outlined. Methods currently in use to analyze both adhesive joints and fiber reinforced plastics are mentioned, and it is indicated that all methods require the input of the mechanical properties of the polymeric adhesive and composite matrix material. The mechanical properties of polymers are indicated to be viscoelastic and sensitive to environmental effects. A method to analytically characterize environmentally dependent linear and nonlinear viscoelastic properties is given. It is indicated that the methodology can be used to extrapolate short term data to long term design lifetimes. That is, the method can be used for long term durability predictions. Experimental results for neat adhesive resins, polymers used as composite matrices, and unidirectional composite laminates are given. The data are fitted well with the analytical durability methodology. Finally, suggestions are outlined for the development of an analytical methodology for the durability predictions of adhesively bonded composite structures.

  17. SAR/QSAR methods in public health practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demchuk, Eugene, E-mail: edemchuk@cdc.gov; Ruiz, Patricia; Chou, Selene

    2011-07-15

    Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.

  18. Statistical atlas based extrapolation of CT data

    NASA Astrophysics Data System (ADS)

    Chintalapani, Gouthami; Murphy, Ryan; Armiger, Robert S.; Lepisto, Jyri; Otake, Yoshito; Sugano, Nobuhiko; Taylor, Russell H.; Armand, Mehran

    2010-02-01

    We present a framework to estimate the missing anatomical details from a partial CT scan with the help of statistical shape models. The motivating application is periacetabular osteotomy (PAO), a technique for treating developmental hip dysplasia, an abnormal condition of the hip socket that, if untreated, may lead to osteoarthritis. The common goals of PAO are to reduce pain, joint subluxation and improve contact pressure distribution by increasing the coverage of the femoral head by the hip socket. While current diagnosis and planning is based on radiological measurements, because of significant structural variations in dysplastic hips, a computer-assisted geometrical and biomechanical planning based on CT data is desirable to help the surgeon achieve optimal joint realignments. Most of the patients undergoing PAO are young females, hence it is usually desirable to minimize the radiation dose by scanning only the joint portion of the hip anatomy. These partial scans, however, do not provide enough information for biomechanical analysis due to missing iliac region. A statistical shape model of full pelvis anatomy is constructed from a database of CT scans. The partial volume is first aligned with the statistical atlas using an iterative affine registration, followed by a deformable registration step and the missing information is inferred from the atlas. The atlas inferences are further enhanced by the use of X-ray images of the patient, which are very common in an osteotomy procedure. The proposed method is validated with a leave-one-out analysis method. Osteotomy cuts are simulated and the effect of atlas predicted models on the actual procedure is evaluated.

  19. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images

    PubMed Central

    Sparks, Rachel; Madabhushi, Anant

    2016-01-01

    Content-based image retrieval (CBIR) retrieves database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, partial class label, into a ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score which enables discrimination of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision recall curve (AUPRC) of 0.53 ± 0.03; by comparison, CBIR with Principal Component Analysis (PCA) to learn a low dimensional space yielded an AUPRC of 0.44 ± 0.01. PMID:27264985

  20. Emissions of sulfur gases from marine and freshwater wetlands of the Florida Everglades: Rates and extrapolation using remote sensing

    NASA Technical Reports Server (NTRS)

    Hines, Mark E.; Pelletier, Ramona E.; Crill, Patrick M.

    1992-01-01

    Rates of emissions of the biogenic sulfur (S) gases carbonyl sulfide (COS), methyl mercaptan (MSH), dimethyl sulfide (DMS), and carbon disulfide (CS2) were measured in a variety of marine and freshwater wetland habitats in the Florida Everglades during a short period in October using dynamic chambers, cryotrapping techniques, and gas chromatography. The most rapid emissions, of greater than 500 nmol m-2 h-1, occurred in red mangrove-dominated sites that were adjacent to open seawater and contained numerous crab burrows. Poorly drained red mangrove sites exhibited lower fluxes of approximately 60 nmol m-2 h-1, which were similar to fluxes from the black mangrove areas which dominated the marine-influenced wetland sites in the Everglades. DMS was the dominant organo-S gas emitted, especially in the freshwater areas. Spectral data from a scene from the Landsat thematic mapper were used to map habitats in the Everglades. Six vegetation categories were delineated using geographical information system software and S gas emissions were extrapolated for the entire Everglades National Park. The black mangrove-dominated areas accounted for the largest portion of S gas emissions to the area. The large areal extent of the saw grass communities (42 percent) accounted for approximately 24 percent of the total S emissions.

  1. In vivo fascicle length measurements via B-mode ultrasound imaging with single vs dual transducer arrangements.

    PubMed

    Brennan, Scott F; Cresswell, Andrew G; Farris, Dominic J; Lichtwark, Glen A

    2017-11-07

    Ultrasonography is a useful technique to study muscle contractions in vivo; however, larger muscles like vastus lateralis may be difficult to visualise with smaller, commonly used transducers. Fascicle length is often estimated using linear trigonometry to extrapolate fascicle length to regions where the fascicle is not visible. However, this approach has not been compared to measurements made with a larger field of view for dynamic muscle contractions. Here we compared two different single-transducer extrapolation methods for measuring VL muscle fascicle length with a direct measurement made using two synchronised, in-series transducers. The first method used pennation angle and muscle thickness to extrapolate fascicle length outside the image (extrapolate method). The second method determined fascicle length based on the extrapolated intercept between a fascicle and the aponeurosis (intercept method). Nine participants performed maximal effort, isometric, knee extension contractions on a dynamometer at 10° increments from 50 to 100° of knee flexion. Fascicle length and torque were simultaneously recorded for offline analysis. The dual transducer method showed similar patterns of fascicle length change (overall mean coefficient of multiple correlation was 0.76 and 0.71 compared to extrapolate and intercept methods respectively), but reached different absolute lengths during the contractions. This had the effect of producing force-length curves of the same shape, but each curve was shifted in terms of absolute length. We concluded that dual transducers are beneficial for studies that examine absolute fascicle lengths, whereas either of the single transducer methods may produce similar results for normalised length changes, and repeated measures experimental designs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
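
    The quadratic loss penalty algorithm referred to above can be sketched as a sequence of unconstrained subproblems with a decreasing penalty parameter; the toy problem and the plain warm start below are illustrative only, and the extrapolation/Newton refinements of the penalty trajectory discussed in the paper are not implemented.

```python
# Minimal sketch of the quadratic-loss penalty approach for an equality-
# constrained problem: minimize f(x) subject to c(x) = 0 via a sequence of
# penalized subproblems, warm-starting each solve from the previous minimizer.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2      # objective
c = lambda x: x[0] + x[1] - 1.0                           # equality constraint c(x) = 0

def penalized(x, mu):
    # Quadratic loss penalty: f(x) + ||c(x)||^2 / (2*mu), mu -> 0
    return f(x) + c(x) ** 2 / (2.0 * mu)

x = np.array([0.0, 0.0])
for mu in [1.0, 0.1, 0.01, 0.001]:
    res = minimize(penalized, x, args=(mu,), method="BFGS")
    x = res.x                                             # warm start for the next mu
    print(f"mu = {mu:7.3f}  x = {x.round(4)}  |c(x)| = {abs(c(x)):.2e}")

# The exact solution of this toy problem is x = (1.0, 0.0); the penalty
# iterates approach it as mu decreases.
```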

  3. Statistical Bayesian method for reliability evaluation based on ADT data

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, and the latter is the more popular. However, limitations such as an imprecise solution process and inaccurate estimation of the degradation ratio still exist, which may affect the accuracy of the acceleration model and the extrapolated value. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and to solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with updating and iteration of the estimated values; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.

  4. A Spatial Method to Calculate Small-Scale Fisheries Extent

    NASA Astrophysics Data System (ADS)

    Johnson, A. F.; Moreno-Báez, M.; Giron-Nava, A.; Corominas, J.; Erisman, B.; Ezcurra, E.; Aburto-Oropeza, O.

    2016-02-01

    Despite global catch per unit effort having redoubled since the 1950s, the global fishing fleet is estimated to be twice the size that the oceans can sustainably support. In order to gauge the collateral impacts of fishing intensity, we must be able to estimate the spatial extent and number of fishing vessels in the oceans. Methods that currently exist are built around electronic tracking and log book systems and generally focus on industrial fisheries. Spatial extent for small-scale fisheries therefore remains elusive for many small-scale fishing fleets, even though these fisheries land the same biomass for human consumption as industrial fisheries. Current methods are data-intensive and require extensive extrapolation when estimated across large spatial scales. We present an accessible, spatial method of calculating the extent of small-scale fisheries based on two simple measures that are available, or at least easily estimable, in even the most data poor fisheries: the number of boats and the local coastal human population. We demonstrate this method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This method provides an important first step towards estimating the fishing extent of the small-scale fleet, globally.

  5. Superhydrophobic Copper Surfaces with Anticorrosion Properties Fabricated by Solventless CVD Methods.

    PubMed

    Vilaró, Ignasi; Yagüe, Jose L; Borrós, Salvador

    2017-01-11

    Due to continuous miniaturization and the increasing number of electrical components in electronics, copper interconnections have become critical for the design of 3D integrated circuits. However, corrosion attack on the copper metal can affect the electronic performance of the material. Superhydrophobic coatings are a commonly used strategy to prevent this undesired effect. In this work, a solventless two-step process was developed to fabricate superhydrophobic copper surfaces using chemical vapor deposition (CVD) methods. The superhydrophobic state was achieved through the design of a hierarchical structure combining micro-/nanoscale domains. In the first step, O2- and Ar-plasma etching was performed on the copper substrate to generate microroughness. Afterward, a conformal copolymer, 1H,1H,2H,2H-perfluorodecyl acrylate-ethylene glycol diacrylate [p(PFDA-co-EGDA)], was deposited on top of the metal via initiated CVD (iCVD) to lower the surface energy. The copolymer topography exhibited a very characteristic and unique nanoworm-like structure. The combination of the nanofeatures of the polymer with the microroughness of the copper led to achievement of the superhydrophobic state. AFM, SEM, and XPS were used to characterize the evolution in topography and chemical composition during the CVD processes. The modified copper showed water contact angles as high as 163° and hysteresis as low as 1°. The coating withstood exposure to aggressive media for extended periods of time. Tafel analysis was used to compare the corrosion rates between bare and modified copper. Results indicated that iCVD-coated copper corrodes 3 orders of magnitude slower than untreated copper. The surface modification process yielded repeatable and robust superhydrophobic coatings with remarkable anticorrosion properties.

  6. Communication — Modeling polymer-electrolyte fuel-cell agglomerates with double-trap kinetics

    DOE PAGES

    Pant, Lalit M.; Weber, Adam Z.

    2017-04-14

    A new semi-analytical agglomerate model is presented for polymer-electrolyte fuel-cell cathodes. The model uses double-trap kinetics for the oxygen-reduction reaction, which can capture the observed potential-dependent coverage and Tafel-slope changes. An iterative semi-analytical approach is used to obtain reaction rate constants from the double-trap kinetics, oxygen concentration at the agglomerate surface, and overall agglomerate reaction rate. The analytical method can predict reaction rates within 2% of the numerically simulated values for a wide range of oxygen concentrations, overpotentials, and agglomerate sizes, while saving simulation time compared to a fully numerical approach.

  7. Interim methods for development of inhalation reference concentrations. Draft report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackburn, K.; Dourson, M.; Erdreich, L.

    1990-08-01

    An inhalation reference concentration (RfC) is an estimate of continuous inhalation exposure over a human lifetime that is unlikely to pose significant risk of adverse noncancer health effects and serves as a benchmark value for assisting in risk management decisions. Derivation of an RfC involves dose-response assessment of animal data to determine the exposure levels at which no significant increase in the frequency or severity of adverse effects between the exposed population and its appropriate control exists. The assessment requires an interspecies dose extrapolation from a no-observed-adverse-effect level (NOAEL) exposure concentration of an animal to a human equivalent NOAEL (NOAEL(HBC)). The RfC is derived from the NOAEL(HBC) by the application of generally order-of-magnitude uncertainty factors. Intermittent exposure scenarios in animals are extrapolated to chronic continuous human exposures. Relationships between external exposures and internal doses depend upon complex simultaneous and consecutive processes of absorption, distribution, metabolism, storage, detoxification, and elimination. To estimate NOAEL(HBC)s when chemical-specific physiologically-based pharmacokinetic models are not available, a dosimetric extrapolation procedure based on anatomical and physiological parameters of the exposed human and animal and the physical parameters of the toxic chemical has been developed which gives equivalent or more conservative exposure concentration values than those that would be obtained with a PB-PK model.

  8. COnstrained Data Extrapolation (CODE): A new approach for high definition vascular imaging from low resolution data.

    PubMed

    Song, Yang; Hamtaei, Ehsan; Sethi, Sean K; Yang, Guang; Xie, Haibin; Mark Haacke, E

    2017-12-01

    To introduce a new approach to reconstruct high definition vascular images using COnstrained Data Extrapolation (CODE) and evaluate its capability in estimating vessel area and stenosis. CODE is based on the constraint that the full width half maximum of a vessel can be accurately estimated and, since it represents the best estimate for the width of the object, higher k-space data can be generated from this information. To demonstrate the potential of extracting high definition vessel edges using low resolution data, both simulated and human data were analyzed to better visualize the vessels and to quantify both area and stenosis measurements. The results from CODE using one-fourth of the fully sampled k-space data were compared with a compressed sensing (CS) reconstruction approach using the same total amount of data but spread out between the center of k-space and the outer portions of the original k-space to accelerate data acquisition by a factor of four. For a sufficiently high signal-to-noise ratio (SNR) such as 16 (8), we found that objects as small as 3 voxels in the 25% under-sampled data (6 voxels when zero-filled) could be used for CODE and CS and provide an estimate of area with an error <5% (10%). For estimating up to a 70% stenosis with an SNR of 4, CODE was found to be more robust to noise than CS, having a smaller variance albeit a larger bias. Reconstruction times were >200 (30) times faster for CODE compared to CS in the simulated (human) data. CODE was capable of producing sharp sub-voxel edges and accurately estimating stenosis to within 5% for clinically relevant studies of vessels with a width of at least 3 pixels in the low resolution images. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Discovery of pyridine-based agrochemicals by using Intermediate Derivatization Methods.

    PubMed

    Guan, Ai-Ying; Liu, Chang-Ling; Sun, Xu-Feng; Xie, Yong; Wang, Ming-An

    2016-02-01

    Pyridine-based compounds have been playing a crucial role as agrochemicals or pesticides including fungicides, insecticides/acaricides and herbicides, etc. Since most of the agrochemicals listed in the Pesticide Manual were discovered through screening programs that relied on trial-and-error testing and new agrochemical discovery is not benefiting as much from the in silico new chemical compound identification/discovery techniques used in pharmaceutical research, it has become more important to find new methods to enhance the efficiency of discovering novel lead compounds in the agrochemical field to shorten the time of research phases in order to meet changing market requirements. In this review, we selected 18 representative known agrochemicals containing a pyridine moiety and extrapolate their discovery from the perspective of Intermediate Derivatization Methods in the hope that this approach will have greater appeal to researchers engaged in the discovery of agrochemicals and/or pharmaceuticals. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Efficacy and Safety Extrapolation Analyses for Atomoxetine in Young Children with Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Upadhyaya, Himanshu; Kratochvil, Christopher; Ghuman, Jaswinder; Camporeale, Angelo; Lipsius, Sarah; D'Souza, Deborah; Tanaka, Yoko

    2015-12-01

    This extrapolation analysis qualitatively compared the efficacy and safety profile of atomoxetine from Lilly clinical trial data in 6-7-year-old patients with attention-deficit/hyperactivity disorder (ADHD) with that of published literature in 4-5-year-old patients with ADHD (two open-label [4-5-year-old patients] and one placebo-controlled study [5-year-old patients]). The main efficacy analyses included placebo-controlled Lilly data and the placebo-controlled external study (5-year-old patients) data. The primary efficacy variables used in these studies were the ADHD Rating Scale-IV Parent Version, Investigator Administered (ADHD-RS-IV-Parent:Inv) total score, or the Swanson, Nolan and Pelham (SNAP-IV) scale score. Safety analyses included treatment-emergent adverse events (TEAEs) and vital signs. Descriptive statistics (means, percentages) are presented. Acute atomoxetine treatment improved core ADHD symptoms in both 6-7-year-old patients (n=565) and 5-year-old patients (n=37) (treatment effect: -10.16 and -7.42). In an analysis of placebo-controlled groups, the mean duration of exposure to atomoxetine was ∼ 7 weeks for 6-7-year-old patients and 9 weeks for 5-year-old patients. Decreased appetite was the most common TEAE in atomoxetine-treated patients. The TEAEs observed at a higher rate in 5-year-old versus 6-7-year-old patients were irritability (36.8% vs. 3.6%) and other mood-related events (6.9% each vs. <3.0%). Blood pressure and pulse increased in both 4-5-year-old patients and 6-7-year-old patients, whereas a weight increase was seen only in the 6-7-year-old patients. Although limited by the small sample size of the external studies, these analyses suggest that in 5-year-old patients with ADHD, atomoxetine may improve ADHD symptoms, but possibly to a lesser extent than in older children, with some adverse events occurring at a higher rate in 5-year-old patients.

  11. Mice, myths, and men

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fry, R.J.M.

    The author discusses some examples of how different experimental animal systems have helped to answer questions about the effects of radiation, in particular, carcinogenesis, and to indicate how the new experimental model systems promise an even more exciting future. Entwined in these themes will be observations about susceptibility and extrapolation across species. The hope of developing acceptable methods of extrapolation of estimates of the risk of radiogenic cancer increases as molecular biology reveals the trail of remarkable similarities in the genetic control of many functions common to many species. A major concern about even attempting to extrapolate estimates of risks of radiation-induced cancer across species has been that the mechanisms of carcinogenesis were so different among different species that it would negate the validity of extrapolation. The more that has become known about the genes involved in cancer, especially those related to the initial events in carcinogenesis, the more the reasons for considering methods of extrapolation across species have increased.

  12. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    1999-01-01

    This paper reviews three recent works on the numerical methods to integrate ordinary differential equations (ODE), which are specially designed for parallel, vector, and/or multi-processor-unit (PU) computers. The first is the Picard-Chebyshev method (Fukushima, 1997a). It obtains a global solution of the ODE in the form of a Chebyshev polynomial of large (> 1000) degree by applying the Picard iteration repeatedly. The iteration converges for smooth problems and/or perturbed dynamics. The method runs around 100-1000 times faster in the vector mode than in the scalar mode of a certain computer with vector processors (Fukushima, 1997b). The second is a parallelization of a symplectic integrator (Saha et al., 1997). It regards the implicit midpoint rules covering thousands of timesteps as large-scale nonlinear equations and solves them by the fixed-point iteration. The method is applicable to Hamiltonian systems and is expected to lead to an acceleration factor of around 50 in parallel computers with more than 1000 PUs. The last is a parallelization of the extrapolation method (Ito and Fukushima, 1997). It performs trial integrations in parallel. The trial integrations are further accelerated by balancing the computational load among PUs by the technique of folding. The method is all-purpose and achieves an acceleration factor of around 3.5 by using several PUs. Finally, we give a perspective on the parallelization of some implicit integrators which require multiple corrections in solving implicit formulas, like the implicit Hermitian integrators (Makino and Aarseth, 1992), (Hut et al., 1995) or the implicit symmetric multistep methods (Fukushima, 1998), (Fukushima, 1999).
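
    The parallel extrapolation approach summarized above rests on running several trial integrations of the same step with different substep counts and combining them; a minimal serial sketch of that idea, using the Gragg modified midpoint rule and polynomial extrapolation to zero step size, is given below. It is not the implementation of Ito and Fukushima (1997); the test problem, step counts, and function names are illustrative assumptions, and the independent trial integrations are exactly the part that could be distributed across processing units.

```python
import numpy as np

def modified_midpoint(f, t0, y0, H, n):
    """One Gragg modified-midpoint pass over [t0, t0+H] using n substeps."""
    h = H / n
    z0 = np.asarray(y0, dtype=float)
    z1 = z0 + h * f(t0, z0)
    for i in range(1, n):
        z0, z1 = z1, z0 + 2.0 * h * f(t0 + i * h, z1)
    # Gragg smoothing step
    return 0.5 * (z0 + z1 + h * f(t0 + H, z1))

def extrapolated_step(f, t0, y0, H, step_counts=(2, 4, 6, 8)):
    """Trial integrations with increasing substep counts (parallelizable),
    then polynomial extrapolation of the results to h -> 0 in powers of h^2."""
    hs2 = np.array([(H / n) ** 2 for n in step_counts])
    trials = np.array([modified_midpoint(f, t0, y0, H, n) for n in step_counts])
    # Fit each solution component as a polynomial in h^2 and evaluate it at h^2 = 0.
    return np.array([np.polyval(np.polyfit(hs2, trials[:, k], len(step_counts) - 1), 0.0)
                     for k in range(trials.shape[1])])

# Example: harmonic oscillator y'' = -y, written as a first-order system.
f = lambda t, y: np.array([y[1], -y[0]])
y_H = extrapolated_step(f, 0.0, np.array([1.0, 0.0]), H=1.0)
print(y_H, np.array([np.cos(1.0), -np.sin(1.0)]))   # compare with the exact solution
```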

  13. Sediment rating curve & Co. - a contest of prediction methods

    NASA Astrophysics Data System (ADS)

    Francke, T.; Zimmermann, A.

    2012-04-01

    In spite of the recent technological progress in sediment monitoring, often the calculation of sediment yield (SSY) still relies on intermittent measurements because of the use of historic records, instrument-failure in continuous recording or financial constraints. Therefore, available measurements are usually inter- and even extrapolated using the sediment rating curve approach, which uses continuously available discharge data to predict sediment concentrations. Extending this idea by further aspects like the inclusion of other predictors (e.g. rainfall, discharge-characteristics, etc.), or the consideration of prediction uncertainty led to a variety of new methods. Now, with approaches such as Fuzzy Logic, Artificial Neural Networks, Tree-based regression, GLMs, etc., the user is left to decide which method to apply. Trying multiple approaches is usually not an option, as considerable effort and expertise may be needed for their application. To establish a helpful guideline in selecting the most appropriate method for SSY-computation, we initiated a study to compare and rank available methods. Depending on problem attributes like hydrological and sediment regime, number of samples, sampling scheme, and availability of ancillary predictors, the performance of different methods is compared. Our expertise allowed us to "register" Random Forests, Quantile Regression Forests and GLMs for the contest. To include many different methods and ensure their sophisticated use we invite scientists that are willing to benchmark their favourite method(s) with us. The more diverse the participating methods are, the more exciting the contest will be.
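
    As a point of reference for the baseline the newer methods compete against, a minimal sketch of the classical sediment rating curve (a power law fitted between sampled discharge and concentration on log-log axes, then applied to the continuous discharge record and integrated to a yield) is shown below; the variable names, units, and the Duan smearing bias correction are illustrative assumptions rather than details taken from the study.

```python
import numpy as np

def fit_rating_curve(Q_sampled, C_sampled):
    """Fit log10(C) = log10(a) + b*log10(Q) by least squares."""
    b, log_a = np.polyfit(np.log10(Q_sampled), np.log10(C_sampled), 1)
    # Duan smearing factor to correct the bias introduced by back-transformation
    residuals = np.log10(C_sampled) - (log_a + b * np.log10(Q_sampled))
    smearing = np.mean(10.0 ** residuals)
    return log_a, b, smearing

def predict_ssy(Q_continuous, dt_seconds, log_a, b, smearing):
    """Predict concentration from the continuous discharge record and
    integrate the sediment flux to a total suspended sediment yield."""
    C = smearing * 10.0 ** (log_a + b * np.log10(Q_continuous))  # e.g. mg/L
    flux = C * Q_continuous                                      # mg/s if Q is in L/s
    return np.sum(flux) * dt_seconds                             # total mass in mg

# Example: intermittent samples plus a continuous 15-min discharge series
Q_s = np.array([0.5, 1.2, 3.4, 8.0, 15.0])       # sampled discharge (illustrative)
C_s = np.array([20.0, 45.0, 110.0, 260.0, 480.0])  # sampled concentration
log_a, b, s = fit_rating_curve(Q_s, C_s)
Q_cont = np.random.lognormal(mean=0.5, sigma=0.8, size=4 * 24 * 30)
print(predict_ssy(Q_cont, 900.0, log_a, b, s))
```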

  14. Extrapolations and prognostications

    NASA Astrophysics Data System (ADS)

    Swartz, Clifford E.

    2000-01-01

    It's that time of millennium again when we look to the past and prophesy about the future. My own family memories don't go much past the 20th century, although my wife is a direct descendant of a woman convicted of witchcraft in Salem. Not recently, of course. A century ago, my mother was going to school in a one-room schoolhouse with student desks in rows and a blackboard up front for the teacher. In high school she studied physics but there were no student laboratory exercises and only a few demonstrations by the teacher. She didn't like it.

  15. Quantitative trait locus gene mapping: a new method for locating alcohol response genes.

    PubMed

    Crabbe, J C

    1996-01-01

    Alcoholism is a multigenic trait with important non-genetic determinants. Studies with genetic animal models of susceptibility to several of alcohol's effects suggest that several genes contributing modest effects on susceptibility (Quantitative Trait Loci, or QTLs) are important. A new technique of QTL gene mapping has allowed the identification of the location in mouse genome of several such QTLs. The method is described, and the locations of QTLs affecting the acute alcohol withdrawal reaction are described as an example of the method. Verification of these QTLs in ancillary studies is described and the strengths, limitations, and future directions to be pursued are discussed. QTL mapping is a promising method for identifying genes in rodents with the hope of directly extrapolating the results to the human genome. This review is based on a paper presented at the First International Congress of the Latin American Society for Biomedical Research on Alcoholism, Santiago, Chile, November 1994.

  16. Are rapid population estimates accurate? A field trial of two different assessment methods.

    PubMed

    Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent

    2006-09-01

    Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
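
    A minimal sketch of the Quadrat-style extrapolation described above (the mean population of sampled blocks scaled to the full site area, with a simple normal-approximation confidence interval) is given below; the block size, site area, and synthetic counts are illustrative assumptions.

```python
import numpy as np

def quadrat_estimate(counts_per_block, block_area_m2, site_area_m2, z=1.96):
    """Extrapolate the average population of sampled blocks to the full site."""
    counts = np.asarray(counts_per_block, dtype=float)
    n_blocks_total = site_area_m2 / block_area_m2
    mean, sd = counts.mean(), counts.std(ddof=1)
    estimate = mean * n_blocks_total
    # Normal-approximation CI on the mean block count, scaled up to the site
    half_width = z * sd / np.sqrt(len(counts)) * n_blocks_total
    return estimate, (estimate - half_width, estimate + half_width)

# Example: 30 blocks of 25 m x 25 m sampled in a 40-hectare site
counts = np.random.poisson(lam=15, size=30)
est, ci = quadrat_estimate(counts, block_area_m2=625.0, site_area_m2=400_000.0)
print(round(est), ci)
```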

  17. A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Fan, Liang-Shih

    2014-07-01

    A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, development including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with further improvement with the implementation of the high-order Runge-Kutta schemes in the coupled fluid-particle interaction. The major challenge to implement high-order Runge-Kutta schemes in the LBM is that the flow information such as density and velocity cannot be directly obtained at a fractional time step from the LBM since the LBM only provides the flow information at an integer time step. This challenge can be, however, overcome as given in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid-particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge-Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with the second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve the second-order accuracy are found to be around 0.30 and -0.47 times of the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotational spheres indicate that the lift force produced by the Magnus effect can be very significant in view of the magnitude of the drag force when the practical rotating speed of the spheres is encountered. This finding

  18. Track propagation methods for the correlation of charged tracks with clusters in the calorimeter of the bar PANDA experiment

    NASA Astrophysics Data System (ADS)

    Nasawasd, T.; Simantathammakul, T.; Herold, C.; Stockmanns, T.; Ritman, J.; Kobdaj, C.

    2018-02-01

    To classify clusters of hits in the electromagnetic calorimeter (EMC) of bar PANDA (antiProton ANnihilation at DArmstadt), one has to match these EMC clusters with tracks of charged particles reconstructed from hits in the tracking system. Therefore the tracks are propagated to the surface of the EMC and associated with EMC clusters which are nearby and below a cut parameter. In this work, we propose a helix propagator to extrapolate the track from the Straw Tube Tracker (STT) to the inner surface of the EMC instead of the GEANE propagator which is already embedded within the PandaRoot computational framework. The results for both propagation methods show a similar quality, with a 30% gain in CPU time when using the helix propagator. We use Monte-Carlo truth information to compare the particle ID of the EMC clusters with the ID of the extrapolated points, thus deciding upon the correctness of the matches. By varying the cut parameter as a function of transverse momentum and particle type, our simulations show that the purity can be increased by 3-5% compared to the default value which is a constant cut in the bar PANDA simulation framework PandaRoot.

  19. Power maps and wavefront for progressive addition lenses in eyeglass frames.

    PubMed

    Mejía, Yobani; Mora, David A; Díaz, Daniel E

    2014-10-01

    To evaluate a method for measuring the cylinder, sphere, and wavefront of progressive addition lenses (PALs) in eyeglass frames. We examine the contour maps of cylinder, sphere, and wavefront of a PAL assembled in an eyeglass frame using an optical system based on a Hartmann test. To reduce the data noise, particularly in the border of the eyeglass frame, we implement a method based on the Fourier analysis to extrapolate spots outside the eyeglass frame. The spots are extrapolated up to a circular pupil that circumscribes the eyeglass frame and compared with data obtained from a circular uncut PAL. By using the Fourier analysis to extrapolate spots outside the eyeglass frame, we can remove the edge artifacts of the PAL within its frame and implement the modal method to fit wavefront data with Zernike polynomials within a circular aperture that circumscribes the frame. The extrapolated modal maps from framed PALs accurately reflect maps obtained from uncut PALs and provide smoothed maps for the cylinder and sphere inside the eyeglass frame. The proposed method for extrapolating spots outside the eyeglass frame removes edge artifacts of the contour maps (wavefront, cylinder, and sphere), which may be useful to facilitate measurements such as the length and width of the progressive corridor for a PAL in its frame. The method can be applied to any shape of eyeglass frame.

  20. Microstructure, Phase Occurrence, and Corrosion Behavior of As-Solidified and As-Annealed Al-Pd Alloys

    NASA Astrophysics Data System (ADS)

    Ďuriška, Libor; Palcut, Marián; Špoták, Martin; Černičková, Ivona; Gondek, Ján; Priputen, Pavol; Čička, Roman; Janičkovič, Dušan; Janovec, Jozef

    2018-02-01

    In the present work, we studied the microstructure, phase constitution, and corrosion performance of Al88Pd12, Al77Pd23, Al72Pd28, and Al67Pd33 alloys (metal concentrations are given in at.%). The alloys were prepared by repeated arc melting of Al and Pd granules in an argon atmosphere. The as-solidified samples were further annealed at 700 °C for 500 h. The microstructure and phase constitution of the as-solidified and as-annealed alloys were studied by scanning electron microscopy, energy-dispersive x-ray spectroscopy, and x-ray diffraction. The alloys were found to consist of (Al), ε_n (Al3Pd), and δ (Al3Pd2) in various fractions. The corrosion testing of the alloys was performed in aqueous NaCl (0.6 M) using a standard 3-electrode cell monitored by a potentiostat. The corrosion current densities and corrosion potentials were determined by Tafel extrapolation. The corrosion potentials of the alloys were found between -763 and -841 mV versus Ag/AgCl. An active alloy dissolution has been observed, and it has been found that (Al) was excavated, whereas Al in ε_n was de-alloyed. The effects of bulk chemical composition, phase occurrence and microstructure on the corrosion behavior are evaluated. The local nobilities of ε_n and δ are discussed. Finally, conclusions about the alloys' corrosion resistance in saline solutions are provided.
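
    Since the corrosion parameters reported here were obtained by Tafel extrapolation, a minimal sketch of that procedure (straight-line fits to the anodic and cathodic branches of log|i| versus E, extrapolated to their intersection to give Ecorr and icorr) is shown below; the fitting windows and the synthetic polarization curve are illustrative assumptions, not the authors' data.

```python
import numpy as np

def tafel_extrapolation(E, i, ecorr_guess, window=(0.05, 0.15)):
    """Fit Tafel lines on both branches over |E - Ecorr| within `window` (V)
    and extrapolate them to their intersection (Ecorr, icorr)."""
    E, i = np.asarray(E), np.asarray(i)
    eta = E - ecorr_guess
    fits = {}
    for name, mask in (("anodic", (eta > window[0]) & (eta < window[1])),
                       ("cathodic", (eta < -window[0]) & (eta > -window[1]))):
        slope, intercept = np.polyfit(E[mask], np.log10(np.abs(i[mask])), 1)
        fits[name] = (slope, intercept)
    (sa, ia), (sc, ic) = fits["anodic"], fits["cathodic"]
    E_corr = (ic - ia) / (sa - sc)          # intersection of the two straight lines
    i_corr = 10.0 ** (sa * E_corr + ia)     # same current units as the input data
    return E_corr, i_corr, 1000.0 / sa, -1000.0 / sc   # plus Tafel slopes in mV/decade

# Synthetic Butler-Volmer-like polarization data for a quick check
E = np.linspace(-0.9, -0.5, 400)                     # V vs. reference (assumed)
E0, i0, ba, bc = -0.7, 1e-6, 0.060, 0.120            # assumed "true" values
i = i0 * (10 ** ((E - E0) / ba) - 10 ** (-(E - E0) / bc))
print(tafel_extrapolation(E, i, ecorr_guess=-0.7))
```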

  1. Determination of the liquidus temperature of tin using the heat pulse-based melting and comparison with traditional methods

    NASA Astrophysics Data System (ADS)

    Joung, Wukchul; Park, Jihye; Pearce, Jonathan V.

    2018-06-01

    In this work, the liquidus temperature of tin was determined by melting the sample using the pressure-controlled loop heat pipe. Square wave-type pressure steps generated periodic 0.7 °C temperature steps in the isothermal region in the vicinity of the tin sample, and the tin was melted with controllable heat pulses from the generated temperature changes. The melting temperatures at specific melted fractions were measured, and they were extrapolated to the melted fraction of unity to determine the liquidus temperature of tin. To investigate the influence of the impurity distribution on the melting behavior, a molten tin sample was solidified by an outward slow freezing or by quenching to segregate the impurities inside the sample with concentrations increasing outwards or to spread the impurities uniformly, respectively. The measured melting temperatures followed the local solidus temperature variations well in the case of the segregated sample and stayed near the solidus temperature in the quenched sample due to the microscopic melting behavior. The extrapolated melting temperatures of the segregated and quenched samples were 0.95 mK and 0.49 mK higher than the outside-nucleated freezing temperature of tin (with uncertainties of 0.15 mK and 0.16 mK, at approximately 95% level of confidence), respectively. The extrapolated melting temperature of the segregated sample was supposed to be a closer approximation to the liquidus temperature of tin, whereas the quenched sample yielded the possibility of a misleading extrapolation to the solidus temperature. Therefore, the determination of the liquidus temperature could result in different extrapolated melting temperatures depending on the way the impurities were distributed within the sample, which has implications for the contemporary methodology for realizing temperature fixed points of the International Temperature Scale of 1990 (ITS-90).
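
    The extrapolation step described above can be sketched as a simple fit of the measured melting temperatures against the melted fraction F, evaluated at F = 1. The linear-in-F form used here is purely an illustrative assumption; as the abstract notes, the appropriate functional form depends on how the impurities are distributed within the sample, and a misleading extrapolation toward the solidus is possible for a quenched sample.

```python
import numpy as np

def extrapolate_liquidus(melted_fraction, T_melt_mK, f_min=0.5):
    """Fit T(F) over the upper part of the melting range and evaluate at F = 1.
    A first-order polynomial in F is assumed here purely for illustration."""
    F = np.asarray(melted_fraction, dtype=float)
    T = np.asarray(T_melt_mK, dtype=float)
    mask = F >= f_min
    coeffs = np.polyfit(F[mask], T[mask], 1)
    return np.polyval(coeffs, 1.0)

# Hypothetical melting-plateau data (temperatures in mK relative to a reference)
F = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
T = np.array([-2.1, -1.6, -1.2, -0.9, -0.6, -0.35, -0.15])
print(extrapolate_liquidus(F, T))   # estimated liquidus offset at F = 1
```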

  2. Kinetics of nickel electrodeposition from low electrolyte concentration and at a narrow interelectrode gap

    NASA Astrophysics Data System (ADS)

    Widayatno, Tri

    2015-12-01

    Electrodeposition of nickel onto copper in a system with a low Ni2+ concentration and a narrow interelectrode gap has been carried out. This electrochemical system was required for maskless pattern transfer through electroplating (Enface technique). The kinetics of the electrochemical reaction of nickel are relatively slow, and such an electrochemical system had not previously been used in this technology. Studying the kinetics of the electrochemical reaction of nickel in such a system is essential because the quality of the electrodeposited nickel is affected by the kinetics. Analytical and graphical methods were utilised to determine the kinetic parameters. The kinetic model was approximated by the Butler-Volmer and j-η equations. Kinetic parameters such as the exchange current density (j0) and the charge transfer coefficient (α) were also graphically determined using the plot of η vs. log|j| known as the Tafel plot. Polarisation data for an unstirred 0.19 M nickel sulfamate solution, at a 0.5 mV/s scan rate in an RDE system, were used. The results indicate that both methods are fairly accurate. For the analytical method, the Tafel slope, the exchange current density, and the charge transfer coefficient were found to be 149 mV/dec, 1.60 × 10-4 mA/cm2, and 0.39, respectively, whilst for the graphical method they were 159 mV/dec, 3.16 × 10-4 mA/cm2, and 0.37. The kinetic parameters in the current study were also compared to those in the literature. Significant differences were observed, which might be due to the effects of electrolyte composition and concentration, operating temperature, and pH, leading to different reaction mechanisms. However, the results obtained in this work are in the range of acceptable values. These kinetic parameters will then be used in further study of nickel deposition by modelling and simulation.
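
    A minimal sketch of the graphical determination described above (a straight-line fit on the η versus log|j| Tafel plot, giving the Tafel slope, the exchange current density from the intercept at η = 0, and the charge transfer coefficient from the slope) is shown below; the overpotential window and the synthetic cathodic data are illustrative assumptions chosen to be consistent with the quoted values.

```python
import numpy as np

R, F_CONST, T = 8.314, 96485.0, 298.15

def tafel_kinetics(eta_V, j_mA_cm2, eta_window=(0.10, 0.25)):
    """Graphical Tafel analysis for a cathodic reaction: fit |eta| = a + b*log10|j|
    over the Tafel region, then j0 from the intercept at eta = 0 and alpha from b."""
    eta = np.abs(np.asarray(eta_V, dtype=float))
    logj = np.log10(np.abs(np.asarray(j_mA_cm2, dtype=float)))
    mask = (eta > eta_window[0]) & (eta < eta_window[1])
    b, a = np.polyfit(logj[mask], eta[mask], 1)     # b = Tafel slope in V/decade
    log_j0 = -a / b                                 # |eta| = 0  ->  log10(j0)
    alpha = 2.303 * R * T / (b * F_CONST)           # effective charge transfer coefficient
    return b * 1000.0, 10.0 ** log_j0, alpha        # mV/decade, mA/cm^2, alpha

# Synthetic cathodic polarization data consistent with the values quoted above
b_true, j0_true = 0.150, 1.6e-4                     # V/decade, mA/cm^2 (assumed)
eta = np.linspace(0.02, 0.30, 100)
j = -j0_true * 10.0 ** (eta / b_true)               # pure Tafel behaviour, cathodic sign
print(tafel_kinetics(-eta, j))
```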

  3. Preliminary Groundwater Simulations To Compare Different Reconstruction Methods of 3-d Alluvial Heterogeneity

    NASA Astrophysics Data System (ADS)

    Teles, V.; de Marsily, G.; Delay, F.; Perrier, E.

    Alluvial floodplains are extremely heterogeneous aquifers, whose three-dimensional structures are quite difficult to model. In general, when representing such structures, the medium heterogeneity is modeled with classical geostatistical or Boolean methods. Another approach, still in its infancy, is called the genetic method because it simulates the generation of the medium by reproducing sedimentary processes. We developed a new genetic model to obtain a realistic three-dimensional image of alluvial media. It does not simulate the hydrodynamics of sedimentation but uses semi-empirical and statistical rules to roughly reproduce fluvial deposition and erosion. The main processes, either at the stream scale or at the plain scale, are modeled by simple rules applied to "sediment" entities or to conceptual "erosion" entities. The model was applied to a several kilometer long portion of the Aube River floodplain (France) and reproduced the deposition and erosion cycles that occurred during the inferred climate periods (15 000 BP to present). A three-dimensional image of the aquifer was generated by extrapolating the two-dimensional information collected on a cross-section of the floodplain. Unlike geostatistical methods, this extrapolation does not use a statistical spatial analysis of the data, but a genetic analysis, which leads to a more realistic structure. Groundwater flow and transport simulations in the alluvium were carried out with a three-dimensional flow code or simulator (MODFLOW), using different representations of the alluvial reservoir of the Aube River floodplain: first an equivalent homogeneous medium, and then different heterogeneous media built either with the traditional geostatistical approach simulating the permeability distribution, or with the new genetic model presented here simulating sediment facies. In the latter case, each deposited entity of a given lithology was assigned a constant hydraulic conductivity value. Results of these

  4. Analytical Method to Evaluate Failure Potential During High-Risk Component Development

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects types of analyses, as well as prior knowledge and experience, to determine the potential modes of failures a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information is expanded upon to help designers extrapolate based on their similarity with existing products and the potential design tradeoffs. This paper makes use of similarities and tradeoffs that exist between different failure modes based on the functionality of each component/product. In this light, a function-failure method is developed to help the design of new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating machinery example in this paper, and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.

  5. Nonparametric methods for drought severity estimation at ungauged sites

    NASA Astrophysics Data System (ADS)

    Sadri, S.; Burn, D. H.

    2012-12-01

    The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
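
    A minimal sketch of the jackknife comparison described above is given below, using scikit-learn's epsilon-SVR as a stand-in for LS-SVR (which is not part of scikit-learn) together with multiple linear regression; the synthetic catchment attributes and model settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic catchment attributes (e.g. area, mean precipitation, soil index)
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.3, size=32)

models = {
    "MLR": make_pipeline(StandardScaler(), LinearRegression()),
    "SVR (RBF)": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}

# Jackknife (leave-one-out) quantile estimates at each "ungauged" site
for name, model in models.items():
    y_hat = cross_val_predict(model, X, y, cv=LeaveOneOut())
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    print(f"{name}: leave-one-out RMSE = {rmse:.3f}")
```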

  6. Density-matrix renormalization group method for the conductance of one-dimensional correlated systems using the Kubo formula

    NASA Astrophysics Data System (ADS)

    Bischoff, Jan-Moritz; Jeckelmann, Eric

    2017-11-01

    We improve the density-matrix renormalization group (DMRG) evaluation of the Kubo formula for the zero-temperature linear conductance of one-dimensional correlated systems. The dynamical DMRG is used to compute the linear response of a finite system to an applied ac source-drain voltage; then the low-frequency finite-system response is extrapolated to the thermodynamic limit to obtain the dc conductance of an infinite system. The method is demonstrated on the one-dimensional spinless fermion model at half filling. Our method is able to replicate several predictions of the Luttinger liquid theory such as the renormalization of the conductance in a homogeneous conductor, the universal effects of a single barrier, and the resonant tunneling through a double barrier.
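
    The finite-size extrapolation step mentioned above can be sketched as a polynomial fit of the finite-system conductance against 1/L, evaluated at 1/L = 0; the quadratic form and the synthetic conductance values are illustrative assumptions, not the authors' actual data or fitting procedure.

```python
import numpy as np

def extrapolate_to_thermodynamic_limit(L_values, G_values, degree=2):
    """Fit G(L) as a polynomial in 1/L and evaluate it at 1/L = 0."""
    x = 1.0 / np.asarray(L_values, dtype=float)
    coeffs = np.polyfit(x, np.asarray(G_values, dtype=float), degree)
    return np.polyval(coeffs, 0.0)

# Hypothetical conductances (in units of e^2/h) for increasing system sizes
L = np.array([32, 48, 64, 96, 128])
G = np.array([0.612, 0.641, 0.656, 0.671, 0.679])
print(extrapolate_to_thermodynamic_limit(L, G))
```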

  7. A Floating Potential Method for Determining Ion Density

    NASA Astrophysics Data System (ADS)

    Evans, John D.; Chen, Francis F.

    2001-10-01

    The density n in partially ionized discharges is often found from the saturation ion current Ii of a cylindrical Langmuir probe. Collisionless probe theories, however, disagree with measured I - V curves probably because of collisions^1. We use a heuristic method that yields n from probe data agreeing with microwave interferometry. Probe current I is raised to the 4/3 power and fitted to a straight line on an I^4/3-V plot. The line is extrapolated to the floating potential V_f, thus approximating I_i(V_f). The sheath thickness d_sh for V = Vf is calculated from the Child-Langmuir (CL) law, and applying the Bohm sheath criterion to the surface at r_sh = Rp + d_sh yields n when Ii = I_i(V_f). This method works, but it cannot be justified by theory. Neglected are (a) cylindrical convergence of the ion charge, (b) finite ion energy at r = r_sh, (c) ions orbiting the probe, and (d) escape of ions axially. The Allen-Boyd-Reynolds theory, which treats (a) and (b) and neglects (c) and (d), gives too low n's. Apparently the errors self-cancel, and the simple Vf method gives the right result. ^1 F.F. Chen, Phys. Plasmas 8, 3029 (2001).
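
    A minimal sketch of the fitting and extrapolation step described above (raising the ion branch of the probe current to the 4/3 power, fitting a straight line against V, and extrapolating to the floating potential) is given below; the synthetic I-V data are illustrative, and the remaining steps (Child-Langmuir sheath thickness and the Bohm criterion at the sheath radius) are only indicated in comments because they require additional probe- and plasma-specific parameters.

```python
import numpy as np

def ion_current_at_Vf(V, I, V_float, fit_range):
    """Fit I^(4/3) vs V over the ion-saturation region and extrapolate to V_f."""
    V, I = np.asarray(V, dtype=float), np.asarray(I, dtype=float)
    mask = (V >= fit_range[0]) & (V <= fit_range[1])
    slope, intercept = np.polyfit(V[mask], np.abs(I[mask]) ** (4.0 / 3.0), 1)
    I43_at_Vf = slope * V_float + intercept
    return I43_at_Vf ** 0.75            # back to a current: (I^(4/3))^(3/4)

# Hypothetical probe sweep in the ion-saturation region (V in volts, I in amperes)
V = np.linspace(-60.0, -20.0, 200)
I = -(1.0e-6 - 2.0e-8 * V) ** 0.75      # constructed so that |I|^(4/3) is linear in V
Ii_Vf = ion_current_at_Vf(V, I, V_float=-10.0, fit_range=(-60.0, -25.0))
print(Ii_Vf)
# Next steps (not shown): compute the Child-Langmuir sheath thickness at V_f,
# set r_sh = R_probe + d_sh, then obtain n from the Bohm criterion
# Ii(V_f) ~ 0.6 * n * e * c_s * A_sh at the sheath surface.
```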

  8. Cross-Species Extrapolation of Uptake and Disposition of Neutral Organic Chemicals in Fish Using a Multispecies Physiologically-Based Toxicokinetic Model Framework.

    PubMed

    Brinkmann, Markus; Schlechtriem, Christian; Reininghaus, Mathias; Eichbaum, Kathrin; Buchinger, Sebastian; Reifferscheid, Georg; Hollert, Henner; Preuss, Thomas G

    2016-02-16

    The potential to bioconcentrate is generally considered to be an unwanted property of a substance. Consequently, chemical legislation, including the European REACH regulations, requires the chemical industry to provide bioconcentration data for chemicals that are produced or imported at volumes exceeding 100 tons per annum or if there is a concern that a substance is persistent, bioaccumulative, and toxic. For the filling of the existing data gap for chemicals produced or imported at levels that are below this stipulated volume, without the need for additional animal experiments, physiologically-based toxicokinetic (PBTK) models can be used to predict whole-body and tissue concentrations of neutral organic chemicals in fish. PBTK models have been developed for many different fish species with promising results. In this study, we developed PBTK models for zebrafish (Danio rerio) and roach (Rutilus rutilus) and combined them with existing models for rainbow trout (Onchorhynchus mykiss), lake trout (Salvelinus namaycush), and fathead minnow (Pimephales promelas). The resulting multispecies model framework allows for cross-species extrapolation of the bioaccumulative potential of neutral organic compounds. Predictions were compared with experimental data and were accurate for most substances. Our model can be used for probabilistic risk assessment of chemical bioaccumulation, with particular emphasis on cross-species evaluations.

  9. Advances in variable selection methods I: Causal selection methods versus stepwise regression and principal component analysis on data of known and unknown functional relationships

    EPA Science Inventory

    Hydrological predictions at a watershed scale are commonly based on extrapolation and upscaling of hydrological behavior at plot and hillslope scales. Yet, dominant hydrological drivers at a hillslope may not be as dominant at the watershed scale because of the heterogeneity of w...

  10. In vitro-in vivo extrapolation of zolpidem as a perpetrator of metabolic interactions involving CYP3A.

    PubMed

    Polasek, Thomas M; Sadagopal, Janani S; Elliot, David J; Miners, John O

    2010-03-01

    To evaluate zolpidem as a mechanism-based inactivator of human CYP3A in vitro, and to assess its metabolic interaction potential with CYP3A drugs (in vitro-in vivo extrapolation; IV-IVE). A co- vs. pre-incubation strategy was used to quantify time-dependent inhibition of human liver microsomal (HLM) and recombinant CYP3A4 (rCYP3A4) by zolpidem. Experiments involving a 10-fold dilution step were employed to determine the kinetic constants of inactivation (K(I) and k(inact)) and to assess the in vitro mechanism-based inactivation (MBI) criteria. Inactivation data were entered into the Simcyp population-based ADME simulator to predict the increase in the area under the plasma concentration-time curve (AUC) for orally administered midazolam. Consistent with MBI, the inhibitory potency of zolpidem toward CYP3A was increased following pre-incubation. In HLMs, the concentration required for half maximal inactivation (K(I)) was 122 microM and the maximal rate of inactivation (k(inact)) was 0.094 min(-1). In comparison, K(I) and k(inact) values with rCYP3A4 were 50 microM and 0.229 min(-1), respectively. Zolpidem fulfilled all other in vitro MBI criteria, including irreversible inhibition. The mean oral AUC for midazolam in healthy volunteers was predicted to increase 1.1- to 1.7-fold due to the inhibition of metabolic clearance by zolpidem. Elderly subjects were more sensitive to the interaction, with mean increases in midazolam AUC of 1.2- and 2.2-fold for HLM IV-IVE and rCYP3A4 IV-IVE, respectively. Zolpidem is a relatively weak mechanism-based inactivator of human CYP3A in vitro. Zolpidem is unlikely to act as a significant perpetrator of metabolic interactions involving CYP3A.
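
    For readers unfamiliar with how K(I) and k(inact) translate into a predicted AUC change, a commonly used static mechanism-based-inhibition model is sketched below; it is not the Simcyp population-based simulation used in the study, and the example inputs (unbound inhibitor exposure, fraction of midazolam clearance via CYP3A, hepatic enzyme degradation rate) are illustrative assumptions.

```python
def auc_ratio_mbi(k_inact, K_I, I_u, f_m, k_deg):
    """Static model for the fold-increase in victim-drug AUC caused by a
    mechanism-based inactivator (rate constants in min^-1, I_u in the same
    units as K_I, f_m = fraction of victim clearance via the affected enzyme)."""
    lam = (k_inact * I_u) / (K_I + I_u)           # inactivation rate at exposure I_u
    active_fraction = 1.0 / (1.0 + lam / k_deg)   # steady-state fraction of active enzyme
    return 1.0 / (f_m * active_fraction + (1.0 - f_m))

# Illustrative numbers only: HLM parameters from the abstract, an assumed unbound
# inhibitor exposure (microM), an assumed CYP3A fraction for midazolam, and an
# assumed hepatic CYP3A degradation rate constant.
print(auc_ratio_mbi(k_inact=0.094, K_I=122.0, I_u=0.05, f_m=0.9, k_deg=0.00026))
```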

  11. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

    The absorbed dose rate is the main characterization quantity for beta radiation. The extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for the backscatter due to the collecting electrode and to the guard ring, and the correction factor for Bremsstrahlung in beta secondary standard radiation beams are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for 90Sr/90Y, 85Kr and 147Pm sources, respectively. The differences between the Bremsstrahlung factors determined by the Monte Carlo method and those of the ISO were 0.25%, 0.6% and 1% for 90Sr/90Y, 85Kr and 147Pm sources, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Temperature dependence of the electrode kinetics of oxygen reduction at the platinum/Nafion interface - A microelectrode investigation

    NASA Technical Reports Server (NTRS)

    Parthasarathy, Arvind; Srinivasan, Supramanian; Appleby, A. J.; Martin, Charles R.

    1992-01-01

    Results of a study of the temperature dependence of the oxygen reduction kinetics at the Pt/Nafion interface are presented. This study was carried out in the temperature range of 30-80 C and at 5 atm of oxygen pressure. The results showed a linear increase of the Tafel slope with temperature in the low current density region, but the Tafel slope was found to be independent of temperature in the high current density region. The values of the activation energy for oxygen reduction at the platinum/Nafion interface are nearly the same as those obtained at the platinum/trifluoromethane sulfonic acid interface but less than values obtained at the Pt/H3PO4 and Pt/HClO4 interfaces. The diffusion coefficient of oxygen in Nafion increases with temperature while its solubility decreases with temperature. These parameters also depend on the water content of the membrane.
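
    The linear increase of the low-current-density Tafel slope with temperature follows from the relation b = 2.303RT/(αF); a short sketch of that relationship is given below, with the transfer coefficient chosen purely for illustration rather than taken from the study.

```python
import numpy as np

R, F = 8.314, 96485.0              # J mol^-1 K^-1, C mol^-1

def tafel_slope_mV_per_decade(T_celsius, alpha=1.0):
    """Tafel slope b = 2.303*R*T/(alpha*F), returned in mV/decade."""
    T = np.asarray(T_celsius, dtype=float) + 273.15
    return 2.303 * R * T / (alpha * F) * 1000.0

# The low current density region for ORR on Pt is often close to alpha ~ 1,
# which gives roughly 60 mV/decade at 30 C rising to about 70 mV/decade at 80 C.
for T_c in (30, 50, 80):
    print(T_c, round(float(tafel_slope_mV_per_decade(T_c)), 1))
```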

  13. Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deschamps, T; Schwartz, P; Trebotich, D

    In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Sets methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood-flow inside the extracted surface without losing any complicated details and without building additional grids.

  14. The Spectrophotometric Method of Determining the Transmission of Solar Energy in Salt Gradient Solar Ponds

    NASA Technical Reports Server (NTRS)

    Giulianelli, J.

    1984-01-01

    In order to predict the thermal efficiency of a solar pond it is necessary to know the total average solar energy reaching the storage layer. One method for determining this energy for water containing dissolved colored species is based upon spectral transmission measurements using a laboratory spectrophotometer. This method is examined, along with some of the theoretical groundwork needed to discuss the measurement of light transmission in water. Results of in situ irradiance measurements from oceanography research are presented, and the difficulties inherent in extrapolating laboratory data obtained with ten centimeter cells to real three dimensional pond situations are discussed. Particular emphasis is put on the need to account for molecular and particulate scattering in measurements done on low absorbing solutions. Despite these considerations it is expected that attenuation calculations based upon careful measurements using a dual beam spectrophotometer technique combined with known attenuation coefficients will be useful in solar pond modeling and monitoring for color buildup. Preliminary results using the CSM method are presented.

  15. A New "Quasi-Dynamic" Method for Determining the Hamaker Constant of Solids Using an Atomic Force Microscope.

    PubMed

    Fronczak, Sean G; Dong, Jiannan; Browne, Christopher A; Krenek, Elizabeth C; Franses, Elias I; Beaudoin, Stephen P; Corti, David S

    2017-01-24

    In order to minimize the effects of surface roughness and deformation, a new method for estimating the Hamaker constant, A, of solids using the approach-to-contact regime of an atomic force microscope (AFM) is presented. First, a previous "jump-into-contact" quasi-static method for determining A from AFM measurements is analyzed and then extended to include various AFM tip-surface force models of interest. Then, to test the efficacy of the "jump-into-contact" method, a dynamic model of the AFM tip motion is developed. For finite AFM cantilever-surface approach speeds, a true "jump" point, or limit of stability, is found not to appear, and the quasi-static model fails to represent the dynamic tip behavior at close tip-surface separations. Hence, a new "quasi-dynamic" method for estimating A is proposed that uses the dynamically well-defined deflection at which the tip and surface first come into contact, d_c, instead of the dynamically ill-defined "jump" point. With the new method, an apparent Hamaker constant, A_app, is calculated from d_c and a corresponding quasi-static-based equation. Since A_app depends on the cantilever's approach speed, v_c, and the AFM's sampling resolution, δ, a double extrapolation procedure is used to determine A_app in the quasi-static (v_c → 0) and continuous sampling (δ → 0) limits, thereby recovering the "true" value of A. The accuracy of the new method is validated using simulated AFM data. To enable the experimental implementation of this method, a new dimensionless parameter τ is introduced to guide cantilever selection and the AFM operating conditions. The value of τ quantifies how close a given cantilever is to its quasi-static limit for a chosen cantilever-surface approach speed. For sufficiently small values of τ (i.e., a cantilever that effectively behaves "quasi-statically"), simulated data indicate that A_app will be within ∼3% or less of the inputted value of the Hamaker constant. This implies that Hamaker

  16. QSRR modeling for diverse drugs using different feature selection methods coupled with linear and nonlinear regressions.

    PubMed

    Goodarzi, Mohammad; Jensen, Richard; Vander Heyden, Yvan

    2012-12-01

    A Quantitative Structure-Retention Relationship (QSRR) is proposed to estimate the chromatographic retention of 83 diverse drugs on a Unisphere poly butadiene (PBD) column, using isocratic elutions at pH 11.7. Previous work has generated QSRR models for them using Classification And Regression Trees (CART). In this work, Ant Colony Optimization is used as a feature selection method to find the best molecular descriptors from a large pool. In addition, several other selection methods have been applied, such as Genetic Algorithms, Stepwise Regression and the Relief method, not only to evaluate Ant Colony Optimization as a feature selection method but also to investigate its ability to find the important descriptors in QSRR. Multiple Linear Regression (MLR) and Support Vector Machines (SVMs) were applied as linear and nonlinear regression methods, respectively, giving excellent correlation between the experimental, i.e. extrapolated to a mobile phase consisting of pure water, and predicted logarithms of the retention factors of the drugs (logk(w)). The overall best model was the SVM one built using descriptors selected by ACO. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Resonance strength measurement at astrophysical energies: The 17O(p,α)14N reaction studied via Trojan Horse Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sergi, M. L., E-mail: sergi@lns.infn.it; La Cognata, M.; Pizzone, R. G.

    2015-10-15

    In recent years, the Trojan Horse Method (THM) has been used to investigate the low-energy cross sections of proton-induced reactions on 17O nuclei, overcoming extrapolation procedures and enhancement effects due to electron screening. We will report on the indirect study of the 17O(p,α)14N reaction via the THM by applying the approach developed for extracting the resonance strength of a narrow resonance in the ultralow energy region. Two measurements will be described and the experimental THM cross sections will be shown for both experiments.

  18. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    OConnor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
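
    A simplified sketch of the comparative calculation that such algorithms automate (fitting the exponential phase on a log-fluorescence scale to obtain the amplification efficiency, reading a fractional cycle number at a shared fluorescence threshold, and forming the expression ratio) is given below; it is not the Q-Anal algorithm itself, and the fixed fitting windows and synthetic amplification curves are illustrative assumptions.

```python
import numpy as np

def efficiency_and_ct(cycles, fluorescence, threshold, window):
    """Fit log10(F) vs cycle over an exponential-phase window; return the
    per-cycle amplification factor E and the fractional cycle Ct at the threshold."""
    c = np.asarray(cycles, dtype=float)
    logf = np.log10(np.asarray(fluorescence, dtype=float))
    mask = (c >= window[0]) & (c <= window[1])
    slope, intercept = np.polyfit(c[mask], logf[mask], 1)
    E = 10.0 ** slope                              # amplification factor per cycle
    Ct = (np.log10(threshold) - intercept) / slope
    return E, Ct

def expression_ratio(E_t, Ct_t, E_r, Ct_r):
    """Relative initial template amount of target vs reference at a shared threshold."""
    return (E_r ** Ct_r) / (E_t ** Ct_t)

# Synthetic amplification curves (target less abundant than the reference)
cycles = np.arange(1, 41)
target = 1e-9 * 1.92 ** cycles / (1.0 + 1e-9 * 1.92 ** cycles)      # plateauing curve
reference = 1e-7 * 1.95 ** cycles / (1.0 + 1e-7 * 1.95 ** cycles)
E_t, Ct_t = efficiency_and_ct(cycles, target, threshold=1e-3, window=(15, 25))
E_r, Ct_r = efficiency_and_ct(cycles, reference, threshold=1e-3, window=(10, 20))
print(E_t, Ct_t, E_r, Ct_r, expression_ratio(E_t, Ct_t, E_r, Ct_r))
```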

  19. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    NASA Astrophysics Data System (ADS)

    Al-Marouf, M.; Samtaney, R.

    2017-05-01

    We present an embedded ghost fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost fluid regions and imposing boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping high resolution mesh around the embedded boundary and regions of high gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.

  20. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author

  1. Investigation of catalytic activity towards oxygen reduction reaction of Pt dispersed on boron doped graphene in acid medium.

    PubMed

    Pullamsetty, Ashok; Sundara, Ramaprabhu

    2016-10-01

    Boron doped graphene was prepared by a facile method, and platinum (Pt) decoration over the boron doped graphene was carried out by various chemical reduction methods, namely sodium borohydride (NaBH4), polyol, and modified polyol reduction. X-ray diffraction analysis indicates that the synthesized catalyst particles have a nanocrystalline structure, and transmission and scanning electron microscopy were employed to investigate the morphology and particle distribution. The electrochemical properties were investigated with the rotating disk electrode (RDE) technique and cyclic voltammetry. The results show that the oxygen reduction reaction (ORR) takes place by a four-electron process. The kinetics of the ORR were evaluated using Koutecky-Levich (K-L) and Tafel plots. The electrocatalyst obtained by the modified polyol reduction method showed better catalytic activity than the other two electrocatalysts. Copyright © 2016 Elsevier Inc. All rights reserved.
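
    The Koutecky-Levich analysis mentioned above can be sketched as a straight-line fit of 1/j against ω^(-1/2), whose slope yields the apparent number of electrons transferred; the electrolyte constants below are typical literature values for O2-saturated dilute acid and, like the synthetic currents, are included as assumptions rather than values from this study.

```python
import numpy as np

F = 96485.0                     # C mol^-1
# Typical literature values for O2-saturated 0.1 M HClO4 (assumed, not measured here)
C_O2 = 1.2e-6                   # mol cm^-3
D_O2 = 1.9e-5                   # cm^2 s^-1
NU = 1.0e-2                     # cm^2 s^-1 (kinematic viscosity)

def electrons_from_kl(rpm, j_mA_cm2):
    """Koutecky-Levich analysis: 1/j = 1/j_k + 1/(B*omega^0.5), with
    B = 0.62*n*F*C*D^(2/3)*nu^(-1/6); returns n and the kinetic current density."""
    omega = 2.0 * np.pi * np.asarray(rpm, dtype=float) / 60.0       # rad s^-1
    inv_j = 1.0 / (np.asarray(j_mA_cm2, dtype=float) * 1e-3)        # cm^2 A^-1
    slope, intercept = np.polyfit(omega ** -0.5, inv_j, 1)
    B = 1.0 / slope
    n = B / (0.62 * F * C_O2 * D_O2 ** (2.0 / 3.0) * NU ** (-1.0 / 6.0))
    return n, 1.0 / intercept                                       # n, j_k in A cm^-2

# Hypothetical current densities read at a fixed potential in the mixed region
rpm = np.array([400, 900, 1600, 2500])
j = np.array([2.5, 3.5, 4.4, 5.3])       # mA cm^-2 (illustrative, roughly 4-electron)
print(electrons_from_kl(rpm, j))
```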

  2. A critical evaluation of the experimental design of studies of mechanism based enzyme inhibition, with implications for in vitro-in vivo extrapolation.

    PubMed

    Ghanbari, F; Rowland-Yeo, K; Bloomer, J C; Clarke, S E; Lennard, M S; Tucker, G T; Rostami-Hodjegan, A

    2006-04-01

    The published literature on mechanism based inhibition (MBI) of CYPs was evaluated with respect to experimental design, methodology and data analysis. Significant variation was apparent in the dilution factor, ratio of preincubation to incubation times and probe substrate concentrations used, and there were some anomalies in the estimation of associated kinetic parameters (k(inact), K(I), r). The impact of the application of inaccurate values of k(inact) and K(I) when extrapolating to the extent of inhibition in vivo is likely to be greatest for those compounds of intermediate inhibitory potency, but this also depends on the fraction of the net clearance of substrate subject to MBI and the pre-systemic and systemic exposure to the inhibitor. For potent inhibitors, the experimental procedure is unlikely to have a material influence on the maximum inhibition. Nevertheless, the bias in the values of the kinetic parameters may influence the time for recovery of enzyme activity following re-synthesis of the enzyme. Careful attention to the design of in vitro experiments to obtain accurate kinetic parameters is necessary for a reliable prediction of different aspects of the in vivo consequences of MBI. The review calls for experimental studies to quantify the impact of study design in studies of MBI, with a view to better harmonisation of protocols.

  3. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. There are currently several methods for estimating parameter sensitivities that either require difficult-to-obtain second-order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  4. [Financial impact of smoking on health systems in Latin America: A study of seven countries and extrapolation to the regional level].

    PubMed

    Pichon-Riviere, Andrés; Bardach, Ariel; Augustovski, Federico; Alcaraz, Andrea; Reynales-Shigematsu, Luz Myriam; Pinto, Márcia Teixeira; Castillo-Riquelme, Marianela; Torres, Esperanza Peña; Osorio, Diana Isabel; Huayanay, Leandro; Munarriz, César Loza; de Miera-Juárez, Belén Sáenz; Gallegos-Rivero, Verónica; Puente, Catherine De La; Navia-Bueno, María Del Pilar; Caporale, Joaquín

    2016-10-01

    To estimate smoking-attributable direct medical costs in Latin American health systems. A microsimulation model was used to quantify the financial impact of cardiovascular and cerebrovascular disease, chronic obstructive pulmonary disease (COPD), pneumonia, lung cancer, and nine other neoplasms. A systematic search for epidemiological data and event costs was carried out. The model was calibrated and validated for Argentina, Bolivia, Brazil, Chile, Colombia, Mexico, and Peru, countries that account for 78% of Latin America's population; the results were then extrapolated to the regional level. Every year, smoking is responsible for 33 576 million dollars in direct costs to health systems. This amounts to 0.7% of the region's gross domestic product (GDP) and 8.3% of its health budget. Cardiovascular disease, COPD, and cancer were responsible for 30.3%, 26.9%, and 23.7% of these expenditures, respectively. Smoking-attributable costs ranged from 0.4% (Mexico and Peru) to 0.9% (Chile) of GDP and from 5.2% (Brazil) to 12.7% (Bolivia) of health expenditures. In the region, tax revenues from cigarette sales barely cover 37% of smoking-attributable health expenditures (8.1% in Bolivia and 67.3% in Argentina). Smoking is responsible for a significant proportion of health spending in Latin America, and tax revenues from cigarette sales are far from covering it. The region's countries should seriously consider stronger measures, such as an increase in tobacco taxes.

  5. A nowcasting technique based on application of the particle filter blending algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm was then used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is shown to be superior to the traditional forecasting methods, and it can be used to enhance the ability of nowcasting in operational weather forecasts.
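
    The final extrapolation step (advecting the current radar echo field along the retrieved motion vectors) can be sketched with a backward semi-Lagrangian scheme as below; this illustrates only that one step, with a uniform synthetic motion field standing in for the blended particle-filter output.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def semi_lagrangian_extrapolate(field, u, v, n_steps):
    """Advect a 2-D echo field along the motion field (u, v) in pixels/step
    using backward semi-Lagrangian interpolation, n_steps into the future."""
    ny, nx = field.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    out = field.astype(float)
    for _ in range(n_steps):
        # The value at (y, x) comes from the upstream point (y - v, x - u)
        coords = np.array([yy - v, xx - u])
        out = map_coordinates(out, coords, order=1, mode="constant", cval=0.0)
    return out

# Synthetic reflectivity blob and a uniform eastward motion of 2 px per step
field = np.zeros((100, 100))
field[40:60, 20:40] = 35.0
u = np.full_like(field, 2.0)      # x-component of motion (pixels/step)
v = np.zeros_like(field)          # y-component of motion
forecast = semi_lagrangian_extrapolate(field, u, v, n_steps=6)
print(forecast[50, 25], forecast[50, 45])   # ~0 (cleared) and ~35 (blob moved ~12 px east)
```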

  6. In-Situ Electrochemical Corrosion Behavior of Nickel-Base 718 Alloy Under Various CO2 Partial Pressures at 150 and 205 °C in NaCl Solution

    NASA Astrophysics Data System (ADS)

    Zhang, Yubi; Zhao, Yongtao; Tang, An; Yang, Wenjie; Li, Enzuo

    2018-07-01

    The electrochemical corrosion behavior of nickel-base alloy 718 was investigated using electrochemical impedance spectroscopy and potentiodynamic polarization techniques at various partial pressures of CO2 (P_CO2) in a 25 wt% NaCl solution at 150 and 205 °C. At low P_CO2 (1.8-9.8 MPa) and 150 °C, the passive films composed of FeCO3 exhibit good corrosion resistance with a Warburg impedance feature, the Tafel plots show complete passivation, and the anodic reactions are dominated by a diffusion process. At a P_CO2 of 11.6 MPa and 205 °C, numerous dented corrosion areas appeared on the sample surface, the Tafel plot showed three anodic peaks, and the Nyquist diagram showed an atrophied impedance arc. This dented corrosion is attributed to the synergistic effects of stress, temperature, P_CO2, and Cl-; temperature and stress could play crucial roles in the corrosion of alloy 718.

  7. In-Situ Electrochemical Corrosion Behavior of Nickel-Base 718 Alloy Under Various CO2 Partial Pressures at 150 and 205 °C in NaCl Solution

    NASA Astrophysics Data System (ADS)

    Zhang, Yubi; Zhao, Yongtao; Tang, An; Yang, Wenjie; Li, Enzuo

    2018-03-01

    The electrochemical corrosion behavior of nickel-base alloy 718 was investigated using electrochemical impedance spectroscopy and potentiodynamic polarization techniques at various partial pressures of CO2 (P_CO2) in a 25 wt% NaCl solution at 150 and 205 °C. At low P_CO2 (1.8-9.8 MPa) and 150 °C, the passive films, composed of FeCO3, exhibit good corrosion resistance with a Warburg impedance feature; the Tafel plots show complete passivation, and the anodic reactions were dominated by a diffusion process. At a P_CO2 of 11.6 MPa and 205 °C, in contrast, numerous dented corrosion areas appeared on the sample surface, the Tafel plot showed three anodic peaks, and the Nyquist diagram displayed a shrunken impedance arc. This dented corrosion is attributed to the synergistic effects of stress, temperature, P_CO2, and Cl-; temperature and stress could play crucial roles in the corrosion of alloy 718.

  8. Corrosion Behavior of Pure Copper Surrounded by Hank's Physiological Electrolyte at 310 K (37 °C) as a Potential Biomaterial for Contraception: An Analogy Drawn Between Micro- and Nano-grained Copper

    NASA Astrophysics Data System (ADS)

    Fattah-alhosseini, Arash; Imantalab, Omid; Vafaeian, Saeed; Ansari, Ghazaleh

    2017-08-01

    This work aims to evaluate the corrosion behavior of pure copper from the microstructural viewpoint for a biomedical application, namely intrauterine devices. For this purpose, Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques were used to evaluate the corrosion behavior of annealed pure copper (with an average grain size of 45 ± 1 µm) and nano-grained copper in Hank's physiological electrolyte at 310 K (37 °C). Nano-grained pure copper, with an average grain size of 90 ± 5 nm, was successfully produced by an eight-cycle accumulative roll bonding process at room temperature. On the basis of the Tafel polarization results, it was revealed that the nano-grained sample had a lower corrosion current density and a more noble corrosion potential during prolonged exposure in Hank's physiological solution at 310 K (37 °C). In addition, the EIS results showed that the nano-grained sample had higher corrosion resistance than the coarse-grained one during long-term immersion.

  9. Multilayer graphene as an effective corrosion protection coating for copper

    NASA Astrophysics Data System (ADS)

    Ravishankar, Vasumathy; Ramaprabhu, S.; Jaiswal, Manu

    2018-04-01

    Graphene grown by chemical vapor deposition (CVD) has been studied as a protective layer against corrosion of copper. The dependence of graphene's protective behavior on layer number has been investigated using techniques such as Tafel analysis and electrochemical impedance spectroscopy (EIS). Multiple layers of graphene were achieved by wet transfer on top of CVD-grown graphene. Though this might cause the grain boundaries, the sites where corrosion is initiated, to be staggered, wet transfer inherently carries the disadvantage of tearing the graphene, as confirmed by Raman spectroscopy measurements. Nevertheless, EIS shows that graphene-protected copper has a layer-dependent resistance to corrosion. A decrease in corrosion current (Icorr) for graphene-protected copper is presented. Although the corrosion current depends only weakly on layer number, Tafel plots clearly indicate passivation in the presence of graphene, whether single layer or multiple layers. Notwithstanding the crystallite size, defect-free layers of graphene with staggered grain boundaries, combined with passivation, could offer good corrosion protection for metals.

  10. Experimental investigation of microbiologically influenced corrosion of selected steels in sugarcane juice environment.

    PubMed

    Wesley, Sunil Bala; Maurya, Devendra Prasad; Goyal, Hari Sharan; Negi, Sangeeta

    2013-12-01

    In the current study, ferritic stainless steel grades AISI 439 and AISI 444 were investigated as possible construction materials for machinery and equipment in the cane-sugar industry. Their performance in the corrosive cane-sugar juice environment was compared with that of the presently used low-carbon steel AISI 1010 and austenitic stainless steel AISI 304. The Tafel plot electrochemical technique was used to evaluate general corrosion performance, and microbiologically influenced corrosion (MIC) behaviour in the sugarcane juice environment was studied. Four microbial colonies were isolated from the biofilms on the metal coupon surfaces on the basis of their different morphologies; these were characterized as Brevibacillus parabrevis, Bacillus azotoformans, Paenibacillus lautus and Micrococcus sp. SEM micrographs showed that the AISI 439 and AISI 304 grades had suffered the most localized corrosion. MIC investigations revealed that AISI 444 steel had the best corrosion resistance among the tested materials, whereas the Tafel plots indicated that AISI 1010 had the least corrosion resistance and AISI 439 the best.

  11. Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2015-05-01

    Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating the remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Second, the state space model of the proposed health index is constructed, and a general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
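
    As a rough illustration of the sequential Monte Carlo step, the sketch below runs a bootstrap particle filter on an assumed random-walk-with-drift degradation model and then extrapolates every particle to an alert threshold to obtain a distribution of remaining useful life. The model form, noise levels, and threshold are placeholders rather than the quantities identified in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter_rul(index_obs, threshold, n_particles=2000,
                                drift0=0.01, q_state=1e-3, q_drift=1e-4, r_obs=5e-2):
            """Bootstrap particle filter on an assumed model
            x_k = x_{k-1} + d_{k-1} + w,  d_k = d_{k-1} + v,  y_k = x_k + e.
            Returns samples of remaining useful life (steps until x exceeds threshold)."""
            x = np.full(n_particles, index_obs[0]) + rng.normal(0, r_obs, n_particles)
            d = np.full(n_particles, drift0) + rng.normal(0, np.sqrt(q_drift), n_particles)
            for y in index_obs[1:]:
                # propagate particles through the assumed degradation model
                x = x + d + rng.normal(0, np.sqrt(q_state), n_particles)
                d = d + rng.normal(0, np.sqrt(q_drift), n_particles)
                # weight by the likelihood of the new observation, then resample
                w = np.exp(-0.5 * ((y - x) / r_obs) ** 2) + 1e-300
                w /= w.sum()
                idx = rng.choice(n_particles, n_particles, p=w)
                x, d = x[idx], d[idx]
            # extrapolate each particle until it crosses the alert threshold
            rul = np.full(n_particles, np.inf)
            for k in range(1, 500):
                x = x + d + rng.normal(0, np.sqrt(q_state), n_particles)
                rul[(x >= threshold) & np.isinf(rul)] = k
            return rul

        # usage: rul = particle_filter_rul(wear_index_series, threshold=1.0)
        # np.percentile(rul[np.isfinite(rul)], [5, 50, 95]) gives an RUL interval.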

  12. Correlation Of 2-Chlorobiphenyl Dechlorination By Fe/Pd With Iron Corrosion At Different pH

    EPA Science Inventory

    The rate of 2-chlorobiphenyl dechlorination by palladized iron (Fe/Pd) decreased with increasing pH until pH > 12.5. Iron corrosion potential (Ec) and current (jc), obtained from polarization curves of a rotating disk electrode of iron, followed the Tafel e...

  13. Poisson’s Ratio Extrapolation from Digital Image Correlation Experiments

    DTIC Science & Technology

    2013-03-01

    prior to dewetting). Also, it is often impractical to measure compressibility. Current rocket laboratory methods measure strains in propellants... [remainder of the record consists of briefing-slide fragments ("Damage Characterization of Propellants", "Dewetting Results") and figure-axis residue]

  14. Solving ODE Initial Value Problems With Implicit Taylor Series Methods

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2000-01-01

    In this paper we introduce a new class of numerical methods for integrating ODE initial value problems. Specifically, we propose an extension of the Taylor series method which significantly improves its accuracy and stability while also increasing its range of applicability. To advance the solution from t_n to t_(n+1), we expand a series about the intermediate point t_(n+mu) := t_n + mu*h, where h is the stepsize and mu is an arbitrary parameter called an expansion coefficient. We show that, in general, a Taylor series of degree k has exactly k expansion coefficients which raise its order of accuracy. The accuracy is raised by one order if k is odd, and by two orders if k is even. In addition, if k is three or greater, local extrapolation can be used to raise the accuracy two additional orders. We also examine stability for the problem y' = lambda*y, Re(lambda) < 0, and identify several A-stable schemes. Numerical results are presented for both fixed and variable stepsizes. It is shown that implicit Taylor series methods provide an effective integration tool for most problems, including stiff systems and ODEs with a singular point.
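
    On the linear test problem y' = lambda*y, every derivative at the expansion point is lambda^j times the solution there, so on one reading of the abstract the scheme reduces to a rational amplification factor; the sketch below evaluates that factor and is only an illustration of this special case, not the general algorithm.

        from math import factorial

        def amplification_factor(z, k, mu):
            """R(z) = y_{n+1}/y_n for a degree-k Taylor expansion about t_n + mu*h
            applied to the linear test problem y' = lambda*y, with z = lambda*h
            (an interpretation of the abstract, specialised to this test problem)."""
            num = sum(((1.0 - mu) * z) ** j / factorial(j) for j in range(k + 1))
            den = sum(((-mu) * z) ** j / factorial(j) for j in range(k + 1))
            return num / den

        # k = 1, mu = 0.5 reproduces the familiar A-stable factor (1 + z/2)/(1 - z/2);
        # checking |amplification_factor(z, k, mu)| <= 1 over Re(z) < 0 probes A-stability.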

  15. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    PubMed

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies < 0.5 E_h) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high-accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second-row compounds, which resulted in weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
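
    For orientation, a two-point Schwenke-style extrapolation simply scales the increment between successive basis sets by a fitted coefficient; the sketch below shows that generic form, with the coefficient F left as a user-supplied literature value rather than the one used in the paper.

        def two_point_cbs(e_small, e_large, F):
            """Schwenke-style two-point complete-basis-set extrapolation of a
            correlation energy: E_CBS = E_small + F * (E_large - E_small).
            F is a basis-pair-specific coefficient taken from the literature;
            no particular value is implied here."""
            return e_small + F * (e_large - e_small)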

  16. Robust approaches to quantification of margin and uncertainty for sparse data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin

    Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.

  17. A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Qiang; Fan, Liang-Shih, E-mail: fan.1@osu.edu

    A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with further improvement with the implementation of the high-order Runge–Kutta schemes in the coupled fluid–particle interaction. The major challenge to implement high-order Runge–Kutta schemes in the LBM is that the flow information such as density and velocity cannot be directly obtained at a fractional time step from the LBM since the LBM only provides the flow information at an integer time step. This challenge can be, however, overcome as given in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid–particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge–Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve the second-order accuracy are found to be around 0.30 and −0.47 times the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotational spheres indicate that the lift force produced by the Magnus effect can be very significant in view of the magnitude of the drag force when the practical rotating speed of the spheres is

  18. A comparison of small-field tissue phantom ratio data generation methods for an Elekta Agility 6 MV photon beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richmond, Neil, E-mail: neil.richmond@stees.nhs.uk; Brackenridge, Robert

    2014-04-01

    Tissue-phantom ratios (TPRs) are a common dosimetric quantity used to describe the change in dose with depth in tissue. These can be challenging and time-consuming to measure. The conversion of percentage depth dose (PDD) data using standard formulae is widely employed as an alternative method in generating TPR. However, the applicability of these formulae for small fields has been questioned in the literature. Functional representation has also been proposed for small-field TPR production. This article compares measured TPR data for small 6 MV photon fields against that generated by conversion of PDD using standard formulae to assess the efficacy of the conversion data. By functionally fitting the measured TPR data for square fields greater than 4 cm in length, the TPR curves for smaller fields are generated and compared with measurements. TPRs and PDDs were measured in a water tank for a range of square field sizes. The PDDs were converted to TPRs using standard formulae. TPRs for fields of 4 × 4 cm{sup 2} and larger were used to create functional fits. The parameterization coefficients were used to construct extrapolated TPR curves for 1 × 1 cm{sup 2}, 2 × 2-cm{sup 2}, and 3 × 3-cm{sup 2} fields. The TPR data generated using standard formulae were in excellent agreement with direct TPR measurements. The TPR data for 1 × 1-cm{sup 2}, 2 × 2-cm{sup 2}, and 3 × 3-cm{sup 2} fields created by extrapolation of the larger field functional fits gave inaccurate initial results. The corresponding mean differences for the 3 fields were 4.0%, 2.0%, and 0.9%. Generation of TPR data using a standard PDD-conversion methodology has been shown to give good agreement with our directly measured data for small fields. However, extrapolation of TPR data using the functional fit to fields of 4 × 4 cm{sup 2} or larger resulted in generation of TPR curves that did not compare well with the measured data.

  19. Comparison of two DSC-based methods to predict drug-polymer solubility.

    PubMed

    Rask, Malte Bille; Knopp, Matthias Manne; Olesen, Niels Erik; Holm, René; Rades, Thomas

    2018-04-05

    The aim of the present study was to compare two DSC-based methods to predict drug-polymer solubility (melting point depression method and recrystallization method) and propose a guideline for selecting the most suitable method based on physicochemical properties of both the drug and the polymer. Using the two methods, the solubilities of celecoxib, indomethacin, carbamazepine, and ritonavir in polyvinylpyrrolidone, hydroxypropyl methylcellulose, and Soluplus® were determined at elevated temperatures and extrapolated to room temperature using the Flory-Huggins model. For the melting point depression method, it was observed that a well-defined drug melting point was required in order to predict drug-polymer solubility, since the method is based on the depression of the melting point as a function of polymer content. In contrast to previous findings, it was possible to measure melting point depression up to 20 °C below the glass transition temperature (Tg) of the polymer for some systems. Nevertheless, in general it was possible to obtain solubility measurements at lower temperatures using polymers with a low Tg. Finally, for the recrystallization method it was found that the experimental composition dependence of the Tg must be differentiable for compositions ranging from 50 to 90% drug (w/w) so that one Tg corresponds to only one composition. Based on these findings, a guideline for selecting the most suitable thermal method to predict drug-polymer solubility based on the physicochemical properties of the drug and polymer is suggested in the form of a decision tree.
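
    A common way to turn melting-point-depression data into a drug-polymer interaction parameter is the Flory-Huggins relation shown in the sketch below; the equation form, symbols, and the assumption of a composition-independent chi are illustrative and may differ from the authors' exact fitting procedure.

        import numpy as np

        R = 8.314  # J/(mol K)

        def chi_from_melting_depression(Tm_mix, Tm_pure, dH_fus, phi_drug, m):
            """Solve the Flory-Huggins melting-point-depression relation
            1/Tm_mix - 1/Tm_pure = -(R/dH_fus)*[ln(phi_d) + (1 - 1/m)*phi_p + chi*phi_p**2]
            for chi at a single drug volume fraction phi_drug.
            dH_fus in J/mol, temperatures in K, m = polymer/drug molar volume ratio."""
            phi_p = 1.0 - phi_drug
            lhs = (1.0 / Tm_mix - 1.0 / Tm_pure) * (-dH_fus / R)
            return (lhs - np.log(phi_drug) - (1.0 - 1.0 / m) * phi_p) / phi_p ** 2

    Repeating the calculation over several compositions and extrapolating the resulting free-energy curve to room temperature is one route to the solubility estimate mentioned above; the sketch deliberately leaves that step out.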

  20. Determination of the Kwall correction factor for a cylindrical ionization chamber to measure air-kerma in 60Co gamma beams.

    PubMed

    Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M

    2002-07-21

    The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.

  1. Retrieving Leaf Area Index (LAI) Using Remote Sensing: Theories, Methods and Sensors

    PubMed Central

    Zheng, Guang; Moskal, L. Monika

    2009-01-01

    The ability to accurately and rapidly acquire leaf area index (LAI) is an indispensable component of process-based ecological research facilitating the understanding of gas-vegetation exchange phenomenon at an array of spatial scales from the leaf to the landscape. However, LAI is difficult to directly acquire for large spatial extents due to its time-consuming and work-intensive nature. Such efforts have been significantly improved by the emergence of optical and active remote sensing techniques. This paper reviews the definitions and theories of LAI measurement with respect to direct and indirect methods. Then, the methodologies for LAI retrieval with regard to the characteristics of a range of remotely sensed datasets are discussed. Remote sensing indirect methods are subdivided into two categories of passive and active remote sensing, which are further categorized as terrestrial, aerial and satellite-borne platforms. Due to a wide variety in spatial resolution of remotely sensed data and the requirements of ecological modeling, the scaling issue of LAI is discussed and special consideration is given to extrapolation of measurement to landscape and regional levels. PMID:22574042

  2. Retrieving Leaf Area Index (LAI) Using Remote Sensing: Theories, Methods and Sensors.

    PubMed

    Zheng, Guang; Moskal, L Monika

    2009-01-01

    The ability to accurately and rapidly acquire leaf area index (LAI) is an indispensable component of process-based ecological research facilitating the understanding of gas-vegetation exchange phenomenon at an array of spatial scales from the leaf to the landscape. However, LAI is difficult to directly acquire for large spatial extents due to its time-consuming and work-intensive nature. Such efforts have been significantly improved by the emergence of optical and active remote sensing techniques. This paper reviews the definitions and theories of LAI measurement with respect to direct and indirect methods. Then, the methodologies for LAI retrieval with regard to the characteristics of a range of remotely sensed datasets are discussed. Remote sensing indirect methods are subdivided into two categories of passive and active remote sensing, which are further categorized as terrestrial, aerial and satellite-borne platforms. Due to a wide variety in spatial resolution of remotely sensed data and the requirements of ecological modeling, the scaling issue of LAI is discussed and special consideration is given to extrapolation of measurement to landscape and regional levels.

  3. WORKSHOP ON APPLICATION OF STATISTICAL METHODS TO BIOLOGICALLY-BASED PHARMACOKINETIC MODELING FOR RISK ASSESSMENT

    EPA Science Inventory

    Biologically-based pharmacokinetic models are being increasingly used in the risk assessment of environmental chemicals. These models are based on biological, mathematical, statistical and engineering principles. Their potential uses in risk assessment include extrapolation betwe...

  4. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between
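
    Because Beale's ratio estimator is singled out above, its standard bias-corrected form is sketched below as a generic illustration; the variable layout and the daily time step are assumptions, not the study's implementation.

        import numpy as np

        def beale_ratio_load(sample_load, sample_flow, mean_flow_all, n_days):
            """Bias-corrected Beale ratio estimator of total load (standard textbook form,
            shown for illustration only).  sample_load / sample_flow: daily loads and flows
            on the sampled days; mean_flow_all: mean daily flow over the whole period."""
            l = np.asarray(sample_load, float)
            q = np.asarray(sample_flow, float)
            n = l.size
            lbar, qbar = l.mean(), q.mean()
            s_lq = np.cov(l, q, ddof=1)[0, 1]
            s_qq = q.var(ddof=1)
            correction = (1.0 + s_lq / (n * lbar * qbar)) / (1.0 + s_qq / (n * qbar ** 2))
            daily_load = mean_flow_all * (lbar / qbar) * correction
            return daily_load * n_days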

  5. An evaluation of methods for estimating decadal stream loads

    USGS Publications Warehouse

    Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-01-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between

  6. High-resolution wave-theory-based ultrasound reflection imaging using the split-step Fourier and globally optimized Fourier finite-difference methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wavenumber domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wavenumber domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data inputs to the method indicate that significant improvements are provided in both image quality and resolution.
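
    A minimal sketch of one inward-continuation (extrapolation) interval, following the two phase-shift steps described above, is given below for a 2-D monochromatic wavefield; the geometry, variable names, and the simple heterogeneity correction are illustrative assumptions rather than the patented implementation.

        import numpy as np

        def split_step_extrapolate(wavefield, omega, dx, dz, c_ref, c_local):
            """Continue a monochromatic wavefield u(x) at angular frequency omega
            one depth step dz inward (illustrative 2-D sketch).
            Step 1: phase shift in the frequency-wavenumber domain for the
                    homogeneous reference medium c_ref.
            Step 2: phase shift in the frequency-space domain compensating the
                    local deviation of the sound speed c_local(x) from c_ref."""
            nx = wavefield.size
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            kz = np.sqrt((omega / c_ref) ** 2 - kx ** 2 + 0j)  # evanescent parts decay
            u_k = np.fft.fft(wavefield)
            u = np.fft.ifft(u_k * np.exp(1j * kz * dz))                    # step 1
            u *= np.exp(1j * omega * (1.0 / c_local - 1.0 / c_ref) * dz)   # step 2
            return u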

  7. Vapor Pressure of Three Brominated Flame Retardants Determined via Knudsen Effusion Method

    PubMed Central

    Fu, Jinxia; Suuberg, Eric M.

    2012-01-01

    Brominated flame retardants (BFRs) have been used in a variety of consumer products in the past four decades. The vapor pressures for three widely used BFRs, that is, tetrabromobisphenol A (TBBPA), hexabromocyclododecane (HBCD), and octabromodiphenyl ethers (octaBDEs) mixtures, were determined using the Knudsen effusion method and compared to those of decabromodiphenyl ether (BDE209). The measured values, extrapolated to 298.15 K, are 8.47 × 10−9, 7.47 × 10−10, and 2.33 × 10−9 Pa, respectively. The enthalpies of sublimation for these BFRs were estimated using the Clausius-Clapeyron equation and are 143.6 ± 0.4, 153.7 ± 3.1, and 150.8 ± 3.2 kJ/mol, respectively. In addition, the enthalpies of fusion and melting temperatures for these BFRs were also measured in the present study. PMID:22213441
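
    The extrapolation to 298.15 K amounts to a straight-line fit of ln P against 1/T; a generic Clausius-Clapeyron sketch (not the authors' code) is shown below, with the sublimation enthalpy recovered from the slope and assumed constant over the measured range.

        import numpy as np

        R = 8.314  # J/(mol K)

        def extrapolate_vapor_pressure(T, P, T_target=298.15):
            """Fit ln(P) = -dH_sub/(R*T) + c to effusion data (T in K, P in Pa) and
            return (P at T_target in Pa, sublimation enthalpy in kJ/mol)."""
            slope, intercept = np.polyfit(1.0 / np.asarray(T, float),
                                          np.log(np.asarray(P, float)), 1)
            dH_sub = -slope * R / 1000.0
            P_target = np.exp(slope / T_target + intercept)
            return P_target, dH_sub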

  8. Automatable Measurement of Gas Exchange Rate in Streams: Oxygen-Carbon Method

    NASA Astrophysics Data System (ADS)

    Pennington, R.; Haggerty, R.; Argerich, A.; Wondzell, S. M.

    2015-12-01

    Gas exchange rates between streams and the atmosphere are critically important to measurement of in-stream ecologic processes, as well as fate and transport of hazardous pollutants such as mercury and PCBs. Methods to estimate gas exchange rates include empirical relations to hydraulics, and direct injection of a tracer gas such as propane or SF6. Empirical relations are inconsistent and inaccurate, particularly for lower order, high-roughness streams. Gas injections are labor-intensive, and measured gas exchange rates are difficult to extrapolate in time since they change with discharge and stream geometry. We propose a novel method for calculation of gas exchange rates utilizing O2, pCO2, pH, and temperature data. Measurements, which can be automated using data loggers and probes, are made on the upstream and downstream end of the study reach. Gas exchange rates are then calculated from a solution to the transport equations for oxygen and dissolved inorganic carbon. Field tests in steep, low order, high roughness streams of the HJ Andrews Experimental Forest indicate the method to be viable along stream reaches with high downstream gas concentration gradients and high rates of gas transfer velocity. Automated and continuous collection of oxygen and carbonate chemistry data is increasingly common, thus the method may be used to estimate gas exchange rates through time, and is well suited for interactivity with databases.

  9. Methods to Detect Nitric Oxide and its Metabolites in Biological Samples

    PubMed Central

    Bryan, Nathan S.; Grisham, Matthew B.

    2007-01-01

    Nitric oxide (NO) methodology is a complex and often confusing science and the focus of many debates and discussions concerning NO biochemistry. NO is involved in many physiological processes including regulation of blood pressure, immune response and neural communication. Therefore, its accurate detection and quantification is critical to understanding health and disease. Due to the extremely short physiological half-life of this gaseous free radical, alternative strategies for the detection of reaction products of NO biochemistry have been developed. The quantification of NO metabolites in biological samples provides valuable information with regard to in vivo NO production, bioavailability and metabolism. Simply sampling a single compartment such as blood or plasma may not always provide an accurate assessment of whole body NO status, particularly in tissues. Therefore, extrapolation of plasma or blood NO status to specific tissues of interest is no longer a valid approach. As a result, methods continue to be developed and validated which allow the detection and quantification of NO and NO-related products/metabolites in multiple compartments of experimental animals in vivo. This review is not an exhaustive or comprehensive discussion of all methods available for the detection of NO but rather a description of the most commonly used and practical methods which allow accurate and sensitive quantification of NO products/metabolites in multiple biological matrices under normal physiological conditions. PMID:17664129

  10. Lung Cancer Screening Using Low Dose CT Scanning in Germany. Extrapolation of results from the National Lung Screening Trial.

    PubMed

    Stang, Andreas; Schuler, Martin; Kowall, Bernd; Darwiche, Kaid; Kühl, Hilmar; Jöckel, Karl-Heinz

    2015-09-18

    It is now debated whether the screening of heavy smokers for lung cancer with low dose computed tomography (low dose CT) might lower their mortality due to lung cancer. We use data from the National Lung Screening Trial (NLST) in the USA to predict the likely effects of such screening in Germany. The number of heavy smokers aged 55-74 in Germany was extrapolated from survey data obtained by the Robert Koch Institute. Published data from the NLST were then used to estimate the likely effects of low dose CT screening of heavy smokers in Germany. If low dose CT screening were performed on 50% of the heavy smokers in Germany aged 55-74, an estimated 1 329 506 persons would undergo such screening. If the screening were repeated annually, then, over three years, 916 918 screening CTs would reveal suspect lesions, and the diagnosis of lung cancer would be confirmed thereafter in 32 826 persons. At least one positive test result in three years would be obtained in 39.1% of the participants (519 837 persons). 4155 deaths from lung cancer would be prevented over 6.5 years, and the number of persons aged 55-74 who die of lung cancer in Germany would fall by 2.6%. 12 449 persons would have at least one complication, and 1074 persons would die in the 60 days following screening. The screening of heavy smokers for lung cancer can lower their risk of dying of lung cancer by 20% in relative terms, corresponding to an absolute risk reduction of 0.3 percentage points. These figures can provide the background for a critical discussion of the putative utility of this type of screening in Germany.

  11. Spectral Irradiance Calibration in the Infrared. Part 7; New Composite Spectra, Comparison with Model Atmospheres, and Far-Infrared Extrapolations

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Witteborn, Fred C.; Carbon, Duane F.; Davies, John K.; Wooden, Diane H.; Bregman, Jesse D.

    1996-01-01

    We present five new absolutely calibrated continuous stellar spectra constructed as far as possible from spectral fragments observed from the ground, the Kuiper Airborne Observatory (KAO), and the IRAS Low Resolution Spectrometer. These stars (alpha Boo, gamma Dra, alpha Cet, gamma Cru, and mu UMa) augment our six published, absolutely calibrated spectra of K and early-M giants. All spectra have a common calibration pedigree. A revised composite for alpha Boo has been constructed from higher quality spectral fragments than our previously published one. The spectrum of gamma Dra was created in direct response to the needs of instruments aboard the Infrared Space Observatory (ISO); this star's location near the north ecliptic pole renders it highly visible throughout the mission. We compare all our low-resolution composite spectra with Kurucz model atmospheres and find good agreement in shape, with the obvious exception of the SiO fundamental, still lacking in current grids of model atmospheres. The CO fundamental seems slightly too deep in these models, but this could reflect our use of generic models with solar metal abundances rather than models specific to the metallicities of the individual stars. Angular diameters derived from these spectra and models are in excellent agreement with the best observed diameters. The ratio of our adopted Sirius and Vega models is vindicated by spectral observations. We compare IRAS fluxes predicted from our cool stellar spectra with those observed and conclude that, at 12 and 25 microns, flux densities measured by IRAS should be revised downwards by about 4.1% and 5.7%, respectively, for consistency with our absolute calibration. We have provided extrapolated continuum versions of these spectra to 300 microns, in direct support of ISO (PHT and LWS instruments). These spectra are consistent with IRAS flux densities at 60 and 100 microns.

  12. Analysis of trends in experimental observables: Reconstruction of the implosion dynamics and implications for fusion yield extrapolation for direct-drive cryogenic targets on OMEGA

    NASA Astrophysics Data System (ADS)

    Bose, A.; Betti, R.; Mangino, D.; Woo, K. M.; Patel, D.; Christopherson, A. R.; Gopalaswamy, V.; Mannion, O. M.; Regan, S. P.; Goncharov, V. N.; Edgell, D. H.; Forrest, C. J.; Frenje, J. A.; Gatu Johnson, M.; Yu Glebov, V.; Igumenshchev, I. V.; Knauer, J. P.; Marshall, F. J.; Radha, P. B.; Shah, R.; Stoeckl, C.; Theobald, W.; Sangster, T. C.; Shvarts, D.; Campbell, E. M.

    2018-06-01

    This paper describes a technique for identifying trends in performance degradation for inertial confinement fusion implosion experiments. It is based on reconstruction of the implosion core with a combination of low- and mid-mode asymmetries. This technique was applied to an ensemble of hydro-equivalent deuterium-tritium implosions on OMEGA which achieved inferred hot-spot pressures ≈56 ± 7 Gbar [Regan et al., Phys. Rev. Lett. 117, 025001 (2016)]. All the experimental observables pertaining to the core could be reconstructed simultaneously with the same combination of low and mid-modes. This suggests that in addition to low modes, which can cause a degradation of the stagnation pressure, mid-modes are present which reduce the size of the neutron and x-ray producing volume. The systematic analysis shows that asymmetries can cause an overestimation of the total areal density in these implosions. It is also found that an improvement in implosion symmetry resulting from correction of either the systematic mid or low modes would result in an increase in the hot-spot pressure from 56 Gbar to ≈ 80 Gbar and could produce a burning plasma when the implosion core is extrapolated to an equivalent 1.9 MJ symmetric direct illumination [Bose et al., Phys. Rev. E 94, 011201(R) (2016)].

  13. Measurement of the {sup 13}C(α,n){sup 16}O reaction with the Trojan horse method: Focus on the sub threshold resonance at −3 keV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Cognata, M.; Spitaleri, C.; Guardo, G. L.

    2014-05-02

    The {sup 13}C(α,n){sup 16}O reaction is the neutron source of the main component of the s-process. The astrophysical S(E)-factor is dominated by the −3 keV sub-threshold resonance due to the 6.356 MeV level in {sup 17}O. Its contribution is still controversial as extrapolations, e.g., through R-matrix calculations, and indirect techniques, such as the asymptotic normalization coefficient (ANC), yield inconsistent results. Therefore, we have applied the Trojan Horse Method (THM) to the {sup 13}C({sup 6}Li,n{sup 16}O)d reaction to measure its contribution. For the first time, the ANC for the 6.356 MeV level has been deduced through the THM, allowing an unprecedented accuracy to be attained. Though a larger ANC for the 6.356 MeV level is measured, our experimental S(E) factor agrees with the most recent extrapolation in the literature in the 140-230 keV energy interval, the accuracy being greatly enhanced thanks to this innovative approach, merging together two well-established indirect techniques, namely, the THM and the ANC.

  14. Redox switching and oxygen evolution at oxidized metal and metal oxide electrodes: iron in base.

    PubMed

    Lyons, Michael E G; Doyle, Richard L; Brandon, Michael P

    2011-12-28

    Outstanding issues regarding the film formation, redox switching characteristics and the oxygen evolution reaction (OER) electrocatalytic behaviour of multicycled iron oxyhydroxide films in aqueous alkaline solution have been revisited. The oxide is grown using a repetitive potential multicycling technique, and the mechanism of the latter hydrous oxide formation process has been discussed. A duplex layer model of the oxide/solution interphase region is proposed. The acid/base behaviour of the hydrous oxide and the microdispersed nature of the latter material has been emphasised. The hydrous oxide is considered as a porous assembly of interlinked octahedrally coordinated anionic metal oxyhydroxide surfaquo complexes which form an open network structure. The latter contains considerable quantities of water molecules which facilitate hydroxide ion discharge at the metal site during active oxygen evolution, and also charge compensating cations. The dynamics of redox switching has been quantified via analysis of the cyclic voltammetry response as a function of potential sweep rate using the Laviron-Aoki electron hopping diffusion model by analogy with redox polymer modified electrodes. Steady state Tafel plot analysis has been used to elucidate the kinetics and mechanism of oxygen evolution. Tafel slope values of ca. 60 mV dec(-1) and ca. 120 mV dec(-1) are found at low and high overpotentials respectively, whereas the reaction order with respect to hydroxide ion activity changes from ca. 3/2 to ca. 1 as the potential is increased. These observations are rationalised in terms of a kinetic scheme involving Temkin adsorption and the rate determining formation of a physisorbed hydrogen peroxide intermediate on the oxide surface. The dual Tafel slope behaviour is ascribed to the potential dependence of the surface coverage of adsorbed intermediates.
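
    Since steady-state Tafel analysis recurs throughout these records, a generic sketch of extracting a Tafel slope and exchange current density from polarization data is included below; the choice of the linear window and the variable names are placeholders, and this is not the authors' analysis code.

        import numpy as np

        def tafel_fit(overpotential, current_density, eta_window):
            """Fit eta = a + b*log10(|j|) over a user-chosen linear Tafel region.
            Returns the Tafel slope b (V per decade) and the exchange current
            density j0 = 10**(-a/b) implied by extrapolating to eta = 0.
            eta_window = (eta_min, eta_max) selects the linear region by hand."""
            eta = np.asarray(overpotential, float)
            j = np.abs(np.asarray(current_density, float))
            mask = (eta >= eta_window[0]) & (eta <= eta_window[1]) & (j > 0)
            b, a = np.polyfit(np.log10(j[mask]), eta[mask], 1)  # eta = b*log10(j) + a
            j0 = 10.0 ** (-a / b)
            return b, j0

    Fitting two windows (low and high overpotential) in this way is one simple route to the dual-slope behaviour, ca. 60 and 120 mV per decade, discussed above.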

  15. Cross-species extrapolation of toxicity information using the ...

    EPA Pesticide Factsheets

    In the United States, the Endocrine Disruptor Screening Program (EDSP) was established to identify chemicals that may lead to adverse effects via perturbation of the endocrine system (i.e., estrogen, androgen, and thyroid hormone systems). In the mid-1990s the EDSP adopted a two-tiered approach for screening chemicals that applied standardized in vitro and in vivo toxicity tests. The Tier 1 screening assays were designed to identify substances that have the potential of interacting with the endocrine system, and Tier 2 testing was developed to identify adverse effects caused by the chemical, with documentation of dose-response relationships. While this tiered approach was effective in identifying possible endocrine disrupting chemicals, the cost and time to screen a single chemical were significant. Therefore, in 2012 the EDSP proposed a transition to make greater use of computational approaches (in silico) and high-throughput screening (HTS; in vitro) assays to more rapidly and cost-efficiently screen chemicals for endocrine activity. This transition from resource-intensive, primarily in vivo, screening methods to more pathway-based approaches aligns with the simultaneously occurring transformation in toxicity testing termed “Toxicity Testing in the 21st Century”, which shifts the focus to the disturbance of the biological pathway predictive of the observable toxic effects. Examples of such screening tools include the US Environmental Protection Agency's

  16. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models of chemical stability (related-substance growth), namely a point-by-point estimation method, a linear fit method, and an isoconversion method, were compared in detail for predicting room-temperature shelf-life from high-temperature data. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Data calculated with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life under ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error.
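
    As a rough illustration of the isoconversion idea, the sketch below fits the times needed to reach the specification limit at each accelerated temperature to the Arrhenius equation and extrapolates to 25 °C; it is a generic sketch that assumes the isoconversion times have already been obtained at each condition, and it is not one of the three models compared in the paper.

        import numpy as np

        R = 8.314  # J/(mol K)

        def arrhenius_shelf_life(temps_C, t_iso, T_target_C=25.0):
            """Fit ln(t_iso) = Ea/(R*T) + c, where t_iso is the time for the degradant
            to reach its specification limit at each accelerated temperature, then
            extrapolate to T_target_C.  Returns (shelf life, Ea in kJ/mol)."""
            T = np.asarray(temps_C, float) + 273.15
            slope, intercept = np.polyfit(1.0 / T, np.log(np.asarray(t_iso, float)), 1)
            Ea = slope * R / 1000.0                       # slope = Ea/R
            shelf_life = np.exp(slope / (T_target_C + 273.15) + intercept)
            return shelf_life, Ea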

  17. Cluster Dynamical Mean Field Methods and the Momentum-selective Mott transition

    NASA Astrophysics Data System (ADS)

    Gull, Emanuel

    2011-03-01

    Innovations in methodology and computational power have enabled cluster dynamical mean field calculations of the Hubbard model with interaction strengths and band structures representative of high temperature copper oxide superconductors, for clusters large enough that the thermodynamic limit behavior may be determined. We present the methods and show how extrapolations to the thermodynamic limit work in practice. We show that the Hubbard model with next-nearest neighbor hopping at intermediate interaction strength captures much of the exotic behavior characteristic of the high temperature superconductors. An important feature of the results is a pseudogap for hole doping but not for electron doping. The pseudogap regime is characterized by a gap for momenta near the Brillouin zone face and gapless behavior near the zone diagonal. For dopings outside of the pseudogap regime we find scattering rates which vary around the Fermi surface in a way consistent with recent transport measurements. Using the maximum entropy method we calculate spectra, self-energies, and response functions for Raman spectroscopy and optical conductivities, finding results also in good agreement with experiment. Olivier Parcollet, Philipp Werner, Nan Lin, Michel Ferrero, Antoine Georges, Andrew J. Millis; NSF-DMR-0705847.

  18. Efficient hydrogen production on MoNi4 electrocatalysts with fast water dissociation kinetics

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Wang, Tao; Liu, Pan; Liao, Zhongquan; Liu, Shaohua; Zhuang, Xiaodong; Chen, Mingwei; Zschech, Ehrenfried; Feng, Xinliang

    2017-05-01

    Various platinum-free electrocatalysts have been explored for hydrogen evolution reaction in acidic solutions. However, in economical water-alkali electrolysers, sluggish water dissociation kinetics (Volmer step) on platinum-free electrocatalysts results in poor hydrogen-production activities. Here we report a MoNi4 electrocatalyst supported by MoO2 cuboids on nickel foam (MoNi4/MoO2@Ni), which is constructed by controlling the outward diffusion of nickel atoms on annealing precursor NiMoO4 cuboids on nickel foam. Experimental and theoretical results confirm that a rapid Tafel-step-decided hydrogen evolution proceeds on MoNi4 electrocatalyst. As a result, the MoNi4 electrocatalyst exhibits zero onset overpotential, an overpotential of 15 mV at 10 mA cm-2 and a low Tafel slope of 30 mV per decade in 1 M potassium hydroxide electrolyte, which are comparable to the results for platinum and superior to those for state-of-the-art platinum-free electrocatalysts. Benefiting from its scalable preparation and stability, the MoNi4 electrocatalyst is promising for practical water-alkali electrolysers.

  19. Curcumin Derivatives as Green Corrosion Inhibitors for α-Brass in Nitric Acid Solution

    NASA Astrophysics Data System (ADS)

    Fouda, A. S.; Elattar, K. M.

    2012-11-01

    1,7-Bis-(4-hydroxy-3-methoxy-phenyl)-hepta-1,6-diene-4-arylazo-3,5-diones (I-V) have been investigated as corrosion inhibitors for α-brass in 2 M nitric acid solution using weight-loss and galvanostatic polarization techniques. The efficiency of the inhibitors increases with the increase in the inhibitor concentration but decreases with a rise in temperature. The conjoint effect of the curcumin derivatives and KSCN has also been studied. The apparent activation energy (Ea*) and other thermodynamic parameters for the corrosion process have also been calculated. The galvanostatic polarization data indicated that the inhibitors were of mixed type, although the cathode was more polarized than the anode. The slopes of the cathodic and anodic Tafel lines (bc and ba) remain approximately equal for various inhibitor concentrations. However, both Tafel slopes increase as the inhibitor concentration increases. The adsorption of these compounds on the α-brass surface has been found to obey Frumkin's adsorption isotherm. The mechanism of inhibition was discussed in light of the chemical structure of the inhibitors studied.

  20. Progress in the Simulation of Steady and Time-Dependent Flows with 3D Parallel Unstructured Cartesian Methods

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Murman, S. M.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The proposed paper will present recent extensions in the development of an efficient Euler solver for adaptively-refined Cartesian meshes with embedded boundaries. The paper will focus on extensions of the basic method to include solution adaptation, time-dependent flow simulation, and arbitrary rigid domain motion. The parallel multilevel method makes use of on-the-fly parallel domain decomposition to achieve extremely good scalability on large numbers of processors, and is coupled with an automatic coarse mesh generation algorithm for efficient processing by a multigrid smoother. Numerical results are presented demonstrating parallel speed-ups of up to 435 on 512 processors. Solution-based adaptation may be keyed off truncation error estimates using tau-extrapolation or a variety of feature-detection-based refinement parameters. The multigrid method is extended to time-dependent flows through the use of a dual-time approach. The extension to rigid domain motion uses an Arbitrary Lagrangian-Eulerian (ALE) formulation, and results will be presented for a variety of two- and three-dimensional example problems with both simple and complex geometry.

  1. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K_w K_cep) has been strongly challenged by Monte Carlo (MC) calculation methods (K_wall). Using the linear extrapolation method with experimental data, K_w K_cep was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K_w K_cep values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K_wall for these three chambers. Use of the calculated K_wall values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air kerma.
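
    For context, the traditional K_w K_cep determination examined here is essentially a straight-line extrapolation of the measured ionization current to zero wall thickness; the sketch below shows that generic procedure with placeholder variable names, not the standards-laboratory code.

        import numpy as np

        def kwall_by_linear_extrapolation(wall_thickness, ion_current, t_nominal):
            """Traditional wall-correction estimate: fit a straight line to currents
            measured with increasing wall thickness (add-on caps), extrapolate to
            zero thickness, and take K_w = I(0) / I(t_nominal) for the nominal wall.
            (Generic sketch of the procedure the paper argues against.)"""
            slope, intercept = np.polyfit(np.asarray(wall_thickness, float),
                                          np.asarray(ion_current, float), 1)
            return intercept / (slope * t_nominal + intercept)

        # Attenuation makes the fitted slope negative, so K_w comes out slightly
        # above 1 and corrects the measured current upward.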

  2. Loss tolerant speech decoder for telecommunications

    NASA Technical Reports Server (NTRS)

    Prieto, Jr., Jaime L. (Inventor)

    1999-01-01

    A method and device for extrapolating past signal-history data for insertion into missing data segments in order to conceal digital speech frame errors. The extrapolation method uses past-signal history that is stored in a buffer. The method is implemented with a device that utilizes a finite-impulse response (FIR) multi-layer feed-forward artificial neural network that is trained by back-propagation for one-step extrapolation of speech compression algorithm (SCA) parameters. Once a speech connection has been established, the speech compression algorithm device begins sending encoded speech frames. As the speech frames are received, they are decoded and converted back into speech signal voltages. During the normal decoding process, pre-processing of the required SCA parameters will occur and the results stored in the past-history buffer. If a speech frame is detected to be lost or in error, then extrapolation modules are executed and replacement SCA parameters are generated and sent as the parameters required by the SCA. In this way, the information transfer to the SCA is transparent, and the SCA processing continues as usual. The listener will not normally notice that a speech frame has been lost because of the smooth transition between the last-received, lost, and next-received speech frames.
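
    As a simplified stand-in for the patented FIR feed-forward network, the sketch below fits a linear one-step predictor over a window of buffered parameter frames and uses it to synthesize a replacement frame when an erasure is flagged; the window length and parameter layout are assumptions, and the actual method trains a neural network by back-propagation rather than solving a least-squares problem.

        import numpy as np

        def fit_one_step_predictor(history, order=4):
            """Least-squares linear predictor of the next parameter frame from the
            previous `order` frames.  history: ndarray of shape (n_frames, n_params)."""
            X = np.hstack([history[i:len(history) - order + i]
                           for i in range(order)])        # rows: order consecutive frames
            Y = history[order:]                            # target: the following frame
            W, *_ = np.linalg.lstsq(X, Y, rcond=None)
            return W

        def conceal_lost_frame(recent_frames, W):
            """Extrapolate one frame of codec parameters from the buffered
            past-history frames when a frame erasure is flagged."""
            x = np.concatenate(recent_frames)              # last `order` frames, oldest first
            return x @ W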

  3. Electrochemical maps and movies of the hydrogen evolution reaction on natural crystals of molybdenite (MoS2): basal vs. edge plane activity

    PubMed Central

    Kang, Minkyung; Maddar, Faduma M.; Li, Fengwang; Walker, Marc; Zhang, Jie

    2017-01-01

    Two dimensional (2D) semiconductor materials, such as molybdenum disulfide (MoS2) have attracted considerable interest in a range of chemical and electrochemical applications, for example, as an abundant and low-cost alternative electrocatalyst to platinum for the hydrogen evolution reaction (HER). While it has been proposed that the edge plane of MoS2 possesses high catalytic activity for the HER relative to the “catalytically inert” basal plane, this conclusion has been drawn mainly from macroscale electrochemical (voltammetric) measurements, which reflect the “average” electrocatalytic behavior of complex electrode ensembles. In this work, we report the first spatially-resolved measurements of HER activity on natural crystals of molybdenite, achieved using voltammetric scanning electrochemical cell microscopy (SECCM), whereby pixel-resolved linear-sweep voltammogram (LSV) measurements have allowed the HER to be visualized at multiple different potentials to construct electrochemical flux movies with nanoscale resolution. Key features of the SECCM technique are that characteristic surface sites can be targeted and analyzed in detail and, further, that the electrocatalyst area is known with good precision (in contrast to many macroscale measurements on supported catalysts). Through correlation of the local voltammetric response with information from scanning electron microscopy (SEM) and atomic force microscopy (AFM) in a multi-microscopy approach, it is demonstrated unequivocally that while the basal plane of bulk MoS2 (2H crystal phase) possesses significant activity, the HER is greatly facilitated at the edge plane (e.g., surface defects such as steps, edges or crevices). Semi-quantitative treatment of the voltammetric data reveals that the HER at the basal plane of MoS2 has a Tafel slope and exchange current density (J 0) of ∼120 mV per decade and 2.5 × 10–6 A cm–2 (comparable to polycrystalline Co, Ni, Cu and Au), respectively, while the edge

  4. New Methods of Enhancing the Thermal Durability of Silica Optical Fibers.

    PubMed

    Wysokiński, Karol; Stańczyk, Tomasz; Gibała, Katarzyna; Tenderenda, Tadeusz; Ziołowicz, Anna; Słowikowski, Mateusz; Broczkowska, Małgorzata; Nasiłowski, Tomasz

    2014-10-13

    Microstructured optical fibers can be precisely tailored for many different applications, out of which sensing has been found to be particularly interesting. However, placing silica optical fiber sensors in harsh environments results in their quick destruction as a result of the hydrolysis process. In this paper, the degradation mechanism of bare and metal-coated optical fibers at high temperatures under longitudinal strain has been determined by detailed analysis of the thermal behavior of silica and metals, like copper and nickel. We furthermore propose a novel method of enhancing the lifetime of optical fibers by the deposition of electroless nickel-phosphorous alloy in a low-temperature chemical process. The best results were obtained for a coating comprising an inner layer of copper and outer layer of low phosphorous nickel. Lifetime values obtained during the annealing experiments were extrapolated to other temperatures by a dedicated model elaborated by the authors. The estimated copper-coated optical fiber lifetime under cycled longitudinal strain reached 31 h at 450 °C.

  5. Charge transfer kinetics at the solid-solid interface in porous electrodes

    NASA Astrophysics Data System (ADS)

    Bai, Peng; Bazant, Martin Z.

    2014-04-01

    Interfacial charge transfer is widely assumed to obey the Butler-Volmer kinetics. For certain liquid-solid interfaces, the Marcus-Hush-Chidsey theory is more accurate and predictive, but it has not been applied to porous electrodes. Here we report a simple method to extract the charge transfer rates in carbon-coated LiFePO4 porous electrodes from chronoamperometry experiments, obtaining curved Tafel plots that contradict the Butler-Volmer equation but fit the Marcus-Hush-Chidsey prediction over a range of temperatures. The fitted reorganization energy matches the Born solvation energy for electron transfer from carbon to the iron redox site. The kinetics are thus limited by electron transfer at the solid-solid (carbon-LixFePO4) interface rather than by ion transfer at the liquid-solid interface, as previously assumed. The proposed experimental method generalizes Chidsey’s method for phase-transforming particles and porous electrodes, and the results show the need to incorporate Marcus kinetics in modelling batteries and other electrochemical systems.
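
    For context, the two rate laws contrasted in this record can be written as follows. The Butler-Volmer form is standard; the Marcus-Hush-Chidsey integral is quoted in a commonly used form whose sign convention for the overpotential varies between authors:

    ```latex
    % Butler-Volmer current-overpotential relation (standard form)
    j \;=\; j_0\!\left[\exp\!\left(\frac{\alpha_a F \eta}{RT}\right)
                     - \exp\!\left(-\frac{\alpha_c F \eta}{RT}\right)\right]

    % Marcus-Hush-Chidsey rate: integral over electronic energies x measured from
    % the Fermi level; \lambda is the reorganization energy. The \pm selects the
    % reduction/oxidation branch and depends on the overpotential convention.
    k_{\mathrm{red/ox}}(\eta) \;\propto\;
    \int_{-\infty}^{\infty}
    \frac{\exp\!\left[-\dfrac{\bigl(x-\lambda \pm e\eta\bigr)^{2}}{4\lambda k_{\mathrm B}T}\right]}
         {1+\exp\!\left(x/k_{\mathrm B}T\right)}\;\mathrm{d}x
    ```

    Unlike the Butler-Volmer exponentials, the integral saturates at large |η|, which is what produces the curved Tafel plots reported here; the curvature is controlled by the reorganization energy λ.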

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latychevskaia, Tatiana, E-mail: tatiana@physik.uzh.ch; Fink, Hans-Werner; Chushkin, Yuriy

    Coherent diffraction imaging is a high-resolution imaging technique whose potential can be greatly enhanced by applying the extrapolation method presented here. We demonstrate the enhancement in resolution of a non-periodical object reconstructed from an experimental X-ray diffraction record which contains about 10% missing information, including the pixels in the center of the diffraction pattern. A diffraction pattern is extrapolated beyond the detector area and as a result, the object is reconstructed at an enhanced resolution and better agreement with experimental amplitudes is achieved. The optimal parameters for the iterative routine and the limits of the extrapolation procedure are discussed.
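
    A generic iterative scheme of the kind alluded to here alternates between a real-space support constraint and the measured Fourier amplitudes, while letting the missing-centre and out-of-detector pixels float so that they are effectively extrapolated. The NumPy sketch below is a minimal error-reduction-style illustration under that reading; the padding factor, support handling and iteration count are arbitrary assumptions, not the authors' routine.

    ```python
    # Minimal error-reduction-style sketch of diffraction-pattern extrapolation:
    # measured amplitudes are enforced only where they exist; missing-centre and
    # padded (out-of-detector) pixels are left free and are filled in by the
    # real-space support constraint. All parameters are illustrative assumptions.
    import numpy as np

    def extrapolate_cdi(measured_amp, known_mask, support, pad=2, n_iter=500, seed=0):
        """measured_amp, known_mask: (n, n) Fourier amplitudes and validity mask.
        support: boolean (pad*n, pad*n) real-space object support on the padded grid."""
        rng = np.random.default_rng(seed)
        n = measured_amp.shape[0]
        big = pad * n
        amp = np.zeros((big, big))
        mask = np.zeros((big, big), dtype=bool)
        lo = (big - n) // 2
        amp[lo:lo + n, lo:lo + n] = measured_amp
        mask[lo:lo + n, lo:lo + n] = known_mask

        # Start from a random object confined to the support.
        obj = support * rng.random((big, big))
        for _ in range(n_iter):
            F = np.fft.fftshift(np.fft.fft2(obj))
            # Fourier constraint: keep measured amplitudes where known, let the
            # missing-centre and out-of-detector pixels float (extrapolation).
            F = np.where(mask, amp * np.exp(1j * np.angle(F)), F)
            obj = np.fft.ifft2(np.fft.ifftshift(F)).real
            # Real-space constraint: object support and non-negativity.
            obj = np.where(support & (obj > 0), obj, 0.0)
        return obj, np.abs(F)   # object estimate and extrapolated amplitude pattern
    ```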

  7. An Optimal Deconvolution Method for Reconstructing Pneumatically Distorted Near-Field Sonic Boom Pressure Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Haering, Edward A., Jr.; Ehernberger, L. J.

    1996-01-01

    In-flight measurements of the SR-71 near-field sonic boom were obtained by an F-16XL airplane at flightpath separation distances from 40 to 740 ft. Twenty-two signatures were obtained from Mach 1.60 to Mach 1.84 and altitudes from 47,600 to 49,150 ft. The shock wave signatures were measured by the total and static sensors on the F-16XL noseboom. These near-field signature measurements were distorted by pneumatic attenuation in the pitot-static sensors; these effects were accounted for using optimal deconvolution. Measurement system magnitude and phase characteristics were determined from ground-based step-response tests and extrapolated to flight conditions using analytical models. Deconvolution was implemented using Fourier transform methods. Comparisons of the shock wave signatures reconstructed from the total and static pressure data are presented. The good agreement achieved gives confidence in the quality of the reconstruction analysis. Although originally developed to reconstruct the sonic boom signatures from SR-71 sonic boom flight tests, the methods presented here generally apply to other types of highly attenuated or distorted pneumatic measurements.
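
    A minimal frequency-domain sketch of the kind of correction described here is Wiener-style deconvolution of the recorded pressure trace by the measurement-system response. The first-order-lag tube model, time constant and regularization level below are assumptions for illustration, not the system characteristics identified in the flight tests.

    ```python
    # Wiener-style frequency-domain deconvolution of a pneumatically attenuated
    # pressure trace. The first-order-lag response H(f) = 1/(1 + i*2*pi*f*tau),
    # the time constant and the regularization level are illustrative assumptions.
    import numpy as np

    def deconvolve_pressure(y, fs, tau=0.02, eps=1e-3):
        """y: measured pressure samples, fs: sample rate [Hz], tau: lag constant [s]."""
        n = y.size
        f = np.fft.rfftfreq(n, d=1.0 / fs)
        H = 1.0 / (1.0 + 2j * np.pi * f * tau)       # assumed tubing/sensor response
        Y = np.fft.rfft(y)
        X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)  # Wiener deconvolution
        return np.fft.irfft(X, n=n)

    # Example: recover a ramp-like pressure edge smeared by the assumed lag.
    fs = 2000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    true = np.where((t > 0.4) & (t < 0.6), 1.0 - (t - 0.4) * 5, 0.0)   # crude ramp
    measured = np.convolve(true, np.exp(-t / 0.02), mode="full")[: t.size]
    measured *= (1.0 / fs) / 0.02    # normalize the first-order-lag kernel
    recovered = deconvolve_pressure(measured, fs)
    ```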

  8. Ambiguities and completeness of SAS data analysis: investigations of apoferritin by SAXS/SANS EID and SEC-SAXS methods

    NASA Astrophysics Data System (ADS)

    Zabelskii, D. V.; Vlasov, A. V.; Ryzhykau, Yu L.; Murugova, T. N.; Brennich, M.; Soloviov, D. V.; Ivankov, O. I.; Borshchevskiy, V. I.; Mishin, A. V.; Rogachev, A. V.; Round, A.; Dencher, N. A.; Büldt, G.; Gordeliy, V. I.; Kuklin, A. I.

    2018-03-01

    The method of small angle scattering (SAS) is widely used in biophysical research of proteins in aqueous solution. Obtaining low-resolution structures of proteins remains highly valuable despite advances in high-resolution methods such as X-ray diffraction, cryo-EM, etc. SAS offers the unique possibility to obtain structural information under conditions close to those of functional assays, i.e. in solution, without different additives, in the mg/mL concentration range. The SAS method has a long history, but there are still many uncertainties related to data treatment. We compared 1D SAS profiles of apoferritin obtained by X-ray diffraction (XRD) and SAS methods. It is shown that SAS curves computed for the X-ray crystallographic structure of apoferritin differ more significantly than would be expected from the resolution of the SAS instrument. The extrapolation to infinite dilution (EID) method does not sufficiently exclude dimerization and oligomerization effects and therefore cannot guarantee the total absence of a dimer contribution in the final SAS curve. In this study, we show that the EID SAXS, EID SANS and SEC-SAXS methods give complementary results, and when they are used together they yield the most accurate results and the highest confidence in SAS data analysis of proteins.
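
    For readers unfamiliar with the EID step discussed above, the textbook outline is to extrapolate the concentration-normalized intensity to zero concentration at each q and then read size parameters from the Guinier regime; B(q) below is a generic first-order interparticle-interference coefficient (background, not a result of the paper):

    ```latex
    % Extrapolation to infinite dilution: at each q the normalized intensity is
    % fitted to first order in concentration c and extrapolated to c -> 0
    \frac{I(q,c)}{c} \;=\; I_{\infty}(q)\,\bigl[\,1 + B(q)\,c + O(c^{2})\,\bigr]
    \quad\Longrightarrow\quad
    I_{\infty}(q) \;=\; \lim_{c\to 0}\frac{I(q,c)}{c}

    % Guinier approximation applied to the low-q part of the extrapolated curve
    I(q) \;\simeq\; I(0)\,\exp\!\left(-\frac{q^{2}R_g^{2}}{3}\right),
    \qquad q\,R_g \lesssim 1.3
    ```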

  9. The Vertical Flux Method (VFM) for regional estimates of temporally and spatially varying nitrate fluxes in unsaturated zone and groundwater

    NASA Astrophysics Data System (ADS)

    Green, C. T.; Liao, L.; Nolan, B. T.; Juckem, P. F.; Ransom, K.; Harter, T.

    2017-12-01

    Process-based modeling of regional NO3- fluxes to groundwater is critical for understanding and managing water quality. Measurements of atmospheric tracers of groundwater age and dissolved-gas indicators of denitrification progress have potential to improve estimates of NO3- reactive transport processes. This presentation introduces a regionalized version of a vertical flux method (VFM) that uses simple mathematical estimates of advective-dispersive reactive transport with regularization procedures to calibrate estimated tracer concentrations to observed equivalents. The calibrated VFM provides estimates of chemical, hydrologic and reaction parameters (source concentration time series, recharge, effective porosity, dispersivity, reaction rate coefficients) and derived values (e.g. mean unsaturated zone travel time, eventual depth of the NO3- front) for individual wells. Statistical learning methods are used to extrapolate parameters and predictions from wells to continuous areas. The regional VFM was applied to 473 well samples in central-eastern Wisconsin. Chemical measurements included O2, NO3-, N2 from denitrification, and atmospheric tracers of groundwater age including carbon-14, chlorofluorocarbons, tritium, and tritiogenic helium. VFM results were consistent with observed chemistry, and calibrated parameters were in line with independent estimates. Results indicated that (1) unsaturated zone travel times were a substantial portion of the transit time to wells and streams, (2) fractions of N leached to groundwater have changed over time, with increasing fractions from manure and decreasing fractions from fertilizer, and (3) under current practices and conditions, 60% of the shallow aquifer will eventually be affected by NO3- contamination. Based on GIS coverages of variables related to soils, land use and hydrology, the VFM results at individual wells were extrapolated regionally using boosted regression trees, a statistical learning approach, that related

  10. A novel cost-effectiveness model of prescription eicosapentaenoic acid extrapolated to secondary prevention of cardiovascular diseases in the United States.

    PubMed

    Philip, Sephy; Chowdhury, Sumita; Nelson, John R; Benjamin Everett, P; Hulme-Lowe, Carolyn K; Schmier, Jordana K

    2016-10-01

    Given the substantial economic and health burden of cardiovascular disease and the residual cardiovascular risk that remains despite statin therapy, adjunctive therapies are needed. The purpose of this model was to estimate the cost-effectiveness of high-purity prescription eicosapentaenoic acid (EPA) omega-3 fatty acid intervention in secondary prevention of cardiovascular diseases in statin-treated patient populations extrapolated to the US. The deterministic model utilized inputs for cardiovascular events, costs, and utilities from published sources. Expert opinion was used when assumptions were required. The model takes the perspective of a US commercial, third-party payer with costs presented in 2014 US dollars. The model extends to 5 years and applies a 3% discount rate to costs and benefits. Sensitivity analyses were conducted to explore the influence of various input parameters on costs and outcomes. Using base case parameters, EPA-plus-statin therapy compared with statin monotherapy resulted in cost savings (total 5-year costs $29,393 vs $30,587 per person, respectively) and improved utilities (average 3.627 vs 3.575, respectively). The results were not sensitive to multiple variations in model inputs and consistently identified EPA-plus-statin therapy to be the economically dominant strategy, with both lower costs and better patient utilities over the modeled 5-year period. The model is only an approximation of reality and does not capture all complexities of a real-world scenario without further inputs from ongoing trials. The model may under-estimate the cost-effectiveness of EPA-plus-statin therapy because it allows only a single event per patient. This novel model suggests that combining EPA with statin therapy for secondary prevention of cardiovascular disease in the US may be a cost-saving and more compelling intervention than statin monotherapy.

  11. Free-standing ternary NiWP film for efficient water oxidation reaction

    NASA Astrophysics Data System (ADS)

    Yang, Yunpeng; Zhou, Kuo; Ma, Lili; Liang, Yanqin; Yang, Xianjin; Cui, Zhenduo; Zhu, Shengli; Li, Zhaoyang

    2018-03-01

    Highly efficient catalysts for the oxygen evolution reaction (OER) are of great concern for improving the energy efficiency of water splitting. Here we report a high-performance OER electrocatalyst of nickel-tungsten-phosphorus (NiWP) film prepared by a template method. This free-standing ternary electrocatalyst exhibits remarkable electrocatalytic activity for the OER in alkaline medium due to the synergetic effect among these elements and the good electrical conductivity. The reported NiWP composite catalyst has an overpotential as low as 0.4 V (vs. RHE) at 30 mA cm-2, better than that of the commercial RuO2 catalyst. Moreover, a small charge transfer resistance of 4.06 Ω and a Tafel slope of 68 mV dec-1 demonstrate the outstanding catalytic activity.

  12. Local wall heat flux/temperature meter for convective flow and method of utilizing same

    DOEpatents

    Boyd, Ronald D.; Ekhlassi, Ali; Cofie, Penrose

    2004-11-30

    According to one embodiment of the invention, a method includes providing a conduit having a fluid flowing therethrough, disposing a plurality of temperature measurement devices inside a wall of the conduit, positioning at least some of the temperature measurement devices proximate an inside surface of the wall of the conduit, positioning at least some of the temperature measurement devices at different radial positions at the same circumferential location within the wall, measuring a plurality of temperatures of the wall with respective ones of the temperature measurement devices to obtain a three-dimensional temperature topology of the wall, determining the temperature dependent thermal conductivity of the conduit, and determining a multi-dimensional thermal characteristic of the inside surface of the wall of the conduit based on extrapolation of the three-dimensional temperature topology and the temperature dependent thermal conductivities.

  13. Local wall heat flux/temperature meter for convective flow and method of utilizing same

    NASA Technical Reports Server (NTRS)

    Cofie, Penrose (Inventor); Ekhlassi, Ali (Inventor); Boyd, Ronald D. (Inventor)

    2004-01-01

    According to one embodiment of the invention, a method includes providing a conduit having a fluid flowing therethrough, disposing a plurality of temperature measurement devices inside a wall of the conduit, positioning at least some of the temperature measurement devices proximate an inside surface of the wall of the conduit, positioning at least some of the temperature measurement devices at different radial positions at the same circumferential location within the wall, measuring a plurality of temperatures of the wall with respective ones of the temperature measurement devices to obtain a three-dimensional temperature topology of the wall, determining the temperature dependent thermal conductivity of the conduit, and determining a multi-dimensional thermal characteristic of the inside surface of the wall of the conduit based on extrapolation of the three-dimensional temperature topology and the temperature dependent thermal conductivities.
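
    The two preceding records describe the same patented meter from different databases. In outline, the quantity it ultimately reports is the inner-wall heat flux obtained from Fourier's law, with the temperature gradient supplied by extrapolating the measured in-wall temperature field to the inside surface; the symbols below are generic, not the patent's notation:

    ```latex
    % Inner-wall heat flux from the extrapolated in-wall temperature field
    q''_{w}(\theta, z) \;=\; -\,k(T)\,
    \left.\frac{\partial T(r,\theta,z)}{\partial r}\right|_{r = r_{i}}
    ```

    Here T(r, θ, z) is the three-dimensional temperature topology reconstructed from the embedded sensors, k(T) the temperature-dependent conductivity of the conduit wall, and r_i the inside radius.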

  14. Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates

    ERIC Educational Resources Information Center

    Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…

  15. Measurements of the Absorption by Auditorium SEATING—A Model Study

    NASA Astrophysics Data System (ADS)

    BARRON, M.; COLEMAN, S.

    2001-01-01

    One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect on the absorption of row-row spacing as well as absorption by small numbers of seating rows was also investigated with model seats.

  16. An empirical inferential method of estimating nitrogen deposition to Mediterranean-type ecosystems: the San Bernardino Mountains case study.

    PubMed

    Bytnerowicz, A; Johnson, R F; Zhang, L; Jenerette, G D; Fenn, M E; Schilling, S L; Gonzalez-Fernandez, I

    2015-08-01

    The empirical inferential method (EIM) allows for spatially and temporally-dense estimates of atmospheric nitrogen (N) deposition to Mediterranean ecosystems. This method, set within a GIS platform, is based on ambient concentrations of NH3, NO, NO2 and HNO3; surface conductance of NH4(+) and NO3(-); stomatal conductance of NH3, NO, NO2 and HNO3; and satellite-derived LAI. Estimated deposition is based on data collected during 2002-2006 in the San Bernardino Mountains (SBM) of southern California. Approximately 2/3 of dry N deposition was to plant surfaces and 1/3 as stomatal uptake. Summer-season N deposition ranged from <3 kg ha(-1) in the eastern SBM to ∼ 60 kg ha(-1) in the western SBM near the Los Angeles Basin and compared well with the throughfall and big-leaf micrometeorological inferential methods. Extrapolating summertime N deposition estimates to annual values showed large areas of the SBM exceeding critical loads for nutrient N in chaparral and mixed conifer forests. Published by Elsevier Ltd.

  17. Human placental perfusion method in the assessment of transplacental passage of antiepileptic drugs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myllynen, Paeivi; Pienimaeki, Paeivi; Vaehaekangas, Kirsi

    2005-09-01

    Epilepsy is one of the most common neurological diseases, affecting about 0.5 to 1% of pregnant women. It is commonly accepted that older antiepileptic drugs bear teratogenic potential. So far, no agreement has been reached about the safest antiepileptic drug during pregnancy. It is known that nearly all drugs cross the placenta at least to some extent. Nowadays, there is very little information available on the pharmacokinetics of drugs in the feto-placental unit. Detailed information about drug transport across the placenta would be valuable for the development of safe and effective treatments. For reasons of safety, human studies on placental transfer are restricted to a limited number of drugs. Interspecies differences limit the extrapolation of animal data to humans. Several in vitro methods for the study of placental transfer have been developed over the past decades. The placental perfusion method is the only experimental method that has been used to study human placental transfer of substances in organized placental tissue. The aim of this article is to review human placental perfusion data on antiepileptic drugs. According to perfusion data, it seems that most of the antiepileptic drugs are transferred across the placenta, meaning significant fetal exposure.

  18. Methods of measurement signal acquisition from the rotational flow meter for frequency analysis

    NASA Astrophysics Data System (ADS)

    Świsulski, Dariusz; Hanus, Robert; Zych, Marcin; Petryka, Leszek

    One of the simplest and most commonly used instruments for measuring the flow of homogeneous substances is the rotational flow meter. The main part of such a device is a rotor (vane or screw) rotating at a speed that is a function of the fluid or gas flow rate. A pulse signal with a frequency proportional to the speed of the rotor is obtained at the sensor output. For measurements under dynamic conditions, the variable interval between pulses prevents direct analysis of the measurement signal. Therefore, the authors of the article developed a method in which the measured value is determined from the last inter-pulse interval preceding the moment designated by the timing generator. For larger changes of the measured value at a predetermined time, the value can be determined by extrapolation of the two adjacent inter-pulse intervals, assuming a linear change in the flow. The proposed methods provide the constant spacing between measurements that such analysis requires, allowing the dynamics of changes in the test flow to be analyzed, e.g. using a Fourier transform. To present the advantages of these methods, simulations of flow measurement were carried out with a DRH-1140 rotor flow meter from the company Kobold.
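
    A minimal sketch of the resampling idea described above: pulse timestamps are converted to an estimate of the pulse frequency at each timing-generator tick, using the last inter-pulse interval or, where the flow is changing, a linear extrapolation of the two most recent intervals. Function and variable names are illustrative, and the flow-to-frequency calibration factor is assumed to be unity.

    ```python
    # Resample rotational-flow-meter pulses onto a uniform time grid so that
    # FFT-style analysis can be applied. Names and the unity calibration factor
    # are illustrative assumptions, not the authors' implementation.
    import numpy as np

    def frequency_at_ticks(pulse_times, tick_times, extrapolate=True):
        """Estimate pulse frequency [Hz] at each timing-generator tick."""
        pulse_times = np.asarray(pulse_times)
        freqs = np.full(len(tick_times), np.nan)
        for k, t in enumerate(tick_times):
            idx = np.searchsorted(pulse_times, t) - 1   # last pulse before the tick
            if idx < 1:
                continue                                # need at least one full interval
            dt1 = pulse_times[idx] - pulse_times[idx - 1]       # last interval
            if extrapolate and idx >= 2:
                dt2 = pulse_times[idx - 1] - pulse_times[idx - 2]  # previous interval
                # Linear extrapolation of the interval, i.e. linearly changing flow.
                dt = dt1 + (dt1 - dt2) * (t - pulse_times[idx]) / dt1
            else:
                dt = dt1
            freqs[k] = 1.0 / max(dt, 1e-12)
        return freqs

    # The uniformly spaced estimates can then be Fourier-analyzed, e.g. np.fft.rfft(freqs).
    ```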

  19. An optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system.

    PubMed

    Shen, L; Levine, S H; Catchen, G L

    1987-07-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg cm-2 or at any depth of interest. The doses at 7 mg cm-2 in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration.

  20. Analysis of trends in experimental observables: Reconstruction of the implosion dynamics and implications for fusion yield extrapolation for direct-drive cryogenic targets on OMEGA

    DOE PAGES

    Bose, A.; Betti, R.; Mangino, D.; ...

    2018-05-29

    This paper describes a technique for identifying trends in performance degradation for inertial confinement fusion implosion experiments. It is based on reconstruction of the implosion core with a combination of low- and mid-mode asymmetries. This technique was applied to an ensemble of hydro-equivalent deuterium-tritium implosions on OMEGA that achieved inferred hot-spot pressures ≈56 ± 7 Gbar [S. Regan et al., Phys. Rev. Lett. 117, 025001 (2016)]. All the experimental observables pertaining to the core could be reconstructed simultaneously with the same combination of low and mid modes. This suggests that in addition to low modes, which can cause a degradation of the stagnation pressure, mid modes are present that reduce the size of the neutron and x-ray producing volume. The systematic analysis shows that asymmetries can cause an overestimation of the total areal density in these implosions. Finally, it is found that an improvement in implosion symmetry resulting from correction of either the systematic mid or low modes would increase the hot-spot pressure from 56 Gbar to ≈80 Gbar and could produce a burning plasma when the implosion core is extrapolated to an equivalent 1.9 MJ symmetric direct illumination [A. Bose et al., Phys. Rev. E 94, 011201(R) (2016)].