Sample records for effects extrapolation models

  1. USING MODELS TO EXTRAPOLATE POPULATION-LEVEL EFFECTS FROM LABORATORY TOXICITY TESTS IN SUPPORT OF POPULATION RISK ASSESSMENTS

    EPA Science Inventory

    Using models to extrapolate population-level effects from laboratory toxicity tests in support of population risk assessments. Munns, W.R., Jr.*, Anne Kuhn, Matt G. Mitro, and Timothy R. Gleason, U.S. EPA ORD NHEERL, Narragansett, RI, USA. Driven in large part by management goa...

  2. Effective orthorhombic anisotropic models for wavefield extrapolation

    NASA Astrophysics Data System (ADS)

    Ibanez-Jacome, Wilson; Alkhalifah, Tariq; Waheed, Umair bin

    2014-09-01

    Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborate formulation that also involves more expensive computation. The acoustic assumption yields a more efficient description of the orthorhombic wave equation and provides a simplified representation of the orthorhombic dispersion relation. However, such a representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth-order polynomial equation whose fastest solution corresponds to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.

  3. A Comparison of Methods for Computing the Residual Resistivity Ratio of High-Purity Niobium

    PubMed Central

    Splett, J. D.; Vecchia, D. F.; Goodrich, L. F.

    2011-01-01

    We compare methods for estimating the residual resistivity ratio (RRR) of high-purity niobium and investigate the effects of using different functional models. RRR is typically defined as the ratio of the electrical resistances measured at 273 K (the ice point) and 4.2 K (the boiling point of helium at standard atmospheric pressure). However, pure niobium is superconducting below about 9.3 K, so the low-temperature resistance is defined as the normal-state (i.e., non-superconducting state) resistance extrapolated to 4.2 K and zero magnetic field. Thus, the estimated value of RRR depends significantly on the model used for extrapolation. We examine three models for extrapolation based on temperature versus resistance, two models for extrapolation based on magnetic field versus resistance, and a new model based on the Kohler relationship that can be applied to combined temperature and field data. We also investigate the possibility of re-defining RRR so that the quantity is not dependent on extrapolation. PMID:26989580
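
    Since RRR hinges on extrapolating the normal-state resistance down to 4.2 K, a minimal sketch of the idea follows, assuming synthetic resistance data and one plausible functional form; the record compares several such models, and this is not the authors' specific fit.

```python
# Minimal sketch (not the authors' specific fit): estimate RRR by fitting a
# low-order functional form to normal-state resistance above Tc (~9.3 K) and
# extrapolating to 4.2 K. All data values below are synthetic.
import numpy as np

T = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])                # K
R = np.array([3.1e-8, 3.2e-8, 3.4e-8, 3.7e-8, 4.1e-8, 4.6e-8])    # ohm

# One plausible model, R(T) = a + b*T^3; the record compares several such forms.
A = np.column_stack([np.ones_like(T), T**3])
(a, b), *_ = np.linalg.lstsq(A, R, rcond=None)

R_42 = a + b * 4.2**3      # normal-state resistance extrapolated to 4.2 K
R_273 = 8.7e-6             # measured ice-point resistance (synthetic), ohm
print("RRR ~", R_273 / R_42)
```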

  4. Inter-model comparison of the landscape determinants of vector-borne disease: implications for epidemiological and entomological risk modeling.

    PubMed

    Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V

    2014-01-01

    Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine the implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or the Ixodes scapularis vector using Spearman ranked correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.

  5. Dispersal and extrapolation on the accuracy of temporal predictions from distribution models for the Darwin's frog.

    PubMed

    Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio

    2017-07-01

    Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and True Skill Statistics (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with SDMs without dispersal limitation; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared this between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences for range size changes over time, the most widely used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transference of SDMs and reduce uncertainties in extinction risk assessments from global change; and (2) as geographical areas subjected to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. In doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.

  6. Extrapolation of enalapril efficacy from adults to children using pharmacokinetic/pharmacodynamic modelling.

    PubMed

    Kechagia, Irene-Ariadne; Kalantzi, Lida; Dokoumetzidis, Aristides

    2015-11-01

    To extrapolate enalapril efficacy to children 0-6 years old, a pharmacokinetic/pharmacodynamic (PKPD) model was built using literature data, with blood pressure as the PD endpoint. A PK model of enalapril was developed from literature paediatric data up to 16 years old. A PD model of enalapril was fitted to adult literature response vs time data for various doses. The final PKPD model was validated with literature paediatric efficacy observations (diastolic blood pressure (DBP) drop after 2 weeks of treatment) in children aged 6 years and older. The model was then used to predict enalapril efficacy for ages 0-6 years. A two-compartment PK model was chosen, with weight (indirectly reflecting age) as a covariate on clearance and central volume. An indirect-link PD model was chosen to describe the drug effect. External validation of the model's capability to predict efficacy in children was successful. Enalapril efficacy was extrapolated to ages 1-11 months and 1-6 years, finding mean DBP drops of 11.2 and 11.79 mmHg, respectively. Mathematical modelling was used to extrapolate enalapril efficacy to young children to support a paediatric investigation plan targeting a paediatric-use marketing authorization application. © 2015 Royal Pharmaceutical Society.
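
    The record names weight as a covariate on clearance and central volume. As a hedged illustration of how such covariate scaling is commonly implemented (the allometric exponents 0.75 and 1.0 are conventional assumptions, not necessarily the paper's estimates):

```python
# Hedged sketch of allometric covariate scaling commonly used in paediatric
# PK models. The exponents (0.75 for clearance, 1.0 for volume) are
# conventional assumptions, not necessarily the paper's estimates.
def scale_pk(cl_adult, v_adult, weight_kg, ref_weight_kg=70.0):
    """Scale adult clearance (L/h) and central volume (L) by body weight."""
    cl = cl_adult * (weight_kg / ref_weight_kg) ** 0.75
    v = v_adult * (weight_kg / ref_weight_kg) ** 1.0
    return cl, v

# Example: a 10 kg infant, assuming illustrative adult values CL = 18 L/h, V = 40 L.
print(scale_pk(18.0, 40.0, 10.0))
```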

  7. Measurements of the Absorption by Auditorium Seating—A Model Study

    NASA Astrophysics Data System (ADS)

    Barron, M.; Coleman, S.

    2001-01-01

    One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect on the absorption of row-row spacing as well as absorption by small numbers of seating rows was also investigated with model seats.

  8. Why do people appear not to extrapolate trajectories during multiple object tracking? A computational investigation

    PubMed Central

    Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I

    2014-01-01

    Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not extrapolate at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
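
    A toy experiment in the spirit of the comparison the abstract describes (the setup and parameters below are my assumptions, not the authors' model): two identical dots approach and cross in 1D under noisy observation, and new observations are matched to predictions either with or without velocity extrapolation.

```python
# Toy experiment (assumptions mine, not the authors' model): two identical
# dots approach and cross in 1D; observations are noisy. Compare matching new
# observations to "last position" vs. "last position + velocity estimate",
# counting identity mis-assignments against ground truth (fewer is better).
import numpy as np

rng = np.random.default_rng(0)

def run_trial(extrapolate, obs_noise=1.0, steps=30):
    pos = np.array([0.0, 10.0])       # true positions (identity 0 and 1)
    vel = np.array([+0.6, -0.6])      # the dots cross mid-trial
    est = pos.copy()                  # tracker's current position estimates
    prev = est.copy()                 # previous estimates (for velocity)
    errors = 0
    for _ in range(steps):
        pos = pos + vel
        obs = pos + rng.normal(0.0, obs_noise, 2)   # obs[i] comes from dot i
        pred = est + (est - prev) if extrapolate else est
        # nearest-neighbour assignment of the two observations to predictions
        direct = abs(pred[0] - obs[0]) + abs(pred[1] - obs[1])
        crossed = abs(pred[0] - obs[1]) + abs(pred[1] - obs[0])
        if crossed < direct:
            obs = obs[::-1]           # tracker swaps identities: an error here
            errors += 1
        prev, est = est, obs
    return errors

for mode in (False, True):
    trials = [run_trial(mode) for _ in range(2000)]
    print("extrapolate =", mode, "mean identity errors:", np.mean(trials))
```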

  9. State-of-the-Science Workshop Report: Issues and Approaches in Low Dose–Response Extrapolation for Environmental Health Risk Assessment

    EPA Science Inventory

    Low-dose extrapolation model selection for evaluating the health effects of environmental pollutants is a key component of the risk assessment process. At a workshop held in Baltimore, MD, on April 23-24, 2007, and sponsored by U.S. Environmental Protection Agency (EPA) and Johns...

  10. How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis.

    PubMed

    Bojke, Laura; Manca, Andrea; Asaria, Miqdad; Mahon, Ronan; Ren, Shijie; Palmer, Stephen

    2017-08-01

    Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies the follow-up duration of which is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation-to project short-term evidence over a longer time horizon-is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice and the degree of uncertainty surrounding the validity of any assumptions made are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predict long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.

  11. A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete Point Linear Models

    DTIC Science & Technology

    2016-04-01

    incorporated with nonlinear elements to produce a continuous, quasi-nonlinear simulation model. Extrapolation methods within the model stitching architecture... Keywords: Simulation Model, Quasi-Nonlinear, Piloted Simulation, Flight-Test Implications, System Identification, Off-Nominal Loading Extrapolation, Stability...

  12. Simple extrapolation method to predict the electronic structure of conjugated polymers from calculations on oligomers

    DOE PAGES

    Larsen, Ross E.

    2016-04-12

    In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of the number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices, one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
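
    A hedged sketch in the spirit of the FFOE idea (the exact Hamiltonian below is my assumption, not the paper's): read a monomer orbital energy and an inter-fragment coupling off monomer and dimer calculations, then diagonalize a tridiagonal chain whose top eigenvalue approaches eps + 2|t| in the polymer limit.

```python
# Hedged sketch in the spirit of FFOE (the exact Hamiltonian is my assumption,
# not the paper's): parameterize a frontier-orbital chain from monomer and
# dimer calculations, then extrapolate to the polymer limit eps + 2|t|.
import numpy as np

eps = -5.1              # monomer HOMO energy, eV (illustrative)
dimer_homo = -4.7       # dimer HOMO energy, eV (illustrative)
t = dimer_homo - eps    # coupling from the dimer level splitting: |t| = 0.4 eV

def homo_n(n):
    """HOMO of an n-repeat-unit chain: top eigenvalue of a tridiagonal H."""
    H = np.diag([eps] * n) + np.diag([t] * (n - 1), 1) + np.diag([t] * (n - 1), -1)
    return np.linalg.eigvalsh(H).max()

for n in (1, 2, 4, 8, 16, 64):
    print(n, round(homo_n(n), 3))
print("polymer limit:", eps + 2 * abs(t))   # band-edge value as n -> infinity
```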

  13. First moments of nucleon generalized parton distributions

    DOE PAGES

    Wang, P.; Thomas, A. W.

    2010-06-01

    We extrapolate the first moments of the generalized parton distributions using heavy baryon chiral perturbation theory. The calculation is performed at the one-loop level with finite-range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at the physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t=0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD-inspired models.

  14. Properties of infrared extrapolations in a harmonic oscillator basis

    DOE PAGES

    Coon, Sidney A.; Kruse, Michael K. G.

    2016-02-22

    Here, the success and utility of effective field theory (EFT) in explaining the structure and reactions of few-nucleon systems has prompted the initiation of EFT-inspired extrapolations to larger model spaces in ab initio methods such as the no-core shell model (NCSM). In this contribution, we review and continue our studies of infrared (ir) and ultraviolet (uv) regulators of NCSM calculations in which the input is phenomenological NN and NNN interactions fitted to data. We extend our previous findings that an extrapolation in the ir cutoff with the uv cutoff above the intrinsic uv scale of the interaction is quite successful, not only for the eigenstates of the Hamiltonian but also for expectation values of operators, such as r², considered long range. The latter results are obtained with Hamiltonians transformed by the similarity renormalization group (SRG) evolution. On the other hand, a possible extrapolation of ground state energies in the uv cutoff when the ir cutoff is below the intrinsic ir scale is not robust and does not agree with the ir extrapolation of the same data or with independent calculations using other methods.
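
    The exponential infrared-extrapolation form widely used in this literature is E(L) = E∞ + A·exp(−2kL); a minimal fitting sketch on synthetic energies (values illustrative, not from the paper) follows.

```python
# Minimal fit of the exponential infrared-extrapolation form widely used in
# this literature, E(L) = E_inf + A*exp(-2*k*L). Energies below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def e_of_L(L, E_inf, A, k):
    return E_inf + A * np.exp(-2.0 * k * L)

L = np.array([8.0, 9.0, 10.0, 11.0, 12.0])              # IR lengths, fm
E = np.array([-27.12, -27.76, -28.16, -28.43, -28.59])  # energies, MeV

(p_Einf, p_A, p_k), _ = curve_fit(e_of_L, L, E, p0=(-29.0, 60.0, 0.3))
print("extrapolated E_inf ~ %.2f MeV" % p_Einf)
```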

  15. CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS

    EPA Science Inventory

    Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...

  16. Estimating human-equivalent no observed adverse-effect levels for VOCs (volatile organic compounds) based on minimal knowledge of physiological parameters. Technical paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Overton, J.H.; Jarabek, A.M.

    1989-01-01

    The U.S. EPA advocates the assessment of health-effects data and calculation of inhaled reference doses as benchmark values for gauging the systemic toxicity of inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no observed adverse effect level (NOAEL) exposure concentrations in animals to human-equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically-based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value of the time-integrated human arterial-blood concentration must be limited to no more than that of experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more conservative exposure concentration values than those that would be obtained using a PB-PK model in which scaling was assumed.

  17. Mice, myths, and men

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fry, R.J.M.

    The author discusses some examples of how different experimental animal systems have helped to answer questions about the effects of radiation, in particular carcinogenesis, and indicates how the new experimental model systems promise an even more exciting future. Entwined in these themes are observations about susceptibility and extrapolation across species. The hope of developing acceptable methods of extrapolating estimates of the risk of radiogenic cancer increases as molecular biology reveals the trail of remarkable similarities in the genetic control of many functions common to many species. A major concern about even attempting to extrapolate estimates of risks of radiation-induced cancer across species has been that the mechanisms of carcinogenesis were so different among different species that it would negate the validity of extrapolation. The more that has become known about the genes involved in cancer, especially those related to the initial events in carcinogenesis, the more have the reasons for considering methods of extrapolation across species increased.

  18. THE INFLUENCE OF MODEL TIME STEP ON THE RELATIVE SENSITIVITY OF POPULATION GROWTH TO SURVIVAL, GROWTH AND REPRODUCTION

    EPA Science Inventory

    Matrix population models are often used to extrapolate from life stage-specific stressor effects on survival and reproduction to population-level effects. Demographic elasticity analysis of a matrix model allows an evaluation of the relative sensitivity of population growth rate ...

  19. THE INFLUENCE OF MODEL TIME STEP ON THE RELATIVE SENSITIVITY OF POPULATION GROWTH RATE TO REPRODUCTION

    EPA Science Inventory

    In recent years there has been an increasing interest in using population models in environmental assessments. Matrix population models represent a valuable tool for extrapolating from life stage-specific stressor effects on survival and reproduction to effects on finite populati...

  20. The effects of sunspots on solar irradiance

    NASA Technical Reports Server (NTRS)

    Hudson, H. S.; Silva, S.; Woodard, M.; Willson, R. C.

    1982-01-01

    It is pointed out that the darkness of a sunspot on the visible hemisphere of the sun will reduce the solar irradiance on the earth. Approaches are discussed for obtaining a crude estimate of the irradiance deficit produced by sunspots and of the total luminosity reduction for the whole global population of sunspots. Attention is given to a photometric sunspot index, a global measure of spot flux deficit, and models for the compensating flux excess. A model is shown for extrapolating visible-hemisphere spot areas to the invisible hemisphere. As an illustration, this extrapolation is used to calculate a very simple model for the reradiation necessary to balance the flux deficit.

  1. Toward a Quantitative Comparison of Magnetic Field Extrapolations and Observed Coronal Loops

    NASA Astrophysics Data System (ADS)

    Warren, Harry P.; Crump, Nicholas A.; Ugarte-Urra, Ignacio; Sun, Xudong; Aschwanden, Markus J.; Wiegelmann, Thomas

    2018-06-01

    It is widely believed that loops observed in the solar atmosphere trace out magnetic field lines. However, the degree to which magnetic field extrapolations yield field lines that actually do follow loops has yet to be studied systematically. In this paper, we apply three different extrapolation techniques—a simple potential model, a nonlinear force-free (NLFF) model based on photospheric vector data, and an NLFF model based on forward fitting magnetic sources with vertical currents—to 15 active regions that span a wide range of magnetic conditions. We use a distance metric to assess how well each of these models is able to match field lines to the 12,202 loops traced in coronal images. These distances are typically 1″–2″. We also compute the misalignment angle between each traced loop and the local magnetic field vector, and find values of 5°–12°. We find that the NLFF models generally outperform the potential extrapolation on these metrics, although the differences between the different extrapolations are relatively small. The methodology that we employ for this study suggests a number of ways that both the extrapolations and loop identification can be improved.

  2. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.

  3. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
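
    A minimal sketch of the "clamping" idea described in these two records (function and variable names are illustrative, not from the papers): bound each environmental predictor in the projection region to the range observed in the training data before predicting.

```python
# Minimal sketch of "clamping" (names are illustrative, not from the papers):
# bound each environmental predictor in the projection region to the range
# observed in the training data before asking the fitted model to predict.
import numpy as np

def clamp_predictors(X_project, X_train):
    """Clip projection-region predictor columns to the training ranges."""
    return np.clip(X_project, X_train.min(axis=0), X_train.max(axis=0))

# Usage: suitability = fitted_model.predict(clamp_predictors(X_new, X_train))
```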

  4. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  5. Infrared length scale and extrapolations for the no-core shell model

    DOE PAGES

    Wendt, K. A.; Forssén, C.; Papenbrock, T.; ...

    2015-06-03

    In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.

  6. Measured and Modeled Toxicokinetics in Cultured Fish Cells and Application to In Vitro - In Vivo Toxicity Extrapolation

    PubMed Central

    Stadnicka-Michalak, Julita; Tanneberger, Katrin; Schirmer, Kristin; Ashauer, Roman

    2014-01-01

    Effect concentrations in the toxicity assessment of chemicals with fish and fish cells are generally based on external exposure concentrations. External concentrations as dose metrics may, however, hamper interpretation and extrapolation of toxicological effects because it is the internal concentration that gives rise to the biologically effective dose. Thus, we need to understand the relationship between the external and internal concentrations of chemicals. The objectives of this study were to: (i) elucidate the time-course of the concentration of chemicals with a wide range of physicochemical properties in the compartments of an in vitro test system, (ii) derive a predictive model for toxicokinetics in the in vitro test system, (iii) test the hypothesis that internal effect concentrations in fish (in vivo) and fish cell lines (in vitro) correlate, and (iv) develop a quantitative in vitro to in vivo toxicity extrapolation method for fish acute toxicity. To achieve these goals, time-dependent amounts of organic chemicals were measured in medium, cells (RTgill-W1) and the plastic of exposure wells. Then, the relation between uptake, elimination rate constants, and log KOW was investigated for cells in order to develop a toxicokinetic model. This model was used to predict internal effect concentrations in cells, which were compared with internal effect concentrations in fish gills predicted by a physiologically based toxicokinetic model. Our model could predict concentrations of non-volatile organic chemicals with log KOW between 0.5 and 7 in cells. The correlation of the log ratio of internal effect concentrations in fish gills and the fish gill cell line with the log KOW was significant (r>0.85, p = 0.0008, F-test). This ratio can be predicted from the log KOW of the chemical (77% of variance explained), comprising a promising model to predict lethal effects on fish based on in vitro data. PMID:24647349

  7. Linear prediction data extrapolation superresolution radar imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing

    1993-05-01

    Range resolution and cross-range resolution of range-doppler imaging radars are related to the effective bandwidth of the transmitted signal and the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, the linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by conventional FFT range-doppler processing and improving the resolution capability of range-doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating the observed data beyond the observation windows by means of linear prediction and then performing the conventional IDFT of the extrapolated data. Live data from a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and from a flying Boeing-727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT yields either higher resolution for the same effective bandwidth of the transmitted signal and total rotation angle of the object, or equal-quality images from a smaller bandwidth and total angle.
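
    A hedged sketch of the general technique (not the authors' exact processing chain): fit forward linear-prediction coefficients by least squares, extrapolate the observed samples beyond the window, then take the transform of the longer record.

```python
# Hedged sketch of the general technique (not the authors' exact processing
# chain): fit forward linear-prediction coefficients by least squares,
# extrapolate the observed samples, then transform the longer record.
import numpy as np

def ar_extrapolate(x, order, n_extra):
    """Extend complex samples x by n_extra points with an AR(order) model."""
    rows = len(x) - order
    A = np.array([x[i:i + order][::-1] for i in range(rows)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)  # x[n] ~ sum a[k] x[n-1-k]
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(a, out[-1:-order - 1:-1]))
    return np.asarray(out)

# Two closely spaced complex exponentials ("scatterers") in a short window:
n = np.arange(32)
x = np.exp(2j * np.pi * 0.11 * n) + 0.7 * np.exp(2j * np.pi * 0.13 * n)
spectrum = np.fft.fft(ar_extrapolate(x, order=8, n_extra=96))
# The two peaks are sharper than in np.fft.fft(x): resolution beyond the window.
```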

  8. Approach for extrapolating in vitro metabolism data to refine bioconcentration factor estimates.

    PubMed

    Cowan-Ellsberry, Christina E; Dyer, Scott D; Erhardt, Susan; Bernhard, Mary Jo; Roe, Amy L; Dowty, Martin E; Weisbrod, Annie V

    2008-02-01

    National and international chemical management programs are assessing thousands of chemicals for their persistence, bioaccumulation, and environmental toxicity properties; however, data for evaluating the bioaccumulation potential in fish are limited. Computer-based models that account for the uptake and elimination processes that contribute to bioaccumulation may help to meet the need for reliable estimates. One critical elimination process is metabolic transformation. It has been suggested that in vitro metabolic transformation tests using fish liver hepatocytes or S9 fractions can provide rapid and cost-effective measurements of fish metabolic potential, which could be used to refine bioconcentration factor (BCF) computer model estimates. Therefore, recent activity has focused on developing in vitro methods to measure metabolic transformation in cellular and subcellular fish liver fractions. A method to extrapolate in vitro test data to whole-body metabolic transformation rates is presented that could be used to refine BCF computer model estimates. This extrapolation approach is based on concepts used to determine the fate and distribution of drugs within the human body, which have successfully supported the development of new pharmaceuticals for years. In addition, this approach has already been applied in physiologically based toxicokinetic models for fish. The validity of the in vitro to in vivo extrapolation is illustrated using the rate of loss of parent chemical measured in two independent in vitro test systems: (1) a subcellular enzymatic test using the trout liver S9 fraction, and (2) primary hepatocytes isolated from the common carp. The test chemicals evaluated have high-quality in vivo BCF values and a range of log KOW from 3.5 to 6.7. The results show very good agreement between the measured and estimated BCF values when the extrapolated whole-body metabolism rates are included, suggesting that in vitro biotransformation data could effectively be used to reduce in vivo BCF testing and refine BCF model estimates. However, additional fish physiological data for parameterization and validation for a wider range of chemicals are needed.
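
    A heavily hedged sketch of the extrapolation concept (the scaling factor and units below are placeholders, not the paper's values): scale an in vitro intrinsic clearance to a whole-body biotransformation rate constant and fold it into a one-compartment bioconcentration model.

```python
# Heavily hedged sketch of the concept (scaling factor and units are
# placeholders, not the paper's values): scale an in vitro intrinsic
# clearance to a whole-body biotransformation rate constant k_met and fold
# it into a one-compartment bioconcentration model.
def bcf_with_metabolism(k_uptake, k_elim, clint_in_vitro, scaling):
    """k_uptake, k_elim in 1/d; clint_in_vitro and scaling are illustrative."""
    k_met = clint_in_vitro * scaling          # whole-body rate constant, 1/d
    return k_uptake / (k_elim + k_met)

# Without metabolism this chemical would have BCF = 500/0.05 = 10000:
print(bcf_with_metabolism(500.0, 0.05, 0.8, 0.2))   # metabolism lowers the BCF
```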

  9. Calculation methods study on hot spot stress of new girder structure detail

    NASA Astrophysics Data System (ADS)

    Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing

    2017-10-01

    To study modeling and calculation methods for the hot spot stress of a new girder structure detail, several finite element models of this welded detail were established in the finite element software ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local modeling method for the weld toe, and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference in normal stress between the thickness direction and the surface direction among the models grows as the distance from the weld toe shrinks. When the distance from the toe is greater than 0.5t, the normal stress of solid models, shell models with welds, and shell models without welds tends to be consistent along the surface direction. It is therefore recommended that the extrapolation points be selected beyond 0.5t for this new girder welded detail. According to the calculation and analysis results, shell models have good grid stability, and the extrapolated hot spot stress of solid models is smaller than that of shell models; it is thus suggested that formula 2 and the solid45 element be used for the hot spot stress extrapolation of this welded detail. For each finite element model under the different shell modeling methods, the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. Under the same local mesh density, the extrapolated hot spot stress decreases gradually with an increasing number of element layers through the thickness of the main plate, with a variation range within 7.5%.
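
    The record's "formula 2" is not reproduced here. As a generic illustration only, one common two-point linear surface-extrapolation rule uses stresses read at 0.5t and 1.5t from the weld toe (t = plate thickness):

```python
# Generic illustration (the record's "formula 2" is not given here): a common
# two-point linear surface-extrapolation rule from stresses at 0.5t and 1.5t.
def hot_spot_stress(sigma_05t, sigma_15t):
    """Linear extrapolation to the weld toe: 1.5*s(0.5t) - 0.5*s(1.5t)."""
    return 1.5 * sigma_05t - 0.5 * sigma_15t

print(hot_spot_stress(182.0, 150.0))   # MPa, illustrative readings -> 198.0
```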

  10. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifact, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with the local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogen, K T

    A relatively simple, quantitative approach is proposed to address a specific, important gap in the approach recommended by the USEPA Guidelines for Cancer Risk Assessment to address uncertainty in the carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach - similar to that used in reference dose procedures for classic toxicity endpoints - can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for a likely DMOA rodent carcinogen, naphthalene, with specific application to the issue of risk extrapolation from bioassay data on naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage stochastic carcinogenesis modeling results all clearly indicate that naphthalene is a DMOA carcinogen. Plausibility bounds on rat-tumor-type-specific DMOA-related uncertainty were obtained using a 2-stage model adapted to reflect the empirical link between the genotoxic and cytotoxic effects of the most potent identified genotoxic naphthalene metabolites, 1,2- and 1,4-naphthoquinone. Bound-specific 'adjustment' factors were then used to reduce naphthalene risk estimated by linear extrapolation (under the default genotoxic MOA assumption), to account for the DMOA exhibited by this compound.

  12. Analysis of Impulse Load on VEGA SRM Nozzle During Ignition Transient and Effects on TVC Actuators

    NASA Astrophysics Data System (ADS)

    Fotino, Domenico; Leofanti, Jose Luis; Serraglia, Ferruccio

    2012-07-01

    During the VEGA development phase, and in particular during the Zefiro 23 (second stage motor) on-ground firing tests, values of impulse load on the actuators very close to the requirement were experienced. As a consequence, an activity for the extrapolation of these loads to the flight configuration (longer nozzle and vacuum conditions) was carried out, and a mathematical model was developed for this purpose. After providing an overview of the differences between the ground and flight cases from the fluid dynamic point of view, the paper describes the results of the mathematical model, both in terms of correlation with respect to ground tests and of extrapolation of the loads to the flight configuration. The main effects of this load on the actuators are also addressed.

  13. Determination of Extrapolation Distance With Pressure Signatures Measured at Two to Twenty Span Lengths From Two Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Kuhn, Neil S.

    2006-01-01

    A study was performed to determine a limiting separation distance for the extrapolation of pressure signatures from cruise altitude to the ground. The study was performed at two wind-tunnel facilities with two research low-boom wind-tunnel models designed to generate ground pressure signatures with "flattop" shapes. Data acquired at the first wind-tunnel facility showed that pressure signatures had not achieved the desired low-boom features for extrapolation purposes at separation distances of 2 to 5 span lengths. However, data acquired at the second wind-tunnel facility at separation distances of 5 to 20 span lengths indicated the "limiting extrapolation distance" had been achieved so pressure signatures could be extrapolated with existing codes to obtain credible predictions of ground overpressures.

  14. Improving in vitro to in vivo extrapolation by incorporating toxicokinetic measurements: A case study of lindane-induced neurotoxicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croom, Edward L.; Shafer, Timothy J.; Evans, Marina V.

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using “faux” (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC50 for increased firing rates in primary cultures of cortical neurons was 0.6 μg/ml. Media and cell lindane concentrations at the EC50 were 0.4 μg/ml and 7.1 μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7–1.9 μg/ml and 5–11 μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average = 7 μg/ml) by PBPK modeling were very similar to the in vitro concentrations detected in cortical cells at the EC50 dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity. - Highlights: • In vitro to in vivo extrapolation for lindane neurotoxicity was performed. • Dosimetry of lindane in a micro-electrode array (MEA) test system was assessed. • Cell concentrations at the MEA EC50 equaled rat brain levels associated with seizure. • PBPK-predicted human brain levels at seizure also equaled EC50 cell concentrations. • In vitro MEA results are predictive of lindane in vivo dose–response in rats/humans.

  15. Magnetic field extrapolation with MHD relaxation using AWSoM

    NASA Astrophysics Data System (ADS)

    Shi, T.; Manchester, W.; Landi, E.

    2017-12-01

    Coronal mass ejections are known to be the major source of disturbances in the solar wind capable of affecting geomagnetic environments. Accurate prediction of such space weather events requires data-driven simulation. The first step towards such a simulation is to extrapolate the magnetic field from the observed field, which is available only at the solar surface. Here we present results from a new magnetic field extrapolation code that performs direct magnetohydrodynamic (MHD) relaxation using the Alfvén Wave Solar Model (AWSoM) in the Space Weather Modeling Framework. The obtained field is self-consistent with our model and can be used later in time-dependent simulations without modification of the equations. We test our results against the Low and Lou analytical solution and find good agreement. We also extrapolate the magnetic field from observed data. We then specify the active region corona field with this extrapolation result in the AWSoM model and self-consistently calculate the temperature of the active region loops with Alfvén wave dissipation. Multi-wavelength images are also synthesized.

  16. Risk estimates for CO exposure in man based on behavioral and physiological responses in rodents

    NASA Technical Reports Server (NTRS)

    Gross, M. K.

    1983-01-01

    Animal responses to CO are examined, along with potential models for extrapolating animal test data to humans. The best models for extrapolating the data were found to be the Probit and Weibull models.

  17. Application of a framework for extrapolating chemical effects ...

    EPA Pesticide Factsheets

    Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetics, life-stage, and pathway similarities/differences. Here we propose a framework using a tiered approach for species extrapolation that enables a transparent weight-of-evidence driven evaluation of pathway conservation (or lack thereof) in the context of adverse outcome pathways. Adverse outcome pathways describe the linkages from a molecular initiating event, defined as the chemical-biomolecule interaction, through subsequent key events leading to an adverse outcome of regulatory concern (e.g., mortality, reproductive dysfunction). Tier 1 of the extrapolation framework employs in silico evaluations of sequence and structural conservation of molecules (e.g., receptors, enzymes) associated with molecular initiating events or upstream key events. Such evaluations make use of available empirical and sequence data to assess taxonomic relevance. Tier 2 uses in vitro bioassays, such as enzyme inhibition/activation, competitive receptor binding, and transcriptional activation assays to explore functional conservation of pathways across taxa. Finally, Tier 3 provides a comparative analysis of in vivo responses between species utilizing well-established model organisms to assess departure from ...

  18. Equilibrium and Effective Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Rugenstein, M.; Bloch-Johnson, J.

    2016-12-01

    Atmosphere-ocean general circulation models, as well as the real world, take thousands of years to equilibrate to CO2-induced radiative perturbations. Equilibrium climate sensitivity - the response to a fully equilibrated 2xCO2 perturbation - has been used for decades as a benchmark in model intercomparisons, as a test of our understanding of the climate system and paleo proxies, and to predict or project future climate change. Computational costs and limited time lead to the widespread practice of extrapolating equilibrium conditions from just a few decades of coupled simulations. The most common workaround is the "effective climate sensitivity" - defined through an extrapolation of a 150-year abrupt 2xCO2 simulation, including the assumption of linear climate feedbacks. The definitions of effective and equilibrium climate sensitivity are often mixed up and used equivalently, and it is argued that "transient climate sensitivity" is the more relevant measure for predicting the next decades. We present an ongoing model intercomparison, the "LongRunMIP", to study century and millennial time scales of AOGCM equilibration and the linearity assumptions around feedback analysis. As a true ensemble of opportunity, there is no protocol, and the only condition to participate is a coupled model simulation of any stabilizing scenario simulating more than 1000 years. Many of the submitted simulations took several years to conduct. As of July 2016 the contribution comprises 27 scenario simulations of 13 different models originating from 7 modeling centers, each between 1000 and 6000 years. To contribute, please contact the authors as soon as possible. We present preliminary results, discussing differences between effective and equilibrium climate sensitivity, the usefulness of transient climate sensitivity, extrapolation methods, and the state of the coupled climate system close to equilibrium. [Figure caption: evolution of temperature anomaly and radiative imbalance for 22 simulations with 12 models (color indicates model); 20-year moving average.]
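
    A sketch of the regression-style estimate of "effective climate sensitivity" the abstract alludes to (data synthetic; the linear-feedback assumption is exactly what the record questions): regress top-of-atmosphere imbalance N on temperature anomaly T from an abrupt 2xCO2 run and extrapolate to N = 0.

```python
# Sketch of the regression-style estimate of "effective climate sensitivity"
# the abstract alludes to (data synthetic; the linear-feedback assumption is
# exactly what the record questions): regress top-of-atmosphere imbalance N
# on temperature anomaly T from an abrupt 2xCO2 run, extrapolate to N = 0.
import numpy as np

rng = np.random.default_rng(1)
T = np.linspace(0.5, 4.0, 150) + rng.normal(0.0, 0.1, 150)   # K
N = 3.7 - 1.1 * T + rng.normal(0.0, 0.3, 150)                # W m^-2

slope, intercept = np.polyfit(T, N, 1)    # N ~ F + lambda*T, lambda < 0
print("effective climate sensitivity ~ %.2f K" % (-intercept / slope))
```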

  19. The design of L1-norm visco-acoustic wavefield extrapolators

    NASA Astrophysics Data System (ADS)

    Salam, Syed Abdul; Mousa, Wail A.

    2018-04-01

    Explicit depth frequency-space (f-x) prestack imaging is an attractive mechanism for seismic imaging. To date, the main focus of this method has been migration under the assumption of an acoustic medium; very little work has assumed visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f-x extrapolators. To show the accuracy and compensation of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation yields practically stable operators and improved image resolution.

  20. Extrapolating Solar Dynamo Models Throughout the Heliosphere

    NASA Astrophysics Data System (ADS)

    Cox, B. T.; Miesch, M. S.; Augustson, K.; Featherstone, N. A.

    2014-12-01

    There are multiple theories that aim to explain the behavior of the solar dynamo, and their associated models have been fiercely contested. The two prevailing theories investigated in this project are the Convective Dynamo model, which arises from directly solving the magnetohydrodynamic equations, and the Babcock-Leighton model, which relies on sunspot dissipation and reconnection. Recently, the supercomputer simulations CASH and BASH have modeled the behavior of the Convective and Babcock-Leighton dynamos, respectively, in the convective zone of the sun. These simulations describe the behavior of the models within the sun, while much less is known about the effects these models may have further away from the solar surface. The goal of this work is to investigate any fundamental differences between the Convective and Babcock-Leighton models of the solar dynamo outside of the sun, extending into the solar system, via potential field source surface extrapolations implemented in Python code that operates on data from CASH and BASH. Real solar data are also used to visualize supergranular flow in the BASH model, to learn more about the behavior of the Babcock-Leighton dynamo. These extrapolations show that the Babcock-Leighton model, as represented by BASH, maintains complex magnetic fields much further into the heliosphere before reverting to a basic dipole field, and they provide 3D visualizations of the models far from the sun.

  1. Endangered species toxicity extrapolation using ICE models

    EPA Science Inventory

    The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation estimation (ICE) models as an alternative to general safety factors for extrapolating across species. ...

  2. Cross-species extrapolation of chemical effects: Challenges and new insights

    EPA Science Inventory

    One of the greatest uncertainties in chemical risk assessment is extrapolation of effects from tested to untested species. While this undoubtedly is a challenge in the human health arena, species extrapolation is a particularly daunting task in ecological assessments, where it is...

  3. Extrapolating a Dyadic Model to Small Group Methodology: Validation of the Spitzberg and Cupach Model of Communication Competence.

    ERIC Educational Resources Information Center

    Keyton, Joann

    A study assessed the validity of applying the Spitzberg and Cupach dyadic model of communication competence to small group interaction. Twenty-four students, in five task-oriented work groups, completed questionnaires concerning self-competence, alter competence, interaction effectiveness, and other group members' interaction appropriateness. They…

  4. Casting the Coronal Magnetic Field Reconstruction Tools in 3D Using the MHD Bifrost Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleishman, Gregory D.; Loukitcheva, Maria; Anfinogentov, Sergey

    Quantifying the coronal magnetic field remains a central problem in solar physics. Nowadays, the coronal magnetic field is often modeled using nonlinear force-free field (NLFFF) reconstructions, whose accuracy has not yet been comprehensively assessed. Here we perform a detailed casting of the NLFFF reconstruction tools, such as π-disambiguation, photospheric field preprocessing, and volume reconstruction methods, using a 3D snapshot of the publicly available full-fledged radiative MHD model. Specifically, from the MHD model, we know the magnetic field vector in the entire 3D domain, which enables us to perform a “voxel-by-voxel” comparison of the restored and the true magnetic fields in the 3D model volume. Our tests show that the available π-disambiguation methods often fail in the quiet-Sun areas dominated by small-scale magnetic elements, while they work well in the active region (AR) photosphere and (even better) chromosphere. The preprocessing of the photospheric magnetic field, although it does produce a more force-free boundary condition, also results in some effective “elevation” of the magnetic field components. This “elevation” height is different for the longitudinal and transverse components, which results in a systematic error in absolute heights in the reconstructed magnetic data cube. The extrapolations performed starting from the actual AR photospheric magnetogram are free from this systematic error, while other metrics are comparable with those for extrapolations from the preprocessed magnetograms. This finding favors the use of extrapolations from the original photospheric magnetogram without preprocessing. Our tests further suggest that extrapolations from a force-free chromospheric boundary produce measurably better results than those from a photospheric boundary.

  5. INDIVIDUAL EFFECTS OF ESTROGENS ON A MARINE FISH, CUNNER (TAUTOGOLABRUS ADSPERSUS), EXTRAPOLATED TO POPULATION LEVEL

    EPA Science Inventory

    Endocrine disrupting chemicals (EDCs) in the environment may alter the population dynamics of wildlife by affecting reproductive output. This study describes a matrix modeling approach to link laboratory studies on endocrine disruption with potential ecological effects. The exper...

  6. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    NASA Astrophysics Data System (ADS)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic databases and calculation of local equilibrium conditions can be time intensive. Extrapolation methods, which are derived from Taylor expansions, can provide approximate results with high computational efficiency and have proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method, along with the M-slope approach and the first order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, demonstrating the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field models.
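
    The gain from a second-order scheme can be illustrated on a toy free-energy curve: extrapolate G away from a reference composition with first- and second-order Taylor terms and compare against direct evaluation. The function below is a stand-in regular-solution expression, not CALPHAD data.

      # Sketch: first- vs second-order Taylor extrapolation of a free-energy
      # value away from a reference composition. G(c) is a toy curve standing
      # in for a thermodynamic-database evaluation.
      import numpy as np

      G   = lambda c: c * np.log(c) + (1 - c) * np.log(1 - c) + 2.5 * c * (1 - c)
      dG  = lambda c: np.log(c / (1 - c)) + 2.5 * (1 - 2 * c)
      d2G = lambda c: 1 / c + 1 / (1 - c) - 5.0

      c0, dc = 0.30, 0.05                  # reference composition and excursion
      exact  = G(c0 + dc)
      first  = G(c0) + dG(c0) * dc
      second = first + 0.5 * d2G(c0) * dc ** 2
      print("1st-order error:", abs(first - exact))
      print("2nd-order error:", abs(second - exact))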

  7. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
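
    For contrast with the proposed method, the traditional mono-exponential back-extrapolation can be sketched in a few lines: fit the early log-linear decay, back-extrapolate to the injection time, and divide the dose by the extrapolated concentration. The dose and sample values below are invented for illustration.

      # Sketch of the *traditional* mono-exponential back-extrapolation that
      # the paper improves on. ICG dose and plasma samples are made up.
      import numpy as np

      dose_mg = 25.0
      t = np.array([2.0, 3.0, 4.0, 5.0])    # minutes post-injection
      c = np.array([8.1, 7.0, 6.1, 5.3])    # ICG plasma concentration [mg/L]

      slope, intercept = np.polyfit(t, np.log(c), 1)
      c0 = np.exp(intercept)                # back-extrapolated conc. at t = 0
      print("plasma volume [L]:", dose_mg / c0)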

  8. Challenges of accelerated aging techniques for elastomer lifetime predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, Kenneth T.; Bernstein, R.; Celina, M.

    Elastomers are often degraded when exposed to air or high humidity for extended times (years to decades). Lifetime estimates normally involve extrapolating accelerated aging results made at higher than ambient environments. Several potential problems associated with such studies are reviewed, and experimental and theoretical methods to address them are provided. The importance of verifying time–temperature superposition of degradation data is emphasized as evidence that the overall nature of the degradation process remains unchanged versus acceleration temperature. The confounding effects that occur when diffusion-limited oxidation (DLO) contributes under accelerated conditions are described, and it is shown that the DLO magnitude can be modeled by measurements or estimates of the oxygen permeability coefficient (P_Ox) and oxygen consumption rate (Φ). P_Ox and Φ measurements can be influenced by DLO, and it is demonstrated how confident values can be derived. In addition, several experimental profiling techniques that screen for DLO effects are discussed. Values of Φ taken from high temperature to temperatures approaching ambient can be used to more confidently extrapolate accelerated aging results for air-aged materials, and many studies now show that Arrhenius extrapolations bend to lower activation energies as aging temperatures are lowered. Furthermore, best approaches for accelerated aging extrapolations of humidity-exposed materials are also offered.
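
    The extrapolation step itself is a straight line in Arrhenius coordinates; the sketch below fits ln(rate) against 1/T and projects to 25 °C, with invented rates and temperatures. As the abstract notes, the fitted activation energy may drop as temperature approaches ambient, so a straight-line projection like this one can be non-conservative.

      # Sketch: Arrhenius extrapolation of accelerated-aging rates to ambient.
      # Rates and temperatures are illustrative, not data from the paper.
      import numpy as np

      R = 8.314e-3                                  # kJ/mol/K
      T = np.array([383.0, 373.0, 363.0, 353.0])    # aging temperatures [K]
      rate = np.array([12.0, 6.8, 3.7, 1.9])        # e.g. O2 consumption rate

      slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
      print("activation energy [kJ/mol]:", -slope * R)
      print("extrapolated rate at 25 C:", np.exp(intercept + slope / 298.15))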

  9. Challenges of accelerated aging techniques for elastomer lifetime predictions

    DOE PAGES

    Gillen, Kenneth T.; Bernstein, R.; Celina, M.

    2015-03-01

    Elastomers are often degraded when exposed to air or high humidity for extended times (years to decades). Lifetime estimates normally involve extrapolating accelerated aging results made at higher than ambient environments. Several potential problems associated with such studies are reviewed, and experimental and theoretical methods to address them are provided. The importance of verifying time–temperature superposition of degradation data is emphasized as evidence that the overall nature of the degradation process remains unchanged versus acceleration temperature. The confounding effects that occur when diffusion-limited oxidation (DLO) contributes under accelerated conditions are described, and it is shown that the DLO magnitude can be modeled by measurements or estimates of the oxygen permeability coefficient (P_Ox) and oxygen consumption rate (Φ). P_Ox and Φ measurements can be influenced by DLO, and it is demonstrated how confident values can be derived. In addition, several experimental profiling techniques that screen for DLO effects are discussed. Values of Φ taken from high temperature to temperatures approaching ambient can be used to more confidently extrapolate accelerated aging results for air-aged materials, and many studies now show that Arrhenius extrapolations bend to lower activation energies as aging temperatures are lowered. Furthermore, best approaches for accelerated aging extrapolations of humidity-exposed materials are also offered.

  10. Improving In Vitro to In Vivo Extrapolation by Incorporating Toxicokinetic Measurements: A Case Study of Lindane-Induced Neurotoxicity

    EPA Science Inventory

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...

  11. Simulating Microdosimetry of Environmental Chemicals for EPA’s Virtual Liver

    EPA Science Inventory

    US EPA Virtual Liver (v-Liver) is a cellular systems model of hepatic tissues aimed at predicting chemical-induced adverse effects through agent-based modeling. A primary objective of the project is to extrapolate in vitro data to in vivo outcomes. Agent-based approaches to tissu...

  12. Line-of-sight extrapolation noise in dust polarization

    NASA Astrophysics Data System (ADS)

    Poh, Jason; Dodelson, Scott

    2017-05-01

    The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g. 350 GHz) is due solely to dust and then extrapolate the signal down to a lower frequency (e.g. 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of ~20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise on a greybody dust model consistent with Planck and Pan-STARRS observations, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r ≲ 0.0015 in the greybody dust models considered in this paper.
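
    The temperature dependence of the scaling factor is easy to reproduce: for a modified blackbody I_nu ∝ nu^beta B_nu(T), the extrapolation factor from 350 to 150 GHz differs between clouds at, say, 18 K and 22 K, which is precisely what decorrelates the summed line-of-sight signal. A minimal sketch with an assumed beta = 1.6 follows.

      # Sketch: greybody (modified blackbody) scaling of dust emission between
      # frequencies, for two cloud temperatures. Values are illustrative.
      import numpy as np

      h, k_B = 6.626e-34, 1.381e-23

      def greybody(nu_ghz, T, beta=1.6):
          nu = nu_ghz * 1e9
          b_nu = nu ** 3 / np.expm1(h * nu / (k_B * T))  # Planck shape
          return nu ** beta * b_nu

      for T in (18.0, 22.0):
          # different 350 -> 150 GHz factors for different cloud temperatures
          print(T, greybody(150, T) / greybody(350, T))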

  13. EVALUATION OF THE EFFICACY OF EXTRAPOLATION POPULATION MODELING TO PREDICT THE DYNAMICS OF AMERICAMYSIS BAHIA POPULATIONS IN THE LABORATORY

    EPA Science Inventory

    An age-classified projection matrix model has been developed to extrapolate the chronic (28-35d) demographic responses of Americamysis bahia (formerly Mysidopsis bahia) to population-level response. This study was conducted to evaluate the efficacy of this model for predicting t...

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z; Bortfeld, T; Hong, T

    Purpose: The safe use of radiotherapy requires the knowledge of tolerable organ doses. For experimental fractionation schemes (e.g. hypofractionation) these are typically extrapolated from traditional fractionation schedules using the Biologically Effective Dose (BED) model. This work demonstrates that using the mean dose in the standard BED equation may overestimate tolerances, potentially leading to unsafe treatments. Instead, extrapolation of mean dose tolerances should take the spatial dose distribution into account. Methods: A formula has been derived to extrapolate mean physical dose constraints such that they are mean BED equivalent. This formula constitutes a modified BED equation where the influence of the spatial dose distribution is summarized in a single parameter, the dose shape factor. To quantify effects we analyzed 14 liver cancer patients previously treated with proton therapy in 5 or 15 fractions, for whom also photon IMRT plans were available. Results: Our work has two main implications. First, in typical clinical plans the dose distribution can have significant effects. When mean dose tolerances are extrapolated from standard fractionation towards hypofractionation they can be overestimated by 10–15%. Second, the shape difference between photon and proton dose distributions can cause 30–40% differences in mean physical dose for plans having the same mean BED. The combined effect when extrapolating proton doses to mean BED equivalent photon doses in traditional 35 fraction regimens resulted in up to 7–8 Gy higher doses than when applying the standard BED formula. This can potentially lead to unsafe treatments (in 1 of the 14 analyzed plans the liver mean dose was above its 32 Gy tolerance). Conclusion: The shape effect should be accounted for to avoid unsafe overestimation of mean dose tolerances, particularly when estimating constraints for hypofractionated regimens. In addition, tolerances established for a given treatment modality cannot necessarily be applied to other modalities with drastically different dose distributions.
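
    The core of the argument can be reproduced in a few lines: for a heterogeneous dose distribution, the mean of the voxel-wise BED exceeds the BED evaluated at the mean dose, so a mean-dose tolerance extrapolated with the standard BED formula is optimistic. The sketch below uses the linear-quadratic BED with invented voxel doses and alpha/beta, not the study's patient data.

      # Sketch: BED of the mean dose vs mean of voxel-wise BEDs for a
      # heterogeneous plan. Doses and alpha/beta are illustrative.
      import numpy as np

      def bed(d_per_fx, n_fx, ab=3.0):
          return n_fx * d_per_fx * (1.0 + d_per_fx / ab)

      n = 5                                         # hypofractionated schedule
      voxel_dose = np.array([1.0, 2.0, 6.0, 7.0])   # Gy/fraction in 4 voxels
      print("BED of mean dose  :", bed(voxel_dose.mean(), n))
      print("mean of voxel BEDs:", bed(voxel_dose, n).mean())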

  15. A DOSIMETRIC ANALYSIS OF THE ACUTE BEHAVIORAL EFFECTS OF INHALED TOLUENE IN RATS

    EPA Science Inventory

    Knowledge of the appropriate metric of dose for a toxic chemical facilitates quantitative extrapolation of toxicity observed in the laboratory to the risk of adverse effects in the human population. Here we utilize a physiologically-based toxicokinetic (PBTK) model for toluene, a...

  16. OPPTS Meeting: Linkage Of Exposure And Effects Using Genomics, Proteomics, And Metabolomics In Small Fish Models

    EPA Science Inventory

    Poster for the OPPTS Science Forum. Knowledge of possible toxic mechanisms/modes of action (MOA) of chemicals can provide valuable insights as to appropriate methods for assessing exposure and effects, thereby reducing uncertainties related to extrapolation across species, endpoi...

  17. Interim methods for development of inhalation reference concentrations. Draft report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackburn, K.; Dourson, M.; Erdreich, L.

    1990-08-01

    An inhalation reference concentration (RfC) is an estimate of continuous inhalation exposure over a human lifetime that is unlikely to pose significant risk of adverse noncancer health effects and serves as a benchmark value for assisting in risk management decisions. Derivation of an RfC involves dose-response assessment of animal data to determine the exposure levels at which no significant increase in the frequency or severity of adverse effects between the exposed population and its appropriate control exists. The assessment requires an interspecies dose extrapolation from a no-observed-adverse-effect level (NOAEL) exposure concentration of an animal to a human equivalent NOAEL (NOAEL(HBC)). The RfC is derived from the NOAEL(HBC) by the application of generally order-of-magnitude uncertainty factors. Intermittent exposure scenarios in animals are extrapolated to chronic continuous human exposures. Relationships between external exposures and internal doses depend upon complex simultaneous and consecutive processes of absorption, distribution, metabolism, storage, detoxification, and elimination. To estimate NOAEL(HBC)s when chemical-specific physiologically-based pharmacokinetic models are not available, a dosimetric extrapolation procedure based on anatomical and physiological parameters of the exposed human and animal and the physical parameters of the toxic chemical has been developed which gives equivalent or more conservative exposure concentrations values than those that would be obtained with a PB-PK model.

  18. Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach for improving synthetic aperture radar (SAR) resolution that uses a minimum weighted norm constraint and minimum variance spectrum estimation. The minimum variance method is a robust high-resolution method for spectrum estimation. Based on the theory of SAR imaging, the signal model of SAR imagery is shown to be amenable to data extrapolation methods for improving image resolution. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and traditional imaging, using both simulated and actual measured data.

  19. Physiologically based pharmacokinetic model for quinocetone in pigs and extrapolation to mequindox.

    PubMed

    Zhu, Xudong; Huang, Lingli; Xu, Yamei; Xie, Shuyu; Pan, Yuanhu; Chen, Dongmei; Liu, Zhenli; Yuan, Zonghui

    2017-02-01

    Physiologically based pharmacokinetic (PBPK) models are scientific methods used to predict veterinary drug residues that may occur in food-producing animals, and they have powerful extrapolation ability. Quinocetone (QCT) and mequindox (MEQ) are widely used in China for the prevention of bacterial infections and promoting animal growth, but their abuse causes a potential threat to human health. In this study, a flow-limited PBPK model was developed to simulate simultaneously residue depletion of QCT and its marker residue dideoxyquinocetone (DQCT) in pigs. The model included compartments for blood, liver, kidney, muscle and fat and an extra compartment representing the other tissues. Physiological parameters were obtained from the literature. Plasma protein binding rates, renal clearances and tissue/plasma partition coefficients were determined by in vitro and in vivo experiments. The model was calibrated and validated with several pharmacokinetic and residue-depletion datasets from the literature. Sensitivity analysis and Monte Carlo simulations were incorporated into the PBPK model to estimate individual variation of residual concentrations. The PBPK model for MEQ, the congener compound of QCT, was built through cross-compound extrapolation based on the model for QCT. The QCT model accurately predicted the concentrations of QCT and DQCT in various tissues at most time points, especially the later time points. Correlation coefficients between predicted and measured values for all tissues were greater than 0.9. Monte Carlo simulations showed excellent consistency between estimated concentration distributions and measured data points. The extrapolation model also showed good predictive power. The present models contribute to improve the residue monitoring systems of QCT and MEQ, and provide evidence of the usefulness of PBPK model extrapolation for the same kinds of compounds.
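
    A flow-limited PBPK model of the kind described reduces to a small ODE system in which each tissue takes up drug at its perfusion rate and returns it to blood at C_tissue/Kp. A minimal sketch follows; all flows, volumes, partition coefficients, the hepatic clearance, and the dose are invented for illustration rather than taken from the paper.

      # Sketch: minimal flow-limited PBPK model (blood + 5 tissue
      # compartments, hepatic clearance). All parameter values are invented.
      import numpy as np
      from scipy.integrate import solve_ivp

      tissues = ["liver", "kidney", "muscle", "fat", "rest"]
      Q = dict(liver=0.9, kidney=0.6, muscle=0.8, fat=0.2, rest=0.5)   # L/min
      V = dict(blood=2.0, liver=1.2, kidney=0.3, muscle=15.0, fat=5.0, rest=8.0)
      P = dict(liver=4.0, kidney=3.0, muscle=1.5, fat=20.0, rest=2.0)  # Kp
      CL_hep = 0.3                                                     # L/min

      def rhs(t, y):
          cb, ct = y[0], dict(zip(tissues, y[1:]))
          venous = {k: ct[k] / P[k] for k in tissues}   # tissue outflow conc.
          dcb = sum(Q[k] * (venous[k] - cb) for k in tissues) / V["blood"]
          dct = [(Q[k] * (cb - venous[k])
                  - (CL_hep * venous[k] if k == "liver" else 0.0)) / V[k]
                 for k in tissues]
          return [dcb] + dct

      y0 = [10.0] + [0.0] * 5                  # IV bolus into blood [mg/L]
      sol = solve_ivp(rhs, (0, 240), y0, max_step=1.0)
      print("liver conc. at t = 240 min:", sol.y[1, -1])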

  20. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.

  1. Endocrine disrupting chemicals in fish: developing exposure indicators and predictive models of effects based on mechanism of action

    EPA Science Inventory

    Knowledge of possible toxic mechanisms/modes of action (MOA) of chemicals can provide valuable insights as to appropriate methods for assessing exposure and effects, such as reducing uncertainties related to extrapolation across species, endpoints and chemical structure. However,...

  2. Predicting the acute behavioral effects in rats inhaling toluene for up to 24 hrs: Inhaled vs. internal dose metrics.

    EPA Science Inventory

    The acute toxicity of toluene, a model volatile organic compound (VOC), depends on the concentration (C) and duration (t) of exposure, and guidelines for acute exposures have traditionally used C × t relationships to extrapolate protective and/or effective concentrations across durat...

  3. Chiral extrapolation of nucleon axial charge gA in effective field theory

    NASA Astrophysics Data System (ADS)

    Li, Hong-na; Wang, P.

    2016-12-01

    The extrapolation of the nucleon axial charge gA is investigated within the framework of heavy baryon chiral effective field theory. The intermediate octet and decuplet baryons are included in the one-loop calculation. Finite range regularization is applied to improve the convergence in the quark-mass expansion. Lattice data from three different groups are used for the extrapolation. At the physical pion mass, the extrapolated gA values are all smaller than the experimental value. Supported by the National Natural Science Foundation of China (11475186) and Sino-German CRC 110 (NSFC 11621131001)
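
    A naive version of such a chiral extrapolation, with the one-loop finite-range-regularized form replaced by a plain linear fit in the pion mass squared, can be sketched in a few lines; the lattice points below are invented and the scheme is only a stand-in for the paper's.

      # Sketch: naive linear chiral extrapolation of lattice gA in m_pi^2.
      # The lattice points are invented for illustration.
      import numpy as np

      mpi2 = np.array([0.10, 0.17, 0.26, 0.38])   # pion mass squared [GeV^2]
      gA   = np.array([1.18, 1.16, 1.15, 1.14])   # invented lattice values

      b, a = np.polyfit(mpi2, gA, 1)
      print("gA at the physical point:", a + b * 0.0182)  # (0.135 GeV)^2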

  4. Predicting treatment effect from surrogate endpoints and historical trials: an extrapolation involving probabilities of a binary outcome or survival to a specific time

    PubMed Central

    Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz

    2011-01-01

    Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732

  5. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background: The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods: We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results: Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions: Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018

  6. Physiologically-based Pharmacokinetic(PBPK) Models Application to Screen Environmental Hazards Related to Adverse Outcome Pathways(AOPs)

    EPA Science Inventory

    PBPK models are useful in estimating exposure levels based on in vitro to in vivo extrapolation (IVIVE) calculations. Linkage of large sets of chemically screened in vitro signature effects to in vivo adverse outcomes using IVIVE is central to the concepts of toxicology in the 21st ...

  7. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE PAGES

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; ...

    2016-02-18

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  8. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    USGS Publications Warehouse

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.

    2016-01-01

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  9. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  10. Nowcasting of deep convective clouds and heavy precipitation: Comparison study between NWP model simulation and extrapolation

    NASA Astrophysics Data System (ADS)

    Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr

    2017-02-01

    An evaluation of convective cloud forecasts performed with the numerical weather prediction (NWP) model COSMO and extrapolation of cloud fields is presented using observed data derived from the geostationary satellite Meteosat Second Generation (MSG). The present study focuses on the nowcasting range (1-5 h) for five severe convective storms in their developing stage that occurred during the warm season in the years 2012-2013. Radar reflectivity and extrapolated radar reflectivity data were assimilated for at least 6 h depending on the time of occurrence of convection. Synthetic satellite imageries were calculated using radiative transfer model RTTOV v10.2, which was implemented into the COSMO model. NWP model simulations of IR10.8 μm and WV06.2 μm brightness temperatures (BTs) with a horizontal resolution of 2.8 km were interpolated into the satellite projection and objectively verified against observations using Root Mean Square Error (RMSE), correlation coefficient (CORR) and Fractions Skill Score (FSS) values. Naturally, the extrapolation of cloud fields yielded an approximately 25% lower RMSE, 20% higher CORR and 15% higher FSS at the beginning of the second forecasted hour compared to the NWP model forecasts. On the other hand, comparable scores were observed for the third hour, whereas the NWP forecasts outperformed the extrapolation by 10% for RMSE, 15% for CORR and up to 15% for FSS during the fourth forecasted hour and 15% for RMSE, 27% for CORR and up to 15% for FSS during the fifth forecasted hour. The analysis was completed by a verification of the precipitation forecasts yielding approximately 8% higher RMSE, 15% higher CORR and up to 45% higher FSS when the NWP model simulation is used compared to the extrapolation for the first hour. Both the methods yielded unsatisfactory level of precipitation forecast accuracy from the fourth forecasted hour onward.
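
    For reference, the Fractions Skill Score used in this comparison can be computed on binary exceedance fields with a square neighbourhood average; a minimal sketch follows, with random fields standing in for the forecast and observed grids and the threshold and neighbourhood size chosen arbitrarily.

      # Sketch: Fractions Skill Score (FSS) on binary exceedance fields.
      # Random fields stand in for real forecast/observation grids.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def fss(forecast, observed, threshold, size):
          f_frac = uniform_filter((forecast >= threshold).astype(float), size)
          o_frac = uniform_filter((observed >= threshold).astype(float), size)
          mse = np.mean((f_frac - o_frac) ** 2)
          mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
          return 1.0 - mse / mse_ref

      rng = np.random.default_rng(1)
      forecast, observed = rng.random((60, 60)), rng.random((60, 60))
      print(fss(forecast, observed, threshold=0.8, size=5))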

  11. Estimation of Potential Population Level Effects of Contaminants on Wildlife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loar, J.M.

    2001-06-11

    The objective of this project is to provide DOE with improved methods to assess risks from contaminants to wildlife populations. The current approach for wildlife risk assessment consists of comparison of contaminant exposure estimates for individual animals to literature-derived toxicity test endpoints. These test endpoints are assumed to estimate thresholds for population-level effects. Moreover, species sensitivity to contaminants is one of several criteria to be considered when selecting assessment endpoints (EPA 1997 and 1998), yet data on the sensitivities of many birds and mammals are lacking. The uncertainties associated with this approach are considerable. First, because toxicity data are not available for most potential wildlife endpoint species, extrapolation of toxicity data from test species to the species of interest is required. There is no consensus on the most appropriate extrapolation method. Second, toxicity data are represented as statistical measures (e.g., NOAELs or LOAELs) that provide no information on the nature or magnitude of effects. The level of effect is an artifact of the replication and dosing regime employed, and does not indicate how effects might increase with increasing exposure. Consequently, slight exceedance of a LOAEL is not distinguished from greatly exceeding it. Third, the relationship of toxic effects on individuals to effects on populations is poorly estimated by existing methods. It is assumed that if the exposure of individuals exceeds levels associated with impaired reproduction, then population-level effects are likely. Uncertainty associated with this assumption is large because, depending on the reproductive strategy of a given species, comparable levels of reproductive impairment may result in dramatically different population-level responses. This project included several tasks to address these problems: (1) investigation of the validity of the current allometric scaling approach for interspecies extrapolation and development of new scaling models; (2) development of dose-response models for toxicity data presented in the literature; and (3) development of matrix-based population models that were coupled with dose-response models to provide realistic estimation of population-level effects for individual responses.
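
    Task (3) can be illustrated with a toy example: feed a dose-dependent fecundity reduction into a Leslie matrix and ask when the population growth rate lambda drops below 1. The vital rates and effect levels below are invented, not values from the project.

      # Sketch: coupling a fecundity reduction to a Leslie matrix to translate
      # an organism-level effect into a population growth rate. Illustrative.
      import numpy as np

      def leslie_lambda(fecundity, survival):
          n = len(fecundity)
          L = np.zeros((n, n))
          L[0, :] = fecundity                                   # top row
          L[np.arange(1, n), np.arange(0, n - 1)] = survival    # subdiagonal
          return max(abs(np.linalg.eigvals(L)))                 # dominant root

      f0 = np.array([0.0, 3.0, 4.0])     # age-specific fecundities
      s = np.array([0.5, 0.4])           # age-specific survival
      for effect in (0.0, 0.5, 0.8):     # fractional fecundity reduction
          print(effect, leslie_lambda(f0 * (1 - effect), s))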

  12. Potential application of ecological models in the European environmental risk assessment of chemicals. I. Review of protection goals in EU directives and regulations.

    PubMed

    Hommen, Udo; Baveco, J M Hans; Galic, Nika; van den Brink, Paul J

    2010-07-01

    Several European directives and regulations address the environmental risk assessment of chemicals. We used the protection of freshwater ecosystems against plant protection products, biocidal products, human and veterinary pharmaceuticals, and other chemicals and priority substances under the Water Framework Directive as examples to explore the potential of ecological effect models for a refined risk assessment. Our analysis of the directives, regulations, and related guidance documents led us to distinguish the following 5 areas for the application of ecological models in chemical risk assessment: 1) Extrapolation of organism-level effects to the population level: The protection goals are formulated in general terms, e.g., avoiding "unacceptable effects" or "adverse impact" on the environment or the "viability of exposed species." In contrast, most of the standard ecotoxicological tests provide data only on organism-level endpoints and are thus not directly linked to the protection goals which focus on populations and communities. 2) Extrapolation of effects between different exposure profiles: Especially for plant protection products, exposure profiles can be very variable and impossible to cover in toxicological tests. 3) Extrapolation of recovery processes: As a consequence of the often short-term exposures to plant protection products, the risk assessment is based on the community recovery principle. On the other hand, assessments under the other directives assume a more or less constant exposure and are based on the ecosystem threshold principle. 4) Analysis and prediction of indirect effects: Because effects on 1 or a few taxa might have consequences on other taxa that are not directly affected by the chemical, such indirect effects on communities have to be considered. 5) Prediction of bioaccumulation within food chains: All directives take the possibility of bioaccumulation, and thus secondary poisoning within the food chain, into account. (c) 2010 SETAC.

  13. Developing integral projection models for aquatic ecotoxicology

    EPA Science Inventory

    Extrapolating laboratory measured effects of chemicals to ecologically relevant scales is a fundamental challenge in ecotoxicology. In addition to influencing survival in the wild (e.g., over-winter survival) size has been shown to control onset of reproduction for the toxicologi...

  14. The biotic ligand model approach for addressing effects of exposure water chemistry on aquatic toxicity of metals: Genesis and challenges

    EPA Science Inventory

    A major uncertainty in many aquatic risk assessments for toxic chemicals is the aggregate effect of the physicochemical characteristics of exposure media on toxicity, and how this affects extrapolation of laboratory test results to natural systems. A notable example of this is h...

  15. Predicting the acute behavioral effects in rats inhaling toluene for up to 24 hrs: Inhaled vs. internal dose metrics and tolerance.

    EPA Science Inventory

    The acute toxicity of toluene, a model volatile organic compound (VOC), depends on the concentration (C) and duration (t) of exposure, and guidelines for acute exposures have traditionally used C × t relationships to extrapolate protective and/or effective concentrations across dura...

  16. On the assessment of biological life support system operation range

    NASA Astrophysics Data System (ADS)

    Bartsev, Sergey

    Biological life support systems (BLSS) can be used in long-term space missions only if a well-thought-out assessment of the allowable operating range is available. The range has to account for both the permissible working parameters of the BLSS and the critical level of perturbations of the BLSS stationary state. A direct approach to outlining the range by statistical treatment of experimental data on BLSS destruction seems not to be applicable, for ethical, economic, and time-saving reasons. A mathematical model is the unique tool for generalizing experimental data and extrapolating the revealed regularities beyond empirical experience. The problem is that the quality of extrapolation depends on the adequacy of the corresponding model verification, but good verification requires a wide range of experimental data for fitting, which is not achievable for manned experimental BLSS. A possible way to improve the extrapolation quality of inevitably poorly verified models of manned BLSS is to extrapolate the general tendency obtained from unmanned LSS theoretical-experimental investigations. Possibilities and limitations of such an approach are discussed.

  17. Modeling forest carbon cycle using long-term carbon stock field measurement in the Delaware River Basin

    Treesearch

    Bing Xu; Yude Pan; Alain F. Plante; Kevin McCullough; Richard Birdsey

    2017-01-01

    Process-based models are a powerful approach to test our understanding of biogeochemical processes, to extrapolate ground survey data from limited plots to the landscape scale, and to simulate the effects of climate change, nitrogen deposition, elevated atmospheric CO2, increasing natural disturbances, and land-use change on ecological processes...

  18. The Extrapolation of Elementary Sequences

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Saul, Ronald

    1992-01-01

    We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.

  19. The value of remote sensing techniques in supporting effective extrapolation across multiple marine spatial scales.

    PubMed

    Strong, James Asa; Elliott, Michael

    2017-03-15

    The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly entrain significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Measured Copper Toxicity to Cnesterodon decemmaculatus (Pisces: Poeciliidae) and Predicted by Biotic Ligand Model in Pilcomayo River Water: A Step for a Cross-Fish-Species Extrapolation

    PubMed Central

    Casares, María Victoria; de Cabo, Laura I.; Seoane, Rafael S.; Natale, Oscar E.; Castro Ríos, Milagros; Weigandt, Cristian; de Iorio, Alicia F.

    2012-01-01

    In order to determine copper toxicity (LC50) to a local species (Cnesterodon decemmaculatus) in the South American Pilcomayo River water and evaluate a cross-fish-species extrapolation of Biotic Ligand Model, a 96 h acute copper toxicity test was performed. The dissolved copper concentrations tested were 0.05, 0.19, 0.39, 0.61, 0.73, 1.01, and 1.42 mg Cu L−1. The 96 h Cu LC50 calculated was 0.655 mg L−1 (0.823 − 0.488). 96-h Cu LC50 predicted by BLM for Pimephales promelas was 0.722 mg L−1. Analysis of the inter-seasonal variation of the main water quality parameters indicates that a higher protective effect of calcium, magnesium, sodium, sulphate, and chloride is expected during the dry season. The very high load of total suspended solids in this river might be a key factor in determining copper distribution between solid and solution phases. A cross-fish-species extrapolation of copper BLM is valid within the water quality parameters and experimental conditions of this toxicity test. PMID:22523491
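
    As a sketch of how such an LC50 is typically obtained, the snippet below fits a two-parameter log-logistic dose-response curve to mortality fractions; the concentrations mirror the test above, but the mortality fractions are invented, so the fitted value is illustrative only.

      # Sketch: LC50 from a log-logistic dose-response fit. Concentrations
      # mirror the test; mortality fractions are invented for illustration.
      import numpy as np
      from scipy.optimize import curve_fit

      conc = np.array([0.05, 0.19, 0.39, 0.61, 0.73, 1.01, 1.42])  # mg Cu/L
      mort = np.array([0.0, 0.05, 0.20, 0.45, 0.60, 0.85, 1.00])

      def loglogistic(c, lc50, slope):
          return 1.0 / (1.0 + (lc50 / c) ** slope)

      (lc50, slope), _ = curve_fit(loglogistic, conc, mort, p0=[0.6, 3.0])
      print("LC50 [mg/L]:", lc50)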

  1. Application of a framework for extrapolating chemical effects across species in pathways controlled by estrogen receptor-α

    EPA Science Inventory

    Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...

  2. Height extrapolation of wind data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikhail, A.S.

    1982-11-01

    Hourly average data for a period of 1 year from three tall meteorological towers - the Erie tower in Colorado, the Goodnoe Hills tower in Washington and the WKY-TV tower in Oklahoma - were used to analyze the variability of the wind shear exponent with various parameters such as thermal stability, anemometer-level wind speed, projection height and surface roughness. Different proposed models for predicting the height variability of short-term average wind speeds are discussed. Other models that predict the height dependence of Weibull distribution parameters were tested. The observed power law exponent for all three towers showed strong dependence on the anemometer-level wind speed and stability (nighttime and daytime). It also exhibited a high degree of dependence on extrapolation height with respect to anemometer height. These dependences became less severe as the anemometer-level wind speeds increased, due to the turbulent mixing of the atmospheric boundary layer. The three models used for Weibull distribution parameter extrapolation were the velocity-dependent power law model (Justus); the velocity, surface roughness, and height-dependent model (Mikhail); and the velocity and surface roughness-dependent model (NASA). The models projected the scale parameter C fairly accurately for the Goodnoe Hills and WKY-TV towers and were less accurate for the Erie tower. However, all models overestimated the C value. The maximum error for the Mikhail model was less than 2% for Goodnoe Hills, 6% for WKY-TV and 28% for Erie. The error associated with the prediction of the shape factor (K) was similar for the NASA, Mikhail and Justus models, ranging from 20 to 25%. The effect of misestimating the hub-height distribution parameters (C and K) on average power output is briefly discussed.
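
    As an illustration of the velocity-dependent power-law approach, the sketch below uses the commonly quoted Justus-Mikhail form of the exponent; the constants 0.37 and 0.088 are the textbook values and may differ from the fits in this report, and the anemometer reading and heights are invented.

      # Sketch: power-law height extrapolation of wind speed with a
      # velocity-dependent exponent (textbook Justus-Mikhail form).
      import numpy as np

      def justus_exponent(u_ref, z_ref=10.0):
          return (0.37 - 0.088 * np.log(u_ref)) / \
                 (1.0 - 0.088 * np.log(z_ref / 10.0))

      u10, z_hub = 6.0, 60.0            # 6 m/s at 10 m, extrapolate to 60 m
      alpha = justus_exponent(u10)
      print("alpha:", alpha)
      print("hub-height speed [m/s]:", u10 * (z_hub / 10.0) ** alpha)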

  3. New method of extrapolation of the resistance of a model planing boat to full size

    NASA Technical Reports Server (NTRS)

    Sottorf, W

    1942-01-01

    The previously employed method of extrapolating the total resistance to full size with λ³ (λ = model scale), thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever-increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is obtained analytically. The friction coefficients are read from Prandtl's curve for a turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.
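
    The friction correction rests on a standard friction line; a sketch of the Prandtl-Schlichting turbulent coefficient with a laminar-approach correction follows. The 0.455/(log10 Re)^2.58 form and the transition constant A = 1700 (transition near Re = 5e5) are textbook values and may not match the report's exact curve.

      # Sketch: Prandtl-Schlichting friction line with laminar-approach
      # correction, evaluated from model- to full-scale Reynolds numbers.
      import math

      def cf_prandtl_schlichting(re, a_transition=1700.0):
          return 0.455 / math.log10(re) ** 2.58 - a_transition / re

      for re in (1e6, 1e7, 1e8, 1e9):    # model scale -> full scale
          print(f"Re={re:.0e}  Cf={cf_prandtl_schlichting(re):.5f}")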

  4. If you try to stop smoking, should we pay for it? The cost-utility of reimbursing smoking cessation support in the Netherlands.

    PubMed

    Vemer, Pepijn; Rutten-van Mölken, Maureen P M H; Kaper, Janneke; Hoogenveen, Rudolf T; van Schayck, C P; Feenstra, Talitha L

    2010-06-01

    Smoking cessation can be encouraged by reimbursing the costs of smoking cessation support (SCS). The short-term efficiency of reimbursement has been evaluated previously, but a thorough estimate of the long-term cost-utility is lacking. Our aim was to evaluate the long-term effects of reimbursement of SCS. Results from a randomized controlled trial were extrapolated to long-term outcomes in terms of health care costs and (quality-adjusted) life years (QALYs) gained, using the Chronic Disease Model. Our first scenario was no reimbursement. In a second scenario, the short-term cessation rates from the trial were extrapolated directly. Sensitivity analyses were based on the trial's confidence intervals. In the third scenario the additional use of SCS as found in the trial was combined with cessation rates from international meta-analyses. Intervention costs per QALY gained compared to the reference scenario were approximately €1,200 when extrapolating the trial effects directly, and €4,200 when combining the trial's use of SCS with the cessation rates from the literature. Taking all health care effects into account, including costs in life years gained, resulted in an estimated incremental cost-utility of €4,500 and €7,400, respectively. In both scenarios costs per QALY remained below €16,000 in sensitivity analyses using a lifetime horizon. Extrapolating the higher use of SCS due to reimbursement led to more successful quitters and a gain in life years and QALYs. Accounting for overheads, administration costs and the costs of SCS, these health gains could be obtained at relatively low cost, even when including costs in life years gained. Hence, reimbursement of SCS seems to be cost-effective from a health care perspective.

  5. Estimation of Survival Probabilities for Use in Cost-effectiveness Analyses: A Comparison of a Multi-state Modeling Survival Analysis Approach with Partitioned Survival and Markov Decision-Analytic Modeling

    PubMed Central

    Williams, Claire; Lewsey, James D.; Mackay, Daniel F.; Briggs, Andrew H.

    2016-01-01

    Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results. PMID:27698003
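
    A minimal sketch of the partitioned-survival bookkeeping described above: time in the progression state is taken as the area between the overall-survival and progression-free-survival curves, with simple exponential curves standing in for the fitted extrapolations.

      # Sketch: partitioned-survival areas. The two exponential curves are
      # chosen for illustration, not fitted to the trial data.
      import numpy as np

      t = np.linspace(0, 15, 1501)               # years
      os_curve  = np.exp(-0.10 * t)              # overall survival
      pfs_curve = np.exp(-0.25 * t)              # progression-free survival

      mean_os  = np.trapz(os_curve, t)           # life-years
      mean_pfs = np.trapz(pfs_curve, t)
      print("years progression-free:", mean_pfs)
      print("years in progression  :", mean_os - mean_pfs)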

  6. Estimation of Survival Probabilities for Use in Cost-effectiveness Analyses: A Comparison of a Multi-state Modeling Survival Analysis Approach with Partitioned Survival and Markov Decision-Analytic Modeling.

    PubMed

    Williams, Claire; Lewsey, James D; Mackay, Daniel F; Briggs, Andrew H

    2017-05-01

    Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results.

  7. EXTRAPOLATION MODELING: ADVANCEMENTS AND RESEARCH ISSUES IN LUNG DOSIMETRY

    EPA Science Inventory

    Many of the environmental pollutants to which humans are exposed are increasing rapidly, in terms of number, complexity, and concentration. One of the great challenges in environmental medicine is to define more accurately the adverse health effects likely to be encountered by exp...

  8. The Atomization Energy of Mg4

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)

    1999-01-01

    The atomization energy of Mg4 is determined using the MP2 and CCSD(T) levels of theory. Basis set incompleteness, basis set extrapolation, and core-valence effects are discussed. Our best atomization energy, including the zero-point energy and scalar relativistic effects, is 24.6 +/- 1.6 kcal/mol. Our computed and extrapolated values are compared with previous results, where it is observed that our extrapolated MP2 value is in good agreement with the MP2-R12 value. The CCSD(T) and MP2 core effects are found to have opposite signs.
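
    A common two-point scheme for the basis-set extrapolation mentioned above assumes the correlation energy converges as E(X) = E_CBS + A/X^3 in the cardinal number X; solving with results from two basis sets gives the complete-basis-set limit. The energies below are invented placeholders, not the Mg4 values.

    ```python
    # Two-point inverse-cube basis-set extrapolation: E(X) = E_CBS + A/X**3.
    def cbs_two_point(e_x, x, e_y, y):
        """Extrapolate from cardinal numbers x < y to the basis-set limit."""
        return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

    # Hypothetical correlation energies (hartree) for triple/quadruple-zeta sets.
    print(f"E(CBS) = {cbs_two_point(-0.4102, 3, -0.4178, 4):.4f} hartree")
    ```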

  9. Using physiologically based pharmacokinetic modeling to address nonlinear kinetics and changes in rodent physiology and metabolism due to aging and adaptation in deriving reference values for propylene glycol methyl ether and propylene glycol methyl ether acetate.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirman, C R.; Sweeney, Lisa M.; Corley, Rick A.

    2005-04-01

    Reference values, including an oral reference dose (RfD) and an inhalation reference concentration (RfC), were derived for propylene glycol methyl ether (PGME), and an oral RfD was derived for its acetate (PGMEA). These values were based upon transient sedation observed in F344 rats and B6C3F1 mice during a two-year inhalation study. The dose-response relationship for sedation was characterized using internal dose measures as predicted by a physiologically based pharmacokinetic (PBPK) model for PGME and its acetate. PBPK modeling was used to account for changes in rodent physiology and metabolism due to aging and adaptation, based on data collected during weeks 1, 2, 26, 52, and 78 of a chronic inhalation study. The peak concentration of PGME in richly perfused tissues was selected as the most appropriate internal dose measure based upon a consideration of the mode of action for sedation and similarities in tissue partitioning between brain and other richly perfused tissues. Internal doses (peak tissue concentrations of PGME) were designated as either no-observed-adverse-effect levels (NOAELs) or lowest-observed-adverse-effect levels (LOAELs) based upon the presence or absence of sedation at each time point, species, and sex in the two-year study. Distributions of the NOAEL and LOAEL values expressed in terms of internal dose were characterized using an arithmetic mean and standard deviation, and the mean internal NOAEL, divided by appropriate uncertainty factors, served as the basis for the reference values. Where data permitted, chemical-specific adjustment factors were derived to replace default uncertainty factor values of ten. Nonlinear kinetics were predicted by the model in all species at PGME concentrations exceeding 100 ppm, which complicates interspecies and low-dose extrapolations. To address this complication, reference values were derived using two approaches that differ in the order in which these extrapolations were performed: (1) uncertainty factor application followed by interspecies extrapolation (PBPK modeling); and (2) interspecies extrapolation followed by uncertainty factor application. The resulting reference values are substantially different, with values from the former approach being 7-fold higher than those from the latter. Such a striking difference reveals an underlying issue that has received little attention in the literature regarding the application of uncertainty factors and interspecies extrapolations to compounds with saturable kinetics in the range of the NOAEL. Until such discussions have taken place, reference values based on the latter approach are recommended for risk assessments involving human exposures to PGME and PGMEA.
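
    The order-of-operations issue highlighted above can be illustrated with a toy saturable dose metric, a Michaelis-Menten form standing in for the PBPK model; every number below is invented.

    ```python
    # Toy illustration of why the order of operations matters when the
    # internal dose saturates. All numbers are invented.
    def internal_dose(external_ppm, vmax=400.0, km=100.0):
        return vmax * external_ppm / (km + external_ppm)

    def external_from_internal(internal, vmax=400.0, km=100.0):
        return km * internal / (vmax - internal)   # inverse of the curve above

    noael_internal = internal_dose(300.0)  # internal dose at a hypothetical NOAEL
    uf = 30.0                              # combined uncertainty factor

    # Approach 1: apply the UF to the internal dose, then invert to exposure.
    ref_1 = external_from_internal(noael_internal / uf)
    # Approach 2: invert to exposure first, then apply the UF.
    ref_2 = external_from_internal(noael_internal) / uf
    print(ref_1, ref_2)   # the two orders disagree because the metric is nonlinear
    ```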

  10. Exposition of humans to low doses and low dose rate irradiation: an urgent need for new markers and new models.

    PubMed

    Chenal, C; Legue, F; Nourgalieva, K; Brouazin-Jousseaume, V; Durel, S; Guitton, N

    2000-01-01

    In human radiation protection, the shape of the dose-effect curve for low-dose irradiation (LDI) is assumed to be linear, extrapolated from the clinical consequences of the Hiroshima and Nagasaki nuclear explosions. This extrapolation probably overestimates the risk below 200 mSv. In many circumstances, living species and cells can develop mechanisms of adaptation. Classical epidemiological studies will not be able to answer the question, and there is a need for more sensitive biological markers of the effects of LDI. Research should focus on DNA effects (strand breaks), radioinduced expression of new genes, and proteins involved in the response to oxidative stress and in DNA repair mechanisms. New experimental biomolecular techniques should be developed in parallel with more conventional ones. Such studies would permit the assessment of new biological markers of radiosensitivity, which could be of great interest in radiation protection and radio-oncology.

  11. Identification of the viscoelastic properties of soft materials at low frequency: performance, ill-conditioning and extrapolation capabilities of fractional and exponential models.

    PubMed

    Ciambella, J; Paolone, A; Vidoli, S

    2014-09-01

    We report on the experimental identification of viscoelastic constitutive models for frequencies ranging within 0-10 Hz. Dynamic moduli data are fitted for several materials of interest to medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012) and synthetic elastomer (Osanaiye, 1996). These materials represent a rather wide class of soft viscoelastic materials that are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior to higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are then rated according to the accuracy of the identification and their robustness under extrapolation. It is shown that all kernels whose decay rate is too fast lead to poor fits and to large errors when the material behavior is extrapolated to broader frequency ranges. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
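
    For reference, the kernel families compared above can be written as special cases of the Havriliak-Negami relaxation function in the frequency domain; the sketch below (with illustrative parameters) reduces to Debye for alpha = beta = 1, Cole-Cole for beta = 1, and Davidson-Cole for alpha = 1.

    ```python
    # Havriliak-Negami relaxation kernel in the frequency domain; special
    # cases give the families compared above. Parameters are illustrative.
    import numpy as np

    def hn_kernel(omega, tau, alpha, beta):
        return 1.0 / (1.0 + (1j * omega * tau)**alpha)**beta

    freq = np.logspace(-2, 1, 50)          # 0.01-10 Hz, the range studied above
    omega = 2.0 * np.pi * freq

    debye = hn_kernel(omega, tau=0.5, alpha=1.0, beta=1.0)          # exponential
    cole_cole = hn_kernel(omega, tau=0.5, alpha=0.6, beta=1.0)      # broadened
    davidson_cole = hn_kernel(omega, tau=0.5, alpha=1.0, beta=0.8)  # asymmetric
    ```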

  12. EXPOSURE-DOSE-RESPONSE MODELING OF THE NEUROTOXIC EFFECTS OF ORGANIC SOLVENTS.

    EPA Science Inventory

    Risk assessments based on exposure to volatile organic compounds (VOCs) are hampered by the complexities of exposure scenarios, a lack of data regarding the mode of action of the VOCs, and uncertainties about extrapolating from animal data to human health risk. We are developing ...

  13. Structural analysis of cylindrical thrust chambers, volume 1

    NASA Technical Reports Server (NTRS)

    Armstrong, W. H.

    1979-01-01

    Life predictions of regeneratively cooled rocket thrust chambers are normally derived from classical material fatigue principles. The failures observed in experimental thrust chambers do not appear to be due entirely to material fatigue. The chamber coolant walls in the failed areas exhibit progressive bulging and thinning during cyclic firings until the wall stress finally exceeds the material rupture stress and failure occurs. A preliminary analysis of an oxygen-free high-conductivity (OFHC) copper cylindrical thrust chamber demonstrated that including cumulative cyclic plastic effects enables the observed coolant wall thinout to be predicted. The thinout curve constructed from the reference analysis of 10 firing cycles was extrapolated from the tenth cycle to the 200th cycle. The preliminary OFHC copper chamber 10-cycle analysis was extended so that the extrapolated thinout curve could be established by performing cyclic analyses of the deformed configurations at 100 and 200 cycles. Thus the original range of extrapolation was reduced, and the thinout curve was adjusted by using calculated thinout rates at 100 and 200 cycles. An analysis of the same undeformed chamber model constructed of half-hard Amzirc, to study the effect of material properties on the thinout curve, is included.

  14. Black swans, power laws, and dragon-kings: Earthquakes, volcanic eruptions, landslides, wildfires, floods, and SOC models

    NASA Astrophysics Data System (ADS)

    Sachs, M. K.; Yoder, M. R.; Turcotte, D. L.; Rundle, J. B.; Malamud, B. D.

    2012-05-01

    Extreme events that change global society have been characterized as black swans. The frequency-size distributions of many natural phenomena are often well approximated by power-law (fractal) distributions. An important question is whether the probability of extreme events can be estimated by extrapolating the power-law distributions. Events that exceed these extrapolations have been characterized as dragon-kings. In this paper we consider extreme events for earthquakes, volcanic eruptions, wildfires, landslides and floods. We also consider the extreme-event behavior of three models that exhibit self-organized criticality (SOC): the slider-block, forest-fire, and sand-pile models. Since extrapolations using power laws are widely used in probabilistic hazard assessment, the occurrence of dragon-king events has important practical implications.
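
    A minimal sketch of the extrapolation at issue, using a synthetic catalog in place of real event data: fit the power-law frequency-size relation over the observed range and extrapolate it to an extreme size; a dragon-king would be an event occurring well above this extrapolated frequency.

    ```python
    # Fit a power-law frequency-size relation and extrapolate it to an
    # extreme size. The catalog is synthetic.
    import numpy as np

    rng = np.random.default_rng(42)
    sizes = np.logspace(0, 3, 20)                       # observed event sizes
    counts = 500.0 * sizes**-1.0 * rng.lognormal(0.0, 0.1, sizes.size)

    slope, intercept = np.polyfit(np.log10(sizes), np.log10(counts), 1)
    extreme = 1.0e5                                     # beyond the observed range
    predicted = 10 ** (intercept + slope * np.log10(extreme))
    print(f"slope {slope:.2f}; extrapolated count at size 1e5: {predicted:.3g}")
    ```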

  15. Adsorption of pharmaceuticals onto activated carbon fiber cloths - Modeling and extrapolation of adsorption isotherms at very low concentrations.

    PubMed

    Fallou, Hélène; Cimetière, Nicolas; Giraudet, Sylvain; Wolbert, Dominique; Le Cloirec, Pierre

    2016-01-15

    Activated carbon fiber cloths (ACFC) have shown promising results when applied to water treatment, especially for removing organic micropollutants such as pharmaceutical compounds. Nevertheless, further investigations are required, especially at the trace concentrations found in current water treatment. Until now, most studies have been carried out at relatively high concentrations (mg L⁻¹), since the experimental and analytical methodologies are more difficult and more expensive when dealing with lower concentrations (ng L⁻¹). Therefore, the objective of this study was to validate an extrapolation procedure from high to low concentrations for four compounds (carbamazepine, diclofenac, caffeine and acetaminophen). For this purpose, the reliability of the usual adsorption isotherm models, when extrapolated from high (mg L⁻¹) to low concentrations (ng L⁻¹), was assessed, as well as the influence of numerous error functions. Some isotherm models (Freundlich, Toth) and error functions (RSS, ARE) proved too weak to describe adsorption isotherms at low concentrations. However, the pairing of the Langmuir-Freundlich isotherm model with Marquardt's percent standard deviation emerged as the best combination, enabling the extrapolation of adsorption capacities across orders of magnitude. Copyright © 2015 Elsevier Ltd. All rights reserved.
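
    A hedged sketch of the best-performing pairing reported above, with synthetic isotherm points standing in for the measurements: the Langmuir-Freundlich (Sips) isotherm fitted by minimizing Marquardt's percent standard deviation (MPSD), then evaluated at a trace concentration.

    ```python
    # Langmuir-Freundlich (Sips) isotherm fitted with Marquardt's percent
    # standard deviation (MPSD). Isotherm points are synthetic placeholders.
    import numpy as np
    from scipy.optimize import minimize

    c = np.array([1e3, 1e4, 1e5, 1e6, 1e7])           # ng/L, high-conc. data
    q = np.array([12.0, 55.0, 160.0, 310.0, 420.0])   # mg/g, adsorbed amounts

    def sips(conc, qm, k, n):
        return qm * (k * conc)**n / (1.0 + (k * conc)**n)

    def mpsd(params):
        qm, k, n = params
        return np.sqrt(np.sum(((q - sips(c, qm, k, n)) / q) ** 2) / (q.size - 3))

    fit = minimize(mpsd, x0=[500.0, 1e-6, 0.7], method="Nelder-Mead")
    qm, k, n = fit.x
    print(sips(10.0, qm, k, n))   # capacity extrapolated to a trace level, 10 ng/L
    ```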

  16. Comparison of the effectiveness of some common animal data scaling techniques in estimating human radiation dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sparks, R.B.; Aydogan, B.

    In the development of new radiopharmaceuticals, animal studies are typically performed to get a first approximation of the expected radiation dose in humans. This study evaluates the performance of some commonly used data extrapolation techniques to predict residence times in humans using data collected from animals. Residence times were calculated using animal and human data, and distributions of ratios of the animal results to human results were constructed for each extrapolation method. Four methods using animal data to predict human residence times were examined: (1) using no extrapolation, (2) using relative organ mass extrapolation, (3) using physiological time extrapolation, and (4) using a combination of the mass and time methods. The residence time ratios were found to be log-normally distributed for the nonextrapolated and extrapolated data sets. The use of relative organ mass extrapolation yielded no statistically significant change in the geometric mean or variance of the residence time ratios as compared to using no extrapolation. Physiological time extrapolation yielded a statistically significant improvement (p < 0.01, paired t test) in the geometric mean of the residence time ratio, from 0.5 to 0.8. Combining mass and time methods did not significantly improve the results of using time extrapolation alone.
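
    For concreteness, method (3) above, physiological time extrapolation, is commonly implemented by scaling biological time with body mass to the 1/4 power; the masses and residence time below are illustrative.

    ```python
    # Physiological time extrapolation: biological time is assumed to
    # scale with body mass to the 1/4 power. All values are illustrative.
    def physiological_time_scaling(residence_h_animal, m_animal_kg, m_human_kg=70.0):
        return residence_h_animal * (m_human_kg / m_animal_kg) ** 0.25

    print(physiological_time_scaling(2.0, m_animal_kg=0.25))  # rat -> human, hours
    ```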

  17. Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.

    PubMed

    Le, Guigao; Oulaid, Othmane; Zhang, Junfeng

    2015-03-01

    In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including a simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating steady and unsteady convection-diffusion systems with a flat interface and a steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with the respective analytical solutions. A more general system with an unsteady convection-diffusion process and a curved interface, i.e., the cooling of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that a cylinder with a larger heat capacity releases more heat energy into the fluid and cools down more slowly, while enhanced heat conduction inside the cylinder facilitates the cooling of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.
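
    A generic sketch of extrapolation along the interface normal for a conjugate condition (not the paper's exact lattice Boltzmann formulation): the interface temperature is set by continuity of temperature and normal flux, and ghost values on each side are extrapolated linearly through it.

    ```python
    # Conjugate interface along the normal: continuity of temperature and
    # flux fixes the interface value; ghost nodes are linear extrapolations.
    def interface_temperature(t1, t2, k1, k2, d1, d2):
        """Flux continuity k1*(t1-ti)/d1 = k2*(ti-t2)/d2, solved for ti."""
        return (k1 * t1 / d1 + k2 * t2 / d2) / (k1 / d1 + k2 / d2)

    def ghost_value(t_interior, t_interface):
        return 2.0 * t_interface - t_interior   # linear extrapolation along normal

    ti = interface_temperature(t1=1.0, t2=0.0, k1=2.0, k2=0.5, d1=0.5, d2=0.5)
    print(ti, ghost_value(1.0, ti))
    ```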

  18. Extrapolation of bulk rock elastic moduli of different rock types to high pressure conditions and comparison with texture-derived elastic moduli

    NASA Astrophysics Data System (ADS)

    Ullemeyer, Klaus; Lokajíček, Tomás; Vasin, Roman N.; Keppler, Ruth; Behrmann, Jan H.

    2018-02-01

    In this study, the elastic moduli of three rock types of simple (calcite marble) and more complex (amphibolite, micaschist) mineralogical composition were determined by modeling from texture (crystallographic preferred orientation; CPO) data, by experimental investigation, and by extrapolation. 3D models were calculated using single-crystal elastic moduli and CPO measured by time-of-flight neutron diffraction at the SKAT diffractometer in Dubna (Russia) and subsequently analyzed using Rietveld Texture Analysis. To define extrinsic factors influencing elastic behaviour, P-wave and S-wave velocity anisotropies were experimentally determined at 200, 400 and 600 MPa confining pressure. Functions describing the variation of the elastic moduli with confining pressure were then used to predict elastic properties at 1000 MPa, revealing anisotropies in a supposedly crack-free medium. In the calcite marble, elastic anisotropy is dominated by the CPO. Velocities continuously increase, while anisotropies decrease, from measured through extrapolated to CPO-derived data. Differences in velocity patterns with sample orientation suggest that the foliation forms an important mechanical anisotropy. The amphibolite sample shows similar magnitudes of extrapolated and CPO-derived velocities; however, the pattern of CPO-derived velocity is closer to that measured at 200 MPa. Anisotropy decreases from the extrapolated to the CPO-derived data. In the micaschist, velocities are higher and anisotropies are lower in the extrapolated data than in the measurements at lower pressures. Generally, our results show that predictions of the elastic behavior of rocks at great depths are possible based on experimental data and on data computed from CPO. The elastic properties of the lower crust can thus be characterized with an improved degree of confidence using extrapolations. Anisotropically distributed spherical micro-pores are likely to be preserved, affecting seismic velocity distributions. Compositional variations in the polyphase rock samples do not significantly change the velocity patterns, allowing the use of RTA-derived volume percentages for the modeling of elastic moduli.
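
    A hedged sketch of the pressure-extrapolation step, assuming the common decomposition of velocity into a linear intrinsic trend plus an exponential crack-closure term, V(P) = a + b*P - c*exp(-P/d); the velocity values below are hypothetical.

    ```python
    # Fit the pressure dependence and extrapolate to 1000 MPa. The
    # functional form and all velocity values are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def v_of_p(p_mpa, a, b, c, d):
        return a + b * p_mpa - c * np.exp(-p_mpa / d)  # trend + crack closure

    p = np.array([100.0, 200.0, 300.0, 400.0, 500.0, 600.0])   # MPa
    v = np.array([6.05, 6.19, 6.26, 6.31, 6.33, 6.35])         # km/s, hypothetical

    params, _ = curve_fit(v_of_p, p, v, p0=[6.3, 1e-4, 0.5, 150.0], maxfev=10000)
    print(f"extrapolated V(1000 MPa) = {v_of_p(1000.0, *params):.2f} km/s")
    ```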

  19. A linear and non-linear polynomial neural network modeling of dissolved oxygen content in surface water: Inter- and extrapolation performance with inputs' significance analysis.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-01-01

    Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, besides interpolation, were to determine the extrapolation performance of an ANN model developed for the prediction of DO content in the Danube River, and to assess the relationship between input significance and prediction error in the presence of values outside the training range. The applied ANN is a polynomial neural network (PNN), which performs embedded selection of the most important inputs during learning and provides a model in the form of linear and non-linear polynomial functions that can then be used for a detailed analysis of input significance. The available dataset of 1912 monitoring records for 17 water quality parameters was split into a "regular" subset containing normally distributed, low-variability data and an "extreme" subset containing monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R² = 0.82) but is not robust in extrapolation (R² = 0.63). The analysis of the extrapolation results showed that prediction errors are correlated with input significance. Namely, out-of-training-range values of inputs with low importance do not significantly affect PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. It was observed that the DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises above that of pH and BOD. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Kinetic Monte Carlo simulations of water ice porosity: extrapolations of deposition parameters from the laboratory to interstellar space

    NASA Astrophysics Data System (ADS)

    Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.

    2018-02-01

    Using an off-lattice kinetic Monte Carlo model we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicate that these ices may be less porous than laboratory ices.

  1. An analysis of shock coalescence including three-dimensional effects with application to sonic boom extrapolation. Ph.D. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1984-01-01

    A method for analyzing shock coalescence which includes three-dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with, and has been combined with, a nonlinear sonic boom extrapolation program based on the method of characteristics. The extrapolation program is able to extrapolate pressure signatures which include embedded shocks from an initial data line in the plane of symmetry, at approximately one body length from the axis of the aircraft, to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine the spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.

  2. 42 CFR 81.11 - Use of uncertainty analysis in NIOSH-IREP.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... uncertainties in estimating: radiation dose incurred by the covered employee; the radiation dose-cancer relationship (statistical uncertainty in the specific cancer risk model); the extrapolation of risk (risk transfer) from the Japanese to the U.S. population; differences in the amount of cancer effect caused by...

  3. 42 CFR 81.11 - Use of uncertainty analysis in NIOSH-IREP.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... uncertainties in estimating: radiation dose incurred by the covered employee; the radiation dose-cancer relationship (statistical uncertainty in the specific cancer risk model); the extrapolation of risk (risk transfer) from the Japanese to the U.S. population; differences in the amount of cancer effect caused by...

  4. Molecular Modeling for Screening Environmental Chemicals for Estrogenicity: Use of the Toxicant-Target Approach

    EPA Science Inventory

    There is a paucity of relevant experimental information available for the evaluation of the potential health and environmental effects of many man made chemicals. Knowledge of the potential pathways for activity provides a rational basis for the extrapolations inherent in the pre...

  5. FROM ORGANISMS TO POPULATIONS: MODELING AQUATIC TOXICITY DATA ACROSS TWO LEVELS OF BIOLOGICAL ORGANIZATION.

    EPA Science Inventory

    A critical step in estimating the ecological effects of a toxicant is extrapolating organism-level response data across higher levels of biological organization. In the present study, the organism-to-population link is made for the mysid, Americamysis bahia, exposed to a range of...

  6. Toxicity testing in the 21st Century: Extrapolation from in vitro to in vivo using xenoestrogens as a model

    EPA Science Inventory

    Slides associated with the following abstract: Numerous sources contribute to widespread contamination of drinking water sources with both natural and synthetic estrogens, which is a concern for potential ecological and human health effects. In vitro screening assays are valuabl...

  7. MEETING IN PORTUGAL: LINKAGE OF EXPOSURE AND EFFECTS USING GENOMICS, PROTEOMICS AND METABOLOMICS IN SMALL FISH MODELS

    EPA Science Inventory

    With an interdisciplinary team of scientists from U.S. Government Agencies and Universities, we are utilizing zebrafish and fathead minnow to develop techniques for extrapolation of chemical stressor impacts across species, chemicals and endpoints. The linkage of responses acros...

  8. 42 CFR 81.11 - Use of uncertainty analysis in NIOSH-IREP.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... uncertainties in estimating: radiation dose incurred by the covered employee; the radiation dose-cancer relationship (statistical uncertainty in the specific cancer risk model); the extrapolation of risk (risk transfer) from the Japanese to the U.S. population; differences in the amount of cancer effect caused by...

  9. 42 CFR 81.11 - Use of uncertainty analysis in NIOSH-IREP.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... uncertainties in estimating: radiation dose incurred by the covered employee; the radiation dose-cancer relationship (statistical uncertainty in the specific cancer risk model); the extrapolation of risk (risk transfer) from the Japanese to the U.S. population; differences in the amount of cancer effect caused by...

  10. 42 CFR 81.11 - Use of uncertainty analysis in NIOSH-IREP.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... uncertainties in estimating: radiation dose incurred by the covered employee; the radiation dose-cancer relationship (statistical uncertainty in the specific cancer risk model); the extrapolation of risk (risk transfer) from the Japanese to the U.S. population; differences in the amount of cancer effect caused by...

  11. Nonlinear cancer response at ultralow dose: a 40800-animal ED(001) tumor and biomarker study.

    PubMed

    Bailey, George S; Reddy, Ashok P; Pereira, Clifford B; Harttig, Ulrich; Baird, William; Spitsbergen, Jan M; Hendricks, Jerry D; Orner, Gayle A; Williams, David E; Swenberg, James A

    2009-07-01

    Assessment of human cancer risk from animal carcinogen studies is severely limited by inadequate experimental data at environmentally relevant exposures and by procedures requiring modeled extrapolations many orders of magnitude below observable data. We used rainbow trout, an animal model well suited to ultralow-dose carcinogenesis research, to explore dose-response down to a targeted 10 excess liver tumors per 10,000 animals (ED001). A total of 40,800 trout were fed 0-225 ppm dibenzo[a,l]pyrene (DBP) for 4 weeks, sampled for biomarker analyses, and returned to a control diet for 9 months prior to gross and histologic examination. Suspect tumors were confirmed by pathology, and the resulting incidences were modeled and compared to the default EPA LED10 linear extrapolation method. The study provided observed incidence data down to two above-background liver tumors per 10,000 animals at the lowest dose (that is, an unmodeled ED0002 measurement). Among nine statistical models explored, three were determined to fit the liver data well: linear probit, quadratic logit, and Ryzin-Rai. None of these fitted models is compatible with the LED10 default assumption, and all fell increasingly below the default extrapolation with decreasing DBP dose. Low-dose tumor response was also not predictable from hepatic DBP-DNA adduct biomarkers, which accumulated as a power function of dose (adducts = 100 x DBP^1.31). Two-order extrapolations below the modeled tumor data predicted DBP doses producing one excess cancer per million individuals (ED(10^-6)) that were 500-1500-fold higher than predicted by the five-order LED10 extrapolation. These results are considered specific to the animal model, carcinogen, and protocol used. They provide the first experimental estimation in any model of the degree of conservatism that may exist in the EPA default linear assumption for a genotoxic carcinogen.
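
    For comparison, the EPA default procedure referenced above extrapolates linearly from the LED10, the lower confidence bound on the dose associated with 10% excess risk; a minimal sketch (with a hypothetical LED10) is:

    ```python
    # EPA default: linear extrapolation from the LED10 to zero dose.
    # The LED10 value is hypothetical.
    def extra_risk(dose_ppm, led10_ppm):
        return 0.1 * dose_ppm / led10_ppm      # straight line through the LED10

    def dose_for_risk(target_risk, led10_ppm):
        return target_risk * led10_ppm / 0.1   # inverse, e.g. the ED(10^-6)

    led10 = 12.0                               # ppm DBP, illustrative only
    print(dose_for_risk(1e-6, led10))          # one-in-a-million excess risk
    ```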

  12. Electron density extrapolation above F2 peak by the linear Vary-Chap model supporting new Global Navigation Satellite Systems-LEO occultation missions

    NASA Astrophysics Data System (ADS)

    Hernández-Pajares, Manuel; Garcia-Fernández, Miquel; Rius, Antonio; Notarpietro, Riccardo; von Engeln, Axel; Olivares-Pulido, Germán.; Aragón-Àngel, Àngela; García-Rigo, Alberto

    2017-08-01

    The new radio-occultation (RO) instrument on board the future EUMETSAT Polar System-Second Generation (EPS-SG) satellites, flying at a height of 820 km, is primarily focused on neutral atmospheric profiling. It will also provide an opportunity for RO ionospheric sounding, but only below impact heights of 500 km, in order to guarantee full data gathering for the neutral part. This leaves a gap of 320 km, which impedes the application of direct inversion techniques to retrieve the electron density profile. To overcome this challenge, we have looked for new, accurate and simple ways of extrapolating the electron density (also applicable to other low-Earth-orbiting, LEO, missions such as CHAMP): a new Vary-Chap Extrapolation Technique (VCET). VCET is based on a scale height that depends linearly on altitude above hmF2. This allows the electron density profile to be extrapolated from impact heights above the peak height (the case for EPS-SG) up to the satellite orbital height. VCET has been assessed with more than 3700 complete electron density profiles obtained in four representative scenarios of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) in the United States and the Formosa Satellite Mission 3 (FORMOSAT-3) in Taiwan, in solar maximum and minimum conditions and in geomagnetically disturbed conditions, by applying an updated Improved Abel Transform Inversion technique to dual-frequency GPS measurements. It is shown that VCET performs much better than classical Chapman models: 60% of occultations show relative extrapolation errors below 20%, in contrast with conventional Chapman model extrapolation approaches, for which 10% or fewer of the profiles have relative errors below 20%.
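
    A minimal sketch of the linear Vary-Chap idea: a Chapman-type profile whose scale height grows linearly with altitude above the F2 peak, usable to extend the topside profile to the orbital height. All parameter values below are illustrative.

    ```python
    # Chapman-type profile with a scale height that grows linearly above
    # the F2 peak. All parameter values are illustrative.
    import numpy as np

    def vary_chap(h_km, nmf2, hmf2, h0, slope):
        scale = h0 + slope * np.maximum(h_km - hmf2, 0.0)   # linear Vary-Chap
        z = (h_km - hmf2) / scale
        return nmf2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

    h = np.arange(350.0, 821.0, 10.0)                       # peak to 820 km orbit
    ne = vary_chap(h, nmf2=1.0e12, hmf2=350.0, h0=45.0, slope=0.15)  # m^-3
    ```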

  13. Radiological Dispersal Devices: Select Issues in Consequence Management

    DTIC Science & Technology

    2004-03-10

    ... goals, following which medical treatment of the radiation effects can be provided. Post-exposure medical therapy is designed to treat the consequences ... the approach that radiation-related health effects can be extrapolated, i.e. the damage caused by radiation exposure ... In an effort to determine the validity of these models, the federal government funds research into the health effects of radiation exposure through the ...

  14. SNSEDextend: SuperNova Spectral Energy Distributions extrapolation toolkit

    NASA Astrophysics Data System (ADS)

    Pierel, Justin D. R.; Rodney, Steven A.; Avelino, Arturo; Bianco, Federica; Foley, Ryan J.; Friedman, Andrew; Hicken, Malcolm; Hounsell, Rebekah; Jha, Saurabh W.; Kessler, Richard; Kirshner, Robert; Mandel, Kaisey; Narayan, Gautham; Filippenko, Alexei V.; Scolnic, Daniel; Strolger, Louis-Gregory

    2018-05-01

    SNSEDextend extrapolates core-collapse and Type Ia Spectral Energy Distributions (SEDs) into the UV and IR for use in simulations and photometric classifications. The user provides a library of existing SED templates (such as those in the authors' SN SED Repository) along with new photometric constraints in the UV and/or NIR wavelength ranges. The software then extends the existing template SEDs so their colors match the input data at all phases. SNSEDextend can also extend the SALT2 spectral time-series model for Type Ia SN for a "first-order" extrapolation of the SALT2 model components, suitable for use in survey simulations and photometric classification tools; as the code does not do a rigorous re-training of the SALT2 model, the results should not be relied on for precision applications such as light curve fitting for cosmology.

  15. A stabilized MFE reduced-order extrapolation model based on POD for the 2D unsteady conduction-convection problem.

    PubMed

    Xia, Hong; Luo, Zhendong

    2017-01-01

    In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model with few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique. We analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and reliability of the SMFEROE model by means of numerical simulations.
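
    For orientation, the POD step at the heart of such reduced-order models can be sketched as a truncated SVD of a snapshot matrix; the snapshot data below are random placeholders for full-order MFE solutions.

    ```python
    # POD basis from a snapshot matrix via truncated SVD. Snapshots are
    # random placeholders for full-order solutions.
    import numpy as np

    rng = np.random.default_rng(1)
    snapshots = rng.standard_normal((2000, 50))   # 50 snapshots, 2000 dof each

    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% of the energy
    basis = u[:, :k]                              # POD basis of the reduced model
    reduced_coords = basis.T @ snapshots
    ```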

  16. Approximate simulation model for analysis and optimization in engineering system design

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    Computational support of the engineering design process routinely requires mathematical models of behavior to inform designers of the system response to external stimuli. However, designers also need to know the effect of the changes in design variable values on the system behavior. For large engineering systems, the conventional way of evaluating these effects by repetitive simulation of behavior for perturbed variables is impractical because of excessive cost and inadequate accuracy. An alternative is described based on recently developed system sensitivity analysis that is combined with extrapolation to form a model of design. This design model is complementary to the model of behavior and capable of direct simulation of the effects of design variable changes.
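
    A minimal sketch of this idea: a one-time sensitivity analysis (here a finite-difference Jacobian of a stand-in behavior model) combined with linear extrapolation, so the effect of a design change is estimated without re-running the simulation.

    ```python
    # One-time sensitivity analysis plus linear extrapolation: a "model of
    # design". The behavior function is a cheap stand-in for a simulation.
    import numpy as np

    def behavior(x):
        return np.array([x[0]**2 + x[1], np.sin(x[0]) * x[1]])

    def sensitivity(x, eps=1e-6):
        base = behavior(x)
        cols = [(behavior(x + eps * e) - base) / eps for e in np.eye(x.size)]
        return base, np.column_stack(cols)        # response and Jacobian

    x0 = np.array([1.0, 2.0])
    y0, jac = sensitivity(x0)
    dx = np.array([0.05, -0.10])                  # proposed design change
    y_pred = y0 + jac @ dx                        # no re-simulation needed
    ```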

  17. The potential influence of rain on airfoil performance

    NASA Technical Reports Server (NTRS)

    Dunham, R. Earl, Jr.

    1987-01-01

    The potential influence of heavy rain on airfoil performance is discussed. Experimental methods for evaluating rain effects are reviewed. Important scaling considerations for extrapolating model data are presented. It is shown that considerable additional effort, both analytical and experimental, is necessary to understand the degree of hazard associated with flight operations in rain.

  18. MOVING FROM EXTERNAL EXPOSURE CONCENTRATION TO INTERNAL DOSE: DURATION EXTRAPOLATION BASED ON PHYSIOLOGICALLY-BASED PHARMACOKINETIC-MODEL DERIVED ESTIMATES OF INTERNAL DOSE

    EPA Science Inventory

    The potential human health risk(s) from exposure to chemicals under conditions for which adequate human or animal data are not available must frequently be assessed. Exposure scenario is particularly important for the acute neurotoxic effects of volatile organic compounds (VOCs)...

  19. Laparoscopic fundoplication compared with medical management for gastro-oesophageal reflux disease: cost effectiveness study.

    PubMed

    Epstein, David; Bojke, Laura; Sculpher, Mark J

    2009-07-14

    To describe the long-term costs, health benefits, and cost effectiveness of laparoscopic surgery compared with those of continued medical management for patients with gastro-oesophageal reflux disease (GORD). We estimated resource use and costs for the first year on the basis of data from the REFLUX trial. A Markov model was used to extrapolate cost and health benefit over a lifetime using data collected in the REFLUX trial and other sources. The model compared laparoscopic surgery and continued proton pump inhibitors in male patients aged 45 and stable on GORD medication. Laparoscopic surgery versus continued medical management. We estimated quality-adjusted life years and GORD-related costs to the health service over a lifetime. Sensitivity analyses considered other plausible scenarios, in particular the size and duration of the treatment effect and the GORD symptoms of patients in whom surgery is unsuccessful. The base-case model indicated that surgery is likely to be considered cost effective on average, with an incremental cost-effectiveness ratio of £2648 (€3110; US$4385) per quality-adjusted life year, and that the probability that surgery is cost effective is 0.94 at a threshold incremental cost-effectiveness ratio of £20,000. The results were sensitive to some assumptions within the extrapolation modelling. Surgery seems to be more cost effective on average than medical management in many of the scenarios examined in this study. Surgery might not be cost effective if the treatment effect does not persist over the long term, if patients who return to medical management have poor health-related quality of life, or if proton pump inhibitors were cheaper. Further follow-up of patients from the REFLUX trial may be valuable. ISRCTN15517081.
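
    A minimal sketch of the Markov extrapolation pattern used in such analyses (the states, transition probabilities, costs, and utilities below are invented, not the REFLUX model inputs): trace a cohort through well / recurrent-GORD / dead states and form the ICER from discounted totals.

    ```python
    # Three-state Markov cohort trace with discounting; ICER = incremental
    # cost per incremental QALY. All inputs are invented placeholders.
    import numpy as np

    def cohort_trace(trans, utilities, costs, cycles=40, disc=0.035):
        state = np.array([1.0, 0.0, 0.0])          # start: all patients well
        qalys = cost = 0.0
        for t in range(cycles):
            df = (1.0 + disc) ** -t                # annual discounting
            qalys += df * (state @ utilities)
            cost += df * (state @ costs)
            state = state @ trans                  # advance one yearly cycle
        return qalys, cost

    surg_q, surg_c = cohort_trace(
        np.array([[0.97, 0.02, 0.01], [0.10, 0.89, 0.01], [0.0, 0.0, 1.0]]),
        np.array([0.85, 0.72, 0.0]), np.array([50.0, 400.0, 0.0]))
    med_q, med_c = cohort_trace(
        np.array([[0.93, 0.06, 0.01], [0.05, 0.94, 0.01], [0.0, 0.0, 1.0]]),
        np.array([0.80, 0.72, 0.0]), np.array([250.0, 400.0, 0.0]))

    surg_c += 2500.0                               # one-off cost of the operation
    icer = (surg_c - med_c) / (surg_q - med_q)     # incremental cost per QALY
    print(f"ICER: {icer:.0f} per QALY")
    ```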

  20. THE IMPACT OF SCALING FACTOR VARIABILITY ON RISK-RELEVANT TOXICOKINETIC OUTCOMES IN CHILDREN: A CASE STUDY USING BROMODICHLOROMETHANE (BDCM)

    EPA Science Inventory

    Biotransformation rates (Vmax) extrapolated from in vitro data are used increasingly in human physiologically based pharmacokinetic (PBPK) models. Extrapolation of Vmax from in vitro data requires use of scaling factors, including mg of microsomal protein/g liver (MPPGL), nmol of...

  1. Improving toxicity extrapolation using molecular sequence similarity: A case study of pyrethroids and the sodium ion channel

    EPA Science Inventory

    A significant challenge in ecotoxicology has been determining chemical hazards to species with limited or no toxicity data. Currently, extrapolation tools like U.S. EPA’s Web-based Interspecies Correlation Estimation (Web-ICE; www3.epa.gov/webice) models categorize toxicity...

  2. Methodology Of PACS Effectiveness Evaluation As Part Of A Technology Assessment. The Dutch PACS Project Extrapolated.

    NASA Astrophysics Data System (ADS)

    Andriessen, J. H. T. H.; van der Horst-Bruinsma, I. E.; ter Haar Romeny, B. M.

    1989-05-01

    The present phase of the clinical evaluation within the Dutch PACS project mainly focuses on the development and evaluation of a PACSystem for a few departments in the Utrecht University Hospital (UUH). A report on the first clinical experiences and a detailed cost/savings analysis of the PACSystem in the UUH are presented elsewhere. However, an assessment of the wider financial and organizational implications for hospitals and for the health sector is also needed. To this end, a model for (financial) cost assessment of PACSystems is being developed by BAZIS. Learning from the actual pilot implementation in UUH, we realized that general Technology Assessment (TA) also calls for an extrapolation of the medical and organizational effects. After a short excursion into the various approaches towards TA, this paper discusses the (inter)organizational dimensions relevant to the development of the necessary extrapolation models.

  3. Aquatic effects assessment: needs and tools.

    PubMed

    Marchini, Silvia

    2002-01-01

    In the assessment of the adverse effects pollutants can produce on exposed ecosystems, different approaches can be followed depending on the quality and quantity of the information available; their advantages and limits are discussed here with reference to the aquatic compartment. When experimental data are lacking, a predictive approach can be pursued by making use of validated quantitative structure-activity relationships (QSARs), which provide reliable ecotoxicity estimates only if appropriate models are applied. The experimental approach is central to any environmental hazard assessment procedure, although many uncertainties underlying the extrapolation from a limited set of single-species laboratory data to the complexity of the ecosystem (e.g., the limitations of common summary statistics, the variability of species sensitivity, the need to consider alterations at higher levels of integration) make the task difficult. When adequate toxicity information is available, the statistical extrapolation approach can be used to predict environmentally compatible concentrations.

  4. What goes up must . . . Keep going up? Cultural differences in cognitive styles influence evaluations of dynamic performance.

    PubMed

    Ferris, D Lance; Reb, Jochen; Lian, Huiwen; Sim, Samantha; Ang, Dionysius

    2018-03-01

    Past research on dynamic workplace performance evaluation has taken as axiomatic that temporal performance trends produce naïve extrapolation effects on performance ratings. That is, we naïvely assume that an individual whose performance has trended upward over time will continue to improve, and rate that individual more positively than an individual whose performance has trended downward over time, even if, on average, the 2 individuals have performed at an equivalent level. However, we argue that such naïve extrapolation effects are more pronounced in Western countries than in Eastern countries, owing to Eastern countries having a more holistic cognitive style. To test our hypotheses, we examined the effect of performance trend on expectations of future performance and ratings of past performance across 2 studies: Study 1 compares the magnitude of naïve extrapolation effects among Singaporeans primed with either a more or less holistic cognitive style, and Study 2 examines holistic cognitive style as a mediating mechanism accounting for differences in the magnitude of naïve extrapolation effects between American and Chinese raters. Across both studies, we found support for our predictions that dynamic performance trends have less impact on the ratings of more holistic thinkers. Implications for the dynamic performance and naïve extrapolation literatures are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. Toxicokinetic Model Development for the Insensitive Munitions Component 3-Nitro-1,2,4-Triazol-5-One.

    PubMed

    Sweeney, Lisa M; Phillips, Elizabeth A; Goodwin, Michelle R; Bannon, Desmond I

    2015-01-01

    3-Nitro-1,2,4-triazol-5-one (NTO) is a component of insensitive munitions that are potential replacements for conventional explosives. Toxicokinetic data can aid in the interpretation of toxicity studies and interspecies extrapolation, but only limited data on the toxicokinetics and metabolism of NTO are available. To supplement these limited data, further in vivo studies of NTO in rats were conducted and blood concentrations were measured, tissue distribution of NTO was estimated using an in silico method, and physiologically based pharmacokinetic models of the disposition of NTO in rats and macaques were developed and extrapolated to humans. The model predictions can be used to extrapolate from designated points of departure identified from rat toxicology studies to provide a scientific basis for estimates of acceptable human exposure levels for NTO. © The Author(s) 2015.

  6. An Examination of the Quality of Wind Observations with Smartphones

    NASA Astrophysics Data System (ADS)

    Hintz, Kasper; Vedel, Henrik; Muñoz-Gomez, Juan; Woetmann, Niels

    2017-04-01

    Over recent years, the number of devices connected to the internet has increased significantly, making it possible for internal and external sensors to communicate via the internet and opening up many possibilities for additional data for use in the atmospheric sciences. Vaavud has manufactured small anemometer devices which can measure wind speed and wind direction when connected to a smartphone. This work examines the quality of such crowdsourced Handheld Wind Observations (HWO). In order to examine the quality of the HWO, multiple idealised measurement sessions were performed at different sites in different atmospheric conditions. In these sessions, a high-precision ultrasonic anemometer was installed to serve as a reference measurement. The HWO are extrapolated to 10 m in order to compare them to the reference observations. This allows us to examine the effect of stability correction in the surface layer and the quality of height-extrapolated HWO. The height extrapolation is done using the logarithmic wind profile law with and without stability correction. Furthermore, this study examines the optimal ways of using traditional observations and numerical models to validate HWO. To do so, a series of numerical reanalyses have been run for a period of 5 months to quantify the effect of including crowdsourced HWO in a traditional observation dataset.
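
    The height extrapolation described above follows the logarithmic wind profile; a minimal sketch (with an assumed roughness length, and psi terms left at zero for neutral stratification) is:

    ```python
    # Log-law scaling of a handheld observation to 10 m. Roughness length
    # z0 is assumed; psi terms are zero for neutral stratification.
    import math

    def extrapolate_wind(u_obs, z_obs=1.5, z_ref=10.0, z0=0.03,
                         psi_obs=0.0, psi_ref=0.0):
        return u_obs * (math.log(z_ref / z0) - psi_ref) / \
                       (math.log(z_obs / z0) - psi_obs)

    print(f"{extrapolate_wind(4.2):.1f} m/s at 10 m")  # from 4.2 m/s at 1.5 m
    ```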

  7. Extrapolation of vertical target motion through a brief visual occlusion.

    PubMed

    Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco

    2010-03-01

    It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.
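
    For the arrival-time computation implicit above, the remaining travel time of a target covering distance d with entry speed v0 under the three tested motion laws follows from constant-acceleration kinematics; the distance and speed below are illustrative.

    ```python
    # Remaining travel time over distance d for entry speed v0 under the
    # three motion laws (1g, 0g, -1g). Values are illustrative.
    import math

    G = 9.81  # m/s^2

    def time_to_arrival(d, v0, a):
        if a == 0.0:
            return d / v0                       # constant-speed target
        disc = v0**2 + 2.0 * a * d
        if disc < 0.0:
            return math.inf                     # decelerating target stops short
        return (-v0 + math.sqrt(disc)) / a

    for a, label in [(G, "1g"), (0.0, "0g"), (-G, "-1g")]:
        print(label, time_to_arrival(d=1.0, v0=5.0, a=a))
    ```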

  8. Kinetic Monte Carlo simulations of water ice porosity: extrapolations of deposition parameters from the laboratory to interstellar space.

    PubMed

    Clements, Aspen R; Berk, Brandon; Cooke, Ilsa R; Garrod, Robin T

    2018-02-21

    Dust grains in cold, dense interstellar clouds build up appreciable ice mantles through the accretion and subsequent surface chemistry of atoms and molecules from the gas. These mantles, of thicknesses on the order of 100 monolayers, are primarily composed of H2O, CO, and CO2. Laboratory experiments using interstellar ice analogues have shown that porosity could be present and can facilitate diffusion of molecules along the inner pore surfaces. However, the movement of molecules within and upon the ice is poorly described by current chemical kinetics models, making it difficult either to reproduce the formation of experimental porous ice structures or to extrapolate generalized laboratory results to interstellar conditions. Here we use the off-lattice Monte Carlo kinetics model MIMICK to investigate the effects that various deposition parameters have on laboratory ice structures. The model treats molecules as isotropic spheres of a uniform size, using a Lennard-Jones potential. We reproduce experimental trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature; ice density decreases when the incident angle or deposition rate is increased, while increasing temperature results in a more-compact water ice. The models indicate that the density behaviour at higher temperatures (≥80 K) is dependent on molecular rearrangement resulting from thermal diffusion. To reproduce trends at lower temperatures, it is necessary to take account of non-thermal diffusion by newly-adsorbed molecules, which bring kinetic energy both from the gas phase and from their acceleration into a surface binding site. Extrapolation of the model to conditions appropriate to protoplanetary disks, in which direct accretion of water from the gas phase may be the dominant ice formation mechanism, indicates that these ices may be less porous than laboratory ices.
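
    A minimal sketch of the kinetic Monte Carlo event loop underlying such models (generic Arrhenius rates, not MIMICK's actual off-lattice move set): an event is drawn with probability proportional to its rate, and time advances by an exponential increment.

    ```python
    # Generic KMC step: Arrhenius rates, rate-proportional event choice,
    # exponential time increment. Barriers and temperature are illustrative.
    import math, random

    NU0 = 1e12            # attempt frequency (s^-1), a typical assumed value
    KB = 8.617e-5         # Boltzmann constant (eV/K)

    def kmc_step(barriers_ev, temp_k, t_now):
        rates = [NU0 * math.exp(-b / (KB * temp_k)) for b in barriers_ev]
        total = sum(rates)
        pick, acc = random.random() * total, 0.0
        for event, r in enumerate(rates):
            acc += r
            if pick <= acc:
                break
        dt = -math.log(random.random()) / total   # exponential waiting time
        return event, t_now + dt

    event, t = kmc_step([0.20, 0.25, 0.40], temp_k=80.0, t_now=0.0)
    ```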

  9. Mode-of-Action Uncertainty for Dual-Mode Carcinogens:Lower Bounds for Naphthalene-Induced Nasal Tumors in Rats Implied byPBPK and 2-Stage Stochastic Cancer Risk Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogen, K T

    2007-01-30

    As reflected in the 2005 USEPA Guidelines for Cancer Risk Assessment, some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing-induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual-MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate "linear" (genotoxic) vs. "nonlinear" (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding-factor approach, similar to that used in reference dose procedures for classic toxicity endpoints, can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a "nonlinear" toxicokinetic model cannot be fully validated, the implications of DMOA uncertainty for low-dose risk may be bounded with reasonable confidence when the target tumor types happen to be extremely rare. This concept was illustrated for the rodent carcinogen naphthalene. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage stochastic carcinogenesis modeling results all clearly indicate that naphthalene is a DMOA carcinogen. Plausibility bounds on rat-tumor-type-specific DMOA-related uncertainty were obtained using a 2-stage model adapted to reflect the empirical link between the genotoxic and cytotoxic effects of the most potent identified genotoxic naphthalene metabolites, 1,2- and 1,4-naphthoquinone. The resulting bounds each provided the basis for a corresponding "uncertainty" factor <1 appropriate to apply to estimates of naphthalene risk obtained by linear extrapolation under a default genotoxic MOA assumption. This procedure is proposed as a scientifically credible method to address MOA uncertainty for DMOA carcinogens.

  10. Designing a Pediatric Study for an Antimalarial Drug by Using Information from Adults

    PubMed Central

    Jullien, Vincent; Samson, Adeline; Guedj, Jérémie; Kiechel, Jean-René; Zohar, Sarah; Comets, Emmanuelle

    2015-01-01

    The objectives of this study were to design a pharmacokinetic (PK) study using information from adults and to evaluate the robustness of the recommended design through a case study of mefloquine. PK data for adults and children were available from two different randomized studies of the treatment of malaria with the same artesunate-mefloquine combination regimen. A recommended design for pediatric studies of mefloquine was optimized on the basis of an extrapolated model built from adult data through the following approach. (i) An adult PK model was built, and parameters were estimated using the stochastic approximation expectation-maximization algorithm. (ii) Pediatric PK parameters were then obtained by adding allometry and maturation to the adult model. (iii) A D-optimal design for children was obtained with PFIM, assuming the extrapolated model. Finally, the robustness of the recommended design was evaluated in terms of the relative bias and relative standard errors (RSE) of the parameters in a simulation study with four different models and was compared to the empirical design used for the pediatric study. Combining PK modeling, extrapolation, and design optimization led to a design for children with five sampling times. PK parameters were well estimated by this design, with low RSE. Although the extrapolated model did not predict the observed mefloquine concentrations in children very accurately, it allowed precise and unbiased estimates across various model assumptions, contrary to the empirical design. Using information from adult studies combined with allometry and maturation can help provide robust designs for pediatric studies. PMID:26711749
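
    Step (ii) above, adding allometry and maturation, is commonly implemented as in the sketch below; the parameter values (TM50, Hill coefficient, adult clearance) are illustrative, not the mefloquine estimates.

    ```python
    # Allometric weight scaling plus a sigmoidal maturation function for
    # clearance. Parameter values are illustrative.
    def child_clearance(cl_adult_l_h, weight_kg, pma_weeks,
                        tm50=55.0, hill=3.0, adult_weight_kg=70.0):
        allometry = (weight_kg / adult_weight_kg) ** 0.75
        maturation = pma_weeks**hill / (tm50**hill + pma_weeks**hill)
        return cl_adult_l_h * allometry * maturation

    print(child_clearance(cl_adult_l_h=1.8, weight_kg=12.0, pma_weeks=150.0))
    ```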

  11. A novel cost-effectiveness model of prescription eicosapentaenoic acid extrapolated to secondary prevention of cardiovascular diseases in the United States.

    PubMed

    Philip, Sephy; Chowdhury, Sumita; Nelson, John R; Benjamin Everett, P; Hulme-Lowe, Carolyn K; Schmier, Jordana K

    2016-10-01

    Given the substantial economic and health burden of cardiovascular disease and the residual cardiovascular risk that remains despite statin therapy, adjunctive therapies are needed. The purpose of this model was to estimate the cost-effectiveness of high-purity prescription eicosapentaenoic acid (EPA) omega-3 fatty acid intervention in the secondary prevention of cardiovascular diseases in statin-treated patient populations, extrapolated to the US. The deterministic model utilized inputs for cardiovascular events, costs, and utilities from published sources. Expert opinion was used when assumptions were required. The model takes the perspective of a US commercial, third-party payer, with costs presented in 2014 US dollars. The model extends to 5 years and applies a 3% discount rate to costs and benefits. Sensitivity analyses were conducted to explore the influence of various input parameters on costs and outcomes. Using base-case parameters, EPA-plus-statin therapy compared with statin monotherapy resulted in cost savings (total 5-year costs of $29,393 vs $30,587 per person, respectively) and improved utilities (average 3.627 vs 3.575, respectively). The results were not sensitive to multiple variations in model inputs and consistently identified EPA-plus-statin therapy as the economically dominant strategy, with both lower costs and better patient utilities over the modeled 5-year period. The model is only an approximation of reality and does not capture all the complexities of a real-world scenario without further inputs from ongoing trials. The model may underestimate the cost-effectiveness of EPA-plus-statin therapy because it allows only a single event per patient. This novel model suggests that combining EPA with statin therapy for secondary prevention of cardiovascular disease in the US may be a cost-saving and more compelling intervention than statin monotherapy.

  12. A study of alternative schemes for extrapolation of secular variation at observatories

    USGS Publications Warehouse

    Alldredge, L.R.

    1976-01-01

    The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable to worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work. © 1976.
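
    One such scheme can be sketched as augmenting the linear trend with a quasiperiodic term (an assumed 11-year cycle below), fitted by least squares and then extrapolated; the series values are synthetic.

    ```python
    # Linear trend plus an assumed 11-year periodic term, fitted by least
    # squares and extrapolated beyond the data. Series values are synthetic.
    import numpy as np

    t = np.arange(1950.0, 1976.0)                 # annual means
    y = 10.0 + 2.0 * (t - 1950.0) + 5.0 * np.sin(2 * np.pi * (t - 1950.0) / 11.0)

    omega = 2.0 * np.pi / 11.0
    design = np.column_stack([np.ones_like(t), t,
                              np.sin(omega * t), np.cos(omega * t)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)

    t_new = 1980.0
    basis = np.array([1.0, t_new, np.sin(omega * t_new), np.cos(omega * t_new)])
    print(f"extrapolated value at {t_new:.0f}: {coef @ basis:.1f}")
    ```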

  13. Biological Networks for Predicting Chemical Hepatocarcinogenicity Using Gene Expression Data from Treated Mice and Relevance across Human and Rat Species

    PubMed Central

    Thomas, Reuben; Thomas, Russell S.; Auerbach, Scott S.; Portier, Christopher J.

    2013-01-01

    Background: Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. Objectives: To develop a molecular pathway-based prediction model of long-term hepatocarcinogenicity using 90-day gene expression data and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Methods: Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 were positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Results: Using 5-fold cross validation, the developed prediction model had reasonable predictive performance, with the area under the receiver-operator curve (AUC) equal to 0.66. The developed prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both extrapolated species (AUC of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Conclusions: Pathway-based prediction models estimated from subchronic data hold promise for predicting long-term carcinogenicity and for their ability to extrapolate results across multiple species. PMID:23737943

  14. Biological networks for predicting chemical hepatocarcinogenicity using gene expression data from treated mice and relevance across human and rat species.

    PubMed

    Thomas, Reuben; Thomas, Russell S; Auerbach, Scott S; Portier, Christopher J

    2013-01-01

    Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. Our objectives were to develop a molecular pathway-based prediction model of long-term hepatocarcinogenicity using 90-day gene expression data, and to evaluate the performance of this model with respect to intra-species, dose-dependent and cross-species predictions. Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Using 5-fold cross-validation, the developed prediction model had reasonable predictive performance, with an area under the receiver-operator curve (AUC) of 0.66. The prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both species (AUC of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Pathway-based prediction models estimated from subchronic data hold promise for predicting long-term carcinogenicity and for extrapolating results across multiple species.
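    A minimal sketch of the evaluation design (a random-forest classifier scored by 5-fold cross-validated AUC) follows; the pathway-score features are random placeholders rather than real expression data, and scikit-learn is assumed.

        # Sketch: pathway-score classifier with 5-fold cross-validated AUC, in the
        # spirit of the study design (26 chemicals, 10 tumor-positive). Features
        # are random placeholders, so the resulting AUC is uninformative here.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score, StratifiedKFold

        rng = np.random.default_rng(1)
        n_chemicals, n_pathways = 26, 50
        X = rng.normal(size=(n_chemicals, n_pathways))   # per-pathway scores
        y = np.array([1] * 10 + [0] * 16)                # 1 = liver tumor positive

        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        auc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                              X, y, cv=cv, scoring="roc_auc")
        print(f"5-fold CV AUC: {auc.mean():.2f} +/- {auc.std():.2f}")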

  15. EVALUATING TOOLS AND MODELS USED FOR QUANTITATIVE EXTRAPOLATION OF IN VITRO TO IN VIVO DATA FOR NEUROTOXICANTS*

    EPA Science Inventory

    There are a number of risk management decisions, which range from prioritization for testing to quantitative risk assessments. The utility of in vitro studies in these decisions depends on how well the results of such data can be qualitatively and quantitatively extrapolated to i...

  16. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    There are two problems with the LS (Least Squares) + AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor in the extrapolation interval; and the LS fitting residual sequence is non-linear, so it is unsuitable to model the residuals to be forecast with an AR model built on the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, restrictions are added to the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observed values. Secondly, as the modeling object of the AR residual forecast, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparison with the RLS (Robustified Least Squares) + AR, RLS + ARIMA (AutoRegressive Integrated Moving Average), and LS + ANN (Artificial Neural Network) forecast models confirms the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for polar motion forecasts at 1-10 days, show that the forecast accuracy of the proposed model is competitive with the best reported results.
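    The generic LS+AR combination (without the paper's endpoint restrictions and inward-fit refinements) can be sketched as follows on a synthetic series; statsmodels is assumed, and the periods and noise levels are illustrative.

        # Sketch of LS+AR: fit a harmonic least-squares model, forecast its
        # residuals with an AR model, and add the two. Series is synthetic;
        # real polar-motion work would use IERS EOP data.
        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        rng = np.random.default_rng(2)
        n, horizon = 400, 10
        t = np.arange(n + horizon, dtype=float)
        # Annual and Chandler-like terms plus AR(1) noise.
        signal = 0.10 * np.sin(2 * np.pi * t / 365.25) + 0.17 * np.sin(2 * np.pi * t / 433.0)
        noise = np.zeros(n + horizon)
        for i in range(1, n + horizon):
            noise[i] = 0.9 * noise[i - 1] + rng.normal(0, 0.005)
        y = signal + noise

        def design(t):
            # LS model: bias, trend, annual and Chandler-like harmonics.
            return np.column_stack([np.ones_like(t), t,
                                    np.sin(2 * np.pi * t / 365.25), np.cos(2 * np.pi * t / 365.25),
                                    np.sin(2 * np.pi * t / 433.0), np.cos(2 * np.pi * t / 433.0)])

        coef, *_ = np.linalg.lstsq(design(t[:n]), y[:n], rcond=None)
        resid = y[:n] - design(t[:n]) @ coef

        ar_fit = AutoReg(resid, lags=5).fit()          # AR model on LS residuals
        forecast = design(t[n:]) @ coef + ar_fit.forecast(steps=horizon)
        print(np.abs(forecast - y[n:]).mean())         # mean absolute forecast error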

  17. Simulating dryland water availability and spring wheat production under various management practices in the Northern Great Plains

    USDA-ARS?s Scientific Manuscript database

    Agricultural system models are useful tools to synthesize field experimental data and to extrapolate the results to longer periods of weather and other cropping systems. The objectives of this study were: 1) to quantify the effects of planting date, seeding rate, and tillage on spring wheat producti...

  18. Spatial extrapolation of lysimeter results using thermal infrared imaging

    NASA Astrophysics Data System (ADS)

    Voortman, B. R.; Bosveld, F. C.; Bartholomeus, R. P.; Witte, J. P. M.

    2016-12-01

    Measuring evaporation (E) with lysimeters is costly and prone to numerous errors. By comparing the energy balance and the remotely sensed surface temperature of lysimeters with those of the undisturbed surroundings, we were able to assess the representativeness of lysimeter measurements and to quantify differences in evaporation caused by spatial variations in soil moisture content. We used an algorithm (the so-called 3T model) to spatially extrapolate the measured E of a reference lysimeter based on differences in surface temperature, net radiation and soil heat flux. We tested the performance of the 3T model on measurements with multiple lysimeters (47.5 cm inner diameter) and micro-lysimeters (19.2 cm inner diameter) installed in bare sand, moss and natural dry grass. We developed different scaling procedures using in situ measurements and remotely sensed surface temperatures to derive spatially distributed estimates of net radiation (Rn) and soil heat flux (G), and explored the physical soundness of the 3T model. Scaling of Rn and G considerably improved the performance of the 3T model for the bare sand and moss experiments (Nash-Sutcliffe efficiency (NSE) increasing from 0.45 to 0.89 and from 0.81 to 0.94, respectively). For the grass surface, the scaling procedures resulted in a poorer performance of the 3T model (NSE decreasing from 0.74 to 0.70), which was attributed to effects of shading and the difficulty of correcting for differences in emissivity between dead and living biomass. The 3T model is physically unsound if the field-scale average air temperature, measured at an arbitrarily chosen reference height, is used as input to the model. The proposed measurement system is relatively cheap, since it uses a zero-tension (freely draining) lysimeter whose results are extrapolated by the 3T model to the unaffected surroundings. The system is promising for bridging the gap between ground observations and satellite-based estimates of E.
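    A rough sketch of the underlying idea, scaling a reference lysimeter's evaporation to neighbouring pixels by their available energy and surface temperature, follows; note this is a simplified ratio form assumed for illustration, not the exact published 3T equations.

        # Rough sketch of temperature-based spatial extrapolation of evaporation
        # from a reference lysimeter (NOT the exact published 3T formulation; the
        # ratio form below is a simplifying assumption for illustration).
        import numpy as np

        def extrapolate_evaporation(E_ref, Rn, G, Ts, Rn_ref, G_ref, Ts_ref, Ta):
            """Scale reference evaporation E_ref to other pixels.

            Pixels with more available energy (Rn - G) and a cooler surface
            relative to air temperature Ta get higher evaporation.
            """
            energy_ratio = (Rn - G) / (Rn_ref - G_ref)
            temp_ratio = (Ts_ref - Ta) / (Ts - Ta)
            return E_ref * energy_ratio * temp_ratio

        # Invented example values (W m-2 for fluxes, deg C for temperatures).
        Rn = np.array([480.0, 450.0, 500.0])
        G = np.array([60.0, 55.0, 70.0])
        Ts = np.array([28.0, 31.0, 26.0])
        print(extrapolate_evaporation(E_ref=0.35, Rn=Rn, G=G, Ts=Ts,
                                      Rn_ref=470.0, G_ref=58.0, Ts_ref=29.0, Ta=22.0))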

  19. Projecting species’ vulnerability to climate change: Which uncertainty sources matter most and extrapolate best?

    USGS Publications Warehouse

    Steen, Valerie; Sofaer, Helen R.; Skagen, Susan K.; Ray, Andrea J.; Noon, Barry R.

    2017-01-01

    Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs to represent a unique combination of: future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was that of future climate change. Large uncertainty was also attributed to climate covariate class with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given uncertainty in projected vulnerability and resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water availability may be a more proximal driver than precipitation. However, because cross-validation results were correlated with extrapolation results, the use of cross-validation performance metrics to guide modeling choices where knowledge is limited was supported.

  20. Projecting species' vulnerability to climate change: Which uncertainty sources matter most and extrapolate best?

    PubMed

    Steen, Valerie; Sofaer, Helen R; Skagen, Susan K; Ray, Andrea J; Noon, Barry R

    2017-11-01

    Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs to represent a unique combination of: future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was that of future climate change. Large uncertainty was also attributed to climate covariate class with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given uncertainty in projected vulnerability and resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water availability may be a more proximal driver than precipitation. However, because cross-validation results were correlated with extrapolation results, the use of cross-validation performance metrics to guide modeling choices where knowledge is limited was supported.
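    A crude sketch of the variance-attribution step (how much of the spread in projected range change each modeling choice explains) follows, using a synthetic factorial ensemble with invented effect sizes.

        # Sketch: attribute variance in projected range change to modeling
        # choices, in the spirit of the SDM ensemble above. The factorial
        # ensemble and effect sizes are synthetic.
        from itertools import product
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        factors = {
            "climate":    ["GCM_A", "GCM_B", "GCM_C"],
            "covariates": ["bioclim", "hydro"],
            "threshold":  ["maxSSS", "fixed10"],
        }
        rows = []
        for clim, cov, thr in product(*factors.values()):
            base = {"GCM_A": -40.0, "GCM_B": -20.0, "GCM_C": -5.0}[clim]
            shift = 15.0 if cov == "hydro" else 0.0   # hydro projects less loss
            rows.append({"climate": clim, "covariates": cov, "threshold": thr,
                         "range_change": base + shift + rng.normal(0, 3)})
        df = pd.DataFrame(rows)

        # Crude attribution: variance of the factor-level means, per factor.
        for f in factors:
            between = df.groupby(f)["range_change"].mean().var()
            print(f"{f:10s} between-level variance: {between:6.1f}")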

  1. Hand interception of occluded motion in humans: a test of model-based vs. on-line control

    PubMed Central

    Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    Two control schemes have been hypothesized for the manual interception of fast visual targets. In the model-free on-line control, extrapolation of target motion is based on continuous visual information, without resorting to physical models. In the model-based control, instead, a prior model of target motion predicts the future spatiotemporal trajectory. To distinguish between the two hypotheses in the case of projectile motion, we asked participants to hit a ball that rolled down an incline at 0.2 g and then fell in air at 1 g along a parabola. By varying starting position, ball velocity and trajectory differed between trials. Motion on the incline was always visible, whereas parabolic motion was either visible or occluded. We found that participants were equally successful at hitting the falling ball in both visible and occluded conditions. Moreover, in different trials the intersection points were distributed along the parabolic trajectories of the ball, indicating that subjects were able to extrapolate an extended segment of the target trajectory. Remarkably, this trend was observed even at the very first repetition of movements. These results are consistent with the hypothesis of model-based control, but not with on-line control. Indeed, ball path and speed during the occlusion could not be extrapolated solely from the kinematic information obtained during the preceding visible phase. The only way to extrapolate ball motion correctly during the occlusion was to assume that the ball would fall under gravity and air drag when hidden from view. Such an assumption had to be derived from prior experience. PMID:26133803

  2. Casting the Coronal Magnetic Field Reconstructions with Magnetic Field Constraints above the Photosphere in 3D Using MHD Bifrost Model

    NASA Astrophysics Data System (ADS)

    Fleishman, G. D.; Anfinogentov, S.; Loukitcheva, M.; Mysh'yakov, I.; Stupishin, A.

    2017-12-01

    Measuring and modeling the coronal magnetic field, especially above active regions (ARs), remains one of the central problems of solar physics, given that solar coronal magnetism is the key driver of all solar activity. Nowadays the coronal magnetic field is often modelled using methods of nonlinear force-free field reconstruction, whose accuracy has not yet been comprehensively assessed. Given that coronal magnetic probing is routinely unavailable, only morphological tests have been applied to evaluate the performance of the reconstruction methods, along with a few direct tests using an available semi-analytical force-free field solution. Here we report a detailed casting of various tools used for nonlinear force-free field reconstruction, such as disambiguation methods, photospheric field preprocessing methods, and volume reconstruction methods, in a 3D domain using a 3D snapshot of the publicly available full-fledged radiative MHD model. We take advantage of the fact that from the realistic MHD model we know the magnetic field vector distribution in the entire 3D domain, which enables us to perform a "voxel-by-voxel" comparison of the restored magnetic field and the true magnetic field in the 3D model volume. Our tests show that the available disambiguation methods often fail in quiet-Sun areas, where the magnetic structure is dominated by small-scale magnetic elements, while they work really well at the AR photosphere and (even better) chromosphere. The preprocessing of the photospheric magnetic field, although it does produce a more force-free boundary condition, also results in an effective `elevation' of the magnetic field components. This effective `elevation' height turns out to be different for the longitudinal and transverse components of the magnetic field, which results in a systematic error in absolute heights in the reconstructed magnetic data cube. The extrapolations performed starting from the actual AR photospheric magnetogram (i.e., without preprocessing) are free from this systematic error, while their other metrics are either comparable or only marginally worse than those estimated for extrapolations from the preprocessed magnetograms. This finding favors the use of extrapolations from the original photospheric magnetogram without preprocessing.

  3. Cosmogony as an extrapolation of magnetospheric research

    NASA Technical Reports Server (NTRS)

    Alfven, H.

    1984-01-01

    A theory of the origin and evolution of the Solar System which considered electromagnetic forces and plasma effects is revised in light of information supplied by space research. In situ measurements in the magnetospheres and solar wind can be extrapolated outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of cloud properties essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation facilitates analysis of the cosmogonic processes by extrapolation of magnetospheric phenomena. Pioneer-Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it is possible to reconstruct events 4 to 5 billion years ago with an accuracy of a few percent.

  4. Measurement accuracies in band-limited extrapolation

    NASA Technical Reports Server (NTRS)

    Kritikos, H. N.

    1982-01-01

    The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture $L'$ that can be realized from a finite aperture $L$ by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval $L/b \le |x| \le L$, $b > 1$, the signal must be known within an error $\epsilon_N$ given by $\epsilon_N^2 \approx \tfrac{1}{4}(2kL')^3 \left(\frac{e}{8b}\,\frac{L}{L'}\right)^{2kL'}$, where $L$ is the physical aperture, $L'$ is the extrapolated aperture, and $k = 2\pi/\lambda$.
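    Reading the reconstructed bound numerically shows how quickly the required measurement accuracy tightens as the extrapolated aperture grows; the parameter values below are illustrative only.

        # Numerical reading of the reconstructed accuracy bound:
        # eps_N^2 ~ (1/4)(2kL')^3 * (e/(8b) * L/L')**(2kL').
        import numpy as np

        def error_bound(L, L_prime, b, wavelength):
            k = 2 * np.pi / wavelength
            x = 2 * k * L_prime
            eps_sq = 0.25 * x**3 * (np.e / (8 * b) * L / L_prime) ** x
            return np.sqrt(eps_sq)

        # How fast does the required accuracy tighten as L' grows relative to L?
        for ratio in (1.2, 1.5, 2.0):
            eps = error_bound(L=1.0, L_prime=ratio, b=2.0, wavelength=0.5)
            print(f"L'/L = {ratio:3.1f}: required accuracy ~ {eps:.2e}")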

  5. Present constraints on the H-dibaryon at the physical point from Lattice QCD

    DOE PAGES

    Beane, S. R.; Chang, E.; Detmold, W.; ...

    2011-11-10

    The current constraints from Lattice QCD on the existence of the H-dibaryon are discussed. With only two significant Lattice QCD calculations of the H-dibaryon binding energy at approximately the same lattice spacing, the form of the chiral and continuum extrapolations to the physical point are not determined. In this brief report, an extrapolation that is quadratic in the pion mass, motivated by low-energy effective field theory, is considered. An extrapolation that is linear in the pion mass is also considered, a form that has no basis in the effective field theory, but is found to describe the light-quark mass dependence observed in Lattice QCD calculations of the octet baryon masses. In both cases, the extrapolation to the physical pion mass allows for a bound H-dibaryon or a near-threshold scattering state.
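    With only two lattice points, each one-slope extrapolation form is fixed exactly; the sketch below contrasts forms linear and quadratic in the pion mass using invented placeholder values, not the published energies.

        # Sketch: extrapolating a lattice binding energy to the physical pion
        # mass with forms linear or quadratic in m_pi. The two "lattice points"
        # are invented placeholders.
        import numpy as np

        m_pi = np.array([389.0, 230.0])    # MeV, hypothetical lattice pion masses
        B_H  = np.array([16.6, 7.4])       # MeV, hypothetical binding energies
        m_phys = 139.6                     # MeV, physical pion mass

        # Two points pin down each one-parameter-slope form exactly.
        for power, label in ((1, "linear in m_pi"), (2, "quadratic in m_pi")):
            slope = (B_H[0] - B_H[1]) / (m_pi[0]**power - m_pi[1]**power)
            B_phys = B_H[1] + slope * (m_phys**power - m_pi[1]**power)
            print(f"{label:18s}: B_H(m_phys) = {B_phys:4.1f} MeV")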

  6. Extraterrestrial cold chemistry. A need for a specific database.

    NASA Astrophysics Data System (ADS)

    Pernot, P.; Carrasco, N.; Dobrijevic, M.; Hébrard, E.; Plessis, S.; Wakelam, V.

    2008-09-01

    The major resource databases for building chemical models for photochemistry in cold environments are mainly based on those designed for Earth atmospheric chemistry or combustion, in which reaction rates are reported for temperatures typically above 300 K [1,2]. Kinetic data measured at low temperatures are very sparse; for instance, in state-of-the-art photochemical models of Titan's atmosphere, less than 10% of the rates have been measured in the relevant temperature range (100-200 K) [3-5]. In consequence, photochemical models rely mostly on low-temperature extrapolations by Arrhenius-type laws. There is growing evidence that this is often inappropriate [6], and low-temperature extrapolations are hindered by very high uncertainty [3] (Fig. 1). The predictions of models based on those extrapolations are expected to be very inaccurate [4,7]. We argue that there is not much sense in increasing the complexity of the present models as long as this predictivity issue has not been resolved. (Fig. 1: Uncertainty of the low-temperature extrapolation of the N(2D) + C2H4 reaction rate, from measurements in the range 225-292 K [10], assuming an Arrhenius law; the sample of rate laws is generated by Monte Carlo uncertainty propagation after a Bayesian Data reAnalysis (BDA) of the experimental data.) A dialogue between modellers and experimentalists is necessary to improve this situation. Considering the heavy costs of low-temperature reaction kinetics experiments, the identification of key reactions has to be based on an optimal strategy to improve the predictivity of photochemical models. This can be achieved by global sensitivity analysis, as illustrated on Titan atmospheric chemistry [8]. The main difficulty of this scheme is that it requires a lot of inputs, mainly the evaluation of uncertainty for extrapolated reaction rates. Although a large part has already been achieved by Hébrard et al. [3], extension and validation require a group of experts. A new generation of collaborative kinetic databases is needed to implement this scheme efficiently. The KIDA project [9], initiated by V. Wakelam for astrochemistry, has been joined by planetologists with similar prospects. EuroPlaNet will contribute to this effort through the organization of committees of experts on specific processes in atmospheric photochemistry.
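    The Monte Carlo propagation idea in the figure caption can be sketched as follows: fit an Arrhenius law to synthetic rate measurements in the 225-292 K range and propagate the fit uncertainty down to 150 K (all rate values and uncertainty levels are invented).

        # Sketch: Monte Carlo propagation of Arrhenius-fit uncertainty to low
        # temperature. Synthetic "measurements" only; real work would use
        # evaluated kinetic data.
        import numpy as np

        rng = np.random.default_rng(4)
        T = np.array([225.0, 250.0, 275.0, 292.0])        # K, measurement range
        A_true, Ea_R = 1e-10, 300.0                       # cm3 s-1, K (invented)
        k_obs = A_true * np.exp(-Ea_R / T) * rng.lognormal(0, 0.15, T.size)

        # Fit ln k = ln A - (Ea/R)/T, resampling the measurement noise each time.
        X = np.column_stack([np.ones_like(T), -1.0 / T])
        samples = []
        for _ in range(5000):
            y = np.log(k_obs) + rng.normal(0, 0.15, T.size)
            (lnA, ea_r), *_ = np.linalg.lstsq(X, y, rcond=None)
            samples.append(np.exp(lnA - ea_r / 150.0))    # extrapolate to 150 K
        lo, hi = np.percentile(samples, [2.5, 97.5])
        print(f"k(150 K) 95% interval spans a factor of {hi / lo:.1f}")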

  7. Prediction of UT1-UTC, LOD and AAM χ3 by combination of least-squares and multivariate stochastic methods

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Kosek, Wiesław

    2008-02-01

    This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ3 time-series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ3 time-series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ3 reveal the best accuracy for LS + AR.
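    A minimal sketch of the multivariate step, forecasting LOD-like residuals jointly with an AAM-like explanatory series via a vector autoregression, follows; statsmodels is assumed and both series are synthetic.

        # Sketch of the MAR idea: a vector autoregression lets an AAM-like
        # series inform the LOD-like forecast. Both series are synthetic
        # stand-ins for LS residuals of real LOD/AAM data.
        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(5)
        n, horizon = 600, 20
        aam = np.zeros(n); lod = np.zeros(n)
        for i in range(1, n):              # AAM leads, LOD follows (synthetic)
            aam[i] = 0.95 * aam[i - 1] + rng.normal(0, 0.1)
            lod[i] = 0.6 * lod[i - 1] + 0.3 * aam[i - 1] + rng.normal(0, 0.05)

        data = np.column_stack([lod, aam])
        fit = VAR(data[:-horizon]).fit(maxlags=5)
        fc = fit.forecast(data[:-horizon][-fit.k_ar:], steps=horizon)
        print(np.abs(fc[:, 0] - lod[-horizon:]).mean())  # MAE of LOD forecast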

  8. Blonanserin – A Novel Antianxiety and Antidepressant Drug? An Experimental Study

    PubMed Central

    Limaye, Ramchandra Prabhakar; Patil, Aditi Nitin

    2016-01-01

    Introduction: Many psychiatric disorders show signs and symptoms of anxiety and depression. A drug with both effects and fewer adverse effects is always desirable. Blonanserin is a novel drug with a postulated effect on anxiety and depression. Aim: The study aimed to evaluate the effect of Blonanserin on anxiety and depression in animal models. Materials and Methods: The antianxiety and antidepressant effects were evaluated using the elevated plus maze test and the forced swimming test. Animal ethics protocols were followed strictly. A total of 50 rats (10 rats per group) were used for each test. Diazepam and imipramine were used as control drugs in the elevated plus maze and forced swimming tests, respectively. Blonanserin was tested at three doses: 0.075, 0.2 and 0.8 mg. These doses were selected from previous references as well as by extrapolating human doses. Results: This study showed an antianxiety effect of Blonanserin comparable to diazepam, which was statistically significant. The optimal effect was observed with 0.075 mg, followed by 0.2 and 0.8 mg. It also showed a statistically significant antidepressant effect, with the optimal effect observed at the 0.2 mg dose. Conclusion: The results showed that at a dose range of 0.075 to 0.2 mg Blonanserin has the potential to exert an adjuvant antianxiety and antidepressant activity in animal models. In order to extrapolate this to patients, longer clinical studies with comparable doses should be planned. The present study underlines the potential of Blonanserin as a novel drug for such studies. PMID:27790460

  9. Blonanserin - A Novel Antianxiety and Antidepressant Drug? An Experimental Study.

    PubMed

    Limaye, Ramchandra Prabhakar; Patil, Aditi Nitin

    2016-09-01

    Many psychiatric disorders show signs and symptoms of anxiety and depression. A drug with both effects and fewer adverse effects is always desirable. Blonanserin is a novel drug with a postulated effect on anxiety and depression. The study aimed to evaluate the effect of Blonanserin on anxiety and depression in animal models. The antianxiety and antidepressant effects were evaluated using the elevated plus maze test and the forced swimming test. Animal ethics protocols were followed strictly. A total of 50 rats (10 rats per group) were used for each test. Diazepam and imipramine were used as control drugs in the elevated plus maze and forced swimming tests, respectively. Blonanserin was tested at three doses: 0.075, 0.2 and 0.8 mg. These doses were selected from previous references as well as by extrapolating human doses. This study showed an antianxiety effect of Blonanserin comparable to diazepam, which was statistically significant. The optimal effect was observed with 0.075 mg, followed by 0.2 and 0.8 mg. It also showed a statistically significant antidepressant effect, with the optimal effect observed at the 0.2 mg dose. The results showed that at a dose range of 0.075 to 0.2 mg Blonanserin has the potential to exert an adjuvant antianxiety and antidepressant activity in animal models. In order to extrapolate this to patients, longer clinical studies with comparable doses should be planned. The present study underlines the potential of Blonanserin as a novel drug for such studies.

  10. Extrapolating intensified forest inventory data to the surrounding landscape using landsat

    Treesearch

    Evan B. Brooks; John W. Coulston; Valerie A. Thomas; Randolph H. Wynne

    2015-01-01

    In 2011, a collection of spatially intensified plots was established on three of the Experimental Forests and Ranges (EFRs) sites with the intent of facilitating FIA program objectives for regional extrapolation. Characteristic coefficients from harmonic regression (HR) analysis of associated Landsat stacks are used as inputs into a conditional random forests model to...

  11. Endocrine disrupting chemicals in fish: developing exposure indicators and predictive models of effects based on mechanism of action.

    PubMed

    Ankley, Gerald T; Bencic, David C; Breen, Michael S; Collette, Timothy W; Conolly, Rory B; Denslow, Nancy D; Edwards, Stephen W; Ekman, Drew R; Garcia-Reyero, Natalia; Jensen, Kathleen M; Lazorchak, James M; Martinović, Dalma; Miller, David H; Perkins, Edward J; Orlando, Edward F; Villeneuve, Daniel L; Wang, Rong-Lin; Watanabe, Karen H

    2009-05-05

    Knowledge of possible toxic mechanisms (or modes) of action (MOA) of chemicals can provide valuable insights as to appropriate methods for assessing exposure and effects, thereby reducing uncertainties related to extrapolation across species, endpoints and chemical structure. However, MOA-based testing seldom has been used for assessing the ecological risk of chemicals. This is in part because past regulatory mandates have focused more on adverse effects of chemicals (reductions in survival, growth or reproduction) than the pathways through which these effects are elicited. A recent departure from this involves endocrine-disrupting chemicals (EDCs), where there is a need to understand both MOA and adverse outcomes. To achieve this understanding, advances in predictive approaches are required whereby mechanistic changes caused by chemicals at the molecular level can be translated into apical responses meaningful to ecological risk assessment. In this paper we provide an overview and illustrative results from a large, integrated project that assesses the effects of EDCs on two small fish models, the fathead minnow (Pimephales promelas) and zebrafish (Danio rerio). For this work a systems-based approach is being used to delineate toxicity pathways for 12 model EDCs with different known or hypothesized toxic MOA. The studies employ a combination of state-of-the-art genomic (transcriptomic, proteomic, metabolomic), bioinformatic and modeling approaches, in conjunction with whole animal testing, to develop response linkages across biological levels of organization. This understanding forms the basis for predictive approaches for species, endpoint and chemical extrapolation. Although our project is focused specifically on EDCs in fish, we believe that the basic conceptual approach has utility for systematically assessing exposure and effects of chemicals with other MOA across a variety of biological systems.

  12. Lessons Learned from Assimilating Altimeter Data into a Coupled General Circulation Model with the GMAO Augmented Ensemble Kalman Filter

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian; Vernieres, Guillaume; Rienecker, Michele; Jacob, Jossy; Kovach, Robin

    2011-01-01

    Satellite altimetry measurements have provided global, evenly distributed observations of the ocean surface since 1993. However, the difficulties introduced by the presence of model biases and the requirement that data assimilation systems extrapolate the sea surface height (SSH) information to the subsurface in order to estimate the temperature, salinity and currents make it difficult to optimally exploit these measurements. This talk investigates the potential of the altimetry data assimilation once the biases are accounted for with an ad hoc bias estimation scheme. Either steady-state or state-dependent multivariate background-error covariances from an ensemble of model integrations are used to address the problem of extrapolating the information to the sub-surface. The GMAO ocean data assimilation system applied to an ensemble of coupled model instances using the GEOS-5 AGCM coupled to MOM4 is used in the investigation. To model the background error covariances, the system relies on a hybrid ensemble approach in which a small number of dynamically evolved model trajectories is augmented on the one hand with past instances of the state vector along each trajectory and, on the other, with a steady state ensemble of error estimates from a time series of short-term model forecasts. A state-dependent adaptive error-covariance localization and inflation algorithm controls how the SSH information is extrapolated to the sub-surface. A two-step predictor corrector approach is used to assimilate future information. Independent (not-assimilated) temperature and salinity observations from Argo floats are used to validate the assimilation. A two-step projection method in which the system first calculates a SSH increment and then projects this increment vertically onto the temperature, salt and current fields is found to be most effective in reconstructing the sub-surface information. The performance of the system in reconstructing the sub-surface fields is particularly impressive for temperature, but not as satisfactory for salt.
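    The core extrapolation step, projecting an SSH innovation onto subsurface temperature through ensemble covariances, amounts to a scalar-observation Kalman update; the sketch below uses a toy ensemble (dimensions and values are invented, and localization and inflation are omitted).

        # Sketch: one analysis step projecting an SSH innovation to subsurface
        # temperature via ensemble covariances (toy dimensions and values).
        import numpy as np

        rng = np.random.default_rng(6)
        n_ens, n_depth = 32, 10
        # Toy ensemble: warm anomalies raise SSH, with depth-decaying amplitude.
        temp = rng.normal(0, 1.0, (n_ens, n_depth)) * np.exp(-np.arange(n_depth) / 4.0)
        ssh = temp @ np.linspace(1.0, 0.2, n_depth) + rng.normal(0, 0.02, n_ens)

        # Ensemble covariances: SSH with itself, and SSH with each depth level.
        ssh_a, temp_a = ssh - ssh.mean(), temp - temp.mean(axis=0)
        cov_ssh = ssh_a @ ssh_a / (n_ens - 1)
        cov_ssh_temp = ssh_a @ temp_a / (n_ens - 1)

        obs_err = 0.03**2
        innovation = 0.05 - ssh.mean()             # observed minus background SSH (m)
        gain = cov_ssh_temp / (cov_ssh + obs_err)  # Kalman gain, SSH -> temperature
        increment = gain * innovation              # subsurface temperature increment
        print(np.round(increment, 3))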

  13. Quantitative Cross-Species Extrapolation between Humans and Fish: The Case of the Anti-Depressant Fluoxetine

    PubMed Central

    Margiotta-Casaluci, Luigi; Owen, Stewart F.; Cumming, Rob I.; de Polo, Anna; Winter, Matthew J.; Panter, Grace H.; Rand-Weaver, Mariann; Sumpter, John P.

    2014-01-01

    Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at a similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as a test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, equal to and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of a measured internal dose-response effect of a pharmaceutical in fish, hence validating the Read-Across Hypothesis applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the assessment of the sensitivity of fish to pharmaceuticals, and strengthens the translational power of cross-species extrapolation. PMID:25338069

  14. MMOC- MODIFIED METHOD OF CHARACTERISTICS SONIC BOOM EXTRAPOLATION

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1994-01-01

    The Modified Method of Characteristics Sonic Boom Extrapolation program (MMOC) is a sonic boom propagation method which includes shock coalescence and incorporates the effects of asymmetry due to volume and lift. MMOC numerically integrates nonlinear equations from data at a finite distance from an airplane configuration at flight altitude to yield the sonic boom pressure signature at ground level. MMOC accounts for variations in entropy, enthalpy, and gravity for nonlinear effects near the aircraft, allowing extrapolation to begin nearer the body than in previous methods. This feature permits wind tunnel sonic boom models of up to three feet in length, enabling more detailed, realistic models than the previous six-inch sizes. It has been shown that elongated airplanes flying at high altitude and high Mach numbers can produce an acceptably low sonic boom. Shock coalescence in MMOC includes three-dimensional effects. The method is based on an axisymmetric solution with asymmetric effects determined by circumferential derivatives of the standard shock equations. Bow shocks and embedded shocks can be included in the near-field. The method of characteristics approach in MMOC allows large computational steps in the radial direction without loss of accuracy. MMOC is a propagation method rather than a predictive program. Thus input data (the flow field on a cylindrical surface at approximately one body length from the axis) must be supplied from calculations or experimental results. The MMOC package contains a uniform atmosphere pressure field program and interpolation routines for computing the required flow field data. Other user supplied input to MMOC includes Mach number, flow angles, and temperature. MMOC output tabulates locations of bow shocks and embedded shocks. When the calculations reach ground level, the overpressure and distance are printed, allowing the user to plot the pressure signature. MMOC is written in FORTRAN IV for batch execution and has been implemented on a CDC 170 series computer operating under NOS with a central memory requirement of approximately 223K of 60 bit words. This program was developed in 1983.

  15. Guided wave tomography in anisotropic media using recursive extrapolation operators

    NASA Astrophysics Data System (ADS)

    Volker, Arno

    2018-04-01

    Guided wave tomography is an advanced technology for quantitative wall thickness mapping to image wall loss due to corrosion or erosion. An inversion approach is used to match the measured phase (time) at a specific frequency to a model. The accuracy of the model determines the sizing accuracy. Particularly for seam-welded pipes there is a measurable amount of anisotropy. Moreover, for small defects a ray-tracing-based modelling approach is no longer accurate. Both issues are solved by applying a recursive wave field extrapolation operator assuming vertical transverse anisotropy. The inversion scheme is extended by estimating not only the wall loss profile but also the anisotropy, local material changes and transducer ring alignment errors. This makes the approach more robust. The approach will be demonstrated experimentally on different defect sizes, and a comparison will be made between this new approach and an isotropic ray-tracing approach. An example is given in Fig. 1 for a 75 mm wide, 5 mm deep defect. The wave field extrapolation-based tomography clearly provides superior results.

  16. Localized time-lapse elastic waveform inversion using wavefield injection and extrapolation: 2-D parametric studies

    NASA Astrophysics Data System (ADS)

    Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry

    2017-06-01

    We present a methodology to invert seismic data for a localized area by combining a source-side wavefield injection method with a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This can be much more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Besides, structural changes during time-lapse surveys are likely to occur in a small area rather than the whole region of the seismic experiment, such as an oil and gas reservoir or a CO2 injection well. We thus propose an approach that allows effective and quantitative imaging of localized structural changes far from both the source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we look for the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using the correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate errors in the obtained models, in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate, even in the presence of errors in both the initial models and the observed data.

  17. Improvement of forecast skill for severe weather by merging radar-based extrapolation and storm-scale NWP corrected forecast

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming

    2015-03-01

    The primary objective of this study is to improve the performance of deterministic high-resolution rainfall forecasts of severe storms by merging a radar-based extrapolation scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model, the Advanced Regional Prediction System (ARPS), for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weighting scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times under 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected forecasts. Moreover, optimally merging the results using the hyperbolic tangent weighting scheme further improved the forecast accuracy and stability.
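    A sketch of a hyperbolic-tangent weighting scheme of the kind described follows; the crossover near 50 min echoes the CSI comparison above, but the steepness parameter and all forecast values are invented.

        # Sketch: blend radar extrapolation and NWP forecasts with a tanh
        # weight that hands over to the model as lead time grows.
        import numpy as np

        def blend_weight(lead_min, crossover=50.0, steepness=15.0):
            """Weight on the radar-extrapolation forecast, in [0, 1]."""
            return 0.5 * (1.0 - np.tanh((lead_min - crossover) / steepness))

        lead = np.array([5.0, 20.0, 50.0, 80.0])         # minutes
        w = blend_weight(lead)
        rain_extrap = np.array([6.0, 5.5, 4.0, 2.0])     # mm/h, toy forecasts
        rain_nwp    = np.array([3.0, 4.5, 4.8, 5.0])
        merged = w * rain_extrap + (1.0 - w) * rain_nwp
        for t, wt, m in zip(lead, w, merged):
            print(f"t+{t:4.0f} min: weight on extrapolation {wt:.2f}, merged {m:.1f} mm/h")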

  18. In vitro to in vivo extrapolation of biotransformation rates for assessing bioaccumulation of hydrophobic organic chemicals in mammals.

    PubMed

    Lee, Yung-Shan; Lo, Justin C; Otton, S Victoria; Moore, Margo M; Kennedy, Chris J; Gobas, Frank A P C

    2017-07-01

    Incorporating biotransformation in bioaccumulation assessments of hydrophobic chemicals in both aquatic and terrestrial organisms in a simple, rapid, and cost-effective manner is urgently needed to improve bioaccumulation assessments of potentially bioaccumulative substances. One approach to estimate whole-animal biotransformation rate constants is to combine in vitro measurements of hepatic biotransformation kinetics with in vitro to in vivo extrapolation (IVIVE) and bioaccumulation modeling. An established IVIVE modeling approach exists for pharmaceuticals (referred to in the present study as IVIVE-Ph) and has recently been adapted for chemical bioaccumulation assessments in fish. The present study proposes and tests an alternative IVIVE-B technique to support bioaccumulation assessment of hydrophobic chemicals with a log octanol-water partition coefficient (KOW) ≥ 4 in mammals. The IVIVE-B approach requires fewer physiological and physicochemical parameters than the IVIVE-Ph approach and does not involve interconversions between clearance and rate constants in the extrapolation. Using in vitro depletion rates, the results show that the IVIVE-B and IVIVE-Ph models yield similar estimates of rat whole-organism biotransformation rate constants for hypothetical chemicals with log KOW ≥ 4. The IVIVE-B approach generated in vivo biotransformation rate constants and biomagnification factors (BMFs) for benzo[a]pyrene that are within the range of empirical observations. The proposed IVIVE-B technique may be a useful tool for assessing BMFs of hydrophobic organic chemicals in mammals. Environ Toxicol Chem 2017;36:1934-1946. © 2016 SETAC.
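    The generic IVIVE chain, from an in vitro depletion rate to a hepatic intrinsic clearance to a whole-body rate constant, can be sketched as below; every scaling factor is an invented placeholder and this is not the IVIVE-B model's actual parameterization.

        # Sketch of a generic IVIVE scaling chain. All scaling factors are
        # invented placeholders, not IVIVE-B parameter values.

        def ivive_rate_constant(k_dep_per_h,              # in vitro depletion rate (1/h)
                                cells_per_g_liver=110e6,  # hepatocellularity (cells/g)
                                cells_in_vitro_per_ml=0.5e6,
                                liver_g_per_kg_bw=35.0,   # rat-like, illustrative
                                f_unbound=1.0):
            # In vitro clearance per mL of incubation -> per gram of liver.
            cl_int_ml_per_h_per_g = k_dep_per_h / cells_in_vitro_per_ml * cells_per_g_liver
            # Per gram of liver -> whole body, normalized to body weight (L/h/kg).
            cl_int_L_per_h_per_kg = cl_int_ml_per_h_per_g * liver_g_per_kg_bw / 1000.0
            # Divide by an assumed volume of distribution (L/kg) for a 1/h constant.
            v_d = 5.0   # hydrophobic chemical, invented value
            return f_unbound * cl_int_L_per_h_per_kg / v_d

        print(f"k_biotransformation ~ {ivive_rate_constant(0.2):.3f} per hour")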

  19. Addressing Early Life Sensitivity Using Physiologically Based Pharmacokinetic Modeling and In Vitro to In Vivo Extrapolation

    PubMed Central

    Yoon, Miyoung; Clewell, Harvey J.

    2016-01-01

    Physiologically based pharmacokinetic (PBPK) modeling can provide an effective way to utilize in vitro and in silico based information in modern risk assessment for children and other potentially sensitive populations. In this review, we describe the process of in vitro to in vivo extrapolation (IVIVE) to develop PBPK models for a chemical in different ages in order to predict the target tissue exposure at the age of concern in humans. We present our ongoing studies on pyrethroids as a proof of concept to guide the readers through the IVIVE steps, using metabolism data collected either from age-specific liver donors or from expressed enzymes, in conjunction with enzyme ontogeny information, to provide age-appropriate metabolism parameters in the PBPK model in the rat and human, respectively. The approach we present here is readily applicable not just to other pyrethroids, but also to other environmental chemicals and drugs. Establishment of an in vitro and in silico-based evaluation strategy, in conjunction with relevant human exposure information, is of great importance in risk assessment for potentially vulnerable populations such as early life stages, where the necessary information for decision making is limited. PMID:26977255

  20. Addressing Early Life Sensitivity Using Physiologically Based Pharmacokinetic Modeling and In Vitro to In Vivo Extrapolation.

    PubMed

    Yoon, Miyoung; Clewell, Harvey J

    2016-01-01

    Physiologically based pharmacokinetic (PBPK) modeling can provide an effective way to utilize in vitro and in silico based information in modern risk assessment for children and other potentially sensitive populations. In this review, we describe the process of in vitro to in vivo extrapolation (IVIVE) to develop PBPK models for a chemical in different ages in order to predict the target tissue exposure at the age of concern in humans. We present our ongoing studies on pyrethroids as a proof of concept to guide the readers through the IVIVE steps, using metabolism data collected either from age-specific liver donors or from expressed enzymes, in conjunction with enzyme ontogeny information, to provide age-appropriate metabolism parameters in the PBPK model in the rat and human, respectively. The approach we present here is readily applicable not just to other pyrethroids, but also to other environmental chemicals and drugs. Establishment of an in vitro and in silico-based evaluation strategy, in conjunction with relevant human exposure information, is of great importance in risk assessment for potentially vulnerable populations such as early life stages, where the necessary information for decision making is limited.

  1. Narrowing the error in electron correlation calculations by basis set re-hierarchization and use of the unified singlet and triplet electron-pair extrapolation scheme: Application to a test set of 106 systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.

    2014-12-14

    A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
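    For orientation, the textbook two-point correlation-energy extrapolation E(X) = E_CBS + A/X^3 can be solved in closed form from the d- and t-zeta cardinal numbers; the paper's re-hierarchization of the basis numbers is not reproduced here, and the energies below are invented.

        # Sketch: two-point complete-basis-set extrapolation of the common
        # E(X) = E_CBS + A / X**3 form (plain textbook variant; energies invented).

        def cbs_two_point(E_low, E_high, X_low=2.0, X_high=3.0, power=3.0):
            """Solve E(X) = E_cbs + A/X**power for E_cbs from two energies."""
            wl, wh = X_low ** -power, X_high ** -power
            return (E_high * wl - E_low * wh) / (wl - wh)

        # Hypothetical correlation energies (hartree) with d- and t-zeta bases.
        E_dz, E_tz = -0.2205, -0.2562
        print(f"Estimated CBS correlation energy: {cbs_two_point(E_dz, E_tz):.4f} Eh")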

  2. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    NASA Astrophysics Data System (ADS)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes with extrapolation methods to obtain a higher order of accuracy preserves their qualitative properties with respect to dissipation, dispersion and stability. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
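    Classical Richardson extrapolation, combining solutions at step sizes h and h/2 from a method of order p to cancel the leading error term, is sketched below on forward Euler for y' = -y, where the exact solution is known.

        # Sketch of Richardson extrapolation on forward Euler (order p = 1)
        # for y' = -y on [0, 1], so errors can be checked against exp(-1).
        import numpy as np

        def euler(f, y0, t_end, h):
            y, t = y0, 0.0
            while t < t_end - 1e-12:
                y += h * f(y)
                t += h
            return y

        f = lambda y: -y
        exact = np.exp(-1.0)
        p = 1                                           # Euler's order of accuracy
        y_h  = euler(f, 1.0, 1.0, 0.1)
        y_h2 = euler(f, 1.0, 1.0, 0.05)
        richardson = (2**p * y_h2 - y_h) / (2**p - 1)   # order p+1 combination

        for name, val in (("h", y_h), ("h/2", y_h2), ("Richardson", richardson)):
            print(f"{name:10s} error = {abs(val - exact):.2e}")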

  3. Basic antenna transmitting characteristics using an extrapolation range measurement technique at a millimeter-wave band at NMIJ/AIST.

    PubMed

    Yamamoto, Tetsuya

    2007-06-01

    A novel test fixture operating at a millimeter-wave band using an extrapolation range measurement technique was developed at the National Metrology Institute of Japan (NMIJ). Here I describe the measurement system using a Q-band test fixture. I measured the relative insertion loss as a function of antenna separation distance and observed the effects of multiple reflections between the antennas. I also evaluated the antenna gain at 33 GHz using the extrapolation technique.

  4. Hand interception of occluded motion in humans: a test of model-based vs. on-line control.

    PubMed

    La Scaleia, Barbara; Zago, Myrka; Lacquaniti, Francesco

    2015-09-01

    Two control schemes have been hypothesized for the manual interception of fast visual targets. In the model-free on-line control, extrapolation of target motion is based on continuous visual information, without resorting to physical models. In the model-based control, instead, a prior model of target motion predicts the future spatiotemporal trajectory. To distinguish between the two hypotheses in the case of projectile motion, we asked participants to hit a ball that rolled down an incline at 0.2 g and then fell in air at 1 g along a parabola. By varying starting position, ball velocity and trajectory differed between trials. Motion on the incline was always visible, whereas parabolic motion was either visible or occluded. We found that participants were equally successful at hitting the falling ball in both visible and occluded conditions. Moreover, in different trials the intersection points were distributed along the parabolic trajectories of the ball, indicating that subjects were able to extrapolate an extended segment of the target trajectory. Remarkably, this trend was observed even at the very first repetition of movements. These results are consistent with the hypothesis of model-based control, but not with on-line control. Indeed, ball path and speed during the occlusion could not be extrapolated solely from the kinematic information obtained during the preceding visible phase. The only way to extrapolate ball motion correctly during the occlusion was to assume that the ball would fall under gravity and air drag when hidden from view. Such an assumption had to be derived from prior experience. Copyright © 2015 the American Physiological Society.

  5. The Effect of Format and Organization on Extrapolation and Interpolation with Multiple Trend Displays.

    ERIC Educational Resources Information Center

    Wolfe, Mary L.; Martuza, Victor R.

    The major purpose of this experiment was to examine the effects of format (bar graphs vs. tables) and organization (by year vs. by brand) on the speed and accuracy of extrapolation and interpolation with multiple, nonlinear trend displays. Fifty-six undergraduates enrolled in the College of Education at the University of Delaware served as the…

  6. Natural Hazards characterisation in industrial practice

    NASA Astrophysics Data System (ADS)

    Bernardara, Pietro

    2017-04-01

    The definition of rare hydroclimatic extremes (down to a 10^-4 annual probability of occurrence) is of the utmost importance for the design of high-value industrial infrastructures, such as grids, power plants and offshore platforms. Underestimation as well as overestimation of the risk may lead to huge costs (e.g., expensive mid-life works or overdesign) which may even prevent a project from happening. Nevertheless, the uncertainties associated with extrapolation towards these rare frequencies are huge and manifold. They are mainly due to the scarcity of observations, the lack of quality of extreme-value records, and the arbitrary choice of the models used for extrapolation. This often puts design engineers in uncomfortable situations when they must choose the design values to use. Fortunately, recent progress in earth observation techniques, information technology, historical data collection, and weather and ocean modelling is making huge datasets available. Careful use of big datasets of observations and modelled data is leading towards a better understanding of the physics of the underlying phenomena, the complex interactions between them, and thus of extreme-event frequency extrapolations. This will move engineering practice from single-site, small-sample applications of statistical analysis to a more spatially coherent, physically driven extrapolation of extreme values. A few examples from EDF industrial practice are given to illustrate this progress and its potential impact on design approaches.
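    As a sketch of the extrapolation problem described, the block below fits a generalized extreme value (GEV) distribution to synthetic annual maxima and reads off the 10^-4 annual-exceedance level; scipy is assumed and all values are illustrative.

        # Sketch: GEV fit to ~50 years of synthetic annual maxima, then
        # extrapolation to the 10,000-year (1e-4 annual probability) level.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(7)
        annual_maxima = genextreme.rvs(c=-0.1, loc=10.0, scale=2.0,
                                       size=50, random_state=rng)

        c, loc, scale = genextreme.fit(annual_maxima)
        level_1e4 = genextreme.ppf(1.0 - 1e-4, c, loc=loc, scale=scale)
        print(f"max observed: {annual_maxima.max():.1f}, "
              f"10,000-year level: {level_1e4:.1f}")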

  7. Expert elicitation of population-level effects of disturbance

    USGS Publications Warehouse

    Fleishman, Erica; Burgman, Mark; Runge, Michael C.; Schick, Robert S; Krauss, Scott; Popper, Arthur N.; Hawkins, Anthony

    2016-01-01

    Expert elicitation is a rigorous method for synthesizing expert knowledge to inform decision making and is reliable and practical when field data are limited. We evaluated the feasibility of applying expert elicitation to estimate population-level effects of disturbance on marine mammals. Diverse experts estimated parameters related to mortality and sublethal injury of North Atlantic right whales (Eubalaena glacialis). We are now eliciting expert knowledge on the movement of right whales among geographic regions to parameterize a spatial model of health. Expert elicitation complements methods such as simulation models or extrapolations from other species, sometimes with greater accuracy and less uncertainty.

  8. Static and wind tunnel near-field/far-field jet noise measurements from model scale single-flow baseline and suppressor nozzles. Volume 1: Noise source locations and extrapolation of static free-field jet noise data

    NASA Technical Reports Server (NTRS)

    Jaeck, C. L.

    1976-01-01

    A test was conducted in the Boeing Large Anechoic Chamber to determine static jet noise source locations of six baseline and suppressor nozzle models, and establish a technique for extrapolating near field data into the far field. The test covered nozzle pressure ratios from 1.44 to 2.25 and jet velocities from 412 to 594 m/s at a total temperature of 844 K.

  9. Extrapolation to Nonequilibrium from Coarse-Grained Response Theory

    NASA Astrophysics Data System (ADS)

    Basu, Urna; Helden, Laurent; Krüger, Matthias

    2018-05-01

    Nonlinear response theory, in contrast to linear cases, involves (dynamical) details, and this makes application to many-body systems challenging. From the microscopic starting point we obtain an exact response theory for a small number of coarse-grained degrees of freedom. With it, an extrapolation scheme uses near-equilibrium measurements to predict far-from-equilibrium properties (here, second-order responses). Because it does not involve system details, this approach can be applied to many-body systems. It is illustrated in a four-state model and in the near-critical Ising model.

  10. Modelling of human transplacental transport as performed in Copenhagen, Denmark.

    PubMed

    Mathiesen, Line; Mørck, Thit Aarøe; Zuri, Giuseppina; Andersen, Maria Helena; Pehrson, Caroline; Frederiksen, Marie; Mose, Tina; Rytting, Erik; Poulsen, Marie S; Nielsen, Jeanette K S; Knudsen, Lisbeth E

    2014-07-01

    Placenta perfusion models are very effective for studying placental mechanisms in order to extrapolate to real-life situations. The models are most often used to investigate the transport of substances between mother and foetus, including their potential metabolism. We have studied the relationships between maternal and foetal exposures to various compounds, including pollutants such as polychlorinated biphenyls, polybrominated flame retardants, and nanoparticles, as well as recombinant human antibodies. The compounds have been studied in the human placenta perfusion model and to some extent in vitro with an established human monolayer trophoblast cell culture model. Results from our studies distinguish placental transport of substances by physicochemical properties, adsorption to placental tissue, binding to transport and receptor proteins, and metabolism. We have collected data from different classes of chemicals and nanoparticles for comparisons across chemical structures as well as different test systems. Our test systems are based on human material to bypass the extrapolation from animal data. By combining data from our two test systems, we are able to rank and compare the transport of different classes of substances according to their transport ability. Ultimately, human data including measurements in cord blood contribute to the study of placental transport.

  11. STRESSOR-RESPONSE RELATIONSHIPS: THE FOUNDATION FOR CHARACTERIZING EFFECTS

    EPA Science Inventory

    This research has 4 main components. The first focuses on developing the scientific information needed to extrapolate data from one or a few tested species to species of primary concern, e.g., the need to extrapolate data from domesticated birds to piscivorous avian species when...

  12. Extrapolating the Acute Behavioral Effects of Toluene from 1-Hour to 24-Hour Exposures in Rats: Roles of Dose Metric, and Metabolic and Behavioral Tolerance.

    EPA Science Inventory

    Recent research on the acute effects of volatile organic compounds (VOCs) suggests that extrapolation from short (~ 1 h) to long durations (up to 4 h) may be improved by using estimates of brain toluene concentration (Br[Tol]) instead of cumulative inhaled dose (C x t) as a metri...

  13. Flash-lag effect: complicating motion extrapolation of the moving reference-stimulus paradoxically augments the effect.

    PubMed

    Bachmann, Talis; Murd, Carolina; Põder, Endel

    2012-09-01

    One fundamental property of the perceptual and cognitive systems is their capacity for prediction in the dynamic environment; the flash-lag effect has been considered a particularly suggestive example of this capacity (Nijhawan in Nature 370:256-257, 1994; Behav Brain Sci 31:179-239, 2008). Thus, because of the involvement of mechanisms of extrapolation and visual prediction, the moving object is perceived ahead of a simultaneously flashed static object objectively aligned with the moving one. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus, approaching it before the flash, does not diminish the flash-lag effect but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.

  14. EVALUATION OF MINIMUM DATA REQUIREMENTS FOR ACUTE TOXICITY VALUE EXTRAPOLATION WITH AQUATIC ORGANISMS

    EPA Science Inventory

    Buckler, Denny R., Foster L. Mayer, Mark R. Ellersieck and Amha Asfaw. 2003. Evaluation of Minimum Data Requirements for Acute Toxicity Value Extrapolation with Aquatic Organisms. EPA/600/R-03/104. U.S. Environmental Protection Agency, National Health and Environmental Effects Re...

  15. Molecular Target Homology as a Basis for Species Extrapolation to Assess the Ecological Risk of Veterinary Drugs

    EPA Science Inventory

    Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...

  16. EXTRAPOLATION OF THE SOLAR CORONAL MAGNETIC FIELD FROM SDO/HMI MAGNETOGRAM BY A CESE-MHD-NLFFF CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang Chaowei; Feng Xueshang, E-mail: cwjiang@spaceweather.ac.cn, E-mail: fengx@spaceweather.ac.cn

    Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated from the photosphere in a numerical way. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently, we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on the CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to Solar Dynamics Observatory/Helioseismic and Magnetic Imager (SDO/HMI) data with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then input into the extrapolation code. Qualitative comparison of the results with the SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. The most important structures of the ARs are reproduced excellently, such as the highly sheared field lines that suspend filaments in AR 11158 and the twisted flux rope that corresponds to a sigmoid in AR 11283. Quantitative assessment of the results shows that the force-free constraint is fulfilled very well in the strong-field regions but apparently not that well in the weak-field regions because of data noise and numerical errors in the small currents.

  17. Surface tensions of inorganic multicomponent aqueous electrolyte solutions and melts.

    PubMed

    Dutcher, Cari S; Wexler, Anthony S; Clegg, Simon L

    2010-11-25

    A semiempirical model is presented that predicts surface tensions (σ) of aqueous electrolyte solutions and their mixtures, for concentrations ranging from infinitely dilute solution to molten salt. The model requires, at most, only two temperature-dependent terms to represent surface tensions of either pure aqueous solutions, or aqueous or molten mixtures, over the entire composition range. A relationship was found for the coefficients of the equation σ = c1 + c2·T (where T is temperature in K) for molten salts in terms of ion valency and radius, melting temperature, and salt molar volume. Hypothetical liquid surface tensions can thus be estimated for electrolytes for which there are no data, or which do not exist in molten form. Surface tensions of molten (single) salts, when extrapolated to normal temperatures, were found to be consistent with data for aqueous solutions. This allowed surface tensions of very concentrated, supersaturated, aqueous solutions to be estimated. The model has been applied to the following single electrolytes over the entire concentration range, using data for aqueous solutions over the temperature range 233-523 K, and extrapolated surface tensions of molten salts and pure liquid electrolytes: HCl, HNO3, H2SO4, NaCl, NaNO3, Na2SO4, NaHSO4, Na2CO3, NaHCO3, NaOH, NH4Cl, NH4NO3, (NH4)2SO4, NH4HCO3, NH4OH, KCl, KNO3, K2SO4, K2CO3, KHCO3, KOH, CaCl2, Ca(NO3)2, MgCl2, Mg(NO3)2, and MgSO4. The average absolute percentage error between calculated and experimental surface tensions is 0.80% (for 2389 data points). The model extrapolates smoothly to temperatures as low as 150 K. Also, the model successfully predicts surface tensions of ternary aqueous mixtures; the effect of salt-salt interactions in these calculations was explored.
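
    The two-coefficient form quoted above, σ = c1 + c2·T, reduces to a linear least-squares fit per salt; a minimal sketch with invented molten-salt data:

    ```python
    # Sketch: fit sigma = c1 + c2*T for a molten salt, then extrapolate the
    # hypothetical liquid surface tension down to normal temperature.
    import numpy as np

    T = np.array([1100.0, 1150.0, 1200.0, 1250.0])   # K (hypothetical)
    sigma = np.array([114.0, 112.1, 110.2, 108.3])   # mN/m (hypothetical)

    c2, c1 = np.polyfit(T, sigma, deg=1)             # polyfit returns slope first
    print(f"c1 = {c1:.2f} mN/m, c2 = {c2:.4f} mN/(m K)")
    print(f"extrapolated sigma at 298 K: {c1 + c2 * 298.0:.1f} mN/m")
    ```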

  18. Use of Physiologically Based Pharmacokinetic (PBPK) Models ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. This report describes and demonstrates techniques necessary to extrapolate and incorporate in vitro derived metabolic rate constants in PBPK models. It also includes two case study examples designed to demonstrate the applicability of such data for health risk assessment and addresses the quantification, extrapolation and interpretation of advanced biochemical information on human interindividual variability of chemical metabolism for risk assessment application. It comprises five chapters; topics and results covered in the first four chapters have been published in the peer-reviewed scientific literature. Topics covered include: data quality objectives; the experimental framework; required data; and two example case studies that develop and incorporate in vitro metabolic rate constants in PBPK models designed to quantify human interindividual variability, to better direct the choice of uncertainty factors for health risk assessment. This report is intended to serve as a reference document for risk assessors to use when quantifying, extrapolating, and interpreting advanced biochemical information about human interindividual variability of chemical metabolism.
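
    The in vitro-to-in vivo step described above is commonly done by scaling assay intrinsic clearance to the whole organ and applying a liver model; the sketch below uses the standard well-stirred liver model with invented parameter values, as a plausible illustration rather than the report's own procedure.

    ```python
    # Sketch: scale in vitro microsomal clearance to hepatic clearance via
    # physiological scaling factors and the well-stirred liver model.
    clint_invitro = 20.0   # uL/min/mg microsomal protein (hypothetical assay)
    mppgl = 40.0           # mg microsomal protein per g liver (typical adult)
    liver_g = 1800.0       # liver weight, g
    fu = 0.1               # fraction unbound in blood (assumed)
    q_h = 90.0             # hepatic blood flow, L/h

    clint_invivo = clint_invitro * mppgl * liver_g * 60.0 / 1e6   # L/h
    cl_h = q_h * fu * clint_invivo / (q_h + fu * clint_invivo)    # well-stirred
    print(f"CLint,in vivo = {clint_invivo:.1f} L/h; hepatic CL = {cl_h:.1f} L/h")
    ```

    Interindividual variability enters by resampling parameters such as mppgl and fu across a simulated population.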

  19. Opportunities and Challenges in Employing In Vitro-In Vivo Extrapolation (IVIVE) to the Tox21 Dataset

    EPA Science Inventory

    In vitro-in vivo extrapolation (IVIVE), or the process of using in vitro data to predict in vivo phenomena, provides key opportunities to bridge the disconnect between high-throughput screening data and real-world human exposures and potential health effects. Strategies utilizing...

  20. Effects of Inventory Bias on Landslide Susceptibility Calculations

    NASA Technical Reports Server (NTRS)

    Stanley, T. A.; Kirschbaum, D. B.

    2017-01-01

    Many landslide inventories are known to be biased, especially inventories for large regions such as Oregon's SLIDO or NASA's Global Landslide Catalog. These biases must affect the results of empirically derived susceptibility models to some degree. We evaluated the strength of the susceptibility model distortion from postulated biases by truncating an unbiased inventory. We generated a synthetic inventory from an existing landslide susceptibility map of Oregon, then removed landslides from this inventory to simulate the effects of reporting biases likely to affect inventories in this region, namely population and infrastructure effects. Logistic regression models were fitted to the modified inventories. Then the process of biasing a susceptibility model was repeated with SLIDO data. We evaluated each susceptibility model with qualitative and quantitative methods. Results suggest that the effects of landslide inventory bias on empirical models should not be ignored, even if those models are, in some cases, useful. We suggest fitting models in well-documented areas and extrapolating across the study region as a possible approach to modeling landslide susceptibility with heavily biased inventories.
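
    Since the susceptibility models above are logistic regressions fitted to landslide presence/absence, a minimal sketch of that fitting step is shown below; the predictors (slope, relief), data, and use of scikit-learn are illustrative assumptions, not the authors' setup.

    ```python
    # Sketch: landslide susceptibility as logistic regression on grid cells.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 1000
    X = np.column_stack([rng.uniform(0, 45, n),     # slope (degrees)
                         rng.uniform(0, 500, n)])   # local relief (m)

    # Synthetic truth: steeper, higher-relief cells slide more often.
    logit = -4.0 + 0.08 * X[:, 0] + 0.004 * X[:, 1]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    model = LogisticRegression().fit(X, y)
    susceptibility = model.predict_proba(X)[:, 1]   # per-cell probability
    print("coefficients:", model.coef_[0], "intercept:", model.intercept_[0])
    ```

    Biasing the inventory amounts to dropping rows of X and y non-randomly before the fit, which is what shifts the fitted coefficients.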

  2. Optical spectroscopic studies of animal skin used in modeling of human cutaneous tissue

    NASA Astrophysics Data System (ADS)

    Drakaki, E.; Makropoulou, M.; Serafetinides, A. A.; Borisova, E.; Avramov, L.; Sianoudis, J. A.

    2007-03-01

    Optical spectroscopy, and in particular laser-induced autofluorescence spectroscopy (LIAFS) and diffuse reflectance spectroscopy (DRS), provides excellent possibilities for real-time, noninvasive diagnosis of different skin tissue pathologies. However, the introduction of optical spectroscopy into routine medical practice demands a statistically substantial data collection, independent of the laser sources and detectors used. Scientists either collect databases from patients in vivo, or they study different animal models to obtain objective information on the optical properties of various types of normal and diseased tissue. In the present work, the optical properties (fluorescence and reflectance) of two animal skin models are investigated. The aim of using animal models in optical spectroscopy investigations is to examine the statistics of the light-induced effects first in animals, before any extrapolation effort to humans. A nitrogen laser (λ=337.1 nm) was used as the excitation source for the autofluorescence measurements, while a tungsten-halogen lamp was used for the reflectance measurements. Samples of chicken and pig skin were measured in vitro and compared with results obtained from measurements of normal human skin in vivo. The specific features of the measured reflectance and fluorescence spectra are discussed, and the limits of data extrapolation for each skin type are also outlined.

  3. Tools and techniques for estimating high intensity RF effects

    NASA Astrophysics Data System (ADS)

    Zacharias, Richard L.; Pennock, Steve T.; Poggio, Andrew J.; Ray, Scott L.

    1992-01-01

    Tools and techniques for estimating and measuring coupling and component disturbance for avionics and electronic controls are described. A finite-difference time-domain (FD-TD) modeling code, TSAR, used to predict coupling is described. This code can quickly generate a mesh model to represent the test object. Some recent applications, as well as the advantages and limitations of using such a code, are described. Facilities and techniques for making low-power coupling measurements and for making direct-injection test measurements of device disturbance are also described. Some scaling laws for coupling and device effects are presented. A method for extrapolating these low-power test results to high-power, full-system effects is presented.

  4. Re-evaluation of temperature at the updip limit of locked portion of Nankai megasplay inferred from IODP Site C0002 temperature observatory

    NASA Astrophysics Data System (ADS)

    Sugihara, Takamitsu; Kinoshita, Masataka; Araki, Eichiro; Kimura, Toshinori; Kyo, Masanori; Namba, Yasuhiro; Kido, Yukari; Sanada, Yoshinori; Thu, Moe Kyaw

    2014-12-01

    In 2010, the first long-term borehole monitoring system was deployed at approximately 900 m below the sea floor (mbsf) and was assumed to be situated above the updip limit of the seismogenic zone in the Nankai Trough off Kumano (Site C0002). Four temperature records show that the effect of drilling diminished in less than 2 years. Based on in situ temperatures and thermal conductivities measured on core samples, the temperature and heat flow at 900 mbsf are estimated to be 37.9°C and 56 ± 1 mW/m2, respectively. This heat flow value is in excellent agreement with that from the shallow borehole temperature corrected for rapid sedimentation in the Kumano Basin. We use these values in the present study to extrapolate the temperature below 900 mbsf for a megasplay fault at approximately 5,200 mbsf and a plate boundary fault at approximately 7,000 mbsf. To extrapolate the temperature downward, we use logging-while-drilling (LWD) bit resistivity data as a proxy for porosity and estimate thermal conductivity from this porosity using a geometrical mean model. The one-dimensional (1-D) thermal conduction model used for the extrapolation includes radioactive heat and frictional heat production at the plate boundary fault. The estimated temperature at the megasplay ranges from 132°C to 149°C, depending on the assumed thermal conductivity and radioactive heat production values. These values are significantly higher, by up to 40°C, than some of the previous two-dimensional (2-D) numerical model predictions that can account for the high heat flow seaward of the deformation front, including a hydrothermal circulation within the subducted igneous oceanic crust. However, our results are in good agreement with those of the 2-D model that does not include the advective cooling effect. The results imply that 2-D geometrical effects as well as the influence of the advective cooling may be critical and should be evaluated more quantitatively. Revision of the 2-D simulation by introducing our new boundary conditions (37.9°C in situ temperature at 900 mbsf and approximately 56 mW/m2 heat flow) will be essential. Ultimately, in situ temperature measurements at the megasplay fault are required to understand seismogenesis in the Nankai subduction zone.
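
    The downward extrapolation described here amounts to integrating dT/dz = q/k(z) with conductivity estimated from porosity; the sketch below is a deliberately simplified version (constant heat flow, hypothetical porosity profile, no frictional or radioactive heat production) rather than the authors' full 1-D model.

    ```python
    # Sketch: 1-D conductive temperature extrapolation below 900 mbsf, with
    # k(z) from a geometric mean of water and matrix conductivities.
    import numpy as np

    k_water, k_matrix = 0.6, 2.5      # W/(m K), assumed end-members
    q = 0.056                         # W/m^2 (56 mW/m^2 at 900 mbsf)
    dz = 10.0
    z = np.arange(900.0, 5200.0, dz)  # depth below sea floor, m
    phi = 0.6 * np.exp(-(z - 900.0) / 2000.0)   # hypothetical porosity profile

    k = k_water**phi * k_matrix**(1.0 - phi)    # geometric mean model
    T = 37.9 + np.cumsum(q / k) * dz            # Euler integration, T in deg C
    print(f"extrapolated T near 5200 mbsf: {T[-1]:.0f} C")
    ```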

  5. Finite volume effects on the electric polarizability of neutral hadrons in lattice QCD

    NASA Astrophysics Data System (ADS)

    Lujan, M.; Alexandru, A.; Freeman, W.; Lee, F. X.

    2016-10-01

    We study the finite volume effects on the electric polarizability for the neutron, neutral pion, and neutral kaon using eight dynamically generated two-flavor nHYP-clover ensembles at two different pion masses: 306(1) and 227(2) MeV. An infinite volume extrapolation is performed for each hadron at both pion masses. For the neutral kaon, finite volume effects are relatively mild. The dependence on the quark mass is also mild, and a reliable chiral extrapolation can be performed along with the infinite volume extrapolation. Our result is α_{K0}^{phys} = 0.356(74)(46) × 10^-4 fm^3. In contrast, for the neutron, the electric polarizability depends strongly on the volume. After removing the finite volume corrections, our neutron polarizability results are in good agreement with chiral perturbation theory. For the connected part of the neutral pion polarizability, the negative trend persists; it is not due to finite volume effects but is likely due to sea quark charging effects.

  6. Dead time corrections using the backward extrapolation method

    NASA Astrophysics Data System (ADS)

    Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.

    2017-05-01

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to introduce strong bias in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled count per second (CPS), based on backward extrapolation of the losses created by artificially imposing increasingly long dead times on the data, back to zero dead time. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
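
    The backward-extrapolation idea can be sketched directly on a list of event timestamps: impose successively longer artificial (here non-paralyzing) dead times in software, and extrapolate the resulting count rates back to zero dead time. The Poisson source, quadratic extrapolant, and parameter values below are illustrative assumptions.

    ```python
    # Sketch: backward extrapolation of artificially imposed dead time losses.
    import numpy as np

    def apply_dead_time(times, tau):
        """Count events kept by a non-paralyzing dead time tau."""
        kept, last = 0, -np.inf
        for t in times:
            if t - last >= tau:
                kept += 1
                last = t
        return kept

    rng = np.random.default_rng(2)
    times = np.cumsum(rng.exponential(1.0 / 5e4, size=200_000))  # ~50 kcps
    duration = times[-1]

    taus = np.linspace(1e-6, 8e-6, 8)        # imposed dead times, s
    rates = [apply_dead_time(times, tau) / duration for tau in taus]

    coeffs = np.polyfit(taus, rates, deg=2)  # CPS as a quadratic in tau
    print(f"corrected rate: {np.polyval(coeffs, 0.0):.0f} cps")
    ```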

  7. Neutrinoless double beta decay and QCD running at low energy scales

    NASA Astrophysics Data System (ADS)

    González, M.; Hirsch, M.; Kovalenko, S. G.

    2018-06-01

    There is a common belief that the main uncertainties in the theoretical analysis of neutrinoless double beta (0νββ) decay originate from the nuclear matrix elements. Here, we uncover another previously overlooked source of potentially large uncertainties stemming from nonperturbative QCD effects. Recently, perturbative QCD corrections have been calculated for all dimension-6 and dimension-9 effective operators describing 0νββ decay and their importance for a reliable treatment of 0νββ decay has been demonstrated. However, these perturbative results are valid at energy scales above ~1 GeV, while the typical 0νββ scale is about 100 MeV. In view of this fact we examine the possibility of extrapolating the perturbative results towards sub-GeV nonperturbative scales on the basis of the QCD coupling constant "freezing" behavior using background perturbation theory. Our analysis suggests that such an infrared extrapolation does modify the perturbative results for both short-range and long-range mechanisms of 0νββ decay in general only moderately. We also discuss that the tensor⊗tensor effective operator cannot appear alone in the low energy limit of any renormalizable high-scale model and then demonstrate that all five linearly independent combinations of the scalar and tensor operators, which can appear in renormalizable models, are infrared stable.

  8. Robust approaches to quantification of margin and uncertainty for sparse data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin

    Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
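
    The tail-extrapolation risk being characterized can be made concrete with a toy experiment: fit a light-tailed model to a sparse sample from a heavier-tailed distribution and compare extrapolated and true extreme quantiles. Everything below is an invented illustration.

    ```python
    # Sketch: extrapolating a fitted normal tail from 30 points drawn from a
    # heavier-tailed Student-t distribution badly underestimates the tail.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    data = stats.t.rvs(df=4, size=30, random_state=rng)

    mu, sigma = data.mean(), data.std(ddof=1)      # fitted normal model
    q_fit = stats.norm.ppf(1 - 1e-4, mu, sigma)    # extrapolated tail quantile
    q_true = stats.t.ppf(1 - 1e-4, df=4)           # true tail quantile
    print(f"fitted-normal 1e-4 quantile: {q_fit:.2f}, true: {q_true:.2f}")
    ```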

  9. Spread of large LNG pools on the sea.

    PubMed

    Fay, J A

    2007-02-20

    A review of the standard model of LNG pool spreading on water, comparing it with the model and experiments on oil pool spread from which the LNG model is extrapolated, raises questions about the validity of the former as applied to spills from marine tankers. These questions arise from differences in fluid density ratios, in the multi-dimensional flow at the pool edge, in the effects of LNG pool boiling at the LNG-water interface, and in the model and experimental initial conditions compared with the inflow conditions of a marine tanker spill. An alternate supercritical flow model is proposed that avoids these difficulties; it predicts a significant increase in the maximum pool radius compared with the standard model and is partially corroborated by tests of LNG pool fires on water. Wind-driven ocean wave interaction has little effect on either spread model.

  10. Foundations of anticipatory logic in biology and physics.

    PubMed

    Bettinger, Jesse S; Eastman, Timothy E

    2017-12-01

    Recent advances in modern physics and biology reveal several scenarios in which top-down effects (Ellis, 2016) and anticipatory systems (Rosen, 1980) indicate processes at work enabling active modeling and inference such that anticipated effects project onto potential causes. We extrapolate a broad landscape of anticipatory systems in the natural sciences extending to computational neuroscience of perception in the capacity of Bayesian inferential models of predictive processing. This line of reasoning also comes with philosophical foundations, which we develop in terms of counterfactual reasoning and possibility space, Whitehead's process thought, and correlations with Eastern wisdom traditions.

  11. Validation and Application of Pharmacokinetic Models for Interspecies Extrapolations in Toxicity Risk Assessments of Volatile Organics

    DTIC Science & Technology

    1988-08-30

    I. OVERALL OBJECTIVE AND STATEMENT OF WORK The overall objective of the proposed project is to investigate the scientific basis...development and inter-species correlations with toxicity. A second series of tissue disposition experiments will be conducted to determine what ...elimination of halocarbons is hepatic metabolism. If metabolism plays a significant role in the disposition and subsequent neurobehavioral effects of

  12. Accounting for measurement error in log regression models with applications to accelerated testing.

    PubMed

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
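
    The estimation route named above, iteratively re-weighted least squares, alternates weighted least-squares solves with weight updates; in the sketch below the weights depend on the current fitted means, with the specific variance function an illustrative assumption rather than the paper's.

    ```python
    # Sketch: generic IRLS for y ~ X b when error variance grows with the mean
    # (as under multiplicative noise on a log-scale model).
    import numpy as np

    def irls(X, y, n_iter=20):
        b = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least-squares start
        for _ in range(n_iter):
            mu = X @ b
            w = 1.0 / np.maximum(mu**2, 1e-8)      # assumed: var ~ mean^2
            sw = np.sqrt(w)
            b = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        return b

    rng = np.random.default_rng(4)
    x = rng.uniform(1, 10, 200)
    X = np.column_stack([np.ones_like(x), x])
    y = (2.0 + 0.5 * x) * (1 + 0.1 * rng.standard_normal(200))
    print("IRLS estimates:", irls(X, y))
    ```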

  13. Improved effective-one-body model of spinning, nonprecessing binary black holes for the era of gravitational-wave astrophysics with advanced detectors

    NASA Astrophysics Data System (ADS)

    Bohé, Alejandro; Shao, Lijing; Taracchini, Andrea; Buonanno, Alessandra; Babak, Stanislav; Harry, Ian W.; Hinder, Ian; Ossokine, Serguei; Pürrer, Michael; Raymond, Vivien; Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2017-02-01

    We improve the accuracy of the effective-one-body (EOB) waveforms that were employed during the first observing run of Advanced LIGO for binaries of spinning, nonprecessing black holes by calibrating them to a set of 141 numerical-relativity (NR) waveforms. The NR simulations expand the domain of calibration toward larger mass ratios and spins, as compared to the previous EOBNR model. Merger-ringdown waveforms computed in black-hole perturbation theory for Kerr spins close to extremal provide additional inputs to the calibration. For the inspiral-plunge phase, we use a Markov-chain Monte Carlo algorithm to efficiently explore the calibration space. For the merger-ringdown phase, we fit the NR signals with phenomenological formulae. After extrapolation of the calibrated model to arbitrary mass ratios and spins, the (dominant-mode) EOBNR waveforms have faithfulness—at design Advanced-LIGO sensitivity—above 99% against all the NR waveforms, including 16 additional waveforms used for validation, when maximizing only on initial phase and time. This implies a negligible loss in event rate due to modeling for these binary configurations. We find that future NR simulations at mass ratios ≳4 and double spin ≳0.8 will be crucial to resolving discrepancies between different ways of extrapolating waveform models. We also find that some of the NR simulations that already exist in such region of parameter space are too short to constrain the low-frequency portion of the models. Finally, we build a reduced-order version of the EOBNR model to speed up waveform generation by orders of magnitude, thus enabling intensive data-analysis applications during the upcoming observation runs of Advanced LIGO.

  14. Halo effective field theory constrains the solar 7Be + p → 8B + γ rate

    DOE PAGES

    Zhang, Xilin; Nollett, Kenneth M.; Phillips, D. R.

    2015-11-06

    In this study, we report an improved low-energy extrapolation of the cross section for the process 7Be(p,γ)8B, which determines the 8B neutrino flux from the Sun. Our extrapolant is derived from Halo Effective Field Theory (EFT) at next-to-leading order. We apply Bayesian methods to determine the EFT parameters and the low-energy S-factor, using measured cross sections and scattering lengths as inputs. Asymptotic normalization coefficients of 8B are tightly constrained by existing radiative capture data, and contributions to the cross section beyond external direct capture are detected in the data at E < 0.5 MeV. Most importantly, the S-factor at zero energy is constrained to be S(0) = 21.3 ± 0.7 eV b, which is an uncertainty smaller by a factor of two than previously recommended. That recommendation was based on the full range for S(0) obtained among a discrete set of models judged to be reasonable. In contrast, Halo EFT subsumes all models into a controlled low-energy approximant, where they are characterized by nine parameters at next-to-leading order. These are fit to data, and marginalized over via Monte Carlo integration to produce the improved prediction for S(E).

  15. Genomic instability, bystander effect, cytoplasmic irradiation and other phenomena that may achieve fame without fortune.

    PubMed

    Hall, E J

    2001-01-01

    The possible risk of induced malignancies in astronauts, as a consequence of the radiation environment in space, is a factor of concern for long-term missions. Cancer risk estimates for high doses of low LET radiation are available from the epidemiological studies of the A-bomb survivors. Cancer risks at lower doses cannot be detected in epidemiological studies and must be inferred by extrapolation from the high dose risks. The standard-setting bodies, such as the ICRP, recommend a linear, no-threshold extrapolation of risks from high to low doses, but this is controversial. A study of mechanisms of carcinogenesis may shed some light on the validity of a linear extrapolation. The multi-step nature of carcinogenesis suggests that the role of radiation may be to induce a mutation leading to a mutator phenotype. High energy Fe ions, such as those encountered in space, are highly effective in inducing genomic instability. Experiments involving the single-particle microbeam have demonstrated a "bystander effect", i.e., a biological effect in cells not themselves hit but in close proximity to those that are, as well as the induction of mutations in cells where only the cytoplasm, and not the nucleus, has been traversed by a charged particle. These recent experiments cast doubt on the validity of a simple linear extrapolation, but the data are so far fragmentary and conflicting. More studies are necessary. While mechanistic studies cannot replace epidemiology as a source of quantitative risk estimates, they may shed some light on the shape of the dose-response relationship and therefore on the limitations of a linear extrapolation to low doses.

  17. Short-range stabilizing potential for computing energies and lifetimes of temporary anions with extrapolation methods.

    PubMed

    Sommerfeld, Thomas; Ehara, Masahiro

    2015-01-21

    The energy of a temporary anion can be computed by adding a stabilizing potential to the molecular Hamiltonian, increasing the stabilization until the temporary state is turned into a bound state, and then further increasing the stabilization until enough bound state energies have been collected so that these can be extrapolated back to vanishing stabilization. The lifetime can be obtained from the same data, but only if the extrapolation is done through analytic continuation of the momentum as a function of the square root of a shifted stabilizing parameter. This method is known as analytic continuation of the coupling constant, and it requires, at least in principle, that the bound-state input data are computed with a short-range stabilizing potential. In the context of molecules and ab initio packages, long-range Coulomb stabilizing potentials are, however, far more convenient and have been used in the past with some success, although the error introduced by the long-range nature of the stabilizing potential remains unknown. Here, we introduce a soft-Voronoi box potential that can serve as a short-range stabilizing potential. The difference between a Coulomb and the new stabilization is analyzed in detail for a one-dimensional model system as well as for the ²Πu resonance of CO2⁻, and in both cases the extrapolation results are compared to independently computed resonance parameters: from complex scaling for the model, and from complex absorbing potential calculations for CO2⁻. It is important to emphasize that for both the model and CO2⁻, all three sets of results have, respectively, been obtained with the same electronic structure method and basis set so that the theoretical description of the continuum can be directly compared. The new soft-Voronoi-box-based extrapolation is then used to study the influence of the size of the diffuse and valence basis sets on the computed resonance parameters.
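
    The analytic-continuation step can be sketched numerically: bound-state energies computed at several stabilizing strengths λ are converted to momenta, fitted in x = sqrt(λ − λ0), and evaluated at the (imaginary) x corresponding to λ = 0 to give a complex resonance energy. The branch point, energies, and polynomial order below are invented toy values.

    ```python
    # Sketch: analytic continuation of the coupling constant (ACCC).
    import numpy as np

    lam0 = 0.5                            # assumed branch point
    lam = np.linspace(0.8, 2.0, 7)        # stabilizing strengths used
    E = -0.05 * (lam - lam0)**2           # hypothetical bound energies (a.u.)
    kappa = np.sqrt(-2.0 * E)             # bound-state momenta

    x = np.sqrt(lam - lam0)
    coeffs = np.polyfit(x, kappa, deg=3)  # kappa as a polynomial in x

    x0 = 1j * np.sqrt(lam0)               # x at vanishing stabilization
    k_res = np.polyval(coeffs, x0)        # continued (complex) momentum
    E_res = -0.5 * k_res**2               # complex resonance energy
    print(f"position {E_res.real:.4f} a.u., width {-2 * E_res.imag:.4f} a.u.")
    ```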

  18. Molecular Sieve Bench Testing and Computer Modeling

    NASA Technical Reports Server (NTRS)

    Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.

    1995-01-01

    The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to the change of the parameters which influences the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with linear driving force for 5X sorbent and pore diffusion for silica gel are then applied to test data. A more complex model, a non-darcian model (two dimensional), has also been developed for simulation of the test data. This model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
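
    As a flavor of the finite-difference sorption models described above, the sketch below integrates a 1-D, isothermal plug-flow column with a linear-driving-force uptake term and a linear isotherm, using an explicit upwind scheme. It is a heavily simplified stand-in for the report's nonisothermal, axially dispersed models, and all parameter values are hypothetical.

    ```python
    # Sketch: 1-D isothermal plug-flow adsorption column with LDF kinetics.
    import numpy as np

    nz, L, v = 200, 0.5, 0.1        # cells, bed length (m), gas velocity (m/s)
    K, k_ldf = 200.0, 0.05          # linear isotherm slope, LDF rate (1/s)
    eps, rho = 0.4, 700.0           # voidage, sorbent bulk density (kg/m^3)
    dz = L / nz
    dt = 0.5 * dz / v               # CFL-limited explicit time step

    c = np.zeros(nz)                # gas-phase concentration (mol/m^3)
    q = np.zeros(nz)                # adsorbed loading (mol/kg)
    c_in = 1.0                      # feed concentration

    for _ in range(200_000):
        dqdt = k_ldf * (K * c / rho - q)              # LDF toward q* = K c/rho
        upstream = np.concatenate(([c_in], c[:-1]))   # upwind neighbor
        c += dt * (-v * (c - upstream) / dz
                   - (1 - eps) / eps * rho * dqdt)
        q += dt * dqdt

    print(f"outlet breakthrough fraction: {c[-1] / c_in:.3f}")
    ```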

  19. An analysis of the nucleon spectrum from lattice partially-quenched QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. Armour; Allton, C. R.; Leinweber, Derek B.

    2010-09-01

    The chiral extrapolation of the nucleon mass, Mn, is investigated using data coming from 2-flavour partially-quenched lattice simulations. The leading one-loop corrections to the nucleon mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite-volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of Mn in agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.

  20. A novel evaluation method for extrapolated retention factor in determination of n-octanol/water partition coefficient of halogenated organic pollutants by reversed-phase high performance liquid chromatography.

    PubMed

    Han, Shu-ying; Liang, Chao; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin; Chen, Hong-yuan

    2012-02-03

    The retention factor corresponding to pure water as mobile phase in reversed-phase high performance liquid chromatography (RP-HPLC), k_w, has commonly been obtained by tedious extrapolation of retention factors (k) measured in mixtures of organic modifier and water. In this paper, a relationship between log k_w and log k for directly determining k_w is proposed for the first time. With satisfactory validation, the approach was confirmed to enable easy and accurate evaluation of k_w for compounds of interest with structures similar to the model compounds. Eight PCB congeners with different degrees of chlorination were selected as a training set for modelling the log k_w versus log k correlation on both silica-based C8 and C18 stationary phases, to evaluate log k_w of sample compounds including seven PCB, six PBB and eight PBDE congeners. These eight model PCBs were subsequently combined with seven structurally similar benzene derivatives possessing reliable experimental K_ow values as a whole training set for log K_ow versus log k_w regressions on the two stationary phases. Consequently, the evaluated log k_w values of the sample compounds were used to determine their log K_ow from the derived log K_ow versus log k_w models. The log K_ow values obtained from these evaluated log k_w were well comparable with those obtained by experimentally extrapolated log k_w, demonstrating that the proposed method for log k_w evaluation could be an effective means in lipophilicity studies of environmental contaminants with numerous congeners. As a result, log K_ow data for many PCBs, PBBs and PBDEs could be offered. These contaminants are considered to exist widely in the environment, but reliable experimental K_ow data have not been available.
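
    The two chained calibrations described above (log k to log k_w on a stationary phase, then log k_w to log K_ow) reduce to two linear fits applied in sequence; the sketch below uses invented numbers purely to show the mechanics.

    ```python
    # Sketch: estimate log Kow of a sample compound from its measured log k
    # via chained linear calibrations. All data are hypothetical.
    import numpy as np

    log_k_train = np.array([0.6, 0.8, 1.0, 1.2, 1.4])    # model PCBs
    log_kw_train = np.array([3.1, 3.6, 4.1, 4.6, 5.1])   # their log k_w values
    b, a = np.polyfit(log_k_train, log_kw_train, 1)      # log kw = a + b log k

    log_kw_cal = np.array([3.0, 3.8, 4.4, 5.0, 5.6])     # whole training set
    log_kow_cal = np.array([4.0, 4.9, 5.6, 6.3, 6.9])    # reference Kow values
    d, c = np.polyfit(log_kw_cal, log_kow_cal, 1)        # log Kow = c + d log kw

    log_k_new = 1.1                                      # sample measurement
    print(f"estimated log Kow: {c + d * (a + b * log_k_new):.2f}")
    ```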

  1. Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Y.; Maier, A.; Berger, M.

    2015-04-15

    Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic based extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear to be ad-hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in the position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on nontruncated data, even in the presence of severe truncation, compared to a rRMSE of 8.0% when applying a state-of-the-art heuristic extrapolation technique. Conclusions: The method we proposed in this paper leads to a major improvement in image quality for 3D C-arm based VOI imaging. It involves no additional radiation when using fluoroscopic images that are acquired during the patient isocentering process. The model estimation can be readily integrated into the existing interventional workflow without additional hardware.

  2. Fuzzy logic and causal reasoning with an 'n' of 1 for diagnosis and treatment of the stroke patient.

    PubMed

    Helgason, Cathy M; Jobe, Thomas H

    2004-03-01

    The current scientific model for clinical decision-making is founded on binary or Aristotelian logic, classical set theory and probability-based statistics. Evidence-based medicine has been established as the basis for clinical recommendations. There is a problem with this scientific model when the physician must diagnose and treat the individual patient. The problem is a paradox: the scientific model of evidence-based medicine is based upon hypotheses aimed at the group, and therefore any conclusions can be extrapolated only to a degree to the individual patient. This extrapolation depends upon the expertise of the physician. A fuzzy-logic, multivalued scientific model allows this expertise to be represented numerically and resolves the clinical paradox of evidence-based medicine.

  3. Extrapolation of Functions of Many Variables by Means of Metric Analysis

    NASA Astrophysics Data System (ADS)

    Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David

    2018-02-01

    The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of m-dimensional space, and it is required to restore the value of the function at points outside the domain D. The paper proposes a fundamentally new extrapolation method for functions of several variables, based on the interpolation scheme of metric analysis. The scheme consists of two stages. In the first stage, metric analysis is used to interpolate the function at points of D lying on the straight-line segment connecting the centre of D with the point M at which the value of the function is to be restored. In the second stage, an autoregression model combined with metric analysis predicts the function values along this straight-line segment beyond the domain D, up to the point M. A numerical example demonstrates the efficiency of the method under consideration.
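
    A rough numerical sketch of the two-stage scheme, with the metric-analysis interpolation replaced by an off-the-shelf radial-basis-function interpolant (an explicit substitution) and the along-ray forecast done with a least-squares autoregression; all functions and numbers are illustrative.

    ```python
    # Sketch: two-stage extrapolation along the ray centre -> M.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(5)
    pts = rng.uniform(-1, 1, size=(300, 2))              # scattered data in D
    f = lambda p: np.sin(p[:, 0]) + 0.5 * p[:, 1]**2     # hypothetical function
    interp = RBFInterpolator(pts, f(pts))                # stand-in for stage 1

    centre, M = np.zeros(2), np.array([1.8, 0.9])        # M lies outside D
    t_in = np.linspace(0.0, 0.55, 40)                    # ray parameters in D
    vals = interp(centre + t_in[:, None] * (M - centre)) # stage 1 values

    p = 5                                                # AR order, stage 2
    A = np.column_stack([vals[i:len(vals) - p + i] for i in range(p)])
    coef = np.linalg.lstsq(A, vals[p:], rcond=None)[0]
    seq = list(vals)
    for _ in range(32):                                  # march out to M
        seq.append(np.dot(coef, seq[-p:]))
    print(f"extrapolated f(M) ~ {seq[-1]:.3f}, "
          f"true {np.sin(1.8) + 0.5 * 0.9**2:.3f}")
    ```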

  4. Chiral extrapolation of the leading hadronic contribution to the muon anomalous magnetic moment

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Maltman, Kim; Peris, Santiago

    2017-04-01

    A lattice computation of the leading-order hadronic contribution to the muon anomalous magnetic moment can potentially help reduce the error on the Standard Model prediction for this quantity, if sufficient control of all systematic errors affecting such a computation can be achieved. One of these systematic errors is that associated with the extrapolation to the physical pion mass from values on the lattice larger than the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 200 to 400 MeV with the help of two-loop chiral perturbation theory, and we find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various tricks to improve the reliability of the chiral extrapolation employed in the literature are taken into account. In addition, while chiral perturbation theory also predicts the dependence on the pion mass of the leading-order hadronic contribution to the muon anomalous magnetic moment as the chiral limit is approached, this prediction turns out to be of no practical use because the physical pion mass is larger than the muon mass that sets the scale for the onset of this behavior.

  5. Neuritogenesis: A model for space radiation effects on the central nervous system

    NASA Technical Reports Server (NTRS)

    Vazquez, M. E.; Broglio, T. M.; Worgul, B. V.; Benton, E. V.

    1994-01-01

    Pivotal to the astronauts' functional integrity and survival during long space flights are the strategies to deal with space radiations. The majority of the cellular studies in this area emphasize simple endpoints such as growth related events which, although useful to understand the nature of primary cell injury, have poor predictive value for extrapolation to more complex tissues such as the central nervous system (CNS). In order to assess the radiation damage on neural cell populations, we developed an in vitro model in which neuronal differentiation, neurite extension, and synaptogenesis occur under controlled conditions. The model exploits chick embryo neural explants to study the effects of radiations on neuritogenesis. In addition, neurobiological problems associated with long-term space flights are discussed.

  6. Bridging the Gap Between In Vitro Dissolution and the Time Course of Ibuprofen-Mediating Pain Relief.

    PubMed

    Cristofoletti, Rodrigo; Dressman, Jennifer B

    2016-12-01

    In vitro-in vivo extrapolation techniques combined with physiologically based pharmacokinetic models represent a feasible approach to establishing links between critical quality attributes and the time course of drug concentrations in vivo. By further integrating the results with pharmacodynamic (PD) models, scientists can also explore the time course of drug effect. The aim of this study was to assess whether differences in dissolution rates would affect the onset, magnitude, and duration of the time course of ibuprofen-mediated pain relief. An integrated in vitro-in vivo extrapolation-physiologically based pharmacokinetic/PD model was used to simulate pharmacokinetic and PD profiles for ibuprofen free acid (IBU-H) and its salts. Two elements of the pharmacokinetic profile, the peak of exposure (Cmax) and the time to peak concentration (Tmax), were sensitive to dissolution rate, whereas only one element of the pharmacodynamic profile was affected, namely the onset of drug action. The Cmax differences between IBU-H and its salts seem to be mitigated in the (hypothetical) effect compartment because of the concurrent distribution and elimination processes. Furthermore, the predicted maximum concentration in the effect compartment exceeded the EC80 value, which marks the plateau phase of the PD concentration-response curve, regardless of whether IBU-H or its salts were administered. Understanding the target site distribution kinetics and the potential nonlinearities between exposure and response will assist in setting criteria that are more scientifically based for the demonstration of therapeutic equivalence.
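
    The "hypothetical effect compartment" above is conventionally a first-order link, dCe/dt = ke0·(Cp − Ce); the sketch below integrates it for an invented one-compartment oral plasma profile, showing how the distribution delay damps Cmax differences at the effect site. All rate constants are assumptions.

    ```python
    # Sketch: effect-compartment model driven by a Bateman plasma profile.
    import numpy as np

    ka, ke, ke0 = 2.0, 0.3, 0.8     # 1/h, hypothetical rate constants
    t = np.linspace(0.0, 12.0, 1201)
    dt = t[1] - t[0]

    Cp = ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))  # plasma (scaled)

    Ce = np.zeros_like(Cp)
    for i in range(len(t) - 1):     # explicit Euler for dCe/dt = ke0*(Cp - Ce)
        Ce[i + 1] = Ce[i] + dt * ke0 * (Cp[i] - Ce[i])

    print(f"plasma Cmax {Cp.max():.3f} at {t[Cp.argmax()]:.1f} h; "
          f"effect-site Cmax {Ce.max():.3f} at {t[Ce.argmax()]:.1f} h")
    ```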

  7. Dielectric relaxation spectrum of undiluted poly(4-chlorostyrene), T≳Tg

    NASA Astrophysics Data System (ADS)

    Yoshihara, M.; Work, R. N.

    1980-06-01

    Dielectric relaxation characteristics of undiluted, atactic poly(4-chlorostyrene), P4CS, have been determined at temperatures 406 K ≤ T ≤ 446 K from measurements made at frequencies 0.2 Hz ≤ f ≤ 0.2 MHz. After effects of electrical conductivity are subtracted, it is found that the normalized complex dielectric constant K* = K′ − iK″ can be represented quantitatively by the Havriliak-Negami (H-N) equation K* = [1 + (iωτ0)^(1−α)]^(−β), 0 ≤ α, β ≤ 1, except for a small, high frequency tail that appears in measurements made near the glass transition temperature, Tg. The parameter β is nearly constant, and α depends linearly on log τ0, where τ0 is a characteristic relaxation time. The parameters α and β extrapolate through values obtained from published data from P4CS solutions, and extrapolation to α = 0 yields a value of τ0 which compares favorably with a published value for crankshaft motions of an equivalent isolated chain segment. These observations suggest that β may characterize effects of chain connectivity and α may describe effects of interactions of the surroundings with the chain. Experimental results are compared with alternative empirical and model-based representations of dielectric relaxation in polymers.
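
    For reference, the Havriliak-Negami form quoted above is straightforward to evaluate; the sketch below computes K′ and K″ over a frequency sweep for illustrative parameter values.

    ```python
    # Sketch: evaluate K* = [1 + (i w tau0)^(1-alpha)]^(-beta) and split into
    # storage (K') and loss (K'') parts. Parameter values are illustrative.
    import numpy as np

    alpha, beta, tau0 = 0.4, 0.6, 1e-3    # H-N parameters (assumed)
    f = np.logspace(-1, 6, 8)             # Hz
    w = 2.0 * np.pi * f

    K = (1 + (1j * w * tau0) ** (1 - alpha)) ** (-beta)
    for fi, Ki in zip(f, K):
        print(f"f = {fi:9.2e} Hz   K' = {Ki.real:5.3f}   K'' = {-Ki.imag:5.3f}")
    ```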

  8. Regionalisation of parameters of a large-scale water quality model in Lithuania using PAIC-SWAT

    NASA Astrophysics Data System (ADS)

    Zarrineh, Nina; van Griensven, Ann; Sennikovs, Juris; Bekere, Liga; Plunge, Svajunas

    2015-04-01

    To comply with the EU Water Framework Directive, all water bodies need to achieve good ecological status. To reach these goals, the Environmental Protection Agency (AAA) has to elaborate river basin district management plans and programmes of measures for all catchments in Lithuania. For this purpose, a Soil and Water Assessment Tool (SWAT) model was set up for all Lithuanian catchments using the most recent version of SWAT2012 rev627, implemented and embedded in a Python workflow by the Center of Processes Analysis and Research (PAIC). The model was calibrated and evaluated using all monitoring data of river discharge and of nitrogen and phosphorus concentrations and loads. A regionalisation strategy was set up by identifying 13 hydrological regions according to runoff formation and hydrological conditions. In each region, a representative catchment was selected and calibrated using a combination of manual and automated calibration techniques. After final parameterization and fulfilment of the calibration and validation criteria, the same parameter sets were extrapolated to the other catchments within the same hydrological region. A multi-variable calibration/validation strategy was implemented for the following variables: river flow and in-stream NO3, total nitrogen, PO4 and total phosphorus concentrations. The criteria used for calibration, validation and extrapolation are the Nash-Sutcliffe Efficiency (NSE) for flow, R-squared for water quality variables, and PBIAS (percentage bias) for all variables. For the hydrological calibration, NSE values greater than 0.5 should be achieved, while for validation and extrapolation the thresholds are 0.4 and 0.3, respectively. PBIAS errors have to be less than 20% for calibration, and less than 25% and 30% for validation and extrapolation, respectively. For the nitrogen variables in the water quality calibration, R-squared should reach 0.5 for calibration, and 0.4 and 0.3 for validation and extrapolation, respectively; in addition, the PBIAS error should be less than 40% for calibration and less than 70% for validation and extrapolation for all of the water quality variables mentioned. For the flow calibration, daily discharge data for 62 stations were provided for the period 1997-2012. Water quality data were provided for more than 500 stations, and 135 data-rich stations were pre-processed into a database containing all observations from 1997-2012. Finally, by implementing this regionalisation strategy, the model could satisfactorily predict the selected variables: in the hydrological part more than 90% of stations fulfilled the criteria, and in the water quality part more than 95% of stations fulfilled the criteria. Keywords: Water Quality Modelling, Regionalisation, Parameterization, Nitrogen and Phosphorus Prediction, Calibration, PAIC-SWAT.
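
    The skill scores quoted above are simple functions of the observed and simulated series; a minimal sketch of how they are commonly computed is given below (sign conventions for PBIAS differ between tools, so the formula shown is one common choice).

    ```python
    # Sketch: NSE and PBIAS as used for calibration/validation screening.
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 is perfect; < 0 is worse than the mean."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

    def pbias(obs, sim):
        """Percentage bias; one common convention (positive = underestimation)."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    obs = [3.2, 4.1, 5.0, 6.3, 4.4, 3.9]   # hypothetical daily discharge
    sim = [3.0, 4.5, 4.8, 6.0, 4.9, 3.5]
    print(f"NSE = {nse(obs, sim):.2f}, PBIAS = {pbias(obs, sim):.1f}%")
    ```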

  9. [Application of Markov model in post-marketing pharmacoeconomic evaluation of traditional Chinese medicine].

    PubMed

    Wang, Xin; Su, Xia; Sun, Wentao; Xie, Yanming; Wang, Yongyan

    2011-10-01

    In post-marketing studies of traditional Chinese medicine (TCM), pharmacoeconomic evaluation has important applied significance. However, the economic literature on TCM has been unable to fully and accurately reflect the unique overall outcomes of treatment with TCM. Given the special nature of TCM itself, we recommend that the Markov model be introduced into post-marketing pharmacoeconomic evaluation of TCM, and we also explore the feasibility of applying the model. The Markov model can extrapolate beyond the study time horizon, suits the effectiveness indicators of TCM, and provides a measurable comprehensive outcome. In addition, the Markov model can promote the development of TCM quality-of-life scales and of the methodology of post-marketing pharmacoeconomic evaluation.
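
    A Markov cohort model of the kind recommended above is only a transition matrix applied cycle by cycle to state occupancies, with per-cycle utilities accumulating into an expected outcome; the states, probabilities, and utilities below are invented for illustration.

    ```python
    # Sketch: 3-state Markov cohort model (well / ill / dead), yearly cycles.
    import numpy as np

    P = np.array([[0.85, 0.10, 0.05],      # rows: from-state; rows sum to 1
                  [0.20, 0.65, 0.15],
                  [0.00, 0.00, 1.00]])
    utility = np.array([1.0, 0.6, 0.0])    # QALY weight per cycle per state

    occupancy = np.array([1.0, 0.0, 0.0])  # whole cohort starts in 'well'
    total_qalys = 0.0
    for _ in range(20):                    # extrapolate over 20 cycles
        total_qalys += occupancy @ utility
        occupancy = occupancy @ P

    print(f"expected QALYs over 20 years: {total_qalys:.2f}")
    ```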

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jager, Yetta; Efroymson, Rebecca Ann; Sublette, K.

    Quantitative tools are needed to evaluate the ecological effects of increasing petroleum production. In this article, we describe two stochastic models for simulating the spatial distribution of brine spills on a landscape. One model uses general assumptions about the spatial arrangement of spills and their sizes; the second model distributes spills by siting rectangular well complexes and conditioning spill probabilities on the configuration of pipes. We present maps of landscapes with spills produced by the two methods and compare the ability of the models to reproduce a specified spill area. A strength of the models presented here is their ability to extrapolate from the existing landscape to simulate landscapes with a higher (or lower) density of oil wells.

  11. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. The Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and the surface estimate, and an estimate generated via a Thin Plate Spline trained on displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions; however, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with average vertex error improvements of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
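
    A minimal sketch of the Thin Plate Spline idea described above, assuming scipy's RBFInterpolator with a thin-plate-spline kernel: the spline is trained on the displacement field at the known vertices and then warps the whole estimate so it passes through the known surface. The vertex arrays are synthetic stand-ins, not data from the study.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stand-ins: verts_est is the full SSM surface estimate (M x 3),
# known_idx indexes vertices where the true patient surface is known.
rng = np.random.default_rng(0)
verts_est = rng.normal(size=(200, 3))
known_idx = np.arange(120)                       # 60% of vertices "known"
verts_true = verts_est[known_idx] + 0.05 * rng.normal(size=(120, 3))

# Train a thin-plate spline on the displacements at the known vertices, then
# warp the whole estimate: known vertices are recovered exactly, and the
# displacement decays smoothly into the estimated (unknown) region.
disp = verts_true - verts_est[known_idx]
tps = RBFInterpolator(verts_est[known_idx], disp, kernel='thin_plate_spline')
verts_merged = verts_est + tps(verts_est)

assert np.allclose(verts_merged[known_idx], verts_true, atol=1e-5)
```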

  12. Health effects of gasoline exposure. I. Exposure assessment for U.S. distribution workers.

    PubMed Central

    Smith, T J; Hammond, S K; Wong, O

    1993-01-01

    Personal exposures were estimated for a large cohort of workers in the U.S. domestic system for distributing gasoline by trucks and marine vessels. This assessment included development of a rationale and methodology for extrapolating vapor exposures prior to the availability of measurement data, analysis of existing measurement data to estimate task and job exposures during 1975-1985, and extrapolation of truck and marine job exposures before 1975. A worker's vapor exposure was extrapolated from three sets of factors: the tasks in his or her job associated with vapor sources, the characteristics of vapor sources (equipment and other facilities) at the work site, and the composition of petroleum products producing vapors. Historical data were collected on the tasks in job definitions, on work-site facilities, and on product composition. These data were used in a model to estimate the overall time-weighted-average vapor exposure for jobs based on estimates of task exposures and their duration. Task exposures were highest during tank filling in trucks and marine vessels. Measured average annual, full-shift exposures during 1975-1985 ranged from 9 to 14 ppm of total hydrocarbon vapor for truck drivers and 2 to 35 ppm for marine workers on inland waterways. Extrapolated past average exposures in truck operations were highest for truck drivers before 1965 (range 140-220 ppm). Other jobs in truck operations resulted in much lower exposures. Because there were few changes in marine operations before 1979, exposures were assumed to be the same as those measured during 1975-1985. Well-defined exposure gradients were found across jobs within time periods, which were suitable for epidemiologic analyses. PMID:8020436
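
    The job-level estimate described above is, at its core, a time-weighted average of task exposures. The sketch below shows that calculation; the task names, concentrations and durations are hypothetical placeholders, not values from the cohort.

```python
# Time-weighted-average (TWA) job exposure from task exposures and durations.
# All concentrations (ppm) and hours are invented for illustration.
tasks = {
    "tank_filling": {"conc": 140.0, "hours": 1.5},
    "driving":      {"conc": 5.0,   "hours": 5.0},
    "paperwork":    {"conc": 0.5,   "hours": 1.5},
}

shift_hours = sum(t["hours"] for t in tasks.values())
twa = sum(t["conc"] * t["hours"] for t in tasks.values()) / shift_hours
print(f"Shift TWA vapor exposure: {twa:.1f} ppm over {shift_hours:.0f} h")
```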

  13. FY17 Status Report on the Micromechanical Finite Element Modeling of Creep Fracture of Grade 91 Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, M. C.; Truster, T. J.; Cochran, K. B.

    Advanced reactors designed to operate at higher temperatures than current light water reactors require structural materials with high creep strength and creep-fatigue resistance to achieve long design lives. Grade 91 is a ferritic/martensitic steel designed for long creep life at elevated temperatures. It has been selected as a candidate material for sodium fast reactor intermediate heat exchangers and other advanced reactor structural components. This report focuses on the creep deformation and rupture life of Grade 91 steel. The time required to complete an experiment limits the availability of long-life creep data for Grade 91 and other structural materials. Design methods often extrapolate the available shorter-term experimental data to longer design lives. However, extrapolation methods tacitly assume the underlying material mechanisms causing creep for long-life/low-stress conditions are the same as the mechanisms controlling creep in the short-life/high-stress experiments. A change in mechanism for long-term creep could cause design methods based on extrapolation to be non-conservative. The goal for physically-based microstructural models is to accurately predict material response in experimentally-inaccessible regions of design space. An accurate physically-based model for creep represents all the material mechanisms that contribute to creep deformation and damage and predicts the relative influence of each mechanism, which changes with loading conditions. Ideally, the individual mechanism models adhere to the material physics and not an empirical calibration to experimental data, and so the model remains predictive for a wider range of loading conditions. This report describes such a physically-based microstructural model for Grade 91 at 600 °C. The model explicitly represents competing dislocation and diffusional mechanisms in both the grain bulk and grain boundaries. The model accurately recovers the available experimental creep curves at higher stresses and the limited experimental data at lower stresses, predominantly primary creep rates. The current model considers only one temperature. However, because the model parameters are, for the most part, directly related to the physics of fundamental material processes, the temperature dependence of the properties is known. Therefore, temperature dependence can be included in the model with limited additional effort. The model predicts a mechanism shift at 600 °C at approximately 100 MPa, from a dislocation-dominated regime at higher stress to a diffusion-dominated regime at lower stress. This mechanism shift impacts the creep life, notch-sensitivity, and, likely, creep ductility of Grade 91. In particular, the model predicts existing extrapolation methods for creep life may be non-conservative when attempting to extrapolate data for higher stress creep tests to low stress, long-life conditions. Furthermore, the model predicts a transition from notch-strengthening behavior at high stress to notch-weakening behavior at lower stresses. Both behaviors may affect the conservatism of existing design methods.
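
    For context, the sketch below shows a Larson-Miller extrapolation, one of the conventional time-temperature methods of the kind the report cautions about (it is not the report's microstructural model). The stress-rupture data and the constant C = 20 are illustrative assumptions, not Grade 91 design values.

```python
import numpy as np

# Larson-Miller parameter: LMP = T * (C + log10(t_r)), T in kelvin, rupture
# time t_r in hours; C = 20 is a customary default for steels.
C = 20.0
T = 873.15  # 600 degrees C

# Hypothetical short-term rupture data at 600 C: (stress MPa, rupture hours).
stress = np.array([200.0, 170.0, 140.0, 120.0])
t_rup = np.array([3.0e2, 1.5e3, 8.0e3, 3.0e4])
lmp = T * (C + np.log10(t_rup))

# Fit log-stress as a linear function of LMP, then invert the fit to predict
# the stress giving a 100,000 h design life -- a pure extrapolation, which is
# exactly where a mechanism shift would make the answer non-conservative.
a, b = np.polyfit(lmp, np.log10(stress), 1)
lmp_target = T * (C + np.log10(1.0e5))
print(f"Extrapolated 1e5 h strength: {10 ** (a * lmp_target + b):.0f} MPa")
```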

  14. Data-based discharge extrapolation: estimating annual discharge for a partially gauged large river basin from its small sub-basins

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2013-12-01

    Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both un-gauged basins and un-gauged periods with uncertainty estimation.
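
    A minimal sketch of the scale-extrapolation idea under a simplifying assumption: average the specific discharge (runoff per unit area) of the selected representative sub-basins and scale it by the large-basin area. All areas and discharges below are invented.

```python
import numpy as np

# Hypothetical gauged sub-basins judged representative of the large basin.
sub_area = np.array([310.0, 450.0, 280.0])   # km^2
sub_q = np.array([3.1, 4.9, 2.6])            # mean annual discharge, m^3/s

specific_q = (sub_q / sub_area).mean()       # m^3/s per km^2
large_basin_area = 4.0e4                     # ~two orders of magnitude larger
q_estimate = specific_q * large_basin_area
print(f"Estimated annual discharge of the large basin: {q_estimate:.0f} m^3/s")
```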

  15. Intelligent modelling of bioprocesses: a comparison of structured and unstructured approaches.

    PubMed

    Hodgson, Benjamin J; Taylor, Christopher N; Ushio, Misti; Leigh, J R; Kalganova, Tatiana; Baganz, Frank

    2004-12-01

    This contribution moves in the direction of answering some general questions about the most effective and useful ways of modelling bioprocesses. We investigate the characteristics of models that are good at extrapolating. We trained three fully predictive models with different representational structures (differential equations, differential equations with inheritance of rates and a network of reactions) on Saccharopolyspora erythraea shake flask fermentation data using genetic programming. The models were then tested on unseen data outside the range of the training data and the resulting performances were compared. It was found that constrained models with mathematical forms analogous to internal mass balancing and stoichiometric relations were superior to flexible unconstrained models, even though no a priori knowledge of this fermentation was used.

  16. Temperature extrapolation of multicomponent grand canonical free energy landscapes

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-08-01

    We derive a method for extrapolating the grand canonical free energy landscape of a multicomponent fluid system from one temperature to another. Previously, we introduced this statistical mechanical framework for the case where kinetic energy contributions to the classical partition function were neglected for simplicity [N. A. Mahynski et al., J. Chem. Phys. 146, 074101 (2017)]. Here, we generalize the derivation to admit these contributions in order to explicitly illustrate the differences that result. Specifically, we show how factoring out kinetic energy effects a priori, in order to consider only the configurational partition function, leads to simpler mathematical expressions that tend to produce more accurate extrapolations than when these effects are included. We demonstrate this by comparing and contrasting these two approaches for the simple cases of an ideal gas and a non-ideal, square-well fluid.

  17. The role of compressional viscoelasticity in the lubrication of rolling contacts.

    NASA Technical Reports Server (NTRS)

    Harrison, G.; Trachman, E. G.

    1972-01-01

    A simple model for the time-dependent volume response of a liquid to an applied pressure step is used to calculate the variation with rolling speed of the traction coefficient in a rolling contact system. Good agreement with experimental results is obtained at rolling speeds above 50 in/sec. At lower rolling speeds a very rapid change in the effective viscosity of the lubricant is predicted. This behavior, in conjunction with shear rate effects, is shown to lead to large errors when experimental data are extrapolated to zero rolling speed.

  18. Development and application of the adverse outcome pathway framework for understanding and predicting chronic toxicity: I. Challenges and research needs in ecotoxicology.

    PubMed

    Groh, Ksenia J; Carvalho, Raquel N; Chipman, James K; Denslow, Nancy D; Halder, Marlies; Murphy, Cheryl A; Roelofs, Dick; Rolaki, Alexandra; Schirmer, Kristin; Watanabe, Karen H

    2015-02-01

    To elucidate the effects of chemicals on populations of different species in the environment, efficient testing and modeling approaches are needed that consider multiple stressors and allow reliable extrapolation of responses across species. An adverse outcome pathway (AOP) is a concept that provides a framework for organizing knowledge about the progression of toxicity events across scales of biological organization that lead to adverse outcomes relevant for risk assessment. In this paper, we focus on exploring how the AOP concept can be used to guide research aimed at improving both our understanding of chronic toxicity, including delayed toxicity as well as epigenetic and transgenerational effects of chemicals, and our ability to predict adverse outcomes. A better understanding of the influence of subtle toxicity on individual and population fitness would support a broader integration of sublethal endpoints into risk assessment frameworks. Detailed mechanistic knowledge would facilitate the development of alternative testing methods as well as help prioritize higher tier toxicity testing. We argue that targeted development of AOPs supports both of these aspects by promoting the elucidation of molecular mechanisms and their contribution to relevant toxicity outcomes across biological scales. We further discuss information requirements and challenges in application of AOPs for chemical- and site-specific risk assessment and for extrapolation across species. We provide recommendations for potential extension of the AOP framework to incorporate information on exposure, toxicokinetics and situation-specific ecological contexts, and discuss common interfaces that can be employed to couple AOPs with computational modeling approaches and with evolutionary life history theory. The extended AOP framework can serve as a venue for integration of knowledge derived from various sources, including empirical data as well as molecular, quantitative and evolutionary-based models describing species responses to toxicants. This will allow a more efficient application of AOP knowledge for quantitative chemical- and site-specific risk assessment as well as for extrapolation across species in the future. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Cd and proton adsorption onto bacterial consortia grown from industrial wastes and contaminated geologic settings.

    PubMed

    Borrok, David M; Fein, Jeremy B; Kulpa, Charles F

    2004-11-01

    To model the effects of bacterial metal adsorption in contaminated environments, results from metal adsorption experiments involving individual pure strains of bacteria must be extrapolated to systems in which potentially dozens of bacterial species are present. This extrapolation may be made easier because bacterial consortia from natural environments appear to exhibit similar metal binding properties. However, bacteria that thrive in highly perturbed contaminated environments may exhibit significantly different adsorptive behavior. Here we measure proton and Cd adsorption onto a range of bacterial consortia grown from heavily contaminated industrial wastes, groundwater, and soils. We model the results using a discrete site surface complexation approach to determine binding constants and site densities for each consortium. The results demonstrate that bacterial consortia from different contaminated environments exhibit a range of total site densities (approximately a 3-fold difference) and Cd-binding constants (approximately a 10-fold difference). These ranges for Cd binding constants may be small enough to suggest that bacteria-metal adsorption in contaminated environments can be described using relatively few "averaged" bacteria-metal binding constants (in conjunction with the necessary binding constants for competing surfaces and ligands). However, if additional precision is necessary, modeling parameters must be developed separately for each contaminated environment of interest.

  20. Integrated Idl Tool For 3d Modeling And Imaging Data Analysis

    NASA Astrophysics Data System (ADS)

    Nita, Gelu M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A. A.; Kontar, E. P.

    2012-05-01

    Addressing many key problems in solar physics requires detailed analysis of non-simultaneous imaging data obtained in various wavelength domains with different spatial resolutions, and their comparison with each other supplemented by advanced 3D physical models. To facilitate achieving this goal, we have undertaken major enhancements and improvements of IDL-based simulation tools developed earlier for modeling microwave and X-ray emission. The greatly enhanced object-based architecture provides an interactive graphical user interface that allows the user (i) to import photospheric magnetic field maps and perform magnetic field extrapolations to almost instantly generate 3D magnetic field models; (ii) to investigate the magnetic topology of these models by interactively creating magnetic field lines and associated magnetic field tubes; (iii) to populate them with user-defined nonuniform thermal plasma and anisotropic, nonuniform, nonthermal electron distributions; and (iv) to calculate the spatial and spectral properties of radio and X-ray emission. The application integrates DLLs and shared libraries containing fast gyrosynchrotron emission codes developed in FORTRAN and C++, soft and hard X-ray codes developed in IDL, and a potential field extrapolation DLL based on original FORTRAN code developed by V. Abramenko and V. Yurchishin. The interactive interface allows users to add any user-defined IDL or external callable radiation code, as well as user-defined magnetic field extrapolation routines. To illustrate the tool's capabilities, we present a step-by-step live computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data produced by the NORH and RHESSI instruments. This work was supported in part by NSF grants AGS-0961867, AST-0908344, AGS-0969761, and NASA grants NNX10AF27G and NNX11AB49G to New Jersey Institute of Technology, by a UK STFC rolling grant, the Leverhulme Trust, UK, and by the European Commission through the Radiosun and HESPE Networks.

  1. Extrapolating regional probability of drying of headwater streams using discrete observations and gauging networks

    NASA Astrophysics Data System (ADS)

    Beaufort, Aurélien; Lamouroux, Nicolas; Pella, Hervé; Datry, Thibault; Sauquet, Eric

    2018-05-01

    Headwater streams represent a substantial proportion of river systems and many of them have intermittent flows due to their upstream position in the network. These intermittent rivers and ephemeral streams have recently seen a marked increase in interest, especially to assess the impact of drying on aquatic ecosystems. The objective of this paper is to quantify how discrete (in space and time) field observations of flow intermittence help to extrapolate over time the daily probability of drying (defined at the regional scale). Two empirical models based on linear or logistic regressions have been developed to predict the daily probability of intermittence at the regional scale across France. Explanatory variables were derived from available daily discharge and groundwater-level data of a dense gauging/piezometer network, and models were calibrated using discrete series of field observations of flow intermittence. The robustness of the models was tested using an independent, dense regional dataset of intermittence observations and observations of the year 2017 excluded from the calibration. The resulting models were used to extrapolate the daily regional probability of drying in France: (i) over the period 2011-2017 to identify the regions most affected by flow intermittence; (ii) over the period 1989-2017, using a reduced input dataset, to analyse temporal variability of flow intermittence at the national level. The two empirical regression models performed equally well between 2011 and 2017. The accuracy of predictions depended on the number of continuous gauging/piezometer stations and intermittence observations available to calibrate the regressions. Regions with the highest performance were located in sedimentary plains, where the monitoring network was dense and where the regional probability of drying was the highest. Conversely, the worst performances were obtained in mountainous regions. Finally, temporal projections (1989-2016) suggested the highest probabilities of intermittence (> 35 %) in 1989-1991, 2003 and 2005. A high density of intermittence observations improved the information provided by gauging stations and piezometers to extrapolate the temporal variability of intermittent rivers and ephemeral streams.
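
    A minimal sketch of the logistic-regression variant described above, assuming scikit-learn and synthetic stand-ins for the gauging and piezometer predictors; the coefficients used to generate the synthetic observations are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for the paper's inputs: a regional low-flow index from
# gauging stations and a standardized groundwater level from piezometers.
rng = np.random.default_rng(1)
n = 500
low_flow = rng.gamma(2.0, 1.0, n)
gw_level = rng.normal(0.0, 1.0, n)
X = np.column_stack([low_flow, gw_level])

# Streams dry more often when flows and groundwater levels are low
# (arbitrary coefficients, used only to generate discrete "observations").
p_true = 1.0 / (1.0 + np.exp(2.0 * low_flow + 1.5 * gw_level - 3.0))
dried = rng.random(n) < p_true

model = LogisticRegression().fit(X, dried)
# Predicted regional probability of drying on a hypothetical dry-summer day.
print("P(drying):", model.predict_proba([[0.3, -1.5]])[0, 1])
```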

  2. Use of a probabilistic PBPK/PD model to calculate Data Derived Extrapolation Factors for chlorpyrifos.

    PubMed

    Poet, Torka S; Timchalk, Charles; Bartels, Michael J; Smith, Jordan N; McDougal, Robin; Juberg, Daland R; Price, Paul S

    2017-06-01

    A physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model combined with Monte Carlo analysis of inter-individual variation was used to assess the effects of the insecticide chlorpyrifos (CPF) and its active metabolite, chlorpyrifos-oxon, in humans. The PBPK/PD model has previously been validated and used to describe physiological changes in typical individuals as they grow from birth to adulthood. This model was updated to include physiological and metabolic changes that occur with pregnancy. The model was then used to assess the impact of inter-individual variability in physiology and biochemistry on predictions of internal dose metrics, and to quantitatively assess the impact of major sources of parameter uncertainty and biological diversity on the pharmacodynamics of red blood cell (RBC) acetylcholinesterase inhibition. These metrics were determined in potentially sensitive populations of infants, adult women, pregnant women, and a combined population of adult men and women. The parameters primarily responsible for inter-individual variation in RBC acetylcholinesterase inhibition were related to the metabolic clearance of CPF and CPF-oxon. Data-Derived Extrapolation Factors (DDEFs), which use quantitative differences in these metrics to replace default intra-species uncertainty factors, were developed for these same populations. The DDEFs were less than 4 for all populations. These data and this modeling approach will be useful in ongoing and future human health risk assessments for CPF and could be used for other chemicals with potential human exposure. Copyright © 2017 Elsevier Inc. All rights reserved.
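
    As a hedged illustration of how Monte Carlo variability can be turned into a Data-Derived Extrapolation Factor, the sketch below simulates inter-individual variation in metabolic clearance and takes the ratio of a sensitive percentile to the median of an internal dose metric. The distribution parameters are invented; the actual analysis uses the full PBPK/PD model.

```python
import numpy as np

# Simulate inter-individual variability in clearance; a low-clearance
# individual receives a higher internal dose for the same external dose.
rng = np.random.default_rng(42)
external_dose = 0.01                          # mg/kg/day, illustrative
clearance = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=100_000)
internal_dose = external_dose / clearance     # steady-state metric ~ dose/CL

# Intraspecies DDEF: sensitive (99th percentile) individual vs the median.
ddef = np.percentile(internal_dose, 99) / np.median(internal_dose)
print(f"Intraspecies toxicokinetic DDEF: {ddef:.2f}")
# Compare with the default intraspecies toxicokinetic subfactor of ~3.16.
```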

  3. Development of a Physiologically Based Model to Describe the Pharmacokinetics of Methylphenidate in Juvenile and Adult Humans and Nonhuman Primates

    PubMed Central

    Yang, Xiaoxia; Morris, Suzanne M.; Gearhart, Jeffery M.; Ruark, Christopher D.; Paule, Merle G.; Slikker, William; Mattison, Donald R.; Vitiello, Benedetto; Twaddle, Nathan C.; Doerge, Daniel R.; Young, John F.; Fisher, Jeffrey W.

    2014-01-01

    The widespread usage of methylphenidate (MPH) in the pediatric population has received considerable attention due to its potential effect on child development. For the first time, a physiologically based pharmacokinetic (PBPK) model has been developed in juvenile and adult humans and nonhuman primates to quantitatively evaluate species- and age-dependent, enantiomer-specific pharmacokinetics of MPH and its primary metabolite, ritalinic acid. The PBPK model was first calibrated in adult humans using in vitro enzyme kinetic data for the MPH enantiomers, together with plasma and urine pharmacokinetic data for MPH in adult humans. Metabolism of MPH in the small intestine was assumed to account for the low oral bioavailability of MPH. Due to lack of information, model development for children and for juvenile and adult nonhuman primates primarily relied on intra- and interspecies extrapolation using allometric scaling. The juvenile monkeys appear to metabolize MPH more rapidly than adult monkeys and humans, both adults and children. Model prediction performance is comparable between juvenile monkeys and children, with average root mean squared error values of 4.1 and 2.1, providing a scientific basis for interspecies extrapolation of toxicity findings. Model-estimated human equivalent doses in children that achieve internal dose metrics similar to those associated with pubertal delays in juvenile monkeys were found to be close to the therapeutic doses of MPH used in pediatric patients. This computational analysis suggests that continued pharmacovigilance assessment is prudent for the safe use of MPH. PMID:25184666
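
    A minimal sketch of the allometric scaling used for the intra- and interspecies extrapolation, assuming the customary 3/4-power exponent for clearance; the clearance value and body weights are illustrative, not the paper's calibrated parameters.

```python
# Allometric scaling: clearance scales with body weight to the 3/4 power
# (volumes would scale linearly). All numbers are illustrative.
def scale_clearance(cl_adult, bw_adult, bw_target, exponent=0.75):
    """Scale a clearance (L/h) from a reference adult to a target body weight."""
    return cl_adult * (bw_target / bw_adult) ** exponent

cl_adult = 50.0  # hypothetical adult clearance, L/h
print(f"Child (20 kg): {scale_clearance(cl_adult, 70.0, 20.0):.1f} L/h")
print(f"Juvenile monkey (3 kg): {scale_clearance(cl_adult, 70.0, 3.0):.1f} L/h")
```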

  4. MECHANISTIC DOSIMETRY MODELS OF NANOMATERIAL DEPOSITION IN THE RESPIRATORY TRACT

    EPA Science Inventory

    Accurate health risk assessments of inhalation exposure to nanomaterials will require dosimetry models that account for interspecies differences in dose delivered to the respiratory tract. Mechanistic models offer the advantage to interspecies extrapolation that physicochemica...

  5. PROBABILISTIC AQUATIC EXPOSURE ASSESSMENT FOR PESTICIDES 1: FOUNDATIONS

    EPA Science Inventory

    Models that capture underlying mechanisms and processes are necessary for reliable extrapolation of laboratory chemical data to field conditions. For validation, these models require a major revision of the conventional model testing paradigm to better recognize the conflict betw...

  6. Assessing Uncertainty of Interspecies Correlation Estimation Models for Aromatic Compounds

    EPA Science Inventory

    We developed Interspecies Correlation Estimation (ICE) models for aromatic compounds containing 1 to 4 benzene rings to assess uncertainty in toxicity extrapolation in two data compilation approaches. ICE models are mathematical relationships between surrogate and predicted test ...

  7. HIV Trends in the United States: Diagnoses and Estimated Incidence

    PubMed Central

    Song, Ruiguang; Tang, Tian; An, Qian; Prejean, Joseph; Dietz, Patricia; Hernandez, Angela L; Green, Timothy; Harris, Norma; McCray, Eugene; Mermin, Jonathan

    2017-01-01

    Background The best indicator of the impact of human immunodeficiency virus (HIV) prevention programs is the incidence of infection; however, HIV is a chronic infection and HIV diagnoses may include infections that occurred years before diagnosis. Alternative methods to estimate incidence use diagnoses, stage of disease, and laboratory assays of infection recency. Using a consistent, accurate method would allow for timely interpretation of HIV trends. Objective The objective of our study was to assess the recent progress toward reducing HIV infections in the United States overall and among selected population segments with available incidence estimation methods. Methods Data on cases of HIV infection reported to national surveillance for 2008-2013 were used to compare trends in HIV diagnoses, unadjusted and adjusted for reporting delay, and model-based incidence for the US population aged ≥13 years. Incidence was estimated using a biomarker for recency of infection (stratified extrapolation approach) and 2 back-calculation models (CD4 and Bayesian hierarchical models). HIV testing trends were determined from behavioral surveys for persons aged ≥18 years. Analyses were stratified by sex, race or ethnicity (black, Hispanic or Latino, and white), and transmission category (men who have sex with men, MSM). Results On average, HIV diagnoses decreased 4.0% per year from 48,309 in 2008 to 39,270 in 2013 (P<.001). Adjusting for reporting delays, diagnoses decreased 3.1% per year (P<.001). The CD4 model estimated an annual decrease in incidence of 4.6% (P<.001) and the Bayesian hierarchical model 2.6% (P<.001); the stratified extrapolation approach estimated a stable incidence. During these years, overall, the percentage of persons who had ever received an HIV test or had had a test within the past year remained stable; among MSM, testing increased. For women, all 3 incidence models corroborated the decreasing trend in HIV diagnoses, and HIV diagnoses and 2 incidence models indicated decreases among blacks and whites. The CD4 and Bayesian hierarchical models, but not the stratified extrapolation approach, indicated decreases in incidence among MSM. Conclusions HIV diagnoses and CD4 and Bayesian hierarchical model estimates indicated decreases in HIV incidence overall, among both sexes and all race or ethnicity groups. Further progress depends on effectively reducing HIV incidence among MSM, among whom the majority of new infections occur. PMID:28159730

  8. Finite-volume and partial quenching effects in the magnetic polarizability of the neutron

    NASA Astrophysics Data System (ADS)

    Hall, J. M. M.; Leinweber, D. B.; Young, R. D.

    2014-03-01

    There has been much progress in the experimental measurement of the electric and magnetic polarizabilities of the nucleon. Similarly, lattice QCD simulations have recently produced dynamical QCD results for the magnetic polarizability of the neutron approaching the chiral regime. In order to compare the lattice simulations with experiment, calculation of partial quenching and finite-volume effects is required prior to an extrapolation in quark mass to the physical point. These dependencies are described using chiral effective field theory. Corrections to the partial quenching effects associated with the sea-quark-loop electric charges are estimated by modeling corrections to the pion cloud. These are compared to the uncorrected lattice results. In addition, the behavior of the finite-volume corrections as a function of pion mass is explored. Box sizes of approximately 7 fm are required to achieve a result within 5% of the infinite-volume result at the physical pion mass. A variety of extrapolations are shown at different box sizes, providing a benchmark to guide future lattice QCD calculations of the magnetic polarizabilities. A relatively precise value for the physical magnetic polarizability of the neutron is presented, β_n = 1.93(11)_stat(11)_sys × 10^-4 fm^3, which is in agreement with current experimental results.

  9. Modeling low-dose mortality and disease incubation period of inhalational anthrax in the rabbit.

    PubMed

    Gutting, Bradford W; Marchette, David; Sherwood, Robert; Andrews, George A; Director-Myska, Alison; Channel, Stephen R; Wolfe, Daniel; Berger, Alan E; Mackie, Ryan S; Watson, Brent J; Rukhin, Andrey

    2013-07-21

    There is a need to advance our ability to conduct credible human risk assessments for inhalational anthrax associated with exposure to a low number of bacteria. Combining animal data with computational models of disease will be central to the low-dose and cross-species extrapolations required in achieving this goal. The objective of the current work was to apply and advance the competing risks (CR) computational model of inhalational anthrax, using data collected from NZW rabbits exposed to aerosols of Ames strain Bacillus anthracis. An initial aim was to parameterize the CR model using high-dose rabbit data and then conduct a low-dose extrapolation. The CR low-dose attack rate was then compared against known low-dose rabbit data as well as the low-dose curve obtained when the entire rabbit dose-response data set was fitted to an exponential dose-response (EDR) model. The CR model predictions demonstrated excellent agreement with actual low-dose rabbit data. We next used a modified CR (MCR) model to examine the disease incubation period (the time to reach a fever >40 °C). The MCR model predicted a germination period of 14.5 h following exposure to a low spore dose, which was confirmed by monitoring spore germination in the rabbit lung using PCR, and predicted a low-dose disease incubation period in the rabbit between 14.7 and 16.8 days. Overall, the CR and MCR models appeared to describe rabbit inhalational anthrax well. These results are discussed in the context of conducting laboratory studies in other relevant animal models, combining the CR/MCR model with other computational models of inhalational anthrax, and using the resulting information towards extrapolating a low-dose response prediction for man. Published by Elsevier Ltd.
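
    A minimal sketch of the exponential dose-response (EDR) model mentioned above, P(response) = 1 - exp(-k * dose), with the rate constant k fitted to hypothetical attack-rate data rather than the study's rabbit dataset.

```python
import numpy as np

# Hypothetical attack-rate data (inhaled spores vs fraction responding).
doses = np.array([1e2, 1e3, 1e4, 1e5])
attack = np.array([0.02, 0.15, 0.80, 0.99])

# One-parameter fit on the complementary-log-log transform:
# log(-log(1 - P)) = log(k) + log(dose), so average the residuals to get log(k).
k = np.exp(np.mean(np.log(-np.log(1.0 - attack)) - np.log(doses)))

# Extrapolate the attack rate down to a dose far below the data.
low_dose = 10.0
print(f"Extrapolated low-dose attack rate: {1.0 - np.exp(-k * low_dose):.4f}")
```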

  10. Sonic Boom Propagation Codes Validated by Flight Test

    NASA Technical Reports Server (NTRS)

    Poling, Hugh W.

    1996-01-01

    The sonic boom propagation codes reviewed in this study, SHOCKN and ZEPHYRUS, implement current theory on air absorption using different computational concepts. Review of the codes with a realistic atmosphere model confirms the agreement of propagation results reported by others for idealized propagation conditions. ZEPHYRUS offers greater flexibility in propagation conditions and is thus preferred for practical aircraft analysis. The ZEPHYRUS code was used to propagate sonic boom waveforms, measured approximately 1000 feet away from an SR-71 aircraft flying at Mach 1.25, out to 5000 feet away. These extrapolated signatures were compared to measurements at 5000 feet. Pressure values of the significant shocks (bow, canopy, inlet and tail) in the waveforms are consistent between extrapolation and measurement. Of particular interest is that four (independent) measurements taken under the aircraft centerline converge to the same extrapolated result despite differences in measurement conditions. Agreement between extrapolated and measured signature duration is prevented by measured durations of the 5000-foot signatures that are either much longer or shorter than would be expected. The duration anomalies may be due to signature probing not sufficiently parallel to the aircraft flight direction.

  11. Long-term predictions using natural analogues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewing, R.C.

    1995-09-01

    One of the unique and scientifically most challenging aspects of nuclear waste isolation is the extrapolation of short-term laboratory data (hours to years) to the long time periods (10^3-10^5 years) required by regulatory agencies for performance assessment. The direct validation of these extrapolations is not possible, but methods must be developed to demonstrate compliance with government regulations and to satisfy the lay public that there is a demonstrable and reasonable basis for accepting the long-term extrapolations. Natural systems (e.g., "natural analogues") provide perhaps the only means of partial "validation," as well as data that may be used directly in the models that are used in the extrapolation. Natural systems provide data on very large spatial (nm to km) and temporal (10^3-10^8 years) scales and in highly complex terranes in which unknown synergisms may affect radionuclide migration. This paper reviews the application (and most importantly, the limitations) of data from natural analogue systems to the "validation" of performance assessments.

  12. Dynamics of Bacterial Gene Regulatory Networks.

    PubMed

    Shis, David L; Bennett, Matthew R; Igoshin, Oleg A

    2018-05-20

    The ability of bacterial cells to adjust their gene expression program in response to environmental perturbation is often critical for their survival. Recent experimental advances that allow us to quantitatively record gene expression dynamics in single cells and in populations, coupled with mathematical modeling, enable a mechanistic understanding of how these responses are shaped by the underlying regulatory networks. Here, we review how the combination of local and global factors affects the dynamical responses of gene regulatory networks. Our goal is to discuss the general principles that allow extrapolation from a few model bacteria to less understood microbes. We emphasize that, in addition to the well-studied effects of network architecture, network dynamics are shaped by global pleiotropic effects and cell physiology.

  13. Tests and applications of nonlinear force-free field extrapolations in spherical geometry

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Ding, M. D.

    2013-07-01

    We test a nonlinear force-free field (NLFFF) optimization code in spherical geometry against an analytical solution from Low and Lou. The potential field source surface (PFSS) model serves as the initial and boundary conditions where observed data are not available. The analytical solution can be well recovered if the boundary and initial conditions are properly handled. Next, we discuss the preprocessing procedure for the noisy bottom boundary data, and find that preprocessing is necessary for NLFFF extrapolations when we use the observed photospheric magnetic field as the bottom boundary. Finally, we apply the NLFFF model to a solar area where four active regions interact with each other. An M8.7 flare occurred in one of the active regions. NLFFF modeling in spherical geometry simultaneously constructs the small- and large-scale magnetic field configurations better than the PFSS model does.

  14. Anomalous bulk behavior in the free parafermion Z(N) spin chain

    NASA Astrophysics Data System (ADS)

    Alcaraz, Francisco C.; Batchelor, Murray T.

    2018-06-01

    We demonstrate using direct numerical diagonalization and extrapolation methods that boundary conditions have a profound effect on the bulk properties of a simple Z(N) model for N ≥ 3 for which the model Hamiltonian is non-Hermitian. For N = 2 the model reduces to the well-known quantum Ising model in a transverse field. For open boundary conditions, the Z(N) model is known to be solved exactly in terms of free parafermions. Once the ends of the open chain are connected by considering the model on a ring, the bulk properties, including the ground-state energy per site, are seen to differ dramatically with increasing N. Other properties, such as the leading finite-size corrections to the ground-state energy, the mass gap exponent, and the specific-heat exponent, are also seen to be dependent on the boundary conditions. We speculate that this anomalous bulk behavior is a topological effect.

  15. Selection for sex in finite populations.

    PubMed

    Roze, D

    2014-07-01

    Finite population size generates interference between selected loci, which has been shown to favour increased rates of recombination. In this article, I present different analytical models exploring selection acting on a 'sex modifier locus' (that affects the relative investment into asexual and sexual reproduction) in a finite population. Two forms of selective forces act on the modifier: direct selection due to intrinsic costs associated with sexual reproduction and indirect selection generated by one or two other loci affecting fitness. The results show that indirect selective forces differ from those acting on a recombination modifier even in the case of a haploid population: in particular, a single selected locus generates indirect selection for sex, while two loci are required in the case of a recombination modifier. This effect stems from the fact that modifier alleles increasing sex escape more easily from low-fitness genetic backgrounds than alleles coding for lower rates of sex. Extrapolating the results from three-locus models to a large number of loci at mutation-selection balance indicates that in the parameter range where indirect selection is strong enough to outweigh a substantial cost of sex, interactions between selected loci have a stronger effect than the sum of individual effects of each selected locus. Comparisons with multilocus simulation results show that such extrapolations may provide correct predictions for the evolutionarily stable rate of sex, unless the cost of sex is high. © 2014 The Author. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  16. WORKSHOP ON APPLICATION OF STATISTICAL METHODS TO BIOLOGICALLY-BASED PHARMACOKINETIC MODELING FOR RISK ASSESSMENT

    EPA Science Inventory

    Biologically-based pharmacokinetic models are being increasingly used in the risk assessment of environmental chemicals. These models are based on biological, mathematical, statistical and engineering principles. Their potential uses in risk assessment include extrapolation betwe...

  17. Extrapolating target tracks

    NASA Astrophysics Data System (ADS)

    Van Zandt, James R.

    2012-05-01

    Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
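
    A minimal sketch of track extrapolation for one of the filter types named above (a 1-D nearly-constant-velocity Kalman filter with continuous white acceleration): the state is propagated through the latency and the covariance grows accordingly. The numbers are illustrative.

```python
import numpy as np

def extrapolate(x, P, tau, q):
    """Propagate state x = [position, velocity] and covariance P by latency tau,
    with continuous white acceleration of power spectral density q."""
    F = np.array([[1.0, tau],
                  [0.0, 1.0]])
    Q = q * np.array([[tau**3 / 3.0, tau**2 / 2.0],
                      [tau**2 / 2.0, tau]])
    return F @ x, F @ P @ F.T + Q

x = np.array([1000.0, 50.0])     # m, m/s at the last track update
P = np.diag([25.0, 4.0])         # post-update covariance (illustrative)
x_out, P_out = extrapolate(x, P, tau=2.0, q=1.0)
print(f"Extrapolated position sigma: {np.sqrt(P_out[0, 0]):.1f} m")
```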

  18. Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)

    1998-01-01

    The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable-alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple- and quadruple-zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
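
    For reference, a common closed-form complete-basis-set extrapolation is the two-point E(n) = E_CBS + A/n^3 formula; the sketch below applies it to hypothetical triple- and quadruple-zeta correlation energies. (The paper also examines exponential, variable-alpha forms; this is just the simplest of the family.)

```python
# Two-point CBS extrapolation: assume E(n) = E_CBS + A / n**3 and solve for
# E_CBS from two basis-set cardinal numbers (n=3 triple-zeta, n=4 quadruple-zeta).
def cbs_two_point(e3, e4, n3=3, n4=4):
    """Closed-form E_CBS from E(n) = E_CBS + A/n^3 at two cardinal numbers."""
    return (n4**3 * e4 - n3**3 * e3) / (n4**3 - n3**3)

# Hypothetical correlation energies (hartree), placeholders only.
e_tz, e_qz = -0.512345, -0.524680
print(f"E_CBS = {cbs_two_point(e_tz, e_qz):.6f} Ha")
```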

  19. Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development.

    PubMed

    Burt, T; Yoshida, K; Lappin, G; Vuong, L; John, C; de Wildt, S N; Sugiyama, Y; Rowland, M

    2016-04-01

    A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in their relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications and design options demonstrate the versatility and flexibility these approaches offer to drug developers including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. All phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.

  20. Impact of competitor species composition on predicting diameter growth and survival rates of Douglas-fir trees in southwestern Oregon

    USGS Publications Warehouse

    Bravo, Felipe; Hann, D.W.; Maguire, Douglas A.

    2001-01-01

    Mixed conifer and hardwood stands in southwestern Oregon were studied to explore the hypothesis that competition effects on individual-tree growth and survival will differ according to the species comprising the competition measure. Likewise, it was hypothesized that competition measures should extrapolate best if crown-based surrogates are given preference over diameter-based (basal area based) surrogates. Diameter growth and probability of survival were modeled for individual Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) trees growing in pure stands. Alternative models expressing one-sided and two-sided competition as a function of either basal area or crown structure were then applied to other plots in which Douglas-fir was mixed with other conifers and (or) hardwood species. Crown-based variables outperformed basal area based variables as surrogates for one-sided competition in both diameter growth and survival probability, regardless of species composition. In contrast, two-sided competition was best represented by total basal area of competing trees. Surrogates reflecting differences in crown morphology among species relate more closely to the mechanics of competition for light and, hence, facilitate extrapolation to species combinations for which no observations are available.

  1. MTCLIM: a mountain microclimate simulation model

    Treesearch

    Roger D. Hungerford; Ramakrishna R. Nemani; Steven W. Running; Joseph C. Coughlan

    1989-01-01

    A model for calculating daily microclimate conditions in mountainous terrain is presented. Daily air temperature, shortwave radiation, relative humidity, and precipitation are extrapolated from data measured at National Weather Service stations. The model equations are given and the paper describes how to execute the model. Model outputs are compared with observed data...

  2. Automata learning algorithms and processes for providing more complete systems requirements specification by scenario generation, CSP-based syntax-oriented model construction, and R2D2C system requirements transformation

    NASA Technical Reports Server (NTRS)

    Margaria, Tiziana (Inventor); Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor); Steffen, Bernard (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation, via automata learning. This may support the systematic completion of the requirements, the nature of the requirement being partial, which provides focus on the most prominent scenarios. This may generalize requirement skeletons by extrapolation and may indicate by way of automatically generated traces where the requirement specification is too loose and additional information is required.

  3. The interaction of the solar wind with the interstellar medium

    NASA Technical Reports Server (NTRS)

    Axford, W. I.

    1972-01-01

    The expected characteristics of the solar wind, extrapolated from the vicinity of the earth are described. Several models are examined for the interaction of the solar wind with the interstellar plasma and magnetic field. Various aspects of the penetration of neutral interstellar gas into the solar wind are considered. The dynamic effects of the neutral gas on the solar wind are described. Problems associated with the interaction of cosmic rays with the solar wind are discussed.

  4. Cost-effectiveness of sacubitril/valsartan in the treatment of heart failure with reduced ejection fraction

    PubMed Central

    McMurray, John J V; Trueman, David; Hancock, Elizabeth; Cowie, Martin R; Briggs, Andrew; Taylor, Matthew; Mumby-Croft, Juliet; Woodcock, Fionn; Lacey, Michael; Haroun, Rola; Deschaseaux, Celine

    2018-01-01

    Objective Chronic heart failure with reduced ejection fraction (HF-REF) represents a major public health issue and is associated with considerable morbidity and mortality. We evaluated the cost-effectiveness of sacubitril/valsartan (formerly LCZ696) compared with an ACE inhibitor (ACEI) (enalapril) in the treatment of HF-REF from the perspective of healthcare providers in the UK, Denmark and Colombia. Methods A cost-utility analysis was performed based on data from a multinational, Phase III randomised controlled trial. A decision-analytic model was developed based on a series of regression models, which extrapolated health-related quality of life, hospitalisation rates and survival over a lifetime horizon. The primary outcome was the incremental cost-effectiveness ratio (ICER). Results In the UK, the cost per quality-adjusted life-year (QALY) gained for sacubitril/valsartan (using cardiovascular mortality) was £17 100 (€20 400) versus enalapril. In Denmark, the ICER for sacubitril/valsartan was Kr 174 000 (€22 600). In Colombia, the ICER was COP$39.5 million (€11 200) per QALY gained. Deterministic sensitivity analysis showed that results were most sensitive to the extrapolation of mortality, duration of treatment effect and time horizon, but were robust to other structural changes, with most scenarios associated with ICERs below the willingness-to-pay threshold for all three country settings. Probabilistic sensitivity analysis suggested the probability that sacubitril/valsartan was cost-effective at conventional willingness-to-pay thresholds was 68%–94% in the UK, 84% in Denmark and 95% in Colombia. Conclusions Our analysis suggests that, in all three countries, sacubitril/valsartan is likely to be cost-effective compared with an ACEI (the current standard of care) in patients with HF-REF. PMID:29269379
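
    The headline statistic here is the ICER: the incremental cost divided by the incremental QALYs between the two treatments. A minimal sketch with round placeholder numbers (not the trial-based model outputs):

```python
# ICER = (cost_new - cost_old) / (QALY_new - QALY_old); all values are
# lifetime discounted totals per patient and purely illustrative.
cost_new, qaly_new = 24_000.0, 6.15   # new therapy
cost_old, qaly_old = 18_000.0, 5.80   # comparator (ACEI)

icer = (cost_new - cost_old) / (qaly_new - qaly_old)
print(f"ICER: {icer:,.0f} per QALY gained")  # compare against a WTP threshold
```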

  5. Reliable yields of public water-supply wells in the fractured-rock aquifers of central Maryland, USA

    NASA Astrophysics Data System (ADS)

    Hammond, Patrick A.

    2018-02-01

    Most studies of fractured-rock aquifers are about analytical models used for evaluating aquifer tests or numerical methods for describing groundwater flow, but there have been few investigations on how to estimate the reliable long-term drought yields of individual hard-rock wells. During the drought period of 1998 to 2002, many municipal water suppliers in the Piedmont/Blue Ridge areas of central Maryland (USA) had to institute water restrictions due to declining well yields. Previous estimates of the yields of those wells were commonly based on extrapolating drawdowns, measured during short-term single-well hydraulic pumping tests, to the first primary water-bearing fracture in a well. The extrapolations were often made from pseudo-equilibrium phases, frequently resulting in substantially over-estimated well yields. The methods developed in the present study to predict yields consist of extrapolating drawdown data from infinite acting radial flow periods or by fitting type curves of other conceptual models to the data, using diagnostic plots, inverse analysis and derivative analysis. Available drawdowns were determined by the positions of transition zones in crystalline rocks or thin-bedded consolidated sandstone/limestone layers (reservoir rocks). Aquifer dewatering effects were detected by type-curve matching of step-test data or by breaks in the drawdown curves constructed from hydraulic tests. Operational data were then used to confirm the predicted yields and compared to regional groundwater levels to determine seasonal variations in well yields. Such well yield estimates are needed by hydrogeologists and water engineers for the engineering design of water systems, but should be verified by the collection of long-term monitoring data.
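
    A minimal sketch of extrapolating drawdown from an infinite-acting radial flow period using the Cooper-Jacob straight-line idea (drawdown grows linearly in log time), which matches the kind of extrapolation described above; the pumping-test data are hypothetical.

```python
import numpy as np

# Hypothetical drawdown data from the infinite-acting radial flow window.
t_hr = np.array([2.0, 4.0, 8.0, 16.0, 32.0])     # elapsed pumping time, hours
s_m = np.array([3.10, 3.55, 4.02, 4.47, 4.93])   # drawdown, metres

# Fit s = slope * log10(t) + intercept over the radial-flow window, then
# project to a long pumping time to compare against available drawdown.
slope, intercept = np.polyfit(np.log10(t_hr), s_m, 1)
t_long = 365.0 * 24.0                            # one year of pumping
s_long = slope * np.log10(t_long) + intercept
print(f"Projected drawdown after 1 year: {s_long:.1f} m")
```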

  6. Extrapolation of dynamic load behaviour on hydroelectric turbine blades with cyclostationary modelling

    NASA Astrophysics Data System (ADS)

    Poirier, Marc; Gagnon, Martin; Tahan, Antoine; Coutu, André; Chamberland-lauzon, Joël

    2017-01-01

    In this paper, we present the application of cyclostationary modelling for the extrapolation of short stationary load strain samples measured in situ on hydraulic turbine blades. Long periods of measurements allow for a wide range of fluctuations representative of long-term reality to be considered. However, sampling over short periods limits the dynamic strain fluctuations available for analysis. The purpose of the technique presented here is therefore to generate a representative signal containing proper long term characteristics and expected spectrum starting with a much shorter signal period. The final objective is to obtain a strain history that can be used to estimate long-term fatigue behaviour of hydroelectric turbine runners.

  7. Application of the backward extrapolation method to pulsed neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, Alberto; Gohar, Yousry

    Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows one to obtain from the measured detector counts both the dead-time value and the real detector counts.

  9. Simulation of Hypervelocity Impact Effects on Reinforced Carbon-Carbon. Chapter 6

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    Spacecraft operating in low earth orbit face a significant orbital debris impact hazard. Of particular concern, in the case of the Space Shuttle, are impacts on critical components of the thermal protection system. Recent research has formulated a new material model of reinforced carbon-carbon, for use in the analysis of hypervelocity impact effects on the Space Shuttle wing leading edge. The material model has been validated in simulations of published impact experiments and applied to model orbital debris impacts at velocities beyond the range of current experimental methods. The results suggest that momentum scaling may be used to extrapolate the available experimental database, in order to predict the size of wing leading edge perforations at impact velocities as high as 13 km/s.
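
    A toy version of momentum scaling, assuming a power-law dependence of perforation size on projectile momentum fitted to hypothetical test data and extrapolated to 13 km/s; the study itself used a validated material model rather than this curve fit:

        import numpy as np

        mv = np.array([0.6, 1.2, 2.4, 4.8])   # projectile momentum, kg*m/s (assumed)
        d = np.array([2.0, 2.9, 4.1, 5.8])    # perforation diameter, cm (assumed)

        # Fit d = c * (m*v)**k in log-log space, then extrapolate.
        k, logc = np.polyfit(np.log(mv), np.log(d), 1)
        mv_13 = 0.001 * 13000.0               # a 1 g projectile at 13 km/s
        print(np.exp(logc) * mv_13 ** k)      # extrapolated perforation diameter, cm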

  10. Methods for converging correlation energies within the dielectric matrix formalism

    NASA Astrophysics Data System (ADS)

    Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario

    2018-03-01

    Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
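
    The "more traditional" scheme referred to here is typically a two-point inverse-cubic extrapolation in the basis-set cardinal number X; a minimal sketch with hypothetical correlation energies:

        # Two-point complete-basis-set (CBS) extrapolation of correlation energies,
        # assuming the usual inverse-cubic form E(X) = E_CBS + A / X**3, so that
        # E_CBS = (X^3 E_X - Y^3 E_Y) / (X^3 - Y^3).
        def cbs_two_point(e_x, x, e_y, y):
            return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

        # Hypothetical correlation energies (hartree) at cardinal numbers X = 3, 4.
        print(cbs_two_point(-0.512, 3, -0.534, 4))   # ~ -0.550 hartree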

  11. VARIANCE OF MICROSOMAL PROTEIN AND ...

    EPA Pesticide Factsheets

    Differences in the pharmacokinetics of xenobiotics among humans make them differentially susceptible to risk. Differences in enzyme content can mediate pharmacokinetic differences. Microsomal protein is often isolated from liver to characterize enzyme content and activity, but no measures exist to extrapolate these data to the intact liver. Measures were developed from up to 60 samples of adult human liver to characterize the content of microsomal protein and cytochrome P450 (CYP) enzymes. Statistical evaluations are necessary to estimate values far from the mean. Adult human liver contains 52.9 mg microsomal protein per g, 2587 pmol CYP2E1 per g, and 5237 pmol CYP3A per g (geometric means, with geometric standard deviations of 1.476, 1.84, and 2.214, respectively). These values are useful for identifying and testing susceptibility as a function of enzyme content when used to extrapolate in vitro rates of chemical metabolism for input to physiologically based pharmacokinetic models, which can then be exercised to quantify the effect of variance in enzyme expression on risk-relevant pharmacokinetic outcomes.
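
    A minimal sketch of how such content measures support in vitro to in vivo extrapolation: scale a per-mg-microsomal-protein clearance to the whole liver. Only the 52.9 mg/g figure comes from the record above; the assay value and liver mass are assumed for illustration:

        cl_int_invitro = 15.0   # uL/min per mg microsomal protein (hypothetical assay)
        mppgl = 52.9            # mg microsomal protein per g liver (geometric mean)
        liver_mass_g = 1800.0   # adult liver mass, g (assumed)

        # Whole-liver intrinsic clearance, an input for a PBPK liver compartment.
        cl_int_liver = cl_int_invitro * mppgl * liver_mass_g / 1000.0   # mL/min
        print(f"whole-liver intrinsic clearance ~ {cl_int_liver:.0f} mL/min")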

  12. Equation of state fits to the lower mantle and outer core

    NASA Technical Reports Server (NTRS)

    Butler, R.; Anderson, D. L.

    1978-01-01

    The lower mantle and outer core are subjected to tests for homogeneity and adiabaticity. An earth model is used which is based on the inversion of body waves and Q-corrected normal-mode data. Homogeneous regions are found at radii between 5125 and 4825 km, 4600 and 3850 km, and 3200 and 2200 km. The lower mantle and outer core are inhomogeneous on the whole and are only homogeneous in the above local regions. Finite-strain and atomistic equations of state are fit to the homogeneous regions. The apparent convergence of the finite-strain relations is examined to judge their applicability to a given region. In some cases the observed pressure derivatives of the elastic moduli are used as additional constraints. The effect of minor deviations from adiabaticity on the extrapolations is also considered. An ensemble of zero-pressure values of the density and seismic velocities are found for these regions. The range of extrapolated values from these several approaches provides a measure of uncertainties involved.

  13. Studying the Transfer of Magnetic Helicity in Solar Active Regions with the Connectivity-based Helicity Flux Density Method

    NASA Astrophysics Data System (ADS)

    Dalmasse, K.; Pariat, É.; Valori, G.; Jing, J.; Démoulin, P.

    2018-01-01

    In the solar corona, magnetic helicity slowly and continuously accumulates in response to plasma flows tangential to the photosphere and magnetic flux emergence through it. Analyzing this transfer of magnetic helicity is key for identifying its role in the dynamics of active regions (ARs). The connectivity-based helicity flux density method was recently developed for studying the 2D and 3D transfer of magnetic helicity in ARs. The method takes into account the 3D nature of magnetic helicity by explicitly using knowledge of the magnetic field connectivity, which allows it to faithfully track the photospheric flux of magnetic helicity. Because the magnetic field is not measured in the solar corona, modeled 3D solutions obtained from force-free magnetic field extrapolations must be used to derive the magnetic connectivity. Different extrapolation methods can lead to markedly different 3D magnetic field connectivities, thus questioning the reliability of the connectivity-based approach in observational applications. We address these concerns by applying this method to the isolated and internally complex AR 11158 with different magnetic field extrapolation models. We show that the connectivity-based calculations are robust to different extrapolation methods, in particular with regard to identifying regions of opposite magnetic helicity flux. We conclude that the connectivity-based approach can be reliably used in observational analyses and is a promising tool for studying the transfer of magnetic helicity in ARs and relating it to their flaring activity.

  14. [Critique of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.

  15. Human urine and plasma concentrations of bisphenol A extrapolated from pharmacokinetics established in in vivo experiments with chimeric mice with humanized liver and semi-physiological pharmacokinetic modeling.

    PubMed

    Miyaguchi, Takamori; Suemizu, Hiroshi; Shimizu, Makiko; Shida, Satomi; Nishiyama, Sayako; Takano, Ryohji; Murayama, Norie; Yamazaki, Hiroshi

    2015-06-01

    The aim of this study was to extrapolate to humans the pharmacokinetics of the estrogen analog bisphenol A determined in chimeric mice transplanted with human hepatocytes. Higher plasma concentrations and urinary excretions of bisphenol A glucuronide (a primary metabolite of bisphenol A) were observed in chimeric mice than in control mice after oral administration, presumably because of enterohepatic circulation of bisphenol A glucuronide in control mice. Bisphenol A glucuronidation was faster in mouse liver microsomes than in human liver microsomes. These findings suggest a predominantly urinary excretion route of bisphenol A glucuronide in chimeric mice with humanized liver. Reported human plasma and urine data for bisphenol A glucuronide after a single oral administration of 0.1 mg/kg bisphenol A were reasonably estimated using the current semi-physiological pharmacokinetic model extrapolated from humanized-mouse data using allometric scaling. The reported geometric mean urinary bisphenol A concentration in the U.S. population of 2.64 μg/L underwent reverse dosimetry modeling with the current human semi-physiological pharmacokinetic model. This yielded an estimated exposure of 0.024 μg/kg/day, which was less than the daily tolerable intake of bisphenol A (50 μg/kg/day), implying little risk to humans. Semi-physiological pharmacokinetic modeling will likely prove useful for determining the species-dependent toxicological risk of bisphenol A.
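
    The reverse-dosimetry step can be caricatured with a steady-state mass balance (urine volume, body weight and excreted fraction assumed; the paper's semi-physiological model handles kinetics that this one-liner ignores):

        c_urine = 2.64    # ug/L, reported geometric mean urinary concentration
        v_urine = 1.6     # L/day, assumed daily urine output
        f_excreted = 1.0  # assumed near-complete urinary excretion of the conjugate
        bw = 70.0         # kg, assumed body weight

        # Daily intake back-calculated from the urinary biomarker concentration.
        intake = c_urine * v_urine / (f_excreted * bw)   # ug/kg/day
        print(f"{intake:.3f} ug/kg/day")  # same order as the paper's 0.024 ug/kg/day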

  16. Scaling Factor Variability and Toxicokinetic Outcomes in Children

    EPA Science Inventory

    Abstract title: Scaling Factor Variability and Toxicokinetic Outcomes in Children. Background: Biotransformation rates (Vmax) extrapolated from in vitro data are used increasingly in human physiologically based pharmacokinetic (PBPK) models. PBPK models are widely used in human hea...

  17. A Computational Model of the Rainbow Trout Hypothalamus-Pituitary-Ovary-Liver Axis

    PubMed Central

    Gillies, Kendall; Krone, Stephen M.; Nagler, James J.; Schultz, Irvin R.

    2016-01-01

    Reproduction in fishes and other vertebrates represents the timely coordination of many endocrine factors that culminate in the production of mature, viable gametes. In recent years there has been rapid growth in understanding fish reproductive biology, which has been motivated in part by recognition of the potential effects that climate change, habitat destruction and contaminant exposure can have on natural and cultured fish populations. New approaches to understanding the impacts of these stressors are being developed that require a systems biology approach with more biologically accurate and detailed mathematical models. We have developed a multi-scale mathematical model of the female rainbow trout hypothalamus-pituitary-ovary-liver axis to use as a tool to help understand the functioning of the system and for extrapolation of laboratory findings of stressor impacts on specific components of the axis. The model describes the essential endocrine components of the female rainbow trout reproductive axis. The model also describes the stage-specific growth of maturing oocytes within the ovary and permits the presence of sub-populations of oocytes at different stages of development. Model formulation and parametrization were largely based on previously published in vivo and in vitro data in rainbow trout and new data on the synthesis of gonadotropins in the pituitary. Model predictions were validated against several previously published data sets for annual changes in gonadotropins and estradiol in rainbow trout. Estimates of select model parameters can be obtained from in vitro assays using either quantitative (direct estimation of rate constants) or qualitative (relative change from control values) approaches. This is an important aspect of mathematical models, as in vitro, cell-based assays are expected to provide the bulk of experimental data for future risk assessments and will require quantitative physiological models to extrapolate across biological scales. PMID:27096735

  18. A Computational Model of the Rainbow Trout Hypothalamus-Pituitary-Ovary-Liver Axis.

    PubMed

    Gillies, Kendall; Krone, Stephen M; Nagler, James J; Schultz, Irvin R

    2016-04-01

    Reproduction in fishes and other vertebrates represents the timely coordination of many endocrine factors that culminate in the production of mature, viable gametes. In recent years there has been rapid growth in understanding fish reproductive biology, which has been motivated in part by recognition of the potential effects that climate change, habitat destruction and contaminant exposure can have on natural and cultured fish populations. New approaches to understanding the impacts of these stressors are being developed that require a systems biology approach with more biologically accurate and detailed mathematical models. We have developed a multi-scale mathematical model of the female rainbow trout hypothalamus-pituitary-ovary-liver axis to use as a tool to help understand the functioning of the system and for extrapolation of laboratory findings of stressor impacts on specific components of the axis. The model describes the essential endocrine components of the female rainbow trout reproductive axis. The model also describes the stage-specific growth of maturing oocytes within the ovary and permits the presence of sub-populations of oocytes at different stages of development. Model formulation and parametrization were largely based on previously published in vivo and in vitro data in rainbow trout and new data on the synthesis of gonadotropins in the pituitary. Model predictions were validated against several previously published data sets for annual changes in gonadotropins and estradiol in rainbow trout. Estimates of select model parameters can be obtained from in vitro assays using either quantitative (direct estimation of rate constants) or qualitative (relative change from control values) approaches. This is an important aspect of mathematical models, as in vitro, cell-based assays are expected to provide the bulk of experimental data for future risk assessments and will require quantitative physiological models to extrapolate across biological scales.

  19. Cost-effectiveness in Clostridium difficile treatment decision-making

    PubMed Central

    Nuijten, Mark JC; Keller, Josbert J; Visser, Caroline E; Redekop, Ken; Claassen, Eric; Speelman, Peter; Pronk, Marja H

    2015-01-01

    AIM: To develop a framework for the clinical and health economic assessment of the management of Clostridium difficile infection (CDI). METHODS: CDI has vast economic consequences, emphasizing the need for innovative and cost-effective solutions, which were the aim of this study. A guidance model was developed for coverage decisions and guideline development in CDI. The model included pharmacotherapy with oral metronidazole or oral vancomycin, which is the mainstay of pharmacological treatment of CDI and is recommended by most treatment guidelines. RESULTS: A design for a patient-based cost-effectiveness model was developed, which can be used to estimate the cost-effectiveness of current and future treatment strategies in CDI. Patient-based outcomes were extrapolated to the population by including factors such as person-to-person transmission, isolation precautions, and the closing and cleaning of hospital wards. CONCLUSION: The proposed framework for a population-based CDI model may be used for clinical and health economic assessments of CDI guidelines and coverage decisions for emerging treatments for CDI. PMID:26601096

  20. Cost-effectiveness in Clostridium difficile treatment decision-making.

    PubMed

    Nuijten, Mark Jc; Keller, Josbert J; Visser, Caroline E; Redekop, Ken; Claassen, Eric; Speelman, Peter; Pronk, Marja H

    2015-11-16

    To develop a framework for the clinical and health economic assessment of the management of Clostridium difficile infection (CDI). CDI has vast economic consequences, emphasizing the need for innovative and cost-effective solutions, which were the aim of this study. A guidance model was developed for coverage decisions and guideline development in CDI. The model included pharmacotherapy with oral metronidazole or oral vancomycin, which is the mainstay of pharmacological treatment of CDI and is recommended by most treatment guidelines. A design for a patient-based cost-effectiveness model was developed, which can be used to estimate the cost-effectiveness of current and future treatment strategies in CDI. Patient-based outcomes were extrapolated to the population by including factors such as person-to-person transmission, isolation precautions, and the closing and cleaning of hospital wards. The proposed framework for a population-based CDI model may be used for clinical and health economic assessments of CDI guidelines and coverage decisions for emerging treatments for CDI.

  1. In Vitro Measurements of Metabolism for Application in Pharmacokinetic Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipscomb, John C.; Poet, Torka S.

    2008-04-01

    Human risk and exposure assessments require dosimetry information. Species-specific tissue dose response will be driven by physiological and biochemical processes. While metabolism and pharmacokinetic data are often not available in humans, they are much more available in laboratory animals, and metabolic rate constants can be readily derived in vitro. The physiological differences between laboratory animals and humans are known. Biochemical processes, especially metabolism, can be measured in vitro and extrapolated to account for in vivo metabolism through clearance models, or when linked to a physiologically based pharmacokinetic (PBPK) model that describes the physiological processes, such as drug delivery to the metabolic organ. This review focuses on the different organ, cellular, and subcellular systems that can be used to measure in vitro metabolic rate constants and on how those data are extrapolated for use in biokinetic modeling.

  2. Surface dose measurements with commonly used detectors: a consistent thickness correction method.

    PubMed

    Reynolds, Tatsiana A; Higgins, Patrick

    2015-09-08

    The purpose of this study was to review the application of a consistent correction method for solid-state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films, and to compare surface dose measured with an extrapolation ionization chamber (PTW 30-360) against other parallel-plate chambers: RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus) and Memorial. Measurements of surface dose for 6 MV photons with parallel-plate chambers were used to establish a baseline. cTLD, OSL, EDR2, and EBT2 measurements were corrected using a method that involved irradiation of three-dosimeter stacks, followed by linear extrapolation of individual dosimeter measurements to zero thickness. We determined the magnitude of the correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited over-response compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (-0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel-plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2, which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not recommended for these types of measurements.
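
    A minimal sketch of the three-dosimeter-stack correction: fit dose versus stack depth and take the zero-thickness intercept (readings are hypothetical):

        import numpy as np

        depth = np.array([1.0, 2.0, 3.0])    # position, in units of one dosimeter thickness
        dose = np.array([58.0, 66.0, 73.0])  # stack readings, % of Dmax (assumed)

        # Linear extrapolation to zero thickness gives the surface-dose estimate.
        slope, intercept = np.polyfit(depth, dose, 1)
        print(f"surface dose ~ {intercept:.1f}% of Dmax")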

  3. Visible Infrared Imaging Radiometer Suite (VIIRS) and uncertainty in the ocean color calibration methodology

    NASA Astrophysics Data System (ADS)

    Turpie, Kevin R.; Eplee, Robert E.; Meister, Gerhard

    2015-09-01

    During the first few years of the Suomi National Polar-orbiting Partnership (NPP) mission, the NASA Ocean Color calibration team continued to improve its approach to the on-orbit calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS). As the calibration was adjusted for changes in ocean band responsivity, the team also estimated a theoretical residual error in the calibration trends well within a few tenths of a percent, which could be translated into trend uncertainties in regional time series of surface reflectance and derived products, where biases as low as a few tenths of a percent in certain bands can lead to significant effects. This study looks at effects from spurious trends inherent to the calibration and biases that arise between reprocessing efforts because of extrapolation of the time-dependent calibration table. With the addition of new models for instrument and calibration system trend artifacts, new calibration trends led to improved estimates of ocean time series uncertainty. Table extrapolation biases are presented for the first time. The results further the understanding of uncertainty in measuring regional and global biospheric trends in the ocean using VIIRS, which better defines the role of such records in climate research.

  4. The contribution of benzene to smoking-induced leukemia.

    PubMed

    Korte, J E; Hertz-Picciotto, I; Schulz, M R; Ball, L M; Duell, E J

    2000-04-01

    Cigarette smoking is associated with an increased risk of leukemia; benzene, an established leukemogen, is present in cigarette smoke. By combining epidemiologic data on the health effects of smoking with risk assessment techniques for low-dose extrapolation, we assessed the proportion of smoking-induced total leukemia and acute myeloid leukemia (AML) attributable to the benzene in cigarette smoke. We fit both linear and quadratic models to data from two benzene-exposed occupational cohorts to estimate the leukemogenic potency of benzene. Using multiple-decrement life tables, we calculated lifetime risks of total leukemia and AML deaths for never, light, and heavy smokers. We repeated these calculations, removing the effect of benzene in cigarettes based on the estimated potencies. From these life tables we determined smoking-attributable risks and benzene-attributable risks. The ratio of the latter to the former constitutes the proportion of smoking-induced cases attributable to benzene. Based on linear potency models, the benzene in cigarette smoke contributed from 8 to 48% of smoking-induced total leukemia deaths [95% upper confidence limit (UCL), 20-66%] and from 12 to 58% of smoking-induced AML deaths (95% UCL, 19-121%). The inclusion of a quadratic term yielded comparable results; however, potency models with only quadratic terms resulted in much lower attributable fractions (all < 1%). Thus, benzene is estimated to be responsible for approximately one-tenth to one-half of smoking-induced total leukemia mortality and up to three-fifths of smoking-related AML mortality. In contrast to theoretical arguments that linear models substantially overestimate low-dose risk, linear extrapolations from empirical data over a dose range of 10- to 100-fold resulted in plausible predictions.
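
    The attributable-fraction arithmetic reduces to a ratio of excess lifetime risks; a schematic example with hypothetical life-table outputs (not the paper's numbers):

        # Lifetime leukemia deaths per 1000 persons (assumed for illustration).
        r_smoker = 8.0              # smokers
        r_never = 5.0               # never-smokers
        r_smoker_no_benzene = 7.0   # smokers, with benzene's modeled effect removed

        # Benzene-attributable excess over smoking-attributable excess.
        frac = (r_smoker - r_smoker_no_benzene) / (r_smoker - r_never)
        print(f"{100 * frac:.0f}% of smoking-induced deaths attributed to benzene")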

  5. Controlled experiments in cosmological gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Shandarin, Sergei F.

    1993-01-01

    A systematic study is conducted of gravitational instability in 3D on the basis of power-law initial spectra with and without spectral cutoff, emphasizing nonlinear effects and measures of nonlinearity; effects due to short and long waves in the initial conditions are separated. The existence of second-generation pancakes is confirmed, and it is noted that while these are inhomogeneous, they generate a visually strong signal of filamentarity. An explicit comparison of smoothed initial conditions with smoothed envelope models also reconfirms the need to smooth over a scale larger than any nonlinearity in order to extrapolate directly by linear theory from Gaussian initial conditions.

  6. X-Ray Polarization Imaging

    DTIC Science & Technology

    2006-07-01

    …linearity; (4) determination of polarization as a function of radiographic parameters; and (5) determination of the effect of binding energy on… hydroxyapatite. Type II calcifications are known to be associated with carcinoma, while it is generally accepted that the exclusive finding of type I… concentrate on the extrapolation of the Rh target spectra. The extrapolation was split in two parts. Below 24 keV we used the parameters from Boone's paper…

  7. [Characteristics of Waves Generated Beneath the Solar Convection Zone by Penetrative Overshoot]

    NASA Technical Reports Server (NTRS)

    Julien, Keith

    2000-01-01

    The goal of this project was to theoretically and numerically characterize the waves generated beneath the solar convection zone by penetrative overshoot. Three-dimensional model simulations were designed to isolate the effects of rotation and shear. In order to overcome the numerically imposed limitation of finite Reynolds numbers (Re) below solar values, a series of simulations was designed to elucidate the Reynolds-number dependence (hoped to exhibit mathematically simple scaling with Re) so that one could cautiously extrapolate to solar values.

  8. Predicting the impact of biocorona formation kinetics on interspecies extrapolations of nanoparticle biodistribution modeling.

    PubMed

    Sahneh, Faryad Darabi; Scoglio, Caterina M; Monteiro-Riviere, Nancy A; Riviere, Jim E

    2015-01-01

    To assess the impact of biocorona kinetics on expected tissue distribution of nanoparticles (NPs) across species. The potential fate of NPs in vivo is described through a simple and descriptive pharmacokinetic model using rate processes dependent upon basal metabolic rate coupled to the dynamics of the protein corona. Mismatch of time scales between interspecies allometric scaling and the kinetics of corona formation is potentially a fundamental issue with interspecies extrapolations of NP biodistribution. The impact of corona evolution on NP biodistribution across two species is maximal when the corona transition half-life is close to the geometric mean of the NP half-lives of the two species. While engineered NPs can successfully reach target cells in rodent models, the results may differ in humans because the longer circulation time allows further biocorona evolution.
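
    A sketch of the two time scales being compared, using quarter-power allometric scaling of the circulation half-life (exponent and all values assumed for illustration):

        import numpy as np

        t_half_rat = 2.0                  # NP circulation half-life in rat, h (assumed)
        bw_rat, bw_human = 0.25, 70.0     # body weights, kg

        # Quarter-power scaling, consistent with a basal-metabolic-rate-driven process.
        t_half_human = t_half_rat * (bw_human / bw_rat) ** 0.25

        # Corona effects on cross-species extrapolation peak when the corona
        # transition half-life is near the geometric mean of the two NP half-lives.
        t_corona_crit = np.sqrt(t_half_rat * t_half_human)
        print(f"human t1/2 ~ {t_half_human:.1f} h; "
              f"critical corona t1/2 ~ {t_corona_crit:.1f} h")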

  9. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    PubMed

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they affected the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no other interactions were observed, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds.
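
    For reference, the concentration-addition rule used for this kind of mixture modeling can be written in a few lines (EC50 values and fractions hypothetical):

        # Concentration addition: the predicted mixture EC50 follows from the
        # components' EC50s and their fractions p_i in the mixture.
        def ec50_mixture(p, ec50):
            return 1.0 / sum(pi / ei for pi, ei in zip(p, ec50))

        print(ec50_mixture([0.5, 0.5], [10.0, 40.0]))   # 16.0, between the two values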

  10. Travtek Evaluation Modeling Study

    DOT National Transportation Integrated Search

    1996-03-01

    The following report describes a modeling study that was performed to extrapolate, from the TravTek operational test data, a set of system-wide benefits and performance values for a wider-scale deployment of a TravTek-like system. In the first part o...

  11. How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?

    NASA Astrophysics Data System (ADS)

    Lin, Zesen; Fang, Guanwen; Kong, Xu

    2016-12-01

    Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities (L_IR) of galaxies. Utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ≈ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolations from the longest available PACS band based on the Chary & Elbaz template can be a good estimator. Moreover, if the PACS measurement is unavailable, extrapolations from SPIRE observations based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The rest-frame 10-100 μm part of the IR SED is well described by all three templates, but only the Dale & Helou template gives a nearly unbiased estimate of the emission in the rest-frame submillimeter part.
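
    A schematic of monochromatic extrapolation: normalize a template SED at the rest-frame wavelength probed by the observed band, then integrate the scaled template over 8-1000 μm. The template and units below are toy placeholders; a real analysis would use the Chary & Elbaz or Dale & Helou templates and convert flux to luminosity with the luminosity distance:

        import numpy as np

        def lir_from_one_band(f_obs, lam_obs_um, z, tmpl_lam_um, tmpl_llam):
            # Rest-frame wavelength probed by the observed band.
            lam_rest = lam_obs_um / (1.0 + z)
            # Scale the template to match the single observed measurement.
            scale = f_obs / np.interp(lam_rest, tmpl_lam_um, tmpl_llam)
            # Integrate the scaled template over the canonical 8-1000 um range.
            mask = (tmpl_lam_um >= 8.0) & (tmpl_lam_um <= 1000.0)
            return scale * np.trapz(tmpl_llam[mask], tmpl_lam_um[mask])

        lam = np.logspace(0.0, 3.0, 500)        # 1-1000 um grid
        tmpl = lam**1.5 * np.exp(-lam / 80.0)   # toy template, not CE01 or DH02
        print(lir_from_one_band(f_obs=12.0, lam_obs_um=160.0, z=1.0,
                                tmpl_lam_um=lam, tmpl_llam=tmpl))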

  12. [Requirements imposed on model objects in microevolutionary investigations].

    PubMed

    Mina, M V

    2015-01-01

    Extrapolation of the results of investigations of a model object is justified only within the limits of a set of objects that share essential properties with the model object. Which properties are essential depends on the aim of a study. Similarity of objects that emerged in the course of their independent evolution does not prove similarity of the ways and mechanisms of their evolution. If the objects differ in their essential properties, then extrapolating the results obtained on one object to another is risky, because it may lead to wrong decisions and, moreover, to the loss of interest in alternative hypotheses. The positions formulated above are considered with reference to species flocks of fishes, large African Barbus in particular.

  13. Chiral extrapolations of the ρ(770) meson in N_f = 2+1 lattice QCD simulations

    DOE PAGES

    Hu, B.; Molina, R.; Döring, M.; ...

    2017-08-24

    Recent N_f = 2+1 lattice data for meson-meson scattering in p-wave and isospin I = 1 are analyzed using a unitarized model inspired by Chiral Perturbation Theory in the inverse-amplitude formulation for two and three flavors. We perform chiral extrapolations that postdict phase shifts extracted from experiment quite well. Additionally, the low-energy constants are compared to the ones from a recent analysis of N_f = 2 lattice QCD simulations to check the consistency of the hadronic model used here. Some inconsistencies are detected in the fits to N_f = 2+1 data, in contrast to the previous analysis of N_f = 2 data.

  14. Characterization of Ascentis RP-Amide column: Lipophilicity measurement and linear solvation energy relationships.

    PubMed

    Benhaim, Deborah; Grushka, Eli

    2010-01-01

    This study investigates lipophilicity determination by chromatographic measurements using the polar embedded Ascentis RP-Amide stationary phase. As a new generation of amide-functionalized silica stationary phase, the Ascentis RP-Amide column is evaluated as a possible substitute for the n-octanol/water partitioning system for lipophilicity measurements. For this evaluation, extrapolated retention factors, log k'w, of a set of diverse compounds were determined using different methanol contents in the mobile phase. The use of an n-octanol-enriched mobile phase enhances the relationship between the slope (S) of the extrapolation lines and the extrapolated log k'w (the intercept of the extrapolation), as well as the correlation between log P values and the extrapolated log k'w (1:1 correlation, r² = 0.966). In addition, the use of isocratic retention factors, at 40% methanol in the mobile phase, provides a rapid tool for lipophilicity determination. The intermolecular interactions that contribute to the retention process in the Ascentis RP-Amide phase are characterized using the solvation parameter model of Abraham. The LSER system constants for the column are very similar to the LSER constants of the n-octanol/water extraction system. Tanaka radar plots are used for a quick visual comparison of the system constants of the Ascentis RP-Amide column and the n-octanol/water extraction system. The results all indicate that the Ascentis RP-Amide stationary phase can provide reliable lipophilicity data.
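
    The extrapolation underlying log k'w is a straight-line fit of log k' against the organic-modifier fraction φ, log k' = log k'w - S·φ, with the intercept taken as the lipophilicity index; a sketch with hypothetical retention data:

        import numpy as np

        phi = np.array([0.30, 0.40, 0.50, 0.60])    # methanol volume fraction
        logk = np.array([1.85, 1.32, 0.78, 0.25])   # measured log k' (assumed)

        # Fitted slope is -S; the intercept is the extrapolated log k'w.
        slope, logkw = np.polyfit(phi, logk, 1)
        print(f"S = {-slope:.2f}, log k'w = {logkw:.2f}")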

  15. Filling gaps in visual motion for target capture

    PubMed Central

    Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637

  16. Filling gaps in visual motion for target capture.

    PubMed

    Bosco, Gianfranco; Monache, Sergio Delle; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation.

  17. Dose and dose rate extrapolation factors for malignant and non-malignant health endpoints after exposure to gamma and neutron radiation.

    PubMed

    Tran, Van; Little, Mark P

    2017-11-01

    Murine experiments were conducted at the JANUS reactor in Argonne National Laboratory from 1970 to 1992 to study the effect of acute and protracted radiation dose from gamma rays and fission neutron whole body exposure. The present study reports the reanalysis of the JANUS data on 36,718 mice, of which 16,973 mice were irradiated with neutrons, 13,638 were irradiated with gamma rays, and 6107 were controls. Mice were mostly Mus musculus, but one experiment used Peromyscus leucopus. For both types of radiation exposure, a Cox proportional hazards model was used, using age as timescale, and stratifying on sex and experiment. The optimal model was one with linear and quadratic terms in cumulative lagged dose, with adjustments to both linear and quadratic dose terms for low-dose rate irradiation (<5 mGy/h) and with adjustments to the dose for age at exposure and sex. After gamma ray exposure there is significant non-linearity (generally with upward curvature) for all tumours, lymphoreticular, respiratory, connective tissue and gastrointestinal tumours, also for all non-tumour, other non-tumour, non-malignant pulmonary and non-malignant renal diseases (p < 0.001). Associated with this, the low-dose extrapolation factor, measuring the overestimation in low-dose risk resulting from linear extrapolation, is significantly elevated for lymphoreticular tumours, 1.16 (95% CI 1.06, 1.31), and elevated also for a number of non-malignant endpoints, specifically all non-tumour diseases, 1.63 (95% CI 1.43, 2.00), non-malignant pulmonary disease, 1.70 (95% CI 1.17, 2.76), and other non-tumour diseases, 1.47 (95% CI 1.29, 1.82). However, for a rather larger group of malignant endpoints the low-dose extrapolation factor is significantly less than 1 (implying downward curvature), with central estimates generally ranging from 0.2 to 0.8, in particular for tumours of the respiratory system, vasculature, ovary, kidney/urinary bladder and testis. For neutron exposure most endpoints, malignant and non-malignant, show downward curvature in the dose response, and for most endpoints this is statistically significant (p < 0.05). Associated with this, the low-dose extrapolation factor associated with neutron exposure is generally statistically significantly less than 1 for most malignant and non-malignant endpoints, with central estimates mostly in the range 0.1-0.9. In contrast to the situation at higher dose rates, there are statistically non-significant decreases of risk per unit dose at gamma dose rates of less than or equal to 5 mGy/h for most malignant endpoints, and generally non-significant increases in risk per unit dose at gamma dose rates ≤5 mGy/h for most non-malignant endpoints. Associated with this, the dose-rate extrapolation factor, the ratio of high dose-rate to low dose-rate (≤5 mGy/h) gamma dose response slopes, for many tumour sites is in the range 1.2-2.3, albeit not statistically significantly elevated from 1, while for most non-malignant endpoints the gamma dose-rate extrapolation factor is less than 1, with most estimates in the range 0.2-0.8. After neutron exposure there are non-significant indications of lower risk per unit dose at dose rates ≤5 mGy/h compared to higher dose rates for most malignant endpoints, and for all tumours (p = 0.001) and respiratory tumours (p = 0.007) this reduction is conventionally statistically significant; for most non-malignant outcomes risks per unit dose non-significantly increase at lower dose rates. Associated with this, the neutron dose-rate extrapolation factor is less than 1 for most malignant and non-malignant endpoints, in many cases statistically significantly so, with central estimates mostly in the range 0.0-0.2.
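
    For a linear-quadratic dose response R(D) = αD + βD², the low-dose extrapolation factor has the closed form LDEF = 1 + (β/α)D, which is the quantity tabulated above (coefficients and dose below are hypothetical):

        # Linear extrapolation from dose D overstates the true low-dose slope
        # alpha by LDEF = 1 + (beta/alpha) * D. Upward curvature (beta > 0)
        # gives LDEF > 1; downward curvature gives LDEF < 1.
        def ldef(alpha, beta, dose):
            return 1.0 + (beta / alpha) * dose

        print(ldef(alpha=0.05, beta=0.01, dose=1.0))   # 1.2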

  18. Static and wind tunnel near-field/far field jet noise measurements from model scale single-flow baseline and suppressor nozzles. Volume 2: Forward speed effects

    NASA Technical Reports Server (NTRS)

    Jaeck, C. L.

    1976-01-01

    A model scale flight effects test was conducted in the 40 by 80 foot wind tunnel to investigate the effect of aircraft forward speed on single flow jet noise characteristics. The models tested included a 15.24 cm baseline round convergent nozzle, a 20-lobe and annular nozzle with and without lined ejector shroud, and a 57-tube nozzle with a lined ejector shroud. Nozzle operating conditions covered jet velocities from 412 to 640 m/s at a total temperature of 844 K. Wind tunnel speeds were varied from near zero to 91.5 m/s. Measurements were analyzed to (1) determine apparent jet noise source location including effects of ambient velocity; (2) verify a technique for extrapolating near field jet noise measurements into the far field; (3) determine flight effects in the near and far field for baseline and suppressor nozzles; and (4) establish the wind tunnel as a means of accurately defining flight effects for model nozzles and full scale engines.

  19. Modular Open System Architecture for Reducing Contamination Risk in the Space and Missile Defense Supply Chain

    NASA Technical Reports Server (NTRS)

    Seasly, Elaine

    2015-01-01

    To combat contamination of physical assets and provide reliable data to decision makers in the space and missile defense community, a modular open system architecture for creation of contamination models and standards is proposed. Predictive tools for quantifying the effects of contamination can be calibrated from NASA data of long-term orbiting assets. This data can then be extrapolated to missile defense predictive models. By utilizing a modular open system architecture, sensitive data can be de-coupled and protected while benefitting from open source data of calibrated models. This system architecture will include modules that will allow the designer to trade the effects of baseline performance against the lifecycle degradation due to contamination while modeling the lifecycle costs of alternative designs. In this way, each member of the supply chain becomes an informed and active participant in managing contamination risk early in the system lifecycle.

  20. Assimilating Leaf Area Index Estimates from Remote Sensing into the Simulations of a Cropping Systems Model

    USDA-ARS?s Scientific Manuscript database

    Spatial extrapolation of cropping systems models for regional crop growth and water use assessment and farm-level precision management has been limited by the vast model input requirements and the model sensitivity to parameter uncertainty. Remote sensing has been proposed as a viable source of spat...

  1. Extending the Operational Envelope of a Turbofan Engine Simulation into the Sub-Idle Region

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes W.; Hamley, Andrew J.; Guo, Ten-Huei; Litt, Jonathan S.

    2016-01-01

    In many non-linear gas turbine simulations, operation in the sub-idle region can lead to model instability. This paper lays out a method for extending the operational envelope of a map-based gas turbine simulation to include the sub-idle region. This method develops a multi-simulation solution where the baseline component maps are extrapolated below the idle level and an alternate model is developed to serve as a safety net when the baseline model becomes unstable or unreliable. Sub-idle model development takes place in two distinct operational areas, windmilling/shutdown and purge/cranking/startup. These models are based on derived steady-state operating points, with transient values extrapolated between initial (known) and final (assumed) states. Model transitioning logic is developed to predict baseline model sub-idle instability and to transition smoothly and stably to the backup sub-idle model. Results from the simulation show a realistic approximation of sub-idle behavior, as compared to generic sub-idle engine performance, that allows the engine to operate continuously and stably from shutdown to full power.
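
    The transitioning logic can be caricatured as a weighted cross-fade between the baseline and backup models around the instability boundary; this is an illustrative sketch only, not the paper's implementation, and the blend band is an assumed tuning parameter:

        def blended_output(baseline, backup, n_shaft, n_idle, band=0.10):
            # Cross-fade between the baseline map-based model and the sub-idle
            # backup model over a normalized speed band around idle.
            lo, hi = n_idle * (1.0 - band), n_idle * (1.0 + band)
            if n_shaft >= hi:
                return baseline
            if n_shaft <= lo:
                return backup
            w = (n_shaft - lo) / (hi - lo)   # linear blend weight in the band
            return w * baseline + (1.0 - w) * backup

        print(blended_output(baseline=100.0, backup=80.0, n_shaft=0.95, n_idle=1.0))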

  2. Extending the Operational Envelope of a Turbofan Engine Simulation into the Sub-Idle Region

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes Walter; Hamley, Andrew J.; Guo, Ten-Huei; Litt, Jonathan S.

    2016-01-01

    In many non-linear gas turbine simulations, operation in the sub-idle region can lead to model instability. This paper lays out a method for extending the operational envelope of a map-based gas turbine simulation to include the sub-idle region. This method develops a multi-simulation solution where the baseline component maps are extrapolated below the idle level and an alternate model is developed to serve as a safety net when the baseline model becomes unstable or unreliable. Sub-idle model development takes place in two distinct operational areas, windmilling/shutdown and purge/cranking/startup. These models are based on derived steady-state operating points, with transient values extrapolated between initial (known) and final (assumed) states. Model transitioning logic is developed to predict baseline model sub-idle instability and to transition smoothly and stably to the backup sub-idle model. Results from the simulation show a realistic approximation of sub-idle behavior, as compared to generic sub-idle engine performance, that allows the engine to operate continuously and stably from shutdown to full power.

  3. An Observationally Constrained Model of a Flux Rope that Formed in the Solar Corona

    NASA Astrophysics Data System (ADS)

    James, Alexander W.; Valori, Gherardo; Green, Lucie M.; Liu, Yang; Cheung, Mark C. M.; Guo, Yang; van Driel-Gesztelyi, Lidia

    2018-03-01

    Coronal mass ejections (CMEs) are large-scale eruptions of plasma from the coronae of stars. Understanding the plasma processes involved in CME initiation has applications for space weather forecasting and laboratory plasma experiments. James et al. used extreme-ultraviolet (EUV) observations to conclude that a magnetic flux rope formed in the solar corona above NOAA Active Region 11504 before it erupted on 2012 June 14 (SOL2012-06-14). In this work, we use data from the Solar Dynamics Observatory (SDO) to model the coronal magnetic field of the active region one hour prior to eruption using a nonlinear force-free field extrapolation, and find a flux rope reaching a maximum height of 150 Mm above the photosphere. Estimations of the average twist of the strongly asymmetric extrapolated flux rope are between 1.35 and 1.88 turns, depending on the choice of axis, although the erupting structure was not observed to kink. The decay index near the apex of the axis of the extrapolated flux rope is comparable to typical critical values required for the onset of the torus instability, so we suggest that the torus instability drove the eruption.

  4. A TEST OF WATERSHED CLASSIFICATION SYSTEMS FOR ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    To facilitate extrapolation among watersheds, ecological risk assessments should be based on a model of underlying factors influencing watershed response, particularly vulnerability. We propose a conceptual model of landscape vulnerability to serve as a basis for watershed classi...

  5. A CONSISTENT APPROACH FOR THE APPLICATION OF PHARMACOKINETIC MODELING IN CANCER RISK ASSESSMENT

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) modeling provides important capabilities for improving the reliability of the extrapolations across dose, species, and exposure route that are generally required in chemical risk assessment regardless of the toxic endpoint being consid...

  6. Biological effectiveness of neutrons: Research needs

    NASA Astrophysics Data System (ADS)

    Casarett, G. W.; Braby, L. A.; Broerse, J. J.; Elkind, M. M.; Goodhead, D. T.; Oleinick, N. L.

    1994-02-01

    The goal of this report was to provide a conceptual plan for a research program that would provide a basis for determining more precisely the biological effectiveness of neutron radiation with emphasis on endpoints relevant to the protection of human health. This report presents the findings of the experts for seven particular categories of scientific information on neutron biological effectiveness. Chapter 2 examines the radiobiological mechanisms underlying the assumptions used to estimate human risk from neutrons and other radiations. Chapter 3 discusses the qualitative and quantitative models used to organize and evaluate experimental observations and to provide extrapolations where direct observations cannot be made. Chapter 4 discusses the physical principles governing the interaction of radiation with biological systems and the importance of accurate dosimetry in evaluating radiation risk and reducing the uncertainty in the biological data. Chapter 5 deals with the chemical and molecular changes underlying cellular responses and the LET dependence of these changes. Chapter 6, in turn, discusses those cellular and genetic changes which lead to mutation or neoplastic transformation. Chapters 7 and 8 examine deterministic and stochastic effects, respectively, and the data required for the prediction of such effects at different organizational levels and for the extrapolation from experimental results in animals to risks for man. Gaps and uncertainties in this data are examined relative to data required for establishing radiation protection standards for neutrons and procedures for the effective and safe use of neutron and other high-LET radiation therapy.

  7. On the dangers of model complexity without ecological justification in species distribution modeling

    Treesearch

    David M. Bell; Daniel R. Schlaepfer

    2016-01-01

    Although biogeographic patterns are the product of complex ecological processes, the increasing complexity of correlative species distribution models (SDMs) is not always motivated by ecological theory, but by model fit. The validity of model projections, such as shifts in a species' climatic niche, becomes questionable particularly during extrapolations, such as for...

  8. Motion-based prediction explains the role of tracking in motion extrapolation.

    PubMed

    Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U

    2013-11-01

    During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated. Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Moreover, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation.

  9. Multiscale modelling approaches for assessing cosmetic ingredients safety.

    PubMed

    Bois, Frédéric Y; Ochoa, Juan G Diaz; Gajewska, Monika; Kovarich, Simona; Mauch, Klaus; Paini, Alicia; Péry, Alexandre; Benito, Jose Vicente Sala; Teng, Sophie; Worth, Andrew

    2017-12-01

    The European Union's ban on animal testing for cosmetic ingredients and products has generated strong momentum for the development of in silico and in vitro alternative methods. One focus of the COSMOS project was ab initio prediction of kinetics and toxic effects through multiscale pharmacokinetic modeling and in vitro data integration. In our experience, mathematical or computer modeling and in vitro experiments are complementary. We present here a summary of the main models and results obtained within the framework of the project on these topics. A first section presents our work at the organelle and cellular level. We then move to modeling cell-level effects (monitored continuously), multiscale physiologically based pharmacokinetic and effect models, and route-to-route extrapolation. We follow with a short presentation of the automated KNIME workflows developed for dissemination and easy use of the models. We end with a discussion of two challenges to the field: our limited ability to deal with massive data and with complex computations.

  10. Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development

    DOE PAGES

    Burt, T.; Yoshida, K.; Lappin, G.; ...

    2016-02-26

    A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications and design options demonstrate the versatility and flexibility these approaches offer to drug developers, including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK, and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. Lastly, all phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.

  12. Dioxin equivalency: Challenge to dose extrapolation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.F. Jr.; Silkworth, J.B.

    1995-12-31

    Extensive research has shown that all biological effects of dioxin-like agents are mediated via a single biochemical target, the Ah receptor (AhR), and that the relative biologic potencies of such agents in any given system, coupled with their exposure levels, may be described in terms of toxic equivalents (TEQ). It has also shown that the TEQ sources include not only chlorinated species such as the dioxins (PCDDs), PCDFs, and coplanar PCBs, but also non-chlorinated substances such as the PAHs of wood smoke, the AhR agonists of cooked meat, and the indolocarbazole (ICZ) derived from cruciferous vegetables. Humans have probably had elevated exposures to these non-chlorinated TEQ sources ever since the discoveries of fire, cooking, and the culinary use of Brassica spp. Recent assays of CYP1A2 induction show that these "natural" or "traditional" AhR agonists contribute 50-100 times as much to average human TEQ exposures as do the chlorinated xenobiotics. Currently, the safe doses of the xenobiotic TEQ sources are estimated from their NOAELs and large extrapolation factors, derived from arbitrary mathematical models, whereas the NOAELs themselves are regarded as the safe doses for the TEQs of traditional dietary components. Available scientific data can neither support nor refute either approach to assessing the health risk of an individual chemical substance. However, if two substances are toxicologically equivalent, then their TEQ-adjusted health risks must also be equivalent, and the same dose extrapolation procedure should be used for both.
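
    The TEQ bookkeeping the abstract relies on is a simple weighted sum. The sketch below shows the arithmetic; the congener list, concentrations, and TEF values are illustrative placeholders, not authoritative regulatory values.

    ```python
    # TEQ arithmetic: each congener concentration is weighted by its toxic
    # equivalency factor (TEF) relative to TCDD. All values below are
    # illustrative placeholders, not regulatory TEFs or measured data.
    tef = {"TCDD": 1.0, "PCB 126": 0.1, "PeCDF": 0.3, "ICZ": 1e-4}
    conc_pg_g = {"TCDD": 0.5, "PCB 126": 10.0, "PeCDF": 2.0, "ICZ": 5000.0}

    teq = sum(conc_pg_g[c] * tef[c] for c in tef)
    print(f"total TEQ: {teq:.2f} pg TCDD-equivalent/g")
    # The abstract's point: sources with equal TEQ should get the same
    # dose-extrapolation treatment, whether xenobiotic or dietary in origin.
    ```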

  13. Determination of the Kwall correction factor for a cylindrical ionization chamber to measure air-kerma in 60Co gamma beams.

    PubMed

    Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M

    2002-07-21

    The factor Kwall, which corrects for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement, has traditionally been determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness, in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions, and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation with measurement. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods, together with the appreciable deviation (up to about 1%) between the results of both methods and those obtained by the traditional extrapolation procedure, supports the conclusion that the two independent methods, which provide comparable results, are correct and that the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.

  14. Brief communication: The global signature of post-1900 land ice wastage on vertical land motion

    NASA Astrophysics Data System (ADS)

    Riva, Riccardo E. M.; Frederikse, Thomas; King, Matt A.; Marzeion, Ben; van den Broeke, Michiel R.

    2017-06-01

    Melting glaciers, ice caps and ice sheets have made an important contribution to sea-level rise through the last century. Self-attraction and loading effects driven by shrinking ice masses cause a spatially varying redistribution of ocean waters that affects reconstructions of past sea level from sparse observations. We model the solid-earth response to ice mass changes and find significant vertical deformation signals over large continental areas. We show how deformation rates have been strongly varying through the last century, which implies that they should be properly modelled before interpreting and extrapolating recent observations of vertical land motion and sea-level change.

  15. DISEASE RISK ANALYSIS--A TOOL FOR POLICY MAKING WHEN EVIDENCE IS LACKING: IMPORT OF RABIES-SUSCEPTIBLE ZOO MAMMALS AS A MODEL.

    PubMed

    Hartley, Matt; Roberts, Helen

    2015-09-01

    Disease control management relies on the development of policy supported by an evidence base. The evidence base for disease in zoo animals is often absent or incomplete. Resources for disease research in these species are limited, and so in order to develop effective policies, novel approaches to extrapolating knowledge and dealing with uncertainty need to be developed. This article demonstrates how qualitative risk analysis techniques can be used to aid decision-making in circumstances in which there is a lack of specific evidence using the import of rabies-susceptible zoo mammals into the United Kingdom as a model.

  16. Computer program for pulsed thermocouples with corrections for radiation effects

    NASA Technical Reports Server (NTRS)

    Will, H. A.

    1981-01-01

    A pulsed thermocouple was used for measuring gas temperatures above the melting point of common thermocouples. This was done by allowing the thermocouple to heat until it approached its melting point and then turning on the protective cooling gas. This method required a computer to extrapolate the thermocouple data to the higher gas temperatures. A method that includes the effect of radiation in the extrapolation is described. Computations of gas temperature are provided, along with an estimate of the final thermocouple wire temperature. Results from tests on high-temperature combustor research rigs are presented.
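
    The extrapolation hinges on a lumped energy balance for the bead: the measured heating rate plus the radiation loss, divided by the convective coefficient, gives the gas-minus-bead temperature difference. A minimal sketch follows; it is not the NASA program's actual formulation, and every property value and data point below is assumed.

    ```python
    import numpy as np

    # Lumped energy balance for a thermocouple bead (a sketch, not the NASA
    # program's formulation): m*c*dT/dt = h*A*(Tg - T) - eps*sigma*A*(T^4 - Tw^4).
    # Solving for the gas temperature Tg from the measured transient includes
    # the radiation-loss term. All values below are assumed.
    sigma = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
    h = 900.0            # convective coefficient, W/(m^2 K), assumed
    eps = 0.25           # bead emissivity, assumed
    T_wall = 600.0       # surrounding wall temperature, K, assumed
    mc_per_area = 350.0  # (m*c)/A, J/(m^2 K), assumed

    t = np.array([0.00, 0.05, 0.10, 0.15, 0.20])           # s, assumed samples
    T = np.array([900.0, 1150.0, 1350.0, 1510.0, 1640.0])  # K, bead readings

    dTdt = np.gradient(T, t)
    Tg = T + (mc_per_area * dTdt + eps * sigma * (T**4 - T_wall**4)) / h
    print("inferred gas temperature (K):", np.round(Tg))
    ```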

  17. A comparison of POPs bioaccumulation in Eisenia fetida in natural and artificial soils and the effects of aging.

    PubMed

    Vlčková, Klára; Hofman, Jakub

    2012-01-01

    The close relationship between soil organic matter and the bioavailability of POPs in soils suggests the possibility of using it for extrapolation between different soils. The aim of this study was to show that TOC content is not the sole factor affecting the bioavailability of POPs and that TOC-based extrapolation might be incorrect, especially when comparing natural and artificial soils. Three natural soils with increasing TOC and three artificial soils with TOC comparable to these natural soils were spiked with phenanthrene, pyrene, lindane, p,p'-DDT, and PCB 153 and studied after 0, 14, 28, and 56 days. At each sampling point, total soil concentration and bioaccumulation in earthworms Eisenia fetida were measured. The results showed different behavior and bioavailability of POPs in natural and artificial soils and apparent effects of aging on these differences. Hence, direct TOC-based extrapolation between various soils seems to be of limited validity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Comparison of annual percentage change in breast cancer incidence rate between Taiwan and the United States-A smoothed Lexis diagram approach.

    PubMed

    Chien, Li-Hsin; Tseng, Tzu-Jui; Chen, Chung-Hsing; Jiang, Hsin-Fang; Tsai, Fang-Yu; Liu, Tsang-Wu; Hsiung, Chao A; Chang, I-Shou

    2017-07-01

    Recent studies compared the age effects and birth cohort effects on female invasive breast cancer (FIBC) incidence in Asian populations with those in the US white population. They were based on age-period-cohort model extrapolation and estimated annual percentage change (EAPC) in the age-standardized incidence rates (ASR). It is of interest to examine these results based on cohort-specific annual percentage change in rate (APCR) by age and without age-period-cohort model extrapolation. FIBC data (1991-2010) were obtained from the Taiwan Cancer Registry and the U.S. SEER 9 registries. APCRs based on smoothed Lexis diagrams were constructed to study the age, period, and cohort effects on FIBC incidence. The patterns of age-specific rates by birth cohort are similar between Taiwan and the US. Given any age-at-diagnosis group, cohort-specific rates increased over time in Taiwan but not in the US; cohort-specific APCR by age decreased with birth year in both Taiwan and the US but was always positive and large in Taiwan. Given a diagnosis year, APCR decreased as birth year increased in Taiwan but not in the US. In Taiwan, the proportion of APCR attributable to cohort effect was substantial and that due to case ascertainment was becoming smaller. Although our study shows that incidence rates of FIBC have increased rapidly in Taiwan, thereby confirming previous results, the rate of increase over time is slowing. Continued monitoring of APCR and further investigation of the cause of the APCR decrease in Taiwan are warranted. © 2017 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  19. Description of new dry granular materials of variable cohesion and friction coefficient: Implications for laboratory modeling of the brittle crust

    NASA Astrophysics Data System (ADS)

    Abdelmalak, M. M.; Bulois, C.; Mourgues, R.; Galland, O.; Legland, J.-B.; Gruber, C.

    2016-08-01

    Cohesion and friction coefficient are fundamental parameters for scaling brittle deformation in laboratory models of geological processes. However, they are commonly not experimental variables, whereas (1) rocks range from cohesion-less to strongly cohesive and from low friction to high friction and (2) strata exhibit substantial cohesion and friction contrasts. This brittle paradox implies that the effects of brittle properties on processes involving brittle deformation cannot be tested in laboratory models. Solving this paradox requires the use of dry granular materials of tunable and controllable brittle properties. In this paper, we describe dry mixtures of fine-grained cohesive, high friction silica powder (SP) and low-cohesion, low friction glass microspheres (GM) that fulfill this requirement. We systematically estimated the cohesions and friction coefficients of mixtures of variable proportions using two independent methods: (1) a classic Hubbert-type shear box to determine the extrapolated cohesion (C) and friction coefficient (μ), and (2) direct measurements of the tensile strength (T0) and the height (H) of open fractures to calculate the true cohesion (C0). The measured values of cohesion increase from 100 Pa for pure GM to 600 Pa for pure SP, with a sub-linear trend of the cohesion with the mixture GM content. The two independent cohesion measurement methods, from shear tests and tension/extensional tests, yield very similar results of extrapolated cohesion (C) and show that both are robust and can be used independently. The measured values of friction coefficients increase from 0.5 for pure GM to 1.05 for pure SP. The use of these granular material mixtures now allows testing (1) the effects of cohesion and friction coefficient in homogeneous laboratory models and (2) the effect of brittle layering on brittle deformation, as demonstrated by preliminary experiments. Therefore, the brittle properties become, at last, experimental variables.

  20. HIV Trends in the United States: Diagnoses and Estimated Incidence.

    PubMed

    Hall, H Irene; Song, Ruiguang; Tang, Tian; An, Qian; Prejean, Joseph; Dietz, Patricia; Hernandez, Angela L; Green, Timothy; Harris, Norma; McCray, Eugene; Mermin, Jonathan

    2017-02-03

    The best indicator of the impact of human immunodeficiency virus (HIV) prevention programs is the incidence of infection; however, HIV is a chronic infection and HIV diagnoses may include infections that occurred years before diagnosis. Alternative methods to estimate incidence use diagnoses, stage of disease, and laboratory assays of infection recency. Using a consistent, accurate method would allow for timely interpretation of HIV trends. The objective of our study was to assess the recent progress toward reducing HIV infections in the United States overall and among selected population segments with available incidence estimation methods. Data on cases of HIV infection reported to national surveillance for 2008-2013 were used to compare trends in HIV diagnoses, unadjusted and adjusted for reporting delay, and model-based incidence for the US population aged ≥13 years. Incidence was estimated using a biomarker for recency of infection (stratified extrapolation approach) and 2 back-calculation models (CD4 and Bayesian hierarchical models). HIV testing trends were determined from behavioral surveys for persons aged ≥18 years. Analyses were stratified by sex, race or ethnicity (black, Hispanic or Latino, and white), and transmission category (men who have sex with men, MSM). On average, HIV diagnoses decreased 4.0% per year from 48,309 in 2008 to 39,270 in 2013 (P<.001). Adjusting for reporting delays, diagnoses decreased 3.1% per year (P<.001). The CD4 model estimated an annual decrease in incidence of 4.6% (P<.001) and the Bayesian hierarchical model 2.6% (P<.001); the stratified extrapolation approach estimated a stable incidence. During these years, overall, the percentage of persons who had ever received an HIV test or had had a test within the past year remained stable; among MSM, testing increased. For women, all 3 incidence models corroborated the decreasing trend in HIV diagnoses, and HIV diagnoses and 2 incidence models indicated decreases among blacks and whites. The CD4 and Bayesian hierarchical models, but not the stratified extrapolation approach, indicated decreases in incidence among MSM. HIV diagnoses and CD4 and Bayesian hierarchical model estimates indicated decreases in HIV incidence overall, among both sexes and all race or ethnicity groups. Further progress depends on effectively reducing HIV incidence among MSM, among whom the majority of new infections occur. ©H Irene Hall, Ruiguang Song, Tian Tang, Qian An, Joseph Prejean, Patricia Dietz, Angela L Hernandez, Timothy Green, Norma Harris, Eugene McCray, Jonathan Mermin. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 03.02.2017.

  1. Line-of-sight extrapolation noise in dust polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poh, Jason; Dodelson, Scott

    The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g., 350 GHz) is due solely to dust and then extrapolate the signal down to lower frequency (e.g., 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of about 20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r < 0.0015.
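
    The effect can be reproduced in a few lines: two modified-blackbody clouds at different temperatures scale differently between 353 and 150 GHz, so no single scaling factor matches the summed signal. The sketch below assumes illustrative temperatures, amplitudes, and an emissivity index beta, and omits polarization angles for brevity.

    ```python
    import numpy as np

    # Two modified-blackbody dust clouds on one line of sight; temperatures,
    # amplitudes, and beta are assumed.
    h, k = 6.626e-34, 1.381e-23      # Planck and Boltzmann constants (SI)

    def dust_amp(nu_ghz, T, amp=1.0, beta=1.6):
        nu = nu_ghz * 1e9            # nu^beta * B_nu(T), up to constant factors
        return amp * nu**(3.0 + beta) / np.expm1(h * nu / (k * T))

    def total(nu_ghz, clouds):
        return sum(dust_amp(nu_ghz, T, amp) for T, amp in clouds)

    clouds = [(16.0, 1.0), (24.0, 0.8)]
    two_cloud = total(150.0, clouds) / total(353.0, clouds)
    single_T = dust_amp(150.0, 20.0) / dust_amp(353.0, 20.0)
    # The mismatch between these factors is the line-of-sight extrapolation
    # noise: one scaling factor cannot match a mixture of temperatures.
    print(f"two-cloud factor {two_cloud:.4f} vs single-T factor {single_T:.4f}")
    ```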

  2. XUV Photometer System (XPS): New Dark-Count Corrections Model and Improved Data Products

    NASA Astrophysics Data System (ADS)

    Elliott, J. P.; Vanier, B.; Woods, T. N.

    2017-12-01

    We present newly updated dark-count calibrations for the SORCE XUV Photometer System (XPS) and the resultant improved data products released in March of 2017. The SORCE mission has provided a 14-year solar spectral irradiance record, and the XPS contributes to this record in the 0.1 nm to 40 nm range. The SORCE spacecraft has been operating in what is known as Day-Only Operations (DO-Op) mode since February of 2014. In this mode it is not possible to collect data, including dark-counts, when the spacecraft is in eclipse as we did prior to DO-Op. Instead, we take advantage of the position of the XPS filter-wheel, and collect these data when the wheel position is in a "dark" position. Further, in this mode dark data are not always available for all observations, requiring an extrapolation in order to calibrate data at these times. To extrapolate, we model this with a piece-wise 2D nonlinear least squares surface fit in the time and temperature dimensions. Our model allows us to calibrate XPS data into the DO-Op phase of the mission by extrapolating along this surface. The XPS version 11 data product release benefits from this new calibration. We present comparisons of the previous and current calibration methods in addition to planned future upgrades of our data products.

  3. Climatic and biotic stochasticity: disparate causes of convergent demographies in rare, sympatric plants.

    PubMed

    Fox, Laurel R

    2007-12-01

    Species with known demographies may be used as proxies, or approximate models, to predict vital rates and ecological properties of target species that either have not been studied or are species for which data may be difficult to obtain. These extrapolations assume that model and target species with similar properties respond in the same ways to the same ecological factors, that they have similar population dynamics, and that the similarity of vital rates reflects analogous responses to the same factors. I used two rare, sympatric annual plants (sand gilia [Gilia tenuiflora arenaria] and Monterey spineflower [Chorizanthe pungens pungens]) to test these assumptions experimentally. The vital rates of these species are similar and strongly correlated with rainfall, and I added water and/or prevented herbivore access to experimental plots. Their survival and reproduction were driven by different, largely stochastic factors and processes: sand gilia by herbivory and Monterey spineflower by rainfall. Because the causal agents and processes generating similar demographic patterns were species specific, these results demonstrate, both theoretically and empirically, that it is critical to identify the ecological processes generating observed effects and that experimental manipulations are usually needed to determine causal mechanisms. Without such evidence to identify mechanisms, extrapolations among species may lead to counterproductive management and conservation practices.

  4. Extrapolation of the dna fragment-size distribution after high-dose irradiation to predict effects at low doses

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Cucinotta, F. A.; Sachs, R. K.; Brenner, D. J.; Peterson, L. E.

    2001-01-01

    The patterns of DSBs induced in the genome are different for sparsely and densely ionizing radiations: In the former case, the patterns are well described by a random-breakage model; in the latter, a more sophisticated tool is needed. We used a Monte Carlo algorithm with a random-walk geometry of chromatin, and a track structure defined by the radial distribution of energy deposition from an incident ion, to fit the PFGE data for fragment-size distribution after high-dose irradiation. These fits determined the unknown parameters of the model, enabling the extrapolation of data for high-dose irradiation to the low doses that are relevant for NASA space radiation research. The randomly-located-clusters formalism was used to speed the simulations. It was shown that only one adjustable parameter, Q, the track efficiency parameter, was necessary to predict DNA fragment sizes for wide ranges of doses. This parameter was determined for a variety of radiations and LETs and was used to predict the DSB patterns at the HPRT locus of the human X chromosome after low-dose irradiation. It was found that high-LET radiation would be more likely than low-LET radiation to induce additional DSBs within the HPRT gene if this gene already contained one DSB.
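
    For the sparsely ionizing limit, the random-breakage model is easy to state concretely: breaks fall uniformly along the genome, so fragment lengths are approximately exponential. Below is a minimal Monte Carlo sketch under an assumed genome length and DSB yield; the paper's actual algorithm adds chromatin random-walk geometry and track structure on top of this.

    ```python
    import numpy as np

    # Random-breakage sketch: DSBs placed uniformly along a model chromosome,
    # giving ~exponential fragment sizes. Genome length and DSB yield per Gy
    # are assumed; clustered track structure (high LET) would enrich short
    # fragments instead.
    rng = np.random.default_rng(0)
    genome_mbp = 100.0           # model chromosome length, Mbp (assumed)
    yield_per_gy_mbp = 0.01      # DSBs per Gy per Mbp (assumed)
    dose_gy = 50.0

    n_breaks = rng.poisson(yield_per_gy_mbp * dose_gy * genome_mbp)
    cuts = np.sort(rng.uniform(0.0, genome_mbp, n_breaks))
    fragments = np.diff(np.concatenate(([0.0], cuts, [genome_mbp])))
    print(n_breaks, "breaks; mean fragment", round(fragments.mean(), 2), "Mbp")
    ```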

  5. Human health risk assessment of nitrosamines and nitramines for potential application in CO2 capture.

    PubMed

    Ravnum, S; Rundén-Pran, E; Fjellsbø, L M; Dusinska, M

    2014-07-01

    Emission and accumulation of carbon dioxide (CO2) in the atmosphere pose an environmental and climate change challenge. An attempt to deal with this challenge is made at Mongstad by application of amines for CO2 capture and storage (CO2 capture Mongstad (CCM) project). As part of the CO2 capture process, nitrosamines and nitramines may be emitted. Toxicological testing of nitrosamines and nitramines indicates a genotoxic potential of these substances. Here we present a risk characterization and assessment for five nitrosamines (N-Nitrosodi-methylamine (NDMA), N-Nitrosodi-ethylamine (NDEA), N-Nitroso-morpholine (NNM), N-Nitroso-piperidine (NPIP), and Dinitroso-piperazine (DNP)) and two nitramines (N-Methyl-nitramine (NTMA) and Dimethyl-nitramine (NDTMA)), which are potentially emitted from the CO2 capture plant (CCP). Human health risk assessment of genotoxic non-threshold substances is a heavily debated topic, and no consensus methodology exists internationally. Extrapolation modeling from high-dose animal exposures to low-dose human exposures can be crucial for the final risk calculation. In the work presented here, different extrapolation models are discussed and suggestions on their application are given. Finally, preferred methods for calculating a derived minimal effect level (DMEL) are presented for the selected nitrosamines and nitramines. Copyright © 2014 Elsevier Inc. All rights reserved.
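
    As one concrete example of the non-threshold extrapolation at issue, the sketch below applies the commonly cited linearized approach: scale an animal T25 to humans, then extrapolate linearly down to a target lifetime risk. This is only one of the models such assessments discuss, and every number in the sketch is hypothetical.

    ```python
    # Linearized DMEL sketch: scale an animal T25 (dose giving 25% tumour
    # incidence) to humans, then extrapolate linearly to a target lifetime
    # risk. One model among several; every number here is hypothetical.
    t25_animal = 1.0e-3       # mg/kg bw/day, hypothetical animal T25
    allometric_factor = 7.0   # rat-to-human dose scaling, assumed
    target_risk = 1e-5        # target lifetime cancer risk, assumed

    t25_human = t25_animal / allometric_factor
    dmel = t25_human * (target_risk / 0.25)   # linear low-dose extrapolation
    print(f"DMEL ~ {dmel:.1e} mg/kg bw/day")
    ```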

  6. A Chemically Relevant Model for Teaching the Second Law of Thermodynamics.

    ERIC Educational Resources Information Center

    Williamson, Bryce E.; Morikawa, Tetsuo

    2002-01-01

    Introduces a chemical model illustrating the aspects of the second law of thermodynamics which explains concepts such as reversibility, path dependence, and extrapolation in terms of electrochemistry and calorimetry. Presents a thought experiment using an ideal galvanic electrochemical cell. (YDS)

  7. Past and Future of Astronomy and SETI Cast in Maths

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    Assume that the history of Astronomy and SETI is the leading proof of the evolution of human knowledge on Earth over the last 3000 years. Then, human knowledge has increased enormously, although not at a uniform pace. A mathematical description of how much human knowledge has increased, however, is difficult to achieve. In this paper, we cast a mathematical model of the evolution of human knowledge over the last three thousand years that seems to reflect reasonably well both what is known from the past and what might be extrapolated into the future. Our model draws on two seminal books, by Sagan and by Finney and Jones, and is based on the use of two cubic curves representing the evolution of Astronomy and of SETI, respectively. We conclude by extrapolating these curves into the future and reach the conclusion that the "Star Trek" age of humankind might possibly begin by the end of this century.

  8. Determination of Extrapolation Distance with Measured Pressure Signatures from Two Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Kuhn, Neil

    2004-01-01

    A study to determine a limiting distance-to-span ratio for the extrapolation of near-field pressure signatures is described and discussed. This study was to be done in two wind-tunnel facilities with two wind-tunnel models. At this time, only the first half had been completed, so the scope of this report is limited to the design of the models and to an analysis of the first set of measured pressure signatures. The results from this analysis showed that the pressure signatures measured at separation distances of 2 to 5 span lengths did not show the desired low-boom shapes. However, there were indications that the pressure-signature shapes were becoming 'flat-topped'. This trend toward a 'flat-top' pressure-signature shape was seen to be a gradual one at the distance ratios employed in this first series of wind-tunnel tests.

  9. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    This draft report of Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications. Topics covered include: the types of data required for the use of PBPK models in risk assessment, the evaluation of PBPK models for use in risk assessment, and the application of these models to address uncertainties resulting from extrapolations (e.g., interspecies extrapolation) often used in risk assessment. In addition, appendices are provided that include a compilation of chemical partition coefficients and rate constants, algorithms for estimating chemical-specific parameters, and a list of publications relating to PBPK modeling. This report is primarily meant to serve as a learning tool for EPA scientists and risk assessors who may be less familiar with the field. In addition, this report can be informative to PBPK modelers within and outside the Agency, as it provides an assessment of the types of data and models that the EPA requires for consideration of a model for use in risk assessment.

  10. Review of Air Vitiation Effects on Scramjet Ignition and Flameholding Combustion Processes

    NASA Technical Reports Server (NTRS)

    Pellett, G. L.; Bruno, Claudio; Chinitz, W.

    2002-01-01

    This paper offers a detailed review and analysis of more than 100 papers on the physics and chemistry of scramjet ignition and flameholding combustion processes, and the known effects of air vitiation on these processes. The paper attempts to explain vitiation effects in terms of known chemical kinetics and flame propagation phenomena. Scaling methodology is also examined, and a highly simplified Damkoehler scaling technique based on OH radical production/destruction is developed to extrapolate ground test results, affected by vitiation, to flight testing conditions. The long term goal of this effort is to help provide effective means for extrapolating ground test data to flight, and thus to reduce the time and expense of both ground and flight testing.

  11. Comparison of Two Coronal Magnetic Field Models to Reconstruct a Sigmoidal Solar Active Region with Coronal Loops

    NASA Astrophysics Data System (ADS)

    Duan, Aiying; Jiang, Chaowei; Hu, Qiang; Zhang, Huai; Gary, G. Allen; Wu, S. T.; Cao, Jinbin

    2017-06-01

    Magnetic field extrapolation is an important tool to study the three-dimensional (3D) solar coronal magnetic field, which is difficult to directly measure. Various analytic models and numerical codes exist, but their results often drastically differ. Thus, a critical comparison of the modeled magnetic field lines with the observed coronal loops is strongly required to establish the credibility of the model. Here we compare two different non-potential extrapolation codes, a nonlinear force-free field code (CESE-MHD-NLFFF) and a non-force-free field (NFFF) code, in modeling a solar active region (AR) that has a sigmoidal configuration just before a major flare erupted from the region. A 2D coronal-loop tracing and fitting method is employed to study the 3D misalignment angles between the extrapolated magnetic field lines and the EUV loops as imaged by SDO/AIA. It is found that the CESE-MHD-NLFFF code with a preprocessed magnetogram performs best, outputting a field that matches the coronal loops in the AR core imaged in AIA 94 Å with a misalignment angle of ˜10°. This suggests that the CESE-MHD-NLFFF code, even without using the information of the coronal loops in constraining the magnetic field, performs as well as some coronal-loop forward-fitting models. For the loops as imaged by AIA 171 Å in the outskirts of the AR, all the codes including the potential field give comparable results of the mean misalignment angle (˜30°). Thus, further improvement of the codes is needed for a better reconstruction of the long loops enveloping the core region.

  12. Extrapolating cetacean densities to quantitatively assess human impacts on populations in the high seas.

    PubMed

    Mannocci, Laura; Roberts, Jason J; Miller, David L; Halpin, Patrick N

    2017-06-01

    As human activities expand beyond national jurisdictions to the high seas, there is an increasing need to consider anthropogenic impacts to species inhabiting these waters. The current scarcity of scientific observations of cetaceans in the high seas impedes the assessment of population-level impacts of these activities. We developed plausible density estimates to facilitate a quantitative assessment of anthropogenic impacts on cetacean populations in these waters. Our study region extended from a well-surveyed region within the U.S. Exclusive Economic Zone into a large region of the western North Atlantic sparsely surveyed for cetaceans. We modeled densities of 15 cetacean taxa with available line transect survey data and habitat covariates and extrapolated predictions to sparsely surveyed regions. We formulated models to reduce the extent of extrapolation beyond covariate ranges, and constrained them to model simple and generalizable relationships. To evaluate confidence in the predictions, we mapped where predictions were made outside sampled covariate ranges, examined alternate models, and compared predicted densities with maps of sightings from sources that could not be integrated into our models. Confidence levels in model results depended on the taxon and geographic area and highlighted the need for additional surveying in environmentally distinct areas. With application of necessary caution, our density estimates can inform management needs in the high seas, such as the quantification of potential cetacean interactions with military training exercises, shipping, fisheries, and deep-sea mining and be used to delineate areas of special biological significance in international waters. Our approach is generally applicable to other marine taxa and geographic regions for which management will be implemented but data are sparse. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  13. Creation of Abdominal Aortic Aneurysms in Sheep by Extrapolation of Rodent Models: Is It Feasible?

    PubMed

    Verbrugghe, Peter; Verhoeven, Jelle; Clijsters, Marnick; Vervoort, Dominique; Coudyzer, Walter; Verbeken, Eric; Meuris, Bart; Herijgers, Paul

    2018-06-07

    Abdominal aortic aneurysms (AAAs) are a potentially deadly disease, needing surgical or endovascular treatment. To evaluate potential new diagnostic tools and treatments, a large animal model, which resembles not only the morphological characteristics but also the pathophysiological background, would be useful. Rodent animal aneurysm models were extrapolated to sheep. Four groups were created: intraluminal infusion with an elastase-collagenase solution (n = 4), infusion with elastase-collagenase solution combined with proximal stenosis (n = 7), aortic xenograft (n = 3), and elastase-collagenase-treated xenograft (n = 4). At fixed time intervals (6, 12, and 24 weeks), computed tomography and autopsy with histological evaluation were performed. The described models had a high perioperative mortality (45%), due to acute aortic thrombosis or fatal hemorrhage. A maximum aortic diameter increase of 30% was obtained in the protease-stenosis group. In the protease-treated groups, some histological features of human AAAs, such as inflammation, thinning of the media, and loss of elastin, could be reproduced. In the xenotransplant groups, a pronounced inflammatory reaction was visible at the start. In all models, inflammation decreased and fibrosis occurred at long follow-up, 24 weeks postoperatively. None of the extrapolated small animal aneurysm models could produce an AAA in sheep with morphological features similar to the human disease. Some histological findings of human surgical specimens could be reproduced in the elastase-collagenase-treated groups. Long-term histological evaluation indicated stabilization and healing of the aortic wall months after the initial stimulus. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  15. An energy budget agent-based model of earthworm populations and its application to study the effects of pesticides

    PubMed Central

    Johnston, A.S.A.; Hodson, M.E.; Thorbek, P.; Alvarez, T.; Sibly, R.M.

    2014-01-01

    Earthworms are important organisms in soil communities and so are used as model organisms in environmental risk assessments of chemicals. However, current risk assessments of soil invertebrates are based on short-term laboratory studies, of limited ecological relevance, supplemented if necessary by site-specific field trials, which sometimes are challenging to apply across the whole agricultural landscape. Here, we investigate whether population responses to environmental stressors and pesticide exposure can be accurately predicted by combining energy budget and agent-based models (ABMs), based on knowledge of how individuals respond to their local circumstances. A simple energy budget model was implemented within each earthworm Eisenia fetida in the ABM, based on a priori parameter estimates. From broadly accepted physiological principles, simple algorithms specify how energy acquisition and expenditure drive life cycle processes. Each individual allocates energy between maintenance, growth and/or reproduction under varying conditions of food density, soil temperature and soil moisture. When simulating published experiments, good model fits were obtained to experimental data on individual growth, reproduction and starvation. Using the energy budget model as a platform we developed methods to identify which of the physiological parameters in the energy budget model (rates of ingestion, maintenance, growth or reproduction) are primarily affected by pesticide applications, producing four hypotheses about how toxicity acts. We tested these hypotheses by comparing model outputs with published toxicity data on the effects of copper oxychloride and chlorpyrifos on E. fetida. Both growth and reproduction were directly affected in experiments in which sufficient food was provided, whilst maintenance was targeted under food limitation. Although we only incorporate toxic effects at the individual level, we show how ABMs can readily extrapolate to larger scales by providing good model fits to field population data. The ability of the presented model to fit the available field and laboratory data for E. fetida demonstrates the promise of the agent-based approach in ecology, by showing how biological knowledge can be used to make ecological inferences. Further work is required to extend the approach to populations of more ecologically relevant species studied at the field scale. Such a model could help extrapolate from laboratory to field conditions and from one set of field conditions to another or from species to species. PMID:25844009
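
    The core loop the abstract describes, acquire energy, pay maintenance first, then split the surplus between growth and reproduction, fits in a few lines. The sketch below is ours, not the published E. fetida parameterization: every rate, threshold, and cost is assumed, and a pesticide effect would enter by scaling one of these physiological parameters (e.g., reducing ingestion).

    ```python
    # Minimal energy-budget agent (names and numbers are illustrative, not the
    # published E. fetida parameterization). Each day: ingest energy, pay
    # maintenance first, then split any surplus between growth and reproduction.
    class Worm:
        E_TISSUE = 7000.0    # J per g of tissue (assumed)
        E_COCOON = 25000.0   # J per cocoon (assumed)

        def __init__(self, mass=0.1):
            self.mass = mass                       # g

        def step(self, food_density, temp_factor, ingestion_scale=1.0):
            # A pesticide effect could enter here, e.g. via ingestion_scale.
            ingested = (50.0 * self.mass**(2 / 3) * food_density
                        * temp_factor * ingestion_scale)   # J/day, assumed
            surplus = ingested - 30.0 * self.mass  # maintenance paid first
            if surplus <= 0:
                self.mass += surplus / self.E_TISSUE   # starvation: shrink
                return 0.0
            growth_share = 0.5 if self.mass < 0.4 else 0.2
            self.mass += growth_share * surplus / self.E_TISSUE
            return (1 - growth_share) * surplus / self.E_COCOON  # cocoons/day

    worm = Worm()
    cocoons = sum(worm.step(food_density=0.8, temp_factor=1.0)
                  for _ in range(120))
    print(f"mass {worm.mass:.2f} g, cocoons {cocoons:.2f} after 120 days")
    ```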

  16. Extrapolation of toxic indices among test objects

    PubMed Central

    Tichý, Miloň; Rucki, Marián; Roth, Zdeněk; Hanzlíková, Iveta; Vlková, Alena; Tumová, Jana; Uzlová, Rút

    2010-01-01

    The oligochaete Tubifex tubifex, the fathead minnow (Pimephales promelas), hepatocytes isolated from rat liver, and a ciliated protozoan are completely different organisms, and yet their acute toxicity indices correlate. Correlation equations for special effects were developed for a large heterogeneous series of compounds (QSAR, quantitative structure-activity relationships). Knowing those correlation equations and their statistical evaluation, one can extrapolate the toxic indices. The reason is that a common physicochemical property governs the biological effect, namely the partition coefficient between two immiscible phases, simulated generally by n-octanol and water. This may mean that the transport of chemicals towards a target is responsible for the magnitude of the effect, rather than reactivity, as one might otherwise suppose. PMID:21331180

  17. Fourth order scheme for wavelet based solution of Black-Scholes equation

    NASA Astrophysics Data System (ADS)

    Finěk, Václav

    2017-12-01

    The present paper is devoted to the numerical solution of the Black-Scholes equation for pricing European options. We apply the Crank-Nicolson scheme with Richardson extrapolation for time discretization and Hermite cubic spline wavelets with four vanishing moments for space discretization. This scheme is fourth-order accurate in both time and space. Computational results indicate that the Crank-Nicolson scheme with Richardson extrapolation significantly decreases the amount of computational work. We also show numerically that the optimal convergence rate for the scheme is obtained without using a startup procedure, despite the data irregularities in the model.
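
    The time-stepping trick is worth making concrete. Because the Crank-Nicolson error expands in even powers of the step size, combining runs at dt and dt/2 as (4u_{dt/2} - u_{dt})/3 cancels the O(dt^2) term and leaves fourth order in time. The sketch below demonstrates this on the heat equation, with plain finite differences standing in for the paper's Hermite spline wavelet basis; grid, step sizes, and initial data are illustrative.

    ```python
    import numpy as np

    # Crank-Nicolson plus Richardson extrapolation in time, shown on the heat
    # equation u_t = u_xx (a stand-in for the transformed Black-Scholes
    # equation); finite differences replace the wavelet basis of the paper.
    def cn_solve(u0, dx, dt, steps):
        n = len(u0)
        r = dt / (2.0 * dx**2)
        off = np.full(n - 1, -r)
        A = np.diag(np.full(n, 1 + 2 * r)) + np.diag(off, 1) + np.diag(off, -1)
        B = np.diag(np.full(n, 1 - 2 * r)) - np.diag(off, 1) - np.diag(off, -1)
        u = u0.copy()
        for _ in range(steps):                 # homogeneous Dirichlet BCs
            u = np.linalg.solve(A, B @ u)
        return u

    x = np.linspace(0.0, np.pi, 101)[1:-1]
    dx = x[1] - x[0]
    u0 = np.sin(x)                             # exact solution: exp(-t) sin(x)
    T, dt = 1.0, 0.05
    u_coarse = cn_solve(u0, dx, dt, int(T / dt))
    u_fine = cn_solve(u0, dx, dt / 2, int(2 * T / dt))
    # CN's temporal error expands in even powers of dt, so this combination
    # cancels O(dt^2) and leaves O(dt^4) in time (the O(dx^2) space error
    # now dominates unless the grid is refined too).
    u_rich = (4.0 * u_fine - u_coarse) / 3.0
    print("max error:", np.abs(u_rich - np.exp(-T) * np.sin(x)).max())
    ```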

  18. MEGA16 - Computer program for analysis and extrapolation of stress-rupture data

    NASA Technical Reports Server (NTRS)

    Ensign, C. R.

    1981-01-01

    The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN 4 using IBM plotting subroutines, and it runs on an IBM 370 time-sharing system.

  19. Uncertainty factors in screening ecological risk assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duke, L.D.; Taggart, M.

    2000-06-01

    The hazard quotient (HQ) method is commonly used in screening ecological risk assessments (ERAs) to estimate risk to wildlife at contaminated sites. Many ERAs use uncertainty factors (UFs) in the HQ calculation to incorporate uncertainty associated with predicting wildlife responses to contaminant exposure using laboratory toxicity data. The overall objective was to evaluate the current UF methodology as applied to screening ERAs in California, USA. Specific objectives included characterizing current UF methodology, evaluating the degree of conservatism in UFs as applied, and identifying limitations to the current approach. Twenty-four of 29 evaluated ERAs used the HQ approach: 23 of these used UFs in the HQ calculation. All 24 made interspecies extrapolations, and 21 compensated for its uncertainty, most using allometric adjustments and some using RFs. Most also incorporated uncertainty for same-species extrapolations. Twenty-one ERAs used UFs extrapolating from lowest observed adverse effect level (LOAEL) to no observed adverse effect level (NOAEL), and 18 used UFs extrapolating from subchronic to chronic exposure. Values and application of all UF types were inconsistent. Maximum cumulative UFs ranged from 10 to 3,000. Results suggest UF methodology is widely used but inconsistently applied and is not uniformly conservative relative to UFs recommended in regulatory guidelines and academic literature. The method is limited by lack of consensus among scientists, regulators, and practitioners about magnitudes, types, and conceptual underpinnings of the UF methodology.
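
    The HQ arithmetic under discussion is compact enough to show directly. In the sketch below, a laboratory LOAEL is divided by a product of uncertainty factors to give a toxicity reference value, and the exposure-to-TRV ratio is the hazard quotient; all values are hypothetical, and the UF types mirror those the review describes.

    ```python
    # Hazard quotient screening sketch; every number is hypothetical. A lab
    # endpoint is divided by uncertainty factors (UFs), one per extrapolation
    # the assessment makes, to give a toxicity reference value (TRV).
    loael = 12.0                     # mg/kg-bw/day, surrogate-species LOAEL
    ufs = {
        "interspecies": 10.0,        # surrogate species -> wildlife receptor
        "loael_to_noael": 10.0,
        "subchronic_to_chronic": 10.0,
    }
    cumulative_uf = 1.0
    for factor in ufs.values():
        cumulative_uf *= factor      # 1000 here; the review found 10 to 3000

    trv = loael / cumulative_uf
    exposure = 0.05                  # mg/kg-bw/day, estimated site dose
    hq = exposure / trv
    print(f"HQ = {hq:.1f} ->", "potential risk" if hq > 1 else "screened out")
    ```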

  20. Extrapolation of a predictive model for growth of a low inoculum size of Salmonella typhimurium DT104 on chicken skin to higher inoculum sizes

    USDA-ARS?s Scientific Manuscript database

    Validation of model predictions for independent variables not included in model development can save time and money by identifying conditions for which new models are not needed. A single strain of Salmonella Typhimurium DT104 was used to develop a general regression neural network model for growth...

  1. PROPOSED MODELS FOR ESTIMATING RELEVANT DOSE RESULTING FROM EXPOSURES BY THE GASTROINTESTINAL ROUTE

    EPA Science Inventory

    Simple first-order intestinal absorption commonly used in physiologically-based pharmacokinetic (PBPK) models can be made to fit many clinical administrations but may not provide relevant information to extrapolate to real-world exposure scenarios for risk assessment. Small hydr...

  2. Neoplastic and nonneoplastic liver lesions induced by dimethylnitrosamine in Japanese Medaka fish

    EPA Science Inventory

    Small fish models are becoming commonplace in the laboratory, and have been used for decades in chemical toxicity and carcinogenicity testing. However, extrapolation of findings from aquatic models to humans is still a concern in risk assessment. Demonstration of common morpholog...

  3. MODELING APPROACHES FOR ESTIMATING THE DOSIMETRY OF INHALED TOXICANTS IN CHILDREN

    EPA Science Inventory

    Risk assessment of inhaled toxicants has typically focused upon adults, with modeling used to extrapolate dosimetry and risks from laboratory animals to humans. However, behavioral factors such as time spent playing outdoors can lead to more exposure to inhaled toxicants in chil...

  4. Methods of Technological Forecasting,

    DTIC Science & Technology

    1977-05-01

    Trend Extrapolation; Progress Curve; Analogy; Trend Correlation; Substitution Analysis or Substitution Growth Curves; Envelope Curve; Advances in the State of the Art; Technological Mapping; Contextual Mapping; Matrix Input-Output Analysis; Mathematical Models; Simulation Models; Dynamic Modelling. CHAPTER IV ... Generation; Interaction between Needs and Possibilities; Map of the Technological Future; Cross-Impact Matrix; Discovery Matrix; Morphological Analysis

  5. Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation

    NASA Technical Reports Server (NTRS)

    Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.

    1974-01-01

    The degree of correlation between cruise drag predictions based on wind-tunnel test data and flight test results is determined for the C-141A aircraft. An analysis of wind-tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 x 10^6 based on mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind-tunnel extrapolation method based on flat-plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.

  6. The Binary Collision-Induced Second Overtone Band of Gaseous Hydrogen: Modelling and Laboratory Measurements

    NASA Technical Reports Server (NTRS)

    Brodbeck, C.; Bouanich, J.-P.; Nguyen, Van Thanh; Borysow, Aleksandra

    1999-01-01

    Collision-induced absorption (CIA) is the major source of the infrared opacity of dense planetary atmospheres which are composed of nonpolar molecules. Knowledge of CIA spectra of H2-H2 pairs is important for modelling the atmospheres of planets and cold stars that are mainly composed of hydrogen. The spectra of hydrogen in the region of the second overtone at 0.8 microns have been recorded at temperatures of 298 and 77.5 K for gas densities ranging from 100 to 800 amagats. By extrapolation to zero density of the absorption coefficient measured every 10 cm^-1 in the spectral range from 11100 to 13800 cm^-1, we have determined the binary absorption coefficient. These extrapolated measurements are compared with calculations based on a model that was obtained by using simple computer codes and lineshape profiles. In view of the very weak absorption of the second overtone band, we find the agreement between results of the model and experiment to be reasonable.
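
    The zero-density extrapolation works because collision-induced absorption scales as density squared for pairs plus density cubed for triples, so the normalized coefficient A/rho^2 is linear in rho and its intercept is the binary coefficient. A sketch with synthetic numbers (not the measured H2 data):

    ```python
    import numpy as np

    # Zero-density extrapolation sketch with synthetic data. CIA scales as
    # rho^2 (pairs) plus rho^3 (triples), so A/rho^2 is linear in rho and
    # its intercept at rho = 0 is the binary absorption coefficient.
    rho = np.array([100.0, 200.0, 400.0, 600.0, 800.0])   # amagat
    A = 2.0e-9 * rho**2 + 1.5e-12 * rho**3                # synthetic absorption
    slope, intercept = np.polyfit(rho, A / rho**2, 1)
    print(f"binary coefficient ~ {intercept:.2e} per amagat^2 "
          f"(ternary slope {slope:.2e})")
    ```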

  7. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air kerma.

  8. Statistical modeling for Bayesian extrapolation of adult clinical trial information in pediatric drug evaluation.

    PubMed

    Gamalo-Siebers, Margaret; Savic, Jasmina; Basu, Cynthia; Zhao, Xin; Gopalakrishnan, Mathangi; Gao, Aijun; Song, Guochen; Baygani, Simin; Thompson, Laura; Xia, H Amy; Price, Karen; Tiwari, Ram; Carlin, Bradley P

    2017-07-01

    Children represent a large underserved population of "therapeutic orphans," as an estimated 80% of children are treated off-label. However, pediatric drug development often faces substantial challenges, including economic, logistical, technical, and ethical barriers, among others. Among many efforts trying to remove these barriers, increased recent attention has been paid to extrapolation; that is, the leveraging of available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or "borrowing") of information across disparate sources, such as the adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through 2 case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis and extrapolating adult exposure-response information for antiepileptic drugs to pediatrics. Copyright © 2017 John Wiley & Sons, Ltd.
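
    To make the borrowing idea concrete, here is a minimal power-prior sketch for a response rate, in which the adult likelihood is down-weighted by a factor a0 before the pediatric data are added. The numbers are hypothetical and the conjugate beta-binomial setup is chosen for brevity; the paper's case studies use richer models.

    ```python
    from scipy import stats

    # Power-prior sketch for borrowing an adult response rate into a pediatric
    # analysis (numbers hypothetical).
    adult_n, adult_x = 200, 120   # adult trial size and responders
    ped_n, ped_x = 30, 16         # small pediatric trial
    a0 = 0.5                      # borrowing weight in [0, 1]; 0 = no borrowing

    # Start from a Beta(1, 1) prior, raise the adult likelihood to the power
    # a0, then update with the pediatric data (conjugate throughout).
    alpha = 1 + a0 * adult_x + ped_x
    beta = 1 + a0 * (adult_n - adult_x) + (ped_n - ped_x)
    posterior = stats.beta(alpha, beta)
    print("posterior mean:", posterior.mean())
    print("95% credible interval:", posterior.interval(0.95))
    ```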

  9. Nuclear's role in 21. century Pacific rim energy use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, Clifford; Taylor, J'Tia

    2007-07-01

    Extrapolations contrast the future of nuclear energy use in Japan and the Republic of Korea (ROK) to that of the Association of Southeast Asian Nations (ASEAN). Japan can expect a gradual rise in the nuclear fraction of a nearly constant total energy use rate as the use of fossil fuels declines. ROK nuclear energy rises gradually with total energy use. ASEAN's total nuclear energy use rate can rapidly approach that of the ROK if Indonesia and Vietnam make their current nuclear energy targets by 2020, but experience elsewhere suggests that nuclear energy growth may be slower than planned. Extrapolations are based on econometric calibration to a utility optimization model of the impact of growth of population, gross domestic product, total energy use, and cumulative fossil carbon use. Fractions of total energy use from fluid fossil fuels, coal, water-driven electrical power production, nuclear energy, and wind and solar electric energy sources are fit to market fractions data. Where historical data is insufficient for extrapolation, plans for non-fossil energy are used as a guide. Extrapolations suggest much more U.S. nuclear energy and spent nuclear fuel generation than for the ROK and ASEAN until beyond the first half of the twenty-first century. (authors)

  10. Calculation of Optical Parameters of Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Kumar, A.

    2007-12-01

    Validation of a modified four-parameter model describing the temperature effect on liquid crystal refractive indices is reported in the present article. This model is based upon the Vuks equation. Experimental data of ordinary and extraordinary refractive indices for two liquid crystal samples, MLC-9200-000 and MLC-6608, are used to validate the above-mentioned theoretical model. Using these experimental data, the birefringence, order parameter, normalized polarizabilities, and temperature gradient of the refractive indices are determined. Two methods are adopted for determining the order parameter: direct use of birefringence measurements and Haller's extrapolation procedure. Both approaches to order-parameter calculation are compared. The temperature dependences of all these parameters are discussed. A close agreement between theory and experiment is obtained.
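
    Haller's procedure amounts to fitting S(T) = S0 (1 - T/Tc)^beta to order parameters derived from birefringence and reading off the extrapolated low-temperature limit. A minimal sketch, with made-up data points and starting values rather than the MLC-9200-000/MLC-6608 measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Haller-type fit: S(T) = S0 * (1 - T/Tc)**beta, extrapolated from the
    # nematic range. Data and bounds below are illustrative only.
    def haller(T, S0, Tc, beta):
        return S0 * (1.0 - T / Tc) ** beta

    T = np.array([300.0, 310.0, 320.0, 330.0, 335.0])   # K
    S = np.array([0.66, 0.62, 0.57, 0.50, 0.45])        # order parameter
    popt, _ = curve_fit(haller, T, S, p0=(1.0, 340.0, 0.2),
                        bounds=([0.5, 336.0, 0.05], [1.5, 400.0, 0.5]))
    S0, Tc, beta = popt
    print(f"S0 = {S0:.2f}, Tc = {Tc:.1f} K, beta = {beta:.2f}")
    ```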

  11. The forecast for RAC extrapolation: mostly cloudy.

    PubMed

    Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie

    2011-09-01

    The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers not only should understand the statutory and regulatory basis for extrapolation, but also should be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.

  12. A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.

    2004-01-01

    Tests are conducted on a quad-redundant fault tolerant flight control computer to establish upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and statistical model are described in this work to analyze the open loop experiment data collected in the reverberation chamber at NASA LaRC as a part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and a systematic statistical analysis is then performed on the data. These combined efforts culminate in an extrapolation of values that are in turn used to support previous efforts in evaluating the data.

  13. A retrospective evaluation of traffic forecasting techniques.

    DOT National Transportation Integrated Search

    2016-08-01

    Traffic forecasting techniques, such as extrapolation of previous years' traffic volumes, regional travel demand models, or local trip generation rates, help planners determine needed transportation improvements. Thus, knowing the accuracy of t...

  14. Surface dose measurements with commonly used detectors: a consistent thickness correction method

    PubMed Central

    Higgins, Patrick

    2015-01-01

    The purpose of this study was to review the application of a consistent thickness-correction method for solid-state detectors, including thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films, and to compare surface doses measured with an extrapolation ionization chamber (PTW 30-360) against other parallel plate chambers: RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus), and Memorial. Measurements of surface dose for 6 MV photons with parallel plate chambers were used to establish a baseline. cTLD, OSL, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three dosimeter stacks, followed by linear extrapolation of individual dosimeter measurements to zero thickness. We determined the magnitude of correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse, compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (−0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not recommended for these types of measurements. PACS number: 87.56.-v PMID:26699319
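
    The three-stack correction amounts to a straight-line fit extrapolated to zero detector thickness; a minimal sketch with hypothetical stack thicknesses and readings:

```python
import numpy as np

# Linear extrapolation of stacked-dosimeter readings to zero thickness.
# Thicknesses and readings are hypothetical placeholders.
thickness_mm = np.array([0.38, 0.76, 1.14])   # 1-, 2-, 3-detector stacks
reading = np.array([0.52, 0.61, 0.70])        # relative dose readings

slope, intercept = np.polyfit(thickness_mm, reading, 1)
print(f"surface dose extrapolated to zero thickness: {intercept:.3f}")
```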

  15. Hydrocode predictions of collisional outcomes: Effects of target size

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.

    1991-01-01

    Traditionally, laboratory impact experiments designed to simulate asteroid collisions have attempted to establish a predictive capability for collisional outcomes given a particular set of initial conditions. Unfortunately, laboratory experiments are restricted to using targets considerably smaller than the modelled objects. It is therefore necessary to develop some methodology for extrapolating the extensive experimental results to the size regime of interest. Results are reported that were obtained using a two-dimensional hydrocode based on 2-D SALE and modified to include strength effects and fragmentation equations. The hydrocode was tested by comparing its predictions for post-impact fragment size distributions to those observed in laboratory impact experiments.

  16. In vivo fascicle length measurements via B-mode ultrasound imaging with single vs dual transducer arrangements.

    PubMed

    Brennan, Scott F; Cresswell, Andrew G; Farris, Dominic J; Lichtwark, Glen A

    2017-11-07

    Ultrasonography is a useful technique to study muscle contractions in vivo; however, larger muscles like the vastus lateralis (VL) may be difficult to visualise with smaller, commonly used transducers. Fascicle length is often estimated using linear trigonometry to extrapolate to regions where the fascicle is not visible. However, this approach has not been compared to measurements made with a larger field of view for dynamic muscle contractions. Here we compared two different single-transducer extrapolation methods for measuring VL muscle fascicle length to a direct measurement made using two synchronised, in-series transducers. The first method used pennation angle and muscle thickness to extrapolate fascicle length outside the image (extrapolate method). The second method determined fascicle length based on the extrapolated intercept between a fascicle and the aponeurosis (intercept method). Nine participants performed maximal effort, isometric, knee extension contractions on a dynamometer at 10° increments from 50 to 100° of knee flexion. Fascicle length and torque were simultaneously recorded for offline analysis. The dual transducer method showed similar patterns of fascicle length change (overall mean coefficient of multiple correlation was 0.76 and 0.71 compared to the extrapolate and intercept methods, respectively), but reached different absolute lengths during the contractions. This had the effect of producing force-length curves of the same shape, but each curve was shifted in terms of absolute length. We concluded that dual transducers are beneficial for studies that examine absolute fascicle lengths, whereas either of the single transducer methods may produce similar results for normalised length changes and repeated measures experimental designs. Copyright © 2017 Elsevier Ltd. All rights reserved.
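
    The single-transducer "extrapolate" method reduces to simple trigonometry under a straight-fascicle assumption; a minimal sketch with hypothetical values:

```python
import math

# Extrapolate-method estimate of fascicle length from muscle thickness
# and pennation angle, assuming a straight fascicle spanning the
# aponeuroses. Inputs are hypothetical.
def fascicle_length(thickness_mm: float, pennation_deg: float) -> float:
    return thickness_mm / math.sin(math.radians(pennation_deg))

print(f"{fascicle_length(22.0, 14.0):.1f} mm")  # about 90.9 mm
```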

  17. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, S.; Tebby, C.

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no other interactions were observed, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  18. Role of stacking disorder in ice nucleation

    NASA Astrophysics Data System (ADS)

    Lupi, Laura; Hudait, Arpa; Peters, Baron; Grünwald, Michael; Gotchy Mullen, Ryan; Nguyen, Andrew H.; Molinero, Valeria

    2017-11-01

    The freezing of water affects the processes that determine Earth’s climate. Therefore, accurate weather and climate forecasts hinge on good predictions of ice nucleation rates. Such rate predictions are based on extrapolations using classical nucleation theory, which assumes that the structure of nanometre-sized ice crystallites corresponds to that of hexagonal ice, the thermodynamically stable form of bulk ice. However, simulations with various water models find that ice nucleated and grown under atmospheric temperatures is at all sizes stacking-disordered, consisting of random sequences of cubic and hexagonal ice layers. This implies that stacking-disordered ice crystallites either are more stable than hexagonal ice crystallites or form because of non-equilibrium dynamical effects. Both scenarios challenge central tenets of classical nucleation theory. Here we use rare-event sampling and free energy calculations with the mW water model to show that the entropy of mixing cubic and hexagonal layers makes stacking-disordered ice the stable phase for crystallites up to a size of at least 100,000 molecules. We find that stacking-disordered critical crystallites at 230 kelvin are about 14 kilojoules per mole of crystallite more stable than hexagonal crystallites, making their ice nucleation rates more than three orders of magnitude higher than predicted by classical nucleation theory. This effect on nucleation rates is temperature dependent, being the most pronounced at the warmest conditions, and should affect the modelling of cloud formation and ice particle numbers, which are very sensitive to the temperature dependence of ice nucleation rates. We conclude that classical nucleation theory needs to be corrected to include the dependence of the crystallization driving force on the size of the ice crystallite when interpreting and extrapolating ice nucleation rates from experimental laboratory conditions to the temperatures that occur in clouds.
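
    The quoted rate enhancement can be checked back-of-envelope: if classical theory underestimates the stability of the critical crystallite by about 14 kJ per mole of crystallite at 230 K, then the nucleation rate, which scales as exp(-ΔG*/RT), changes by the factor computed below.

```python
import math

# Rate factor implied by a ~14 kJ/mol (per mole of critical
# crystallites) stability gain at 230 K under classical theory.
R = 8.314        # J mol^-1 K^-1
ddG = 14.0e3     # J mol^-1
T = 230.0        # K
print(f"rate enhancement ~ {math.exp(ddG / (R * T)):.0f}x")  # ~1500x, >3 orders
```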

  19. Limitations to the Use of Species-Distribution Models for Environmental-Impact Assessments in the Amazon.

    PubMed

    Carneiro, Lorena Ribeiro de A; Lima, Albertina P; Machado, Ricardo B; Magnusson, William E

    2016-01-01

    Species-distribution models (SDM) are tools with potential to inform environmental-impact assessments (EIA). However, they are not always appropriate and may result in improper and expensive mitigation and compensation if their limitations are not understood by decision makers. Here, we examine the use of SDM for frogs in impact assessment, using data obtained from the EIA of a hydroelectric project located in the Amazon Basin in Brazil. The results show that lack of knowledge of species distributions limits the appropriate use of SDM in the Amazon region for most target species. Because most of these targets are newly described and their distributions poorly known, data about their distributions are insufficient to be used effectively in SDM. Surveys that are mandatory for the EIA are often conducted only near the area under assessment, and so models must extrapolate well beyond the sampled area to inform decisions made at much larger spatial scales, such as defining areas to be used to offset the negative effects of the projects. Using distributions of better-known species in simulations, we show that geographical extrapolations based on limited information on species ranges often lead to spurious results. We conclude that the use of SDM as evidence to support project-licensing decisions in the Amazon requires much greater area sampling for impact studies, or, alternatively, integrated and comparative survey strategies, to improve biodiversity sampling. When more detailed distribution information is unavailable, SDM will produce results that generate uncertain and untestable decisions regarding impact assessment. In many cases, SDM is unlikely to be better than the use of expert opinion.

  20. Role of stacking disorder in ice nucleation.

    PubMed

    Lupi, Laura; Hudait, Arpa; Peters, Baron; Grünwald, Michael; Gotchy Mullen, Ryan; Nguyen, Andrew H; Molinero, Valeria

    2017-11-08

    The freezing of water affects the processes that determine Earth's climate. Therefore, accurate weather and climate forecasts hinge on good predictions of ice nucleation rates. Such rate predictions are based on extrapolations using classical nucleation theory, which assumes that the structure of nanometre-sized ice crystallites corresponds to that of hexagonal ice, the thermodynamically stable form of bulk ice. However, simulations with various water models find that ice nucleated and grown under atmospheric temperatures is at all sizes stacking-disordered, consisting of random sequences of cubic and hexagonal ice layers. This implies that stacking-disordered ice crystallites either are more stable than hexagonal ice crystallites or form because of non-equilibrium dynamical effects. Both scenarios challenge central tenets of classical nucleation theory. Here we use rare-event sampling and free energy calculations with the mW water model to show that the entropy of mixing cubic and hexagonal layers makes stacking-disordered ice the stable phase for crystallites up to a size of at least 100,000 molecules. We find that stacking-disordered critical crystallites at 230 kelvin are about 14 kilojoules per mole of crystallite more stable than hexagonal crystallites, making their ice nucleation rates more than three orders of magnitude higher than predicted by classical nucleation theory. This effect on nucleation rates is temperature dependent, being the most pronounced at the warmest conditions, and should affect the modelling of cloud formation and ice particle numbers, which are very sensitive to the temperature dependence of ice nucleation rates. We conclude that classical nucleation theory needs to be corrected to include the dependence of the crystallization driving force on the size of the ice crystallite when interpreting and extrapolating ice nucleation rates from experimental laboratory conditions to the temperatures that occur in clouds.

  1. Limitations to the Use of Species-Distribution Models for Environmental-Impact Assessments in the Amazon

    PubMed Central

    Carneiro, Lorena Ribeiro de A.; Lima, Albertina P.; Machado, Ricardo B.; Magnusson, William E.

    2016-01-01

    Species-distribution models (SDM) are tools with potential to inform environmental-impact assessments (EIA). However, they are not always appropriate and may result in improper and expensive mitigation and compensation if their limitations are not understood by decision makers. Here, we examine the use of SDM for frogs in impact assessment, using data obtained from the EIA of a hydroelectric project located in the Amazon Basin in Brazil. The results show that lack of knowledge of species distributions limits the appropriate use of SDM in the Amazon region for most target species. Because most of these targets are newly described and their distributions poorly known, data about their distributions are insufficient to be used effectively in SDM. Surveys that are mandatory for the EIA are often conducted only near the area under assessment, and so models must extrapolate well beyond the sampled area to inform decisions made at much larger spatial scales, such as defining areas to be used to offset the negative effects of the projects. Using distributions of better-known species in simulations, we show that geographical extrapolations based on limited information on species ranges often lead to spurious results. We conclude that the use of SDM as evidence to support project-licensing decisions in the Amazon requires much greater area sampling for impact studies, or, alternatively, integrated and comparative survey strategies, to improve biodiversity sampling. When more detailed distribution information is unavailable, SDM will produce results that generate uncertain and untestable decisions regarding impact assessment. In many cases, SDM is unlikely to be better than the use of expert opinion. PMID:26784891

  2. Simulation of nonlinear superconducting rf losses derived from characteristic topography of etched and electropolished niobium surfaces

    DOE PAGES

    Xu, Chen; Reece, Charles E.; Kelley, Michael J.

    2016-03-22

    A simplified numerical model has been developed to simulate nonlinear superconducting radiofrequency (SRF) losses on Nb surfaces. This study focuses exclusively on excess surface resistance (Rs) losses due to microscopic topographical magnetic field enhancements. When the enhanced local surface magnetic field exceeds the superconducting critical transition magnetic field Hc, small volumes of surface material may become normal conducting and increase the effective surface resistance without inducing a quench. We seek to build an improved quantitative characterization of this qualitative model. Using topographic data from typical buffered chemical polish (BCP)- and electropolish (EP)-treated fine grain niobium, we have estimated the resulting field-dependent losses and extrapolated this model to the implications for cavity performance. The model predictions correspond well to the characteristic BCP versus EP high field Q0 performance differences for fine grain niobium. Lastly, we describe the algorithm of the model, its limitations, and the effects of this nonlinear loss contribution on SRF cavity performance.
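
    A toy version of this loss mechanism (not the authors' code) helps fix the idea: draw a distribution of geometric enhancement factors beta, count patches with beta*H > Hc as normal conducting, and mix the two resistances. The distribution, Hc, and resistance values below are all assumptions.

```python
import numpy as np

# Toy field-dependent surface resistance: patches with enhancement
# beta see beta*H and turn normal conducting once beta*H > Hc.
rng = np.random.default_rng(0)
beta = 1.0 + rng.exponential(0.15, size=100_000)  # assumed enhancement dist.
Hc = 200.0                   # mT, nominal critical field
Rs_sc, Rs_n = 10e-9, 1e-3    # ohm: superconducting vs normal patches

for H in (60, 100, 140, 180):
    f_n = np.mean(beta * H > Hc)          # fraction driven normal
    Rs_eff = (1 - f_n) * Rs_sc + f_n * Rs_n
    print(f"H = {H:3d} mT: normal fraction {f_n:.5f}, Rs ~ {Rs_eff:.2e} ohm")
```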

  3. Medium Effects on Freeze-Out of Light Clusters at NICA Energies

    NASA Astrophysics Data System (ADS)

    Röpke, G.; Blaschke, D.; Ivanov, Yu. B.; Karpenko, Iu.; Rogachevsky, O. V.; Wolter, H. H.

    2018-05-01

    We estimate the chemical freeze-out of light nuclear clusters for NICA energies above 2 A GeV. On the one hand, we use results from the low-energy domain of about 35 A MeV, where medium effects have been shown to be important in explaining experimental results. On the other hand, at LHC energies the statistical model without medium effects has provided results for the chemical freeze-out. The two approaches extrapolated to NICA energies show a discrepancy that can be attributed to medium effects and that, for the deuteron/proton ratio, amounts to a factor of about three. These findings underline the importance of a detailed investigation of light cluster production at NICA energies.

  4. Rational use of medicines in older adults: Can we do better during clinical development?

    PubMed

    Saeed, M A; Vlasakakis, G; Della Pasqua, O

    2015-05-01

    There is an evidence gap in ensuring the safe and effective use of medicines in older adults. Generating clinical data in these patients poses ethical and operational challenges, yielding results that may not be generalizable to the overall population. Modeling and simulation (M&S) is proposed as a basis for assessing the impact of age-related changes and their clinical implications. M&S can be used in conjunction with bridging and extrapolation to ensure the selection of appropriate dose(s)/regimen(s) in this population. © 2015 ASCPT.

  5. Computational chemistry in 25 years

    NASA Astrophysics Data System (ADS)

    Abagyan, Ruben

    2012-01-01

    Here we make some predictions based on three methods: straightforward extrapolation of existing trends; self-fulfilling prophecy; and picking some current grievances and predicting that they will be addressed or solved. We predict the growth of multicore computing and dramatic growth of data, as well as improvements in force fields and sampling methods. We also predict that the effects of therapeutic and environmental molecules on the human body, as well as complex natural chemical signalling, will be understood in terms of three-dimensional models of their binding to specific pockets.

  6. Quantification of the biocontrol agent Trichoderma harzianum with real-time TaqMan PCR and its potential extrapolation to the hyphal biomass.

    PubMed

    López-Mondéjar, Rubén; Antón, Anabel; Raidl, Stefan; Ros, Margarita; Pascual, José Antonio

    2010-04-01

    The species of the genus Trichoderma are used successfully as biocontrol agents against a wide range of phytopathogenic fungi. Among them, Trichoderma harzianum is especially effective. However, to develop more effective fungal biocontrol strategies in organic substrates and soil, tools for monitoring the control agents are required. Real-time PCR is potentially an effective tool for the quantification of fungi in environmental samples. The aim of this study was to develop and apply a real-time PCR-based method for the quantification of T. harzianum, and to extrapolate these data to fungal biomass values. A set of primers and a TaqMan probe for the ITS region of the fungal genome were designed and tested, and amplification was correlated with biomass measurements of the hyphal length of the colony mycelium, obtained by optical microscopy and image analysis. A correlation of 0.76 between ITS copies and biomass was obtained. The extrapolation of the quantity of ITS copies, calculated from real-time PCR data, into quantities of fungal biomass potentially provides a more accurate estimate of the quantity of soil fungi. Copyright 2009 Elsevier Ltd. All rights reserved.
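
    The biomass extrapolation is essentially a calibration regression; a sketch with hypothetical data (the study reports a correlation of 0.76 between ITS copies and biomass):

```python
import numpy as np

# Calibrate log10(ITS copies) against microscopy biomass, then invert
# the fit for a new qPCR sample. All data points are hypothetical.
log_its = np.array([5.1, 5.8, 6.4, 7.0, 7.6])    # log10 ITS copies per g
biomass = np.array([0.8, 1.9, 3.5, 6.2, 11.0])   # mg hyphae per g

slope, intercept = np.polyfit(log_its, np.log10(biomass), 1)
new_sample = 6.9                                  # log10 ITS copies per g
print(f"estimated biomass: {10**(slope*new_sample + intercept):.2f} mg/g")
```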

  7. An Extrapolation of a Radical Equation More Accurately Predicts Shelf Life of Frozen Biological Matrices.

    PubMed

    De Vore, Karl W; Fatahi, Nadia M; Sass, John E

    2016-08-01

    Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1·x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit by a radical equation of the form y = B1·x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
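
    The radical-equation strategy is a two-parameter least-squares fit in sqrt(time); a sketch with hypothetical recovery data and shelf life:

```python
import numpy as np

# Fit y = B1*sqrt(x) + B0 to early real-time recovery data, then
# extrapolate to the full shelf life. Values are hypothetical.
x = np.array([0.5, 1.0, 2.0, 3.0, 6.0])          # months of storage
y = np.array([99.5, 99.0, 98.3, 97.9, 97.0])     # percent recovery

A = np.column_stack([np.sqrt(x), np.ones_like(x)])
(B1, B0), *_ = np.linalg.lstsq(A, y, rcond=None)
shelf_life = 24.0                                 # months
print(f"predicted recovery at {shelf_life:.0f} mo: "
      f"{B1*np.sqrt(shelf_life) + B0:.1f}%")
```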

  8. Peak Communication Experiences: Concept, Structure, and Sex Differences.

    ERIC Educational Resources Information Center

    Gordon, Ron; Dulaney, Earl

    A study was conducted to test a "peak communication experience" (PCE) scale developed from Abraham Maslow's theory of PCE's, a model of one's highest interpersonal communication moments in terms of perceived mutual understanding, happiness, and personal fulfillment. Nineteen items, extrapolated from Maslow's model but rendered more…

  9. Evaluation of Pharmacokinetic Assumptions Using a 443 Chemical Library (SOT)

    EPA Science Inventory

    With the increasing availability of high-throughput and in vitro data for untested chemicals, there is a need for pharmacokinetic (PK) models for in vitro to in vivo extrapolation (IVIVE). Though some PBPK models have been created for individual compounds using in vivo data, we ...

  10. Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.

    PubMed

    Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel

    2016-09-08

    We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose molecular sizes range from 50 to 1000, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical framework and paves the way toward a theoretical protocol to systematically compute the solvation energies of complex organic ions.
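
    A sketch of the extrapolation idea, assuming a basic E(n) = E_bulk + a*n^(-1/3) power law and hypothetical droplet energies (the paper's fitted form and values may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

# Extrapolate a bulk solvation energy from three droplet sizes with
# an assumed n**(-1/3) power law. Energies are hypothetical.
def model(n, e_bulk, a):
    return e_bulk + a * n ** (-1.0 / 3.0)

n = np.array([100.0, 300.0, 1000.0])         # droplet sizes (molecules)
energy = np.array([-310.0, -322.0, -330.0])  # kJ/mol, hypothetical

(e_bulk, a), _ = curve_fit(model, n, energy, p0=(-340.0, 100.0))
print(f"extrapolated bulk solvation energy: {e_bulk:.1f} kJ/mol")
```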

  11. Environmental impact and risk assessments and key factors contributing to the overall uncertainties.

    PubMed

    Salbu, Brit

    2016-01-01

    There is a significant number of nuclear and radiological sources that have contributed, are still contributing, or have the potential to contribute to radioactive contamination of the environment in the future. To protect the environment from radioactive contamination, impact and risk assessments are performed prior to or during a release event, short or long term after deposition, or prior to and after implementation of countermeasures. When environmental impact and risks are assessed, however, a series of factors will contribute to the overall uncertainties. To provide environmental impact and risk assessments, information on processes, kinetics and a series of input variables is needed. Adding problems such as variability, questionable assumptions, gaps in knowledge, extrapolations and poor conceptual model structures, a series of factors contribute to large and often unacceptable uncertainties in impact and risk assessments. Information on the source term and the release scenario is an essential starting point in impact and risk models; the source determines activity concentrations and atom ratios of radionuclides released, while the release scenario determines the physico-chemical forms of released radionuclides such as particle size distribution, structure and density. Releases will most often contain other contaminants such as metals, and due to interactions, contaminated sites should be assessed as a multiple stressor scenario. Following deposition, a series of stressors, interactions and processes will influence the ecosystem transfer of radionuclide species and thereby influence biological uptake (toxicokinetics) and responses (toxicodynamics) in exposed organisms. Due to the variety of biological species, extrapolation is frequently needed to fill gaps in knowledge, e.g., from effects to no effects, from effects in one organism to others, from one stressor to mixtures. Most toxicity tests are, however, performed as short-term exposures of adult organisms, ignoring sensitive life-history stages of organisms and transgenerational effects. To link sources, ecosystem transfer and biological effects to future impact and risks, a series of models are usually interfaced, while uncertainty estimates are seldom given. The model predictions are, however, only valid within the boundaries of the overall uncertainties. Furthermore, the model predictions are only useful and relevant when uncertainties are estimated, communicated and understood. Among the key factors contributing most to uncertainties, the present paper focuses especially on structure uncertainties (model bias or discrepancies), as aspects such as particle releases, ecosystem dynamics, mixed exposure, sensitive life-history stages and transgenerational effects are usually ignored in assessment models. Research focus on these aspects should significantly reduce the overall uncertainties in the impact and risk assessment of radioactively contaminated ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    NASA Astrophysics Data System (ADS)

    Cui, Jie; Li, Zhiying; Krems, Roman V.

    2015-10-01

    We consider the problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties of another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He–C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
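
    The statistical core of the approach is a Gaussian process regression of an observable over the unknown potential parameters; a sketch with synthetic stand-in data in place of trajectory-calculation output:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Learn a cross section as a smooth function of two potential
# parameters and predict with an uncertainty band. Training data
# here are synthetic, not from scattering calculations.
rng = np.random.default_rng(1)
theta = rng.uniform(-1.0, 1.0, size=(25, 2))         # potential parameters
sigma = 50 + 8*theta[:, 0] - 5*theta[:, 1]**2        # toy cross sections

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-2)
gp.fit(theta, sigma)
mean, sd = gp.predict([[0.2, -0.4]], return_std=True)
print(f"predicted cross section: {mean[0]:.1f} +/- {sd[0]:.1f}")
```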

  13. Cost-effectiveness of sacubitril/valsartan in the treatment of heart failure with reduced ejection fraction.

    PubMed

    McMurray, John J V; Trueman, David; Hancock, Elizabeth; Cowie, Martin R; Briggs, Andrew; Taylor, Matthew; Mumby-Croft, Juliet; Woodcock, Fionn; Lacey, Michael; Haroun, Rola; Deschaseaux, Celine

    2018-06-01

    Chronic heart failure with reduced ejection fraction (HF-REF) represents a major public health issue and is associated with considerable morbidity and mortality. We evaluated the cost-effectiveness of sacubitril/valsartan (formerly LCZ696) compared with an ACE inhibitor (ACEI) (enalapril) in the treatment of HF-REF from the perspective of healthcare providers in the UK, Denmark and Colombia. A cost-utility analysis was performed based on data from a multinational, Phase III randomised controlled trial. A decision-analytic model was developed based on a series of regression models, which extrapolated health-related quality of life, hospitalisation rates and survival over a lifetime horizon. The primary outcome was the incremental cost-effectiveness ratio (ICER). In the UK, the cost per quality-adjusted life-year (QALY) gained for sacubitril/valsartan (using cardiovascular mortality) was £17 100 (€20 400) versus enalapril. In Denmark, the ICER for sacubitril/valsartan was Kr 174 000 (€22 600). In Colombia, the ICER was COP$39.5 million (€11 200) per QALY gained. Deterministic sensitivity analysis showed that results were most sensitive to the extrapolation of mortality, duration of treatment effect and time horizon, but were robust to other structural changes, with most scenarios associated with ICERs below the willingness-to-pay threshold for all three country settings. Probabilistic sensitivity analysis suggested the probability that sacubitril/valsartan was cost-effective at conventional willingness-to-pay thresholds was 68%-94% in the UK, 84% in Denmark and 95% in Colombia. Our analysis suggests that, in all three countries, sacubitril/valsartan is likely to be cost-effective compared with an ACEI (the current standard of care) in patients with HF-REF. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
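
    The headline figures follow from the standard ICER arithmetic (incremental cost over incremental QALYs); the round placeholder inputs below are chosen only to reproduce the UK number:

```python
# ICER = (cost_new - cost_old) / (QALY_new - QALY_old).
# Inputs are placeholders, not the trial's actual per-patient values.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# +1710 pounds and +0.10 QALYs per patient -> 17,100 pounds per QALY
print(f"ICER = {icer(26_710, 6.60, 25_000, 6.50):,.0f} per QALY gained")
```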

  14. Calibration artefacts in radio interferometry - II. Ghost patterns for irregular arrays

    NASA Astrophysics Data System (ADS)

    Wijnholds, S. J.; Grobler, T. L.; Smirnov, O. M.

    2016-04-01

    Calibration artefacts, like the self-calibration bias, usually emerge when data are calibrated using an incomplete sky model. In the first paper of this series, in which we analysed calibration artefacts in data from the Westerbork Synthesis Radio Telescope, we showed that these artefacts take the form of spurious positive and negative sources, which we refer to as ghosts or ghost sources. We also developed a mathematical framework with which we could predict the ghost pattern of an east-west interferometer for a simple two-source test case. In this paper, we extend our analysis to more general array layouts. This provides us with a useful method for the analysis of ghosts that we refer to as extrapolation. Combining extrapolation with a perturbation analysis, we are able to (1) analyse the ghost pattern for a two-source test case with one modelled and one unmodelled source for an arbitrary array layout, (2) explain why some ghosts are brighter than others, (3) define a taxonomy allowing us to classify the different ghosts, (4) derive closed form expressions for the fluxes and positions of the brightest ghosts, and (5) explain the strange two-peak structure with which some ghosts manifest during imaging. We illustrate our mathematical predictions using simulations of the KAT-7 (seven-dish Karoo Array Telescope) array. These results show the explanatory power of our mathematical model. The insights gained in this paper provide a solid foundation for studying calibration artefacts in arbitrary incomplete sky models (i.e. more complicated than the two-source example discussed here) or full synthesis observations including direction-dependent effects.

  15. Electron-deuteron deep-inelastic scattering with spectator nucleon tagging and final-state interactions at intermediate x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strikman, Mark; Weiss, Christian

    We consider electron-deuteron deep-inelastic scattering (DIS) with detection of a proton in the nuclear fragmentation region ("spectator tagging") as a method for extracting the free neutron structure functions and studying their nuclear modifications. Such measurements could be performed at a future Electron-Ion Collider (EIC) with suitable forward detectors. The measured proton recoil momentum (≲ 100 MeV in the deuteron rest frame) specifies the deuteron configuration during the high-energy process and permits a controlled theoretical treatment of nuclear effects. Nuclear and nucleonic structure are separated using methods of light-front quantum mechanics. The impulse approximation (IA) to the tagged DIS cross section contains the free neutron pole, which can be reached by on-shell extrapolation in the recoil momentum. Final-state interactions (FSI) distort the recoil momentum distribution away from the pole. In the intermediate-x region 0.1 < x < 0.5 FSI arise predominantly from interactions of the spectator proton with slow hadrons produced in the DIS process on the neutron (rest frame momenta ≲1 GeV, target fragmentation region). We construct a schematic model describing this effect, using final-state hadron distributions measured in nucleon DIS experiments and low-energy hadron scattering amplitudes. We investigate the magnitude of FSI, their dependence on the recoil momentum (angular dependence, forward/backward regions), their analytic properties, and their effect on the on-shell extrapolation. We comment on the prospects for neutron structure extraction in tagged DIS with EIC. Finally, we discuss possible extensions of the FSI model to other kinematic regions (large/small x). In tagged DIS at x << 0.1 FSI resulting from diffractive scattering on the nucleons become important and require separate treatment.

  16. Electron-deuteron deep-inelastic scattering with spectator nucleon tagging and final-state interactions at intermediate x

    NASA Astrophysics Data System (ADS)

    Strikman, M.; Weiss, C.

    2018-03-01

    We consider electron-deuteron deep-inelastic scattering (DIS) with detection of a proton in the nuclear fragmentation region ("spectator tagging") as a method for extracting the free neutron structure functions and studying their nuclear modifications. Such measurements could be performed at a future electron-ion collider (EIC) with suitable forward detectors. The measured proton recoil momentum (≲100 MeV in the deuteron rest frame) specifies the deuteron configuration during the high-energy process and permits a controlled theoretical treatment of nuclear effects. Nuclear and nucleonic structure are separated using methods of light-front quantum mechanics. The impulse approximation to the tagged DIS cross section contains the free neutron pole, which can be reached by on-shell extrapolation in the recoil momentum. Final-state interactions (FSIs) distort the recoil momentum distribution away from the pole. In the intermediate-x region 0.1 < x < 0.5, FSIs arise predominantly from interactions of the spectator proton with slow hadrons produced in the DIS process on the neutron (rest frame momenta ≲1 GeV, target fragmentation region).

  17. Electron-deuteron deep-inelastic scattering with spectator nucleon tagging and final-state interactions at intermediate x

    DOE PAGES

    Strikman, Mark; Weiss, Christian

    2018-03-27

    We consider electron-deuteron deep-inelastic scattering (DIS) with detection of a proton in the nuclear fragmentation region ("spectator tagging") as a method for extracting the free neutron structure functions and studying their nuclear modifications. Such measurements could be performed at a future Electron-Ion Collider (EIC) with suitable forward detectors. The measured proton recoil momentum (≲ 100 MeV in the deuteron rest frame) specifies the deuteron configuration during the high-energy process and permits a controlled theoretical treatment of nuclear effects. Nuclear and nucleonic structure are separated using methods of light-front quantum mechanics. The impulse approximation (IA) to the tagged DIS cross section contains the free neutron pole, which can be reached by on-shell extrapolation in the recoil momentum. Final-state interactions (FSI) distort the recoil momentum distribution away from the pole. In the intermediate-x region 0.1 < x < 0.5 FSI arise predominantly from interactions of the spectator proton with slow hadrons produced in the DIS process on the neutron (rest frame momenta ≲1 GeV, target fragmentation region). We construct a schematic model describing this effect, using final-state hadron distributions measured in nucleon DIS experiments and low-energy hadron scattering amplitudes. We investigate the magnitude of FSI, their dependence on the recoil momentum (angular dependence, forward/backward regions), their analytic properties, and their effect on the on-shell extrapolation. We comment on the prospects for neutron structure extraction in tagged DIS with EIC. Finally, we discuss possible extensions of the FSI model to other kinematic regions (large/small x). In tagged DIS at x << 0.1 FSI resulting from diffractive scattering on the nucleons become important and require separate treatment.

  18. Comprehensive mollusk acute toxicity database improves the use of Interspecies Correlation Estimation (ICE) models to predict toxicity of untested freshwater and endangered mussel species

    EPA Science Inventory

    Interspecies correlation estimation (ICE) models extrapolate acute toxicity data from surrogate test species to untested taxa. A suite of ICE models developed from a comprehensive database is available on the US Environmental Protection Agency’s web-based application, Web-I...

  19. Increased spring freezing vulnerability for alpine shrubs under early snowmelt.

    PubMed

    Wheeler, J A; Hoch, G; Cortés, A J; Sedlacek, J; Wipf, S; Rixen, C

    2014-05-01

    Alpine dwarf shrub communities are phenologically linked with snowmelt timing, so early spring exposure may increase risk of freezing damage during early development, and consequently reduce seasonal growth. We examined whether environmental factors (duration of snow cover, elevation) influenced size and the vulnerability of shrubs to spring freezing along elevational gradients and snow microhabitats by modelling the past frequency of spring freezing events. We sampled biomass and measured the size of Salix herbacea, Vaccinium myrtillus, Vaccinium uliginosum and Loiseleuria procumbens in late spring. Leaves were exposed to freezing temperatures to determine the temperature at which 50% of specimens are killed for each species and sampling site. By linking site snowmelt and temperatures to long-term climate measurements, we extrapolated the frequency of spring freezing events at each elevation, snow microhabitat and per species over 37 years. Snowmelt timing was significantly driven by microhabitat effects, but was independent of elevation. Shrub growth was neither enhanced nor reduced by earlier snowmelt, but decreased with elevation. Freezing resistance was strongly species dependent, and did not differ along the elevation or snowmelt gradient. Microclimate extrapolation suggested that potentially lethal freezing events (in May and June) occurred for three of the four species examined. Freezing events never occurred on late snow beds, and increased in frequency with earlier snowmelt and higher elevation. Extrapolated freezing events showed a slight, non-significant increase over the 37-year record. We suggest that earlier snowmelt does not enhance growth in four dominant alpine shrubs, but increases the risk of lethal spring freezing exposure for less freezing-resistant species.
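
    Freezing resistance of this kind (the temperature at which 50% of samples are killed, often called LT50) is typically estimated from a logistic dose-response fit; a sketch with hypothetical assay data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a logistic survival curve to freezing-assay data and read off
# LT50, the temperature at which half the samples are killed.
def survival(temp_c, lt50, k):
    return 1.0 / (1.0 + np.exp(-k * (temp_c - lt50)))

temp_c = np.array([-12.0, -10.0, -8.0, -6.0, -4.0])      # exposure temps
frac_alive = np.array([0.05, 0.20, 0.55, 0.85, 0.97])    # hypothetical

(lt50, k), _ = curve_fit(survival, temp_c, frac_alive, p0=(-8.0, 1.0))
print(f"LT50 ~ {lt50:.1f} deg C")
```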

  20. Using Web-based Interspecies Correlation Estimation (Web-ICE) models as a tool for acute toxicity prediction

    EPA Science Inventory

    In order to assess risk of contaminants to taxa with limited or no toxicity data available, Interspecies Correlation Estimation (ICE) models have been developed by the U.S. Environmental Protection Agency to extrapolate contaminant sensitivity predictions based on data from commo...

  1. ACCUMULATION OF PBDE-47 IN PRIMARY CULTURES OF RAT NEOCORTICAL CELLS.

    EPA Science Inventory

    Cell culture models are often used in mechanistic studies of toxicant action. However, one area of uncertainty is the extrapolation of dose from the in vitro model to the in vivo tissue. A common assumption of in vitro studies is that media concentration is a predictive marker of...

  2. QUANTITATIVE MODELING APPROACHES TO PREDICTING THE ACUTE NEUROTOXICITY OF VOLATILE ORGANIC COMPOUNDS (VOCS).

    EPA Science Inventory

    Lack of complete and appropriate human data requires prediction of the hazards for exposed human populations by extrapolation from available animal and in vitro data. Predictive models for the toxicity of chemicals can be constructed by linking kinetic and mode of action data uti...

  3. Evaluation of Pharmacokinetic Assumptions Using a 443 Chemical Library (IVIVE)

    EPA Science Inventory

    With the increasing availability of high-throughput and in vitro data for untested chemicals, there is a need for pharmacokinetic (PK) models for in vitro to in vivo extrapolation (IVIVE). Though some PBPK models have been created for individual compounds us...

  4. Life-Stage Physiologically-Based Pharmacokinetic (PBPK) Model Applications to Screen Environmental Hazards.

    EPA Science Inventory

    This presentation discusses methods used to extrapolate from in vitro high-throughput screening (HTS) toxicity data for an endocrine pathway to in vivo for early life stages in humans, and the use of a life stage PBPK model to address rapidly changing physiological parameters. A...

  5. Restricted Complexity Framework for Nonlinear Adaptive Control in Complex Systems

    NASA Astrophysics Data System (ADS)

    Williams, Rube B.

    2004-02-01

    Control law adaptation that includes implicit or explicit adaptive state estimation can be a fundamental underpinning for the success of intelligent control in complex systems, particularly during subsystem failures, where vital system states and parameters can be impractical or impossible to measure directly. A practical algorithm is proposed for adaptive state filtering and control in nonlinear dynamic systems when the state equations are unknown or are too complex to model analytically. The state equations and inverse plant model are approximated by using neural networks. A framework for a neural network based nonlinear dynamic inversion control law is proposed, as an extrapolation of the previously developed restricted-complexity methodology used to formulate the adaptive state filter. Examples of adaptive filter performance are presented for an SSME simulation with high pressure turbine failure to support extrapolations to adaptive control problems.

  6. Quantifying biomass consumption and carbon release from the California Rim fire by integrating airborne LiDAR and Landsat OLI data.

    PubMed

    Garcia, Mariano; Saatchi, Sassan; Casas, Angeles; Koltunov, Alexander; Ustin, Susan; Ramirez, Carlos; Garcia-Gutierrez, Jorge; Balzter, Heiko

    2017-02-01

    Quantifying biomass consumption and carbon release is critical to understanding the role of fires in the carbon cycle and air quality. We present a methodology to estimate the biomass consumed and the carbon released by the California Rim fire by integrating postfire airborne LiDAR and multitemporal Landsat Operational Land Imager (OLI) imagery. First, a support vector regression (SVR) model was trained to estimate the aboveground biomass (AGB) from LiDAR-derived metrics over the unburned area. The selected model estimated AGB with an R² of 0.82 and RMSE of 59.98 Mg/ha. Second, LiDAR-based biomass estimates were extrapolated to the entire area before and after the fire, using Landsat OLI reflectance bands, Normalized Difference Infrared Index, and the elevation derived from LiDAR data. The extrapolation was performed using SVR models that resulted in R² of 0.73 and 0.79 and RMSE of 87.18 Mg/ha and 75.43 Mg/ha for the postfire and prefire images, respectively. After removing bias from the AGB extrapolations using a linear relationship between estimated and observed values, we estimated the biomass consumption from postfire LiDAR and prefire Landsat maps to be 6.58 ± 0.03 Tg (10^12 g), which translates into 12.06 ± 0.06 Tg CO2e released to the atmosphere, equivalent to the annual emissions of 2.57 million cars.
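
    The two regression steps can be sketched on synthetic stand-in data: an SVR maps predictor metrics to biomass, then a linear fit of observed versus estimated values removes residual bias. Kernel choice and hyperparameters below are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Step 1: SVR from (synthetic) LiDAR/Landsat metrics to AGB.
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(200, 4))                   # predictor metrics
agb = 200*X[:, 0] + 80*X[:, 1] + rng.normal(0, 15, 200)    # Mg/ha, toy truth

svr = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=1.0))
svr.fit(X, agb)
est = svr.predict(X)

# Step 2: linear observed-vs-estimated fit as a bias correction.
slope, intercept = np.polyfit(est, agb, 1)
est_corr = slope * est + intercept
print(f"mean bias before: {np.mean(agb - est):+.2f} Mg/ha, "
      f"after: {np.mean(agb - est_corr):+.2f} Mg/ha")
```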

  7. Quantifying biomass consumption and carbon release from the California Rim fire by integrating airborne LiDAR and Landsat OLI data

    PubMed Central

    Garcia, Mariano; Saatchi, Sassan; Casas, Angeles; Koltunov, Alexander; Ustin, Susan; Ramirez, Carlos; Garcia-Gutierrez, Jorge; Balzter, Heiko

    2017-01-01

    Quantifying biomass consumption and carbon release is critical to understanding the role of fires in the carbon cycle and air quality. We present a methodology to estimate the biomass consumed and the carbon released by the California Rim fire by integrating postfire airborne LiDAR and multitemporal Landsat Operational Land Imager (OLI) imagery. First, a support vector regression (SVR) model was trained to estimate the aboveground biomass (AGB) from LiDAR-derived metrics over the unburned area. The selected model estimated AGB with an R² of 0.82 and RMSE of 59.98 Mg/ha. Second, LiDAR-based biomass estimates were extrapolated to the entire area before and after the fire, using Landsat OLI reflectance bands, Normalized Difference Infrared Index, and the elevation derived from LiDAR data. The extrapolation was performed using SVR models that resulted in R² of 0.73 and 0.79 and RMSE of 87.18 (Mg/ha) and 75.43 (Mg/ha) for the postfire and prefire images, respectively. After removing bias from the AGB extrapolations using a linear relationship between estimated and observed values, we estimated the biomass consumption from postfire LiDAR and prefire Landsat maps to be 6.58 ± 0.03 Tg (10^12 g), which translates into 12.06 ± 0.06 Tg CO2e released to the atmosphere, equivalent to the annual emissions of 2.57 million cars. PMID:28405539

  8. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.
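
    The two-point formula can be solved in closed form for the basis-set-limit energy; a sketch with illustrative DZ/TZ correlation energies and an assumed exponent alpha = 3:

```python
# Two-point extrapolation of E(L) = E_CBS + B / L**alpha, solved for
# E_CBS from cardinal numbers L1 and L2. Energies and alpha = 3 are
# illustrative, not values from the paper.
def cbs_two_point(e1, l1, e2, l2, alpha=3.0):
    w1, w2 = l1**alpha, l2**alpha
    return (e2 * w2 - e1 * w1) / (w2 - w1)

e_dz, e_tz = -76.2300, -76.3100    # hartree, hypothetical DZ/TZ energies
print(f"E_CBS ~ {cbs_two_point(e_dz, 2, e_tz, 3):.4f} hartree")
```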

  9. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    NASA Astrophysics Data System (ADS)

    Spackman, Peter R.; Karton, Amir

    2015-05-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.

  10. Toward refined environmental scenarios for ecological risk assessment of down-the-drain chemicals in freshwater environments.

    PubMed

    Franco, Antonio; Price, Oliver R; Marshall, Stuart; Jolliet, Olivier; Van den Brink, Paul J; Rico, Andreu; Focks, Andreas; De Laender, Frederik; Ashauer, Roman

    2017-03-01

    Current regulatory practice for chemical risk assessment suffers from the lack of realism in conventional frameworks. Despite significant advances in exposure and ecological effect modeling, the implementation of novel approaches as high-tier options for prospective regulatory risk assessment remains limited, particularly among general chemicals such as down-the-drain ingredients. While reviewing the current state of the art in environmental exposure and ecological effect modeling, we propose a scenario-based framework that enables a better integration of exposure and effect assessments in a tiered approach. Global- to catchment-scale spatially explicit exposure models can be used to identify areas of higher exposure and to generate ecologically relevant exposure information for input into effect models. Numerous examples of mechanistic ecological effect models demonstrate that it is technically feasible to extrapolate from individual-level effects to effects at higher levels of biological organization and from laboratory to environmental conditions. However, the data requirements for parameterizing effect models that can embrace the complexity of ecosystems are large and call for a targeted approach. Experimental efforts should, therefore, focus on vulnerable species and/or traits and ecological conditions of relevance. We outline key research needs to address the challenges that currently hinder the practical application of advanced model-based approaches to risk assessment of down-the-drain chemicals. Integr Environ Assess Manag 2017;13:233-248. © 2016 SETAC.

  11. Effects of sport expertise on representational momentum during timing control.

    PubMed

    Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu

    2015-04-01

    Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.

  12. Using Survival Analysis to Improve Estimates of Life Year Gains in Policy Evaluations.

    PubMed

    Meacock, Rachel; Sutton, Matt; Kristensen, Søren Rud; Harrison, Mark

    2017-05-01

    Policy evaluations taking a lifetime horizon have converted estimated changes in short-term mortality to expected life year gains using general population life expectancy. However, the life expectancy of the affected patients may differ from that of the general population. In trials, survival models are commonly used to extrapolate life year gains. The objective was to demonstrate the feasibility and materiality of using parametric survival models to extrapolate future survival in health care policy evaluations. We used our previous cost-effectiveness analysis of a pay-for-performance program as a motivating example. We first used the cohort of patients admitted prior to the program to compare 3 methods for estimating remaining life expectancy. We then used a difference-in-differences framework to estimate the life year gains associated with the program using general population life expectancy and survival models. Patient-level data from Hospital Episode Statistics were used for patients admitted to hospitals in England for pneumonia between 1 April 2007 and 31 March 2008 and between 1 April 2009 and 31 March 2010, and linked to death records for the period from 1 April 2007 to 31 March 2011. In our cohort of patients, using parametric survival models rather than general population life expectancy figures reduced the estimated mean life years remaining by 30% (9.19 vs. 13.15 years, respectively). However, the estimated mean life year gains associated with the program are larger using survival models (0.380 years) than using general population life expectancy (0.154 years). Using general population life expectancy to estimate the impact of health care policies can overestimate life expectancy but underestimate the impact of policies on life year gains. Using a longer follow-up period improved the accuracy of estimated survival and program impact considerably.
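    The survival-model extrapolation described above can be sketched in a few lines: fit a parametric (here Weibull) model to right-censored survival times by maximum likelihood and take its mean as the extrapolated life expectancy. This is a generic illustration with simulated data, not the authors' Hospital Episode Statistics analysis.

```python
# Minimal sketch: Weibull MLE with right censoring, then mean survival.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

rng = np.random.default_rng(0)
t = rng.weibull(1.3, 500) * 8.0          # simulated survival times (years)
event = t < 4.0                          # administrative censoring at 4 years
t = np.minimum(t, 4.0)                   # observed follow-up times

def neg_loglik(params):
    k, lam = np.exp(params)              # optimise on the log scale for positivity
    logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
    logS = -(t / lam) ** k               # log-survival for censored observations
    return -np.where(event, logf, logS).sum()

k, lam = np.exp(minimize(neg_loglik, x0=[0.0, 1.0]).x)
print("extrapolated mean life expectancy:", lam * gamma(1 + 1 / k), "years")
```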

  13. Geoacoustic models of the Donghae-to-Gangneung region in the Korean continental margin of the East Sea

    NASA Astrophysics Data System (ADS)

    Ryang, Woo Hun; Kim, Seong Pil; Hahn, Jooyoung

    2016-04-01

    A geoacoustic model represents the real seafloor with measured, extrapolated, and predicted values of geoacoustic environmental parameters, and it controls acoustic propagation in underwater acoustics. In the Korean continental margin of the East Sea, this study reconstructed geoacoustic models using geoacoustic and marine geologic data of the Donghae-to-Gangneung region (37.4° to 37.8° in latitude). The models were based on high-resolution subbottom and air-gun seismic profiles with sediment cores. The Donghae region models used P-wave velocities and attenuations measured on the cores, whereas the Gangneung region models used regression values derived from measurements in adjacent areas. Geoacoustic data of the cores were extrapolated down to the depth of the geoacoustic models. For actual modeling, the P-wave speed of the models was compensated to in situ depth below the sea floor using the Hamilton method. These geoacoustic models should contribute to geoacoustic and underwater acoustic modelling that reflects the vertical and lateral variability of acoustic properties in the Korean continental margin of the western East Sea. Keywords: geoacoustic model, environmental parameter, East Sea, continental margin. Acknowledgements: This research was supported by research grants from the Agency for Defense Development (UD140003DD and UE140033DD).

  14. Highly efficient molecular simulation methods for evaluation of thermodynamic properties of crystalline phases

    NASA Astrophysics Data System (ADS)

    Moustafa, Sabry Gad Al-Hak Mohammad

    Molecular simulation (MS) methods (e.g., Monte Carlo (MC) and molecular dynamics (MD)) provide a reliable tool (especially at extreme conditions) to measure solid properties. However, measuring them accurately and efficiently (smallest uncertainty for a given time) using MS can be a major challenge, especially with ab initio-type models. In addition, comparing with experimental results requires extrapolating properties from finite sizes to the thermodynamic limit, which can be a critical obstacle. We first estimate the free energy (FE) of the crystalline system of a simple discontinuous potential, hard spheres (HS), at its melting condition. Several approaches are explored to determine the most efficient route. The comparison study shows a considerable improvement in efficiency over the standard MS methods known for solid phases. In addition, we were able to accurately extrapolate to the thermodynamic limit using relatively small system sizes. Although the method is applied to the HS model, it is readily extended to more complex hard-body potentials, such as hard tetrahedra. The harmonic approximation of the potential energy surface is usually an accurate model (especially at low temperature and high density) for many realistic solid phases, and since the analysis is done numerically the method is relatively cheap. Here, we apply lattice dynamics (LD) techniques to obtain the FE of clathrate hydrate structures. A rigid-bond model is assumed to describe the water molecules; this, however, requires additional orientational degrees of freedom to specify each molecule. We were able to avoid using those degrees of freedom through a mathematical transformation that uses only the atomic coordinates of the water molecules. In addition, the proton-disorder nature of hydrate water networks adds extra complexity to the problem, especially when extrapolation to the thermodynamic limit is needed. The finite-size effects of the proton-disorder contribution are shown to vary slowly with system size. This allows us to obtain the FE in the thermodynamic limit by extrapolating the single-isomer results to infinite size and correcting with the proton-disorder effect measured on a small system. These techniques are applied to empty hydrates (of types SI, SII, and SH) to estimate their thermodynamic stability. For conditions where the harmonic model fails, MS is needed to rigorously estimate the full (harmonic plus anharmonic) quantity. Although several MS methods are available for that purpose, they do not benefit from the harmonic nature of crystals---which represents the main contribution and is cheap to compute. In other words, those "conventional" methods always "start from scratch", even at states where the anharmonic part is negligible. In this work, we develop very efficient MS methods that leverage information, on the fly, from the harmonic behavior of configurations such that the anharmonic contributions are measured directly. The approach is named harmonically-mapped averaging (HMA) for the rest of this thesis. Since the major contribution to thermodynamic properties comes from the harmonic nature of the crystal, the fluctuations in the anharmonic quantities are small; hence, the uncertainty associated with the HMA method is small. The HMA method is given in a general formulation such that it can handle properties related to both first and second derivatives of the free energy. The HMA approach is first applied to the Lennard-Jones (LJ) model. First and second derivatives of the FE with respect to temperature and volume yield the following properties: energy, pressure, isochoric heat capacity, bulk modulus, and thermal pressure coefficient. A considerable improvement in the efficiency of measuring those properties is observed, even at melting conditions where anharmonicity is non-negligible. First-derivative properties are computed with 100 to 10,000 times less computational effort, while the speedup for second-derivative properties exceeds a millionfold for the highest density examined. In addition, the finite-size and long-range cutoff effects of the anharmonic contribution are much smaller than those of the harmonic part. Therefore, we were able to reach the thermodynamic limit of thermodynamic properties by extrapolating the harmonic contribution to infinite size and correcting it with the anharmonic contribution from MS of small systems. Moreover, the anharmonic trajectory shows better features than the conventional one: it equilibrates almost instantaneously and the data are less correlated (i.e., good statistics can be obtained from a shorter trajectory). As a byproduct of the HMA, the free energy along an isochore is computed using a thermodynamic integration (TI) of the energy. Again, the HMA shows a substantial improvement (50- to 1000-fold speedup) over the well-known Frenkel-Ladd integration (with an Einstein crystal reference) method. To test the method against a more sophisticated model, we applied it to an embedded-atom model (EAM) of iron; the results show behavior qualitatively similar to that of the LJ model. Finally, the method is applied to tackle one of the long-standing problems of Earth science, namely the crystal structure of the Earth's inner core (IC). (Abstract shortened by UMI.)
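    A minimal sketch of the finite-size extrapolation strategy used repeatedly above: compute a per-particle property at several small system sizes, fit it against 1/N, and read the thermodynamic-limit estimate off the intercept. The numbers below are invented for illustration only.

```python
# Toy finite-size extrapolation: property(N) ~ a + b/N, so the intercept of
# a linear fit in 1/N is the thermodynamic-limit (N -> infinity) estimate.
import numpy as np

N = np.array([256, 500, 864, 1372])              # particles per simulation box
fe = np.array([-5.921, -5.934, -5.940, -5.943])  # free energy per particle (hypothetical)

slope, intercept = np.polyfit(1.0 / N, fe, 1)
print("thermodynamic-limit estimate:", intercept)
```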

  15. Extrapolation of scattering data to the negative-energy region. II. Applicability of effective range functions within an exactly solvable model

    DOE PAGES

    Blokhintsev, L. D.; Kadyrov, A. S.; Mukhamedzhanov, A. M.; ...

    2018-02-05

    A problem of analytical continuation of scattering data to the negative-energy region to obtain information about bound states is discussed within an exactly solvable potential model. This work is a continuation of a previous one by the same authors [L. D. Blokhintsev et al., Phys. Rev. C 95, 044618 (2017)]. The goal of this paper is to determine the most effective way of analytic continuation for different systems. The d + α and α + 12C systems are considered and, for comparison, an effective-range function approach and a recently suggested Δ method [O. L. Ramírez Suárez and J.-M. Sparenberg, Phys. Rev. C 96, 034601 (2017)] are applied. We conclude that the Δ method is more effective for heavier systems with large values of the Coulomb parameter, whereas for light systems with small values of the Coulomb parameter the effective-range function method might be preferable.

  17. Solar Physics

    NASA Technical Reports Server (NTRS)

    Wu, S. T.

    2000-01-01

    The areas of emphasis are: (1) develop theoretical models of the transient release of magnetic energy in the solar atmosphere, e.g., in solar flares, eruptive prominences, coronal mass ejections, etc.; (2) investigate the role of the Sun's magnetic field in the structuring of the solar corona through three-dimensional numerical models that describe the field configuration at various heights in the solar atmosphere by extrapolating the field at the photospheric level; (3) develop numerical models to investigate the physical parameters obtained by the ULYSSES mission; (4) develop numerical and theoretical models to investigate solar activity effects on solar wind characteristics for the establishment of the solar-interplanetary transmission line; and (5) develop new instruments to measure solar magnetic fields and other features in the photosphere, chromosphere, transition region, and corona. We focused our investigation on the fundamental physical processes in the solar atmosphere that directly affect our planet Earth. The overall goal is to establish the physical processes behind the Sun-Earth connections.

  18. Toxicokinetic Model Development for the Insensitive Munitions Component 2,4-Dinitroanisole.

    PubMed

    Sweeney, Lisa M; Goodwin, Michelle R; Hulgan, Angela D; Gut, Chester P; Bannon, Desmond I

    2015-01-01

    The Armed Forces are developing new explosives that are less susceptible to unintentional detonation (insensitive munitions [IMX]). 2,4-Dinitroanisole (DNAN) is a component of IMX. Toxicokinetic data for DNAN are required to support the interpretation of toxicology studies and the refinement of dose estimates for human risk assessment. Male Sprague-Dawley rats were dosed by gavage (5, 20, or 80 mg DNAN/kg), and blood and tissue samples were analyzed to determine the levels of DNAN and its metabolite 2,4-dinitrophenol (DNP). These data and data from the literature were used to develop preliminary physiologically based pharmacokinetic (PBPK) models. The model simulations indicated saturable metabolism of DNAN in rats at the higher tested doses. The PBPK model was extrapolated to estimate the toxicokinetics of DNAN and DNP in humans, allowing the estimation of human-equivalent no-effect levels of DNAN exposure from no-observed-adverse-effect levels determined in laboratory animals, which may guide the selection of exposure limits for DNAN. © The Author(s) 2015.

  19. An experimental extrapolation technique using the Gafchromic EBT3 film for relative output factor measurements in small x-ray fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morales, Johnny E., E-mail: johnny.morales@lh.org.

    Purpose: An experimental extrapolation technique is presented, which can be used to determine the relative output factors for very small x-ray fields using Gafchromic EBT3 film. Methods: Relative output factors were measured for the Brainlab SRS cones ranging in diameter from 4 to 30 mm on a Novalis Trilogy linear accelerator with 6 MV SRS x-rays. The relative output factor was determined from an experimental reducing circular region of interest (ROI) extrapolation technique developed to remove the effects of volume averaging. This was achieved by scanning the EBT3 film measurements at a high resolution of 1200 dpi. From the high-resolution scans, the size of the circular regions of interest was varied to produce a plot of relative output factors versus area of analysis. The plot was then extrapolated to zero area to determine the relative output factor corresponding to zero volume. Results: For the 4 mm field size, the extrapolated relative output factor was 0.651 ± 0.018, compared with 0.639 ± 0.019 and 0.633 ± 0.021 for 0.5 and 1.0 mm diameters of analysis, respectively. This corresponds to changes in the relative output factor of 1.8% and 2.8% at these ROI sizes. In comparison, the 25 mm cone had negligible differences in the measured output factor between the zero-area extrapolation and the 0.5 and 1.0 mm diameter ROIs. Conclusions: This work shows that for very small fields such as the 4.0 mm cone, a measurable difference can be seen in the relative output factor depending on the size of the circular ROI used in radiochromic film dosimetry. The authors recommend scanning Gafchromic EBT3 film at a resolution of 1200 dpi for cone sizes less than 7.5 mm and using an extrapolation technique for output factor measurements in very small field dosimetry.
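    The reducing-ROI extrapolation can be illustrated with a short sketch: fit the measured relative output factors against ROI area and extrapolate the linear trend to zero area. The values below are hypothetical, loosely patterned on the magnitudes quoted above.

```python
# Sketch of zero-area extrapolation of relative output factors; all
# diameters and output factors here are invented for illustration.
import numpy as np

diam_mm = np.array([0.25, 0.5, 1.0, 1.5])       # circular ROI diameters analysed
rof = np.array([0.646, 0.639, 0.633, 0.627])    # measured relative output factors

area = np.pi * (diam_mm / 2.0) ** 2             # ROI areas (mm^2)
slope, intercept = np.polyfit(area, rof, 1)     # linear trend in area
print("zero-area relative output factor:", round(intercept, 3))
```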

  20. Review of DoD Malaria Research Programs,

    DTIC Science & Technology

    1992-05-01

    the irradiated sporozoite vaccine. Work in the mouse model system and then extrapolate to human malarias. Study naturally acquired immune ...recombinant vaccines. Work simultaneously in the mouse model system and with human malarias. 3. Identify targets and mechanisms of protective immunity not...multivalent vaccines that attack these same targets. 3. Working again in the mouse model, non-human primate model, and human systems we

  1. Combustion Technology for Incinerating Wastes from Air Force Industrial Processes.

    DTIC Science & Technology

    1984-02-01

    The assumption of equilibrium between environmental compartments. * The statistical extrapolations yielding "safe" doses of various constituents...would be contacted to identify the assumptions and data requirements needed to design, construct and implement the model. The model's primary objective...Recovery Planning Model (RRPLAN) is described. This section of the paper summarizes the model's assumptions, major components and modes of operation

  2. Predicting Lactational and Early Post-Weaning Exposures in Rats Using Biologically Based Pharmacokinetic Modeling

    EPA Science Inventory

    Risk and safety assessments for early life exposures to environmental chemicals or pharmaceuticals based on cross-species extrapolation would greatly benefit from information on chemical dosimetry in the young.

  3. Semileptonic decays of Λ _c baryons in the relativistic quark model

    NASA Astrophysics Data System (ADS)

    Faustov, R. N.; Galkin, V. O.

    2016-11-01

    Motivated by recent experimental progress in studying weak decays of the Λ_c baryon, we investigate its semileptonic decays in the framework of the relativistic quark model based on the quasipotential approach with the QCD-motivated potential. The form factors of the Λ_c → Λlν_l and Λ_c → nlν_l decays are calculated in the whole accessible kinematical region without extrapolations or additional model assumptions. Relativistic effects are systematically taken into account, including transformations of baryon wave functions from the rest frame to the moving reference frame and contributions of the intermediate negative-energy states. Baryon wave functions found in previous mass spectrum calculations are used for the numerical evaluation. Comprehensive predictions for decay rates, asymmetries, and polarization parameters are given. They agree well with available experimental data.

  4. On the existence of the optimal order for wavefunction extrapolation in Born-Oppenheimer molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn; CAEP Software Center for High Performance Numerical Simulation, Beijing

    2016-06-28

    Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn–Sham density functional theory. Going against the intuition that a higher extrapolation order yields better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and that an optimal extrapolation order, in terms of the minimal number of SCF iterations, always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or stricter SCF convergence criteria. Using example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest that BOMD simulation packages should expose this choice in the user interface and provide more options for the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, so implementing either of them leads to no essential difference in the extrapolation accuracy.
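    The accuracy-versus-order trade-off can be reproduced with a toy example: extrapolate the next value of a smooth trajectory contaminated with a small "SCF-level" noise from its k previous steps using an order-(k-1) polynomial, and watch the mean error first fall and then rise with k. This is a schematic stand-in for wavefunction extrapolation, not the authors' implementation.

```python
# Toy illustration of the optimal-extrapolation-order effect.
import numpy as np

rng = np.random.default_rng(1)
dt = 0.5
ts = np.arange(0.0, 40.0, dt)
traj = np.cos(0.7 * ts) + 1e-4 * rng.standard_normal(ts.size)  # stand-in coefficient

for k in range(1, 8):                       # k previous points -> order k-1
    errs = []
    for i in range(k, ts.size):
        x = ts[i - k:i] - ts[i]             # centre on target time for conditioning
        coef = np.polyfit(x, traj[i - k:i], k - 1)
        errs.append(abs(np.polyval(coef, 0.0) - traj[i]))
    print(k, float(np.mean(errs)))          # error decreases, then noise dominates
```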

  5. UV testing of solar cells: Effects of antireflective coating, prior irradiation, and UV source

    NASA Technical Reports Server (NTRS)

    Meulenberg, A.

    1993-01-01

    Short-circuit current degradation of electron-irradiated double-layer antireflective-coated cells after 3000 hours of ultraviolet (UV) exposure exceeds 3 percent; extrapolation of the data to 10^5 hours (11.4 yrs) gives a degradation that exceeds 10 percent. Significant qualitative and quantitative differences in degradation were observed between cells with double- and single-layer antireflective coatings. The effects of UV-source age were observed and corrections were made to the data. An additional degradation mechanism was identified that occurs only in previously electron-irradiated solar cells, since identical unirradiated cells degrade by only 6 ± 3 percent when extrapolated to 10^5 hours of UV illumination.

  6. Extrapolating bound state data of anions into the metastable domain

    NASA Astrophysics Data System (ADS)

    Feuerbacher, Sven; Sommerfeld, Thomas; Cederbaum, Lorenz S.

    2004-10-01

    Computing energies of electronically metastable resonance states is still a great challenge. Both scattering techniques and quantum-chemistry-based L2 methods are very time consuming. Here we investigate two more economical extrapolation methods. Extrapolating bound-state energies into the metastable region using increased nuclear charges was suggested almost 20 years ago. We critically evaluate this attractive technique employing our complex absorbing potential/Green's function method, which allows us to follow a bound state into the continuum. Using the 2Πg resonance of N2- and the 2Πu resonance of CO2- as examples, we found that the extrapolation works surprisingly well. The second extrapolation method involves increasing bond lengths until the sought resonance becomes stable. The keystone is to extrapolate the attachment energy and not the total energy of the system. This method has the great advantage that the whole potential energy curve is obtained with quite good accuracy by the extrapolation. Limitations of the two techniques are discussed.

  7. Identification of novel uncertainty factors and thresholds of toxicological concern for health hazard and risk assessment: Application to cleaning product ingredients.

    PubMed

    Wang, Zhen; Scott, W Casan; Williams, E Spencer; Ciarlo, Michael; DeLeo, Paul C; Brooks, Bryan W

    2018-04-01

    Uncertainty factors (UFs) are commonly used during hazard and risk assessments to address uncertainties, including extrapolations among mammals and across experimental durations. In risk assessment, default values are routinely used for interspecies extrapolation and interindividual variability. Whether default UFs are sufficient for various chemical uses or specific chemical classes remains understudied, particularly for ingredients in cleaning products. Therefore, we examined publicly available acute median lethal dose (LD50) values, and reproductive and developmental no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) values for the rat model (oral). We employed probabilistic chemical toxicity distributions to identify likelihoods of encountering acute, subacute, subchronic and chronic toxicity thresholds for specific chemical categories and ingredients in cleaning products. We subsequently identified thresholds of toxicological concern (TTC) and then various UFs for: 1) acute (LD50)-to-chronic (reproductive/developmental NOAEL) ratios (ACRs), 2) exposure duration extrapolations (e.g., subchronic-to-chronic; reproductive/developmental), and 3) LOAEL-to-NOAEL ratios considering subacute/acute developmental responses. These ratios (95% CIs) were calculated from pairwise threshold levels using Monte Carlo simulations to identify UFs for all ingredients in cleaning products. Based on data availability, chemical category-specific UFs were also identified for aliphatic acids and salts, aliphatic alcohols, inorganic acids and salts, and alkyl sulfates. In a number of cases, the derived UFs were smaller than the default values (e.g., 10) employed by regulatory agencies; however, larger UFs were occasionally identified. Such UFs could be used by assessors instead of relying on default values. These approaches for identifying mammalian TTCs and diverse UFs represent robust alternatives to the application of default values for ingredients in cleaning products and other chemical classes. The findings can also support chemical substitutions during alternatives assessment, data dossier development (e.g., read-across), identification of TTCs, and screening-level hazard and risk assessment when toxicity data are unavailable for specific chemicals. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. A well-balanced meshless tsunami propagation and inundation model

    NASA Astrophysics Data System (ADS)

    Brecht, Rüdiger; Bihlo, Alexander; MacLachlan, Scott; Behrens, Jörn

    2018-05-01

    We present a novel meshless tsunami propagation and inundation model. We discretize the nonlinear shallow-water equations using a well-balanced scheme relying on radial basis function based finite differences. For the inundation model, radial basis functions are used to extrapolate the dry region from nearby wet points. Numerical results against standard one- and two-dimensional benchmarks are presented.
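    A minimal sketch (not the authors' code) of the RBF-based wet-to-dry extension described above, using SciPy's RBFInterpolator to evaluate a field known at scattered wet nodes at nearby dry nodes; the geometry and field below are made up.

```python
# Extend a field from scattered "wet" nodes to nearby "dry" nodes with RBFs.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
wet_xy = rng.uniform(0.0, 1.0, size=(200, 2))              # scattered wet nodes
eta = np.sin(3 * wet_xy[:, 0]) * np.cos(2 * wet_xy[:, 1])  # water surface there

dry_xy = np.array([[1.05, 0.4], [1.10, 0.5]])              # nearby dry nodes
print(RBFInterpolator(wet_xy, eta, neighbors=20)(dry_xy))  # extrapolated values
```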

  9. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners]

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  10. A consistent two-mutation model of bone cancer for two data sets of radium-injected beagles.

    PubMed

    Bijwaard, H; Brugmans, M J P; Leenhouts, H P

    2002-09-01

    A two-mutation carcinogenesis model has been applied to model osteosarcoma incidence in two data sets of beagles injected with 226Ra. Taking age-specific retention into account, the following results have been obtained: (1) a consistent and well-fitting solution for all age and dose groups, (2) mutation rates that are linearly dependent on dose rate, with an exponential decrease for the second mutation at high dose rates, (3) a linear-quadratic dose-effect relationship, which indicates that care should be taken when extrapolating linearly, (4) highest cumulative incidences for injection at young adult age, and highest risks for injection doses of a few kBq kg(-1) at these ages, and (5) when scaled appropriately, the beagle model compares fairly well with a description for radium dial painters, suggesting that a consistent model description of bone cancer induction in beagles and humans may be possible.

  11. TIME AND CONCENTRATION DEPENDENT ACCUMULATION OF [3H]-DELTAMETHRIN IN XENOPUS LAEVIS OOCYTES.

    EPA Science Inventory

    Cell culture models are often used in mechanistic studies of toxicant action. However, one area of uncertainty is the extrapolation of dose from the in vitro model to the in vivo tissue. A common assumption of in vitro studies is that media concentration is a predictive marker of...

  12. DEVELOPMENT OF A PHYSIOLOGICALLY BASED PHARMOKINETICS (PBPK) MODEL TO COMPARE DIFFERENCES IN DISPOSITION OF TRICHLOROETHYLENE (TCE) IN ADULT VERSUS ELDERLY RATS

    EPA Science Inventory

    Due to the increasing number of elderly in the American Population, the question as to whether the aged have different susceptibility to environmental contaminants needs to be addressed. Physiologically based pharmacokinetic (PBPK) models are used to extrapolate between rodents (...

  13. Simulating Electron Cyclotron Maser Emission for Low Mass Stars

    NASA Astrophysics Data System (ADS)

    Llama, Joe; Jardine, Moira

    2018-01-01

    Zeeman-Doppler Imaging (ZDI) is a powerful technique that enables us to map the large-scale magnetic fields of stars spanning the pre- and main-sequence. Coupling these magnetic maps with field extrapolation methods allows us to investigate the topology of the closed, X-ray bright corona and the cooler, open stellar wind. Using ZDI maps of young M dwarfs with simultaneous radio light curves obtained from the VLA, we present the results of modeling the electron-cyclotron maser (ECM) emission from these systems. We determine the X-ray luminosity and ECM emission produced using the ZDI maps and our field extrapolation model, and we compare these findings with the observed radio light curves of these stars. This allows us to predict the relative phasing and amplitude of the stellar X-ray and radio light curves. Benchmarking our model against these systems lets us predict the ECM emission for all stars that have a ZDI map and an observed X-ray luminosity. Our model helps us understand the origin of transient radio emission observations and is crucial for disentangling stellar and exoplanetary radio signals.

  14. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
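    The subsampling extrapolation idea can be sketched as follows: recomputing the error-prone summary covariate from subsamples of the baseline records inflates its estimation error in a controlled way, so regressing the resulting coefficient estimates against 1/m (m = subsample size) and extrapolating to 1/m → 0 approximates the error-free coefficient. This simulation is a rough illustration under simple assumptions, not the authors' estimator.

```python
# Rough sketch of subsampling extrapolation for a covariate that is the
# mean of noisy daily records; all data below are simulated.
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_days = 300, 30
true_mean = rng.gamma(2.0, 2.0, n_subj)                  # true daily-use level
daily = rng.poisson(true_mean[:, None], (n_subj, n_days))
y = 1.5 * true_mean + rng.standard_normal(n_subj)        # outcome model, slope 1.5

def slope(m):
    idx = rng.choice(n_days, size=m, replace=False)      # subsample m days
    x = daily[:, idx].mean(axis=1)                       # error-prone covariate
    return np.polyfit(x, y, 1)[0]                        # attenuated OLS slope

ms = np.array([5, 10, 15, 20, 30])
betas = np.array([np.mean([slope(m) for _ in range(200)]) for m in ms])
print("extrapolated slope:", np.polyfit(1.0 / ms, betas, 1)[1])  # intercept at 1/m = 0
```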

  15. Very high resolution surface mass balance over Greenland modeled by the regional climate model MAR with a downscaling technique

    NASA Astrophysics Data System (ADS)

    Kittel, Christoph; Lang, Charlotte; Agosta, Cécile; Prignon, Maxime; Fettweis, Xavier; Erpicum, Michel

    2016-04-01

    This study presents surface mass balance (SMB) results at 5 km resolution from the regional climate model MAR over the Greenland ice sheet. Here, we use the latest MAR version (v3.6), in which the land-ice module (SISVAT), which uses a high-resolution (5 km) grid for surface variables, is fully coupled to the MAR atmospheric module running at a lower resolution of 10 km. This online downscaling technique makes it possible to correct the MAR near-surface temperature and humidity by an elevation-based gradient before forcing SISVAT. The 10 km precipitation is not corrected. Corrections are strongest over the ablation zone, where the topography varies most. The model was forced by ERA-Interim between 1979 and 2014. We will show the advantages of using an online SMB downscaling technique with respect to an offline downscaling extrapolation based on local SMB vertical gradients. The results at 5 km show better agreement with the PROMICE surface mass balance database than the extrapolated 10 km MAR SMB results.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willemin, Marie-Emilie; Lumen, Annie, E-mail: Anni

    Thyroid homeostasis can be disturbed by thiocyanate exposure from the diet or tobacco smoke. Thiocyanate inhibits both thyroidal uptake of iodide, via the sodium-iodide symporter (NIS), and thyroid hormone (TH) synthesis in the thyroid, via thyroid peroxidase (TPO), but the mode of action of thiocyanate is poorly quantified in the literature. Characterizing the link between intra-thyroidal thiocyanate concentrations and dose of exposure is crucial for assessing the risk of thyroid perturbations due to thiocyanate exposure. We developed a PBPK model for thiocyanate that describes its kinetics in the whole body up to daily doses of 0.15 mmol/kg, with a mechanistic description of the thyroidal kinetics including NIS, passive diffusion, and TPO. The model was calibrated in a Bayesian framework using published studies in rats. Goodness-of-fit was satisfactory, especially for intra-thyroidal thiocyanate concentrations. Thiocyanate kinetic processes were quantified in vivo, including the metabolic clearance by TPO. The passive diffusion rate was found to be greater than the NIS-mediated uptake rate. The model captured the dose-dependent kinetics of thiocyanate after acute and chronic exposures. Model behavior was evaluated using a Morris screening test. The distribution of thiocyanate into the thyroid was found to be determined primarily by the partition coefficient, followed by NIS and passive diffusion; the impact of the latter two mechanisms appears to increase at very low doses. Extrapolation to humans resulted in good predictions of thiocyanate kinetics during chronic exposure. The developed PBPK model can be used in risk assessment to quantify dose-response effects of thiocyanate on TH. - Highlights: • A PBPK model of thiocyanate (SCN−) was calibrated in rats in a Bayesian framework. • The intra-thyroidal kinetics of thiocyanate, including NIS and TPO, was modeled. • The passive diffusion rate for SCN− appears to be greater than the NIS-mediated uptake. • The dose-dependent kinetics of SCN− was captured after acute and chronic exposures. • The PBPK model of thiocyanate was successfully extrapolated to humans.

  17. Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.

    PubMed

    Dropkin, Greg

    2016-11-24

    Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. Here, GAMs are applied to cancer incidence in female low-dose subcohorts, using anonymous public data for the 1958-1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from the 0-100 mGy and 0-20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit and the capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, when removing exceptional data cells, and when adjusting the neutron RBE. Estimates of excess RR at 10 mGy using the cumulative dose distribution are 10-45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasi-Poisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis and of Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low-dose radiation from the atomic bombings. These risks should be reflected in protection standards.
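    The flexible dose-response modelling can be approximated with a spline-based Poisson regression on person-year data; a hedged sketch with simulated data follows (the paper used GAMs on the Life Span Study data; every name and number below is illustrative).

```python
# Spline-based Poisson dose-response sketch with a person-year offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "dose": rng.gamma(1.0, 50.0, 2000),              # mGy, skewed like the cohort
    "pyr": rng.uniform(5e2, 5e3, 2000),              # person-years at risk
})
rate = 1e-4 * (1 + 0.8 * np.tanh(df["dose"] / 150))  # tapering response
df["cases"] = rng.poisson(rate * df["pyr"])

fit = smf.glm("cases ~ bs(dose, df=4)", data=df,     # patsy B-spline in dose
              family=sm.families.Poisson(),
              offset=np.log(df["pyr"])).fit()
print(fit.params)
```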

  18. ROUTE-SPECIFIC DOSIMETRY

    EPA Science Inventory

    The capacity to perform route-to-route extrapolation of toxicity data is becoming increasingly crucial to the Agency, with a number of strategies suggested and demonstrated. One strategy involves using a combination of existing data and modeling approaches. This strategy propos...

  19. Toxicokinetic Triage for Environmental Chemicals

    EPA Science Inventory

    Toxicokinetic (TK) models are essential for linking administered doses to blood and tissue concentrations. In vitro-to-in vivo extrapolation (IVIVE) methods have been developed to determine TK from limited in vitro measurements and chemical structure-based property predictions, p...

  20. Statistical validation of predictive TRANSP simulations of baseline discharges in preparation for extrapolation to JET D-T

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; Contributors, JET

    2017-06-01

    This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted T_e with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good match of predicted T_i with experimental measurements, allowing for a more accurate prediction of the neutron yields. The impact of input data and assumptions prescribed in the simulations is also investigated in this paper. The statistical validation and the assessment of the uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.

  1. Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes

    NASA Astrophysics Data System (ADS)

    Lavaux, Guilhem; Jasche, Jens

    2016-01-01

    This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations, and the matter density fields within a Gaussian statistics approximation. The second step performs a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology, and allows for the reconstruction of both the final density field and the initial conditions at z = 1000. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian-based technique to classify structures.

  2. Physiologically-based pharmacokinetic model for Fentanyl in support of the development of Provisional Advisory Levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shankaran, Harish, E-mail: harish.shankaran@pnnl.gov; Adeshina, Femi; Teeguarden, Justin G.

    Provisional Advisory Levels (PALs) are tiered exposure limits for toxic chemicals in air and drinking water that are developed to assist in emergency responses. Physiologically-based pharmacokinetic (PBPK) modeling can support this process by enabling extrapolations across doses and exposure routes, thereby addressing gaps in the available toxicity data. Here, we describe the development of a PBPK model for Fentanyl – a synthetic opioid used clinically for pain management – to support the establishment of PALs. Starting from an existing model for intravenous Fentanyl, we first optimized distribution and clearance parameters using several additional IV datasets. We then calibrated the model using pharmacokinetic data for various formulations, and determined the absorbed fraction, F, and the time taken for the absorbed amount to reach 90% of its final value, t90. For aerosolized pulmonary Fentanyl, F = 1 and t90 < 1 min, indicating complete and rapid absorption. The F value ranged from 0.35 to 0.74 for oral and various transmucosal routes. Oral Fentanyl was absorbed the slowest (t90 ∼ 300 min); the absorption of intranasal Fentanyl was relatively rapid (t90 ∼ 20–40 min); and the various oral transmucosal routes had intermediate absorption rates (t90 ∼ 160–300 min). Based on these results, for inhalation exposures, we assumed that all of the Fentanyl inhaled from the air during each breath directly and instantaneously enters the arterial circulation. We present model predictions of Fentanyl blood concentrations in oral and inhalation scenarios relevant for PAL development, and provide an analytical expression that can be used to extrapolate between oral and inhalation routes for the derivation of PALs. - Highlights: • We develop a Fentanyl PBPK model for relating external dose to internal levels. • We calibrate the model to oral and inhalation exposures using > 50 human datasets. • Model predictions are in good agreement with the available pharmacokinetic data. • The model can be used for extrapolating across routes, doses and exposure durations. • We illustrate how the model can be used for developing Provisional Advisory Levels.
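    The (F, t90) description above maps naturally onto a first-order absorption model: if the cumulative absorbed amount is A(t) = F·Dose·(1 − exp(−ka·t)), then t90 = ln(10)/ka. A small sketch, with hypothetical rate constants rather than the paper's calibrated values:

```python
# Relating t90 to a first-order absorption rate constant ka (hypothetical values).
import numpy as np

def t90_from_ka(ka):
    """Time for the absorbed amount to reach 90% of its final value."""
    return np.log(10.0) / ka

def ka_from_t90(t90):
    """Inverse relation: recover ka from an observed t90."""
    return np.log(10.0) / t90

print(t90_from_ka(0.008))   # ka in 1/min -> t90 ~ 288 min (oral-like)
print(ka_from_t90(30.0))    # t90 = 30 min -> ka ~ 0.077 1/min (intranasal-like)
```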

  3. Mantle-circulation models with sequential data assimilation: inferring present-day mantle structure from plate-motion histories.

    PubMed

    Bunge, Hans-Peter; Richards, M A; Baumgardner, J R

    2002-11-15

    Data assimilation is an approach to studying geodynamic models consistent simultaneously with observables and the governing equations of mantle flow. Such an approach is essential in mantle circulation models, where we seek to constrain an unknown initial condition some time in the past, and thus cannot hope to use first-principles convection calculations to infer the flow history of the mantle. One of the most important observables for mantle-flow history comes from models of Mesozoic and Cenozoic plate motion that provide constraints not only on the surface velocity of the mantle but also on the evolution of internal mantle-buoyancy forces due to subducted oceanic slabs. Here we present five mantle circulation models with an assimilated plate-motion history spanning the past 120 Myr, a time period for which reliable plate-motion reconstructions are available. All models agree well with upper- and mid-mantle heterogeneity imaged by seismic tomography. A simple standard model of whole-mantle convection, including a factor 40 viscosity increase from the upper to the lower mantle and predominantly internal heat generation, reveals downwellings related to Farallon and Tethys subduction. Adding 35% bottom heating from the core has the predictable effect of producing prominent high-temperature anomalies and a strong thermal boundary layer at the base of the mantle. Significantly delaying mantle flow through the transition zone either by modelling the dynamic effects of an endothermic phase reaction or by including a steep, factor 100, viscosity rise from the upper to the lower mantle results in substantial transition-zone heterogeneity, enhanced by the effects of trench migration implicit in the assimilated plate-motion history. An expected result is the failure to account for heterogeneity structure in the deepest mantle below 1500 km, which is influenced by Jurassic plate motions and thus cannot be modelled from sequential assimilation of plate motion histories limited in age to the Cretaceous. This result implies that sequential assimilation of past plate-motion models is ineffective in studying the temporal evolution of core-mantle-boundary heterogeneity, and that a method for extrapolating present-day information backwards in time is required. For short time periods (of the order of perhaps a few tens of Myr) such a method exists in the form of crude 'backward' convection calculations. For longer time periods (of the order of a mantle overturn), a rigorous approach to extrapolating information back in time exists in the form of iterative nonlinear optimization methods that carry assimilated information into the past through the use of an adjoint mantle convection model.

  4. Interspecies Extrapolation

    EPA Science Inventory

    Interspecies extrapolation encompasses two related but distinct topic areas that are germane to quantitative extrapolation and hence computational toxicology: dose scaling and parameter scaling. Dose scaling is the process of converting a dose determined in an experimental animal ...

  5. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments

    PubMed Central

    Dittrich, Sandra; Noesselt, Tömme

    2018-01-01

    Predicting motion is essential for many everyday life activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white noise (congruent or non-congruent to visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during detection task. In contrast, for a more complex extrapolation task group mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired for the other one. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate the ability of audiovisual motion prediction and need to be considered in future research. PMID:29618999

  7. Mapping local and global variability in plant trait distributions

    DOE PAGES

    Butler, Ethan E.; Datta, Abhirup; Flores-Moreno, Habacuc; ...

    2017-12-01

    Accurate trait-environment relationships and global maps of plant trait distributions represent a needed stepping stone in global biogeography and are critical constraints of key parameters for land models. Here, we use a global data set of plant traits to map trait distributions closely coupled to photosynthesis and foliar respiration: specific leaf area (SLA), and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm). We propose two models to extrapolate geographically sparse point data to continuous spatial surfaces. The first is a categorical model using species mean trait values, categorized into plant functional types (PFTs) and extrapolated to PFT occurrence ranges identified by remote sensing. The second is a Bayesian spatial model that incorporates information about PFT, location, and environmental covariates to estimate trait distributions. Both models are further stratified by varying the number of PFTs. The performance of the models was evaluated based on their explanatory and predictive ability. The Bayesian spatial model leveraging the largest number of PFTs produced the best maps. The interpolation of full trait distributions enables a wider diversity of vegetation to be represented across the land surface. These maps may be used as input to Earth System Models and to evaluate other estimates of functional diversity.

  8. Alternative Method to Simulate a Sub-idle Engine Operation in Order to Synthesize Its Control System

    NASA Astrophysics Data System (ADS)

    Sukhovii, Sergii I.; Sirenko, Feliks F.; Yepifanov, Sergiy V.; Loboda, Igor

    2016-09-01

    The steady-state and transient engine performances in control systems are usually evaluated with thermodynamic engine models. Most models operate between the idle and maximum power points; only recently have they begun to address the sub-idle operating range. The lack of information about the component maps at sub-idle modes presents a challenging problem. A common method to cope with the problem is to extrapolate the component performances to the sub-idle range, but precise extrapolation is itself a challenge. As a rule, studies address only particular aspects of the problem, such as the light-off of the combustion chamber or the turbine operation with an unlit combustion chamber. However, there are no reports of a model that considers all of these aspects and simulates the engine starting. The proposed paper addresses a new method to simulate the starting. The method substitutes a linear dynamic model, supplemented with a simplified static model, for the non-linear thermodynamic model. The latter model is a set of direct relations between parameters that are used in the control algorithms instead of the commonly used component performances. Specifically, this model consists of simplified relations between the gas path parameters and the corrected rotational speed.

  10. Pharmacokinetic-Pharmacodynamic Modeling in Pediatric Drug Development, and the Importance of Standardized Scaling of Clearance.

    PubMed

    Germovsek, Eva; Barker, Charlotte I S; Sharland, Mike; Standing, Joseph F

    2018-04-19

    Pharmacokinetic/pharmacodynamic (PKPD) modeling is important in the design and conduct of clinical pharmacology research in children. During drug development, PKPD modeling and simulation should underpin rational trial design and facilitate extrapolation to investigate efficacy and safety. The application of PKPD modeling to optimize dosing recommendations and therapeutic drug monitoring is also increasing, and PKPD model-based dose individualization will become a core feature of personalized medicine. Following extensive progress on pediatric PK modeling, a greater emphasis now needs to be placed on PD modeling to understand age-related changes in drug effects. This paper discusses the principles of PKPD modeling in the context of pediatric drug development, summarizing how important PK parameters, such as clearance (CL), are scaled with size and age, and highlights a standardized method for CL scaling in children. One standard scaling method would facilitate comparison of PK parameters across multiple studies, thus increasing the utility of existing PK models and facilitating optimal design of new studies.
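    The standardized scaling the authors argue for is commonly implemented as fixed-exponent allometric weight scaling combined with a sigmoidal maturation function; the sketch below uses that widely cited form with example parameter values, not necessarily those of this paper.

        def scaled_clearance(cl_adult, weight_kg, pma_weeks, tm50=47.7, hill=3.4):
            """Allometric size scaling plus maturation (illustrative parameters)."""
            size = (weight_kg / 70.0) ** 0.75                 # fixed 3/4-power exponent
            maturation = pma_weeks**hill / (tm50**hill + pma_weeks**hill)
            return cl_adult * size * maturation

        # e.g. a ~6-month-old (7.5 kg, postmenstrual age ~66 weeks)
        # versus a 70-kg adult with CL = 10 L/h
        print(round(scaled_clearance(10.0, 7.5, 66.0), 2))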

  11. Comparison of spatiotemporal prediction models of daily exposure of individuals to ambient nitrogen dioxide and ozone in Montreal, Canada.

    PubMed

    Buteau, Stephane; Hatzopoulou, Marianne; Crouse, Dan L; Smargiassi, Audrey; Burnett, Richard T; Logan, Travis; Cavellin, Laure Deville; Goldberg, Mark S

    2017-07-01

    In previous studies investigating the short-term health effects of ambient air pollution, the exposure metric that is often used is the daily average across monitors, thus assuming that all individuals have the same daily exposure. Studies that incorporate space-time exposures of individuals are essential to further our understanding of the short-term health effects of ambient air pollution. As part of a longitudinal cohort study of the acute effects of air pollution that incorporated subject-specific information and medical histories of subjects throughout the follow-up, the purpose of this study was to develop and compare different prediction models, using data from fixed-site monitors and other monitoring campaigns, to estimate daily, spatially-resolved concentrations of ozone (O3) and nitrogen dioxide (NO2) at participants' residences in Montreal, 1991-2002. We used the following methods to predict spatially-resolved daily concentrations of O3 and NO2 for each geographic region in Montreal (defined by three-character postal code areas): (1) assigning concentrations from the nearest monitor; (2) spatial interpolation using inverse-distance weighting; (3) back-extrapolation from a land-use regression model built on a dense monitoring survey; and (4) a combination of a land-use and Bayesian maximum entropy model. We used a variety of indices of agreement to compare estimates of exposure assigned from the different methods, notably scatterplots of pairwise predictions, distributions of differences, and the absolute-agreement intraclass correlation coefficient (ICC). For each pairwise comparison, we also produced maps of the ICCs by these regions indicating the spatial variability in the degree of agreement. We found some substantial differences in agreement across pairs of methods in daily mean predicted concentrations of O3 and NO2. On a given day and postal code area the difference in the concentration assigned could be as high as 131 ppb for O3 and 108 ppb for NO2. For both pollutants, better agreement was found between predictions from the nearest monitor and the inverse-distance weighting interpolation methods, with ICCs of 0.89 (95% confidence interval (CI): 0.89, 0.89) for O3 and 0.81 (95% CI: 0.80, 0.81) for NO2; for this pair of methods the maximum difference on a given day and postal code area was 36 ppb for O3 and 74 ppb for NO2. The back-extrapolation method showed a higher degree of disagreement with the nearest-monitor, inverse-distance weighting, and Bayesian maximum entropy methods, which were strongly constrained by the sparse monitoring network. The maps showed that the patterns of agreement differed across the postal code areas and that the variability depended on the pair of methods compared and on the pollutant. For O3, but not NO2, postal areas showing greater disagreement were mostly located near the city centre and along highways, especially in maps involving the back-extrapolation method. In view of the substantial differences in daily concentrations of O3 and NO2 predicted by the different methods, we suggest that analyses of the health effects of air pollution make use of multiple exposure assessment methods. Although we cannot make any recommendations as to which is the most valid method, models that make use of more highly spatially resolved data, such as from dense exposure surveys or from high-spatial-resolution satellite data, likely provide the most valid estimates.
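    A sketch of method (2), inverse-distance weighting, with invented monitor coordinates and concentrations:

        import numpy as np

        def idw(xy_monitors, values, xy_target, power=2.0):
            """Inverse-distance-weighted estimate at a target location."""
            d = np.linalg.norm(xy_monitors - xy_target, axis=1)
            if np.any(d == 0):                 # target coincides with a monitor
                return values[d == 0][0]
            w = 1.0 / d**power
            return np.sum(w * values) / np.sum(w)

        monitors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])  # km grid (toy)
        no2_ppb = np.array([21.0, 14.0, 18.0])
        print(idw(monitors, no2_ppb, np.array([3.0, 2.0])))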

  12. Animal models and conserved processes

    PubMed Central

    2012-01-01

    Background The concept of conserved processes presents unique opportunities for using nonhuman animal models in biomedical research. However, the concept must be examined in the context that humans and nonhuman animals are evolved, complex, adaptive systems. Given that nonhuman animals are examples of living systems that are differently complex from humans, what does the existence of a conserved gene or process imply for inter-species extrapolation? Methods We surveyed the literature including philosophy of science, biological complexity, conserved processes, evolutionary biology, comparative medicine, anti-neoplastic agents, inhalational anesthetics, and drug development journals in order to determine the value of nonhuman animal models when studying conserved processes. Results Evolution through natural selection has employed components and processes both to produce the same outcomes among species and to generate different functions and traits. Many genes and processes are conserved, but new combinations of these processes or different regulation of the genes involved in these processes have resulted in unique organisms. Further, there is a hierarchy of organization in complex living systems. At some levels, the components are simple systems that can be analyzed by mathematics or the physical sciences, while at other levels the system cannot be fully analyzed by reducing it to a physical system. The study of complex living systems must alternate between focusing on the parts and examining the intact whole organism while taking into account the connections between the two. Systems biology aims for this holism. We examined the actions of inhalational anesthetic agents and anti-neoplastic agents in order to address what the characteristics of complex living systems imply for inter-species extrapolation of traits and responses related to conserved processes. Conclusion We conclude that even the presence of conserved processes is insufficient for inter-species extrapolation when the trait or response being studied is located at higher levels of organization, is in a different module, or is influenced by other modules. However, when the examination of the conserved process occurs at the same level of organization or in the same module, and hence is subject to study solely by reductionism, then extrapolation is possible. PMID:22963674

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefeuvre, F.E.; Wrolstad, K.H.; Zou, Ke Shan

    Total and Unocal estimated sand-shale ratios in gas reservoirs from the upper Tertiary clastics of Myanmar. They separately used deterministic pre-stack and statistical post-stack seismic attribute analysis, calibrated at two wells, to objectively extrapolate the lithologies and reservoir properties several kilometers away from the wells. The two approaches were then integrated, leading to a unique distribution of the sands and shales in the reservoir that fits the known regional geological model. For the sands, the fluid distributions (gas and brine) were also estimated, as well as the porosity, water saturation, thickness and clay content. This was made possible by using precise elastic modeling based on the Biot-Gassmann equation in order to integrate the effects of reservoir properties on seismic signatures.

  14. Effective-range function methods for charged particle collisions

    NASA Astrophysics Data System (ADS)

    Gaspard, David; Sparenberg, Jean-Marc

    2018-04-01

    Different versions of the effective-range function method for charged particle collisions are studied and compared. In addition, a novel derivation of the standard effective-range function is presented from the analysis of Coulomb wave functions in the complex plane of the energy. The recently proposed effective-range function denoted as Δℓ [Ramírez Suárez and Sparenberg, Phys. Rev. C 96, 034601 (2017), 10.1103/PhysRevC.96.034601] and an earlier variant [Hamilton et al., Nucl. Phys. B 60, 443 (1973), 10.1016/0550-3213(73)90193-4] are related to the standard function. The potential interest of Δℓ for the study of low-energy cross sections and weakly bound states is discussed in the framework of the proton-proton 1S0 collision. The resonant state of the proton-proton collision is successfully computed from the extrapolation of Δℓ instead of the standard function. It is shown that interpolating Δℓ can lead to useful extrapolation to negative energies, provided scattering data are known below one nuclear Rydberg energy (12.5 keV for the proton-proton system). This property is due to the connection between Δℓ and the effective-range function by Hamilton et al. that is discussed in detail. Nevertheless, such extrapolations to negative energies should be used with caution because Δℓ is not analytic at zero energy. The expected analytic properties of the main functions are verified in the complex energy plane by graphical color-based representations.

  15. A Unified Treatment of the Acoustic and Elastic Scattered Waves from Fluid-Elastic Media

    NASA Astrophysics Data System (ADS)

    Denis, Max Fernand

    In this thesis, contributions are made to the numerical modeling of the scattering fields from fluid-filled poroelastic materials. Of particular interest are highly porous materials that demonstrate strong contrast to the saturating fluid. A Biot analysis of the porous medium serves as the starting point for the elastic-solid and pore-fluid governing equations of motion. The longitudinal scattering waves of the elastic-solid mode and the pore-fluid mode are modeled by the Kirchhoff-Helmholtz integral equation. The integral equation is evaluated using a series approximation describing the successive perturbation of the material contrasts. To extend the series' validity into larger domains, rational fraction extrapolation methods are employed. The local Padé approximant procedure is a technique that allows one to extrapolate from a scattered field of small contrast to larger contrast values, using Padé approximants. To ensure the accuracy of the numerical model, comparisons are made with the exact solution of scattering from a fluid sphere. Mean absolute error analyses yield convergent and accurate results. In addition, the numerical model correctly predicts the Bragg peaks for a periodic lattice of fluid spheres. In the case of trabecular bones, the far-field scattering pressure attenuation is a superposition of the elastic-solid mode and pore-fluid mode waves generated at the surrounding fluid and poroelastic boundaries. The attenuation is linearly dependent on frequency between 0.2 and 0.6 MHz. The slope of the attenuation is nonlinear with porosity and does not reflect the mechanical properties of the trabecular bone. The attenuation shows the anisotropic effects of the trabecular structure. Thus, ultrasound can possibly be employed to non-invasively predict the principal structural orientation of trabecular bones.

  16. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    PubMed

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts.
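    The SIMEX idea in its classical (non-spatial) form, on toy data: refit the health model after adding extra measurement error of variance λσ² for several λ, then extrapolate the fitted slope back to λ = -1, i.e. zero error. The paper's spatial SIMEX instead adds spatially correlated error; this sketch only shows the simulation-extrapolation mechanics.

        import numpy as np

        rng = np.random.default_rng(0)
        n, sigma_u = 2000, 0.6
        x = rng.normal(size=n)                        # true exposure
        y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)
        w = x + rng.normal(scale=sigma_u, size=n)     # error-prone exposure

        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        betas = [np.mean([np.polyfit(w + rng.normal(scale=np.sqrt(lam) * sigma_u,
                                                    size=n), y, 1)[0]
                          for _ in range(50)])
                 for lam in lambdas]

        coef = np.polyfit(lambdas, betas, 2)          # quadratic extrapolant
        print("naive:", round(betas[0], 3),
              "SIMEX:", round(np.polyval(coef, -1.0), 3))  # close to the true 0.5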

  17. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Jie; Krems, Roman V.; Li, Zhiying

    2015-10-21

    We consider a problem of extrapolating the collision properties of a large polyatomic molecule A–H to make predictions of the dynamical properties for another molecule related to A–H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A–X. We assume that the effect of the −H → −X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He-C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
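    A sketch of the Gaussian Process step with scikit-learn, replacing the trajectory calculations with a toy function of two "potential parameters"; everything below is illustrative, not the paper's actual surface or kernel.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(1)
        theta = rng.uniform(0.5, 2.0, size=(25, 2))            # sampled parameters
        sigma = 10.0 * np.exp(-theta[:, 0]) + theta[:, 1]**2   # toy observable

        gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                                      normalize_y=True).fit(theta, sigma)

        # predict the observable, with uncertainty, at a new parameter setting
        mean, std = gp.predict(np.array([[1.2, 0.9]]), return_std=True)
        print(f"predicted observable: {mean[0]:.2f} +/- {std[0]:.2f}")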

  18. Global mapping of highly pathogenic avian influenza H5N1 and H5Nx clade 2.3.4.4 viruses with spatial cross-validation

    PubMed Central

    Dhingra, Madhur S; Artois, Jean; Robinson, Timothy P; Linard, Catherine; Chaiban, Celia; Xenarios, Ioannis; Engler, Robin; Liechti, Robin; Kuznetsov, Dmitri; Xiao, Xiangming; Dobschuetz, Sophie Von; Claes, Filip; Newman, Scott H; Dauphin, Gwenaëlle; Gilbert, Marius

    2016-01-01

    Global disease suitability models are essential tools to inform surveillance systems and enable early detection. We present the first global suitability model of highly pathogenic avian influenza (HPAI) H5N1 and demonstrate that reliable predictions can be obtained at global scale. Best predictions are obtained using spatial predictor variables describing host distributions, rather than land use or eco-climatic spatial predictor variables, with a strong association with domestic duck and extensively raised chicken densities. Our results also support a more systematic use of spatial cross-validation in large-scale disease suitability modelling compared to standard random cross-validation, which can lead to unreliable measures of extrapolation accuracy. A global suitability model of the H5 clade 2.3.4.4 viruses, a group of viruses that recently spread extensively in Asia and the US, shows in comparison a lower spatial extrapolation capacity than the HPAI H5N1 models, with a stronger association with intensively raised chicken densities and anthropogenic factors. DOI: http://dx.doi.org/10.7554/eLife.19571.001 PMID:27885988
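    The contrast between random and spatial cross-validation can be sketched by grouping samples into coarse geographic blocks so that nearby, spatially autocorrelated points never straddle train and test folds; scikit-learn's GroupKFold stands in here for whatever blocking scheme is chosen, and all data are synthetic.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GroupKFold, cross_val_score

        rng = np.random.default_rng(2)
        lon, lat = rng.uniform(-180, 180, 3000), rng.uniform(-60, 70, 3000)
        X = np.column_stack([lon, lat, rng.normal(size=3000)])  # toy predictors
        y = (rng.random(3000) < 0.3).astype(int)                # toy presence/absence

        blocks = (np.floor(lon / 20).astype(int) * 100
                  + np.floor(lat / 20).astype(int))             # ~20-degree blocks

        scores = cross_val_score(RandomForestClassifier(n_estimators=100),
                                 X, y, groups=blocks, cv=GroupKFold(n_splits=5),
                                 scoring="roc_auc")
        print(scores.mean())   # ~0.5 here, since the toy labels are random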

  19. Efficient numerical methods for the random-field Ising model: Finite-size scaling, reweighting extrapolation, and computation of response functions.

    PubMed

    Fytas, Nikolaos G; Martín-Mayor, Víctor

    2016-06-01

    It was recently shown [Phys. Rev. Lett. 110, 227201 (2013), 10.1103/PhysRevLett.110.227201] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.

  20. Pole-strength of the earth from Magsat and magnetic determination of the core radius

    NASA Technical Reports Server (NTRS)

    Voorhies, G. V.; Benton, E. R.

    1982-01-01

    A model based on two days of Magsat data is used to numerically evaluate the unsigned magnetic flux linking the earth's surface, and a comparison of the calculated value of 16.054 GWb with values from earlier geomagnetic field models reveals a smooth, monotonic, and recently-accelerating decrease in the earth's pole strength at a 50-year average rate of 8.3 MWb/year (0.052%/year). Hide's (1978) magnetic technique for determining the radius of the earth's electrically-conducting core is tested in two ways: (1) main field models for 1960 and 1965 are extrapolated downward through the nearly-insulating mantle and separately compared to equivalent, extrapolated models of Magsat data; the two unsigned fluxes are found to equal the Magsat values at a radius within 2% of the core radius. (2) The 1960 main field, secular variation, and acceleration coefficients are used to derive models for 1930, 1940 and 1950; the same core magnetic radius, within 2% of the seismic value, is obtained. It is concluded that, on the decade time scale, the mantle is a nearly-perfect insulator while the core is a perfect conductor.

  1. Higher Throughput Toxicokinetics to Allow Extrapolation (EPA-Japan Bilateral EDSP meeting)

    EPA Science Inventory

    As part of "Ongoing EDSP Directions & Activities" I will present CSS research on high throughput toxicokinetics, including in vitro data and models to allow rapid determination of the real world doses that may cause endocrine disruption.

  2. Neural network model for survival and growth of Salmonella 8,20:-:z6 in ground chicken thigh meat during cold storage: extrapolation to other serotypes

    USDA-ARS?s Scientific Manuscript database

    Mathematical models that predict behavior of human bacterial pathogens in food are valuable tools for assessing and managing this risk to public health. A study was undertaken to develop a model for predicting behavior of Salmonella 8,20:-:z6 in chicken meat during cold storage and to determine how...

  3. Assessing the effects of fire disturbances on ecosystems: A scientific agenda for research and management

    USGS Publications Warehouse

    Schmoldt, D.L.; Peterson, D.L.; Keane, R.E.; Lenihan, J.M.; McKenzie, D.; Weise, D.R.; Sandberg, D.V.

    1999-01-01

    A team of fire scientists and resource managers convened 17-19 April 1996 in Seattle, Washington, to assess the effects of fire disturbance on ecosystems. Objectives of this workshop were to develop scientific recommendations for future fire research and management activities. These recommendations included a series of numerically ranked scientific and managerial questions and responses focusing on (1) links among fire effects, fuels, and climate; (2) fire as a large-scale disturbance; (3) fire-effects modeling structures; and (4) managerial concerns, applications, and decision support. At the present time, understanding of fire effects and the ability to extrapolate fire-effects knowledge to large spatial scales are limited, because most data have been collected at small spatial scales for specific applications. Although we clearly need more large-scale fire-effects data, it will be more expedient to concentrate efforts on improving and linking existing models that simulate fire effects in a georeferenced format while integrating empirical data as they become available. A significant component of this effort should be improved communication between modelers and managers to develop modeling tools to use in a planning context. Another component of this modeling effort should improve our ability to predict the interactions of fire and potential climatic change at very large spatial scales. The priority issues and approaches described here provide a template for fire science and fire management programs in the next decade and beyond.

  4. The importance of inclusion of kinetic information in the extrapolation of high-to-low concentrations for human limit setting.

    PubMed

    Geraets, Liesbeth; Zeilmaker, Marco J; Bos, Peter M J

    2018-01-05

    Human health risk assessment of inhalation exposures generally includes a high-to-low concentration extrapolation. Although this is a common step in human risk assessment, it introduces various uncertainties. One of these uncertainties is related to the toxicokinetics. Many kinetic processes such as absorption, metabolism or excretion can be subject to saturation at high concentration levels. In the presence of saturable kinetic processes of the parent compound or metabolites, disproportionate increases in internal blood or tissue concentration relative to the external concentration administered may occur, resulting in nonlinear kinetics. The present paper critically reviews human health risk assessment of inhalation exposure. More specifically, it emphasizes the importance of kinetic information for setting a safe exposure level when converting from a high exposure in animals to a low exposure in humans. For two selected chemicals, i.e. methyl tert-butyl ether and 1,2-dichloroethane, PBTK modelling was used, for illustrative purposes, to follow the extrapolation and conversion steps as performed in existing risk assessments for these chemicals. Human health-based limit values based on an external dose metric without sufficient knowledge of kinetics might be too high to be sufficiently protective. Insight into the actual internal exposure, the toxic agent, the appropriate dose metric, and whether an effect is related to internal concentration or dose is important. Without this, application of assessment factors to an external dose metric and the conversion to continuous exposure result in an uncertain human health risk assessment of inhalation exposures.
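    The saturation argument can be made concrete with Michaelis-Menten elimination (parameter values invented): once intake approaches Vmax, the internal steady-state concentration rises far faster than the external dose, so a linear high-to-low conversion misrepresents the internal exposure.

        vmax, km = 10.0, 2.0   # umol/h and umol/L (assumed)

        def steady_state_conc(intake):
            # at steady state: intake = vmax * C / (km + C)
            # solving for C:   C = intake * km / (vmax - intake)
            return intake * km / (vmax - intake)

        for intake in [1.0, 5.0, 9.0, 9.9]:
            print(intake, round(steady_state_conc(intake), 1))
        # near saturation, a ~10% increase in intake raises the internal
        # concentration more than tenfold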

  5. High throughput method to characterize acid-base properties of insoluble drug candidates in water.

    PubMed

    Benito, D E; Acquaviva, A; Castells, C B; Gagliardi, L G

    2018-05-30

    In drug design, experimental characterization of acidic groups in candidate molecules is one of the more important steps prior to in-vivo studies. Potentiometry combined with Yasuda-Shedlovsky extrapolation is one of the more important strategies for studying drug candidates with low solubility in water, although it requires a large number of titration sequences to determine pKa values at different solvent-mixture compositions and, finally, to obtain the aqueous pKa (pwwKa) by extrapolation. We recently proposed a method requiring only two sequences of additions to study the effect of organic solvent content in liquid chromatography mobile phases on the acidity of the buffer compounds dissolved in them over wide ranges of compositions. In this work we apply this method to determine the thermodynamic pwwKa of drug candidates with low solubility in pure water. Using methanol/water solvent mixtures, we study six pharmaceutical drugs at 25 °C. Four of them (ibuprofen, salicylic acid, atenolol and labetalol) were chosen as members of the carboxylic acid, amine and phenol families. Since these compounds have known pwwKa values, they were used to validate the procedure, to assess the accuracy of the Yasuda-Shedlovsky and other empirical models in fitting the observed behavior, and to obtain pwwKa by extrapolation. Finally, the method is applied to determine the unknown thermodynamic pwwKa values of two pharmaceutical drugs: atorvastatin calcium and the two dissociation constants of ethambutol. The procedure proved to be simple, very fast and accurate in all of the studied cases.
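    The Yasuda-Shedlovsky extrapolation itself is a linear fit of psKa + log[H2O] against 1/ε over the solvent mixtures, evaluated at the dielectric constant of pure water; the sketch below uses invented mixture data.

        import numpy as np

        eps = np.array([60.9, 55.5, 50.1, 45.0])      # mixture dielectric constants
        ps_ka = np.array([5.10, 5.35, 5.62, 5.93])    # apparent pKa in each mixture
        log_h2o = np.array([1.69, 1.64, 1.58, 1.52])  # log molar water concentration

        a, b = np.polyfit(1.0 / eps, ps_ka + log_h2o, 1)
        pka_water = a / 78.3 + b - np.log10(55.5)     # evaluate for pure water
        print(round(pka_water, 2))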

  6. Animal Models in Cardiovascular Research: Hypertension and Atherosclerosis

    PubMed Central

    Ng, Chun-Yi; Jaarin, Kamsiah

    2015-01-01

    Hypertension and atherosclerosis are among the most common causes of mortality in both developed and developing countries. Experimental animal models of hypertension and atherosclerosis have become a valuable tool for providing information on etiology, pathophysiology, and complications of the disease and on the efficacy and mechanism of action of various drugs and compounds used in treatment. Animal models have been developed to study hypertension and atherosclerosis for several reasons. Compared to human studies, an animal model is easily manageable, as compounding effects of dietary and environmental factors can be controlled. Blood vessels and cardiac tissue samples can be taken for detailed experimental and biomolecular examination. Choice of animal model is often determined by the research aim, as well as financial and technical factors. The animal models used must be thoroughly understood and the analysis fully validated so that the data can be extrapolated to humans. In conclusion, animal models for hypertension and atherosclerosis are invaluable in improving our understanding of cardiovascular disease and developing new pharmacological therapies. PMID:26064920

  7. Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?

    NASA Astrophysics Data System (ADS)

    Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.

    2016-02-01

    It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C0 lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that some local regions of austenite lie inside the Hultgren extrapolation, even if the average austenite composition is outside.

  8. Creatine supplementation and glycemic control: a systematic review.

    PubMed

    Pinto, Camila Lemos; Botelho, Patrícia Borges; Pimentel, Gustavo Duarte; Campos-Ferraz, Patrícia Lopes; Mota, João Felipe

    2016-09-01

    The focus of this review is the effects of creatine supplementation, with or without exercise, on glucose metabolism. A comprehensive examination of the past 16 years of study within the field provided a distillation of key data. In both animal and human studies, creatine supplementation together with exercise training demonstrated greater beneficial effects on glucose metabolism; creatine supplementation by itself demonstrated positive results in only a few of the studies. In the animal studies the effects of creatine supplementation on glucose metabolism were even more distinct, and caution is needed in extrapolating these data to other species, especially to humans. Regarding the human studies, given the sample characteristics, the findings cannot be extrapolated to patients who have poorer glycemic control, are older, are on a different pharmacological treatment (e.g., exogenous insulin therapy) or are physically inactive. Thus, creatine supplementation is a possible nutritional adjuvant therapy with hypoglycemic effects, particularly when used in conjunction with exercise.

  9. Cost-effectiveness of pembrolizumab versus docetaxel for the treatment of previously treated PD-L1 positive advanced NSCLC patients in the United States.

    PubMed

    Huang, Min; Lou, Yanyan; Pellissier, James; Burke, Thomas; Liu, Frank Xiaoqing; Xu, Ruifeng; Velcheti, Vamsidhar

    2017-02-01

    This analysis aimed to evaluate the cost-effectiveness of pembrolizumab compared with docetaxel in patients with previously treated advanced non-small cell lung cancer (NSCLC) with PD-L1 positive tumors (tumor proportion score [TPS] ≥ 50%). The analysis was conducted from a US third-party payer perspective. A partitioned-survival model was developed using patient data from the KEYNOTE 010 clinical trial. The model used Kaplan-Meier (KM) estimates of progression-free survival (PFS) and overall survival (OS) from the trial for patients treated with either pembrolizumab 2 mg/kg or docetaxel 75 mg/m2, with extrapolation based on fitted parametric functions and long-term registry data. Quality-adjusted life years (QALYs) were derived based on EQ-5D data from KEYNOTE 010 using a time-to-death approach. Costs of drug acquisition/administration, adverse event management, and clinical management of advanced NSCLC were included in the model. The base-case analysis used a time horizon of 20 years. Costs and health outcomes were discounted at a rate of 3% per year. A series of one-way and probabilistic sensitivity analyses were performed to test the robustness of the results. Base-case results project a mean survival of 2.25 years for PD-L1 positive (TPS ≥50%) patients treated with pembrolizumab, versus an estimated mean survival of 1.07 years for docetaxel. Expected QALYs were 1.71 and 0.76 for pembrolizumab and docetaxel, respectively. The incremental cost per QALY gained with pembrolizumab vs docetaxel is $168,619/QALY, which is cost-effective in the US using a threshold of 3 times GDP per capita. Sensitivity analyses showed the results to be robust over plausible values of the majority of inputs; results were most sensitive to the extrapolation of overall survival. Pembrolizumab improves survival, increases QALYs, and can be considered a cost-effective option compared to docetaxel in PD-L1 positive (TPS ≥50%) pre-treated advanced NSCLC patients in the US.

  10. Exploring precipitation pattern scaling methodologies and robustness among CMIP5 models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kravitz, Ben; Lynch, Cary; Hartin, Corinne

    Pattern scaling is a well-established method for approximating modeled spatial distributions of changes in temperature by assuming a time-invariant pattern that scales with changes in global mean temperature. We compare two methods of pattern scaling for annual mean precipitation (regression and epoch difference) and evaluate which method is better in particular circumstances by quantifying their robustness to interpolation/extrapolation in time, inter-model variations, and inter-scenario variations. Both the regression and epoch-difference methods (the two most commonly used methods of pattern scaling) have good absolute performance in reconstructing the climate model output, measured as an area-weighted root mean square error. We decompose the precipitation response in the RCP8.5 scenario into a CO2 portion and a non-CO2 portion. Extrapolating RCP8.5 patterns to reconstruct precipitation change in the RCP2.6 scenario results in large errors due to violations of pattern scaling assumptions when this CO2-/non-CO2-forcing decomposition is applied. As a result, the methodologies discussed in this paper can help provide precipitation fields to be utilized in other models (including integrated assessment models or impacts assessment models) for a wide variety of scenarios of future climate change.
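    The two variants compared above reduce to simple array operations; in this toy sketch, dT_glob is the global-mean temperature series and precip holds per-cell annual precipitation anomalies.

        import numpy as np

        years, ncell = 100, 500
        rng = np.random.default_rng(3)
        dT_glob = np.linspace(0.0, 4.0, years) + rng.normal(scale=0.1, size=years)
        true_pat = rng.normal(scale=0.05, size=ncell)           # mm/day per K
        precip = (np.outer(dT_glob, true_pat)
                  + rng.normal(scale=0.2, size=(years, ncell)))

        # (1) regression method: per-cell least-squares slope against global dT
        X = np.column_stack([dT_glob, np.ones(years)])
        pattern_reg = np.linalg.lstsq(X, precip, rcond=None)[0][0]

        # (2) epoch-difference method: late-epoch minus early-epoch means
        early, late = slice(0, 20), slice(80, 100)
        pattern_epoch = ((precip[late].mean(0) - precip[early].mean(0))
                         / (dT_glob[late].mean() - dT_glob[early].mean()))

        print(np.corrcoef(pattern_reg, pattern_epoch)[0, 1])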

  11. Cross-Species Extrapolation of Uptake and Disposition of Neutral Organic Chemicals in Fish Using a Multispecies Physiologically-Based Toxicokinetic Model Framework.

    PubMed

    Brinkmann, Markus; Schlechtriem, Christian; Reininghaus, Mathias; Eichbaum, Kathrin; Buchinger, Sebastian; Reifferscheid, Georg; Hollert, Henner; Preuss, Thomas G

    2016-02-16

    The potential to bioconcentrate is generally considered to be an unwanted property of a substance. Consequently, chemical legislation, including the European REACH regulations, requires the chemical industry to provide bioconcentration data for chemicals that are produced or imported at volumes exceeding 100 tons per annum, or if there is a concern that a substance is persistent, bioaccumulative, and toxic. To fill the existing data gap for chemicals produced or imported at levels below this stipulated volume, without the need for additional animal experiments, physiologically-based toxicokinetic (PBTK) models can be used to predict whole-body and tissue concentrations of neutral organic chemicals in fish. PBTK models have been developed for many different fish species with promising results. In this study, we developed PBTK models for zebrafish (Danio rerio) and roach (Rutilus rutilus) and combined them with existing models for rainbow trout (Oncorhynchus mykiss), lake trout (Salvelinus namaycush), and fathead minnow (Pimephales promelas). The resulting multispecies model framework allows for cross-species extrapolation of the bioaccumulative potential of neutral organic compounds. Predictions were compared with experimental data and were accurate for most substances. Our model can be used for probabilistic risk assessment of chemical bioaccumulation, with particular emphasis on cross-species evaluations.
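    The framework above is multi-compartment PBTK; as a far simpler sketch of the underlying toxicokinetics, a one-compartment fish model with invented rate constants illustrates how uptake and elimination set the bioconcentration factor.

        # dC_fish/dt = k1*C_water - k2*C_fish; BCF = k1/k2 at steady state
        k1, k2 = 500.0, 2.0     # uptake (L/kg/d) and elimination (1/d), assumed
        c_water = 0.001         # mg/L

        dt, days = 0.01, 10.0
        c_fish, t = 0.0, 0.0
        while t < days:
            c_fish += dt * (k1 * c_water - k2 * c_fish)   # explicit Euler step
            t += dt

        print(round(c_fish, 4), "vs steady state", k1 / k2 * c_water)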

  12. Exploring precipitation pattern scaling methodologies and robustness among CMIP5 models

    DOE PAGES

    Kravitz, Ben; Lynch, Cary; Hartin, Corinne; ...

    2017-05-12

    Pattern scaling is a well-established method for approximating modeled spatial distributions of changes in temperature by assuming a time-invariant pattern that scales with changes in global mean temperature. We compare two methods of pattern scaling for annual mean precipitation (regression and epoch difference) and evaluate which method is better in particular circumstances by quantifying their robustness to interpolation/extrapolation in time, inter-model variations, and inter-scenario variations. Both the regression and epoch-difference methods (the two most commonly used methods of pattern scaling) have good absolute performance in reconstructing the climate model output, measured as an area-weighted root mean square error. We decompose the precipitation response in the RCP8.5 scenario into a CO2 portion and a non-CO2 portion. Extrapolating RCP8.5 patterns to reconstruct precipitation change in the RCP2.6 scenario results in large errors due to violations of pattern scaling assumptions when this CO2-/non-CO2-forcing decomposition is applied. As a result, the methodologies discussed in this paper can help provide precipitation fields to be utilized in other models (including integrated assessment models or impacts assessment models) for a wide variety of scenarios of future climate change.

  13. Modeling of transitional flows

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.

    1988-01-01

    An effort directed at developing improved transitional models was initiated. The work focused on a critical assessment of a popular existing transitional model developed by McDonald and Fish in 1972. The objective was to identify the shortcomings of the McDonald-Fish model and to use the insights gained to suggest modifications or alterations of the basic model. In order to evaluate the transitional model, a compressible boundary layer code was required; accordingly, a two-dimensional compressible boundary layer code was developed. The program was based on a three-point fully implicit finite difference algorithm in which the equations were solved in an uncoupled manner, with second-order extrapolation used to evaluate the non-linear coefficients; iteration was offered as an option if the extrapolation error could not be tolerated. The differencing scheme was arranged to be second order in both spatial directions on an arbitrarily stretched mesh. A variety of boundary condition options were implemented, including specification of an external pressure gradient, of a wall temperature distribution, and of an external temperature distribution. Overall, the results of the initial phase of this work indicate that the McDonald-Fish model does a poor job of predicting the details of the turbulent flow structure in the transition region.

  14. The Acceptance of Exceptionality: A Three Dimensional Model.

    ERIC Educational Resources Information Center

    Martin, Larry L.; Nivens, Maryruth K.

    A model extrapolates from E. Kubler-Ross' conception of the stages of grief to apply to parent and family reactions when an exceptionality is identified. A chart lists possible parent feelings and reactions, possible school reactions to the parent in grief, and the child's reactions during each of five stages: denial, rage and anger, bargaining,…

  15. Proper motion and secular variations of Keplerian orbital elements

    NASA Astrophysics Data System (ADS)

    Butkevich, Alexey G.

    2018-05-01

    High-precision observations require accurate modelling of secular changes in the orbital elements in order to extrapolate measurements over long time intervals, and to detect deviation from pure Keplerian motion caused, for example, by other bodies or relativistic effects. We consider the evolution of the Keplerian elements resulting from the gradual change of the apparent orbit orientation due to proper motion. We present rigorous formulae for the transformation of the orbit inclination, longitude of the ascending node and argument of the pericenter from one epoch to another, assuming uniform stellar motion and taking radial velocity into account. An approximate treatment, accurate to the second-order terms in time, is also given. The proper motion effects may be significant for long-period transiting planets. These theoretical results are applicable to the modelling of planetary transits and precise Doppler measurements as well as analysis of pulsar and eclipsing binary timing observations.

  16. Wind ripples in low density atmospheres

    NASA Technical Reports Server (NTRS)

    Miller, J. S.; Marshall, J. R.; Greeley, R.

    1987-01-01

    The effect of varying fluid density (rho) on particle transport was examined by conducting tests at atmospheric pressures between 1 and 0.004 bar in the Martian Surface Wind Tunnel (MARSWIT). This study specifically concerns the effect of varying rho on the character of wind ripples, and elicits information concerning generalized ripple models as well as specific geological circumstances for ripple formation such as those prevailing on Mars. Tests were conducted primarily with 95 micron quartz sand, and for each atmospheric pressure chosen, tests were conducted at two freestream wind speeds: 1.1 U*(t) and 1.5 U*(t), where U*(t) is saltation threshold. Preliminary analysis of the data suggests: (1) ballistic ripple wavelength is not at variance with model predictions; (2) an atmospheric pressure of approximately 0.2 bar could represent a discontinuity in ripple behavior; and (3) ripple formation on Mars may not be readily predicted by extrapolation of terrestrial observations.

  17. Species distributions models in wildlife planning: agricultural policy and wildlife management in the great plains

    USGS Publications Warehouse

    Fontaine, Joseph J.; Jorgensen, Christopher; Stuber, Erica F.; Gruber, Lutz F.; Bishop, Andrew A.; Lusk, Jeffrey J.; Zach, Eric S.; Decker, Karie L.

    2017-01-01

    We know economic and social policy has implications for ecosystems at large, but the consequences for a given geographic area or specific wildlife population are more difficult to conceptualize and communicate. Species distribution models, which extrapolate species-habitat relationships across ecological scales, are capable of predicting population changes in distribution and abundance in response to management and policy, and thus, are an ideal means for facilitating proactive management within a larger policy framework. To illustrate the capabilities of species distribution modeling in scenario planning for wildlife populations, we projected an existing distribution model for ring-necked pheasants (Phasianus colchicus) onto a series of alternative future landscape scenarios for Nebraska, USA. Based on our scenarios, we qualitatively and quantitatively estimated the effects of agricultural policy decisions on pheasant populations across Nebraska, in specific management regions, and at wildlife management areas. 

  18. Time Serial Analysis of the Induced LEO Environment within the ISS 6A

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Nealy, John E.; Tomov, B. T.; Cucinotta, Francis A.; Badavi, Frank F.; DeAngelis, Giovanni; Atwell, William; Leutke, N.

    2006-01-01

    Anisotropies in the low Earth orbit (LEO) radiation environment were found to influence the thermoluminescent detector (TLD) dose within the International Space Station (ISS) 7A Service Module. Subsequently, anisotropic environmental models with improved dynamic time extrapolation have been developed, including westward and northern drifts, using AP8 Min & Max as estimates of the historic spatial distribution of trapped protons in the 1965 and 1970 eras, respectively. In addition, a direction-dependent geomagnetic cutoff model was derived for geomagnetic field configurations over the 1945 to 2020 time frame. A dynamic neutron albedo model based on our atmospheric radiation studies has likewise been required to explain LEO neutron measurements. The simultaneous measurements of dose and dose rate using four Liulin instruments at various locations in the US LAB and Node 1 have experimentally demonstrated anisotropic effects in ISS 6A and are used herein to evaluate the adequacy of these revised environmental models.

  19. A model for the data extrapolation of greenhouse gas emissions in the Brazilian hydroelectric system

    NASA Astrophysics Data System (ADS)

    Pinguelli Rosa, Luiz; Aurélio dos Santos, Marco; Gesteira, Claudio; Elias Xavier, Adilson

    2016-06-01

    Hydropower reservoirs are artificial water systems and comprise a small proportion of the Earth’s continental territory. However, they play an important role in the aquatic biogeochemistry and may affect the environment negatively. Since the 90s, as a result of research on organic matter decay in manmade flooded areas, some reports have associated greenhouse gas emissions with dam construction. Pioneering work carried out in the early period challenged the view that hydroelectric plants generate completely clean energy. Those estimates suggested that GHG emissions into the atmosphere from some hydroelectric dams may be significant when measured per unit of energy generated and should be compared to GHG emissions from fossil fuels used for power generation. The contribution to global warming of greenhouse gases emitted by hydropower reservoirs is currently the subject of various international discussions and debates. One of the most controversial issues is the extrapolation of data from different sites. In this study, the extrapolation from a site sample where measurements were made to the complete set of 251 reservoirs in Brazil, comprising a total flooded area of 32 485 square kilometers, was derived from the theory of self-organized criticality. We employed a power law for its statistical representation. The present article reviews the data generated at that time in order to demonstrate how, with the help of mathematical tools, we can extrapolate values from one reservoir to another without compromising the reliability of the results.
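    The fitting mechanics of such a power-law extrapolation are straightforward: estimate the exponent on a log-log scale from the surveyed reservoirs and apply the fitted law to the rest. All numbers below are invented; the paper's actual scaling is grounded in self-organized criticality.

        import numpy as np

        rng = np.random.default_rng(4)
        area_sampled = rng.uniform(5, 2000, 30)    # km^2, surveyed reservoirs (toy)
        emis_sampled = 0.8 * area_sampled**1.1 * rng.lognormal(0, 0.2, 30)

        # fit E = c * A**b by linear regression in log-log space
        b, log_c = np.polyfit(np.log(area_sampled), np.log(emis_sampled), 1)

        area_all = rng.uniform(5, 2000, 251)       # stand-in for the full set
        total = np.sum(np.exp(log_c) * area_all**b)
        print(f"exponent b = {b:.2f}, extrapolated total = {total:.0f}")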

  20. What experimental approaches (eg, in vivo, in vitro, tissue retrieval) are effective in investigating the biologic effects of particles?

    PubMed Central

    Bostrom, Mathias; O'Keefe, Regis

    2009-01-01

    Understanding the complex cellular and tissue mechanisms and interactions resulting in periprosthetic osteolysis requires a number of experimental approaches, each of which has its own set of advantages and limitations. In vitro models allow for the isolation of individual cell populations and have furthered our understanding of particle-cell interactions; however, they are limited because they do not mimic the complex tissue environment in which multiple cell interactions occur. In vivo animal models investigate the tissue interactions associated with periprosthetic osteolysis, but the choice of species and whether the implant system is subjected to mechanical load or to unloaded conditions are critical in assessing whether these models can be extrapolated to the clinical condition. Rigid analysis of retrieved tissue from clinical cases of osteolysis offers a different approach to studying the biologic process of osteolysis, but it is limited in that the tissue analyzed represents the end-stage of this process and, thus, may not reflect this process adequately. PMID:18612016

  1. What experimental approaches (eg, in vivo, in vitro, tissue retrieval) are effective in investigating the biologic effects of particles?

    PubMed

    Bostrom, Mathias; O'Keefe, Regis

    2008-01-01

    Understanding the complex cellular and tissue mechanisms and interactions resulting in periprosthetic osteolysis requires a number of experimental approaches, each of which has its own set of advantages and limitations. In vitro models allow for the isolation of individual cell populations and have furthered our understanding of particle-cell interactions; however, they are limited because they do not mimic the complex tissue environment in which multiple cell interactions occur. In vivo animal models investigate the tissue interactions associated with periprosthetic osteolysis, but the choice of species and whether the implant system is subjected to mechanical load or to unloaded conditions are critical in assessing whether these models can be extrapolated to the clinical condition. Rigid analysis of retrieved tissue from clinical cases of osteolysis offers a different approach to studying the biologic process of osteolysis, but it is limited in that the tissue analyzed represents the end-stage of this process and, thus, may not reflect this process adequately.

  2. Simulation of photobioreaction for hydrogen production in membrane bioreactor with an optical fiber

    NASA Astrophysics Data System (ADS)

    Yang, Yanxia; Li, Jing

    2018-05-01

    A generalized lattice Boltzmann (LB) model for porous media is adopted to simulate the hydrodynamics and mass transport, combined with biodegradation, in a membrane bioreactor with a circular optical fiber. The LB model is coupled with a multi-block scheme, as well as a non-equilibrium extrapolation method for the boundary condition treatment. The effects of the porosity and permeability (represented by the Darcy number Da) of the biofilm on the flow and concentration fields are investigated. The performance of biodegradation is evaluated by the substrate consumption efficiency. Higher porosity and permeability of the biofilm facilitate mass transport and enhance the metabolic activity of bacteria in the biofilm; the optimal biodegradation performance is obtained at Da = 0.001 and ε = 0.3. As these parameters increase further, the substrate consumption efficiency decreases due to the inhibition effect of the substrate and the shorter hydraulic retention time. Furthermore, the LB results coincide with experimental results, demonstrating that the LB model for porous media is suitable for optimizing the membrane bioreactor for efficient biodegradation.

  3. Neutron Electric Dipole Moment from Gauge-String Duality.

    PubMed

    Bartolini, Lorenzo; Bigazzi, Francesco; Bolognesi, Stefano; Cotrone, Aldo L; Manenti, Andrea

    2017-03-03

    We compute the electric dipole moment of nucleons in the large-Nc QCD model by Witten, Sakai, and Sugimoto with Nf = 2 degenerate massive flavors. Baryons in the model are instantonic solitons of an effective five-dimensional action describing the whole tower of mesonic fields. We find that the dipole electromagnetic form factor of the nucleons, induced by a finite topological θ angle, exhibits complete vector meson dominance. We are able to evaluate the contribution of each vector meson to the final result: a small number of modes are relevant to obtain an accurate estimate. Extrapolating the model parameters to real QCD data, the neutron electric dipole moment is evaluated to be d_n = 1.8×10^-16 θ e cm. The electric dipole moment of the proton is exactly the opposite.

  4. Validation and Application of Pharmacokinetic Models for Interspecies Extrapolations in Toxicity Risk Assessments of Volatile Organics

    DTIC Science & Technology

    1989-07-21

    formulation of physiologically-based pharmacokinetic models. Adult male Sprague-Dawley rats and male beagle dogs will be administered equal doses...experiments in the dog. Physiologically-based pharmacokinetic models will be developed and validated for oral and inhalation exposures to halocarbons...of conducting experiments in dogs. The original physiologic model for the rat will be scaled up to predict halocarbon pharmacokinetics in the dog.

  5. Vibrational spectra and ab initio analysis of tert-butyl, trimethylsilyl, and trimethylgermyl derivatives of 3,3-dimethylcyclopropene. VI: Application of observed trends to stannyl derivatives

    NASA Astrophysics Data System (ADS)

    Panchenko, Yu. N.; De Maré, G. R.; Abramenkov, A. V.; Baird, M. S.; Tverezovsky, V. V.; Nizovtsev, A. V.; Bolesov, I. G.

    2004-09-01

    The effects of substitution of X=C by Si or Ge in X(CH3)3 moieties attached to the formal double bond of 3,3-dimethylcyclopropene are examined. Regularities in the observed trends of the vibrational frequencies implicating the moieties containing the X atom, as the X atomic mass increases, are extrapolated to X=Sn. The results of this extrapolation made it possible to assign the known experimental vibrational frequencies of 3,3-dimethyl-1-(trimethylstannyl)cyclopropene and 3,3-dimethyl-1,2-bis(trimethylstannyl)cyclopropene.

  6. AXES OF EXTRAPOLATION IN RISK ASSESSMENTS

    EPA Science Inventory

    Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...

  7. A new methodology for modeling of direct landslide costs for transportation infrastructures

    NASA Astrophysics Data System (ADS)

    Klose, Martin; Terhorst, Birgit

    2014-05-01

    The world's transportation infrastructure is at risk of landslides in many areas across the globe. Safe and affordable operation of traffic routes are the two main criteria for transportation planning in landslide-prone areas. Balancing these often conflicting priorities requires, among other things, profound knowledge of the direct costs of landslide damage. These costs include capital investments for landslide repair and mitigation as well as operational expenditures for first response and maintenance works. This contribution presents a new methodology for ex post assessment of direct landslide costs for transportation infrastructures. The methodology includes tools to compile, model, and extrapolate landslide losses on different spatial scales over time. A landslide susceptibility model enables regional cost extrapolation by means of a cost figure obtained from local cost compilation for representative case study areas. On the local level, cost survey is closely linked with cost modeling, a toolset for cost estimation based on landslide databases. Cost modeling uses Landslide Disaster Management Process Models (LDMMs) and cost modules to simulate and monetize cost factors for certain types of landslide damage. The landslide susceptibility model provides a regional exposure index and updates the cost figure to a cost index which describes the costs per km of traffic route at risk of landslides. Both indexes enable the regionalization of local landslide losses. The methodology is applied and tested in a cost assessment for highways in the Lower Saxon Uplands, NW Germany, for the period 1980 to 2010. The basis of this research is a regional subset of a landslide database for the Federal Republic of Germany. In the 7,000 km² Lower Saxon Uplands, 77 km of highway are located in potential landslide hazard areas. Annual average costs of 52k per km of highway at risk of landslides are identified as the cost index for a local case study area in this region. The cost extrapolation for the Lower Saxon Uplands results in annual average highway costs of 4.02mn. This test application, together with a validation of selected modeling tools, verifies the functionality of the methodology.
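    The regional extrapolation reported above is, at its core, the cost index times the exposed length; a quick check with the stated figures (currency units as in the abstract):

        cost_index_per_km = 52_000     # annual average per km at risk (local index)
        km_at_risk = 77                # highway length in landslide hazard areas

        print(cost_index_per_km * km_at_risk)   # ~4.0mn per year, matching the
                                                # reported 4.02mn up to rounding
                                                # of the cost index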

  8. Pollutant threshold concentration determination in marine ecosystems using an ecological interaction endpoint.

    PubMed

    Wang, Changyou; Liang, Shengkang; Guo, Wenting; Yu, Hua; Xing, Wenhui

    2015-09-01

    The threshold concentrations of pollutants are typically determined by extrapolating single-species effect data to community-level effects. This extrapolation assumes the most sensitive endpoint of the life cycle of individuals and uses the species sensitivity distribution from single-species toxicity tests, thus ignoring ecological interactions. The uncertainties due to this extrapolation can be partially overcome using the equilibrium point of a customized ecosystem. This method incorporates ecological interactions and integrates the effects on growth, survival, and ingestion into a single effect measure, the equilibrium-point excursion in the customized ecosystem, in order to describe the toxic effects on plankton. A case study showed that the threshold concentration of copper calculated with the equilibrium-point endpoint was 10 μg/L, which differs significantly from the threshold calculated with a single-species endpoint. The endpoint calculated using this method provides a more relevant measure of ecological impact than any single individual-level endpoint.

  9. Precursor of superfluidity in a strongly interacting Fermi gas with negative effective range

    NASA Astrophysics Data System (ADS)

    Tajima, Hiroyuki

    2018-04-01

    We investigate theoretically the effects of pairing fluctuations in an ultracold Fermi gas near a Feshbach resonance with a negative effective range. By employing a many-body T-matrix theory with a coupled fermion-boson model, we show that the single-particle density of states exhibits the so-called pseudogap phenomenon, which is a precursor of superfluidity induced by strong pairing fluctuations. We clarify the region where strong pairing fluctuations play a crucial role in single-particle properties, from the broad-resonance region to the narrow-resonance limit at the divergent two-body scattering length. We also extrapolate the effects of pairing fluctuations to the positive-effective-range region from our results near the narrow Feshbach resonance. Results shown in this paper are relevant to the connection between ultracold Fermi gases and low-density neutron matter from the viewpoint of finite-effective-range corrections.

  10. Soil warming response: field experiments to Earth system models

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Bradford, M.; Wieder, W. R.; Crowther, T. W.

    2017-12-01

    The soil carbon response to climate change is extremely uncertain at the global scale, in part because of the uncertainty in the magnitude of the temperature response. To address this uncertainty we collected data from 48 soil warming manipulation studies and examined the temperature response using two different methods. First, we constructed a mixed-effects model and extrapolated the effect of soil warming on soil carbon stocks under anticipated shifts in surface temperature during the 21st century. We found significant vulnerability of soil carbon stocks, especially in high-carbon soils. To place this effect in the context of anticipated changes in carbon inputs and moisture, we applied a one-pool decay model with temperature sensitivities to the field data and imposed a post hoc correction on the Earth system model simulations to integrate the field data with the simulated temperature response. We found a slight elevation in overall soil carbon losses, but the field uncertainty in the temperature sensitivity parameter was as large as the among-model variation in soil carbon projections. This implies that model-data integration is unlikely to constrain soil carbon simulations, and it highlights the importance of representing parameter uncertainty in Earth system models used to inform emissions targets.

  11. Predicting cancer rates in astronauts from animal carcinogenesis studies and cellular markers

    NASA Technical Reports Server (NTRS)

    Williams, J. R.; Zhang, Y.; Zhou, H.; Osman, M.; Cha, D.; Kavet, R.; Cuccinotta, F.; Dicello, J. F.; Dillehay, L. E.

    1999-01-01

    The radiation space environment includes particles such as protons and multiple species of heavy ions, with much of the exposure to these radiations occurring at extremely low average dose-rates. Limitations in databases needed to predict cancer hazards in human beings from such radiations are significant and currently do not provide confidence that such predictions are acceptably precise or accurate. In this article, we outline the need for animal carcinogenesis data based on a more sophisticated understanding of the dose-response relationship for induction of cancer and correlative cellular endpoints by representative space radiations. We stress the need for a model that can interrelate human and animal carcinogenesis data with cellular mechanisms. Using a broad model for dose-response patterns which we term the "subalpha-alpha-omega (SAO) model", we explore examples in the literature for radiation-induced cancer and for radiation-induced cellular events to illustrate the need for data that define the dose-response patterns more precisely over specific dose ranges, with special attention to low dose, low dose-rate exposure. We present data for multiple endpoints in cells, which vary in their radiosensitivity, that also support the proposed model. We have measured induction of complex chromosome aberrations in multiple cell types by two space radiations, Fe-ions and protons, and compared these to photons delivered at high dose-rate or low dose-rate. Our data demonstrate that at least three factors modulate the relative efficacy of Fe-ions compared to photons: (i) intrinsic radiosensitivity of irradiated cells; (ii) dose-rate; and (iii) another unspecified effect perhaps related to reparability of DNA lesions. These factors can produce up to at least 7-, 6-, and 3-fold variability, respectively. These data demonstrate the need to better understand the role of intrinsic radiosensitivity and dose-rate effects in mammalian cell response to ionizing radiation. Such understanding is critical in extrapolating databases between cellular response, animal carcinogenesis, and human carcinogenesis, and we suggest that the SAO model is a useful tool for such extrapolation.

  12. Predicting river travel time from hydraulic characteristics

    USGS Publications Warehouse

    Jobson, H.E.

    2001-01-01

    Predicting the effect of a pollutant spill on downstream water quality depends primarily on the water velocity, longitudinal mixing, and chemical/physical reactions. Of these, velocity is the most important and most difficult to predict. This paper provides guidance on extrapolating travel-time information from one within-bank discharge to another. In many cases, a time series of discharge (such as provided by a U.S. Geological Survey stream gauge) will provide an excellent basis for this extrapolation. Otherwise, the accuracy of a travel-time extrapolation based on a resistance equation can be greatly improved by assuming the total flow area is composed of two parts, an active and an inactive area. For 60 reaches of 12 rivers with slopes greater than about 0.0002, travel times could be predicted to within about 10% by computing the active flow area using the Manning equation with n = 0.035 and assuming a constant inactive area for each reach. The predicted travel times were not very sensitive to the assumed values of bed slope or channel width.
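
    As a rough illustration of this resistance-equation approach, the sketch below computes a reach travel time by solving the Manning equation (n = 0.035) for the active flow area and adding a constant inactive area. The reach geometry, discharge, and inactive area are hypothetical values, and treating the inactive area as pure storage that slows the tracer is an assumption of this sketch, not a statement of the paper's exact procedure.

      # Hedged sketch: travel-time estimation with the Manning equation and an
      # assumed active/inactive flow-area split. All reach values are invented.

      def manning_velocity(area_active, width, slope, n=0.035):
          """Mean velocity (m/s) in the active flow area via Manning's equation,
          using a wide-channel approximation: hydraulic radius ~ depth = A/W."""
          depth = area_active / width
          return (1.0 / n) * depth ** (2.0 / 3.0) * slope ** 0.5

      def travel_time(discharge, area_inactive, width, slope, reach_length, n=0.035):
          """Solve Q = V * A_active by fixed-point iteration, then compute the
          travel time (s) using the total area A_active + A_inactive."""
          area_active = discharge / 1.0          # initial guess (V = 1 m/s)
          for _ in range(50):                    # converges: exponent < 1
              v = manning_velocity(area_active, width, slope, n)
              area_active = discharge / v
          v_effective = discharge / (area_active + area_inactive)
          return reach_length / v_effective

      # Hypothetical reach: Q = 30 m^3/s, 5 m^2 inactive area, 40 m wide,
      # slope 0.0005, 10 km long
      print(travel_time(30.0, 5.0, 40.0, 0.0005, 10_000.0) / 3600.0, "hours")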

  13. Investigation of the Aerodynamic Performance of a DG808s UAS in Propeller Slipstream Using Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Chandra, Yatish

    Unmanned Aerial Systems (UASs) are relatively affordable and immediately available compared to commercial aircraft. Hence, their aerodynamics and design accuracies are often based on extrapolating from design standards and procedures widely used in the aerospace industry for commercial aircraft, most often with acceptable results. Engineering-level software such as Advanced Aircraft Analysis (AAA) uses general aviation aircraft data and extrapolates it onto UASs for aerodynamic and flight dynamics modeling, but is limited by its platform repository and relatively high Reynolds number evaluations. UASs, however, fly at comparatively low speeds and low Reynolds numbers, with close proximity between components, where such standards may not hold. This thesis focuses on evaluating the accuracy and impact of such industry standards on the aerodynamics and flight dynamics of UASs. A DG808s UAS, previously modeled with the AAA software by the Flight Systems Team at The University of Kansas, is chosen for the study. Using the STAR-CCM+ code, performance data were compared and assessed against AAA. Aerodynamic simulations were carried out for two configurations, viz., the aircraft with and without propeller slipstream effects. Data obtained for the non-powered simulations were in good agreement with the AAA model. For powered flight, however, discrepancies between the AAA model and the CFD data were observed, with large values for the vertical tail side-force coefficient. A comparison with system identification data from flight tests, using rudder deflection inputs, was made to confirm and validate this vertical tail behavior. A relationship between propeller RPM and the aerodynamic model was established by simulating two different propeller speeds. Based on the STAR-CCM+ data and the resulting comparisons with AAA, updates necessary to the UAS aerodynamic and flight dynamics models currently used in the industry were discussed, concluding with an emphasis on the need for higher-fidelity methods such as computational fluid dynamics.

  14. The public health benefits of insulation retrofits in existing housing in the United States

    PubMed Central

    Levy, Jonathan I; Nishioka, Yurika; Spengler, John D

    2003-01-01

    Background Methodological limitations make it difficult to quantify the public health benefits of energy efficiency programs. To address this issue, we developed a risk-based model to estimate the health benefits associated with marginal energy usage reductions and applied the model to a hypothetical case study of insulation retrofits in single-family homes in the United States. Methods We modeled energy savings with a regression model that extrapolated findings from an energy simulation program. Reductions of fine particulate matter (PM2.5) emissions and particle precursors (SO2 and NOx) were quantified using fuel-specific emission factors and marginal electricity analyses. Estimates of population exposure per unit emissions, varying by location and source type, were extrapolated from past dispersion model runs. Concentration-response functions for morbidity and mortality from PM2.5 were derived from the epidemiological literature, and economic values were assigned to health outcomes based on willingness to pay studies. Results In total, the insulation retrofits would save 800 TBTU (8 × 1014 British Thermal Units) per year across 46 million homes, resulting in 3,100 fewer tons of PM2.5, 100,000 fewer tons of NOx, and 190,000 fewer tons of SO2 per year. These emission reductions are associated with outcomes including 240 fewer deaths, 6,500 fewer asthma attacks, and 110,000 fewer restricted activity days per year. At a state level, the health benefits per unit energy savings vary by an order of magnitude, illustrating that multiple factors (including population patterns and energy sources) influence health benefit estimates. The health benefits correspond to $1.3 billion per year in externalities averted, compared with $5.9 billion per year in economic savings. Conclusion In spite of significant uncertainties related to the interpretation of PM2.5 health effects and other dimensions of the model, our analysis demonstrates that a risk-based methodology is viable for national-level energy efficiency programs. PMID:12740041
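
    The chain of calculations in this kind of risk-based model is simple to sketch. The Python fragment below walks the impact pathway from energy savings to monetized health benefits; every factor in it is an illustrative placeholder chosen only so the totals land near the figures quoted above, not a value taken from the study.

      # Hedged sketch of the impact pathway: energy savings -> avoided emissions
      # -> health outcomes -> dollars. All factors are hypothetical placeholders.

      energy_saved_tbtu = 800.0                  # TBTU/yr, from the abstract

      # Hypothetical fuel-mix emission factors (tons of pollutant per TBTU saved)
      emission_factors = {"PM2.5": 3.9, "NOx": 125.0, "SO2": 237.5}

      # Intake fractions and concentration-response slopes are collapsed here
      # into one illustrative factor per outcome (cases per ton of PM2.5).
      outcome_per_ton = {"deaths": 0.077, "asthma_attacks": 2.1}
      value_per_case = {"deaths": 6.0e6, "asthma_attacks": 42.0}   # $ (placeholder)

      avoided_tons = {p: f * energy_saved_tbtu for p, f in emission_factors.items()}
      pm_tons = avoided_tons["PM2.5"]            # simplification: direct PM2.5 only

      benefits = {o: r * pm_tons for o, r in outcome_per_ton.items()}
      dollars = sum(benefits[o] * value_per_case[o] for o in benefits)
      print(avoided_tons, benefits, f"${dollars:,.0f}/yr")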

  15. A study of lens opacification for a Mars mission

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Wilson, J. W.; Cox, A. B.; Lett, J. T.

    1991-01-01

    A method based on risk-related cross sections is used to estimate risks of 'stationary' cataracts caused by radiation exposures during extended missions in deep space. Estimates of the even more important risk of late degenerative cataractogenesis are made on the basis of the limited data available. Data on lenticular opacification in the New Zealand white rabbit, an animal model from which such results can be extrapolated to humans, are analyzed by the Langley cosmic ray shielding code (HZETRN) to generate estimates of stationary cataract formation resulting from a Mars mission. The effects of the composition of shielding material and the relationship between risk and LET are given, and the effects of target fragmentation on the risk coefficients are evaluated explicitly.

  16. Validation of a predictive model for survival and growth of Salmonella Typhimurium DT104 on chicken skin for extrapolation to a previous history of frozen storage

    USDA-ARS?s Scientific Manuscript database

    A predictive model for survival and growth of Salmonella Typhimurium DT104 on chicken skin was evaluated for its ability to predict survival and growth of the same organism after frozen storage for 6 days at -20 C. Experimental methods used to collect data for model development were the same as tho...

  17. Higgs compositeness in Sp(2N) gauge theories — The pure gauge model

    NASA Astrophysics Data System (ADS)

    Bennett, Ed; Ki Hong, Deog; Lee, Jong-Wan; David Lin, C.-J.; Lucini, Biagio; Piai, Maurizio; Vadacchino, Davide

    2018-03-01

    As a first step in the study of Sp(2N) composite Higgs models, we obtained a set of novel numerical results for the pure gauge Sp(4) lattice theory in 3+1 space-time dimensions. Results for the continuum extrapolations of the string tension and the glueball mass spectrum are presented and their values are compared with the same quantities in neighbouring SU(N) models.
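
    Continuum extrapolations of this kind are commonly performed by fitting lattice results linearly in the squared lattice spacing and reading off the value at zero spacing. A minimal sketch with invented data points follows; the linear-in-a² ansatz is the standard leading-order assumption, not necessarily the exact fit form used by the authors.

      # Hedged sketch of a continuum extrapolation: fit an observable linearly
      # in a^2 and evaluate the fit at a^2 = 0. The data points are invented.
      import numpy as np

      a2 = np.array([0.010, 0.020, 0.035, 0.050])        # (lattice spacing)^2
      m_glueball = np.array([3.41, 3.46, 3.54, 3.62])    # observable, e.g. m/sqrt(sigma)

      slope, intercept = np.polyfit(a2, m_glueball, 1)   # linear fit in a^2
      print(f"continuum value ~ {intercept:.3f} (slope {slope:.2f})")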

  18. Adaptive Blending of Model and Observations for Automated Short-Range Forecasting: Examples from the Vancouver 2010 Olympic and Paralympic Winter Games

    NASA Astrophysics Data System (ADS)

    Bailey, Monika E.; Isaac, George A.; Gultepe, Ismail; Heckman, Ivan; Reid, Janti

    2014-01-01

    An automated short-range forecasting system, adaptive blending of observations and model (ABOM), was tested in real time during the 2010 Vancouver Olympic and Paralympic Winter Games in British Columbia. Data at 1-min time resolution were available from a newly established, dense network of surface observation stations. Climatological data were not available at these new stations. This, combined with output from new high-resolution numerical models, provided a unique and exciting setting to test nowcasting systems in mountainous terrain during winter weather conditions. The ABOM method blends extrapolations in time of recent local observations with numerical weather prediction (NWP) model output to generate short-range point forecasts of surface variables out to 6 h. The relative weights of the model forecast and the observation extrapolation are based on performance over recent history. The average performance of ABOM nowcasts during February and March 2010 was evaluated using standard scores and thresholds important for Olympic events. Significant improvements over the model forecasts alone were obtained for continuous variables such as temperature, relative humidity, and wind speed. The small improvements for variables such as visibility and ceiling, which are subject to discontinuous changes, are attributed to the persistence component of ABOM.
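
    A minimal sketch of the blending idea follows: two candidate forecasts are combined with weights inversely proportional to their recent mean-squared errors. The window length and error metric are assumptions made for illustration, not the operational ABOM algorithm.

      # Hedged sketch of performance-weighted blending in the spirit of ABOM.
      import numpy as np

      def blended_nowcast(obs_extrap, nwp_forecast, recent_obs_err, recent_nwp_err):
          """Blend an observation-trend extrapolation with an NWP forecast,
          weighting each by the inverse of its recent mean-squared error."""
          w_obs = 1.0 / np.mean(np.square(recent_obs_err))
          w_nwp = 1.0 / np.mean(np.square(recent_nwp_err))
          return (w_obs * obs_extrap + w_nwp * nwp_forecast) / (w_obs + w_nwp)

      # Example: temperature nowcast where the model has recently run ~2 K warm,
      # so the observation extrapolation dominates the blend
      print(blended_nowcast(-3.1, -1.8, recent_obs_err=[0.4, -0.6, 0.5],
                            recent_nwp_err=[2.1, 1.7, 2.3]))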

  19. Long-Period Tidal Variations in the Length of Day

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Erofeeva, Svetlana Y.

    2014-01-01

    A new model of long-period tidal variations in length of day is developed. The model comprises 80 spectral lines with periods between 18.6 years and 4.7 days, and it consistently includes effects of mantle anelasticity and dynamic ocean tides for all lines. The anelastic properties follow Wahr and Bergen; experimental confirmation of their results now exists at the fortnightly period, but uncertainty remains when extrapolating to the longest periods. The ocean modeling builds on recent work with the fortnightly constituent, which suggests that oceanic tidal angular momentum can be reliably predicted at these periods without data assimilation. This is a critical property when modeling most long-period tides, for which little observational data exist. Dynamic ocean effects are quite pronounced at the shortest periods, as out-of-phase rotation components become nearly as large as in-phase components. The model is tested against a 20 year time series of space geodetic measurements of length of day. The current international standard model is shown to leave significant residual tidal energy, and the new model is found to mostly eliminate that energy, with especially large variance reduction for constituents Sa, Ssa, Mf, and Mt.

  20. Extrapolation procedures in Mott electron polarimetry

    NASA Technical Reports Server (NTRS)

    Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.

    1992-01-01

    In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
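
    A thickness extrapolation of the kind discussed can be sketched in a few lines: fit the measured asymmetry as a function of foil thickness and evaluate the fit at zero thickness. The linear fitting function and the data below are illustrative only; the paper's point is precisely that the choice of fitting function introduces systematic error.

      # Hedged sketch of a foil-thickness extrapolation to the single-atom
      # asymmetry A0. Thicknesses and asymmetries are invented values.
      import numpy as np

      t = np.array([50.0, 100.0, 200.0, 400.0])    # foil thickness (nm)
      A = np.array([0.21, 0.19, 0.16, 0.12])       # measured asymmetry

      coeffs = np.polyfit(t, A, 1)                 # A(t) ~ A0 + b*t (one choice)
      A0 = np.polyval(coeffs, 0.0)
      print(f"extrapolated single-atom asymmetry A0 ~ {A0:.3f}")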

  1. NLT and extrapolated DLT:3-D cinematography alternatives for enlarging the volume of calibration.

    PubMed

    Hinrichs, R N; McLean, S P

    1995-10-01

    This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object sufficiently large to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.

  2. Cost-effectiveness Analysis in R Using a Multi-state Modeling Survival Analysis Framework: A Tutorial.

    PubMed

    Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F

    2017-05-01

    This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward, as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models, and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create two common methods of visualizing the results, namely cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptations to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
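
    The tutorial itself is written in R around the mstate package. Purely to illustrate the bookkeeping that any such decision-analytic model performs, here is a minimal three-state Markov cohort trace in Python, with invented transition probabilities, costs, and utilities; it is a conceptual sketch, not the tutorial's method.

      # Hedged sketch: 3-state (healthy / ill / dead) Markov cohort model
      # accumulating discounted costs and QALYs over a 40-year horizon.
      import numpy as np

      P = np.array([[0.90, 0.07, 0.03],        # annual transition matrix
                    [0.00, 0.80, 0.20],        # (rows sum to 1)
                    [0.00, 0.00, 1.00]])
      cost = np.array([500.0, 5000.0, 0.0])    # annual cost per state (invented)
      qaly = np.array([0.95, 0.60, 0.0])       # annual utility per state (invented)

      state = np.array([1.0, 0.0, 0.0])        # whole cohort starts healthy
      total_cost = total_qaly = 0.0
      for year in range(1, 41):                # 3.5% annual discounting
          state = state @ P
          disc = 1.035 ** -year
          total_cost += disc * state @ cost
          total_qaly += disc * state @ qaly
      print(f"cost {total_cost:,.0f}, QALYs {total_qaly:.2f}")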

  3. Resolution enhancement by extrapolation of coherent diffraction images: a quantitative study on the limits and a numerical study of nonbinary and phase objects.

    PubMed

    Latychevskaia, T; Chushkin, Y; Fink, H-W

    2016-10-01

    In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by postextrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although there could be in principle other methods to achieve extrapolation, we devote our discussion to employing iterative phase retrieval methods and demonstrate their limits. We present two numerical studies; namely, the extrapolation of diffraction patterns of nonbinary and that of phase objects together with a discussion of the optimal extrapolation procedure. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
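
    The flavor of such an iterative extrapolation can be conveyed with a short error-reduction-style loop: enforce the measured amplitudes inside the detector aperture, enforce a support constraint in object space, and let the signal outside the aperture develop freely. This is a generic sketch of the technique, not the authors' algorithm, and the masks and test object are toy constructions.

      # Hedged sketch of diffraction-pattern extrapolation by iterative phase
      # retrieval with a support constraint.
      import numpy as np

      def extrapolate(measured_amp, aperture, support, n_iter=500):
          """measured_amp: |FFT| of the object, zeroed outside `aperture`;
          aperture, support: boolean masks in Fourier and object space."""
          rng = np.random.default_rng(0)
          field = measured_amp * np.exp(1j * 2 * np.pi * rng.random(measured_amp.shape))
          for _ in range(n_iter):
              obj = np.fft.ifft2(field)
              obj *= support                       # object-space constraint
              field = np.fft.fft2(obj)
              # keep retrieved values outside the aperture, reimpose data inside
              field[aperture] = measured_amp[aperture] * np.exp(
                  1j * np.angle(field[aperture]))
          return np.abs(field)                     # extrapolated amplitude

      # Toy usage: low frequencies "measured", high frequencies extrapolated
      N = 64
      freq = np.fft.fftfreq(N)                     # unshifted FFT layout
      fy, fx = np.meshgrid(freq, freq, indexing="ij")
      aperture = np.sqrt(fx ** 2 + fy ** 2) < 0.25
      support = np.zeros((N, N), bool)
      support[24:40, 24:40] = True                 # known object support
      obj = np.zeros((N, N))
      obj[28:36, 28:36] = 1.0                      # oversampled test object
      measured = np.abs(np.fft.fft2(obj)) * aperture
      extended = extrapolate(measured, aperture, support)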

  4. APPROACHES TO EXTRAPOLATING EFFECTS OF EDCS ACROSS BIOLOGICAL LEVELS OF ORGANIZATION IN FISH

    EPA Science Inventory

    A challenge in ecological risk assessments is to obtain, in a resource-effective manner, information that provides insight both into chemical mode/mechanism of action (MOA) and adverse effects in individual animals, which are indicative of potential population-level responses. T...

  5. Temperature-dependent absorption cross sections for hydrogen peroxide vapor

    NASA Technical Reports Server (NTRS)

    Nicovich, J. M.; Wine, P. H.

    1988-01-01

    Relative absorption cross sections for hydrogen peroxide vapor were measured over the temperature ranges 285-381 K for lambda = 230-295 nm and 300-381 K for lambda = 193-350 nm. The well-established 298 K cross sections at 202.6 and 228.8 nm were used as an absolute calibration. A significant temperature dependence was observed at the important tropospheric photolysis wavelengths (lambda over 300 nm). Measured cross sections were extrapolated to lower temperatures using a simple model which attributes the observed temperature dependence to enhanced absorption by molecules possessing one quantum of O-O stretch vibrational excitation. Upper tropospheric photodissociation rates calculated using the extrapolated cross sections are about 25 percent lower than those calculated using currently recommended 298 K cross sections.
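
    The simple model described, in which the temperature dependence arises from a thermally populated vibrational hot band, can be sketched directly. The O-O stretch energy used below (880 cm⁻¹) and the cross-section values are assumed illustrative numbers, not the paper's fitted parameters.

      # Hedged sketch: two-state hot-band model for sigma(T),
      # sigma(T) = (1 - f) * sigma_cold + f * sigma_hot, with Boltzmann fraction f.
      import numpy as np

      KB_CM = 0.6950356        # Boltzmann constant in cm^-1 per K
      E_VIB = 880.0            # assumed O-O stretch energy, cm^-1

      def boltzmann_fraction(T):
          """Fraction of molecules with one quantum of O-O stretch (two-level)."""
          x = np.exp(-E_VIB / (KB_CM * T))
          return x / (1.0 + x)

      def sigma(T, sigma_cold, sigma_hot):
          f = boltzmann_fraction(T)
          return (1.0 - f) * sigma_cold + f * sigma_hot

      # Extrapolate to an upper-tropospheric temperature (arbitrary units)
      print(sigma(220.0, sigma_cold=1.0e-21, sigma_hot=8.0e-21))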

  6. Computer and laboratory simulation of interactions between spacecraft surfaces and charged-particle environments

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high-voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolation from the small test samples to space conditions is made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.

  7. Evaluation of the Uncertainty in JP-7 Kinetics Models Applied to Scramjets

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    2017-01-01

    One of the challenges of designing and flying a scramjet-powered vehicle is the difficulty of preflight testing. Ground tests at realistic flight conditions introduce several sources of uncertainty to the flow that must be addressed. For example, the scales of the available facilities limit the size of vehicles that can be tested, so performance metrics for larger flight vehicles must be extrapolated from ground tests at smaller scales. To create the correct flow enthalpy for higher Mach number flows, most tunnels use a heater that introduces vitiates into the flow. At these conditions, the effect of the vitiates on the combustion process is of particular interest to the engine designer, since the ground test results must be extrapolated to flight conditions. In this paper, the uncertainty of the cracked JP-7 chemical kinetics used in the modeling of a hydrocarbon-fueled scramjet was investigated. The factors identified as contributing to uncertainty in the combustion process were the level of flow vitiation, the uncertainty of the kinetic model coefficients, and the variation of flow properties between ground testing and flight. The method employed was to run simulations of small, unit problems and identify which variables were the principal sources of uncertainty for the mixture temperature. Then, using this resulting subset of all the variables, the effects of the uncertainty caused by the chemical kinetics on a representative scramjet flow-path for both vitiated (ground) and nonvitiated (flight) flows were investigated. The simulations showed that only a few of the kinetic rate equations contribute to the uncertainty in the unit problem results, and when applied to the representative scramjet flowpath, the resulting temperature variability was on the order of 100 K. Both the vitiated and clean air results showed very similar levels of uncertainty, and the differences between the mean properties were generally within the range of uncertainty predicted.
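
    One common way to carry out the kind of uncertainty propagation described is Monte Carlo sampling of the rate coefficients. The sketch below perturbs a single Arrhenius pre-exponential factor within an assumed bound and records the spread in a toy temperature surrogate; the distribution, bound, and surrogate are all assumptions made for illustration.

      # Hedged sketch: Monte Carlo propagation of Arrhenius-rate uncertainty.
      import numpy as np

      rng = np.random.default_rng(0)

      def arrhenius(A, Ea, T):
          return A * np.exp(-Ea / (8.314 * T))   # Ea in J/mol, T in K

      A_nom, Ea, T = 1.0e9, 1.5e5, 1200.0        # nominal values (invented)
      uncertainty_factor = 3.0                    # assumed bound on A

      # Log-uniform samples of A within a factor of 3 of nominal
      samples = A_nom * uncertainty_factor ** rng.uniform(-1, 1, 10_000)
      rates = arrhenius(samples, Ea, T)

      # Toy surrogate: temperature rise scaled to ~100 K at the nominal rate
      d_temperature = 100.0 * rates / arrhenius(A_nom, Ea, T)
      print(np.percentile(d_temperature, [2.5, 50.0, 97.5]))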

  8. An approach to the origin of self-replicating system. I - Intermolecular interactions

    NASA Technical Reports Server (NTRS)

    Macelroy, R. D.; Coeckelenbergh, Y.; Rein, R.

    1978-01-01

    The present paper deals with the characteristics and potentialities of a recently developed computer-based molecular modeling system. Some characteristics of current coding systems are examined and are extrapolated to the apparent requirements of primitive prebiological coding systems.

  9. Prediction of a Therapeutic Dose for Buagafuran, a Potent Anxiolytic Agent by Physiologically Based Pharmacokinetic/Pharmacodynamic Modeling Starting from Pharmacokinetics in Rats and Human.

    PubMed

    Yang, Fen; Wang, Baolian; Liu, Zhihao; Xia, Xuejun; Wang, Weijun; Yin, Dali; Sheng, Li; Li, Yan

    2017-01-01

    Physiologically based pharmacokinetic (PBPK)/pharmacodynamic (PD) models can contribute to animal-to-human extrapolation and therapeutic dose predictions. Buagafuran is a novel anxiolytic agent, and phase I clinical trials of buagafuran have been completed. In this paper, a potentially effective dose for buagafuran of 30 mg t.i.d. in humans was estimated, based on the human brain concentration predicted by PBPK/PD modeling. The software GastroPlus™ was used to build the PBPK/PD model for buagafuran in rats, which related the brain tissue concentrations of buagafuran to the number of times animals entered the open arms in the elevated plus-maze pharmacological model. Buagafuran concentrations in human plasma were fitted and brain tissue concentrations were predicted using a human PBPK model in which the predicted plasma profiles were in good agreement with observations. The results provided supportive data for the rational clinical use of buagafuran.

  10. Linking morphodynamic response with sediment mass balance on the Colorado River in Marble Canyon: issues of scale, geomorphic setting, and sampling design

    USGS Publications Warehouse

    Grams, Paul E.; Topping, David J.; Schmidt, John C.; Hazel, Joseph E.; Kaplinski, Matt

    2013-01-01

    Measurements of morphologic change are often used to infer sediment mass balance. Such measurements may, however, result in gross errors when morphologic changes over short reaches are extrapolated to predict changes in sediment mass balance for long river segments. This issue is investigated by examination of morphologic change and sediment influx and efflux for a 100 km segment of the Colorado River in Grand Canyon, Arizona. For each of four monitoring intervals within a 7 year study period, the direction of sand-storage response within short morphologic monitoring reaches was consistent with the flux-based sand mass balance. Both budgeting methods indicate that sand storage was stable or increased during the 7 year period. Extrapolation of the morphologic measurements outside the monitoring reaches does not, however, provide a reasonable estimate of the magnitude of sand-storage change for the 100 km study area. Extrapolation results in large errors, because there is large local variation in site behavior driven by interactions between the flow and local bed topography. During the same flow regime and reach-average sediment supply, some locations accumulate sand while others evacuate sand. The interaction of local hydraulics with local channel geometry exerts more control on local morphodynamic response than sand supply over an encompassing river segment. Changes in the upstream supply of sand modify bed responses but typically do not completely offset the effect of local hydraulics. Thus, accurate sediment budgets for long river segments inferred from reach-scale morphologic measurements must incorporate the effect of local hydraulics in a sampling design or avoid extrapolation altogether.

  11. Acute toxicity value extrapolation with fish and aquatic invertebrates

    USGS Publications Warehouse

    Buckler, Denny R.; Mayer, Foster L.; Ellersieck, Mark R.; Asfaw, Amha

    2005-01-01

    Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled “Ecological Risk Analysis” (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive to contaminants than fish are.
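
    The species-sensitivity-distribution logic behind a tool like ERA can be sketched compactly: fit a log-normal distribution to a minimum set of acute LC50 values and take a low percentile as the protective concentration. The LC50 values below are invented; only the 0.412 rainbow trout extrapolation factor is taken from the abstract.

      # Hedged sketch: species sensitivity distribution fit and HC5 estimate.
      import numpy as np
      from scipy import stats

      # Invented acute LC50s (ug/L) for a minimum data set, e.g.
      # rainbow trout, bluegill, daphnid, mysid
      lc50 = np.array([4.2, 11.0, 35.0, 78.0])

      mu = np.mean(np.log(lc50))
      sigma = np.std(np.log(lc50), ddof=1)
      hc5 = np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma))
      print(f"HC5 ~ {hc5:.2f} ug/L")

      # Single-species fallback, as described above: multiply a rainbow trout
      # LC50 by the reported extrapolation factor of 0.412
      print(f"trout-based estimate ~ {lc50[0] * 0.412:.2f} ug/L")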

  12. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  13. Carrying capacity for species richness as context for conservation: a case study of North American birds

    Treesearch

    Andrew J. Hansen; Linda Bowers Phillips; Curtis H. Flather; Jim Robinson-Cox

    2011-01-01

    We evaluated the leading hypotheses on biophysical factors affecting species richness for Breeding Bird Survey routes from areas with little influence of human activities.We then derived a best model based on information theory, and used this model to extrapolate SK across North America based on the biophysical predictor variables. The predictor variables included the...

  14. Agent-Based Computational Modeling of Cell Culture: Understanding Dosimetry In Vitro as Part of In Vitro to In Vivo Extrapolation

    EPA Science Inventory

    Quantitative characterization of cellular dose in vitro is needed for alignment of doses in vitro and in vivo. We used the agent-based software, CompuCell3D (CC3D), to provide a stochastic description of cell growth in culture. The model was configured so that isolated cells assu...

  15. Short hold times in dynamic vapor sorption measurements mischaracterize the equilibrium moisture content of wood

    Treesearch

    Samuel V. Glass; Charles R. Boardman; Samuel L. Zelinka

    2017-01-01

    Recently, the dynamic vapor sorption (DVS) technique has been used to measure sorption isotherms and develop moisture-mechanics models for wood and cellulosic materials. This method typically involves measuring the time-dependent mass response of a sample following step changes in relative humidity (RH), fitting a kinetic model to the data, and extrapolating the...

  16. Kinetics of process of product separation in closed system with recirculation

    NASA Astrophysics Data System (ADS)

    Prokopenko, V. S.; Orekhova, T. N.; Goncharov, E. I.; Odobesko, I. A.

    2018-03-01

    The object of this article is to model the classification of material passing through a closed milling circuit whose cleaning system includes a separator, concentrator, cyclone, and recycle loop. For given parameters, the model predicts the coarseness of grading of the finished product.

  17. Implementing Vocational Education in the Schools: An Alternative Curriculum.

    ERIC Educational Resources Information Center

    Illinois State Board of Education, Springfield. Dept. of Adult, Vocational and Technical Education.

    This extrapolation paper is intended: (1) to present a model of a pretechnical curriculum that has as its focus the self-empowerment of the individual and (2) to describe how the curriculum could be implemented in the schools. The first part of the paper discusses the need for a pretechnical curriculum in terms of a model for self-empowerment.…

  18. Sixth international radiopharmaceutical dosimetry symposium: Proceedings. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.-Stelson, A.T.; Stabin, M.G.; Sparks, R.B.

    1999-01-01

    This conference was held May 7--10 in Gatlinburg, Tennessee. The purpose of this conference was to provide a multidisciplinary forum for exchange of state-of-the-art information on radiopharmaceutical dosimetry. Attention is focused on the following: quantitative analysis and treatment planning; cellular and small-scale dosimetry; dosimetric models; radiopharmaceutical kinetics and dosimetry; and animal models, extrapolation, and uncertainty.

  19. Viscoelastic shear properties of human vocal fold mucosa: theoretical characterization based on constitutive modeling.

    PubMed

    Chan, R W; Titze, I R

    2000-01-01

    The viscoelastic shear properties of human vocal fold mucosa (cover) were previously measured as a function of frequency [Chan and Titze, J. Acoust. Soc. Am. 106, 2008-2021 (1999)], but data were obtained only in a frequency range of 0.01-15 Hz, an order of magnitude below typical frequencies of vocal fold oscillation (on the order of 100 Hz). This study represents an attempt to extrapolate the data to higher frequencies based on two viscoelastic theories, (1) a quasilinear viscoelastic theory widely used for the constitutive modeling of the viscoelastic properties of biological tissues [Fung, Biomechanics (Springer-Verlag, New York, 1993), pp. 277-292], and (2) a molecular (statistical network) theory commonly used for the rheological modeling of polymeric materials [Zhu et al., J. Biomech. 24, 1007-1018 (1991)]. Analytical expressions of elastic and viscous shear moduli, dynamic viscosity, and damping ratio based on the two theories with specific model parameters were applied to curve-fit the empirical data. Results showed that the theoretical predictions matched the empirical data reasonably well, allowing for parametric descriptions of the data and their extrapolations to frequencies of phonation.
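
    The extrapolation step itself reduces to fitting a parametric curve over the measured band and evaluating it at phonation frequencies. In the sketch below, a simple power law stands in for the quasilinear and molecular theories actually used in the paper, and the data points are invented.

      # Hedged sketch: power-law fit of the elastic shear modulus G'(f) on the
      # measured low-frequency band, extrapolated to a phonation frequency.
      import numpy as np

      f = np.array([0.1, 0.3, 1.0, 3.0, 10.0])         # Hz, measured range
      G_elastic = np.array([12.0, 15.0, 19.0, 24.0, 31.0])   # Pa, invented

      # log-log linear fit: G'(f) = c * f^k
      k, log_c = np.polyfit(np.log(f), np.log(G_elastic), 1)
      G_100Hz = np.exp(log_c) * 100.0 ** k
      print(f"extrapolated G' at 100 Hz ~ {G_100Hz:.0f} Pa (exponent {k:.2f})")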

  20. Forecasting United States heartworm Dirofilaria immitis prevalence in dogs.

    PubMed

    Bowman, Dwight D; Liu, Yan; McMahan, Christopher S; Nordone, Shila K; Yabsley, Michael J; Lund, Robert B

    2016-10-10

    This paper forecasts next year's canine heartworm prevalence in the United States from 16 climate, geographic and societal factors. The forecast's construction and an assessment of its performance are described. The forecast is based on a spatial-temporal conditional autoregressive model fitted to over 31 million antigen heartworm tests conducted in the 48 contiguous United States during 2011-2015. The forecast uses county-level data on 16 predictive factors, including temperature, precipitation, median household income, local forest and surface water coverage, and presence/absence of eight mosquito species. Non-static factors are extrapolated into the forthcoming year with various statistical methods. The fitted model and factor extrapolations are used to estimate next year's regional prevalence. The correlation between the observed and model-estimated county-by-county heartworm prevalence for the 5-year period 2011-2015 is 0.727, demonstrating reasonable model accuracy. The correlation between 2015 observed and forecasted county-by-county heartworm prevalence is 0.940, demonstrating significant skill and showing that heartworm prevalence can be forecasted reasonably accurately. The forecast presented herein can a priori alert veterinarians to areas expected to see higher than normal heartworm activity. The proposed methods may prove useful for forecasting other diseases.

  1. Static and dynamic characterization of alluvial deposits in the Tiber River Valley: New data for assessing potential ground motion in the City of Rome

    NASA Astrophysics Data System (ADS)

    Bozzano, F.; Caserta, A.; Govoni, A.; Marra, F.; Martino, S.

    2008-01-01

    The paper presents the results of a case study conducted on the Holocene alluvial deposits of the Tiber River valley, in the city of Rome. The main test site selected for the study, Valco S. Paolo, is located about 2 km south of Rome's historical centre. The alluvial deposits were dynamically characterized in a comprehensive way via site investigations and geotechnical laboratory tests. Normalized shear modulus decay and damping curves (G/G0 and D/D0 vs γ) were obtained for the dominantly fine-grained levels. The curves demonstrate that these levels have a more marked shear stiffness decay than the underlying Pliocene bedrock. Decay curves from laboratory tests for the Tiber alluvia correlated well with the trend of the function proposed by Hardin and Drnevich, making it possible to derive their specific interpolation function coefficients. The findings from the Valco S. Paolo test site were extrapolated to a large part of Rome's historical centre by means of two other test sites, supported by an engineering-geology model of the complex spatial distribution of the Tiber alluvia. The experimental Valco S. Paolo Vs profile was extrapolated to the other test sites on the basis of a stratigraphic criterion; the analysis of seismic noise measurements obtained for the three test sites validated the engineering-geology-based extrapolation and showed that the main rigidity contrast occurs inside the alluvial body (at the contact with the underlying basal gravel, level G) and not between the alluvia and the Plio-Pleistocene bedrock, composed of highly consistent clay (Marne Vaticane). The 1D modeling of the local seismic response to the maximum expected earthquakes in the city of Rome confirms that the deposits have one principal mode of vibration at about 1 Hz. However, the simulation also showed that the silty-clay deposits (level C), which make up most of the Tiber alluvial body, play a key role in the soil column deformation profile, since they can be affected by non-linear effects induced by the maximum expected earthquake when certain stratigraphic conditions are satisfied.

  2. Image-based optimization of coronal magnetic field models for improved space weather forecasting

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.; MacNeice, P. J.

    2017-12-01

    The existing space weather forecasting frameworks show a significant dependence on the accuracy of the photospheric magnetograms and the extrapolation models used to reconstruct the magnetic field in the solar corona. Minor uncertainties in the magnetic field magnitude and direction near the Sun, when propagated through the heliosphere, can lead to unacceptable prediction errors at 1 AU. We argue that ground-based and satellite coronagraph images can provide valid geometric constraints that could be used to improve coronal magnetic field extrapolation results, enabling more reliable forecasts of extreme space weather events such as major CMEs. In contrast to the previously developed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions up to 1-2 solar radii above the photosphere. By applying the developed image processing techniques to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code developed by S. Jones et al. (ApJ 2016, 2017). Our tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona, and they lead to a more consistent reconstruction of the large-scale coronal magnetic field geometry and potentially more accurate global heliospheric simulation results. Several upcoming data products for the space weather forecasting community will also be discussed.

  3. Function approximation and documentation of sampling data using artificial neural networks.

    PubMed

    Zhang, Wenjun; Barrion, Albert

    2006-11-01

    Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to approximate functions of sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, are fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and a power function were tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. BP and RBF networks can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network can be used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train, and its results are less stable, compared to the RBF network. The RBF network requires more neurons to fit functions and generally should not be used to extrapolate the functions. The mathematical function for sampling data can be exactly fitted using artificial neural network algorithms by adjusting the desired accuracy and maximum number of iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, similar to the observed richness.

  4. Low temperature measurement of the vapor pressures of planetary molecules

    NASA Technical Reports Server (NTRS)

    Kraus, George F.

    1989-01-01

    Interpretation of planetary observations and proper modeling of planetary atmospheres depend critically upon accurate laboratory data for the chemical and physical properties of the constituents of the atmospheres. It is important that these data be taken over the appropriate range of parameters such as temperature, pressure, and composition. Availability of accurate laboratory data for vapor pressures and equilibrium constants of condensed species at low temperatures is essential for photochemical and cloud models of the atmospheres of the outer planets. In the absence of such data, modelers have no choice but to assume values based on an educated guess. In those cases where higher-temperature data are available, a standard procedure is to extrapolate these points to the lower temperatures using the Clausius-Clapeyron equation. Last summer the vapor pressures of acetylene (C2H2), hydrogen cyanide (HCN), and cyanoacetylene (HC3N) were measured using two different methods. At the higher temperatures, 1 torr and 10 torr capacitance manometers (CM) were used. To measure very low pressures, a technique was used which is based on the infrared absorption of thin films (TFIR). This summer the vapor pressure of acetylene was measured using the TFIR method. The vapor pressure of hydrogen sulfide (H2S) was measured using capacitance manometers. Results for H2S agree with literature data over the common range of temperature. At the lower temperatures the data lie slightly below the values predicted by extrapolation of the Clausius-Clapeyron equation. Thin-film infrared (TFIR) data for acetylene lie significantly below the values predicted by extrapolation. It is hoped that the gap between the low end of the CM data and the upper end of the TFIR data can be bridged in the future using a new spinning rotor gauge.
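
    The standard extrapolation procedure mentioned here is a two-parameter fit. As a minimal sketch with invented data, the Clausius-Clapeyron form ln p = A - B/T can be fit to the higher-temperature points and evaluated at lower temperatures:

      # Hedged sketch: Clausius-Clapeyron extrapolation of vapor pressure.
      import numpy as np

      T = np.array([180.0, 190.0, 200.0, 210.0])   # K, "measured" range
      p = np.array([0.08, 0.35, 1.30, 4.20])       # torr, invented values

      # ln p = intercept + slope * (1/T); slope ~ -delta_H_sub / R
      slope, intercept = np.polyfit(1.0 / T, np.log(p), 1)
      p_150K = np.exp(intercept + slope / 150.0)
      print(f"extrapolated vapor pressure at 150 K ~ {p_150K:.2e} torr")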

  5. Extrapolative capability of two models for estimating the soil water retention curve between saturation and oven dryness.

    PubMed

    Lu, Sen; Ren, Tusheng; Lu, Yili; Meng, Ping; Sun, Shiyou

    2014-01-01

    Accurate estimation of the soil water retention curve (SWRC) in the dry region is required to describe the relation between soil water content and matric suction from saturation to oven dryness. In this study, the extrapolative capability of two models for predicting the complete SWRC from limited ranges of soil water retention data was evaluated. When the model parameters were obtained from SWRC data in the 0-1500 kPa range, the FX model (Fredlund and Xing, 1994) estimates agreed well with measurements from saturation to oven dryness, with RMSEs less than 0.01. The GG model (Groenevelt and Grant, 2004) produced larger errors in the dry region, with significantly larger RMSEs and MEs than the FX model. Further evaluations indicated that when SWRC measurements in the 0-100 kPa suction range were used for model establishment, the FX model was capable of producing acceptable SWRCs across the entire water content range. For higher accuracy, the FX model requires soil water retention data at least in the 0-300 kPa range to extend the SWRC to oven dryness. Compared with the Khlosi et al. (2006) model, which requires measurements in the 0-500 kPa range to reproduce the complete SWRC, the FX model has the advantage of requiring fewer SWRC measurements. The FX modeling approach thus has the potential to eliminate the need to measure soil water retention in the dry range.
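
    For reference, the FX retention function being evaluated has a closed form that is easy to compute. The sketch below implements the commonly cited Fredlund-Xing expression with the correction factor that forces zero water content at oven dryness (10^6 kPa); the parameter values are illustrative, not the fitted ones from the study.

      # Hedged sketch of the Fredlund-Xing (FX) soil water retention model.
      import numpy as np

      def fx_swrc(psi, theta_s, a, n, m, psi_r=1500.0):
          """Volumetric water content at matric suction psi (kPa), with the
          correction factor driving theta to zero at 10^6 kPa."""
          correction = 1.0 - np.log(1.0 + psi / psi_r) / np.log(1.0 + 1.0e6 / psi_r)
          return correction * theta_s / np.log(np.e + (psi / a) ** n) ** m

      psi = np.logspace(-1, 6, 8)                  # 0.1 kPa .. 10^6 kPa
      print(fx_swrc(psi, theta_s=0.45, a=10.0, n=1.5, m=1.2))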

  6. Methods to determine the growth domain in a multidimensional environmental space.

    PubMed

    Le Marc, Yvan; Pin, Carmen; Baranyi, József

    2005-04-15

    Data from a database on microbial responses to the food environment (ComBase, see www.combase.cc) were used to study the growth boundary of several pathogens (Aeromonas hydrophila, Escherichia coli, Listeria monocytogenes, Yersinia enterocolitica). Two methods were used to evaluate the growth/no growth interface. The first is an application of the Minimum Convex Polyhedron (MCP) introduced by Baranyi et al. [Baranyi, J., Ross, T., McMeekin, T., Roberts, T.A., 1996. The effect of parameterisation on the performance of empirical models used in predictive microbiology. Food Microbiol. 13, 83-91]. The second method applies logistic regression to define the boundary of growth. The combination of these two different techniques can be a useful tool for handling the problem of extrapolation of predictive models at the growth limits.
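
    The logistic-regression side of this approach is straightforward to sketch: regress a binary grew/did-not-grow outcome on environmental factors and treat the 0.5 probability contour as the interface. The tiny temperature-pH data set below is invented for illustration.

      # Hedged sketch: growth/no-growth boundary via logistic regression.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Invented conditions: columns are temperature (C) and pH
      X = np.array([[5, 5.0], [10, 5.5], [15, 6.0], [25, 7.0],
                    [30, 7.0], [8, 4.6], [12, 4.8], [35, 7.2]])
      y = np.array([0, 0, 1, 1, 1, 0, 0, 1])       # 1 = growth observed

      clf = LogisticRegression().fit(X, y)
      # The P(growth) = 0.5 contour is the estimated interface;
      # query the probability of growth for a new condition:
      print(clf.predict_proba([[14, 5.8]])[0, 1])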

  7. Radiation environment for ATS-F. [including ambient trapped particle fluxes

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1974-01-01

    The ambient trapped particle fluxes incident on the ATS-F satellite were determined. Several synchronous circular flight paths were evaluated and the effect of parking longitude on vehicle encountered intensities was investigated. Temporal variations in the electron environment were considered and partially accounted for. Magnetic field calculations were performed with a current field model extrapolated to a later epoch with linear time terms. Orbital flux integrations were performed with the latest proton and electron environment models using new improved computational methods. The results are presented in graphical and tabular form; they are analyzed, explained, and discussed. Estimates of energetic solar proton fluxes are given for a one year mission at selected integral energies ranging from 10 to 100 Mev, calculated for a year of maximum solar activity during the next solar cycle.

  8. On forecasting mortality.

    PubMed

    Olshansky, S J

    1988-01-01

    Official forecasts of mortality made by the U.S. Office of the Actuary throughout this century have consistently underestimated observed mortality declines. This is due, in part, to their reliance on the static extrapolation of past trends, an atheoretical statistical method that pays scant attention to the behavioral, medical, and social factors contributing to mortality change. A "multiple cause-delay model" more realistically portrays the effects on mortality of the presence of more favorable risk factors at the population level. Such revised assumptions produce large increases in forecasts of the size of the elderly population, and have a dramatic impact on related estimates of population morbidity, disability, and health care costs.

  9. 76 FR 33752 - Notice of Availability of the External Review Draft of the Guidance for Applying Quantitative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-09

    Announces the availability of the external review draft of the Guidance for Applying Quantitative Data to Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies Extrapolation.

  10. Quantitative Systems Pharmacology Modeling of Acid Sphingomyelinase Deficiency and the Enzyme Replacement Therapy Olipudase Alfa Is an Innovative Tool for Linking Pathophysiology and Pharmacology.

    PubMed

    Kaddi, Chanchala D; Niesner, Bradley; Baek, Rena; Jasper, Paul; Pappas, John; Tolsma, John; Li, Jing; van Rijn, Zachary; Tao, Mengdi; Ortemann-Renon, Catherine; Easton, Rachael; Tan, Sharon; Puga, Ana Cristina; Schuchman, Edward H; Barrett, Jeffrey S; Azer, Karim

    2018-06-19

    Acid sphingomyelinase deficiency (ASMD) is a rare lysosomal storage disorder with heterogeneous clinical manifestations, including hepatosplenomegaly and infiltrative pulmonary disease, and is associated with significant morbidity and mortality. Olipudase alfa (recombinant human acid sphingomyelinase) is an enzyme replacement therapy under development for the non-neurological manifestations of ASMD. We present a quantitative systems pharmacology (QSP) model supporting the clinical development of olipudase alfa. The model is multiscale and mechanistic, linking the enzymatic deficiency driving the disease to molecular-level, cellular-level, and organ-level effects. Model development was informed by natural history, and preclinical and clinical studies. By considering patient-specific pharmacokinetic (PK) profiles and indicators of disease severity, the model describes pharmacodynamic (PD) and clinical end points for individual patients. The ASMD QSP model provides a platform for quantitatively assessing systemic pharmacological effects in adult and pediatric patients, and explaining variability within and across these patient populations, thereby supporting the extrapolation of treatment response from adults to pediatrics. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  11. Scientific Objectives of the Critical Viscosity Experiment

    NASA Technical Reports Server (NTRS)

    Berg, R. F.; Moldover, M. R.

    1993-01-01

    In microgravity, the Critical Viscosity Experiment will measure the viscosity of xenon 15 times closer to the critical point than is possible on Earth. The results are expected to include the first direct observation of the predicted power-law divergence of viscosity in a pure fluid, and they will test calculations of the value of the exponent associated with the divergence. The results, when combined with Zeno's decay-rate data, will strengthen the test of mode coupling theory. Without microgravity viscosity data, the Zeno test will require an extrapolation of existing 1-g viscosity data by as much as a factor of 100 in reduced temperature. By necessity, the extrapolation would use an incompletely verified theory of viscosity crossover. With the microgravity viscosity data, the reliance on crossover models will be negligible, allowing a more reliable extrapolation. During the past year, new theoretical calculations for the viscosity exponent finally achieved consistency with the best experimental data for pure fluids. This report gives the justification for the proposed microgravity Critical Viscosity Experiment in this new context. This report also combines for the first time the best available light scattering data with our recent viscosity data to demonstrate the current status of tests of mode coupling theory.

  12. Augmenting Species Diversity in Water Quality Criteria Derivation using Interspecies Correlation Models

    EPA Science Inventory

    The specific requirements for taxa diversity of the 1985 guidelines have limited the number of ambient water quality criteria (AWQC) developed for aquatic life protection. The EPA developed the Web-based Interspecies Correlation Estimation (Web-ICE) tool to allow extrapolation of...
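
    Although the Web-ICE tool itself is not reproduced here, ICE models are least-squares regressions of log-transformed acute toxicity for a predicted taxon against that of a surrogate species. A minimal sketch of that fitting-and-extrapolation step, with entirely fabricated toxicity values, follows:

      # Hedged sketch of an interspecies correlation estimation (ICE) style
      # regression, in the spirit of (but not taken from) EPA's Web-ICE tool.
      # All toxicity values below are fabricated for illustration.
      import numpy as np

      surrogate = np.array([0.5, 1.2, 3.4, 10.0, 55.0])  # surrogate LC50, mg/L
      predicted = np.array([0.8, 2.0, 4.1, 16.0, 70.0])  # predicted-taxon LC50, mg/L

      # Fit log10(predicted) = a + b * log10(surrogate)
      b, a = np.polyfit(np.log10(surrogate), np.log10(predicted), 1)

      def estimate_lc50(surrogate_lc50):
          """Extrapolate an LC50 for the untested taxon from a surrogate LC50."""
          return 10 ** (a + b * np.log10(surrogate_lc50))

      print(f"log10(pred) = {a:.2f} + {b:.2f} * log10(surr)")
      print(f"LC50 estimate for a surrogate LC50 of 5.0 mg/L: {estimate_lc50(5.0):.2f} mg/L")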

  13. Reducing uncertainty in species sensitivity distributions with interspecies toxicity estimation models

    EPA Science Inventory

    Determining harmful levels of contaminants for a wide range of species is limited by available toxicological data. Ecological risk assessments typically depend on empirical data from only a few species that represent a narrow range of the life history spectrum. Extrapolation mo...

  14. Genomic Indicators in the blood predict drug-induced liver injury

    EPA Science Inventory

    Hepatotoxicity and other forms of liver injury stemming from exposure to toxicants and idiosyncratic drug reactions are major concerns during the drug discovery process. Animal model systems have been utilized in an attempt to extrapolate the risk of harmful agents to humans and...

  15. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized-bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement does, however, yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
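
    The grid-convergence procedure the abstract refers to is standard enough to sketch. Given solutions on three systematically refined grids, one estimates the observed order of accuracy, Richardson-extrapolates to a nominally grid-free value, and reports the grid convergence index (GCI) as an uncertainty band; the solution values below are illustrative only:

      # Richardson extrapolation and grid convergence index (GCI) from three
      # grids, following the usual Roache procedure. Values are hypothetical.
      import math

      f1, f2, f3 = 0.4520, 0.4565, 0.4700  # fine, medium, coarse solutions
      r = 2.0                              # constant grid refinement ratio
      Fs = 1.25                            # recommended safety factor (3 grids)

      # Observed order of accuracy from the three solutions
      p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)

      # Richardson-extrapolated estimate of the grid-free value
      f_exact = f1 + (f1 - f2) / (r**p - 1.0)

      # Fine-grid GCI: relative uncertainty band around f1, in percent
      gci_fine = 100.0 * Fs * abs((f1 - f2) / f1) / (r**p - 1.0)

      print(f"observed order p = {p:.2f}")           # ~1.58 for these values
      print(f"extrapolated value = {f_exact:.4f}")   # ~0.4498
      print(f"fine-grid GCI = {gci_fine:.2f}%")      # ~0.62%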

  16. An absolute photometric system at 10 and 20 microns

    NASA Technical Reports Server (NTRS)

    Rieke, G. H.; Lebofsky, M. J.; Low, F. J.

    1985-01-01

    Two new direct calibrations at 10 and 20 microns are presented in which terrestrial flux standards are referred to infrared standard stars. These measurements agree well with, and are more accurate than, previous direct calibrations. As a result, the absolute calibrations at 10 and 20 microns have now been determined with accuracies of 3 and 8 percent, respectively. A variety of absolute calibrations based on extrapolation of stellar spectra from the visible to 10 microns are reviewed. Current atmospheric models of A-type stars underestimate their fluxes by about 10 percent at 10 microns, whereas models of solar-type stars agree well with the direct calibrations. The calibration at 20 microns can probably be determined to about 5 percent by extrapolation from the more accurate result at 10 microns. The photometric system at 10 and 20 microns is updated to reflect the new absolute calibration, to base its zero point directly on the colors of A0 stars, and to improve the accuracy in the comparison of the standard stars.
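
    A quick way to relate the quoted percentage errors to magnitudes is the standard flux-magnitude relation. For instance, the roughly 10 percent model underestimate noted above corresponds to an offset of about 0.1 mag:

      $$m = -2.5\,\log_{10}(F/F_0), \qquad \Delta m = 2.5\,\log_{10}(1.10) \approx 0.10~\mathrm{mag}.$$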

  17. DMRG study of the Kagome Antiferromagnetic Heisenberg Model

    NASA Astrophysics Data System (ADS)

    Yan, Simeng; White, Steven

    2010-03-01

    We have used DMRG to study the S=1/2 Heisenberg model on the kagome lattice, using cylindrical boundary conditions and large clusters. We have focused on the spin gap and on the presence or absence of the valence-bond-crystal (VBC) order with a 36-site unit cell studied by Marston and Zeng, Singh and Huse, and others. These are probably the most accurate results for large clusters to date. Our extrapolated results find a finite spin gap of about 0.05 J. To determine whether VBC order occurs, we calculated the ground states of a variety of clusters, some of which accommodate the 36-site VBC order and some of which do not. For narrower cylinders (width < 12), the VBC patterns vanish as the number of kept states increases. For wider systems, we do observe VBC ground states, but it is not always clear that the calculations have converged. The extrapolated energies of the two types of states are very close, within about 1%.
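
    A common device behind extrapolated DMRG energies (offered here as a generic sketch, not the authors' actual procedure) is a linear extrapolation in the truncation error to the zero-truncation limit:

      # Generic DMRG energy extrapolation in the truncation error.
      # Energies and truncation errors below are fabricated for illustration.
      import numpy as np

      trunc_err = np.array([3.0e-5, 1.0e-5, 4.0e-6, 1.5e-6])
      energy = np.array([-0.43790, -0.43820, -0.43835, -0.43842])  # per site

      # Fit E(eps) ~ E0 + c*eps and read off the eps -> 0 intercept
      c, e0 = np.polyfit(trunc_err, energy, 1)
      print(f"extrapolated energy per site (eps -> 0): {e0:.5f}")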

  18. Interaction Between Domperidone and Ketoconazole: Toward Prediction of Consequent QTc Prolongation Using Purely In Vitro Information

    PubMed Central

    Mishra, H; Polak, S; Jamei, M; Rostami-Hodjegan, A

    2014-01-01

    We aimed to investigate the application of combined mechanistic pharmacokinetic (PK) and pharmacodynamic (PD) modeling and simulation in predicting the domperidone (DOM)-triggered pseudo-electrocardiogram changes in the presence of a CYP3A inhibitor, ketoconazole (KETO), using in vitro-in vivo extrapolation. In vitro metabolic and inhibitory data were incorporated into physiologically based pharmacokinetic (PBPK) models within Simcyp to simulate the time course of plasma DOM and KETO concentrations when each drug was administered alone or in combination (DOM+KETO). Simulated DOM concentrations in plasma were then used to predict changes in gender-specific QTcF (Fridericia-corrected) intervals within the Cardiac Safety Simulator platform, taking into consideration DOM-, KETO-, and DOM+KETO-triggered inhibition of multiple ionic currents in the population. The combination of in vitro-in vivo extrapolation, PBPK, and systems pharmacology of electric currents in the heart was able to predict the direction and magnitude of PK and PD changes under coadministration of the two drugs, although some disparities were detected. PMID:25116274
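
    The study uses full dynamic PBPK simulation, but the direction of such an interaction can be anticipated with the much simpler static mechanistic model for a competitive inhibitor of a single enzyme. The fraction metabolized and inhibition constant below are hypothetical placeholders, not the study's parameters:

      # Static mechanistic estimate of a competitive CYP3A drug-drug
      # interaction (a simplification; the paper uses dynamic PBPK models).
      fm_cyp3a = 0.8    # fraction of victim clearance via CYP3A (hypothetical)
      i_unbound = 1.0   # unbound inhibitor concentration, uM (hypothetical)
      ki = 0.05         # in vitro inhibition constant, uM (hypothetical)

      # AUC ratio = 1 / (fm/(1 + I/Ki) + (1 - fm))
      auc_ratio = 1.0 / (fm_cyp3a / (1.0 + i_unbound / ki) + (1.0 - fm_cyp3a))
      print(f"predicted AUC ratio (inhibited/control): {auc_ratio:.1f}")  # ~4.2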

  19. TEAM-HF Cost-Effectiveness Model: A Web-Based Program Designed to Evaluate the Cost-Effectiveness of Disease Management Programs in Heart Failure

    PubMed Central

    Reed, Shelby D.; Neilson, Matthew P.; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H.; Polsky, Daniel E.; Graham, Felicia L.; Bowers, Margaret T.; Paul, Sara C.; Granger, Bradi B.; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara; Levy, Wayne C.

    2015-01-01

    Background: Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. Methods: We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics, use of evidence-based medications, and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model (SHFM). Projections of resource use and quality of life are modeled using relationships with time-varying SHFM scores. The model can be used to evaluate parallel-group and single-cohort designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. Results: The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. Conclusion: The TEAM-HF Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. PMID:26542504
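
    The core economic output of such a simulation is the incremental cost-effectiveness ratio (ICER) over the paired virtual cohorts. Below is a minimal sketch with fabricated costs and quality-adjusted life-years (QALYs), not values from the TEAM-HF model itself:

      # ICER over simulated cohort pairs; all inputs are fabricated.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000  # pairs of virtual cohorts, as in the abstract

      cost_mgmt = rng.normal(48_000, 6_000, n)    # disease management program
      cost_usual = rng.normal(44_000, 6_000, n)   # usual care
      qaly_mgmt = rng.normal(4.1, 0.5, n)
      qaly_usual = rng.normal(3.9, 0.5, n)

      icer = (cost_mgmt.mean() - cost_usual.mean()) / (qaly_mgmt.mean() - qaly_usual.mean())
      print(f"ICER: ${icer:,.0f} per QALY gained")  # ~$20,000/QALY here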

  20. Truncated Calogero-Sutherland models on a circle

    NASA Astrophysics Data System (ADS)

    Tummuru, Tarun R.; Jain, Sudhir R.; Khare, Avinash

    2017-12-01

    We investigate a quantum many-body system of particles moving on a circle and subject to two-body and three-body potentials. This class of models, in which the range of interaction r can be set to a given number of neighbors, interpolates between a system with interactions up to next-to-nearest neighbors and the celebrated Calogero-Sutherland model. The exact ground-state energy and part of the excitation spectrum have been obtained.
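
    For orientation, the Calogero-Sutherland endpoint of this family is the inverse-sine-square pair interaction on a circle of circumference L; a commonly quoted form (in units with $\hbar = 2m = 1$) is

      $$H = -\sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2} + 2\beta(\beta-1)\,\frac{\pi^2}{L^2} \sum_{i<j} \frac{1}{\sin^2\!\big(\pi (x_i - x_j)/L\big)},$$

    with the truncated models of the abstract restricting the pair couplings to at most r neighbors and including three-body terms.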
