Sample records for calculating expectation values

  1. 42 CFR 403.253 - Calculation of benefits.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... calculated on a net level reserve basis, using appropriate values to account for lapse, mortality, morbidity, and interest, that on the valuation date represents— (A) The present value of expected incurred benefits over the loss ratio calculation period; less— (B) The present value of expected net premiums over...

  2. 42 CFR 403.253 - Calculation of benefits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... calculated on a net level reserve basis, using appropriate values to account for lapse, mortality, morbidity, and interest, that on the valuation date represents— (A) The present value of expected incurred benefits over the loss ratio calculation period; less— (B) The present value of expected net premiums over...

  3. 42 CFR 403.253 - Calculation of benefits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... calculated on a net level reserve basis, using appropriate values to account for lapse, mortality, morbidity, and interest, that on the valuation date represents— (A) The present value of expected incurred benefits over the loss ratio calculation period; less— (B) The present value of expected net premiums over...

  4. 42 CFR 403.253 - Calculation of benefits.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... calculated on a net level reserve basis, using appropriate values to account for lapse, mortality, morbidity, and interest, that on the valuation date represents— (A) The present value of expected incurred benefits over the loss ratio calculation period; less— (B) The present value of expected net premiums over...

  5. Calculation of Expectation Values of Operators in the Complex Scaling Method

    DOE PAGES

    Papadimitriou, G.

    2016-06-14

    The complex scaling method (CSM) provides a way to obtain resonance parameters of particle-unstable states by rotating the coordinates and momenta of the original Hamiltonian. It is convenient to use an L² integrable basis to resolve the complex rotated, or complex scaled, Hamiltonian H_θ, with θ being the angle of rotation in the complex energy plane. Within the CSM, resonance and scattering solutions have fall-off asymptotics. One consequence is that expectation values of operators in a resonance or scattering complex scaled solution are calculated by complex rotating the operators. In this work we explore applications of the CSM to calculations of expectation values of quantum mechanical operators, using the regularized back-rotation technique and hence calculating the expectation value with the unrotated operator. The test cases involve a schematic two-body Gaussian model and also applications using realistic interactions.
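
    As a brief, hedged sketch of the quantities involved (notation ours, not the paper's): the coordinates are rotated by θ, the rotated Hamiltonian is diagonalized, and expectation values in a resonance solution ψ_θ are ordinarily taken with the similarly rotated operator, using the c-product (no complex conjugation of the bra):

    ```latex
    % Complex scaling by angle \theta and the rotated expectation value
    U(\theta):\ \mathbf{r} \mapsto \mathbf{r}\,e^{i\theta},
    \qquad H_\theta = U(\theta)\, H\, U(\theta)^{-1},
    \qquad
    \langle \hat{O} \rangle \;=\;
    \frac{\bigl(\psi_\theta \,\big|\, U(\theta)\,\hat{O}\,U(\theta)^{-1} \,\big|\, \psi_\theta\bigr)}
         {\bigl(\psi_\theta \,\big|\, \psi_\theta\bigr)} .
    ```

    The back-rotation technique mentioned in the abstract aims to evaluate the same quantity with the unrotated operator, which is what requires regularization.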

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Özdemir, Semra Bayat; Demiralp, Metin

    The determination of energy states is a highly studied issue in quantum mechanics. Energy states can be observed based on the dynamics of expectation values, but the conditions and calculations vary depending on the system under consideration. In this work, a symmetric exponential anharmonic oscillator is considered and a recursive approximation method is developed to find its ground energy state. The use of majorant values facilitates the approximate calculation of expectation values.

  7. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
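
    For context, the standard two-arm calculation this kind of survey replicates looks like the following sketch (values are illustrative, not from the study):

    ```python
    # Hedged sketch: participants per arm for a two-arm, parallel-group,
    # superiority RCT with a continuous outcome (two-sided test).
    import math
    from scipy.stats import norm

    def n_per_arm(delta, sd, alpha=0.05, power=0.80):
        """Sample size per arm to detect mean difference `delta`
        with common standard deviation `sd`."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return math.ceil(2 * (sd * (z_a + z_b) / delta) ** 2)

    # Expecting a 10-point difference with SD 25 -> 99/arm; if the true
    # effect is 10% smaller (9 points), the requirement rises to 122/arm,
    # illustrating how optimistic assumptions produce underpowered trials.
    print(n_per_arm(10, 25), n_per_arm(9, 25))
    ```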

  8. Adjusting Estimates of the Expected Value of Information for Implementation: Theoretical Framework and Practical Application.

    PubMed

    Andronis, Lazaros; Barton, Pelham M

    2016-04-01

    Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. The objective of this work is to present a method of calculating the expected value of further research that accounts for the reality of improved implementation. This work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculating the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Calculations of implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In the particular case study, the population values for EVSI and IA-EVSI were £25 million and £8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the lower the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.
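
    A minimal sketch of the adjustment's arithmetic, with all numbers and the uptake curve invented for illustration (this is not the authors' model):

    ```python
    # Population EVSI assumes instant, full uptake; IA-EVSI scales each
    # year's value by the fraction of practice actually implementing the
    # finding. All figures below are hypothetical placeholders.
    def population_value(evsi_pp, incidence, years, r=0.035, uptake=lambda t: 1.0):
        return sum(evsi_pp * incidence * uptake(t) / (1 + r) ** t
                   for t in range(years))

    evsi_pp, incidence, horizon = 100.0, 10_000, 10
    evsi = population_value(evsi_pp, incidence, horizon)  # perfect uptake
    ia_evsi = population_value(evsi_pp, incidence, horizon,
                               uptake=lambda t: min(1.0, 0.1 * (t + 1)))
    print(round(evsi), round(ia_evsi))  # IA-EVSI is roughly half of EVSI here
    ```

    Under this slow linear ramp the implementation-adjusted value falls well below the unadjusted population EVSI, mirroring the £25 million vs £8 million gap reported above.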

  9. Stock price prediction using geometric Brownian motion

    NASA Astrophysics Data System (ADS)

    Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM

    2018-03-01

    Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before the prediction step, the expected stock price formulation and the 95% confidence level are determined. In stock price prediction using the geometric Brownian motion model, the algorithm starts by calculating the returns, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price, and calculating the 95% confidence level. Based on the research, the output analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, as evidenced by a forecast MAPE value ≤ 20%.
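
    A hedged sketch of that recipe on synthetic prices (not the authors' data):

    ```python
    # Estimate drift and volatility from log-returns, forecast with GBM,
    # and build a 95% band from the lognormal quantiles.
    import numpy as np

    prices = np.array([100, 101.5, 100.8, 102.3, 103.0, 104.2, 103.5, 105.1])
    returns = np.diff(np.log(prices))        # log-returns
    sigma = returns.std(ddof=1)              # volatility estimate
    mu = returns.mean() + 0.5 * sigma**2     # GBM drift: mean log-return = mu - sigma^2/2

    t = np.arange(1, 6)                      # forecast horizon (steps ahead)
    s0 = prices[-1]
    expected = s0 * np.exp(mu * t)           # expected price E[S_t]
    lower = s0 * np.exp((mu - 0.5 * sigma**2) * t - 1.96 * sigma * np.sqrt(t))
    upper = s0 * np.exp((mu - 0.5 * sigma**2) * t + 1.96 * sigma * np.sqrt(t))

    def mape(actual, forecast):
        """Score the forecast once realized prices are available (percent)."""
        return np.mean(np.abs((actual - forecast) / actual)) * 100

    print(expected.round(2), lower.round(2), upper.round(2))
    ```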

  10. Mathematical modelling of risk reduction in reinsurance

    NASA Astrophysics Data System (ADS)

    Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.

    2017-01-01

    The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal ratio between the expected value of return and the risk of yield values below a certain level. The uncertainty in the return values is conditioned by use of expert evaluations and preliminary calculations, which result in expected return values and the corresponding risk levels. The proposed method allows for implementation of computationally simple schemes and algorithms for numerical calculation of the numerical structure of the efficient portfolios of reinsurance contracts of a given insurance company.

  11. Linear canonical transformations of coherent and squeezed states in the Wigner phase space. II - Quantitative analysis

    NASA Technical Reports Server (NTRS)

    Han, D.; Kim, Y. S.; Noz, Marilyn E.

    1989-01-01

    It is possible to calculate expectation values and transition probabilities from the Wigner phase-space distribution function. Based on the canonical transformation properties of the Wigner function, an algorithm is developed for calculating these quantities in quantum optics for coherent and squeezed states. It is shown that the expectation value of a dynamical variable can be written in terms of the vacuum expectation value of the canonically transformed variable. Parallel-axis theorems are established for the photon number and its variance. It is also shown that the transition probability between two squeezed states can be reduced to that of the transition from one squeezed state to vacuum.
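
    For reference, the standard phase-space rule underlying such calculations (a textbook identity, not the paper's specific algorithm) expresses the expectation value as an overlap of the Wigner function with the operator's Weyl symbol:

    ```latex
    % Expectation value of \hat{A} from the Wigner function W(q,p)
    % and the Weyl symbol A_W(q,p) of the operator:
    \langle \hat{A} \rangle \;=\; \iint A_W(q,p)\, W(q,p)\; dq\, dp ,
    \qquad \iint W(q,p)\; dq\, dp = 1 .
    ```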

  12. Incorporating molecular breeding values with variable call rates into genetic evaluations

    USDA-ARS's Scientific Manuscript database

    A partial genotype for an animal can result from panels with low call rates used to calculate a molecular breeding value. A molecular breeding value can still be calculated using a partial genotype by replacing the missing marker covariates with their mean value. This approach is expected to chang...

  13. A calculation for radial expectation values of helium like actinide ions (Z=89-93)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ürer, G., E-mail: gurer@sakarya.edu.tr; Arslan, M., E-mail: murat.arslan4@ogr.sakarya.edu.tr; Balkaya, E., E-mail: eda.balkaya@ogr.sakarya.edu.tr

    2016-03-25

    Radial expectation values for helium-like actinides (Z_Ac = 89, Z_Th = 90, Z_Pa = 91, Z_U = 92, and Z_Np = 93) are reported using the Multiconfiguration Hartree-Fock (MCHF) method within the framework of Breit-Pauli corrections. Calculations of atomic data such as energy levels, wavelengths, weighted oscillator strengths, and transition probabilities for allowed and forbidden transitions require these expectation values. The obtained results are compared with available works.

  14. Estimating investment returns from growing red pine.

    Treesearch

    Allen L. Lundgren

    1966-01-01

    This paper describes how to estimate present values of incomes and costs in growing red pine trees for sale as cordwood or sawtimber, and how to calculate expectation values and rates of return for a wide range of timber-growing conditions, using the tables of expectation value indexes and interest rate multipliers provided. It illustrates how to compare investment...

  15. Value of information analysis optimizing future trial design from a pilot study on catheter securement devices.

    PubMed

    Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A

    2014-12-01

    Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected value of additional information and the expected cost of the trial. The aim of this study was to apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit corresponding to 130% return-on-investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide lower net benefit with return-on-investment of 79%. Cost-effectiveness and value of information analyses were based on the data from a single pilot trial which might affect the accuracy of our uncertainty estimation. Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
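
    A toy sketch of the design-comparison logic (the EVSI and cost figures are invented placeholders, tuned only so the ROIs echo those reported above):

    ```python
    # Pick the trial design maximizing the expected net benefit of sampling,
    # ENBS = population EVSI - trial cost; ROI = ENBS / cost.
    designs = {  # name: (population EVSI, trial cost), hypothetical numbers
        "4 arms, 220/arm": (2_300_000, 1_000_000),
        "4 arms, 388/arm": (3_040_000, 1_700_000),
    }
    for name, (evsi, cost) in designs.items():
        enbs = evsi - cost
        print(f"{name}: ENBS={enbs:,}  ROI={enbs / cost:.0%}")
    # The smaller trial can win on ROI even though the larger one
    # gathers more information.
    ```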

  16. Sustainable breeding objectives and possible selection response: Finding the balance between economics and breeders' preferences.

    PubMed

    Fuerst-Waltl, Birgit; Fuerst, Christian; Obritzhauser, Walter; Egger-Danner, Christa

    2016-12-01

    To optimize breeding objectives of Fleckvieh and Brown Swiss cattle, economic values were re-estimated using updated prices, costs, and population parameters. Subsequently, the expected selection responses for the total merit index (TMI) were calculated using previous and newly derived economic values. The responses were compared for alternative scenarios that consider breeders' preferences. A dairy herd with milk production, bull fattening, and rearing of replacement stock was modeled. The economic value of a trait was derived by calculating the difference in herd profit before and after genetic improvement. Economic values for each trait were derived while keeping all other traits constant. The traits considered were dairy, beef, and fitness traits, the latter including direct health traits. The calculation of the TMI and the expected selection responses was done using selection index methodology with estimated breeding values instead of phenotypic deviations. For the scenario representing the situation up to 2016, all traits included in the TMI were considered with their respective economic values before the update. Selection response was also calculated for newly derived economic values and some alternative scenarios, including the new trait vitality index (subindex comprising stillbirth and rearing losses). For Fleckvieh, the relative economic value for the trait groups milk, beef, and fitness were 38, 16, and 46%, respectively, up to 2016, and 39, 13, and 48%, respectively, for the newly derived economic values. Approximately the same selection response may be expected for the milk trait group, whereas the new weightings resulted in a substantially decreased response in beef traits. Within the fitness block, all traits, with the exception of fertility, showed a positive selection response. For Brown Swiss, the relative economic values for the main trait groups milk, beef, and fitness were 48, 5, and 47% before 2016, respectively, whereas for the newly derived scenario they were 40, 14, and 39%. For both Brown Swiss and Fleckvieh, the fertility complex was expected to further deteriorate, whereas all other expected selection responses for fitness traits were positive. Several additional and alternative scenarios were calculated as a basis for discussion with breeders. A decision was made to implement TMI with relative economic values for milk, beef, and fitness with 38, 18, and 44% for Fleckvieh and 50, 5, and 45% for Brown Swiss, respectively. In both breeds, no positive expected selection response was predicted for fertility, although this trait complex received a markedly higher weight than that derived economically. An even higher weight for fertility could not be agreed on due to the effect on selection response of other traits. Hence, breeders decided to direct more attention toward the preselection of bulls with regard to fertility. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  17. Model for economic evaluation of high energy gas fracturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engi, D.

    1984-05-01

    The HEGF/NPV model has been developed and adapted for interactive microcomputer calculations of the economic consequences of reservoir stimulation by high energy gas fracturing (HEGF) in naturally fractured formations. This model makes use of three individual models: a model of the stimulated reservoir, a model of the gas flow in this reservoir, and a model of the discounted expected net cash flow (net present value, or NPV) associated with the enhanced gas production. Nominal values of the input parameters, based on observed data and reasonable estimates, are used to calculate the initial expected increase in the average daily rate of production resulting from the Meigs County HEGF stimulation experiment. Agreement with the observed initial increase in rate is good. On the basis of this calculation, production from the Meigs County Well is not expected to be profitable, but the HEGF/NPV model probably provides conservative results. Furthermore, analyses of the sensitivity of the expected NPV to variations in the values of certain reservoir parameters suggest that the use of HEGF stimulation in somewhat more favorable formations is potentially profitable. 6 references, 4 figures, 3 tables.

  18. Why Contextual Preference Reversals Maximize Expected Value

    PubMed Central

    2016-01-01

    Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391

  19. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, Herschel B.; Einerson, Carolyn J.; Watkins, Arthur D.

    1989-01-01

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections.
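
    A hedged, self-contained sketch of the loop's structure (the current model, gains, and signs below are invented stand-ins, not the patent's algorithm):

    ```python
    # Calculate setpoints and an expected current, apply them, measure the
    # actual current, and use the comparison to correct speed and feed rate.
    def expected_current(speed, feed):
        return 120.0 + 0.80 * feed - 0.3 * speed   # algorithmic model (invented)

    def measured_current(speed, feed):
        return 120.0 + 0.85 * feed - 0.3 * speed   # "real" process differs slightly

    speed, feed = 30.0, 150.0          # weld speed, filler wire feed rate
    for step in range(5):
        error = measured_current(speed, feed) - expected_current(speed, feed)
        feed -= 10.0 * error           # correct filler wire feed rate
        speed += 2.0 * error           # correct weld speed (illustrative gain)
        print(f"step {step}: error={error:+.2f}, feed={feed:.1f}, speed={speed:.1f}")
    # The error shrinks each iteration as the corrections take effect.
    ```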

  20. Semiempirical and DFT Investigations of the Dissociation of Alkyl Halides

    ERIC Educational Resources Information Center

    Waas, Jack R.

    2006-01-01

    Enthalpy changes corresponding to the gas phase heats of dissociation of 12 organic halides were calculated using two semiempirical methods, the Hartree-Fock method, and two DFT methods. These calculated values were compared to experimental values where possible. All five methods agreed generally with the expected empirically known trends in the…

  1. The FASB explores accounting for future cash flows.

    PubMed

    Luecke, R W; Meeting, D T

    2001-03-01

    The FASB's Statement of Financial Accounting Concepts No. 7, Using Cash Flow Information and Present Value in Accounting Measurements (Statement No. 7), presents the board's views regarding how cash-flow information and present values should be used in accounting for future cash flows when information on fair values is not available. Statement No. 7 presents new concepts regarding how an asset's present value should be calculated and when the interest method of allocation should be used. The FASB proposes a present-value method that takes into account the degree of uncertainty associated with future cash flows among different assets and liabilities. The FASB also suggests that rather than use estimated cash flows (in which a single set of cash flows and a single interest rate is used to reflect the risk associated with an asset or liability), accountants should use expected cash flows (in which all expectations about possible cash flows are used) in calculating present values.

  2. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, H.B.; Einerson, C.J.; Watkins, A.D.

    1987-08-10

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections. 3 figs., 1 tab.

  3. Economics in "Global Health 2035": a sensitivity analysis of the value of a life year estimates.

    PubMed

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-06-01

    In "Global health 2035: a world converging within a generation," The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs) assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach.

  4. Pseudospectral calculation of helium wave functions, expectation values, and oscillator strength

    NASA Astrophysics Data System (ADS)

    Grabowski, Paul E.; Chernoff, David F.

    2011-10-01

    We show that the pseudospectral method is a powerful tool for finding precise solutions of Schrödinger’s equation for two-electron atoms with general angular momentum. Realizing the method’s full promise for atomic calculations requires special handling of singularities due to two-particle Coulomb interactions. We give a prescription for choosing coordinates and subdomains whose efficacy we illustrate by solving several challenging problems. One test centers on the determination of the nonrelativistic electric dipole oscillator strength for the helium 1¹S→2¹P transition. The result achieved, 0.27616499(27), is comparable to the best in the literature. The formally equivalent length, velocity, and acceleration expressions for the oscillator strength all yield roughly the same accuracy. We also calculate a diverse set of helium ground-state expectation values, reaching near state-of-the-art accuracy without the necessity of implementing any special-purpose numerics. These successes imply that general matrix elements are directly and reliably calculable with pseudospectral methods. A striking result is that all the relevant quantities tested in this paper—energy eigenvalues, S-state expectation values and a bound-bound dipole transition between the lowest energy S and P states—converge exponentially with increasing resolution and at roughly the same rate. Each individual calculation samples and weights the configuration space wave function uniquely but all behave in a qualitatively similar manner. These results suggest that the method has great promise for similarly accurate treatment of few-particle systems.

  5. Detection and quantification system for monitoring instruments

    DOEpatents

    Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA

    2008-08-12

    A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
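
    A minimal runnable sketch of that recipe (the window contents and threshold factor are invented):

    ```python
    # Baseline + k * sample deviation sets the alarm level.
    import statistics

    def alarm(signal_history, new_value, k=3.0):
        """True if `new_value` falls outside baseline +/- k * deviation."""
        baseline = statistics.mean(signal_history)    # expected baseline value
        deviation = statistics.stdev(signal_history)  # measure of noise/variation
        allowable = k * deviation                     # allowable deviation
        return abs(new_value - baseline) > allowable

    recent = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]       # recent signal results
    print(alarm(recent, 10.4), alarm(recent, 12.5))   # False, True
    ```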

  6. Uncertainty, robustness, and the value of information in managing an expanding Arctic goose population

    USGS Publications Warehouse

    Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.

    2014-01-01

    We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be as nearly effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. 
Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management. We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
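
    A toy illustration of the value-of-information arithmetic used above, with an invented model-by-strategy payoff matrix (not the study's models or values):

    ```python
    # Expected value of eliminating model uncertainty:
    #   EVPI = E_m[max_a V(m, a)] - max_a E_m[V(m, a)]
    import numpy as np

    V = np.array([[10.0, 8.0, 6.0],    # rows: models, cols: harvest strategies
                  [ 7.0, 9.0, 8.0],
                  [ 4.0, 6.0, 8.5]])
    w = np.array([1/3, 1/3, 1/3])      # model weights (credibilities)

    best_known   = w @ V.max(axis=1)   # value if the true model were known
    best_average = (w @ V).max()       # best single strategy under uncertainty
    evpi = best_known - best_average   # value of resolving model uncertainty
    maximin = V.min(axis=0).argmax()   # robust (maxi-min) strategy index
    print(f"EVPI={evpi:.2f}, maxi-min strategy={maximin}")
    ```

    A small EVPI, as reported for the pink-footed goose models, means a robust strategy gives up little relative to an adaptive one.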

  7. Economics in “Global Health 2035”: a sensitivity analysis of the value of a life year estimates

    PubMed Central

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-01-01

    Background In “Global health 2035: a world converging within a generation,” The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs) assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. Methods The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. Findings We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Conclusion Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach. PMID:28400950

  8. Expected net present value of sample information: from burden to investment.

    PubMed

    Hall, Peter S; Edlin, Richard; Kharroubi, Samer; Gregory, Walter; McCabe, Christopher

    2012-01-01

    The Expected Value of Information Framework has been proposed as a method for identifying when health care technologies should be immediately reimbursed and when any reimbursement should be withheld while awaiting more evidence. This framework assesses the value of obtaining additional evidence to inform a current reimbursement decision. This represents the burden of not having the additional evidence at the time of the decision. However, when deciding whether to reimburse now or await more evidence, decision makers need to know the value of investing in more research to inform a future decision. Assessing this value requires consideration of research costs, research time, and what happens to patients while the research is undertaken and after completion. The investigators describe a development of the calculation of the expected value of sample information that assesses the value of investing in further research, including an only-in-research strategy and an only-with-research strategy.

  9. Wind velocity-change (gust rise) criteria for wind turbine design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cliff, W.C.; Fichtl, G.H.

    1978-07-01

    A closed-form equation is derived for the root mean square (rms) value of the velocity change (gust rise) that occurs over the swept area of wind turbine rotor systems, along with an equation for the rms value of the velocity change that occurs at a single point in space. These formulas confirm the intuitive assumption that a large system will encounter a less severe environment than a small system when both are placed at the same location. Assuming a normal probability density function for the velocity differences, an equation is given for calculating the expected number of velocity differences that will occur in 1 hr and will be larger than an arbitrary value. A formula is presented that gives the expected number of velocity differences larger than an arbitrary value that will be encountered during the design life of a wind turbine. In addition, a method for calculating the largest velocity difference expected during the life of a turbine and a formula for estimating the risk of exceeding a given velocity difference during the life of the structure are given. The equations presented are based upon general atmospheric boundary-layer conditions and do not include information regarding events such as tornados, hurricanes, etc.
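
    A small sketch of the exceedance-count idea under the normality assumption (sample count and rms value are invented):

    ```python
    # With normally distributed velocity differences of rms `sigma`, the
    # expected number larger in magnitude than v_c is N * P(|dv| > v_c).
    from scipy.stats import norm

    def expected_exceedances(n_events, sigma, v_c):
        return n_events * 2.0 * (1.0 - norm.cdf(v_c / sigma))

    # e.g. 3600 one-second velocity differences in an hour, rms 1.5 m/s:
    print(expected_exceedances(3600, 1.5, 4.5))   # ~9.7 events above 4.5 m/s
    ```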

  10. Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.

    PubMed

    Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu

    2017-09-01

    Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of correlations across folds. The other approach, Hold accuracy, predicts all phenotypes in all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
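
    A minimal sketch of the two estimators on signal-free data (the downward bias described above arises in real cross-validation, where predictions depend on the training folds; this only shows how the two formulas differ mechanically):

    ```python
    # Instant: average per-fold correlations; Hold: pool all predictions first.
    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.normal(size=200)
    y_hat = rng.normal(size=200)             # predictions carry no signal
    folds = np.array_split(np.arange(200), 10)

    instant = np.mean([np.corrcoef(y[f], y_hat[f])[0, 1] for f in folds])
    hold = np.corrcoef(y, y_hat)[0, 1]
    print(f"Instant={instant:+.3f}, Hold={hold:+.3f}")
    ```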

  11. Tandem mass spectrometry of human tryptic blood peptides calculated by a statistical algorithm and captured by a relational database with exploration by a general statistical analysis system.

    PubMed

    Bowden, Peter; Beavis, Ron; Marshall, John

    2009-11-02

    A goodness of fit test may be used to assign tandem mass spectra of peptides to amino acid sequences and to directly calculate the expected probability of mis-identification. The product of the peptide expectation values directly yields the probability that the parent protein has been mis-identified. A relational database could capture the mass spectral data, the best fit results, and permit subsequent calculations by a general statistical analysis system. The many files of the HUPO blood protein data correlated by X!TANDEM against the proteins of ENSEMBL were collected into a relational database. A redundant set of 247,077 proteins and peptides was correlated by X!TANDEM, and that was collapsed to a set of 34,956 peptides from 13,379 distinct proteins. About 6875 distinct proteins were represented by only a single distinct peptide, 2866 proteins showed 2 distinct peptides, and 3454 proteins showed at least three distinct peptides by X!TANDEM. More than 99% of the peptides were associated with proteins that had cumulative expectation values, i.e. probability of false positive identification, of one in one hundred or less.
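
    The probability arithmetic in the second sentence is simple to state in code (the peptide e-values below are invented examples):

    ```python
    # Product of peptide expectation values -> protein mis-identification
    # probability, often reported on a log10 scale.
    import math

    peptide_expectations = [1e-3, 5e-2, 2e-4]    # per-peptide false-match chance
    protein_p = math.prod(peptide_expectations)  # chance protein is mis-identified
    print(f"P(false protein) = {protein_p:.1e}")     # 1.0e-08
    print(f"log10 = {math.log10(protein_p):.1f}")    # -8.0
    ```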

  12. Implementing Generalized Additive Models to Estimate the Expected Value of Sample Information in a Microsimulation Model: Results of Three Case Studies.

    PubMed

    Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A

    2018-02-01

    The expected value of sample information (EVSI) can help prioritize research but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM (a flexible regression model that estimates the functional form as part of the model-fitting process) to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
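
    A hedged sketch of the regression idea, with a cubic polynomial standing in for the GAM and simulated data in place of CEPAC output:

    ```python
    # Regression-based EVPPI for a two-option comparison:
    #   EVPPI = E_theta[max(0, E[INB | theta])] - max(0, E[INB])
    import numpy as np

    rng = np.random.default_rng(1)
    theta = rng.normal(size=5000)                          # parameter of interest
    inb = 500 * theta + rng.normal(scale=2000, size=5000)  # incremental net benefit

    g = np.polynomial.Polynomial.fit(theta, inb, deg=3)    # flexible regression
    fitted = g(theta)                                      # estimate of E[INB | theta]

    evppi = np.mean(np.maximum(fitted, 0)) - max(np.mean(inb), 0)
    print(f"EVPPI ~ {evppi:.0f}")   # roughly 500 * E[theta+] ~ 200 here
    ```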

  13. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
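
    A hedged NumPy sketch of the unit as described (shapes and the mixing probability are invented; this is not the authors' implementation):

    ```python
    # Blend the max of k feature mappings with their softmax-weighted
    # expected value via a per-element Bernoulli draw.
    import numpy as np

    def mixout(features, p_max=0.5, rng=np.random.default_rng(0)):
        """features: array of shape (k, ...) from k parallel convolutions."""
        weights = np.exp(features) / np.exp(features).sum(axis=0)  # exponential probs
        expected = (weights * features).sum(axis=0)                # expected value
        maximum = features.max(axis=0)                             # maxout output
        use_max = rng.random(expected.shape) < p_max               # Bernoulli mix
        return np.where(use_max, maximum, expected)

    k_maps = np.random.default_rng(1).normal(size=(4, 8, 8))  # 4 mappings, 8x8
    print(mixout(k_maps).shape)                               # (8, 8)
    ```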

  14. Chern-Simons expectation values and quantum horizons from loop quantum gravity and the Duflo map.

    PubMed

    Sahlmann, Hanno; Thiemann, Thomas

    2012-03-16

    We report on a new approach to the calculation of Chern-Simons theory expectation values, using the mathematical underpinnings of loop quantum gravity, as well as the Duflo map, a quantization map for functions on Lie algebras. These new developments can be used in the quantum theory for certain types of black hole horizons, and they may offer new insights for loop quantum gravity, Chern-Simons theory and the theory of quantum groups.

  15. Fluctuations in the inflationary universe

    NASA Astrophysics Data System (ADS)

    Hawking, S. W.; Moss, I. G.

    1983-08-01

    In the usual treatment of the inflationary universe, it is assumed that the expectation value of some component of the Higgs field develops a non-zero symmetry breaking value Φ₀. However, in the models normally considered, the expectation value of Φ will be zero at all times because Φ and -Φ are equally probable. To overcome this difficulty, we calculate the effective action as a function of ⟨Φ²⟩ rather than ⟨Φ⟩. This also solves the infra-red problem associated with a Coleman-Weinberg condition in de Sitter space. The expectation value of Φ² grows linearly with time at first and then as (t2 - t-1). The irregularities in the resulting universe are smaller than those predicted by previous authors, though in the case of the standard SU(5) GUT they are still bigger than the limit set by the microwave background.

  16. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses

    PubMed Central

    Faith, Daniel P.

    2015-01-01

    The phylogenetic diversity measure (‘PD’) measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
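
    A toy version of the expected-PD calculation on an invented three-taxon tree: each branch contributes its length times the probability that at least one descendant taxon survives.

    ```python
    # Expected PD = sum over branches of length * (1 - prod of extinction
    # probabilities of the taxa below the branch). All numbers invented.
    branches = [  # (length, extinction probabilities of taxa below the branch)
        (2.0, [0.1]),        # pendant branch to taxon A
        (2.0, [0.3]),        # pendant branch to taxon B
        (3.0, [0.6]),        # pendant branch to taxon C
        (1.5, [0.1, 0.3]),   # internal branch above A and B
    ]

    def expected_pd(branches):
        total = 0.0
        for length, ext in branches:
            p_all_lost = 1.0
            for p in ext:
                p_all_lost *= p
            total += length * (1.0 - p_all_lost)
        return total

    print(expected_pd(branches))   # ~5.86
    ```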

  17. The Expected Net Present Value of Developing Weight Management Drugs in the Context of Drug Safety Litigation.

    PubMed

    Chawla, Anita; Carls, Ginger; Deng, Edmund; Tuttle, Edward

    2015-07-01

    Following withdrawals, failures, and significant litigation settlements, drug product launches in the anti-obesity category slowed despite a large and growing unmet need. Litigation concerns, a more risk-averse regulatory policy, and the difficulty of developing a product with a compelling risk-benefit profile in this category may have limited innovators' expected return on investment and restricted investment in this therapeutic area. The objective of the study was to estimate perceived manufacturer risk associated with product safety litigation and increased development costs vs. revenue expectations on anticipated return on investment and to determine which scenarios might change a manufacturer's investment decision. Expected net present value of a weight-management drug entering pre-clinical trials was calculated for a range of scenarios representing evolving expectations of development costs, revenue, and litigation risk over the past 25 years. These three factors were based on published estimates, historical data, and analogs from other therapeutic areas. The main driver in expected net present value calculations is expected revenue, particularly if one assumes that litigation risk and demand are positively correlated. Changes in development costs associated with increased regulatory concern with potential safety issues for the past 25 years likely did not impact investment decisions. Regulatory policy and litigation risk both played a role in anti-obesity drug development; however, product revenue-reflecting efficacy at acceptable levels of safety-was by far the most important factor. To date, relatively modest sales associated with recent product introductions suggest that developing a product that is sufficiently efficacious with an acceptable level of safety continues to be the primary challenge in this market.
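
    A minimal sketch of the expected-net-present-value arithmetic that drives such decisions (phases, probabilities, costs, revenue and discount rate are all invented):

    ```python
    # eNPV = expected, discounted revenue minus expected, discounted costs,
    # where each stage's cost is weighted by the probability of reaching it.
    def enpv(stages, revenue, launch_year, r=0.11):
        """stages: list of (year, cost, probability of advancing past stage)."""
        value, p_reach = 0.0, 1.0
        for year, cost, p_success in stages:
            value -= p_reach * cost / (1 + r) ** year  # expected discounted cost
            p_reach *= p_success
        value += p_reach * revenue / (1 + r) ** launch_year
        return value

    stages = [(0, 5.0, 0.6), (2, 20.0, 0.5), (4, 60.0, 0.6)]  # $M, illustrative
    print(f"eNPV = {enpv(stages, revenue=900.0, launch_year=7):.1f} $M")
    ```

    In this framing, litigation risk enters as a haircut on expected revenue, which is why the abstract finds revenue expectations to be the dominant driver.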

  18. Science Notes: Dilution of a Weak Acid

    ERIC Educational Resources Information Center

    Talbot, Christopher; Wai, Chooi Khee

    2014-01-01

    This "Science note" arose out of practical work involving the dilution of ethanoic acid, the measurement of the pH of the diluted solutions and calculation of the acid dissociation constant, K[subscript a], for each diluted solution. The students expected the calculated values of K[subscript a] to be constant but they found that the…

  19. Neural computing thermal comfort index PMV for the indoor environment intelligent control system

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Chen, Yifei

    2013-03-01

    Providing indoor thermal comfort and saving energy are the two main goals of an indoor environmental control system. An intelligent comfort control system combining intelligent control and minimum-power control strategies for the indoor environment is presented in this paper. In the system, the predicted mean vote (PMV) is used as the control goal, and with corrected formulas for PMV it is optimized to improve the indoor comfort level by considering six comfort-related variables. In addition, an RBF neural network based on a genetic algorithm is designed to calculate PMV, both for better performance and to better handle the nonlinear character of the PMV calculation. Formulas are presented for calculating the expected output values from the input samples, and the RBF network model is trained on the input samples and the expected output values. The simulation results show that the intelligent calculation method is valid. Moreover, the method achieves high precision, fast dynamic response and good system performance, and it can be used in practice within the required calculation error.

  20. Life expectancy in bipolar disorder.

    PubMed

    Kessing, Lars Vedel; Vradi, Eleni; Andersen, Per Kragh

    2015-08-01

    Life expectancy in patients with bipolar disorder has been reported to be decreased by 11 to 20 years. These calculations are based on data for individuals at the age of 15 years. However, this may be misleading for patients with bipolar disorder in general as most patients have a later onset of illness. The aim of the present study was to calculate the remaining life expectancy for patients of different ages with a diagnosis of bipolar disorder. Using nationwide registers of all inpatient and outpatient contacts to all psychiatric hospitals in Denmark from 1970 to 2012 we calculated remaining life expectancies for ages 15, 25, 35, …, 75 years among all individuals alive in year 2000. For the typical male or female patient aged 25 to 45 years, the remaining life expectancy was decreased by 12.0-8.7 years and 10.6-8.3 years, respectively. The ratio between remaining life expectancy in bipolar disorder and that of the general population decreased with age, indicating that patients with bipolar disorder start losing life-years during early and mid-adulthood. Life expectancy in bipolar disorder is decreased substantially, but less so than previously reported. Patients start losing life-years during early and mid-adulthood. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation and so can be expensive in models with a large computational cost.

  2. Measuring utilities of severe facial disfigurement and composite tissue allotransplantation of the face in patients with severe face and neck burns from the perspectives of the general public, medical experts and patients.

    PubMed

    Chuback, Jennifer; Yarascavitch, Blake; Yarascavitch, Alec; Kaur, Manraj Nirmal; Martin, Stuart; Thoma, Achilleas

    2015-11-01

    In an otherwise healthy patient with severe facial disfigurement secondary to burns, composite tissue allotransplantation (CTA) results in life-long immunosuppressive therapy and its associated risk. In this study, we assess the net gain of CTA of the face (in terms of utilities) from the perspectives of the patient, the general public and medical experts, in comparison to the risks. Using the standard gamble (SG) and time trade-off (TTO) techniques, utilities were obtained from members of the general public, patients with facial burns, and medical experts (n=25 for each group). The gain (or loss) in utility and quality-adjusted life years (QALYs) were estimated using face-to-face interviews. A sensitivity analysis using variable life expectancy was conducted. From the patient perspective, severe facial burn was associated with a health utility value of 0.53, and 27.1 QALYs as calculated by SG, and a health utility value of 0.57, and 28.9 QALYs as calculated by TTO. In comparison, CTA of the face was associated with a health utility value of 0.64, and 32.3 QALYs (or 18.2 QALYs per sensitivity analysis) as calculated by SG, and a health utility value of 0.67, and 34.1 QALYs (or 19.2 QALYs per sensitivity analysis) as calculated by TTO. However, a loss of 8.9 QALYs (by the SG method) to 9.5 QALYs (by the TTO method) was observed when the life expectancy was decreased in the sensitivity analysis. Similar results were obtained from the general public and medical expert perspectives. We found that severe facial disfigurement is associated with a significant reduction in health-related quality of life, and CTA has the potential to improve this. Further, we found that a trade-off exists between life expectancy and the gain in QALYs, i.e. if life expectancy following CTA of the face is reduced, the gain in QALYs is also diminished. This trade-off needs to be validated in future studies. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.

  3. Flood Risk Due to Hurricane Flooding

    NASA Astrophysics Data System (ADS)

    Olivera, Francisco; Hsu, Chih-Hung; Irish, Jennifer

    2015-04-01

    In this study, we evaluated the expected economic losses caused by hurricane inundation. We used surge response functions, which are physics-based dimensionless scaling laws that give surge elevation as a function of the hurricane's parameters (i.e., central pressure, radius, forward speed, approach angle and landfall location) at specified locations along the coast. These locations were close enough to avoid significant changes in surge elevations between consecutive points, and distant enough to minimize calculations. The probability of occurrence of a surge elevation value at a given location was estimated using a joint probability distribution of the hurricane parameters. The surge elevation, at the shoreline, was assumed to project horizontally inland within a polygon of influence. Individual parcel damage was calculated based on flood water depth and damage vs. depth curves available for different building types from the HAZUS computer application developed by the Federal Emergency Management Agency (FEMA). Parcel data, including property value and building type, were obtained from the county appraisal district offices. The expected economic losses were calculated as the sum of the products of the estimated parcel damages and their probability of occurrence for the different storms considered. Anticipated changes for future climate scenarios were considered by accounting for projected hurricane intensification, as indicated by sea surface temperature rise, and sea level rise, which modify the probability distribution of hurricane central pressure and change the baseline of the damage calculation, respectively. Maps of expected economic losses have been developed for Corpus Christi in Texas, Gulfport in Mississippi and Panama City in Florida. Specifically, for Port Aransas, in the Corpus Christi area, it was found that the expected economic losses were in the range of 1% to 4% of the property value for current climate conditions, of 1% to 8% for the 2030's and of 1% to 14% for the 2080's.
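
    A hedged sketch of the parcel-level expected-loss calculation (the depth-damage points and storm probabilities below are invented placeholders, not HAZUS or study values):

    ```python
    # Expected annual loss = sum over storms of damage fraction x probability
    # x property value, with damage interpolated from a depth-damage curve.
    import numpy as np

    depth_damage = [(0.0, 0.00), (0.5, 0.10), (1.0, 0.25), (2.0, 0.50), (3.0, 0.75)]
    depths, fractions = map(np.array, zip(*depth_damage))

    def damage_fraction(depth_m):
        return np.interp(depth_m, depths, fractions)   # curve lookup

    storms = [(0.8, 0.05), (1.6, 0.02), (2.8, 0.005)]  # (flood depth m, annual prob)
    property_value = 250_000
    expected_loss = sum(damage_fraction(d) * p * property_value for d, p in storms)
    print(f"expected annual loss = ${expected_loss:,.0f}")   # $5,250
    ```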

  4. Improving deep convolutional neural networks with mixed maxout units

    PubMed Central

    Liu, Fu-xian; Li, Long-yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737
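
    To make the computation concrete, here is a minimal NumPy sketch of a mixout-style unit as the abstract describes it: softmax ("exponential") probabilities over k candidate feature mappings give an expected value, and a Bernoulli gate balances that expectation against the plain maximum. The function name, gate probability, and shapes are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def mixout(z, p_max=0.5, rng=None):
        """Sketch of a mixout unit over k feature mappings stacked on axis 0."""
        rng = np.random.default_rng() if rng is None else rng
        # "exponential probabilities" of the k candidate feature mappings (softmax)
        e = np.exp(z - z.max(axis=0, keepdims=True))   # stabilized exponentials
        prob = e / e.sum(axis=0, keepdims=True)
        expected = (prob * z).sum(axis=0)              # expected value of the mappings
        maximal = z.max(axis=0)
        # Bernoulli gate balances the maximum against the expected value
        gate = rng.binomial(1, p_max, size=maximal.shape)
        return gate * maximal + (1 - gate) * expected

    # toy usage: 4 candidate mappings of a 2x3 feature map
    out = mixout(np.random.randn(4, 2, 3))
    ```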

  5. Vacuum polarization effects on flat branes due to a global monopole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezerra de Mello, E.R.

    2006-05-15

    In this paper we analyze the vacuum polarization effects associated with a massless scalar field in a higher-dimensional spacetime. Specifically we calculate the renormalized vacuum expectation value of the square of the field, ⟨φ²(x)⟩_Ren, induced by a global monopole in the 'braneworld' scenario. In this context the global monopole lives in an n=3-dimensional submanifold of the higher-dimensional (bulk) spacetime, and our universe is represented by a transverse flat (p-1)-dimensional brane. In order to develop this analysis we calculate the general Green function admitting that the scalar field propagates in the bulk. Also a general curvature coupling parameter between the field and the geometry is assumed. We explicitly show that the vacuum polarization effects depend crucially on the values attributed to p. We also investigate the general structure of the renormalized vacuum expectation value of the energy-momentum tensor, ⟨T_μν(x)⟩_Ren, for p=3.

  6. The expected value of possession in professional rugby league match-play.

    PubMed

    Kempton, Thomas; Kennedy, Nicholas; Coutts, Aaron J

    2016-01-01

    This study estimated the expected point value for starting possessions in different field locations during rugby league match-play and calculated the mean expected points for each subsequent play during the possession. It also examined the origin of tries scored according to the method of gaining possession. Play-by-play data were taken from all 768 regular-season National Rugby League (NRL) matches during 2010-2013. A probabilistic model estimated the expected point outcome based on the net difference in points scored by a team in possession in a given situation. An iterative method was used to approximate the value of each situation based on actual scoring outcomes. Possessions commencing close to the opposition's goal-line had the highest expected point equity, which decreased as the location of the possession moved towards the team's own goal-line. Possessions following an opposition error, penalty or goal-line dropout had the highest likelihood of a try being scored on the set subsequent to their occurrence. In contrast, possessions that follow an opposition completed set or a restart were least likely to result in a try. The expected point values framework from our model has applications for informing playing strategy and assessing individual and team performance in professional rugby league.
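
    A toy sketch of the kind of iterative value estimation the abstract describes (not the authors' model): each record pairs a starting-field-position state with the net points that eventually followed it, and repeated incremental updates pull each state's value toward its observed outcomes. The state encoding, learning rate, and data are invented for illustration.

    ```python
    import numpy as np

    def expected_points(records, n_states, n_iter=100, lr=0.1):
        """Iteratively approximate the expected-point value of each state."""
        v = np.zeros(n_states)
        for _ in range(n_iter):
            for state, net_points in records:
                # nudge the state value toward the observed net scoring outcome
                v[state] += lr * (net_points - v[state])
        return v

    # toy data: possessions starting near the opponent's line (state 9) tend to score
    records = [(9, 4), (9, 6), (0, -4), (5, 0), (9, 4), (0, 0)]
    print(expected_points(records, n_states=10))
    ```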

  7. Valuing Trial Designs from a Pharmaceutical Perspective Using Value-Based Pricing.

    PubMed

    Breeze, Penny; Brennan, Alan

    2015-11-01

    Our aim was to adapt the traditional framework for expected net benefit of sampling (ENBS) to be more compatible with drug development trials from the pharmaceutical perspective. We modify the traditional framework for conducting ENBS and assume that the price of the drug is conditional on the trial outcomes. We use a value-based pricing (VBP) criterion to determine price conditional on trial data using Bayesian updating of cost-effectiveness (CE) model parameters. We assume that there is a threshold price below which the company would not market the new intervention. We present a case study in which a phase III trial sample size and trial duration are varied. For each trial design, we sampled 10,000 trial outcomes and estimated VBP using a CE model. The expected commercial net benefit is calculated as the expected profits minus the trial costs. A clinical trial with shorter follow-up, and larger sample size, generated the greatest expected commercial net benefit. Increasing the duration of follow-up had a modest impact on profit forecasts. Expected net benefit of sampling can be adapted to value clinical trials in the pharmaceutical industry to optimise the expected commercial net benefit. However, the analyses can be very time consuming for complex CE models. © 2014 The Authors. Health Economics published by John Wiley & Sons Ltd.
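
    A heavily simplified Monte Carlo sketch of the expected-commercial-net-benefit idea: simulate trial outcomes for a candidate design, set a value-based price from each outcome (here simply the threshold times the estimated QALY gain, standing in for a full Bayesian update of a CE model), withhold launch below a price floor, and subtract the trial cost. Every number is invented; none comes from the paper's case study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    WTP = 20_000          # payer threshold per QALY (assumed)
    VOLUME = 50_000       # discounted patients treated over product life (assumed)
    PRICE_FLOOR = 500     # below this price the company would not market
    COST_PER_PATIENT = 5_000

    def expected_commercial_net_benefit(n_per_arm, true_gain=0.08, sd=1.0, n_sims=10_000):
        """Expected profit minus trial cost, averaged over simulated trial outcomes."""
        est = rng.normal(true_gain, sd / np.sqrt(n_per_arm), n_sims)  # trial estimate
        price = WTP * est                     # value-based price given the outcome
        profit = np.where(price > PRICE_FLOOR, price * VOLUME, 0.0)
        return profit.mean() - 2 * n_per_arm * COST_PER_PATIENT

    for n in (100, 500, 2000):
        print(n, round(expected_commercial_net_benefit(n)))
    ```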

  8. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    PubMed Central

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874

  9. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses.

    PubMed

    Faith, Daniel P

    2015-02-19

    The phylogenetic diversity measure ('PD') measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  10. Analytical probabilistic modeling of RBE-weighted dose for ion therapy.

    PubMed

    Wieser, H P; Hennig, P; Wahl, N; Bangert, M

    2017-11-10

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm, 2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.

  11. Analytical probabilistic modeling of RBE-weighted dose for ion therapy

    NASA Astrophysics Data System (ADS)

    Wieser, H. P.; Hennig, P.; Wahl, N.; Bangert, M.

    2017-12-01

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm, 2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.

  12. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), σ°, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value express it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of σ° are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  13. Probabilistic Plan Management

    DTIC Science & Technology

    2009-11-17

    ...set of chains, the step adds scheduled methods that have an a priori likelihood of a failure outcome (Lines 3-5). It identifies the max eul value of the... activity meeting its objective, as well as its expected contribution to the schedule. By explicitly calculating these values, PADS is able to summarize the... variables. One of the main difficulties of this model is convolving the probability density functions and value functions while solving the model; this...

  14. Network approach for decision making under risk—How do we choose among probabilistic options with the same expected value?

    PubMed Central

    Chen, Yi-Shin

    2018-01-01

    Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing ‘goal’ and ‘time’ factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight. PMID:29702665

  15. Network approach for decision making under risk-How do we choose among probabilistic options with the same expected value?

    PubMed

    Pan, Wei; Chen, Yi-Shin

    2018-01-01

    Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing 'goal' and 'time' factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight.

  16. Quantifying the Value of Perfect Information in Emergency Vaccination Campaigns.

    PubMed

    Bradbury, Naomi V; Probert, William J M; Shea, Katriona; Runge, Michael C; Fonnesbeck, Christopher J; Keeling, Matt J; Ferrari, Matthew J; Tildesley, Michael J

    2017-02-01

    Foot-and-mouth disease outbreaks in non-endemic countries can lead to large economic costs and livestock losses but the use of vaccination has been contentious, partly due to uncertainty about emergency FMD vaccination. Value of information methods can be applied to disease outbreak problems such as FMD in order to investigate the performance improvement from resolving uncertainties. Here we calculate the expected value of resolving uncertainty about vaccine efficacy, time delay to immunity after vaccination and daily vaccination capacity for a hypothetical FMD outbreak in the UK. If it were possible to resolve all uncertainty prior to the introduction of control, we could expect savings of £55 million in outbreak cost, 221,900 livestock culled and 4.3 days of outbreak duration. All vaccination strategies were found to be preferable to a culling only strategy. However, the optimal vaccination radius was found to be highly dependent upon vaccination capacity for all management objectives. We calculate that by resolving the uncertainty surrounding vaccination capacity we would expect to return over 85% of the above savings, regardless of management objective. It may be possible to resolve uncertainty about daily vaccination capacity before an outbreak, and this would enable decision makers to select the optimal control action via careful contingency planning.
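
    The underlying expected-value-of-perfect-information calculation can be stated in a few lines. A generic Python sketch with toy numbers (not the paper's outbreak model): rows are sampled values of the uncertain inputs, columns are candidate control actions, and entries are outbreak losses; EVPI is the gap between committing to one action now and choosing the best action per realisation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # toy loss table: 100,000 sampled scenarios x 4 control actions
    loss = rng.gamma(shape=2.0, scale=25.0, size=(100_000, 4))

    best_under_uncertainty = loss.mean(axis=0).min()  # commit to one action now
    best_with_information = loss.min(axis=1).mean()   # per-scenario optimum
    evpi = best_under_uncertainty - best_with_information
    print(f"EVPI = {evpi:.2f}")
    ```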

  17. Quantifying the Value of Perfect Information in Emergency Vaccination Campaigns

    PubMed Central

    Probert, William J. M.; Shea, Katriona; Fonnesbeck, Christopher J.; Ferrari, Matthew J.; Tildesley, Michael J.

    2017-01-01

    Foot-and-mouth disease outbreaks in non-endemic countries can lead to large economic costs and livestock losses but the use of vaccination has been contentious, partly due to uncertainty about emergency FMD vaccination. Value of information methods can be applied to disease outbreak problems such as FMD in order to investigate the performance improvement from resolving uncertainties. Here we calculate the expected value of resolving uncertainty about vaccine efficacy, time delay to immunity after vaccination and daily vaccination capacity for a hypothetical FMD outbreak in the UK. If it were possible to resolve all uncertainty prior to the introduction of control, we could expect savings of £55 million in outbreak cost, 221,900 livestock culled and 4.3 days of outbreak duration. All vaccination strategies were found to be preferable to a culling only strategy. However, the optimal vaccination radius was found to be highly dependent upon vaccination capacity for all management objectives. We calculate that by resolving the uncertainty surrounding vaccination capacity we would expect to return over 85% of the above savings, regardless of management objective. It may be possible to resolve uncertainty about daily vaccination capacity before an outbreak, and this would enable decision makers to select the optimal control action via careful contingency planning. PMID:28207777

  18. Cosmological perturbations in inflation and in de Sitter space

    NASA Astrophysics Data System (ADS)

    Pimentel, Guilherme Leite

    This thesis focuses on various aspects of inflationary fluctuations. First, we study gravitational wave fluctuations in de Sitter space. The isometries of the spacetime constrain the Wheeler-DeWitt wavefunctional of the universe to a few parameters, to cubic order in fluctuations. At cubic order, there are three independent terms in the wavefunctional. From the point of view of the bulk action, one term corresponds to Einstein gravity, and a new term comes from a cubic term in the curvature tensor. The third term is a pure phase and does not give rise to a new shape for expectation values of graviton fluctuations. These results can be seen as the leading order non-gaussian contributions in a slow-roll expansion for inflationary observables. We also use the wavefunctional approach to explain a universal consistency condition of n-point expectation values in single field inflation. This consistency condition relates a soft limit of an n-point expectation value to (n-1)-point expectation values. We show how these conditions can be easily derived from the wavefunctional point of view. Namely, they follow from the momentum constraint of general relativity, which is equivalent to the constraint of spatial diffeomorphism invariance. We also study expectation values beyond tree level. We show that subhorizon fluctuations in loop diagrams do not generate a mass term for superhorizon fluctuations. Such a mass term could spoil the predictivity of inflation, which is based on the existence of properly defined field variables that become constant once their wavelength is bigger than the size of the horizon. Such a mass term would be seen in the two-point expectation value as a contribution that grows linearly with time at late times. The absence of this mass term is closely related to the soft limits studied in previous chapters. It is analogous to the absence of a mass term for the photon in quantum electrodynamics, due to gauge symmetry. Finally, we use the tools of holography and entanglement entropy to study superhorizon correlations in quantum field theories in de Sitter space. The entropy has interesting terms that have no equivalent in flat space field theories. These new terms are due to particle creation in an expanding universe. The entropy is calculated directly for free massive scalar theories. For theories with holographic duals, it is determined by the area of an extremal surface in the bulk geometry. We calculate the entropy for different classes of holographic duals. For one of these classes, the holographic dual geometry is an asymptotically Anti-de Sitter space that decays into a crunching cosmology, an open Friedmann-Robertson-Walker universe. The extremal surface used in the calculation of the entropy lies almost entirely on the slice of maximal scale factor of the crunching cosmology.

  19. Tension fatigue of glass/epoxy and graphite/epoxy tapered laminates

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.; Obrien, T. Kevin; Salpekar, Satish A.

    1990-01-01

    Symmetric tapered laminates with internally dropped plies were tested with two different layups and two materials, S2/SP250 glass/epoxy and IM6/1827I graphite/epoxy. The specimens were loaded in cyclic tension until they delaminated unstably. Each combination of material and layup had a unique failure mode. Calculated values of the strain energy release rate, G, from a finite element model of delamination along the taper, and of delamination from a matrix ply crack, were used with mode I fatigue characterization data from tests of these materials to calculate expected delamination onset loads. Calculated values were compared to the experimental results. The comparison showed that when the calculated G was chosen according to the observed delamination failures, the agreement between the calculated and measured delamination onset loads was reasonable for each combination of layup and material.

  20. Patients With Thumb Carpometacarpal Arthritis Have Quantifiable Characteristic Expectations That Can Be Measured With a Survey.

    PubMed

    Kang, Lana; Hashmi, Sohaib Z; Nguyen, Joseph; Lee, Steve K; Weiland, Andrew J; Mancuso, Carol A

    2016-01-01

    Although patient expectations associated with major orthopaedic conditions have shown clinically relevant and variable effects on outcomes, expectations associated with thumb carpometacarpal (CMC) arthritis have not been identified, described, or analyzed before, to our knowledge. We asked: (1) Do patients with thumb CMC arthritis express characteristic expectations that are quantifiable and have measurable frequency? (2) Can a survey on expectations developed from patient-derived data quantitate expectations in patients with thumb CMC arthritis? The study was a prospective cohort study. The first phase was a 12-month period involving interviews of 42 patients with thumb CMC arthritis to define their expectations of treatment. The interview process used techniques and principles of qualitative methodology, including open-ended interview questions, unrestricted time, and study size determined by data saturation. Verbatim responses provided content for the draft survey. The second phase was a 12-month period assessing the survey for test-retest reliability, with the recruitment of 36 participants who completed the survey twice. The survey was finalized from clinically relevant content, frequency of endorsement, weighted kappa values for concordance of responses, and intraclass coefficient and Cronbach's alpha for interrater reliability and internal consistency. Thirty-two patients volunteered 256 characteristic expectations, which fell into 21 discrete categories. Expectations with similar concepts were combined by eliminating redundancy while maintaining original terminology. These were reduced to 19 items that comprised a one-page survey. This survey showed high concordance, interrater reliability, and internal consistency, with weighted kappa values between 0.58 and 0.78 (95% CI, 0.39-0.78; p < 0.001), an intraclass correlation coefficient of 0.94 (95% CI, 0.94-0.98; p < 0.001), and Cronbach's alpha values of 0.94 and 0.95 (95% CI, 0.91-0.96; p < 0.001). The thumb CMC arthritis expectations survey score is convertible to an overall score from 0 to 100 points, calculated on the basis of the number of expectations and the degree of improvement expected, with higher scores indicating higher expectations. Patients with thumb CMC arthritis volunteer a characteristic and quantifiable set of expectations. Using responses recorded verbatim from patient interviews, a clinically relevant, valid, and reliable expectations survey was developed that measures the physical and psychosocial expectations of patients seeking treatment for CMC arthritis. The survey provides a calculable score that can record patients' expectations. Clinical application of this survey includes identification of factors that influence fulfilment of these expectations. Level II, prospective study.

  1. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    PubMed

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  2. A simple formulation and solution to the replacement problem: a practical tool to assess the economic cow value, the value of a new pregnancy, and the cost of a pregnancy loss.

    PubMed

    Cabrera, V E

    2012-08-01

    This study contributes to the research literature by providing a new formulation for the cow replacement problem, and it also contributes to the Extension deliverables by providing a user-friendly decision support system tool that would more likely be adopted and applied for practical decision making. The cow value, its related values of a new pregnancy and a pregnancy loss, and their associated replacement policies determine profitability in dairy farming. One objective of this study was to present a simple, interactive, dynamic, and robust formulation of the cow value and the replacement problem, including the expected future production of the cow and the genetic gain of the replacement. The proven hypothesis of this study was that all the above requirements could be achieved by using a Markov chain algorithm. The Markov chain model allowed (1) calculation of a forward expected value of a studied cow and its replacement; (2) use of a single model (the Markov chain) to calculate both the replacement policies and the herd statistics; (3) use of a predefined, preestablished farm reproductive replacement policy; (4) inclusion of a farmer's assessment of the expected future performance of a cow; (5) inclusion of a farmer's assessment of genetic gain with a replacement; and (6) use of a simple spreadsheet or an online system to implement the decision support system. Results clearly demonstrated that the decision policies found with the Markov chain model were consistent with more complex dynamic programming models. The final user-friendly decision support tool is available at http://dairymgt.info/ → Tools → The Economic Value of a Dairy Cow. This tool calculates the cow value instantaneously and is highly interactive, dynamic, and robust. When a Wisconsin dairy farm was studied using the model, the solution policy called for replacing nonpregnant cows at 11 months after calving, or months in milk (MIM), if in the first lactation and at 9 MIM if in later lactations. The cow value for an average second-lactation cow was as follows: (1) when nonpregnant, (a) $897 in MIM = 1 and (b) $68 in MIM = 8; (2) when the cow just became pregnant, (a) $889 for a pregnancy in MIM = 3 and (b) $298 for a pregnancy in MIM = 8; and (3) the value of a pregnancy loss when a cow became pregnant in MIM = 5 was (a) $221 when the loss was in the first month of pregnancy and (b) $897 when the loss was in the ninth month of pregnancy. The cow value indicated that pregnant cows should be kept. The expected future production of a cow relative to a similar average cow was an important determinant in the cow replacement decision. The expected production in the rest of the lactation was more important for nonpregnant cows, and the expected production in successive lactations was more important for pregnant cows. For a cow at MIM = 16 and 6 mo pregnant, an expected milk production of 120% in the present lactation or in successive lactations determined, respectively, 1.52 and 6.48 times the cow value of an average-production cow. The cow value decreased by $211 for every 1 percentage point of expected genetic gain of the replacement.
A break-even analysis of the cow value with respect to expected milk production of an average second-parity cow indicated that (1) nonpregnant cows in MIM = 1 and 8 could still remain in the herd if they produced at least 84 and 98% in the present lactation or if they produced at least 78 and 97% in future lactations, respectively; and (2) cows becoming pregnant in MIM = 5 would require at least 64% of milk production in the rest of the lactation or 93% in successive lactations to remain in the herd. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
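
    A drastically simplified present-value sketch of the comparison that the paper's Markov chain formalises: the cow value is the expected discounted net return of keeping the cow, survival-weighted month by month, minus the immediate cash flow of replacing her. All herd numbers are invented for illustration.

    ```python
    import numpy as np

    def cow_value(monthly_net, survival, heifer_cost, salvage, rate=0.05 / 12):
        """Present value of keeping the cow minus present value of replacing now."""
        pv_keep, alive = 0.0, 1.0
        for t, (net, s) in enumerate(zip(monthly_net, survival)):
            alive *= s                                  # probability still in herd
            pv_keep += alive * net / (1 + rate) ** t    # discounted expected return
        return pv_keep - (salvage - heifer_cost)

    value = cow_value(monthly_net=[250] * 12, survival=[0.97] * 12,
                      heifer_cost=1500, salvage=700)
    print(round(value, 2))
    ```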

  3. Design of magnetic system to produce intense beam of polarized molecules of H2 and D2

    NASA Astrophysics Data System (ADS)

    Yurchenko, A. V.; Nikolenko, D. M.; Rachek, I. A.; Shestakov, Yu V.; Toporkov, D. K.; Zorin, A. V.

    2017-12-01

    A magnetic-separating system is designed to produce polarized molecular high-density beams of H2/D2. The distribution of the magnetic field inside the aperture of the multipole magnet was calculated using the Mermaid software package. The calculation showed that the characteristic value of the magnetic field is 40 kGs and the field gradient is about 60 kGs/cm. A numerical calculation of the trajectories of the motion of molecules with different spin projections in this magnetic system is performed. The article discusses the possibility of using the designed magnetic system for the creation of a high-intensity source of polarized molecules. The expected intensity of this source is calculated. The expected flux of molecules focused in the receiver tube is 3.5·10^16 mol/s for the hydrogen molecule and 2.0·10^15 mol/s for the deuterium molecule.

  4. Income tax considerations for forest landowners in the South: a case study on tax planning

    Treesearch

    Philip D. Bailey; Harry L. Jr. Haney; Debra S. Callihan; John L. Greene

    1999-01-01

    Federal and state income taxes are calculated for hypothetical owners of nonindustrial private forests (NIPF) across 14 southern states to illustrate the effects of differential state tax treatment. The income tax liability is calculated in a year in which the timber owners harvest $200,000 worth of timber. After-tax land expectation values for a forest landowner are...

  5. Carbon and oxygen isotopic disequilibrium during calcification of Globigerina bulloides in the Southern ocean

    NASA Astrophysics Data System (ADS)

    K, P.; Ghosh, P.; N, A.

    2015-12-01

    Oxygen and carbon isotopes in the planktonic foraminifer Globigerina bulloides recovered from the water column at 0-1000 m depth across a meridional transect (10°N to 53°S) of the Indian Ocean were compared with the available data from core-top samples across the same transect. We also recorded in situ temperatures of the water column based on probe (CTD) profiles. The δ18O and δ13C values measured in the core-top samples match the tow results. The equilibrium δ18O of calcite calculated from the known temperature and δ18O of the water column allowed us to compare the observed δ18O of the foraminiferal shell with the expected equilibrium values. Our comparison of carbonate composition in the samples between 10°N and 40°S showed an excellent match with the expected equilibrium δ18O values established from the water collected at a depth range of ~75-200 m; however, beyond 40°S the disequilibrium was pronounced, with heavier δ18O (enriched by ~1.5‰) recorded in the carbonate compared with the expected equilibrium δ18O values established from the water. This observation was further verified by comparing the δ13C measurement of shell carbonates with the equilibrium δ13C of calcite calculated from the known temperature and δ13C of dissolved inorganic carbon in the water column. The δ13C of the shell carbonate was found to be heavier than the expected equilibrium δ13C. Both δ18O and δ13C showed simultaneous enrichment in the region beyond 40°S, suggesting the role of processes such as leaching along with dissolution of shell carbonate in a relatively acidic condition.

  6. 46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...

  7. 46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...

  8. 46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...

  9. 46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...

  10. 46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...

  11. A statistical study of merging galaxies: Theory and observations

    NASA Technical Reports Server (NTRS)

    Chatterjee, Tapan K.

    1990-01-01

    A study of the expected frequency of merging galaxies is conducted, using the impulsive approximation. Results indicate that if we consider mergers involving galaxy pairs without halos in a single crossing time or orbital period, the expected frequency of mergers is two orders of magnitude below the observed value for the present epoch. If we consider mergers involving several orbital periods or crossing times, the expected frequency goes up by an order of magnitude. Preliminary calculations indicate that if we consider galaxy mergers between pairs with massive halos, the merger is very much hastened.

  12. Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model.

    PubMed

    Wako, Hiroshi; Abe, Haruo

    2016-01-01

    The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding.

  13. Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model

    PubMed Central

    Wako, Hiroshi; Abe, Haruo

    2016-01-01

    The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding. PMID:28409079

  14. A Review of Methods for Analysis of the Expected Value of Information.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2017-10-01

    In recent years, value-of-information analysis has become more widespread in health economic evaluations, specifically as a tool to guide further research and perform probabilistic sensitivity analysis. This is partly due to methodological advancements allowing for the fast computation of a typical summary known as the expected value of partial perfect information (EVPPI). A recent review discussed some approximation methods for calculating the EVPPI, but as the research has been active over the intervening years, that review does not discuss some key estimation methods. Therefore, this paper presents a comprehensive review of these new methods. We begin by providing the technical details of these computation methods. We then present two case studies in order to compare the estimation performance of these new methods. We conclude that a method based on nonparametric regression offers the best method for calculating the EVPPI in terms of accuracy, computational time, and ease of implementation. This means that the EVPPI can now be used practically in health economic evaluations, especially as all the methods are developed in parallel with R functions and a web app to aid practitioners.
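
    A compact sketch of the regression-based EVPPI idea the review favours, on a toy two-option model: regress each option's simulated net benefit on the parameter of interest to smooth out the remaining uncertainty, then compare the per-sample best fitted value with the best overall mean. Polynomial regression stands in here for the nonparametric smoother used in practice; the model and numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n = 20_000
    phi = rng.normal(0.0, 1.0, n)                    # parameter of interest
    noise = rng.normal(0.0, 2.0, (n, 2))             # all remaining uncertainty
    nb = np.column_stack([1.0 + 0.0 * phi,           # option 0: flat net benefit
                          0.5 + 1.5 * phi]) + noise  # option 1: depends on phi

    # smooth conditional means E[NB_d | phi] via cubic polynomial regression
    fitted = np.column_stack([
        np.polyval(np.polyfit(phi, nb[:, d], deg=3), phi) for d in range(2)
    ])
    evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
    print(f"EVPPI ~ {evppi:.3f}")
    ```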

  15. Variation in Differential and Total Cross Sections Due to Different Radial Wave Functions

    ERIC Educational Resources Information Center

    Williamson, W., Jr.; Greene, T.

    1976-01-01

    Three sets of analytical wave functions are used to calculate the Na (3s→3p) transition differential and total electron excitation cross sections in the Born approximation. Results show expected large variations in values. (Author/CP)

  16. Efficiency degradation due to tracking errors for point focusing solar collectors

    NASA Technical Reports Server (NTRS)

    Hughes, R. O.

    1978-01-01

    An important parameter in the design of point-focusing solar collectors is the intercept factor, which is a measure of efficiency and of the energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which allows the expected value to be calculated.
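
    A numerical sketch of that expectation (all numbers invented): the intercept factor for a given pointing error is the fraction of a radially symmetric Gaussian flux spot that lands inside a circular receiver, and the expected value averages it over tracking errors taken as uniform within the threshold limits.

    ```python
    import numpy as np

    def intercept_factor(offset, r_receiver=2.0, sigma=1.0, n=200_000, seed=0):
        """Fraction of a radially symmetric Gaussian spot inside the receiver."""
        pts = np.random.default_rng(seed).normal(0.0, sigma, (n, 2))
        pts[:, 0] += offset                   # displace spot by the tracking error
        return np.mean(np.hypot(pts[:, 0], pts[:, 1]) <= r_receiver)

    threshold = 1.5                                    # tracking dead band
    errors = np.linspace(-threshold, threshold, 31)    # uniform over the band
    expected_gamma = np.mean([intercept_factor(e) for e in errors])
    print(round(expected_gamma, 3))
    ```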

  17. Review article: Medical decision models of Helicobacter pylori therapy to prevent gastric cancer.

    PubMed

    Sonnenberg, A; Inadomi, J M

    1998-02-01

    The aim of the present article is to study the utility of Helicobacter pylori eradication programmes in decreasing the incidence of gastric cancer. Three types of decision models are employed to pursue this aim, i.e. decision tree, present value, and declining exponential approximation of life expectancy (DEALE). 1) A decision tree allows one to model the interaction of multiple variables in great detail and to calculate the marginal cost, as well as the marginal cost-benefit ratio, of a preventive strategy. The cost of gastric cancer, the efficacy of H. pylori therapy in preventing cancer, and the cumulative probability of developing gastric cancer exert the largest influence on the marginal cost of cancer prevention. The high cost of future gastric cancer and a high efficacy of therapy make screening for H. pylori and its eradication the preferred strategy. 2) The present value is an economic method to adjust future costs or benefits to their current value using a discount rate and the length of time between now and a given time point in the future. It accounts for the depreciation of money and all material values over time. During childhood, the present value of future gastric cancer is very low. Vaccination of children to prevent gastric cancer would need to be very inexpensive to be practicable. Cancer prevention becomes a feasible option, only if the time period between the preventive measures and the occurrence of gastric cancer can be made relatively short. 3) The DEALE provides a means to calculate the increase in life expectancy that would occur, if death from a particular disease became preventable. Life expectancy of the general population is hardly affected by gastric cancer. For life expectancy to increase appreciably by vaccination or antibiotic therapy directed against H. pylori infection, these interventions would need to be focused towards a sub-population with an a priori high risk for gastric cancer.

  18. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
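
    A minimal sketch of bounded expectations over a class of measures (toy climate-economics numbers, not the paper's models): index a family of lognormal climate-sensitivity distributions by an imprecise parameter, compute the expected cost of each emissions level under every member, and report the lower and upper expectations. The minimum-upper-expected-cost criterion then compares only the upper bounds.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def expected_cost(mu, emissions, n=100_000):
        """Toy cost: climate damage rising with emissions plus abatement cost."""
        sensitivity = rng.lognormal(mu, 0.4, n)
        return np.mean(sensitivity * emissions**2 + 3.0 * (10.0 - emissions))

    mus = np.linspace(0.8, 1.2, 9)        # the class of probability measures
    for e in (2.0, 5.0, 8.0):
        costs = [expected_cost(mu, e) for mu in mus]
        print(e, round(min(costs), 1), round(max(costs), 1))  # lower / upper expectation
    ```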

  19. WE-DE-201-11: Sensitivity and Specificity of Verification Methods Based On Total Reference Air Kerma (TRAK) Or On User Provided Dose Points for Graphically Planned Skin HDR Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Devlin, P; Bhagwat, M

    Purpose: To investigate the sensitivity and specificity of a novel verification methodology for image-guided skin HDR brachytherapy plans using a TRAK-based reasonableness test, compared to a typical manual verification methodology. Methods: Two methodologies were used to flag treatment plans necessitating additional review due to a potential discrepancy of 3 mm between planned dose and clinical target in the skin. Manual verification was used to calculate the discrepancy between the average dose to points positioned at time of planning, representative of the prescribed depth, and the expected prescription dose. Automatic verification was used to calculate the discrepancy between the TRAK of the clinical plan and its expected value, which was calculated using standard plans with varying curvatures, ranging from flat to cylindrically circumferential. A plan was flagged if a discrepancy >10% was observed. Sensitivity and specificity were calculated using as the criterion for a true positive that >10% of plan dwells had a distance to prescription dose >1 mm different from the prescription depth (3 mm + size of applicator). All HDR image-based skin brachytherapy plans treated at our institution in 2013 were analyzed. Results: 108 surface applicator plans to treat skin of the face, scalp, limbs, feet, hands or abdomen were analyzed. Median number of catheters was 19 (range, 4 to 71) and median number of dwells was 257 (range, 20 to 1100). Sensitivity/specificity were 57%/78% for manual and 70%/89% for automatic verification. Conclusion: A check based on the expected TRAK value is feasible for irregularly shaped, image-guided skin HDR brachytherapy. This test yielded higher sensitivity and specificity than a test based on the identification of representative points, and can be implemented with a dedicated calculation code or with pre-calculated lookup tables of ideally shaped, uniform surface applicators.

  20. Method of Calibrating a Force Balance

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor); Rhew, Ray D. (Inventor); Johnson, Thomas H. (Inventor); Landman, Drew (Inventor)

    2015-01-01

    A calibration system and method utilizes acceleration of a mass to generate a force on the mass. An expected value of the force is calculated based on the magnitude and acceleration of the mass. A fixture is utilized to mount the mass to a force balance, and the force balance is calibrated to provide a reading consistent with the expected force determined for a given acceleration. The acceleration can be varied to provide different expected forces, and the force balance can be calibrated for different applied forces. The acceleration may result from linear acceleration of the mass or rotational movement of the mass.
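
    The core of the method reduces to generating reference forces from Newton's second law and fitting the balance output to them. A minimal sketch with invented readings, not the patented implementation:

    ```python
    import numpy as np

    mass = 2.5                                    # kg, known calibration mass
    accel = np.array([1.0, 2.0, 4.0, 8.0])        # m/s^2, commanded accelerations
    expected_force = mass * accel                 # N, expected values F = m * a

    readings = np.array([2.7, 5.2, 10.3, 20.4])   # raw balance output (made up)
    gain, offset = np.polyfit(readings, expected_force, 1)
    print(f"calibrated force = {gain:.4f} * reading + {offset:+.4f}")
    ```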

  1. Decomposing cross-country differences in quality adjusted life expectancy: the impact of value sets.

    PubMed

    Heijink, Richard; van Baal, Pieter; Oppe, Mark; Koolman, Xander; Westert, Gert

    2011-06-23

    The validity, reliability and cross-country comparability of summary measures of population health (SMPH) have been persistently debated. In this debate, the measurement and valuation of nonfatal health outcomes have been defined as key issues. Our goal was to quantify and decompose international differences in health expectancy based on health-related quality of life (HRQoL). We focused on the impact of value set choice on cross-country variation. We calculated Quality Adjusted Life Expectancy (QALE) at age 20 for 15 countries in which EQ-5D population surveys had been conducted. We applied the Sullivan approach to combine the EQ-5D based HRQoL data with life tables from the Human Mortality Database. Mean HRQoL by country-gender-age was estimated using a parametric model. We used nonparametric bootstrap techniques to compute confidence intervals. QALE was then compared across the six country-specific time trade-off value sets that were available. Finally, three counterfactual estimates were generated in order to assess the contribution of mortality, health states and health-state values to cross-country differences in QALE. QALE at age 20 ranged from 33 years in Armenia to almost 61 years in Japan, using the UK value set. The value sets of the other five countries generated different estimates, up to seven years higher. The relative impact of choosing a different value set differed across country-gender strata between 2% and 20%. In 50% of the country-gender strata the ranking changed by two or more positions across value sets. The decomposition demonstrated a varying impact of health states, health-state values, and mortality on QALE differences across countries. The choice of the value set in SMPH may seriously affect cross-country comparisons of health expectancy, even across populations of similar levels of wealth and education. In our opinion, it is essential to get more insight into the drivers of differences in health-state values across populations. This will enhance the usefulness of health-expectancy measures.

  2. Computational molecular spectroscopy of X̃ ²Π NCS: Electronic properties and ro-vibrationally averaged structure

    NASA Astrophysics Data System (ADS)

    Hirano, Tsuneo; Nagashima, Umpei; Jensen, Per

    2018-04-01

    For NCS in the X̃ ²Π electronic ground state, three-dimensional potential energy surfaces (3D PESs) have been calculated ab initio at the core-valence, full-valence MR-SDCI+Q/[aug-cc-pCVQZ (N, C, S)] level of theory. The ab initio 3D PESs are employed in second-order-perturbation-theory and DVR3D calculations to obtain various molecular constants and ro-vibrationally averaged structures. The 3D PESs show that X̃ ²Π NCS has its potential minimum at a linear configuration, and hence it is a "linear molecule." The equilibrium structure has re(N-C) = 1.1778 Å, re(C-S) = 1.6335 Å, and ∠e(N-C-S) = 180°. The ro-vibrationally averaged structure, determined as expectation values over DVR3D wavefunctions, has ⟨r(N-C)⟩₀ = 1.1836 Å, ⟨r(C-S)⟩₀ = 1.6356 Å, and ⟨∠(N-C-S)⟩₀ = 172.5°. Using these expectation values as the initial guess, a bent r₀ structure having an ⟨∠(N-C-S)⟩₀ of 172.2° is deduced from the experimentally reported B₀ values for NC³²S and NC³⁴S. Our previous prediction that a linear molecule, in any ro-vibrational state including the ro-vibrational ground state, is to be "observed" as being bent on ro-vibrational average has been confirmed here theoretically, through the expectation value for the bond-angle deviation from linearity, ⟨ρ̄⟩, and experimentally, through the interpretation of the experimentally derived rotational-constant values.

  3. Breakdown and Limit of Continuum Diffusion Velocity for Binary Gas Mixtures from Direct Simulation

    NASA Astrophysics Data System (ADS)

    Martin, Robert Scott; Najmabadi, Farrokh

    2011-05-01

    This work investigates the breakdown of the continuum relations for diffusion velocity in inert binary gas mixtures. Values of the relative diffusion velocities for components of a gas mixture may be calculated using Chapman-Enskog theory and arise not only from concentration gradients, but also from pressure and temperature gradients in the flow, as described by Hirschfelder. Because Chapman-Enskog theory employs a linear perturbation around equilibrium, it is expected to break down when the velocity distribution deviates significantly from equilibrium. This breakdown of the overall flow has long been an area of interest in rarefied gas dynamics. By comparing the continuum values to results from Bird's DS2V Monte Carlo code, we propose a new limit on the continuum approach specific to binary gases. To remove the confounding influence of an inconsistent molecular model, we also present the application of the variable soft sphere (VSS) model used in DS2V to the continuum diffusion velocity calculation. Fitting sample asymptotic curves to the breakdown, a limit, Vmax, that is a fraction of an analytically derived limit resulting from the kinetic temperature of the mixture is proposed. With an expected deviation of only 2% between the physical values and continuum calculations within ±Vmax/4, we suggest this as a conservative estimate of the range of applicability of the continuum theory.

  4. Ground difference compensating system

    DOEpatents

    Johnson, Kris W.; Akasam, Sivaprasad

    2005-10-25

    A method of ground level compensation includes measuring a voltage of at least one signal with respect to a primary ground potential and measuring, with respect to the primary ground potential, a voltage level associated with a secondary ground potential. A difference between the voltage level associated with the secondary ground potential and an expected value is calculated. The measured voltage of the at least one signal is adjusted by an amount corresponding to the calculated difference.
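    A minimal sketch of the compensation step described in this record, with hypothetical voltages; the patent only specifies an adjustment "corresponding to" the calculated difference, so the subtraction sign convention below is an assumption.

```python
def compensate(signal_v, secondary_ground_v, expected_ground_v=0.0):
    """Shift a measured signal by the observed ground-potential error.

    All voltages are measured with respect to the primary ground potential.
    """
    ground_error = secondary_ground_v - expected_ground_v
    return signal_v - ground_error

# A sensor referenced to the secondary ground reads 2.48 V while that ground
# sits 0.03 V above the primary ground: report 2.45 V.
print(compensate(2.48, 0.03))
```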

  5. Structured settlement annuities, part 2: mortality experience 1967--95 and the estimation of life expectancy in the presence of excess mortality.

    PubMed

    Singer, R B; Schmidt, C J

    2000-01-01

    The mortality experience for structured settlement (SS) annuitants issued both standard (Std) and substandard (SStd) has been reported twice previously by the Society of Actuaries (SOA), but the 1995 mortality described here has not previously been published. We describe in detail the 1995 SS mortality, and we also discuss the methodology of calculating life expectancy (e), contrasting three different life-table models. With SOA permission, we present in four tables the unpublished results of its 1995 SS mortality experience by Std and SStd issue, sex, and a combination of 8 age and 6 duration groups. Overall results, with expected mortality from the 1983a Individual Annuity Table, showed a mortality ratio (MR) of about 140% for Std cases and about 650% for all SStd cases. Life expectancy in a group with excess mortality may be computed either by adding the decimal excess death rate (EDR) to q' for each year of attained age to age 109, or by multiplying q' by the decimal MR for each year to age 109. An example is given for men aged 60 with localized prostate cancer; annual EDRs from a large published cancer study are used at durations 0-24 years, and the last EDR is assumed constant to age 109. This value of e is compared with e from constant initial values of EDR or MR after the first year. Interrelations of age, sex, e, and EDR and MR are discussed and illustrated with tabular data. It is shown that a constant MR for life-table calculation of e consistently overestimates projected annual mortality at older attained ages and underestimates e. The EDR method, approved for reserve calculations, is also recommended for use in underwriting conversion tables.
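    The two life-table constructions contrasted above (a constant EDR added to q' versus a constant MR multiplying q') can be sketched as follows. The baseline mortality curve and the EDR/MR values are invented for illustration only.

```python
import numpy as np

def life_expectancy(q):
    """Curtate life expectancy from a vector of annual death rates q."""
    survival = np.cumprod(1.0 - q)      # probability of surviving each year
    return survival.sum()

# Toy standard mortality q' rising with attained age (ages 60..109).
ages = np.arange(60, 110)
q_std = np.minimum(0.005 * 1.09 ** (ages - 60), 1.0)

e_std = life_expectancy(q_std)
e_edr = life_expectancy(np.minimum(q_std + 0.02, 1.0))  # EDR method: q' + EDR
e_mr  = life_expectancy(np.minimum(q_std * 3.0, 1.0))   # MR method: q' x 300%

print(f"standard: {e_std:.1f} y, EDR method: {e_edr:.1f} y, MR method: {e_mr:.1f} y")
```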

  6. Reliability study of biometrics "do not contact" in myopia.

    PubMed

    Migliorini, R; Fratipietro, M; Comberiati, A M; Pattavina, L; Arrico, L

    The aim of the study is to compare the refractive condition of the eye actually achieved after surgery with the expected refractive condition calculated by a biometer. The study was conducted in a random group of 38 eyes of patients undergoing surgery by phacoemulsification. The mean absolute error between the values predicted from the optical-biometer measurements and those obtained post-operatively was around 0.47%. Our study shows results not far from those reported in the literature; the mean absolute error is among the lowest reported values, at 0.47 ± 0.11 (SEM).

  7. Fuel cell stack monitoring and system control

    DOEpatents

    Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.

    2004-02-17

    A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A preestablished relationship between voltage and current over the operating range of the fuel cell is established. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current for the fuel cell is symbolized as a polarization curve at given operating conditions of the fuel cell.
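    A minimal sketch of the monitoring logic described above, with a hypothetical polarization curve and allowable variance (the patent does not give numeric values):

```python
import numpy as np

# Hypothetical polarization curve: expected stack voltage vs. current.
curve_current = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # A
curve_voltage = np.array([95.0, 82.0, 74.0, 66.0, 55.0])     # V

def check_stack(actual_current, actual_voltage, allowable_variance=3.0):
    """Compare measured voltage with the expected value from the curve."""
    expected_voltage = np.interp(actual_current, curve_current, curve_voltage)
    variance = abs(actual_voltage - expected_voltage)
    return variance, variance > allowable_variance  # (value, generate output?)

variance, fault = check_stack(120.0, 65.0)
print(f"variance = {variance:.1f} V, fault output: {fault}")
```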

  8. Impact of the ozone monitoring instrument row anomaly on the long-term record of aerosol products

    NASA Astrophysics Data System (ADS)

    Torres, Omar; Bhartia, Pawan K.; Jethva, Hiren; Ahn, Changwoo

    2018-05-01

    Since about three years after the launch of the Ozone Monitoring Instrument (OMI) on the EOS-Aura satellite, the sensor's viewing capability has been affected by what is believed to be an internal obstruction that has reduced OMI's spatial coverage. It currently affects about half of the instrument's 60 viewing positions. In this work we carry out an analysis to assess the effect of the reduced spatial coverage on the monthly average values of retrieved aerosol optical depth (AOD), single scattering albedo (SSA) and the UV Aerosol Index (UVAI), using the 2005-2007 three-year period prior to the onset of the row anomaly. Regional monthly average values calculated using viewing positions 1 through 30 were compared to similarly obtained values using positions 31 through 60, with the expectation of finding close agreement between the two calculations. As expected, mean monthly values of AOD and SSA obtained with these two scattering-angle-dependent subsets of OMI observations agreed over regions where carbonaceous or sulphate aerosol particles are the predominant aerosol type. However, over arid regions, where desert dust is the main aerosol type, significant differences between the two sets of calculated regional mean AOD values were observed. The difference in retrieved desert dust AOD between the scattering-angle-dependent observation subsets was due to the incorrect representation of the desert dust scattering phase function. A sensitivity analysis using radiative transfer calculations demonstrated that the source of the observed AOD bias was the spherical shape assumption for desert dust particles. A similar analysis in terms of UVAI yielded large differences in the monthly mean values for the two sets of calculations over cloudy regions. By contrast, in arid regions with minimal cloud presence, the resulting UVAI monthly average values for the two sets of observations were in very close agreement. The discrepancy under cloudy conditions was found to be caused by the parameterization of clouds as opaque Lambertian reflectors. When cloud scattering effects were properly accounted for using Mie theory, the observed UVAI angular bias was significantly reduced. The analysis discussed here has uncovered important algorithmic deficiencies associated with the model representation of the angular dependence of scattering effects of desert dust aerosols and cloud droplets. The resulting improvements in the handling of desert dust and cloud scattering have been incorporated in an improved version of the OMAERUV algorithm.

  9. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
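    The record does not give the algorithm's equations; the sketch below uses the textbook CCD point-source signal-to-noise expression as a stand-in, with all rates assumed for illustration.

```python
import numpy as np

def point_source_snr(t, star_rate, sky_rate, dark_rate, read_noise, n_pix):
    """Textbook CCD SNR for a point source after integration time t (s).

    star_rate: source electrons/s in the aperture; sky_rate and dark_rate:
    electrons/s/pixel; read_noise: electrons RMS/pixel; n_pix: aperture pixels.
    """
    signal = star_rate * t
    noise = np.sqrt(signal + n_pix * (sky_rate * t + dark_rate * t + read_noise**2))
    return signal / noise

# SNR grows roughly as sqrt(t) once background-limited.
for t in (10.0, 100.0, 1000.0):
    snr = point_source_snr(t, star_rate=50.0, sky_rate=2.0,
                           dark_rate=0.1, read_noise=5.0, n_pix=25)
    print(f"t = {t:6.0f} s -> SNR = {snr:.1f}")
```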

  10. Aquifer-test evaluation and potential effects of increased ground-water pumpage at the Stovepipe Wells Hotel area, Death Valley National Monument, California

    USGS Publications Warehouse

    Woolfenden, L.R.; Martin, Peter; Baharie, Brian

    1988-01-01

    Ground-water use in the Stovepipe Wells Hotel area in Death Valley National Monument is expected to increase significantly if the nonpotable, as well as potable, water supply is treated by reverse osmosis. During the peak tourist season, October through March, ground-water pumpage could increase by 37,500 gallons per day, or 76%. The effects of this additional pumpage on water levels in the area, particularly near a stand of phreatophytes about 10,000 feet east of the well field, are of concern. In order to evaluate the effects of increased pumpage on water levels in the Stovepipe Wells Hotel area well field, two aquifer tests were performed at the well field to determine the transmissivity and storage coefficients of the aquifer. Analysis of the aquifer tests determined that a transmissivity of 1,360 feet squared per day was representative of the aquifer. The estimated value of transmissivity and the storage-coefficient values that are representative of confined (1.2 × 10⁻⁴) and unconfined (0.25) conditions were used in the Theis equation to calculate the additional drawdown that might occur after 1, 10, and 50 years of increased pumpage. The drawdown calculated by using the lower storage-coefficient value represents the maximum additional drawdown that might be expected from the assumed increase in pumpage; the drawdown calculated by using the higher storage-coefficient value represents the minimum additional drawdown. Calculated additional drawdowns after 50 years of pumping range from 7.8 feet near the pumped well to 2.4 feet at the phreatophyte stand assuming confined conditions, and from 5.7 feet near the pumped well to 0.3 foot at the phreatophyte stand assuming unconfined conditions. Actual drawdowns will probably lie somewhere between these values. Drawdowns measured in observation wells during 1973-85, in response to an average pumpage of 34,200 gallons per day at the Stovepipe Wells Hotel well field, are similar to the drawdowns calculated by the Theis equation for the assumed increase in pumpage. (Author's abstract)
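    A sketch of the Theis calculation using the values quoted above; the confined storage coefficient is taken as 1.2 × 10⁻⁴ as interpreted here, and scipy's exp1 supplies the well function W(u).

```python
import numpy as np
from scipy.special import exp1  # W(u) = exp1(u), the Theis well function

def theis_drawdown(Q, T, S, r, t):
    """Drawdown (ft) for pumpage Q (ft^3/d), transmissivity T (ft^2/d),
    storage coefficient S, distance r (ft), and time t (d)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

Q = 37_500 / 7.4805          # 37,500 gal/d of additional pumpage, in ft^3/d
T = 1_360.0                  # ft^2/d, from the aquifer tests
t = 50 * 365.25              # 50 years of pumping, in days

# Confined conditions (S = 1.2e-4) at the phreatophyte stand, 10,000 ft away:
print(theis_drawdown(Q, T, 1.2e-4, 10_000.0, t))  # ~2.5 ft, close to the 2.4 ft above
```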

  11. Microscopic study of spin cut-off factors of nuclear level densities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gholami, M.; Kildir, M.; Behkami, A. N.

    Level densities and spin cut-off factors have been investigated within the microscopic approach based on the BCS Hamiltonian. In particular, the spin cut-off parameters have been calculated at neutron binding energies over a large range of nuclear mass using the BCS theory. The spin cut-off parameters σ²(E) have also been obtained from the Gilbert and Cameron expression and from rigid-body calculations. The results were compared with their corresponding macroscopic values. It was found that the values of σ²(E) did not increase smoothly with A as expected based on macroscopic theory. Instead, the values of σ²(E) show structure reflecting the angular momentum of the shell-model orbitals near the Fermi energy.

  12. A Tale of Two Limpets (Patella vulgata and Patella stellaeformis): Evaluating a New Proxy for Late Holocene Climate Change in Coastal Areas

    NASA Astrophysics Data System (ADS)

    Fenger, T. L.; Surge, D. M.; Schoene, B. R.; Carter, J. G.; Milner, N.

    2006-12-01

    Shells of the European limpet, Patella vulgata, from Late Holocene archaeological deposits potentially contain critical information about climate change in coastal areas. Before deciphering climate information preserved in these zooarchaeological records, we studied the controls on oxygen isotope ratios (δ18O) in modern specimens. We tested the hypothesis that P. vulgata precipitates its shell in isotopic equilibrium with ambient water by comparing δ18OSHELL with expected values. Expected δ18OSHELL was constructed using the calcite-water fractionation equation, observed sea surface temperature (SST), and assuming δ18OWATER is +0.10‰ (VSMOW). Comparison between expected and measured δ18OSHELL revealed a +1.51±0.21‰ (VPDB) offset from expected values. Consequently, estimated SST calculated from δ18OSHELL was 6.50±2.45°C lower than observed SST. However, because the offset was relatively uniform, an adjustment can be made to account for this predictable vital effect and past SST can be reliably reconstructed. To further investigate the source of offset in this genus, we analyzed a fully marine tropical species (Patella stellaeformis) to minimize seasonal variation in environmental factors that influence δ18OSHELL. P. stellaeformis was evaluated to determine whether it has a similar offset from equilibrium as P. vulgata. We tested the hypotheses that: (1) δ18OSHELL in tropical species also displays vital effects; and (2) the offset from equilibrium (if any) would be constant and predictable. Our results indicated: (1) aragonite comprises most of P. stellaeformis' shell; and (2) δ18OSHELL is statistically indistinguishable from expected values calculated using the aragonite-water fractionation equation (Kolmogorov-Smirnov test statistic=0.61, D0.05[56, 57]=1.36) in contrast with our observations in P. vulgata. Differences in mineralogy or growth rates at different latitudes may play a role in mechanisms that influence vital effects.

  13. Circuit analysis method for thin-film solar cell modules

    NASA Technical Reports Server (NTRS)

    Burger, D. R.

    1985-01-01

    The design of a thin-film solar cell module is dependent on the probability of occurrence of pinhole shunt defects. Using known or assumed defect density data, dichotomous population statistics can be used to calculate the number of defects expected in a module. Probability theory is then used to assign the defective cells to individual strings in a selected series-parallel circuit design. Iterative numerical calculation is used to calculate I-V curves using cell test values or assumed defective cell values as inputs. Good and shunted cell I-V curves are added to determine the module output power and I-V curve. Different levels of shunt resistance can be selected to model different defect levels.
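    The expected defect count from "dichotomous population statistics" is the mean of a binomial distribution. A minimal sketch, with the per-cell defect probability and module size assumed for illustration:

```python
from math import comb

p_defect = 0.02      # assumed probability a given cell has a pinhole shunt defect
n_cells = 120        # assumed number of cells in the module

# Expected number of defective cells = mean of Binomial(n_cells, p_defect)
print(f"expected defects per module: {n_cells * p_defect:.1f}")

def p_k_defects(k):
    """Probability of exactly k defective cells (binomial statistics)."""
    return comb(n_cells, k) * p_defect**k * (1 - p_defect)**(n_cells - k)

print(f"P(3 or fewer defects): {sum(p_k_defects(k) for k in range(4)):.3f}")
```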

  14. Hydrogen and helium under high pressure - A case for a classical theory of dense matter

    NASA Astrophysics Data System (ADS)

    Celebonovic, Vladan

    1989-06-01

    When subject to high pressure, H2 and He-3 are expected to undergo phase transitions, and to become metallic at a sufficiently high pressure. Using a semiclassical theory of dense matter proposed by Savic and Kasanin, calculations of phase-transition and metallization pressures have been performed for these two materials. In hydrogen, metallization occurs at p(M) = (3.0 ± 0.2) Mbar, while for helium the corresponding value is (106 ± 1) Mbar. A phase transition occurs in helium at p(tr) = (10.0 ± 0.4) Mbar. These values are close to the results obtainable by more rigorous methods. Possibilities of experimental verification of the calculations are briefly discussed.

  15. Planning the FUSE Mission Using the SOVA Algorithm

    NASA Technical Reports Server (NTRS)

    Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2011-01-01

    Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude-control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.

  16. Considerations of net present value in policy making regarding diagnostic and therapeutic technologies.

    PubMed

    Califf, Robert M; Rasiel, Emma B; Schulman, Kevin A

    2008-11-01

    The pharmaceutical and medical device industries function in a business environment in which shareholders expect companies to optimize profit within legal and ethical standards. A fundamental tool used to optimize decision making is the net present value calculation, which estimates the current value of the cash flows relating to an investment. We examined 3 prototypical research investment decisions that have been the source of public scrutiny to illustrate how policy decisions can be better understood when their impact on societally desirable investments by industry is viewed from the standpoint of their effect on net present value. In the case of direct, comparative clinical trials, a simple net present value calculation provides insight into why companies eschew such investments. In the case of pediatric clinical trials, the Pediatric Extension Rule changed the net present value calculation from unattractive to potentially very attractive by allowing patent extensions; thus, the dramatic increase in pediatric clinical trials can be explained by the financial return on investment. In the case of products for small markets, the fixed costs of development make this option financially unattractive. Policy decisions can be better understood when their impact on societally desirable investments by the pharmaceutical and medical device industries is viewed from the standpoint of their effect on net present value.
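    The net present value calculation referred to throughout is a one-liner; the cash flows below are hypothetical, loosely evoking a trial whose attractiveness flips with the discount rate.

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical trial investment: -50M now, then +9M/yr for 10 years.
flows = [-50.0] + [9.0] * 10
print(f"NPV at 10%: {npv(0.10, flows):+.1f}M")  # positive -> investment attractive
print(f"NPV at 16%: {npv(0.16, flows):+.1f}M")  # negative -> investment eschewed
```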

  17. Exposure of farm workers to electromagnetic radiation from cellular network radio base stations situated on rural agricultural land.

    PubMed

    Pascuzzi, Simone; Santoro, Francesco

    2015-01-01

    The electromagnetic field (EMF) levels generated by mobile telephone radio base stations (RBS) situated on rural-agricultural lands were assessed in order to evaluate the exposure of farm workers in the surrounding area. The expected EMF at various distances from a mobile telephone RBS was calculated using an ad hoc numerical forecast model. Subsequently, the electric fields around some RBS on agricultural lands were measured, in order to obtain a good approximation of the effective conditions at the investigated sites. The viability of this study was tested according to the Italian Regulations concerning general and occupational public exposure to time-varying EMFs. The calculated E-field values were obtained with the RBS working constantly at full power, but during the in situ measurements the actual power emitted by RBS antennas was lower than the maximum level, and the E-field values actually registered were much lower than the calculated values.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogen, K.T.; Conrado, C.L.; Robison, W.L.

    A detailed analysis of uncertainty and interindividual variability in estimated doses was conducted for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area, and the rest of the island is treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Predicted doses were considered for the following fallout-related exposure pathways: ingested Cesium-137 and Strontium-90, external gamma exposure, and inhalation and ingestion of Americium-241 + Plutonium-239+240. Two dietary scenarios were considered: (1) imported foods are available (IA), and (2) imported foods are unavailable (only local foods are consumed) (IUA). Corresponding calculations of uncertainty in estimated population-average dose showed that after ~5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to uncertainty in this dose are estimated to be approximately 2-fold higher and lower than its population-average value, respectively (under both IA and IUA assumptions). Corresponding calculations of interindividual variability in the expected value of dose with respect to uncertainty showed that after ~5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to interindividual variability in this dose are estimated to be approximately 2-fold higher and lower than its expected value, respectively (under both IA and IUA assumptions). For reference, the expected values of population-average dose at age 70 were estimated to be 1.6 and 5.2 cSv under the IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively.

  19. Quantum Theory of Jaynes' Principle, Bayes' Theorem, and Information

    NASA Astrophysics Data System (ADS)

    Haken, Hermann

    2014-12-01

    After a reminder of Jaynes' maximum entropy principle and of my quantum theoretical extension, I consider two coupled quantum systems A, B and formulate a quantum version of Bayes' theorem. The application of Feynman's disentangling theorem allows me to calculate the conditional density matrix ρ(A|B), if system A is an oscillator (or a set of them), linearly coupled to an arbitrary quantum system B. Expectation values can simply be calculated by means of the normalization factor of ρ(A|B) that is derived.

  20. Trial densities for the extended Thomas-Fermi model

    NASA Astrophysics Data System (ADS)

    Yu, An; Jimin, Hu

    1996-02-01

    A new and simplified form of nuclear densities is proposed for the extended Thomas-Fermi (ETF) method and applied to calculate the ground-state properties of several spherical nuclei, with results comparable to, or even better than, those of other conventional density profiles. Using the expectation value method (EVM) for microscopic corrections, we checked our new densities for spherical nuclei. The ground-state binding energies reproduce the Hartree-Fock (HF) calculations almost exactly. Further applications to nuclei far from the β-stability line are discussed.

  1. Higher Rank ABJM Wilson Loops from Matrix Models

    NASA Astrophysics Data System (ADS)

    Cookmeyer, Jonathan; Liu, James; Zayas, Leopoldo

    2017-01-01

    We compute the expectation values of 1/6-supersymmetric Wilson loops in ABJM theory in higher-rank representations. Using standard matrix model techniques, we calculate the expectation value in the rank-m fully symmetric and fully antisymmetric representations, where m is scaled with N. To leading order, we find agreement with the classical action of D6 and D2 branes in AdS4 × CP3, respectively. Further, we compute the first subleading-order term, which, on the AdS side, makes a prediction for the one-loop effective action of the corresponding D6 and D2 branes. Supported by the National Science Foundation under Grant No. PHY 1559988 and the US Department of Energy under Grant No. DE-SC0007859.

  2. Real time evolution at finite temperatures with operator space matrix product states

    NASA Astrophysics Data System (ADS)

    Pižorn, Iztok; Eisler, Viktor; Andergassen, Sabine; Troyer, Matthias

    2014-07-01

    We propose a method to simulate the real time evolution of one-dimensional quantum many-body systems at finite temperature by expressing both the density matrices and the observables as matrix product states. This allows the calculation of expectation values and correlation functions as scalar products in operator space. The simulations of density matrices in inverse temperature and the local operators in the Heisenberg picture are independent and result in a grid of expectation values for all intermediate temperatures and times. Simulations can be performed using real arithmetics with only polynomial growth of computational resources in inverse temperature and time for integrable systems. The method is illustrated for the XXZ model and the single impurity Anderson model.

  3. Real time detection of farm-level swine mycobacteriosis outbreak using time series modeling of the number of condemned intestines in abattoirs.

    PubMed

    Adachi, Yasumoto; Makita, Kohei

    2015-09-01

    Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer so that corrective actions can be taken when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold for detecting an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for the 2 years between 2011 and 2012. For the modeling, periodicities were first checked using the Fast Fourier Transform, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of wholly or partially condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from three producers with the highest rates of condemnation due to mycobacteriosis.
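    A compact sketch of the modeling pipeline (weekly ensemble average plus an ARIMA model on the residual, with the upper 95% forecast bound serving as the alarm threshold). The data here are synthetic and the ARIMA order is fixed ad hoc rather than selected by an AIC scan as in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily condemnation counts with a weekly cycle (illustrative only).
rng = np.random.default_rng(0)
n = 7 * 416                                   # roughly 8 years of daily records
dow = np.arange(n) % 7                        # day-of-week index
base = np.array([5, 9, 8, 8, 7, 2, 1])        # assumed weekly profile
counts = pd.Series(base[dow] + rng.poisson(2, n), dtype=float)

week_means = counts.groupby(dow).mean()       # weekly ensemble average profile
residual = counts - week_means.loc[dow].to_numpy()

fit = ARIMA(residual, order=(1, 0, 1)).fit()  # order chosen ad hoc here
fc = fit.get_forecast(steps=7)

future_dow = np.arange(n, n + 7) % 7
profile = week_means.loc[future_dow].to_numpy()
expected = fc.predicted_mean.to_numpy() + profile          # time-dependent expected value
threshold = fc.conf_int(alpha=0.05).iloc[:, 1].to_numpy() + profile

print(np.round(expected, 1))   # expected counts for the next 7 days
print(np.round(threshold, 1))  # alarm if the actual count exceeds this
```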

  4. Model for forecasting Olea europaea L. airborne pollen in South-West Andalusia, Spain

    NASA Astrophysics Data System (ADS)

    Galán, C.; Cariñanos, Paloma; García-Mozo, Herminia; Alcázar, Purificación; Domínguez-Vilches, Eugenio

    Data on predicted average and maximum airborne pollen concentrations and the dates on which these maximum values are expected are of undoubted value to allergists and allergy sufferers, as well as to agronomists. This paper reports on the development of predictive models for calculating total annual pollen output, on the basis of pollen and weather data compiled over the last 19 years (1982-2000) for Córdoba (Spain). Models were tested in order to predict the 2000 pollen season; in addition, and in view of the heavy rainfall recorded in spring 2000, the 1982-1998 data set was used to test the model for 1999. The results of the multiple regression analysis show that the variables exerting the greatest influence on the pollen index were rainfall in March and temperatures over the months prior to the flowering period. For prediction of maximum values and dates on which these values might be expected, the start of the pollen season was used as an additional independent variable. Temperature proved the best variable for this prediction. Results improved when the 5-day moving average was taken into account. Testing of the predictive model for 1999 and 2000 yielded fairly similar results. In both cases, the difference between expected and observed pollen data was no greater than 10%. However, significant differences were recorded between forecast and expected maximum and minimum values, owing to the influence of rainfall during the flowering period.

  5. Assessing the Value of Information of Geophysical Data For Groundwater Management

    NASA Astrophysics Data System (ADS)

    Trainor, W. J.; Caers, J. K.; Mukerji, T.; Auken, E.; Knight, R. J.

    2008-12-01

    Effective groundwater management requires hydrogeologic models informed by various data sources. The long-term goal of our research is to develop methodologies that quantify the value of information (VOI) of geophysical data for water managers. We present an initial sensitivity study on assessing the reliability of airborne electromagnetic (EM) data for detecting channel orientation. The reliability results are used to calculate VOI regarding decisions on artificial recharge to mitigate seawater intrusion. To demonstrate how a hydrogeologic problem can be framed in decision analysis terms, a hypothetical example is built in which water managers are considering artificial recharge to remediate seawater intrusion. Is the cost of recharge justified, given the large uncertainty of subsurface heterogeneity that may interfere with successful recharge? Thus, the decision is: should recharge be performed and, if so, where should recharge wells be located? This decision is difficult because of the large uncertainty in the aquifer heterogeneity that influences flow. The expected value of all possible outcomes of the decision without gathering additional EM information is the prior value, VPRIOR. The value of information (VOI) is calculated as the expected gain in value after including the relevant new information, or the difference between the value after a free experiment (VFE) and the prior value (VPRIOR): VOI = VFE - VPRIOR. Airborne EM has been used to detect confining clay layers and flow barriers. However, geophysical information rarely identifies the subsurface perfectly. Many challenges impact data quality and the resulting models (interpretation uncertainty). To evaluate how well airborne EM data detect the orientation of subsurface channel systems, 125 alternative binary, fluvial lithology models are generated, each categorized into one of three subsurface scenarios: northwest, southwest and mixed channel orientation. Using rock property relations, the lithology models are converted into electrical resistivity models for EM forward modeling, to generate time-domain EM data. Noise is added to the late times of the EM data to better represent typical airborne acquisition. Inversions are performed to obtain 125 inverted resistivity images. From the images, we calculate the angle of maximum spatial correlation at every cell and compare it with the truth, the original lithology model. These synthetic models serve as a proxy to estimate misclassification probabilities of channel orientation from actual EM data. The misclassification probabilities are then used in the VOI calculations. Results are presented demonstrating how the reliability measure and the pumping schedule can impact VOI. Lastly, reliability and VOI are calculated and compared for land-based EM data, which has different spatial sampling and resolution than airborne data.
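    The VOI = VFE - VPRIOR arithmetic can be sketched in a few lines. The scenario probabilities and payoffs below are invented for illustration; the "free experiment" here is perfect information about the channel orientation.

```python
import numpy as np

# Toy decision: recharge vs. do nothing, under 3 channel-orientation
# scenarios (NW, SW, mixed) with assumed prior probabilities and payoffs.
priors = np.array([0.4, 0.4, 0.2])
value = np.array([[8.0, -3.0, 2.0],    # payoff of recharging, per scenario
                  [0.0,  0.0, 0.0]])   # payoff of doing nothing

# VPRIOR: pick the single best action using only the priors.
v_prior = max(value @ priors)

# VFE: learn the true scenario first, then pick the best action in each.
v_free_experiment = priors @ value.max(axis=0)

print(f"VOI = VFE - VPRIOR = {v_free_experiment - v_prior:.2f}")
```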

  6. Dosimetric Consistency of Co-60 Teletherapy Unit- a ten years Study.

    PubMed

    Baba, Misba H; Mohib-Ul-Haq, M; Khan, Aijaz A

    2013-01-01

    The goal of radiation standards and dosimetry is to ensure that the output of the teletherapy unit is within ±2% of the stated value and that the outputs of the treatment dose calculation methods are within ±5%. In the present paper, we studied the dosimetry of the Cobalt-60 (Co-60) teletherapy unit at Sher-I-Kashmir Institute of Medical Sciences (SKIMS) over the last 10 years. Radioactivity is the phenomenon of disintegration of unstable nuclides called radionuclides. Among these radionuclides, Cobalt-60, incorporated in the telecobalt unit, is commonly used in the therapeutic treatment of cancer. Cobalt-60, being unstable, decays continuously into Ni-60 with a half-life of 5.27 years, resulting in a decrease in its activity and hence its dose rate (output). It is, therefore, mandatory to measure the dose rate of the Cobalt-60 source regularly so that the patient receives the same dose every time, as prescribed by the radiation oncologist. Underdosage may lead to unsatisfactory treatment of cancer, and overdosage may cause radiation hazards. Our study emphasizes the consistency between the actual output and the output obtained using the decay method. The methodology of the present study involves calculating the actual dose rate of the Co-60 teletherapy unit by two techniques, Source to Surface Distance (SSD) and Source to Axis Distance (SAD), used for external beam radiotherapy of various cancers, using the standard methods. A year-wise comparison was then made between the average actual dosimetric output (dose rate) and the average expected output values obtained by the decay method for Co-60. The present study shows that there is consistency between the average output (dose rate) obtained by actual dosimetry and the expected output values obtained using the decay method. The values obtained by actual dosimetry are within ±2% of the expected values. The deviation of the average output, measured regularly as part of the Quality Assurance of the telecobalt radiotherapy unit, from the expected output data is within the permissible limits. Thus our study shows a trend towards uniformity and better dose delivery.
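    The decay-method expected output follows directly from the 5.27-year half-life. A minimal sketch, with a hypothetical calibration value:

```python
HALF_LIFE_Y = 5.27  # Co-60 half-life in years

def expected_output(initial_output, elapsed_years):
    """Expected dose rate from D(t) = D0 * 2**(-t / T_half)."""
    return initial_output * 0.5 ** (elapsed_years / HALF_LIFE_Y)

# A source calibrated at 150 cGy/min (hypothetical) after 1 year:
print(expected_output(150.0, 1.0))  # ~131.5 cGy/min, a ~12%/yr decline
```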

  7. Load controller and method to enhance effective capacity of a photovoltaic power supply using a dynamically determined expected peak loading

    DOEpatents

    Perez, Richard

    2003-04-01

    A load controller and method are provided for maximizing effective capacity of a non-controllable, renewable power supply coupled to a variable electrical load that is also coupled to a conventional power grid. Effective capacity is enhanced by monitoring the power output of the renewable supply and the loading, and comparing the loading against the power output and a load adjustment threshold determined from an expected peak loading. A value for a load adjustment parameter is calculated by subtracting the renewable supply output and the load adjustment threshold from the current load. This value is then employed to control the variable load in an amount proportional to the value of the load control parameter when the parameter is within a predefined range. By so controlling the load, the effective capacity of the non-controllable, renewable power supply is increased without any attempt at operational feedback control of the renewable supply. The expected peak loading of the variable load can be dynamically determined within a defined time interval with reference to variations in the variable load.

  8. Analytical probabilistic proton dose calculation and range uncertainties

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Hennig, P.; Oelfke, U.

    2014-03-01

    We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫dz p(z)d(z) and ∫dz p(z)d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
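    Because the depth dose is a weighted sum of Gaussians and p(z) is Normal, the expectation ∫dz p(z)d(z) has a closed form: each term is a Gaussian evaluated with the combined widths. A sketch with toy parameters, checked against Monte Carlo:

```python
import numpy as np
from scipy.stats import norm

# Depth dose as a weighted sum of Gaussians (toy parameters, not fitted data).
w  = np.array([0.5, 0.3, 0.2])     # omega_k
mu = np.array([9.0, 9.6, 10.0])    # mu_k (cm)
d  = np.array([0.8, 0.4, 0.2])     # delta_k (cm)

mu_z, sigma_z = 9.5, 0.3           # Normal range uncertainty p(z)

# Closed form: integral of N(z; mu_z, sigma_z) * N(z; mu_k, delta_k) dz
#            = N(mu_z; mu_k, sqrt(sigma_z^2 + delta_k^2))
expected_dose = np.sum(w * norm.pdf(mu_z, mu, np.sqrt(sigma_z**2 + d**2)))

# Monte Carlo check of the same expectation.
z = np.random.default_rng(1).normal(mu_z, sigma_z, 200_000)
mc = np.mean(np.sum(w[:, None] * norm.pdf(z, mu[:, None], d[:, None]), axis=0))
print(expected_dose, mc)           # the two estimates should agree closely
```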

  9. [Isonymy analysis in a sample of parents of cystic fibrosis patients from Antioquia, Colombia].

    PubMed

    Rodríguez-Acevedo, Astrid; Morales, Olga; Durango, Harold; Pineda-Trujillo, Nicolás

    2012-01-01

    Cystic fibrosis (CF) is one of the most common autosomal recessive disorders in European descendants. The geographic distribution of CFTR gene mutations varies worldwide. The degree of isonymy was evaluated in a sample of parents with children affected by cystic fibrosis. Observed and expected isonymy, as well as endogamy components (Fr, Fn, Ft, and the values α and B), were calculated for 35 parents of children diagnosed with cystic fibrosis. These parameters were calculated both for the total population of Antioquia Province and for an eastern subpopulation of Antioquia. The values obtained for Fr, Fn, Ft, α and B were 0.01, 0.007, 0.019, 268 and 0.44, respectively, for the total population of Antioquia. For the eastern subpopulation, the values were 0.026, 0.0017, 0.027, 135 and 0.62. The most frequent surnames in the total sample (n=70) were Gómez (6%), Alzate (4%), and González (3.7%), whilst for the eastern subpopulation (n=32) they were Gómez (8%) and Marín (6%). A high percentage of surnames was shared, as reflected in the isonymy values. Similarly, the presence of a reduced number of surnames in an important percentage of the population is reflected in the Fr values obtained in both analyses, which suggests homogeneity. Thus, a low number of CFTR mutations is expected in the children from Antioquia with cystic fibrosis.

  10. When do we need more data? A primer on calculating the value of information for applied ecologists

    USGS Publications Warehouse

    Canessa, Stefano; Guillera-Arroita, Gurutzeta; Lahoz-Monfort, José J.; Southwell, Darren M; Armstrong, Doug P.; Chadès, Iadine; Lacy, Robert C; Converse, Sarah J.

    2015-01-01

    The VoI depends on our current knowledge, the quality of the information collected and the expected outcomes of the available management actions. Collecting information can require significant investments of resources; VoI analysis assists managers in deciding whether these investments are justified.

  11. The Return on the Investment in Library Education.

    ERIC Educational Resources Information Center

    Van House, Nancy A.

    1985-01-01

    Measures change in social and private net present value of expected lifetime earnings attributable to M.L.S. degree under current market conditions and calculates effect of changes in placement rates and of two-year MLS degrees. Implications for profession's ability to attract capable individuals and for its sex composition are discussed. (33…

  12. Economics of site preparation and release treatments using herbicides in Central Georgia

    Treesearch

    Rodney L. Busby; James H. Miller; M. Boyd Edwards

    1998-01-01

    Abstract. Land expectation values (LEV) of site preparation and release treatments using herbicides in central Georgia are calculated and compared. Loblolly pine growth and hardwood competition levels were measured at age 6 for the site preparation treatments and age 8 for the release treatments. These measurements were projected to final harvest...
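    Land expectation value is conventionally computed with the Faustmann formula (the value of an infinite series of identical rotations). The abstract above does not give its numbers, so the sketch below uses invented ones.

```python
def lev(net_future_value, rate, rotation_years):
    """Faustmann LEV: all cash flows of one rotation compounded to the end
    of that rotation, then capitalized over an infinite rotation series."""
    return net_future_value / ((1.0 + rate) ** rotation_years - 1.0)

i, t = 0.05, 25                    # assumed discount rate and rotation length
harvest_revenue = 2500.0           # $/acre received at year t (assumed)
site_prep_cost = 180.0             # $/acre herbicide treatment at year 0 (assumed)

nfv = harvest_revenue - site_prep_cost * (1.0 + i) ** t
print(f"LEV: ${lev(nfv, i, t):,.0f}/acre")
```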

  13. Load controller and method to enhance effective capacity of a photovoltaic power supply using a dynamically determined expected peak loading

    DOEpatents

    Perez, Richard

    2005-05-03

    A load controller and method are provided for maximizing effective capacity of a non-controllable, renewable power supply coupled to a variable electrical load that is also coupled to a conventional power grid. Effective capacity is enhanced by monitoring the power output of the renewable supply and the loading, and comparing the loading against the power output and a load adjustment threshold determined from an expected peak loading. A value for a load adjustment parameter is calculated by subtracting the renewable supply output and the load adjustment threshold from the current load. This value is then employed to control the variable load in an amount proportional to the value of the load control parameter when the parameter is within a predefined range. By so controlling the load, the effective capacity of the non-controllable, renewable power supply is increased without any attempt at operational feedback control of the renewable supply.

  14. Cost-effectiveness of breech version by acupuncture-type interventions on BL 67, including moxibustion, for women with a breech foetus at 33 weeks gestation: a modelling approach.

    PubMed

    van den Berg, Ineke; Kaandorp, Guido C; Bosch, Johanna L; Duvekot, Johannes J; Arends, Lidia R; Hunink, M G Myriam

    2010-04-01

    To assess, using a modelling approach, the effectiveness and costs of breech version with acupuncture-type interventions on BL67 (BVA-T), including moxibustion, compared to expectant management for women with a foetal breech presentation at 33 weeks gestation. A decision tree was developed to predict the number of caesarean sections prevented by BVA-T compared to expectant management to rectify breech presentation. The model accounted for external cephalic versions (ECV), treatment compliance, and costs for 10,000 simulated breech presentations at 33 weeks gestational age. Event rates were taken from Dutch population data and the international literature, and the relative effectiveness of BVA-T was based on a specific meta-analysis. Sensitivity analyses were conducted to evaluate the robustness of the results. We calculated percentages of breech presentations at term, caesarean sections, and costs from the third-party payer perspective. Odds ratios (OR) and cost differences of BVA-T versus expectant management were calculated. (Probabilistic) sensitivity analysis and expected value of perfect information analysis were performed. The simulated outcomes demonstrated 32% breech presentations after BVA-T versus 53% with expectant management (OR 0.61, 95% CI 0.43, 0.83). The percentage caesarean section was 37% after BVA-T versus 50% with expectant management (OR 0.73, 95% CI 0.59, 0.88). The mean cost-savings per woman was euro 451 (95% CI euro 109, euro 775; p=0.005) using moxibustion. Sensitivity analysis showed that if 16% or more of women offered moxibustion complied, it was more effective and less costly than expectant management. To prevent one caesarean section, 7 women had to use BVA-T. The expected value of perfect information from further research was euro 0.32 per woman. The results suggest that offering BVA-T to women with a breech foetus at 33 weeks gestation reduces the number of breech presentations at term, thus reducing the number of caesarean sections, and is cost-effective compared to expectant management, including external cephalic version. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  15. The economic value of transportation energy contingency planning: An objective model for analyzing the economics of domestic renewable energy for supply augmentation

    NASA Astrophysics Data System (ADS)

    Shaten, Richard Jay

    1998-12-01

    Petroleum provides 90% of transportation energy needs. Domestic production is decreasing and global demand is increasing. The risks of escalating prices and supply interruptions are compounded by environmental and military externalities and by lost opportunities from the failure to develop alternative domestic resources. Within the context of "energy contingency planning," municipalities should evaluate crisis mitigation strategies. Supply augmentation using domestic renewable fuels is proposed to avert future financial liabilities. A method for calculating the economic value of this strategy is demonstrated. An objective function and associated constraints represent the cost of preparing for each of three possible scenarios: status quo, inflationary and crisis. Constraints ensure that municipal fuel needs are met. Environmental costs may be included. Optimal solutions determine the fuel supply mix for each scenario. A 3 x 3 matrix presents the range of actual costs resulting from preparing for each scenario under each of the three possible outcomes. The distribution of probabilities of the outcomes is applied to the cost matrix, and an "expected value" of preparing for each scenario is calculated. An unanticipated crisis outcome results in the greatest actual cost. The expected value of the cost of preparing for a crisis is cast as an insurance premium against potential economic liability. Policy makers accept the crisis-preparation fuel mix if: (a) they agree with the calculated penalty cost, or (b) they accept the burden of the insurance premium. Green Bay, Wisconsin, was chosen as a sample municipality. Results show that a perceived 10% chance of crisis requires an annual tax of $4.00 per household to avert economic impacts of $50 million.

  16. Helical magnetic structure and the anomalous and topological Hall effects in epitaxial B20 Fe1 -yCoyGe films

    NASA Astrophysics Data System (ADS)

    Spencer, Charles S.; Gayles, Jacob; Porter, Nicholas A.; Sugimoto, Satoshi; Aslam, Zabeada; Kinane, Christian J.; Charlton, Timothy R.; Freimuth, Frank; Chadov, Stanislav; Langridge, Sean; Sinova, Jairo; Felser, Claudia; Blügel, Stefan; Mokrousov, Yuriy; Marrows, Christopher H.

    2018-06-01

    Epitaxial films of the B20-structure compound Fe1-yCoyGe were grown by molecular beam epitaxy on Si (111) substrates. The magnetization varied smoothly from the bulklike value of one Bohr magneton per Fe atom for FeGe to zero for nonmagnetic CoGe. The chiral lattice structure leads to a Dzyaloshinskii-Moriya interaction (DMI), and the films' helical magnetic ground state was confirmed using polarized neutron reflectometry measurements. The pitch of the spin helix, measured by this method, varies with Co content y and diverges at y ≈ 0.45. This indicates a zero crossing of the DMI, which we reproduced in calculations using first-principles methods. We also measured the longitudinal and Hall resistivity of our films as a function of magnetic field, temperature, and Co content y. The Hall resistivity is expected to contain contributions from the ordinary, anomalous, and topological Hall effects. Both the anomalous and topological Hall resistivities show peaks around y ≈ 0.5. Our first-principles calculations show a peak in the topological Hall constant at this value of y, related to the strong spin polarization predicted for intermediate values of y. Our calculations predict half-metallicity for y = 0.6, consistent with the experimentally observed linear magnetoresistance at this composition, and potentially related to the other unusual transport properties for intermediate values of y. While it is possible to reconcile theory with experiment for the various Hall effects in FeGe, the topological Hall resistivities for y ≈ 0.5 are much larger than expected when the very small emergent fields associated with the divergence of the DMI are taken into account.

  17. Bulk hydrodynamic stability and turbulent saturation in compressing hot spots

    NASA Astrophysics Data System (ADS)

    Davidovits, Seth; Fisch, Nathaniel J.

    2018-04-01

    For hot spots compressed at constant velocity, we give a hydrodynamic stability criterion that describes the expected energy behavior of non-radial hydrodynamic motion for different classes of trajectories (in ρR-T space). For a given compression velocity, this criterion depends on ρR, T, and dT/d(ρR) (the trajectory slope) and applies point-wise, so that the expected behavior can be determined instantaneously along the trajectory. Among the classes of trajectories are those where the hydromotion is guaranteed to decrease and those where the hydromotion is bounded by a saturated value. We calculate this saturated value and find the compression velocities for which hydromotion may be a substantial fraction of hot-spot energy at burn time. The Lindl [Phys. Plasmas 2, 3933 (1995)] "attractor" trajectory is shown to experience non-radial hydrodynamic energy that grows towards this saturated state. Comparing the saturation value with the available detailed 3D simulation results, we find that the fluctuating velocities in these simulations reach substantial fractions of the saturated value.

  18. Turbo FRMAC 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, John; Gallagher, Linda K.; Whitener, Dustin

    The Turbo FRMAC (TF) software automates the calculations described in volumes 1-3 of "The Federal Manual for Assessing Environmental Data During a Radiological Emergency" (2010 version). This software automates the process of assessing radiological data during a Federal radiological emergency. The manual upon which the software is based is unclassified and freely available on the Internet. TF takes values generated by field samples or computer dispersion models and assesses the data in a way that is meaningful to a decision maker at a radiological emergency: do radiation values exceed city, state, or federal limits; should the crops be destroyed or can they be utilized; do residents need to be evacuated or sheltered in place, or should another action be taken? The software also uses formulas generated by the EPA, FDA, and other federal agencies to generate field-observable values specific to the radiological event that can be used to determine where regulatory limit values are exceeded. In addition to these calculations, TF calculates values that indicate how long an emergency worker can work in the contaminated area during a radiological emergency, the dose received from drinking contaminated water or milk, the dose from eating contaminated food, and the dose expected down- or upwind of a given field sample, along with a significant number of other similar radiological health values.

  19. Illustration of sampling-based approaches to the calculation of expected dose in performance assessments for the proposed high level radioactive waste repository at Yucca Mountain, Nevada.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon Craig; Sallaberry, Cedric J.

    2007-04-01

    A deep geologic repository for high level radioactive waste is under development by the U.S. Department of Energy at Yucca Mountain (YM), Nevada. As mandated in the Energy Policy Act of 1992, the U.S. Environmental Protection Agency (EPA) has promulgated public health and safety standards (i.e., 40 CFR Part 197) for the YM repository, and the U.S. Nuclear Regulatory Commission has promulgated licensing standards (i.e., 10 CFR Parts 2, 19, 20, etc.) consistent with 40 CFR Part 197 that the DOE must establish are met in order for the YM repository to be licensed for operation. Important requirements in 40 CFR Part 197 and 10 CFR Parts 2, 19, 20, etc. relate to the determination of expected (i.e., mean) dose to a reasonably maximally exposed individual (RMEI) and the incorporation of uncertainty into this determination. This presentation describes and illustrates how general and typically nonquantitative statements in 40 CFR Part 197 and 10 CFR Parts 2, 19, 20, etc. can be given a formal mathematical structure that facilitates both the calculation of expected dose to the RMEI and the appropriate separation in this calculation of aleatory uncertainty (i.e., randomness in the properties of future occurrences such as igneous and seismic events) and epistemic uncertainty (i.e., lack of knowledge about quantities that are poorly known but assumed to have constant values in the calculation of expected dose to the RMEI).

  20. Ab initio calculations of torsionally mediated hyperfine splittings in E states of acetaldehyde

    NASA Astrophysics Data System (ADS)

    Xu, Li-Hong; Reid, E. M.; Guislain, B.; Hougen, J. T.; Alekseev, E. A.; Krapivin, I.

    2017-12-01

    Quantum chemistry packages can be used to predict with reasonable accuracy spin-rotation hyperfine interaction constants for methanol, which contains one methyl-top internal rotor. In this work we use one of these packages to calculate components of the spin-rotation interaction tensor for acetaldehyde. We then use torsion-rotation wavefunctions obtained from a fit to the acetaldehyde torsion-rotation spectrum to calculate the expected magnitude of hyperfine splittings analogous to those observed at relatively high J values in the E symmetry states of methanol. We find that theory does indeed predict doublet splittings at moderate J values in the acetaldehyde torsion-rotation spectrum, which closely resemble those seen in methanol, but that the factor of three decrease in hyperfine spin-rotation constants compared to methanol puts the largest of the acetaldehyde splittings a factor of two below presently available Lamb-dip resolution.

  1. First principles investigation of structural, vibrational and thermal properties of black and blue phosphorene

    NASA Astrophysics Data System (ADS)

    Arif Khalil, R. M.; Ahmad, Javed; Rana, Anwar Manzoor; Bukhari, Syed Hamad; Tufiq Jamil, M.; Tehreem, Tuba; Nissar, Umair

    2018-05-01

    In this investigation, the structural, dynamical and thermal properties of black and blue phosphorene (P) are presented through first-principles calculations based on density functional theory (DFT). These DFT calculations show that, because the ground-state energy at zero kelvin and the Helmholtz free energy at room temperature are approximately the same for both structures, the two phases can be expected to coexist at the transition temperature. The lattice dynamics of both phases were investigated using the finite-displacement supercell approach. Thermodynamic calculations within the harmonic approximation indicate that the blue phase is thermodynamically more stable than the black phase above 155 K.

  2. Intramolecular BSSE and dispersion affect the structure of a dipeptide conformer

    NASA Astrophysics Data System (ADS)

    Hameed, Rabia; Khan, Afsar; van Mourik, Tanja

    2018-05-01

    B3LYP and MP2 calculations with the commonly-used 6-31+G(d) basis set predict qualitatively different structures for the Tyr-Gly conformer book1, which is the most stable conformer identified in a previous study. The structures differ mainly in the ψtyr Ramachandran angle (138° in the B3LYP structure and 120° in the MP2 structure). The causes of the discrepant structures are attributed to missing dispersion in the B3LYP calculations and large intramolecular BSSE in the MP2 calculations. The correct ψtyr value is estimated to be 130°. The MP2/6-31+G(d) profile identified an additional conformer, not present on the B3LYP surface, with a ψtyr value of 96° and a more folded structure. This minimum is, however, likely an artefact of large intramolecular BSSE values. We recommend the use of basis sets of at least quadruple-zeta quality in density functional theory (DFT), DFT augmented with an empirical dispersion term (DFT-D), and second-order Møller-Plesset perturbation theory (MP2) calculations in cases where intramolecular BSSE is expected to be large.

  3. Flexibility and Project Value: Interactions and Multiple Real Options

    NASA Astrophysics Data System (ADS)

    Čulík, Miroslav

    2010-06-01

    This paper is focused on project valuation with an embedded portfolio of real options, including their interactions. Valuation is based on the Net Present Value criterion on a simulation basis. The portfolio includes selected types of European-type real options: the option to expand, contract, abandon, and temporarily shut down and restart a project. Because in reality most managerial flexibility takes the form of a portfolio of real options, selected types of options are valued not only individually but also in combination. The paper is structured as follows: first, diffusion models for forecasting output prices and variable costs are derived. Second, project value is estimated on the assumption that no real options are present. Next, project value is calculated in the presence of selected European-type options; these options and their impact on project value are valued first in isolation and subsequently in different combinations. Moreover, the evolution of the intrinsic value of the given real options with respect to the time of exercising is analysed. In the end, results are presented graphically; selected statistics and risk measures (Value at Risk, Expected Shortfall) of the NPV distributions are calculated and commented upon.

  4. Ground-state energy of HeH+

    NASA Astrophysics Data System (ADS)

    Zhou, Bing-Lu; Zhu, Jiong-Ming; Yan, Zong-Chao

    2006-06-01

    The nonrelativistic ground-state energy of 4HeH+ is calculated using a variational method in Hylleraas coordinates. Convergence to a few parts in 10^10 is achieved, which improves the best previous result of Pavanello et al. [J. Chem. Phys. 123, 104306 (2005)]. Expectation values of the interparticle distances are evaluated. Similar results for 3HeH+ are also presented.

  5. Keene v. Brigham and Women's Hospital, Inc.: On the Value of a Life with Mental Retardation.

    ERIC Educational Resources Information Center

    Vitello, Stanley J.

    2003-01-01

    Analysis of the Keene malpractice court case, which awarded compensatory damages to a child with severe disabilities probably contracted shortly after birth, focuses on how the court calculated life expectancy and the loss of life's enjoyment, concluding that the decision discriminates against people with mental retardation in that it assumes these…

  6. A practical guide to value of information analysis.

    PubMed

    Wilson, Edward C F

    2015-02-01

    Value of information analysis is a quantitative method to estimate the return on investment in proposed research projects. It can be used in a number of ways. Funders of research may find it useful for ranking competing projects in terms of their expected return on investment. Trialists can use the principles to identify the efficient sample size of a proposed study as an alternative to traditional power calculations. Finally, a value of information analysis can be conducted alongside an economic evaluation as a quantitative adjunct to the 'future research' or 'next steps' section of a study write-up. The purpose of this paper is to present a brief introduction to the methods, a step-by-step guide to calculation and a discussion of issues that arise in their application to healthcare decision making. Worked examples are provided in the accompanying online appendices as Microsoft Excel spreadsheets.
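
    The core expected value of perfect information (EVPI) calculation described in such guides reduces to a simple Monte Carlo recipe: average the best achievable net benefit across simulations, then subtract the net benefit of the strategy that is best on average. A minimal Python sketch, with entirely hypothetical net-benefit distributions and population figures:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical posterior samples of net benefit for two strategies
    n = 100_000
    nb = np.column_stack([
        rng.normal(10_000, 4_000, n),   # strategy A
        rng.normal(11_000, 6_000, n),   # strategy B
    ])

    # EVPI per decision:
    # E[max over strategies] - max over strategies of E[net benefit]
    evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
    print(f"EVPI per patient: {evpi:.0f}")

    # Population EVPI: scale by the discounted number of future patients affected
    patients_per_year, horizon, rate = 5_000, 10, 0.035
    eff_population = patients_per_year * sum((1 + rate) ** -t for t in range(1, horizon + 1))
    print(f"Population EVPI: {evpi * eff_population:,.0f}")
    ```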

  7. Molecular Dynamics Simulations of Adhesion at Epoxy Interfaces

    NASA Technical Reports Server (NTRS)

    Frankland, Sarah-Jane V.; Clancy, Thomas C.; Hinkley, J. A.; Gates, T. S.

    2008-01-01

    The effect of moisture on adhesives used in aerospace applications can be modeled with chemically specific techniques such as molecular dynamics simulation. In the present study, the surface energy and work of adhesion are calculated for epoxy surfaces and interfaces, respectively, by using molecular dynamics simulation. Modifications are made to current theory to calculate the work of adhesion at the epoxy-epoxy interface with and without water. Quantitative agreement with experimental values is obtained for the surface energy and work of adhesion at the interface without water. The work of adhesion agrees qualitatively with the experimental values for the interface with water: the magnitude is reduced 15% with respect to the value for the interface without water. A variation of 26% in the magnitude is observed depending on the water configuration at a concentration of 1.6 wt%. The methods, and the modifications made to them, are expected to be applicable to other epoxy adhesives for determining the effects of moisture uptake on the work of adhesion.

  8. Determination of dissociation constants of compounds with potential cognition enhancing activity by capillary zone electrophoresis.

    PubMed

    Lisková, Anna; Krivánková, Ludmila

    2005-12-01

    Accurate determination of pK(a) values is important for proper characterization of newly synthesized molecules. In this work we have used CZE for the determination of pK(a) values of new compounds prepared from intermediates, 2-, 3- and 4-(2-chloro-acetylamino)-phenoxyacetic acids, by substituting chloride for 2-oxo-pyrrolidine, 2-oxo-piperidine or 2-oxo-azepane. These substances are expected to have cognition-enhancing activity and a free-radical scavenging effect. Measurements were performed in a polyacrylamide-coated fused-silica capillary of 0.075 mm ID using direct UV detection at 254 nm. Three electrolyte systems were used to eliminate effects of potential interactions between the tested compounds and components of the BGE. In the pH range 2.7-5.4, chloride, formate, acetate and phosphate were used as BGE co-ions, and sodium, beta-alanine and epsilon-aminocaproate as counterions. Mobility standards were measured simultaneously with the tested compounds to calculate corrected electrophoretic mobilities. Several approaches to the calculation of the pK(a) values were used. The pK(a) values were determined by standard point-to-point calculation using the Henderson-Hasselbalch equation. Mobility and pH data were also evaluated using nonlinear regression. A three-parameter sigmoidal function fitted the experimental data with correlation coefficients higher than 0.99. Results from the CZE measurements were compared with spectrophotometric measurements performed in sodium formate buffer solutions and evaluated at the wavelength where the highest absorbance difference for varying pH was recorded. The experimental pK(a) values were compared with corresponding values calculated by the SPARC online calculator. The results of all three methods were in good agreement.
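
    The nonlinear-regression step can be illustrated with a short Python sketch that fits effective mobility versus pH for a monoprotic acid. The mobility data and starting values below are hypothetical, and a true three-parameter fit would add a baseline offset term to the two parameters shown.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mobility_model(ph, mu_anion, pka):
        """Effective mobility of a monoprotic acid versus pH:
        mu_eff = mu_anion / (1 + 10**(pKa - pH))
        """
        return mu_anion / (1.0 + 10.0 ** (pka - ph))

    # Hypothetical (pH, mobility) data for illustration only
    ph = np.array([2.7, 3.1, 3.5, 3.9, 4.3, 4.7, 5.1, 5.4])
    mu = np.array([2.1, 4.0, 7.2, 11.0, 14.3, 16.4, 17.5, 18.0])  # 1e-9 m^2/Vs

    popt, pcov = curve_fit(mobility_model, ph, mu, p0=[18.0, 3.8])
    mu_anion, pka = popt
    print(f"limiting mobility = {mu_anion:.1f}, pKa = {pka:.2f}")
    ```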

  9. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    PubMed

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or they assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared with the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
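
    A minimal sketch of the model-averaging idea: sample the structural model with its posterior weight, then sample net benefit conditional on that model, and compute the EVPI over the resulting mixture. The weights and per-model distributions below are hypothetical placeholders, not the asthma-network results.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical posterior model weights and per-model effect estimates
    weights = [0.6, 0.3, 0.1]
    n_total = 60_000
    model_params = [      # (mean incremental net benefit of B vs A, sd)
        (500.0, 800.0),
        (-200.0, 1200.0),
        (1500.0, 900.0),
    ]

    # Propagate structural uncertainty: allocate simulations to each model
    # in proportion to its posterior weight, then sample within the model.
    nb_parts = []
    for w, (mean_b, sd) in zip(weights, model_params):
        k = int(round(w * n_total))
        nb_a = np.zeros(k)                    # reference treatment
        nb_b = rng.normal(mean_b, sd, k)      # comparator under this model
        nb_parts.append(np.column_stack([nb_a, nb_b]))
    nb = np.vstack(nb_parts)

    # EVPI over the model-averaged mixture
    evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
    print(f"model-averaged EVPI per patient: {evpi:.0f}")
    ```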

  10. New Criterion and Tool for Caltrans Seismic Hazard Characterization

    NASA Astrophysics Data System (ADS)

    Shantz, T.; Merriam, M.; Turner, L.; Chiou, B.; Liu, X.

    2008-12-01

    Caltrans recently adopted new procedures for the development of response spectra for structure design. These procedures incorporate both deterministic and probabilistic criteria. The Next Generation Attenuation (NGA) models (2008) are used for deterministic assessment (using a revised late-Quaternary age fault database), and the USGS 2008 5% in 50-year hazard maps are used for probabilistic assessment. A minimum deterministic spectrum based on a M6.5 earthquake at 12 km is also included. These spectra are enveloped and the largest values used. A new, publicly available web-based design tool will be used to calculate the design spectrum. The tool is built on a Windows-Apache-MySQL-PHP (WAMP) platform and integrates GoogleMaps for increased flexibility in the tool's use. Links to Caltrans data such as pre-construction logs of test borings assist in the estimation of Vs30 values used in the new procedures. Basin effects based on new models developed for the CFM, for the San Francisco Bay area by the USGS, and by Thurber (2008) are also incorporated. It is anticipated that additional layers such as CGS Seismic Hazard Zone maps will be added in the future. Application of the new criterion is expected to result in higher levels of ground motion at many bridges west of the Coast Ranges. In eastern California, use of the NGA relationships for strike-slip faulting (the dominant sense of motion in California) will often result in slightly lower expected values for bridges. The expected result is a more realistic prediction of ground motions at bridges, in keeping with the motions developed for other large-scale and important structures. The tool is based on a simplified fault map of California, so it will not be used for more detailed evaluations such as surface rupture determination. Announcements regarding tool availability (expected to be in early 2009) are at http://www.dot.ca.gov/research/index.htm

  11. Aqueous phase hydration and hydrate acidity of perfluoroalkyl and n:2 fluorotelomer aldehydes.

    PubMed

    Rayne, Sierra; Forest, Kaya

    2016-01-01

    The SPARC software program and comparative density functional theory (DFT) calculations were used to investigate the aqueous phase hydration equilibrium constants (Khyd) of perfluoroalkyl aldehydes (PFAlds) and n:2 fluorotelomer aldehydes (FTAlds). Both classes are degradation products of known industrial compounds and environmental contaminants such as fluorotelomer alcohols, iodides, acrylates, phosphate esters, and other derivatives, as well as hydrofluorocarbons and hydrochlorofluorocarbons. Prior studies have generally failed to consider the hydration, and subsequent potential hydrate acidity, of these compounds, resulting in incomplete and erroneous predictions as to their environmental behavior. In the current work, DFT calculations suggest that all PFAlds will be dominantly present as the hydrated form in aqueous solution. Both SPARC and DFT calculations suggest that FTAlds will not likely be substantially hydrated in aquatic systems or in vivo. PFAld hydrates are expected to have pKa values in the range of phenols (ca. 9 to 10), whereas n:2 FTAld hydrates are expected to have pKa values ca. 2 to 3 units higher (ca. 12 to 13). In order to avoid spurious modeling predictions and a fundamental misunderstanding of their fate, the molecular and/or dissociated hydrate forms of PFAlds and FTAlds need to be explicitly considered in environmental, toxicological, and waste treatment investigations. The results of the current study will facilitate a more complete examination of the environmental fate of PFAlds and FTAlds.

  12. Application of Artificial Neural Network to Optical Fluid Analyzer

    NASA Astrophysics Data System (ADS)

    Kimura, Makoto; Nishida, Katsuhiko

    1994-04-01

    A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to a problem, the first step is training to determine the appropriate weight set for calculating the target values. This involves using a series of data sets (each comprising a set of input values and an associated set of output values that the artificial neural network is required to reproduce) to tune the network's weighting parameters so that its output for a given set of input values is as close as possible to the required output. The physical model used to generate the series of learning data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and comparing the results of the artificial neural network with the expected output values. The standard deviation of the differences between expected and obtained values was approximately 10% (two sigma).
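
    For readers unfamiliar with the training loop sketched above, here is a self-contained three-layer network fitted by plain gradient descent in Python; the input dimensionality, the synthetic "oil fraction" target, and all hyperparameters are hypothetical stand-ins for the OFA data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical stand-in for OFA training data: 10 optical channels -> fraction
    X = rng.uniform(0.0, 1.0, (500, 10))
    true_w = rng.uniform(-1.0, 1.0, 10)
    y = (1.0 / (1.0 + np.exp(-X @ true_w)))[:, None]   # synthetic target in [0, 1]

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Three-layer network: input layer -> hidden layer -> output layer
    n_in, n_hidden, n_out = 10, 8, 1
    w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
    w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
    lr = 0.5

    for epoch in range(2000):
        # forward pass
        h = sigmoid(X @ w1)
        out = sigmoid(h @ w2)
        # backward pass (squared-error loss), plain gradient descent
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ w2.T) * h * (1.0 - h)
        w2 -= lr * h.T @ d_out / len(X)
        w1 -= lr * X.T @ d_h / len(X)

    print("RMS error:", np.sqrt(np.mean((out - y) ** 2)))
    ```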

  13. Evaluation of the laboratory mouse model for screening topical mosquito repellents.

    PubMed

    Rutledge, L C; Gupta, R K; Wirtz, R A; Buescher, M D

    1994-12-01

    Eight commercial repellents were tested against Aedes aegypti 0 and 4 h after application in serial dilution to volunteers and laboratory mice. Results were analyzed by multiple regression of percentage of biting (probit scale) on dose (logarithmic scale) and time. Empirical correction terms for conversion of values obtained in tests on mice to values expected in tests on human volunteers were calculated from data obtained on 4 repellents and evaluated with data obtained on 4 others. Corrected values from tests on mice did not differ significantly from values obtained in tests on volunteers. Test materials used in the study were dimethyl phthalate, butopyronoxyl, butoxy polypropylene glycol, MGK Repellent 11, deet, ethyl hexanediol, Citronyl, and dibutyl phthalate.

  14. Pandemic risk: how large are the expected losses?

    PubMed

    Fan, Victoria Y; Jamison, Dean T; Summers, Lawrence H

    2018-02-01

    There is an unmet need for greater investment in preparedness against major epidemics and pandemics. The arguments in favour of such investment have been largely based on estimates of the losses in national incomes that might occur as the result of a major epidemic or pandemic. Recently, we extended the estimate to include the valuation of the lives lost as a result of pandemic-related increases in mortality. This produced markedly higher estimates of the full value of loss that might occur as the result of a future pandemic. We parametrized an exceedance probability function for a global influenza pandemic and estimated that the expected number of influenza-pandemic-related deaths is about 720 000 per year. We calculated the expected annual losses from pandemic risk to be about 500 billion United States dollars - or 0.6% of global income - per year. This estimate falls within - but towards the lower end of - the Intergovernmental Panel on Climate Change's estimates of the value of the losses from global warming, which range from 0.2% to 2% of global income. The estimated percentage of annual national income represented by the expected value of losses varied by country income grouping: from a little over 0.3% in high-income countries to 1.6% in lower-middle-income countries. Most of the losses from influenza pandemics come from rare, severe events.

  15. Heliospheric Modulation Strength During The Neutron Monitor Era

    NASA Astrophysics Data System (ADS)

    Usoskin, I. G.; Alanko, K.; Mursula, K.; Kovaltsov, G. A.

    Using a stochastic simulation of a one-dimensional heliosphere we calculate galactic cosmic ray spectra at the Earth's orbit for different values of the heliospheric modulation strength. Convoluting these spectra with the specific yield function of a neutron monitor, we obtain the expected neutron monitor count rates for different values of the modulation strength. Finally, inverting this relation, we calculate the modulation strength using the actually recorded neutron monitor count rates. We present the reconstructed annual heliospheric modulation strengths for the neutron monitor era (1953-2000) using several neutron monitors from different latitudes, covering a large range of geomagnetic rigidity cutoffs from polar to equatorial regions. The estimated modulation strengths are shown to be in good agreement with the corresponding estimates reported earlier for some years.
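
    The convolution-and-inversion procedure can be illustrated with the force-field approximation, a standard one-parameter description of heliospheric modulation. In the Python sketch below, the local interstellar spectrum and the neutron-monitor yield function are toy shapes chosen only to make the example self-contained; the paper's stochastic 1D simulation would take the place of j_modulated.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    M_P = 0.938  # proton rest energy, GeV

    def j_lis(e_kin):
        """Toy local interstellar proton spectrum (illustrative shape only)."""
        return 1.9e4 * (e_kin + M_P) ** -2.78 / (1.0 + 0.4866 * e_kin ** -1.22)

    def j_modulated(e_kin, phi):
        """Force-field approximation: modulated differential intensity."""
        e_lis = e_kin + phi  # phi = modulation potential in GV (protons: Z/A = 1)
        factor = (e_kin * (e_kin + 2 * M_P)) / (e_lis * (e_lis + 2 * M_P))
        return j_lis(e_lis) * factor

    def yield_function(e_kin):
        """Toy neutron-monitor specific yield (rises with energy)."""
        return e_kin ** 1.2

    def count_rate(phi, e_min=0.5, e_max=200.0):
        """Expected count rate: yield function convolved with the spectrum."""
        val, _ = quad(lambda e: yield_function(e) * j_modulated(e, phi), e_min, e_max)
        return val

    # Invert the relation: recover phi from an "observed" count rate
    phi_true = 0.65
    observed = count_rate(phi_true)
    phi_rec = brentq(lambda p: count_rate(p) - observed, 0.01, 2.0)
    print(f"recovered modulation potential: {phi_rec:.3f} GV")
    ```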

  16. Fuel cell stack monitoring and system control

    DOEpatents

    Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.

    2005-01-25

    A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A relationship between voltage and current over the operating range of the fuel cell is established in advance. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current relationship for the fuel cell is symbolized as a polarization curve at given operating conditions of the fuel cell. Other polarization curves may be generated and used for fuel cell stack monitoring based on different operating pressures, temperatures, and hydrogen quantities.
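
    A minimal sketch of this monitoring logic, assuming a tabulated polarization curve and an illustrative tolerance; the curve points and threshold below are hypothetical, not taken from the patent.

    ```python
    import numpy as np

    # Hypothetical polarization curve at given operating conditions:
    # stack current (A) versus expected stack voltage (V)
    curve_current = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
    curve_voltage = np.array([410.0, 370.0, 345.0, 325.0, 305.0, 280.0])

    ALLOWED_VARIANCE = 15.0  # V, illustrative threshold

    def check_stack(actual_current, actual_voltage):
        """Compare measured voltage with the polarization-curve expectation."""
        expected = np.interp(actual_current, curve_current, curve_voltage)
        variance = abs(actual_voltage - expected)
        if variance > ALLOWED_VARIANCE:
            return f"ALERT: deviation {variance:.1f} V exceeds {ALLOWED_VARIANCE} V"
        return f"OK: deviation {variance:.1f} V"

    print(check_stack(120.0, 312.0))  # expected ~337 V -> alert
    print(check_stack(120.0, 332.0))  # within tolerance
    ```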

  17. First-Principles Study on the Gilbert Damping Constants of Transition Metal Alloys, Fe–Ni and Fe–Pt Systems

    NASA Astrophysics Data System (ADS)

    Sakuma, Akimasa

    2012-08-01

    We adapt the tight-binding linear muffin-tin orbital (TB-LMTO) method to the torque-correlation model for the Gilbert damping constant α and perform first-principles calculations for disordered transition metal alloys, Fe–Ni and Fe–Pt systems, within the framework of the CPA. Quantitatively, the calculated α values are about one-half of the experimental values, whereas the variations in the Fermi level dependence of α are much larger than these discrepancies. As expected, we confirm in the (Fe–Ni)1-XPtX and FePt systems that Pt atoms certainly enhance α owing to their large spin–orbit coupling. For the disordered alloys, we find that α decreases with increasing chemical degree of order over a wide range.

  18. Experimental determination of the response functions of a Bonner sphere spectrometer to monoenergetic neutrons

    NASA Astrophysics Data System (ADS)

    Hu, Z.; Chen, Z.; Peng, X.; Du, T.; Cui, Z.; Ge, L.; Zhu, W.; Wang, Z.; Zhu, X.; Chen, J.; Zhang, G.; Li, X.; Chen, J.; Zhang, H.; Zhong, G.; Hu, L.; Wan, B.; Gorini, G.; Fan, T.

    2017-06-01

    A Bonner sphere spectrometer (BSS) plays an important role in characterizing neutron spectra and determining the neutron dose in a mixed neutron-gamma field. A BSS consisting of a set of nine polyethylene spheres with a 3He proportional counter was developed at Peking University to perform neutron spectrum and dosimetry measurements. Response functions (RFs) of the BSS were calculated with the general Monte Carlo code MCNP5 for the neutron energy range from thermal up to 20 MeV, and were experimentally calibrated with monoenergetic neutron beams from 144 keV to 14 MeV on a 4.5 MV Van de Graaff accelerator. The calculated RFs were corrected with the experimental values, and the whole response matrix was completely established. The spectrum of a 241Am-Be source was obtained by unfolding the BSS measurement data and is in fair agreement with the expected spectrum. The integral ambient dose equivalent corresponding to the spectrum was 0.95 of the expected value. The unfolded spectrum and the integral dose equivalent measured by the BSS verify that the RFs of the BSS were well established.

  19. Genetic evaluation of lactation persistency for five breeds of dairy cattle.

    PubMed

    Cole, J B; Null, D J

    2009-05-01

    Cows with high lactation persistency tend to produce less milk than expected at the beginning of lactation and more than expected at the end. Best prediction of lactation persistency is calculated as a function of trait-specific standard lactation curves and linear regressions of test-day deviations on days in milk. Because the regression coefficients are deviations from a tipping point selected to make yield and lactation persistency phenotypically uncorrelated, it should be possible to use 305-d actual yield and lactation persistency to predict yield for lactations with later endpoints. The objectives of this study were to calculate (co)variance components and breeding values for best predictions of lactation persistency of milk (PM), fat (PF), protein (PP), and somatic cell score (PSCS) in breeds other than Holstein, and to demonstrate the calculation of prediction equations for 400-d actual milk yield. Data included lactations from Ayrshire, Brown Swiss, Guernsey (GU), Jersey (JE), and Milking Shorthorn (MS) cows calving since 1997. The number of sires evaluated ranged from 86 (MS) to 3,192 (JE), and mean sire estimated breeding value for PM ranged from 0.001 (Ayrshire) to 0.10 (Brown Swiss); mean estimated breeding value for PSCS ranged from -0.01 (MS) to -0.043 (JE). Heritabilities were generally highest for PM (0.09 to 0.15) and lowest for PSCS (0.03 to 0.06), with PF and PP having intermediate values (0.07 to 0.13). Repeatabilities varied considerably between breeds, ranging from 0.08 (PSCS in GU, JE, and MS) to 0.28 (PM in GU). Genetic correlations of PM, PF, and PP with PSCS were moderate and favorable (negative), indicating that increasing lactation persistency of yield traits is associated with decreased lactation persistency of SCS, as expected. Genetic correlations among yield and lactation persistency were low to moderate and ranged from -0.55 (PP in GU) to 0.40 (PP in MS). Prediction equations for 400-d milk yield were calculated for each breed by regressing 400-d yield on 305-d yield alone and on 305-d yield plus lactation persistency. Goodness-of-fit was very good for both models, but the addition of lactation persistency to the model significantly improved the fit in all cases. Routine genetic evaluations for lactation persistency, as well as the development of prediction equations for several lactation endpoints, may provide producers with tools to better manage their herds.
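
    A minimal Python sketch of such prediction equations, comparing a regression of 400-d yield on 305-d yield alone against one that also includes a persistency measure; the records are synthetic and the coefficients hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical lactation records: 305-d milk yield (kg) and persistency score
    n = 400
    yield_305 = rng.normal(7000.0, 900.0, n)
    persistency = rng.normal(0.0, 1.0, n)
    # Synthetic 400-d yield: persistent cows add more milk after day 305
    yield_400 = 1.25 * yield_305 + 350.0 * persistency + rng.normal(0.0, 150.0, n)

    # Model 1: 305-d yield only;  Model 2: 305-d yield + persistency
    X1 = np.column_stack([np.ones(n), yield_305])
    X2 = np.column_stack([np.ones(n), yield_305, persistency])

    for name, X in [("305-d only", X1), ("305-d + persistency", X2)]:
        beta, *_ = np.linalg.lstsq(X, yield_400, rcond=None)
        pred = X @ beta
        ss_res = np.sum((yield_400 - pred) ** 2)
        ss_tot = np.sum((yield_400 - yield_400.mean()) ** 2)
        print(f"{name}: R^2 = {1.0 - ss_res / ss_tot:.4f}")
    ```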

  20. Three-dimensional ordered-subset expectation maximization iterative protocol for evaluation of left ventricular volumes and function by quantitative gated SPECT: a dynamic phantom study.

    PubMed

    Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca

    2010-03-01

    The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.

  1. Dosimetric Consistency of Co-60 Teletherapy Unit- a ten years Study

    PubMed Central

    Baba, Misba H; Mohib-ul-Haq, M.; Khan, Aijaz A.

    2013-01-01

    Objective The goal of radiation standards and dosimetry is to ensure that the output of a teletherapy unit is within ±2% of the stated value and that treatment dose calculation methods are within ±5%. In the present paper, we studied the dosimetry of the Cobalt-60 (Co-60) teletherapy unit at Sher-I-Kashmir Institute of Medical Sciences (SKIMS) over the last 10 years. Radioactivity is the phenomenon of disintegration of unstable nuclides called radionuclides. Among these radionuclides, Cobalt-60, incorporated in the telecobalt unit, is commonly used in the therapeutic treatment of cancer. Cobalt-60 is unstable and decays continuously into Ni-60 with a half-life of 5.27 years, resulting in a decrease in its activity and hence in its dose rate (output). It is therefore mandatory to measure the dose rate of the Cobalt-60 source regularly so that the patient receives the same dose every time, as prescribed by the radiation oncologist. Underdosage may lead to unsatisfactory treatment of cancer, and overdosage may cause radiation hazards. Our study emphasizes the consistency between the actual output and the output obtained using the decay method. Methodology The actual dose rate of the Co-60 teletherapy unit was calculated by the two standard techniques used for external beam radiotherapy of various cancers: source-to-surface distance (SSD) and source-to-axis distance (SAD). A year-wise comparison was then made between the average actual dosimetric output (dose rate) and the average expected output obtained using the decay method for Co-60. Results The present study shows consistency between the average output (dose rate) obtained by actual dosimetry and the expected output obtained using the decay method. The values obtained by actual dosimetry are within ±2% of the expected values. Conclusion The year-wise deviation of the average output, obtained from dosimetry performed regularly as part of the quality assurance of the telecobalt radiotherapy unit, from the expected output is within the permissible limits. Our study thus shows a trend toward uniformity and better dose delivery. PMID:23559901
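
    The "decay method" referenced above is a one-line exponential-decay calculation. A minimal Python sketch, with an illustrative initial output rather than SKIMS calibration data:

    ```python
    import math

    HALF_LIFE_CO60_YEARS = 5.27

    def expected_output(initial_dose_rate, elapsed_years):
        """Expected dose rate from radioactive decay:
        D(t) = D0 * exp(-ln(2) * t / T_half)
        """
        decay_const = math.log(2) / HALF_LIFE_CO60_YEARS
        return initial_dose_rate * math.exp(-decay_const * elapsed_years)

    # Illustrative numbers: a source calibrated at 200 cGy/min
    d0 = 200.0
    for years in (0.5, 1.0, 2.0, 5.27):
        d = expected_output(d0, years)
        change_pct = 100.0 * (d - d0) / d0
        print(f"t = {years:4.2f} y: expected output {d:6.1f} cGy/min ({change_pct:+.1f}%)")
    ```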

  2. PROBABILISTIC SAFETY ASSESSMENT OF OPERATIONAL ACCIDENTS AT THE WASTE ISOLATION PILOT PLANT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucker, D.F.

    2000-09-01

    This report presents a probabilistic safety assessment of radioactive doses as consequences from accident scenarios to complement the deterministic assessment presented in the Waste Isolation Pilot Plant (WIPP) Safety Analysis Report (SAR). The International Council of Radiation Protection (ICRP) recommends both assessments be conducted to ensure that ''an adequate level of safety has been achieved and that no major contributors to risk are overlooked'' (ICRP 1993). To that end, the probabilistic assessment for the WIPP accident scenarios addresses the wide range of assumptions, e.g. the range of values representing the radioactive source of an accident, that could possibly have been overlooked by the SAR. Routine releases of radionuclides from the WIPP repository to the environment during the waste emplacement operations are expected to be essentially zero. In contrast, potential accidental releases from postulated accident scenarios during waste handling and emplacement could be substantial, which necessitates radiological air monitoring and confinement barriers (DOE 1999). The WIPP Safety Analysis Report (SAR) calculated doses from accidental releases to the on-site (at 100 m from the source) and off-site (at the Exclusive Use Boundary and Site Boundary) public by a deterministic approach. This approach, as demonstrated in the SAR, uses single-point values of key parameters to assess the 50-year, whole-body committed effective dose equivalent (CEDE). The basic assumptions used in the SAR to formulate the CEDE are retained for this report's probabilistic assessment. However, for the probabilistic assessment, single-point parameter values were replaced with probability density functions (PDF) and were sampled over an expected range. Monte Carlo simulations were run, in which 10,000 iterations were performed by randomly selecting one value for each parameter and calculating the dose. Statistical information was then derived from the 10,000-iteration batch, including the 5%, 50%, and 95% dose likelihoods and the sensitivity of each assumption to the calculated doses. As one would intuitively expect, the doses from the probabilistic assessment for most scenarios were found to be much less than those from the deterministic assessment. The lower dose of the probabilistic assessment can be attributed to a ''smearing'' of values from the high and low end of the PDF spectrum of the various input parameters. The analysis also found a potential weakness in the deterministic analysis used in the SAR: a detail of drum loading was not taken into consideration. Waste emplacement operations thus far have handled drums from each shipment as a single unit, i.e. drums from each shipment are kept together. Shipments typically come from a single waste stream, and therefore the curie loading of each drum can be considered nearly identical to that of its neighbor. Calculations show that if there are large numbers of drums used in the accident scenario assessment, e.g. 28 drums in the waste hoist failure scenario (CH5), then the probabilistic dose assessment calculations will diverge from the deterministically determined doses. As it is currently calculated, the deterministic dose assessment assumes one drum loaded to the maximum allowable (80 PE-Ci), with the remainder at 10% of the maximum. The effective average drum curie content is therefore lower in the deterministic assessment than in the probabilistic assessment for a large number of drums.
EEG recommends that the WIPP SAR calculations be revisited and updated to include a probabilistic safety assessment.
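
    The replacement of single-point values with probability density functions can be sketched in a few lines of Python; the distributions, release fraction, and dose conversion below are invented for illustration and bear no relation to the actual WIPP source terms.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_iter = 10_000  # Monte Carlo iterations, as in the assessment described above

    # Hypothetical input distributions replacing single-point values:
    drum_curie = rng.triangular(1.0, 8.0, 80.0, n_iter)    # PE-Ci per drum
    release_fraction = rng.lognormal(np.log(1e-3), 0.8, n_iter)
    dose_per_ci = 0.5                                      # rem per PE-Ci (illustrative)
    n_drums = 28                                           # waste hoist failure scenario

    dose = n_drums * drum_curie * release_fraction * dose_per_ci

    for q in (5, 50, 95):
        print(f"{q}% likelihood dose: {np.percentile(dose, q):.2f} rem")

    # Deterministic analogue: one drum at 80 PE-Ci, the rest at 10% of maximum
    det_source = 80.0 + (n_drums - 1) * 8.0
    det_dose = det_source * 1e-3 * dose_per_ci
    print(f"deterministic point estimate: {det_dose:.2f} rem")
    ```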

  3. The Jovian electron spectrum and synchrotron radiation at 375 cm

    NASA Technical Reports Server (NTRS)

    Birmingham, T. J.

    1975-01-01

    The synchrotron radiation expected at Earth from the region L = 2.9-5 R_J of Jupiter's magnetosphere is calculated using the Pioneer 10 electron model. The result is approximately 21 flux units (f.u.). This value is to be compared with 6.0 ± 0.7 f.u., the flux density of synchrotron radiation measured from Jupiter's entire magnetosphere in ground-based radio observations. Most of the radiation at 375 cm is emitted by electrons in the 1 to 10 MeV range. If the electron model used for the calculations is cut off below 10 MeV, the calculated flux is reduced to approximately 4 f.u., a level compatible with the radio observations.

  4. Working life tables, Bangladesh 1981.

    PubMed

    Matin, K A

    1986-06-01

    Data from the 1981 Bangladesh Population Census were used to construct life tables for working men and women. Bangladesh has a dependency burden of 109 dependents per 100 economically active population. Labor force participation rates in 1981 were 74.1/100 population aged 10 years and over for males and 4.3/100 population aged 10 years and over for females. The age-specific economic activity rates provided the essential link in translating life table data to working life table data. It was calculated that a newborn Bangladeshi male had a working life expectancy of 37.8 years and an overall life expectancy of 50.0 years; working life expectancy peaks at 44.2 years at 10 years of age. A newborn female has a working life expectancy of 1.8 years and an overall life expectancy of 49.0 years; a maximum working life expectancy of 2.4 years is obtained at 10 years of age. In the period 1962-81, male working life expectancy registered a slight decline at all ages, while female working life expectancy increased by about 6 months for ages up to 30 years. Mortality accounts for a loss of about 10% of gross years of active life in the 10-69-year age groups and 20% in the 10-79-year age group. The male working life expectancy values for Bangladesh in 1981 correspond well with those found in India in 1971, Pakistan in 1978, and Sri Lanka in 1971. However, there is wide divergence in terms of female working life expectancy values: such rates were significantly higher in Sri Lanka and India than in Bangladesh up to the age of 30 years, after which point there was little divergence.
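
    The "essential link" described above — weighting life-table person-years by age-specific activity rates — can be sketched as follows; the abridged life table and activity rates are hypothetical, not the 1981 census values.

    ```python
    import numpy as np

    # Hypothetical abridged life table and activity rates (ages 10-70 by 5 years)
    ages = np.arange(10, 75, 5)
    L_x = np.array([480, 475, 468, 460, 450, 437, 420, 398, 368, 328, 275, 208, 132],
                   dtype=float) * 1e3   # person-years lived in each 5-year interval
    a_x = np.array([0.55, 0.80, 0.90, 0.95, 0.96, 0.96, 0.95, 0.93, 0.88, 0.78,
                    0.60, 0.38, 0.18])  # age-specific economic activity rates
    l_10 = 97_000.0                     # survivors to exact age 10

    # Working life expectancy at age 10: activity-weighted person-years per survivor
    wle_10 = np.sum(L_x * a_x) / l_10
    print(f"working life expectancy at age 10: {wle_10:.1f} years")
    ```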

  5. A careful look at ECG sampling frequency and R-peak interpolation on short-term measures of heart rate variability.

    PubMed

    Ellis, Robert J; Zhu, Bilei; Koenig, Julian; Thayer, Julian F; Wang, Ye

    2015-09-01

    As the literature on heart rate variability (HRV) continues to burgeon, so too do the challenges faced with comparing results across studies conducted under different recording conditions and analysis options. Two important methodological considerations are (1) what sampling frequency (SF) to use when digitizing the electrocardiogram (ECG), and (2) whether to interpolate an ECG to enhance the accuracy of R-peak detection. Although specific recommendations have been offered on both points, the evidence used to support them can be seen to possess a number of methodological limitations. The present study takes a new and careful look at how SF influences 24 widely used time- and frequency-domain measures of HRV through the use of a Monte Carlo-based analysis of false positive rates (FPRs) associated with two-sample tests on independent sets of healthy subjects. HRV values from the first sample were calculated at 1000 Hz, and HRV values from the second sample were calculated at progressively lower SFs (and either with or without R-peak interpolation). When R-peak interpolation was applied prior to HRV calculation, FPRs for all HRV measures remained very close to 0.05 (i.e. the theoretically expected value), even when the second sample had an SF well below 100 Hz. Without R-peak interpolation, all HRV measures held their expected FPR down to 125 Hz (and far lower, in the case of some measures). These results provide concrete insights into the statistical validity of comparing datasets obtained at (potentially) very different SFs; comparisons which are particularly relevant for the domains of meta-analysis and mobile health.
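
    The Monte Carlo logic of the study — estimating the false positive rate of two-sample tests between HRV datasets recorded at different sampling frequencies — can be sketched as below; the group sizes, HRV means, and bias magnitude are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def false_positive_rate(n_sim=5_000, n_per_group=30, bias=0.0):
        """Fraction of two-sample t-tests rejecting H0 at alpha = 0.05 when both
        groups come from the same population; `bias` shifts group 2, mimicking a
        systematic HRV error from a lower ECG sampling frequency (illustrative)."""
        rejections = 0
        for _ in range(n_sim):
            g1 = rng.normal(50.0, 15.0, n_per_group)         # e.g. RMSSD at 1000 Hz
            g2 = rng.normal(50.0 + bias, 15.0, n_per_group)  # same measure, lower SF
            _, p = stats.ttest_ind(g1, g2)
            rejections += p < 0.05
        return rejections / n_sim

    print("no SF bias:   FPR =", false_positive_rate(bias=0.0))   # ~0.05 expected
    print("with SF bias: FPR =", false_positive_rate(bias=8.0))   # inflated FPR
    ```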

  6. Flexible engineering designs for urban water management in Lusaka, Zambia.

    PubMed

    Tembo, Lucy; Pathirana, Assela; van der Steen, Peter; Zevenbergen, Chris

    2015-01-01

    Urban water systems are often designed using deterministic single values as design parameters. Subsequently the different design alternatives are compared using a discounted cash flow analysis that assumes that all parameters remain as-predicted for the entire project period. In reality the future is unknown and at best a possible range of values for design parameters can be estimated. A Monte Carlo simulation could then be used to calculate the expected Net Present Value of project alternatives, as well as so-called target curves (cumulative frequency distribution of possible Net Present Values). The same analysis could be done after flexibilities were incorporated in the design, either by using decision rules to decide about the moment of capacity increase, or by buying Real Options (in this case land) to cater for potential capacity increases in the future. This procedure was applied to a sanitation and wastewater treatment case in Lusaka, Zambia. It included various combinations of on-site anaerobic baffled reactors and off-site waste stabilisation ponds. For the case study, it was found that the expected net value of wastewater treatment systems can be increased by 35-60% by designing a small flexible system with Real Options, rather than a large inflexible system.

  7. Effective Theories for QCD-like at TeV Scale

    NASA Astrophysics Data System (ADS)

    Lu, Jie; Bijnens, Johan

    2016-04-01

    We study the Effective Field Theory of three QCD-like theories, which can be classified by having quarks in a complex, real or pseudo-real representation of the gauge group. The Lagrangians are written in a very similar way so that the calculations can be done using techniques from Chiral Perturbation Theory (ChPT). We calculated the vacuum expectation value, the mass and the decay constant of the pseudo-Goldstone bosons up to next-to-next-to-leading order (NNLO) [J. Bijnens and J. Lu, JHEP 0911 (2009) 116, arXiv:0910.5424 [hep-ph]].

  8. Treatment decision making and adjustment to breast cancer: a longitudinal study.

    PubMed

    Stanton, A L; Estes, M A; Estes, N C; Cameron, C L; Danoff-Burg, S; Irving, L M

    1998-04-01

    This study monitored women (N = 76) with breast cancer from diagnosis through 1 year, and tested constructs from subjective expected utility theory with regard to their ability to predict patients' choice of surgical treatment as well as psychological distress and well-being over time. Women's positive expectancies for the consequences of treatment generally were maintained in favorable perceptions of outcome in several realms (i.e., physician agreement, likelihood of cancer cure or recurrence, self-evaluation, likelihood of additional treatment, partner support for option, attractiveness to partner). Assessed before the surgical decision-making appointment, women's expectancies for consequences of the treatment options, along with age, correctly classified 94% of the sample with regard to election of mastectomy versus breast-conserving procedures. Calculated from the point of decision making to 3 months later, expectancy disconfirmations and value discrepancies concerning particular treatment consequences predicted psychological adjustment 3 months and 1 year after diagnosis.

  9. Ro-vibrational averaging of the isotropic hyperfine coupling constant for the methyl radical

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adam, Ahmad Y.; Jensen, Per, E-mail: jensen@uni-wuppertal.de; Yachmenev, Andrey

    2015-12-28

    We present the first variational calculation of the isotropic hyperfine coupling constant of the carbon-13 atom in the CH{sub 3} radical for temperatures T = 0, 96, and 300 K. It is based on a newly calculated high level ab initio potential energy surface and hyperfine coupling constant surface of CH{sub 3} in the ground electronic state. The ro-vibrational energy levels, expectation values for the coupling constant, and its temperature dependence were calculated variationally by using the methods implemented in the computer program TROVE. Vibrational energies and the vibrational and temperature effects on the coupling constant are found to be in very good agreement with the available experimental data. We found, in agreement with previous studies, that the vibrational effects constitute about 44% of the constant’s equilibrium value, originating mainly from the large amplitude out-of-plane bending motion, and that the temperature effects play a minor role.

  10. Ground-state energy of HeH{sup +}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Binglu; Zhu Jiongming; Yan Zongchao

    2006-06-15

    The nonrelativistic ground-state energy of {sup 4}HeH{sup +} is calculated using a variational method in Hylleraas coordinates. Convergence to a few parts in 10{sup 10} is achieved, which improves the best previous result of Pavanello et al. [J. Chem. Phys. 123, 104306 (2005)]. Expectation values of the interparticle distances are evaluated. Similar results for {sup 3}HeH{sup +} are also presented.

  11. A wildfire risk modeling system for evaluating landscape fuel treatment strategies

    Treesearch

    Alan Ager; Mark Finney; Andrew McMahan

    2006-01-01

    Despite a wealth of literature and models concerning wildfire risk, field units in Federal land management agencies lack a clear framework and operational tools to measure how risk might change from proposed fuel treatments. In an actuarial context, risk is defined as the expected value change from a fire, calculated as the product of (1) probability of a fire at a...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashinaka, Takahiro; Department of Physics, Graduate School of Science, The University of Tokyo, Bunkyo, Tokyo, 113-0033; Yokoyama, Jun’ichi

    The covariant and gauge-invariant calculation of the current expectation value in a homogeneous electric field in 1+3 dimensional de Sitter spacetime is shown. The result accords with previous work obtained using an adiabatic subtraction scheme. We therefore conclude that the counterintuitive behaviors of the current in the infrared (IR) regime, such as IR hyperconductivity and negative current, are not artifacts of the renormalization scheme but are real IR effects of the spacetime.

  13. An ab initio potential energy surface for the formic acid dimer: zero-point energy, selected anharmonic fundamental energies, and ground-state tunneling splitting calculated in relaxed 1-4-mode subspaces.

    PubMed

    Qu, Chen; Bowman, Joel M

    2016-09-14

    We report a full-dimensional, permutationally invariant potential energy surface (PES) for the cyclic formic acid dimer. This PES is a least-squares fit to 13475 CCSD(T)-F12a/haTZ (VTZ for H and aVTZ for C and O) energies. The energy-weighted, root-mean-square fitting error is 11 cm-1 and the barrier for the double-proton transfer on the PES is 2848 cm-1, in good agreement with the directly-calculated ab initio value of 2853 cm-1. The zero-point vibrational energy of 15337 ± 7 cm-1 is obtained from diffusion Monte Carlo calculations. Energies of the fundamentals of fifteen modes are calculated using the vibrational self-consistent field and virtual-state configuration interaction method. The ground-state tunneling splitting is computed using a reduced-dimensional Hamiltonian with relaxed potentials. The highest-level, four-mode coupled calculation gives a tunneling splitting of 0.037 cm-1, which is roughly twice the experimental value. The tunneling splittings of (DCOOH)2 and (DCOOD)2 from one- to three-mode calculations are, as expected, smaller than that for (HCOOH)2 and consistent with experiment.

  14. Control of Solar Power Plants Connected Grid with Simple Calculation Method on Residential Homes

    NASA Astrophysics Data System (ADS)

    Kananda, Kiki; Nazir, Refdinal

    2017-12-01

    Solar energy is one of the renewable energy sources most compatible with application in all regions. Solar power plants can be built connected to an existing power grid or stand-alone. For supporting residential electricity where a power grid already exists, a small-scale solar power plant is very appropriate. However, a general constraint of solar power plants remains their low efficiency. Therefore, this study explains how to control the power of solar power plants more optimally, with the aim of driving reactive power to zero to raise efficiency. This is a continuation of previous research using the Newton-Raphson control method. In this study we introduce a simple method using ordinary mathematical calculations of the solar-related equations. In this model, 10 PV modules of type ND T060M1 with a 60 Wp capacity are used. The calculations, performed using MATLAB Simulink, give excellent values. The PCC voltage remains stable at approximately 220 V. At a maximum irradiation of 1000 W/m2, the maximum reactive power Q of the solar generating system is 20.48 Var and the maximum active power is 417.5 W. At lower irradiation, the reactive power Q is almost zero (0.77 Var). This simple mathematical method can thus provide excellent power control quality.

  15. Hip Preservation Surgery Expectations Survey: A New Method to Measure Patients' Preoperative Expectations.

    PubMed

    Mancuso, Carol A; Wentzel, Catherine H; Ghomrawi, Hassan M K; Kelly, Bryan T

    2017-05-01

    To develop a patient-derived expectations survey for hip preservation surgery. Patients were eligible if they were undergoing primary hip surgery and were recruited in person or by telephone. The survey was developed in 3 phases. During phase 1, 64 patients were interviewed preoperatively and asked open-ended questions about their expectations of surgery; a draft survey was assembled by categorizing responses. During phase 2, the survey was administered twice to another group of 50 patients preoperatively to assess test-retest reliability, and concordance was measured with weighted kappa values and intraclass correlations. All patients also completed valid standard hip surveys electronically. During phase 3, final items were selected, factor analysis was performed, and a scoring system was developed. In phase 1, 509 expectations were volunteered, from which 21 distinct categories were discerned and became the items for the draft survey. In phase 2, the draft survey was completed twice, 4 days apart. In phase 3, all 21 items were retained for the final survey addressing pain, mobility, sports, resumption of active lifestyles, future function, and psychological well-being. An overall score is calculated from the number of items expected and the amount of improvement expected, and ranges from 0 to 100, with higher scores indicating greater expectations. For phase 2 patients, mean scores for both administrations were 82, Cronbach alpha coefficients were 0.88 and 0.91, and the intraclass correlation was 0.92. A higher score (i.e., greater expectations) was associated with worse hip condition as measured by standard hip surveys (P ≤ .05). We developed a patient-derived survey that is valid, reliable, and addresses a spectrum of expectations. The survey generates an overall score that is easy to calculate and interpret and offers a practical and comprehensive way to record patients' preoperative expectations. Level II, prognostic study, prospective sample. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  16. Three statistical models for estimating length of stay.

    PubMed Central

    Selvin, S

    1977-01-01

    The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program. PMID:914532

  17. Three statistical models for estimating length of stay.

    PubMed

    Selvin, S

    1977-01-01

    The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program.

  18. Non-universal critical exponents in earthquake complex networks

    NASA Astrophysics Data System (ADS)

    Pastén, Denisse; Torres, Felipe; Toledo, Benjamín A.; Muñoz, Víctor; Rogan, José; Valdivia, Juan Alejandro

    2018-02-01

    The problem of universality of critical exponents in complex networks is studied based on networks built from seismic data sets. Using two data sets corresponding to Chilean seismicity (northern zone, including the 2014 Mw = 8.2 earthquake in Iquique; and central zone, without major earthquakes), directed networks for each set are constructed. Connectivity and betweenness centrality distributions are calculated and found to be scale-free, with respective exponents γ and δ. The expected relation between both characteristic exponents, δ > (γ + 1)/2, is verified for both data sets. However, unlike the expectation for certain scale-free analytical complex networks, the value of δ is found to be non-universal.
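
    Characteristic exponents such as γ are typically estimated from the degree sequence by maximum likelihood. A minimal Python sketch on synthetic data (the degree sequence, k_min, and true exponent are invented; the estimator is the standard continuous-tail MLE of Clauset et al.):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic degree sequence drawn from a continuous power law (gamma = 2.5)
    gamma_true, k_min = 2.5, 5.0
    u = rng.uniform(size=5_000)
    degrees = k_min * (1.0 - u) ** (-1.0 / (gamma_true - 1.0))  # inverse-CDF sampling

    # Maximum-likelihood estimator for a continuous power-law tail:
    # gamma_hat = 1 + n / sum(ln(k_i / k_min))
    tail = degrees[degrees >= k_min]
    gamma_hat = 1.0 + len(tail) / np.sum(np.log(tail / k_min))
    err = (gamma_hat - 1.0) / np.sqrt(len(tail))  # asymptotic standard error
    print(f"gamma = {gamma_hat:.3f} +/- {err:.3f}")
    ```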

  19. Optimal rotation sequences for active perception

    NASA Astrophysics Data System (ADS)

    Nakath, David; Rachuy, Carsten; Clemens, Joachim; Schill, Kerstin

    2016-05-01

    One major objective of autonomous systems navigating in dynamic environments is gathering the information needed for self-localization, decision making, and path planning. To account for this, such systems are usually equipped with multiple types of sensors. As these sensors often have a limited field of view and a fixed orientation, the task of active perception breaks down to the problem of calculating alignment sequences which maximize the information gain regarding expected measurements. Action sequences that rotate the system according to the calculated optimal patterns then have to be generated. In this paper we present an approach for calculating these sequences for an autonomous system equipped with multiple sensors. We use a particle filter for multi-sensor fusion and state estimation. The planning task is modeled as a Markov decision process (MDP), where the system decides in each step what actions to perform next. The optimal control policy, which provides the best action depending on the current estimated state, maximizes the expected cumulative reward. The latter is computed from the expected information gain of all sensors over time using value iteration. The algorithm is applied to a manifold representation of the joint space of rotation and time. We show the performance of the approach in a spacecraft navigation scenario where the information gain changes over time, caused by the dynamic environment and the continuous movement of the spacecraft.
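
    A toy version of the value-iteration step, with discrete sensor orientations as states and rotations as actions; the information-gain profile, costs, and discount factor are hypothetical, and the real planner works on a rotation-time manifold rather than this one-dimensional ring.

    ```python
    import numpy as np

    # States: discretized headings (45-degree bins); actions: rotate left/hold/right
    n_states = 8
    actions = (-1, 0, +1)
    gamma = 0.9

    # Hypothetical expected information gain per orientation, and a rotation cost
    info_gain = np.array([0.1, 0.3, 1.0, 0.4, 0.1, 0.0, 0.2, 0.6])
    rotation_cost = 0.05

    V = np.zeros(n_states)
    for _ in range(200):                 # value iteration to a fixed point
        Q = np.empty((n_states, len(actions)))
        for s in range(n_states):
            for i, a in enumerate(actions):
                s_next = (s + a) % n_states        # deterministic rotation
                reward = info_gain[s_next] - rotation_cost * abs(a)
                Q[s, i] = reward + gamma * V[s_next]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = [actions[i] for i in Q.argmax(axis=1)]
    print("optimal rotation per heading:", policy)
    ```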

  20. Pricing of premiums for equity-linked life insurance based on joint mortality models

    NASA Astrophysics Data System (ADS)

    Riaman; Parmikanti, K.; Irianingsih, I.; Supian, S.

    2018-03-01

    Equity-linked life insurance is a financial product that offers not only protection but also investment. The calculation of equity-linked life insurance premiums generally uses mortality tables. Because of advances in medical technology and reduced birth rates, the use of mortality tables alone appears less relevant in the calculation of premiums. To overcome this problem, we use a combined mortality model, which in this study is determined on the basis of the 2011 Indonesian Mortality Table, to obtain the probabilities of death and survival. The combined mortality model is built from the Weibull, Inverse-Weibull, and Gompertz mortality models. After determining the combined mortality model, we calculate the value of the claim to be paid and the premium price numerically by simulation. By calculating equity-linked life insurance premiums accurately, it is expected that no party will be disadvantaged by inaccuracies in the calculation.
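
    A minimal single-premium sketch under a Gompertz mortality model; all parameters are hypothetical (the paper's model combines Weibull, Inverse-Weibull, and Gompertz components calibrated to the 2011 Indonesian Mortality Table, and equity-linked benefits would additionally depend on the investment account).

    ```python
    import numpy as np

    B, C = 3e-4, 1.07          # Gompertz force of mortality: mu(x) = B * C**x
    rate = 0.05                # annual discount rate
    age, term = 40, 10         # insured age, policy term in years
    benefit = 100_000_000.0    # death benefit (illustrative)

    def survival(x, t):
        """t-year survival probability at age x under Gompertz:
        p = exp(-B/ln(C) * C**x * (C**t - 1))
        """
        return np.exp(-B / np.log(C) * C ** x * (C ** t - 1.0))

    # Net single premium for term insurance paid at the end of the year of death:
    # sum over policy years of P(death in year k) discounted back to issue
    premium = 0.0
    for k in range(term):
        p_die_in_year = survival(age, k) - survival(age, k + 1)
        premium += benefit * p_die_in_year * (1.0 + rate) ** -(k + 1)

    print(f"net single premium: {premium:,.0f}")
    ```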

  1. Calculating the nutrient composition of recipes with computers.

    PubMed

    Powers, P M; Hoover, L W

    1989-02-01

    The objective of this research project was to compare the nutrient values computed by four commonly used computerized recipe calculation methods. The four methods compared were the yield factor, retention factor, summing, and simplified retention factor methods. Two versions of the summing method were modeled. Four pork entrée recipes were selected for analysis: roast pork, pork and noodle casserole, pan-broiled pork chops, and pork chops with vegetables. Assumptions were made about changes expected to occur in the ingredients during preparation and cooking. Models were designed to simulate the algorithms of the calculation methods using a microcomputer spreadsheet software package. Identical results were generated in the yield factor, retention factor, and summing-cooked models for roast pork. The retention factor and summing-cooked models also produced identical results for the recipe for pan-broiled pork chops. The summing-raw model gave the highest value for water in all four recipes and the lowest values for most of the other nutrients. A superior method or methods was not identified. However, on the basis of the capabilities provided with the yield factor and retention factor methods, more serious consideration of these two methods is recommended.
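
    To make the distinction between the yield-factor and retention-factor methods concrete, a small Python sketch follows; the nutrient values and factors are hypothetical, not the study's pork-recipe data.

    ```python
    # Minimal sketch of the yield-factor vs. retention-factor recipe calculations
    # (illustrative numbers; factor values are hypothetical, not from the paper).

    # Per-100-g raw nutrient value for one ingredient (e.g. pork loin)
    raw_thiamin_mg_per_100g = 0.9
    raw_weight_g = 500.0

    # Yield-factor method: convert to cooked weight, use cooked-basis nutrient data
    yield_factor = 0.70                       # cooked weight / raw weight
    cooked_thiamin_mg_per_100g = 0.75         # nutrient measured on cooked basis
    cooked_weight_g = raw_weight_g * yield_factor
    total_yield_method = cooked_thiamin_mg_per_100g * cooked_weight_g / 100.0

    # Retention-factor method: apply a nutrient retention factor to the raw total
    retention_factor = 0.60                   # fraction of thiamin surviving cooking
    total_retention_method = (raw_thiamin_mg_per_100g * raw_weight_g / 100.0
                              * retention_factor)

    print(f"yield-factor method:     {total_yield_method:.2f} mg thiamin")
    print(f"retention-factor method: {total_retention_method:.2f} mg thiamin")
    ```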

  2. Fragment-based approach to calculate hydrophobicity of anionic and nonionic surfactants derived from chromatographic retention on a C18 stationary phase.

    PubMed

    Hammer, Jort; Haftka, Joris J-H; Scherpenisse, Peter; Hermens, Joop L M; de Voogt, Pim W P

    2017-02-01

    To predict the fate and potential effects of organic contaminants, information about their hydrophobicity is required. However, common parameters to describe the hydrophobicity of organic compounds (e.g., the octanol-water partition constant [KOW]) proved to be inadequate for ionic and nonionic surfactants because of their surface-active properties. As an alternative approach to determine their hydrophobicity, the aim of the present study was therefore to measure the retention of a wide range of surfactants on a C18 stationary phase. Capacity factors in pure water (k'0) increased linearly with increasing number of carbon atoms in the surfactant structure. Fragment contribution values were determined for each structural unit with multilinear regression, and the results were consistent with the expected influence of these fragments on the hydrophobicity of surfactants. Capacity factors of reference compounds and log KOW values from the literature were used to estimate log KOW values for surfactants (log KOW,HPLC). These log KOW,HPLC values were also compared to log KOW values calculated with 4 computational programs: KOWWIN, Marvin calculator, SPARC, and COSMOThermX. In conclusion, capacity factors from a C18 stationary phase are found to better reflect the hydrophobicity of surfactants than their KOW values. Environ Toxicol Chem 2017;36:329-336. © 2016 The Authors. Environmental Toxicology and Chemistry Published by Wiley Periodicals, Inc. on behalf of SETAC.
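
    The fragment-contribution step is an ordinary multilinear regression: each structural unit contributes additively to the logarithm of the capacity factor. A minimal Python sketch with an invented fragment design matrix and retention data:

    ```python
    import numpy as np

    # Fragment-contribution sketch: log k'0 modeled as a sum of structural-unit
    # contributions, solved by multilinear regression (all data hypothetical).
    # Columns: [#CH2 units, #ethoxylate units, sulfate head (0/1), intercept]
    X = np.array([
        [12, 0, 1, 1],
        [14, 0, 1, 1],
        [12, 4, 0, 1],
        [12, 8, 0, 1],
        [16, 6, 0, 1],
        [10, 0, 1, 1],
    ], dtype=float)
    log_k0 = np.array([3.1, 4.1, 3.6, 3.2, 5.0, 2.1])

    coef, *_ = np.linalg.lstsq(X, log_k0, rcond=None)
    names = ["CH2", "EO unit", "sulfate head", "intercept"]
    for name, c in zip(names, coef):
        print(f"{name:14s}: {c:+.3f} per fragment")
    ```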

  3. Value-of-information analysis within a stakeholder-driven research prioritization process in a US setting: an application in cancer genomics.

    PubMed

    Carlson, Josh J; Thariani, Rahber; Roth, Josh; Gralow, Julie; Henry, N Lynn; Esmail, Laura; Deverka, Pat; Ramsey, Scott D; Baker, Laurence; Veenstra, David L

    2013-05-01

    The objective of this study was to evaluate the feasibility and outcomes of incorporating value-of-information (VOI) analysis into a stakeholder-driven research prioritization process in a US-based setting. Within a program to prioritize comparative effectiveness research areas in cancer genomics, over a period of 7 months, we developed decision-analytic models and calculated upper-bound VOI estimates for 3 previously selected genomic tests. Thirteen stakeholders representing patient advocates, payers, test developers, regulators, policy makers, and community-based oncologists ranked the tests before and after receiving the VOI results. The stakeholders were surveyed about the usefulness and impact of the VOI findings. The estimated upper-bound VOI ranged from $33 million to $2.8 billion for the 3 research areas. Seven stakeholders indicated the results modified their rankings, 9 stated the VOI data were useful, and all indicated they would support its use in future prioritization processes. Some stakeholders indicated expected value of sample information might be the preferred choice when evaluating specific… Limitations: our study was limited by the size and the potential for selection bias in the composition of the external stakeholder group, the lack of a randomized design to assess the effect of VOI data on rankings, and the use of expected value of perfect information v. expected value of sample information methods. Value of information analyses may have a meaningful role in research topic prioritization for comparative effectiveness research in the United States, particularly when large differences in VOI across topic areas are identified. Additional research is needed to facilitate the use of more complex value of information analyses in this setting.

  4. Effort-based cost-benefit valuation and the human brain

    PubMed Central

    Croxson, Paula L; Walton, Mark E; O'Reilly, Jill X; Behrens, Timothy EJ; Rushworth, Matthew FS

    2010-01-01

    In both the wild and the laboratory, animals' preferences for one course of action over another reflect not just reward expectations but also the cost in terms of effort that must be invested in pursuing the course of action. The ventral striatum and dorsal anterior cingulate cortex (ACCd) are implicated in the making of cost-benefit decisions in the rat but there is little information about how effort costs are processed and influence calculations of expected net value in other mammals including the human. We carried out a functional magnetic resonance imaging (fMRI) study to determine whether and where activity in the human brain was available to guide effort-based cost-benefit valuation. Subjects were scanned while they performed a series of effortful actions to obtain secondary reinforcers. At the beginning of each trial, subjects were presented with one of eight different visual cues which they had learned indicated how much effort the course of action would entail and how much reward could be expected at its completion. Cue-locked activity in the ventral striatum and midbrain reflected the net value of the course of action, signaling the expected amount of reward discounted by the amount of effort to be invested. Activity in ACCd also reflected the interaction of both expected reward and effort costs. Posterior orbitofrontal and insular activity, however, only reflected the expected reward magnitude. The ventral striatum and anterior cingulate cortex may be the substrate of effort-based cost-benefit valuation in primates as well as in rats. PMID:19357278

  5. Evaluation of Magnetic Diagnostics for MHD Equilibrium Reconstruction of LHD Discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sontag, Aaron C; Hanson, James D.; Lazerson, Sam

    2011-01-01

    Equilibrium reconstruction is the process of determining the set of parameters of an MHD equilibrium that minimize the difference between expected and experimentally observed signals. This is routinely performed in axisymmetric devices, such as tokamaks, and the reconstructed equilibrium solution is then the basis for analysis of stability and transport properties. The V3FIT code [1] has been developed to perform equilibrium reconstruction in cases where axisymmetry cannot be assumed, such as in stellarators. The present work is focused on using V3FIT to analyze plasmas in the Large Helical Device (LHD) [2], a superconducting, heliotron-type device with over 25 MW of heating power that is capable of achieving both high beta (~5%) and high density (>1 × 10^21 m^-3). This high performance, as well as the ability to drive tens of kiloamperes of toroidal plasma current, leads to deviations of the equilibrium state from the vacuum flux surfaces. This initial study examines the effectiveness of using magnetic diagnostics as the observed signals in reconstructing experimental plasma parameters for LHD discharges. V3FIT uses the VMEC [3] 3D equilibrium solver to calculate an initial equilibrium solution with closed, nested flux surfaces based on user-specified plasma parameters. This equilibrium solution is then used to calculate the expected signals for specified diagnostics. The differences between these expected signal values and the observed values provide a starting χ² value. V3FIT then varies all of the fit parameters independently, calculating a new equilibrium and corresponding χ² for each variation. A quasi-Newton algorithm [1] is used to find the path in parameter space that leads to a minimum in χ². Effective diagnostic signals must vary in a predictable manner with the variations of the plasma parameters, and this signal variation must be of sufficient amplitude to be resolved from the signal noise. Signal effectiveness can be defined for a specific signal and a specific reconstruction parameter as the dimensionless fractional reduction in the posterior parameter variance with respect to the signal variance; in this definition, σ_i^sig is the variance of the ith signal and σ_j^param is the posterior variance of the jth fit parameter. The sum of all signal effectiveness values for a given reconstruction parameter is normalized to one. This quantity will be used to determine signal effectiveness for various reconstruction cases. The next section will examine the variation of the expected signals with changes in plasma pressure, and the following section will show results for reconstructing model plasmas using these signals.
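    The χ² loop described above can be sketched compactly: predict signals from the current parameters, form χ², and hand the function to a quasi-Newton minimizer (BFGS below). The linear forward model is a hypothetical stand-in for a VMEC/V3FIT signal calculation, and all numbers are invented for illustration.

```python
# Sketch of chi^2 minimization for equilibrium reconstruction.
import numpy as np
from scipy.optimize import minimize

observed = np.array([0.82, 1.47, 2.10])   # hypothetical magnetic signals
sigma = np.array([0.05, 0.05, 0.08])      # signal uncertainties

def expected_signals(params):
    """Stand-in forward model mapping plasma parameters (e.g. pressure
    and current amplitudes) to predicted diagnostic signals."""
    p0, i0 = params
    return np.array([0.5 * p0 + 0.3 * i0,
                     0.9 * p0 + 0.6 * i0,
                     1.2 * p0 + 1.0 * i0])

def chi2(params):
    return np.sum(((observed - expected_signals(params)) / sigma) ** 2)

# BFGS is a quasi-Newton method, the class of algorithm V3FIT uses.
fit = minimize(chi2, x0=[1.0, 1.0], method="BFGS")
print("fitted parameters:", fit.x, "  chi^2:", fit.fun)
```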

  6. Assessing the value-adding impact of diagnostic-type tests on drug development and marketing.

    PubMed

    Blair, Edward D

    2008-01-01

    We explore the cash value of the companion diagnostics opportunity from the perspective of the pharmaceutical partner. Cashflow-based modeling is used to demonstrate the potential financial benefits of key relationships between the pharmaceutical and diagnostics industries. In four scenarios, the uplift in the net present value (NPV) of a proprietary medicine can exceed $US1.8 billion. By simple extrapolation, the uplifted NPV calculations allow realistic and plausible estimates of the companion diagnostic opportunity to be in the region of $US40 billion to $US90 billion. It is expected that such market valuation could drive a macroeconomic change that shifts healthcare practice from reactionary disease-treatment to proactive health maintenance.
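    The cashflow modeling above reduces to a discounted-cash-flow comparison. A minimal sketch, with entirely hypothetical cash flows and discount rate, shows how such an NPV uplift would be computed.

```python
# NPV uplift from adding a companion diagnostic (all numbers hypothetical).
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Yearly net cash flows ($US million) without and with a companion test
without_dx = [-400, 100, 250, 300, 300, 250]
with_dx = [-430, 150, 350, 420, 420, 350]  # better targeting, extra test cost

uplift = npv(with_dx, 0.10) - npv(without_dx, 0.10)
print(f"NPV uplift: ${uplift:.0f}M")
```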

  7. Responses of selected neutron monitors to cosmic radiation at aviation altitudes.

    PubMed

    Yasuda, Hiroshi; Yajima, Kazuaki; Sato, Tatsuhiko; Takada, Masashi; Nakamura, Takashi

    2009-06-01

    Cosmic radiation exposure of aircraft crew, which is generally evaluated by numerical simulations, should be verified by measurements. From the perspective of radiological protection, the radiation component contributing most to dose at aviation altitudes is neutrons. Measurements of cosmic neutrons, however, are difficult in a civilian aircraft because of the limitations of space and electricity; a small, battery-operated dosimeter is required, whereas larger instruments are generally used to detect neutrons over a broad range of energies. We thus examined the applicability of relatively new transportable neutron monitors for use in an aircraft: (1) a conventional rem meter with a polyethylene moderator (NCN1), (2) an extended-energy-range rem meter with a tungsten-powder-mixed moderator (WENDI-II), and (3) a recoil-proton scintillation rem meter (PRESCILA). These monitors were installed onto the racks of a business jet aircraft that made two flights near Japan. Observed data were compared to model calculations using a PHITS-based Analytical Radiation Model in the Atmosphere (PARMA). Excellent agreement between measured and calculated values was found for the WENDI-II. The NCN1 showed approximately half of the predicted values, which were lower than those expected from its response function. The observations made with PRESCILA were much higher than the expected values, which is attributable to the presence of cosmic-ray protons and muons. These results indicate that careful attention must be paid to the dosimetric properties of a detector employed for verification of cosmic neutron dose.

  8. Two-photon absorption cross sections within equation-of-motion coupled-cluster formalism using resolution-of-the-identity and Cholesky decomposition representations: Theory, implementation, and benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nanda, Kaushik D.; Krylov, Anna I.

    The equation-of-motion coupled-cluster (EOM-CC) methods provide a robust description of electronically excited states and their properties. Here, we present a formalism for two-photon absorption (2PA) cross sections for the equation-of-motion CC with single and double substitutions for excitation energies (EOM-EE-CCSD) wave functions. Rather than the response-theory formulation, we employ the expectation-value approach, which is commonly used within EOM-CC, configuration interaction, and algebraic diagrammatic construction frameworks. In addition to the canonical implementation, we also exploit resolution-of-the-identity (RI) and Cholesky decomposition (CD) for the electron-repulsion integrals to reduce memory requirements and to increase parallel efficiency. The new methods are benchmarked against the CCSD and CC3 response theories for several small molecules. We found that the expectation-value 2PA cross sections are within 5% of the quadratic-response CCSD values. The RI and CD approximations lead to small errors relative to the canonical implementation (less than 4%) while affording computational savings. RI/CD successfully address the well-known issue of large basis-set requirements for 2PA cross-section calculations. The capabilities of the new code are illustrated by calculations of the 2PA cross sections for model chromophores of the photoactive yellow and green fluorescent proteins.

  9. [Dose loads on and radiation risk values for cosmonauts on a mission to Mars estimated from actual Martian vehicle engineering development].

    PubMed

    Shafirkin, A V; Kolomenskiĭ, A V; Mitrikas, V G; Petrov, V M

    2010-01-01

    The current design philosophy of a Mars orbiting vehicle, takeoff and landing systems, and the transport return vehicle was taken into consideration for calculating the equivalent doses imparted to cosmonauts' organs and tissues by galactic cosmic rays, solar cosmic rays, and the Earth's radiation belts; the values of the total radiation risk over the lifespan following the mission and over the whole career period; and the possible shortening of life expectancy. A number of uncertainties should be evaluated, and radiation limits specified, before setting off to Mars.

  10. Hot topic: Definition and implementation of a breeding value for feed efficiency in dairy cows.

    PubMed

    Pryce, J E; Gonzalez-Recio, O; Nieuwhof, G; Wales, W J; Coffey, M P; Hayes, B J; Goddard, M E

    2015-10-01

    A new breeding value that combines the amount of feed saved through improved metabolic efficiency with predicted maintenance requirements is described. The breeding value includes a genomic component for residual feed intake (RFI) combined with maintenance requirements calculated from either a genomic or pedigree estimated breeding value (EBV) for body weight (BW) predicted using conformation traits. Residual feed intake is only available for genotyped Holsteins; however, BW is available for all breeds. The RFI component of the "feed saved" EBV has 2 parts: Australian calf RFI and Australian lactating cow RFI. Genomic breeding values for RFI were estimated from a reference population of 2,036 individuals in a multi-trait analysis including Australian calf RFI (n=843), Australian lactating cow RFI (n=234), and UK and Dutch lactating cow RFI (n=958). In all cases, the RFI phenotypes were deviations from a mean of 0, calculated by correcting dry matter intake for BW, growth, and milk yield (in the case of lactating cows). Single nucleotide polymorphism effects were calculated from the output of genomic BLUP and used to predict breeding values of 4,106 Holstein sires that were genotyped but did not have RFI phenotypes themselves. These bulls already had BW breeding values calculated from type traits, from which maintenance requirements in kilograms of feed per year were inferred. Finally, RFI and the feed required for maintenance (through BW) were used to calculate a feed saved breeding value and expressed as the predicted amount of feed saved per year. Animals that were 1 standard deviation above the mean were predicted to eat 66 kg dry matter less per year at the same level of milk production. In a data set of genotyped Holstein sires, the mean reliability of the feed saved breeding value was 0.37. For Holsteins that are not genotyped and for breeds other than Holsteins, feed saved is calculated using BW only. From April 2015, feed saved has been included as part of the Australian national selection index, the Balanced Performance Index (BPI). Selection on the BPI is expected to lead to modest gains in feed efficiency. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. JMAT 2.0 Operating Room Requirements Estimation Study

    DTIC Science & Technology

    2011-05-25

    Report No. 11-10J, supported by the Office of the Assistant...(a) an expected-value methodology for estimating OR requirements in a theater hospital; (b) algorithms for estimating a special case OR table requirement, assuming the probabilities of entering the OR are either 1 or 0; and (c) an Excel worksheet that calculates the special case OR table estimates

  12. Ecological risk-benefit analysis of a wetland development based on risk assessment using "expected loss of biodiversity".

    PubMed

    Oka, T; Matsuda, H; Kadono, Y

    2001-12-01

    Ecological risk from the development of a wetland is assessed quantitatively by means of a new risk measure, expected loss of biodiversity (ELB). ELB is defined as the weighted sum of the increments in the probabilities of extinction of the species living in the wetland due to its loss. The weighting for a particular species is calculated according to the length of the branch on the phylogenetic tree that will be lost if the species becomes extinct. The length of the branch on the phylogenetic tree is regarded as reflecting the extent of contribution of the species to the taxonomic diversity of the world of living things. The increments in the probabilities of extinction are calculated by a simulation used for making the Red List for vascular plants in Japan. The resulting ELB for the loss of Nakaikemi wetland is 9,200 years. This result is combined with the economic costs for conservation of the wetland to produce a value for the indicator of the "cost per unit of biodiversity saved." Depending on the scenario, the value is 13,000 yen per year-ELB or 110,000 to 420,000 yen per year-ELB (1 US dollar = 110 yen in 1999).
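    The ELB measure above is a weighted sum: for each species, the increment in extinction probability caused by losing the wetland, weighted by its phylogenetic branch length. A minimal sketch with hypothetical species values (not those of the Nakaikemi study):

```python
# Expected loss of biodiversity (ELB) as a branch-length-weighted sum.
import numpy as np

branch_length = np.array([4000.0, 1500.0, 800.0])    # years, per species
p_ext_keep = np.array([0.02, 0.10, 0.30])            # wetland conserved
p_ext_lose = np.array([0.05, 0.45, 0.60])            # wetland developed

elb = np.sum(branch_length * (p_ext_lose - p_ext_keep))
print(f"expected loss of biodiversity: {elb:.0f} years")
```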

  13. Dimensionless Analysis Applied to Bacterial Chemotaxis towards NAPL Contaminants

    NASA Astrophysics Data System (ADS)

    Wang, X.; GAO, B.; Zhong, W.; Kihaule, K. S.; Ford, R.

    2017-12-01

    The use of chemotactic bacteria in bioremediation may improve the efficiency and decrease the cost of restoration, which means it has the potential to address environmental problems caused by oil spills. However, most previous studies were conducted at the laboratory scale, and a formalism is lacking that can use these laboratory-scale results as input to evaluate the relative importance of chemotaxis at the field scale. In this study, a dimensionless equation is formulated to solve this problem. First, the main influential factors were extracted based on previous research in environmental bioremediation, and five sets of dimensionless numbers were then obtained using the Buckingham π theorem. After collecting basic parameter values and performing supplementary calculations to determine the concentration gradient of the chemoattractant, all dimensionless numbers were calculated and categorized into two types: those sensitive to chemotaxis and those sensitive to groundwater velocity. The bacteria ratio (BR), defined as the ratio of the maximum bacteria concentration to its original value, was correlated with a combination of dimensionless numbers to yield BR = c · P1^(-0.085) · P2^(0.329) · P3^(0.1) · P4^(-0.098). For a bacteria ratio greater than one, the bioremediation strategy based on chemotaxis is expected to be effective, and chemotactic bacteria are expected to accumulate around NAPL contaminant sources efficiently.

  14. An EEG should not be obtained routinely after first unprovoked seizure in childhood.

    PubMed

    Gilbert, D L; Buncher, C R

    2000-02-08

    To quantify and analyze the value of expected information from an EEG after first unprovoked seizure in childhood. An EEG is often recommended as part of the standard diagnostic evaluation after first seizure. A MEDLINE search from 1980 to 1998 was performed. From eligible studies, data on EEG results and seizure recurrence risk in children were abstracted, and sensitivity, specificity, and positive and negative predictive values of EEG in predicting recurrence were calculated. Linear information theory was used to quantify and compare the expected information from the EEG in all studies. Standard test-treat decision analysis with a treatment threshold at 80% recurrence risk was used to determine the range of pretest recurrence probabilities over which testing affects treatment decisions. Four studies involving 831 children were eligible for analysis. At best, the EEG had a sensitivity of 61%, a specificity of 71%, and an expected information of 0.16 out of a possible 0.50. The pretest probability of recurrence was less than the lower limit of the range for rational testing in all studies. In this analysis, the quantity of expected information from the EEG was too low to affect treatment recommendations in most patients. EEG should be ordered selectively, not routinely, after first unprovoked seizure in childhood.
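    The predictive values reported above follow from sensitivity, specificity, and the pretest recurrence probability via Bayes' rule. A short sketch using the review's best-case sensitivity and specificity; the pretest probability is a hypothetical value for illustration.

```python
# Positive and negative predictive values from Bayes' rule.
def predictive_values(sens, spec, pretest):
    ppv = sens * pretest / (sens * pretest + (1 - spec) * (1 - pretest))
    npv = spec * (1 - pretest) / (spec * (1 - pretest) + (1 - sens) * pretest)
    return ppv, npv

ppv, npv = predictive_values(sens=0.61, spec=0.71, pretest=0.40)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```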

  15. Synergistic Effects of Expectancy and Value on Homework Engagement: The Case for a Within-Person Perspective.

    PubMed

    Nagengast, Benjamin; Trautwein, Ulrich; Kelava, Augustin; Lüdtke, Oliver

    2013-05-01

    Historically, expectancy-value models of motivation assumed a synergistic relation between expectancy and value: motivation is high only when both expectancy and value are high. Motivational processes were studied from a within-person perspective, with expectancies and values being assessed or experimentally manipulated across multiple domains and the focus being placed on intraindividual differences. In contrast, contemporary expectancy-value models in educational psychology concentrate almost exclusively on linear effects of expectancy and value on motivational outcomes, with a focus on between-person differences. Recent advances in latent variable methodology allow both issues to be addressed in observational studies. Using the expectancy-value model of homework motivation as a theoretical framework, this study estimated multilevel structural equation models with latent interactions in a sample of 511 secondary school students and found synergistic effects between domain-specific homework expectancy and homework value in predicting homework engagement in 6 subjects. This approach not only brings the "×" back into expectancy-value theory but also reestablishes the within-person perspective as the appropriate level of analysis for latent expectancy-value models.

  16. Exact Holography of Massive M2-brane Theories and Entanglement Entropy

    NASA Astrophysics Data System (ADS)

    Jang, Dongmin; Kim, Yoonbai; Kwon, O.-Kab; Tolla, D. D.

    2018-01-01

    We test the gauge/gravity duality between the N = 6 mass-deformed ABJM theory with U_k(N) × U_{-k}(N) gauge symmetry and the 11-dimensional supergravity on LLM geometries with SO(4)/ℤ_k × SO(4)/ℤ_k isometry. Our analysis is based on the evaluation of vacuum expectation values of chiral primary operators from the supersymmetric vacua of the mass-deformed ABJM theory and from the implementation of Kaluza-Klein (KK) holography on the LLM geometries. We focus on the chiral primary operator (CPO) with conformal dimension Δ = 1. The non-vanishing vacuum expectation value (vev) implies the breaking of conformal symmetry. In that case, we show that the variation of the holographic entanglement entropy (HEE) from its value in the CFT is related to the non-vanishing one-point function due to the relevant deformation as well as the source field. Applying the Ryu-Takayanagi HEE conjecture to the 4-dimensional gravity solutions, which are obtained from the KK reduction of the 11-dimensional LLM solutions, we calculate the variation of the HEE. We show how the vev and the value of the source field determine the HEE.

  17. Electron affinity of perhalogenated benzenes: A theoretical DFT study

    NASA Astrophysics Data System (ADS)

    Volatron, François; Roche, Cécile

    2007-10-01

    The potential energy surfaces (PES) of unsubstituted and perhalogenated benzene anions (C6X6^-, X = F, Cl, Br, and I) were explored by means of DFT-B3LYP calculations. In the F and Cl cases, seven extrema were located and characterized. In the Br and I cases, only one minimum and two extrema were found. In each case the minimum was recomputed at the CCSD(T) level. The electron affinities of C6X6 were calculated (ZPE included). The results obtained agree well with the experimental determinations when available. The values obtained in the X = Br and X = I cases are expected to be valuable predictions.

  18. Prediction of packaging seal life using thermoanalytical techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigrey, P.J.

    1997-11-01

    In this study, Thermogravimetric Analysis (TGA) has been used to study silicone, Viton and Ethylene Propylene (EPDM) rubber. The studies have shown that TGA accurately predicts the relative order of thermo-oxidative stability of these three materials from the calculated activation energies. As expected, the greatest thermal stability was found in silicone rubber followed by Viton and EPDM rubber. The calculated lifetimes for these materials were in relatively close agreement with published values. The preliminary results also accurately reflect decreased thermal stability and lifetime for EPDM rubber exposed to radiation and chemicals. These results suggest TGA provides a rapid method to evaluate material stability.
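    The abstract does not give its extrapolation formula, but TGA-based lifetime prediction typically rests on an Arrhenius-type rate law: a degradation rate measured at elevated temperature is scaled to the service temperature through the activation energy. A hedged sketch, with hypothetical activation energy, temperatures, and time-to-failure:

```python
# Arrhenius-type lifetime extrapolation (all numbers hypothetical).
from math import exp

R = 8.314  # gas constant, J/(mol K)

def lifetime_ratio(ea_j_mol, t_test_k, t_service_k):
    """Factor by which life at service temperature exceeds life at test,
    assuming a single thermally activated degradation process."""
    return exp(ea_j_mol / R * (1.0 / t_service_k - 1.0 / t_test_k))

t_fail_test_years = 0.02   # hypothetical time to failure at 150 C
print(t_fail_test_years * lifetime_ratio(100e3, 423.15, 323.15))  # at 50 C
```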

  19. The value of value of information: best informing research design and prioritization using current methods.

    PubMed

    Eckermann, Simon; Karnon, Jon; Willan, Andrew R

    2010-01-01

    Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that what EVPI needs to exceed varies with the cost of research design, which can vary from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate, as it can be large and further research not worthwhile or small and further research worthwhile. In contrast, each of questions 1-4 is shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize expected value of sample information (EVSI) minus expected costs across designs. In comparing complexity in use of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. More complex VOI methods, such as bootstrapping of partial EVPI (the expected value of partial perfect information), may have potential value in refining overall research design. However, Occam's razor must be seriously considered in application of these VOI methods, given their increased complexity and current limitations in informing decision making, with restriction to EVPI rather than EVSI and not allowing for important decision-making contexts. Initial use of CLT methods to focus these more complex partial VOI methods towards where they may be useful in refining optimal overall trial design is suggested. Integrating CLT methods with such partial VOI methods to allow estimation of partial EVSI is suggested in future research to add value to the current VOI toolkit.
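    To make the "maximize EVSI minus expected cost" criterion concrete, the sketch below uses the normal (CLT) preposterior framework: the incremental net benefit has a normal prior, a trial of n patients shrinks its uncertainty, and EVSI is the expected gain from deciding with the posterior mean rather than the prior mean. All priors, costs, and the affected population are hypothetical.

```python
# EVSI minus cost over candidate sample sizes, normal/CLT approximation.
import numpy as np
from scipy.stats import norm

mu0, sd0 = 200.0, 500.0    # prior mean/sd of incremental net benefit, $/patient
sigma = 4000.0             # per-patient sampling standard deviation
population = 100_000       # patients affected by the adoption decision
fixed_cost, cost_per_patient = 1e6, 2000.0

def evsi_per_patient(n):
    post_var = 1.0 / (1.0 / sd0**2 + n / sigma**2)
    s = np.sqrt(sd0**2 - post_var)       # preposterior sd of the posterior mean
    # E[max(posterior mean, 0)] for a N(mu0, s^2) preposterior distribution
    pre = mu0 * norm.cdf(mu0 / s) + s * norm.pdf(mu0 / s)
    return pre - max(mu0, 0.0)

sizes = np.arange(50, 5001, 50)
net_gain = np.array([population * evsi_per_patient(n)
                     - (fixed_cost + cost_per_patient * n) for n in sizes])
best = sizes[np.argmax(net_gain)]
print(f"optimal n = {best}, expected net gain = ${net_gain.max():,.0f}")
```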

  20. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maneru, F; Gracia, M; Gallardo, N

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the previous simulation of the treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and an ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma-camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to its high dependence on contouring: small lesion size, hot spots in healthy tissue, and blurred tumor limits can strongly affect the dose distribution in tumors. Extra work includes export and import of images and other DICOM files, creation and calculation of a dummy external-radiotherapy plan, the convolution calculation, and evaluation of the dose distribution with Dicompyler. Total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to carry out on the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
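    The core computation described above is a 3D convolution of the cumulated-activity map with the voxel-S-value kernel. A minimal sketch; the activity map and the power-law kernel are toy placeholders, whereas in practice the activity comes from the SPECT study and the kernel from tabulated 90Y voxel S values (e.g. the www.medphys.it download mentioned above).

```python
# Voxel-S-value dose as an activity-kernel convolution.
import numpy as np
from scipy.signal import fftconvolve

# Cumulated-activity map (placeholder): one hypothetical hot region
activity = np.zeros((64, 64, 64))
activity[30:34, 30:34, 30:34] = 1.0

# Toy kernel: dose per decay falling off with voxel distance
z, y, x = np.indices((9, 9, 9)) - 4
r = np.sqrt(x**2 + y**2 + z**2)
kernel = 1.0 / (1.0 + r**3)

dose = fftconvolve(activity, kernel, mode="same")
print("max voxel dose (arbitrary units):", dose.max())
```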

  1. Wind direction change criteria for wind turbine design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cliff, W.C.

    1979-01-01

    A method is presented for estimating the root mean square (rms) value of the wind direction change, Δθ(τ) = θ(t + τ) − θ(t), that occurs over the swept area of wind turbine rotor systems. An equation is also given for the rms value of the wind direction change that occurs at a single point in space, i.e., a direction change that a wind vane would measure. Assuming a normal probability density function for the lateral wind velocity change and relating this to angular changes, equations are given for calculating the expected number of wind direction changes, larger than an arbitrary value, that will occur in 1 hr, as well as the expected number that will occur during the design life of a wind turbine. The equations presented are developed using a small-angle approximation and are therefore considered appropriate for wind direction changes of less than 30°. The equations presented are based upon neutral atmospheric boundary-layer conditions and do not include information regarding events such as tornados, hurricanes, etc.
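    A schematic reading of the exceedance-count idea, assuming the direction change over a lag is zero-mean normal and successive lags are roughly independent; the standard deviation, threshold, and lag below are hypothetical, and the report's own equations should be preferred for design work.

```python
# Expected number of direction changes exceeding a threshold in one hour.
from math import erf, sqrt

def expected_exceedances(threshold_deg, sigma_deg, lag_s, hours=1.0):
    # Two-sided Gaussian tail: P(|dtheta| > t) = 1 - erf(t / (s * sqrt(2)))
    p_tail = 1.0 - erf(threshold_deg / (sigma_deg * sqrt(2.0)))
    n_lags = hours * 3600.0 / lag_s      # approx. independent lags per hour
    return n_lags * p_tail

print(expected_exceedances(threshold_deg=20.0, sigma_deg=8.0, lag_s=10.0))
```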

  2. Velocity diagnostics of electron beams within a 140 GHz gyrotron

    NASA Astrophysics Data System (ADS)

    Polevoy, Jeffrey Todd

    1989-06-01

    Experimental measurements of the average axial velocity v∥ of the electron beam within the M.I.T. 140 GHz MW gyrotron have been performed. The method involves the simultaneous measurement of the radial electrostatic potential of the electron beam V_p and the beam current I_b. V_p is measured through the use of a capacitive probe installed near or within the gyrotron cavity, while I_b is measured with a previously installed Rogowski coil. Three capacitive probes have been designed and built, and two have operated within the gyrotron. The probe results are repeatable and consistent with theory. The measurements of v∥ and calculations of the corresponding transverse-to-longitudinal beam velocity ratio α = v⊥/v∥ at the cavity have been made at various gyrotron operation parameters. These measurements will provide insight into the causes of discrepancies between theoretical RF interaction efficiencies and experimental efficiencies obtained in experiments with the M.I.T. 140 GHz MW gyrotron. The expected values of v∥ and α are determined through the use of a computer code (EGUN), which is used to model the cathode and anode regions of the gyrotron and computes the trajectories and velocities of the electrons within it. There is good correlation between the expected and measured values of α at low α, with the expected values from EGUN often falling within the standard errors of the measured values.

  3. Handling Density Conversion in TPS.

    PubMed

    Isobe, Tomonori; Mori, Yutaro; Takei, Hideyuki; Sato, Eisuke; Tadano, Kiichi; Kobayashi, Daisuke; Tomita, Tetsuya; Sakae, Takeji

    2016-01-01

    Conversion from CT value to density is essential in a radiation treatment planning system. In photon therapy, the CT value is generally converted to electron density. In the energy range of therapeutic photons, interactions between photons and materials are dominated by Compton scattering, whose cross-section depends on the electron density. The dose distribution is obtained by calculating TERMA and kernel using the electron density, where TERMA is the energy transferred from primary photons and the kernel describes the volume over which the liberated electrons spread. Recently, a new method was introduced which uses the physical density; this method is expected to be faster and more accurate than that using the electron density. As for particle therapy, dose can be calculated with a CT-to-stopping-power conversion, since the stopping power depends on the electron density. The CT-to-stopping-power conversion table, also called the CT-to-water-equivalent-range conversion, is an essential concept for particle therapy.
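    In practice such conversions are stored as piecewise-linear lookup tables between calibration points. A minimal sketch; the calibration points below are hypothetical, not vendor or published values.

```python
# CT number to physical density via piecewise-linear interpolation.
import numpy as np

hu_points = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0])
density = np.array([0.001, 0.93, 1.00, 1.07, 1.60, 2.80])  # g/cm^3, hypothetical

def ct_to_density(hu):
    """Interpolate between calibration points; clamps outside the range."""
    return np.interp(hu, hu_points, density)

print(ct_to_density(np.array([-500.0, 40.0, 1500.0])))
```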

  4. Classification of customer lifetime value models using Markov chain

    NASA Astrophysics Data System (ADS)

    Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi

    2017-10-01

    A firm’s potential future reward from a customer can be quantified by the customer lifetime value (CLV). There are several mathematical methods to calculate it; one uses a Markov-chain stochastic model. Here, a customer is assumed to pass through a number of states, with transitions between states satisfying the Markov property. Given a set of states for a customer and the transition relationships between them, Markov models can be built to describe the customer's behavior. In these models, the CLV is defined as a vector containing the CLV of a customer for each initial state. In this paper we present a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with progressively more states, each development addressing weaknesses of the previous model. The final models can be expected to describe the real behavior of customers in a firm.
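    As an illustration of the vector definition above, a two-state sketch: with transition matrix P, per-period expected rewards r, and discount factor γ, the infinite-horizon discounted CLV vector solves v = r + γPv. The states, rewards, and discount factor are hypothetical.

```python
# CLV of a two-state (active/inactive) Markov customer model.
import numpy as np

P = np.array([[0.8, 0.2],     # active   -> active / inactive
              [0.3, 0.7]])    # inactive -> active / inactive
r = np.array([100.0, 0.0])    # expected profit per period in each state
gamma = 0.9                   # per-period discount factor

# v = r + gamma * P v   =>   (I - gamma * P) v = r
v = np.linalg.solve(np.eye(2) - gamma * P, r)
print("CLV by starting state (active, inactive):", v)
```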

  5. Maximal Predictability Approach for Identifying the Right Descriptors for Electrocatalytic Reactions.

    PubMed

    Krishnamurthy, Dilip; Sumaria, Vaidish; Viswanathan, Venkatasubramanian

    2018-02-01

    Density functional theory (DFT) calculations are being routinely used to identify new material candidates that approach activity near fundamental limits imposed by thermodynamics or scaling relations. DFT calculations are associated with inherent uncertainty, which limits the ability to delineate materials (distinguishability) that possess high activity. Development of error-estimation capabilities in DFT has enabled uncertainty propagation through activity-prediction models. In this work, we demonstrate an approach to propagating uncertainty through thermodynamic activity models, leading to a probability distribution of the computed activity and thereby its expectation value. A new metric, prediction efficiency, is defined, which provides a quantitative measure of the ability to distinguish the activity of materials and can be used to identify the optimal descriptor(s) ΔG_opt. We demonstrate the framework for four important electrochemical reactions: hydrogen evolution, chlorine evolution, oxygen reduction, and oxygen evolution. Future studies could utilize expected activity and prediction efficiency to significantly improve the prediction accuracy of highly active material candidates.
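    The propagation step can be sketched by sampling the descriptor from a Gaussian that reflects the DFT error and pushing each sample through an activity model; the expectation value is then the sample mean. The volcano-shaped activity expression and all numbers below are hypothetical stand-ins, not the paper's models.

```python
# Monte Carlo propagation of DFT uncertainty through an activity model.
import numpy as np

rng = np.random.default_rng(0)
dg_mean, dg_sigma = -0.15, 0.10   # descriptor value and DFT error (eV)

def activity(dg):
    # Hypothetical volcano: activity peaks at the optimal descriptor value 0
    return -abs(dg)

samples = rng.normal(dg_mean, dg_sigma, size=100_000)
acts = activity(samples)
print("expected activity:", acts.mean(), "+/-", acts.std())
```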

  6. Effects of Increasing Drag on Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Frigm, Ryan Clayton; McKinley, David P.

    2010-01-01

    Conjunction Assessment Risk Analysis relies heavily on the computation of the Probability of Collision (Pc) and the understanding of the sensitivity of this calculation to the position errors as defined by the covariance. In Low Earth Orbit (LEO), covariance is predominantly driven by perturbations due to atmospheric drag. This paper describes the effects of increasing atmospheric drag through Solar Cycle 24 on Pc calculations. The process of determining these effects is found through analyzing solar flux predictions on Energy Dissipation Rate (EDR), historical relationship between EDR and covariance, and the sensitivity of Pc to covariance. It is discovered that while all LEO satellites will be affected by the increase in solar activity, the relative effect is more significant in the LEO regime around 700 kilometers in altitude compared to 400 kilometers. Furthermore, it is shown that higher Pc values can be expected at larger close approach miss distances. Understanding these counter-intuitive results is important to setting Owner/Operator expectations concerning conjunctions as solar maximum approaches.

  7. Evaluation of Fibre Lifetime in Optical Ground Wire Transmission Lines

    NASA Astrophysics Data System (ADS)

    Grunvalds, R.; Ciekurs, A.; Porins, J.; Supe, A.

    2017-06-01

    In the research, measurements of polarisation mode dispersion of two OPGWs (optical ground wire transmission lines), in total four fibres, have been carried out, and the expected lifetime of the infrastructure has been assessed on the basis of these measurements. The cables under consideration were installed in 1995 and 2011, respectively. Measurements have shown that polarisation mode dispersion values for the cable installed in 1995 are four times higher than those for the cable installed in 2011, which can mainly be explained by technological differences in fibre production and lower fibre polarisation mode dispersion requirements in 1995, due to the lack of high-speed (over 10 Gbit/s) optical transmission systems at that time. A methodology for calculating failure and failure-free operation probabilities from the measured polarisation mode dispersion parameters is proposed in the paper. Based on the reliability calculations, the expected lifetime is then predicted, showing that all measured fibres will most likely remain operational within the minimum theoretical service life of 25 years accepted by the industry.

  8. Black bear habitat use in relation to food availability in the Interior Highlands of Arkansas

    USGS Publications Warehouse

    Clark, Joseph D.; Clapp, Daniel L.; Smith, Kimberly G.; Ederington, Belinda

    1994-01-01

    A black bear (Ursus americanus) food value index (FVI) was developed and calculated for forest cover type classifications on Ozark Mountain (White Rock) and Ouachita Mountain (Dry Creek) study areas in western Arkansas. FVIs are estimates of bear food production capabilities of the major forest cover types and were calculated using percent cover, mean fruit production scorings, and the dietary percentage of each major plant food species as variables. Goodness-of-fit analyses were used to determine use of forest cover types by 23 radio-collared female bears. Habitat selection by forest cover type was not detected on White Rock but was detected on Dry Creek. Use of habitats on Dry Creek appeared to be related to food production with the exception of regeneration areas, which were used less than expected but had a high FVI ranking. In general, pine cover types had low FVI rankings and were used less than expected by bears. Forest management implications are discussed. 

  9. The measurement of radiation exposure of astronauts by radiochemical techniques

    NASA Technical Reports Server (NTRS)

    Brodzinski, R. L.

    1972-01-01

    Cosmic radiation doses to the crews of the Apollo 14, 15, and 16 missions of 142 ± 80, 340 ± 80, and 210 ± 130 mR, respectively, were calculated from the specific activities of Na-22 and Na-24 in the postflight urine specimens of the astronauts. The specific activity of Fe-59 was higher in the urine than in the feces of the Apollo 14 and 15 astronauts, and a possible explanation is given. The concentrations of K-40, K-42, Cr-51, Co-60, and Cs-137 in the urine are also reported for these astronauts. The radiation doses received by pilots and navigators flying high-altitude missions during the solar flare of March 27-30, 1972 were calculated from the specific activity of Na-24 in their urine. These values are compared with the expected radiation dose calculated from the known shape and intensity of the proton spectrum and demonstrate the magnitude of atmospheric shielding. The concentrations of Na, K, Rb, Cs, Fe, Co, Ag, Zn, Hg, As, Sb, Se, and Br were measured in the urine specimens from the Apollo 14 and 15 astronauts by neutron activation analysis. The mercury and arsenic levels were much higher than expected.

  10. Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.

    PubMed

    Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G

    2010-04-16

    We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.

  11. The option value of innovative treatments in the context of chronic myeloid leukemia.

    PubMed

    Sanchez, Yuri; Penrod, John R; Qiu, Xiaoli Lily; Romley, John; Thornton Snider, Julia; Philipson, Tomas

    2012-11-01

    To quantify in the context of chronic myeloid leukemia (CML) the additional value patients receive when innovative treatments enable them to survive until the advent of even more effective future treatments (ie, the "option value"). Observational study using data from the Surveillance, Epidemiology and End Results (SEER) cancer registry comprising all US patients with CML diagnosed between 2000 and 2008 (N = 9,760). We quantified the option value of recent breakthroughs in CML treatment by first conducting retrospective survival analyses on SEER data to assess the effectiveness of TKI treatments, and then forecasting survival from CML and other causes to measure expected future medical progress. We then developed an analytical framework to calculate option value of innovative CML therapies, and used an economic model to value these gains. We calculated the option value created both by future innovations in CML treatment and by medical progress in reducing background mortality. For a recently diagnosed CML patient, the option value of innovative therapies from future medical innovation amounts to 0.76 life-years. This option value is worth $63,000, equivalent to 9% of the average survival gains from existing treatments. Future innovations in CML treatment jointly account for 96% of this benefit. The option value of innovative treatments has significance in the context of CML and, more broadly, in disease areas with rapid innovation. Incorporating option value into traditional valuations of medical innovations is both a feasible and a necessary practice in health technology assessment.

  12. First principles calculations of stability and lithium intercalation potentials of ZnCo2O4

    NASA Astrophysics Data System (ADS)

    Yu, L. C.; Wu, J.; Liu, H.; Zhang, Y. N.

    2015-03-01

    Among the metal oxides, which are the most widely investigated alternative anodes for use in lithium ion batteries (LIBs), binary and ternary tin oxides have received special attention due to their high capacity values. ZnCo2O4 is a promising candidate as the anode material for LIB, and one can expect a total capacity corresponding to 7.0 - 8.33 mol of recyclable Li per mole of ZnCo2O4. Here we studied the structural stability, electronic properties, diffusion barrier and lithium intercalation potentials of ZnCo2O4 through density functional calculations. The calculated structural and energetic parameters are comparable with experiments. Our DFT studies provide insights in understanding the mechanism of lithium ion displacement reactions in this ternary metal oxide.

  13. MO-FG-202-08: Real-Time Monte Carlo-Based Treatment Dose Reconstruction and Monitoring for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Shi, F; Gu, X

    2016-06-15

    Purpose: This proof-of-concept study is to develop a real-time Monte Carlo (MC) based treatment-dose reconstruction and monitoring system for radiotherapy, especially for treatments with complicated delivery, to catch treatment delivery errors at the earliest possible opportunity and interrupt the treatment only when an unacceptable dosimetric deviation from expectation occurs. Methods: First, an offline scheme is launched to pre-calculate the expected dose from the treatment plan, used as ground truth for real-time monitoring later. Then an online scheme with three concurrent threads is launched during treatment delivery, to reconstruct and monitor the patient dose in a temporally resolved fashion in real time. Thread T1 acquires machine status every 20 ms to calculate and accumulate a fluence map (FM). Once our accumulation threshold is reached, T1 transfers the FM to T2 for dose reconstruction and starts to accumulate a new FM. A GPU-based MC dose calculation is performed on T2 when the MC dose engine is ready and a new FM is available. The reconstructed instantaneous dose is directed to T3 for dose accumulation and real-time visualization. Multiple dose metrics (e.g., maximum and mean dose for targets and organs) are calculated from the currently accumulated dose and compared with the pre-calculated expected values. Once the discrepancies go beyond our tolerance, an error message is sent to interrupt the treatment delivery. Results: A VMAT head-and-neck patient case was used to test the performance of our system; real-time machine status acquisition was simulated. The differences between the actual dose metrics and the expected ones were 0.06%-0.36%, indicating an accurate delivery. A ~10 Hz frequency of dose reconstruction and monitoring was achieved, with 287.94 s of online computation time compared to 287.84 s of treatment delivery time. Conclusion: Our study has demonstrated the feasibility of computing a dose distribution in a temporally resolved fashion in real time and quantitatively and dosimetrically monitoring the treatment delivery.
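    A highly simplified sketch of the three-thread pipeline: T1 stands in for the 20-ms machine-status polling and fluence accumulation, T2 for the GPU Monte Carlo dose engine (a stub here), and T3 for dose accumulation and the tolerance check against the pre-calculated expectation. All numbers and stubs are hypothetical.

```python
# Producer/consumer sketch of real-time dose reconstruction and monitoring.
import queue
import threading

fluence_q, dose_q = queue.Queue(), queue.Queue()
expected_total, tolerance = 10.0, 0.05   # hypothetical expected dose metric

def t1_acquire():
    for _ in range(100):        # stands in for 20-ms machine-status polls
        fluence_q.put(0.1)      # accumulated fluence chunk
    fluence_q.put(None)         # delivery finished

def t2_dose():
    while (fm := fluence_q.get()) is not None:
        dose_q.put(fm)          # stub: MC dose reconstructed from the chunk
    dose_q.put(None)

def t3_monitor():
    total = 0.0
    while (d := dose_q.get()) is not None:
        total += d              # accumulate dose (and visualize, in reality)
    dev = abs(total - expected_total) / expected_total
    print(f"deviation {dev:.2%}:", "OK" if dev < tolerance else "INTERRUPT")

threads = [threading.Thread(target=f) for f in (t1_acquire, t2_dose, t3_monitor)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```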

  14. Construct Demo Input Deck

    DTIC Science & Technology

    2010-06-01

    information diffusion patterns when bridging agents span two otherwise separate groups, and then how such a simulation would be implemented using ...All users may find it more useful to use the table of contents in order to read sections of interest and reference chain to other parts of the...expression evaluates to 0+20-1 = 19, which would be the end-value expected for a zero-indexed group of twenty agents. A similar calculation can be used to

  15. Subfemtosecond quantum nuclear dynamics in water isotopomers.

    PubMed

    Rao, B Jayachander; Varandas, A J C

    2015-05-21

    Subfemtosecond quantum dynamics studies of all water isotopomers in the X̃ ²B1 and Ã ²A1 electronic states of the cation formed by Franck-Condon ionization of the neutral ground electronic state are reported. Using the ratio of the autocorrelation functions for the isotopomers, as obtained from the solution of the time-dependent Schrödinger equation in a grid representation, high-order harmonic generation (HHG) signals are calculated as a function of time. The results are found to be in agreement with the available experimental findings and with our earlier study for D2O(+)/H2O(+). Maxima are predicted in the autocorrelation function ratio at various times. Their origin and occurrence are explained by calculating expectation values of the bond lengths and bond angle of the water isotopomers as a function of time. The values so calculated for the ²B1 and ²A1 electronic states of the cation show quasiperiodic oscillations that can be associated with the times at which the nuclear wave packet reaches the minima of the potential energy surface and are thereby responsible for the peaks in the HHG signals.

  16. A path integral molecular dynamics study of the hyperfine coupling constants of the muoniated and hydrogenated acetone radicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oba, Yuki; Kawatsu, Tsutomu; Tachikawa, Masanori, E-mail: tachi@yokohama-cu.ac.jp

    2016-08-14

    The on-the-fly ab initio density functional path integral molecular dynamics (PIMD) simulations, which can account for both the nuclear quantum effect and the thermal effect, were carried out to evaluate the structures and “reduced” isotropic hyperfine coupling constants (HFCCs) of the muoniated and hydrogenated acetone radicals (2-muoxy-2-propyl and 2-hydroxy-2-propyl) in vacuo. The reduced HFCC value from a simple geometry optimization calculation, without either the nuclear quantum effect or the thermal effect, is -8.18 MHz, and that from a standard ab initio molecular dynamics simulation, with only the thermal effect and without the nuclear quantum effect, is 0.33 MHz at 300 K; these two methods cannot distinguish between the muoniated and hydrogenated acetone radicals. In contrast, the reduced HFCC value of the muoniated acetone radical from our PIMD simulation is 32.1 MHz, which is about 8 times larger than that for the hydrogenated radical (3.97 MHz) at the same level of calculation. We have found that the HFCC values are highly correlated with the local molecular structures; in particular, the Mu-O bond length in the muoniated acetone radical is elongated due to the large nuclear quantum effect of the muon, which makes the expectation value of the HFCC larger. Although our PIMD result calculated in vacuo is about 4 times larger than the measured experimental value in aqueous solvent, the ratio of the HFCC values between the muoniated and hydrogenated acetone radicals in vacuo is in reasonable agreement with the ratio of the experimental values in aqueous solvent (8.56 MHz and 0.9 MHz); the explicit presence of solvent molecules thus appears essential for quantitative reproduction, as it substantially decreases the reduced muon HFCC relative to the in vacuo calculations.

  17. Crustal and uppermost mantle S-wave velocity below the East European Craton in northern Poland from the inversion of ambient-noise records

    NASA Astrophysics Data System (ADS)

    Lepore, Simone; Polkowski, Marcin; Grad, Marek

    2018-02-01

    The P-wave velocities (Vp) within the East European Craton in Poland are well known from several seismic experiments, which made it possible to build a high-resolution 3D model down to 60 km depth. However, these seismic data do not provide sufficient information about the S-wave velocities (Vs). For this reason, this paper presents the values of lithospheric Vs and P-wave-to-S-wave velocity ratios (Vp/Vs) calculated from the ambient noise recorded during 2014 at the "13 BB star" seismic array (13 stations, 78 midpoints) located in northern Poland. The 3D Vp model in the area of the array consists of six sedimentary layers with a total thickness of 3-7 km and Vp in the range 1.8-5.3 km/s, a three-layer crystalline crust of total thickness 40 km with Vp within 6.15-7.15 km/s, and the uppermost mantle, where Vp is about 8.25 km/s. The Vs and Vp/Vs values are calculated by inversion of the surface-wave dispersion curves extracted from the noise cross-correlation between all station pairs. Due to the strong velocity differences among the layers, several modes are recognized in the 0.02-1 Hz frequency band; therefore, multimodal Monte Carlo inversions are applied. The calculated Vs and Vp/Vs values in the sedimentary cover range within 0.99-2.66 km/s and 1.75-1.97, as expected. In the upper crust, the Vs value (3.48 ± 0.10 km/s) is very low compared to the starting value of 3.75 ± 0.10 km/s. Consequently, the Vp/Vs value is very large (1.81 ± 0.03). To explain this, the calculated values are compared with those for other old cratonic areas.

  18. Effects of data structure on the estimation of covariance functions to describe genotype by environment interactions in a reaction norm model

    PubMed Central

    Calus, Mario PL; Bijma, Piter; Veerkamp, Roel F

    2004-01-01

    Covariance functions have been proposed to predict breeding values and genetic (co)variances as a function of phenotypic within herd-year averages (environmental parameters) to include genotype by environment interaction. The objective of this paper was to investigate the influence of the definition of environmental parameters and of non-random use of sires on expected breeding values and estimated genetic variances across environments. Breeding values were simulated as a linear function of simulated herd effects. The definition of environmental parameters hardly influenced the results. In situations with random use of sires, estimated genetic correlations between the trait expressed in different environments were 0.93, 0.93 and 0.97 while simulated at 0.89, and estimated genetic variances deviated up to 30% from the simulated values. Non-random use of sires, poor genetic connectedness and small herd size had a large impact on the estimated covariance functions, expected breeding values and calculated environmental parameters. Estimated genetic correlations between a trait expressed in different environments were biased upwards, and breeding values were more biased as genetic connectedness became poorer and herd composition more diverse. The best possible solution at this stage is to use environmental parameters combining large numbers of animals per herd, while losing some information on genotype by environment interaction in the data. PMID:15339629

  19. Freshwater Mussel Shell δ13C Values as a Proxy for δ13CDIC in a Polluted, Temperate River

    NASA Astrophysics Data System (ADS)

    Graniero, L. E.; Gillikin, D. P.; Surge, D. M.

    2017-12-01

    Freshwater mussel shell δ13C values have been examined as an indicator of the ambient δ13C composition of dissolved inorganic carbon (DIC) in temperate rivers. However, shell δ13C values may be obscured by the assimilation of respired, metabolic carbon (CM) derived from the organism's diet. Water δ18O and δ13CDIC values were collected fortnightly from August 2015 through July 2017 from three sites (one agricultural, one downstream of a wastewater treatment plant, one urban) in the Neuse River, NC to test the reliability of Elliptio complanata shell δ13C values as a proxy for δ13CDIC values. Muscle, mantle, gill, and stomach δ13C values were analyzed to approximate the %CM incorporated into the shell. All tissue δ13C values were within 2‰ of each other, which equates to a ±1% difference in calculated %CM. As such, muscle tissue δ13C values will be used for calculating the %CM, because they have the slowest turnover rate of the tissues sampled. Water temperature and δ18O values were used to calculate predicted aragonite shell δ18O values (δ18Oar) based on the aragonite-water fractionation relationship. To assign dates to each shell microsample, predicted δ18Oar values were compared to high-resolution serially sampled shell values. Consistent with previous studies, E. complanata cease growth in winter when temperatures are below about 12 °C. Preliminary results indicate that during the growing season, shell δ13C values are lower than expected equilibrium values, reflecting the assimilation of 15% CM, on average. Shell δ13C values are not significantly different than δ13CDIC values, but do not capture the full range of δ13CDIC values during each growing season. Thus, δ13C values of E. complanata shells can be used to reliably reconstruct past δ13CDIC values within 2‰ of coeval values. Further research will investigate how differing land-use affects the relationship between shell δ13C, CM, and δ13CDIC values.

  20. Who took the "x" out of expectancy-value theory? A psychological mystery, a substantive-methodological synergy, and a cross-national generalization.

    PubMed

    Nagengast, Benjamin; Marsh, Herbert W; Scalas, L Francesca; Xu, Man K; Hau, Kit-Tai; Trautwein, Ulrich

    2011-08-01

    Expectancy-value theory (EVT) is a dominant theory of human motivation. Historically, the Expectancy × Value interaction, in which motivation is high only if both expectancy and value are high, was central to EVT. However, the Expectancy × Value interaction mysteriously disappeared from published research more than 25 years ago. Using large representative samples of 15-year-olds (N = 398,750) from 57 diverse countries, we attempted to solve this mystery by testing Expectancy × Value interactions using latent-variable models with interactions. Expectancy (science self-concept), value (enjoyment of science), and the Expectancy × Value interaction all had statistically significant positive effects on both engagement in science activities and intentions of pursuing scientific careers; these results were similar for the total sample and for nearly all of the 57 countries considered separately. This study, apparently the strongest cross-national test of EVT ever undertaken, supports the generalizability of EVT predictions--including the "lost" Expectancy × Value interaction.

  1. Experimental validation of photon-heating calculation for the Jules Horowitz Reactor

    NASA Astrophysics Data System (ADS)

    Lemaire, M.; Vaglio-Gaudard, C.; Lyoussi, A.; Reynard-Carette, C.; Di Salvo, J.; Gruel, A.

    2015-04-01

    The Jules Horowitz Reactor (JHR) is the next Material-Testing Reactor (MTR) under construction at CEA Cadarache. High values of photon heating (up to 20 W/g) are expected in this MTR. As temperature is a key parameter for material behavior, the accuracy of photon-heating calculation in the different JHR structures is an important stake with regard to JHR safety and performance. In order to experimentally validate the calculation of photon heating in the JHR, an integral experiment called AMMON was carried out in the critical mock-up EOLE at CEA Cadarache to help ascertain the calculation bias and its associated uncertainty. Nuclear heating was measured in different JHR-representative AMMON core configurations using ThermoLuminescent Detectors (TLDs) and Optically Stimulated Luminescent Detectors (OSLDs). This article presents the interpretation methodology and the calculation/experiment (C/E) ratios for all the TLD and OSLD measurements conducted in AMMON. It then discusses the representativeness of the AMMON experiment with regard to the JHR and establishes the calculation biases (and their associated uncertainties) applicable to photon-heating calculations for the JHR.

  2. Value of information analysis as a decision support tool for biosecurity: Chapter 15

    USGS Publications Warehouse

    Runge, Michael C.; Rout, Tracy; Spring, Daniel; Walshe, Terry

    2017-01-01

    This chapter demonstrates the economic concept of ‘value of information’(VOI), and how biosecurity managers can use VOI analysis to decide whether or not to reduce uncertainty by collecting additional information through monitoring, experimentation, or some other form of research. We first explore how some uncertainties may be scientifically interesting to resolve, but ultimately irrelevant to decision-making. We then develop a prototype model where a manager must choose between eradication or containment of an infestation. Eradication is more cost-effective for smaller infestations, but once the extent reaches a certain size it becomes more cost-effective to contain. When choosing between eradication and containment, how much does knowing the extent of the infestation more exactly improve the outcome of the decision? We calculate the expected value of perfect information (EVPI) about the extent, which provides an upper limit for the value of reducing uncertainty. We then illustrate the approach using the example of red imported fire ant management in south-east Queensland. We calculate the EVPI for three different uncertain variables: the extent of the infestation, the sensitivity (true positive rate) of remote sensing, and the efficacy of baiting.
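
    A minimal sketch of the EVPI calculation by Monte Carlo, with hypothetical cost functions standing in for the eradication/containment model (the prior, costs, and break-even extent are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        extent = rng.lognormal(mean=3.0, sigma=0.7, size=100_000)  # uncertain extent (ha)

        def net_benefit(action, extent):
            # hypothetical costs: eradication scales steeply with extent,
            # containment has a high fixed cost that pays off when extent is large
            return -5.0 * extent if action == "eradicate" else -(120.0 + 1.5 * extent)

        actions = ("eradicate", "contain")
        nb = {a: net_benefit(a, extent) for a in actions}

        ev_current = max(nb[a].mean() for a in actions)                  # best action under uncertainty
        ev_perfect = np.maximum(nb["eradicate"], nb["contain"]).mean()   # decide after learning extent
        print("EVPI =", ev_perfect - ev_current)                         # upper limit on value of research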

  3. Periodic Orbits and Semiclassical Form Factor in Barrier Billiards

    NASA Astrophysics Data System (ADS)

    Giraud, O.

    2005-11-01

    Using heuristic arguments based on the trace formulas, we analytically calculate the semiclassical two-point correlation form factor for a family of rectangular billiards with a barrier of height irrational with respect to the side of the billiard and located at any rational position p/q from the side. To do this, we first obtain the asymptotic density of lengths for each family of periodic orbits by a Siegel-Veech formula. The result obtained for these pseudo-integrable, non-Veech billiards is different from, but not far from, the value of 1/2 expected for semi-Poisson statistics, and from values obtained previously in the case of Veech billiards.

  4. Life expectancy in elderly patients following burns injury.

    PubMed

    Sepehripour, Sarvnaz; Duggineni, Sirisha; Shahsavari, Somaya; Dheansa, Baljit

    2018-05-18

    Burn injuries commonly occur in vulnerable age and social groups. Previous research has shown that frailty may represent a more important marker of adverse outcome in healthcare than chronological age (Roberts et al., 2012). In this paper we determined the relationship between burn injury, frailty, co-morbidities and long-term survival. Data were collected retrospectively from patients aged 75 with burn injuries, treated and discharged at Queen Victoria Hospital. The Clinical Frailty Scale (Rockwood et al., 2005) was used to calculate frailty at the time of admission. The expected mortality age (life expectancy) of deceased patients was obtained from two survival predictors. The data show a statistically significant correlation between frailty score and complications, and a statistically significant correlation between total body surface area percentage and complications. No significant difference was found between expected and observed age of death or life expectancy amongst the deceased (p value of 0.109). Based on the data from our unit, sustaining a burn as an elderly person does not reduce life expectancy. Medical and surgical complications (immediate, early and late), although more frequent with greater frailty and burn TBSA, do not adversely affect survival in this population. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Online irrigation service for fruit and vegetable crops at farmers' sites

    NASA Astrophysics Data System (ADS)

    Janssen, W.

    2009-09-01

    Agrowetter irrigation advice, a product of the German Weather Service (63067 Offenbach), calculates the present soil moisture, as well as the soil moisture expected over the next 5 days, for over 30 different crops. It is based on a water-balance model and provides targeted recommendations for irrigation, matching irrigation inputs to the soil in order to avoid infiltration and, as a consequence, the undesired movement of nitrate and plant protectants into the groundwater. This interactive online system takes into account the user's individual circumstances, such as crop and soil characteristics and the precipitation and irrigation amounts at the user's site. Each user may run up to 16 different enquiries simultaneously (different crops or different emergence dates) and can calculate the individual soil moistures for his fields with a maximum effort of 5 minutes per week. The sources of water are precipitation and irrigation, whereas water losses occur due to evapotranspiration and infiltration into the ground. Evapotranspiration is calculated by multiplying a reference evapotranspiration (maximum evapotranspiration over grass) with crop coefficients (kc values) developed by the Geisenheim Research Centre, Vegetable Crops Branch; the kc values depend on the crop and the individual plant development stage. The reference evapotranspiration is calculated with the Penman method, based on daily values, from a base weather station the user has chosen (out of around 500 weather stations). After choosing a crop and soil type, the user must manually enter the precipitation measured at the site, the irrigation water inputs, and the dates of a few phenological stages. Economic aspects can be considered by changing the soil-moisture values at which irrigation recommendations start, from optimal to merely sufficient plant supply. Previous comparative measurements carried out by the Agricultural Administration of Baden-Württemberg for potatoes, onions, vine stocks, and strawberries agreed very well with the calculations.
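
    A minimal sketch of a daily bucket-type water balance of this kind (all parameter values are illustrative; the operational model is more detailed):

        def soil_moisture_series(sm0, rain, irrigation, et_ref, kc, field_capacity):
            """Daily bucket model (units: mm): gains from rain and irrigation,
            losses from crop evapotranspiration (kc * reference ET); water
            above field capacity is assumed lost to deep infiltration."""
            sm, series = sm0, []
            for p, irr, et0, k in zip(rain, irrigation, et_ref, kc):
                sm = sm + p + irr - k * et0
                sm = max(0.0, min(sm, field_capacity))  # drain excess, floor at zero
                series.append(sm)
            return series

        print(soil_moisture_series(60.0, rain=[0, 4, 0], irrigation=[0, 0, 10],
                                   et_ref=[3.5, 3.0, 4.0], kc=[0.9, 0.9, 1.0],
                                   field_capacity=80.0))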

  6. Measuring signal-to-noise ratio in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Clarke, Geoffrey D.

    2011-01-01

    Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, the generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding, with acceleration factors (R) of 2–4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated, and Bland–Altman analysis was used to determine agreement between the various SNR methods. The estimated g-factor values associated with each method of SNR calculation and PPI reconstruction method were also subjected to assessments that considered the effects on SNR due to reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only factor of the three we considered that showed significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two, it is recommended that the image subtraction method be used for SNR calculations when evaluating PPI protocols, due to its relative accuracy and ease of implementation. PMID:21978049
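
    A sketch of the recommended image-subtraction (difference) method, assuming two repeated acquisitions and a uniform ROI (synthetic data here, not the phantom measurements):

        import numpy as np

        def snr_subtraction(img1, img2, roi):
            """Two-acquisition SNR: signal from the mean of the averaged images,
            noise from the s.d. of their difference; the sqrt(2) accounts for
            the doubled noise variance in the subtraction."""
            signal = 0.5 * (img1[roi] + img2[roi]).mean()
            noise = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2.0)
            return signal / noise

        rng = np.random.default_rng(0)
        truth = np.full((64, 64), 100.0)
        img1, img2 = (truth + rng.normal(0, 5, truth.shape) for _ in range(2))
        roi = (slice(16, 48), slice(16, 48))
        print(snr_subtraction(img1, img2, roi))  # ~100/5 = 20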

  7. Does metabolic compensation explain the majority of less-than-expected weight loss in obese adults during a short-term severe diet and exercise intervention?

    PubMed

    Byrne, N M; Wood, R E; Schutz, Y; Hills, A P

    2012-11-01

    We investigated to what extent changes in metabolic rate and composition of weight loss explained the less-than-expected weight loss in obese men and women during a diet-plus-exercise intervention. In all, 16 obese men and women (41 ± 9 years; body mass index (BMI) 39 ± 6 kg m(-2)) were investigated in energy balance before, after and twice during a 12-week very-low-energy diet (565-650 kcal per day) plus exercise (aerobic plus resistance training) intervention. The relative energy deficit (EDef) from baseline requirements was severe (74%-87%). Body composition was measured by deuterium dilution and dual energy X-ray absorptiometry, and resting metabolic rate (RMR) was measured by indirect calorimetry. Fat mass (FM) and fat-free mass (FFM) were converted into energy equivalents using constants 9.45 kcal per g FM and 1.13 kcal per g FFM. Predicted weight loss was calculated from the EDef using the '7700 kcal kg(-1) rule'. Changes in weight (-18.6 ± 5.0 kg), FM (-15.5 ± 4.3 kg) and FFM (-3.1 ± 1.9 kg) did not differ between genders. Measured weight loss was on average 67% of the predicted value, but ranged from 39% to 94%. Relative EDef was correlated with the decrease in RMR (R=0.70, P<0.01), and the decrease in RMR correlated with the difference between actual and expected weight loss (R=0.51, P<0.01). Changes in metabolic rate explained on average 67% of the less-than-expected weight loss, and variability in the proportion of weight lost as FM accounted for a further 5%. On average, after adjustment for changes in metabolic rate and body composition of weight lost, actual weight loss reached 90% of the predicted values. Although weight loss was 33% lower than predicted at baseline from standard energy equivalents, the majority of this differential was explained by physiological variables. Although lower-than-expected weight loss is often attributed to incomplete adherence to prescribed interventions, the influence of baseline calculation errors and metabolic downregulation should not be discounted.
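
    The bookkeeping behind the comparison can be sketched as follows (the daily deficit in the example is illustrative, not the study's measured value):

        def predicted_loss_kg(energy_deficit_kcal):
            """Naive '7700 kcal per kg' rule for expected weight loss."""
            return energy_deficit_kcal / 7700.0

        def stored_energy_change_kcal(delta_fm_g, delta_ffm_g):
            """Energy equivalent of a measured composition change
            (9.45 kcal/g fat mass, 1.13 kcal/g fat-free mass)."""
            return 9.45 * delta_fm_g + 1.13 * delta_ffm_g

        deficit = 1700 * 84  # hypothetical mean daily deficit (kcal) x 84 days
        print(predicted_loss_kg(deficit))                # ~18.5 kg predicted
        print(stored_energy_change_kcal(-15500, -3100))  # ~-150,000 kcal stored energy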

  8. Correcting power and p-value calculations for bias in diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Landman, Bennett A

    2013-07-01

    Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability, but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha-rate) using a two-sided hypothesis testing framework. We present theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
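
    The basic mechanism, alpha-rate inflation when a bias acts like a mean shift in a two-sided z-test, can be sketched as:

        from scipy.stats import norm

        def actual_alpha(bias, se, nominal_alpha=0.05):
            """Two-sided z-test rejection rate under the null when the
            estimate carries a systematic bias (same units as se)."""
            z = norm.ppf(1 - nominal_alpha / 2)
            shift = bias / se
            return norm.sf(z - shift) + norm.cdf(-z - shift)

        for b in (0.0, 0.5, 1.0):                             # bias in standard-error units
            print(b, round(actual_alpha(bias=b, se=1.0), 3))  # 0.05, 0.079, 0.17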

  9. Predictive isotopic biogeochemistry: hydrocarbons from anoxic marine basins

    NASA Technical Reports Server (NTRS)

    Freeman, K. H.; Wakeham, S. G.; Hayes, J. M.

    1994-01-01

    Carbon isotopic compositions were determined for individual hydrocarbons in water column and sediment samples from the Cariaco Trench and Black Sea. In order to identify hydrocarbons derived from phytoplankton, the isotopic compositions expected for biomass of autotrophic organisms living in surface waters of both localities were calculated based on the concentrations of CO2(aq) and the isotopic compositions of dissolved inorganic carbon. These calculated values are compared to measured delta values for particulate organic carbon and for individual hydrocarbon compounds. Specifically, we find that lycopane is probably derived from phytoplankton and that diploptene is derived from the lipids of chemoautotrophs living above the oxic/anoxic boundary. Three acyclic isoprenoids that have been considered markers for methanogens, pentamethyleicosane and two hydrogenated squalenes, have different delta values and apparently do not derive from a common source. Based on the concentration profiles and isotopic compositions, the C31 and C33 n-alkanes and n-alkenes have a similar source, and both may have a planktonic origin. If so, previously assigned terrestrial origins of organic matter in some Black Sea sediments may be erroneous.

  10. Temperature elevation in the fetus from electromagnetic exposure during magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Kikuchi, Satoru; Saito, Kazuyuki; Takahashi, Masaharu; Ito, Koichi

    2010-04-01

    This study computationally assessed the temperature elevations due to electromagnetic wave energy deposition during magnetic resonance imaging in non-pregnant and pregnant woman models. We used a thermal model with thermoregulatory response of the human body for our calculations. We also considered the effect of blood temperature variation on body core temperature. In a thermal equilibrium state, the temperature elevations in the intrinsic tissues of the woman and fetal tissues were 0.85 and 0.61 °C, respectively, at a whole-body averaged specific absorption rate of 2.0 W kg-1, which is the restriction value of the International Electrotechnical Commission for the normal operating mode. As predicted, these values are below the temperature elevation of 1.5 °C that is expected to be teratogenic. However, these values exceeded the recommended temperature elevation limit of 0.5 °C by the International Commission on Non-Ionizing Radiation Protection. We also assessed the irradiation time required for a temperature elevation of 0.5 °C at the aforementioned specific absorption rate. As a result, the calculated irradiation time was 40 min.

  11. Calculation of contact angles at triple phase boundary in solid oxide fuel cell anode using the level set method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xiaojun; Hasegawa, Yosuke; CREST, JST

    2014-10-15

    A level set method is applied to characterize the three-dimensional structures of nickel, yttria stabilized zirconia and pore phases in a solid oxide fuel cell anode reconstructed by focused ion beam-scanning electron microscope. A numerical algorithm is developed to evaluate the contact angles at the triple phase boundary based on interfacial normal vectors, which can be calculated from the signed distance functions defined for each of the three phases. Furthermore, surface tension force is estimated from the contact angles by assuming interfacial force balance at the triple phase boundary. The average contact angle values of nickel, yttria stabilized zirconia and pore are found to be 143°–156°, 83°–138° and 82°–123°, respectively. The mean contact angles remained nearly unchanged after 100-hour operation. However, the contact angles just after reduction are different for cells with different sintering temperatures. In addition, standard deviations of the contact angles are very large, especially for the yttria stabilized zirconia and pore phases. The surface tension forces calculated from mean contact angles were close to the experimental values found in the literature. Slight increases of the nickel/pore and nickel/yttria stabilized zirconia surface tensions were observed after operation. The present data are expected to be used not only for the understanding of the degradation mechanism, but also for the quantitative prediction of the microstructural temporal evolution of solid oxide fuel cell anodes. - Highlights: • A level set method is applied to characterize the 3D structures of SOFC anode. • A numerical algorithm is developed to evaluate the contact angles at the TPB. • Surface tension force is estimated from the contact angles. • The average contact angle values are found to be 143°–156°, 83°–138° and 82°–123°. • Present data are expected to understand degradation and predict evolution of SOFC.
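
    A minimal sketch of the geometric core of such an algorithm, normals from signed distance functions and the angle between two interfaces at a junction voxel (two planar phases here; actual TPB detection and the force balance are more involved):

        import numpy as np

        def unit_normals(phi, spacing=1.0):
            """Interface normal field n = grad(phi)/|grad(phi)| from a
            signed distance function phi."""
            g = np.stack(np.gradient(phi, spacing))
            return g / (np.linalg.norm(g, axis=0) + 1e-12)

        def angle_deg(n_a, n_b, point):
            """Angle between two interface normals at a given voxel."""
            idx = (slice(None),) + tuple(point)
            c = np.clip(np.dot(n_a[idx], n_b[idx]), -1.0, 1.0)
            return np.degrees(np.arccos(c))

        # synthetic check: two planar interfaces meeting at right angles
        x, y = np.meshgrid(np.linspace(-1, 1, 65), np.linspace(-1, 1, 65), indexing="ij")
        n_a, n_b = unit_normals(x), unit_normals(y)   # distances to the planes x=0, y=0
        print(angle_deg(n_a, n_b, point=(32, 32)))    # ~90 degrees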

  12. Unbiased reduced density matrices and electronic properties from full configuration interaction quantum Monte Carlo.

    PubMed

    Overy, Catherine; Booth, George H; Blunt, N S; Shepherd, James J; Cleland, Deidre; Alavi, Ali

    2014-12-28

    Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
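
    Once unbiased RDMs are in hand, pure expectation values follow by direct contraction with the operator integrals; a generic sketch (array layouts and index conventions are assumptions that must match how the RDMs were accumulated):

        import numpy as np

        def energy_from_rdms(h, g, dm1, dm2, e_nuc=0.0):
            """E = sum_pq h[p,q]*dm1[p,q] + 1/2 sum_pqrs g[p,q,r,s]*dm2[p,q,r,s] + E_nuc.
            Any one-body observable follows the same pattern, contracting its
            own integrals against dm1 (e.g. dipole moments)."""
            e1 = np.einsum("pq,pq->", h, dm1)
            e2 = 0.5 * np.einsum("pqrs,pqrs->", g, dm2)
            return e1 + e2 + e_nuc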

  13. Unbiased reduced density matrices and electronic properties from full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Overy, Catherine; Blunt, N. S.; Shepherd, James J.

    2014-12-28

    Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.

  14. Optimal Parameters to Determine the Apparent Diffusion Coefficient in Diffusion Weighted Imaging via Simulation

    NASA Astrophysics Data System (ADS)

    Perera, Dimuthu

    Diffusion weighted (DW) imaging is a non-invasive MR technique that provides information about tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC; to provide optimal parameter combinations, in terms of percentage accuracy and precision, for prostate peripheral-region cancer applications; and to suggest parameter choices for any type of tissue, together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DWI images to adjust the image SNR. Using the two DWI images, ADC was calculated using a mono-exponential model for each set of b-values, SNR, and true ADC. 40,000 ADC data were collected for each parameter setting to determine the mean and the standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated from the difference between the known and calculated ADC; the precision was calculated from the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (0.00102 mm2/s for tumor and 0.00180 mm2/s for normal prostate peripheral-region tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and percentage precision errors decreased with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation in total scan time; the optimal NEX combination for tumor and normal tissue in the prostate peripheral region was 1:9. The minimum percentage accuracy and percentage precision were obtained when the low b-value is 0 and the high b-value is 800 s/mm2 for normal tissue and 1400 s/mm2 for tumor tissue. Results also showed that for tissues with 1 x 10-3 < ADC < 2.1 x 10-3 mm2/s, the parameter combination SNR = 20, b-value pair (0, 800 s/mm2), NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%; likewise, for tissues with 0.6 x 10-3 < ADC < 1.25 x 10-3 mm2/s, the combination SNR = 20, b-value pair (0, 1400 s/mm2), NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%.
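
    The two-point mono-exponential ADC estimate used throughout such simulations is, in sketch form (signal values illustrative):

        import numpy as np

        def adc(signal_low, signal_high, b_low, b_high):
            """Mono-exponential model S(b) = S0*exp(-b*ADC), solved from two
            b-values; returns ADC in mm^2/s when b is in s/mm^2."""
            return np.log(signal_low / signal_high) / (b_high - b_low)

        print(adc(signal_low=1000.0, signal_high=236.0, b_low=0.0, b_high=800.0))
        # ~1.8e-3 mm^2/s, i.e. normal peripheral-zone-like tissue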

  15. Chiral symmetry restoration at finite temperature and chemical potential in the improved ladder approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taniguchi, Y.; Yoshida, Y.

    1997-02-01

    The chiral symmetry of QCD is studied at finite temperature and chemical potential using the Schwinger-Dyson equation in the improved ladder approximation. We calculate three order parameters: the vacuum expectation value of the quark bilinear operator, the pion decay constant, and the quark mass gap. We have a second order phase transition at the temperature T_c = 169 MeV along the zero chemical potential line, and a first order phase transition at the chemical potential μ_c = 598 MeV along the zero temperature line. We also calculate the critical exponents of the three order parameters. © 1997 The American Physical Society.

  16. Persuasion, Influence, and Value: Perspectives from Communication and Social Neuroscience.

    PubMed

    Falk, Emily; Scholz, Christin

    2018-01-04

    Opportunities to persuade and be persuaded are ubiquitous. What determines whether influence spreads and takes hold? This review provides an overview of evidence for the central role of subjective valuation in persuasion and social influence for both propagators and receivers of influence. We first review evidence that decisions to communicate information are determined by the subjective value a communicator expects to gain from sharing. We next review evidence that the effects of social influence and persuasion on receivers, in turn, arise from changes in the receiver's subjective valuation of objects, ideas, and behaviors. We then review evidence that self-related and social considerations are two key inputs to the value calculation in both communicators and receivers. Finally, we highlight biological coupling between communicators and receivers as a mechanism through which perceptions of value can be transmitted.

  17. Cultivating an entrepreneurial mindset.

    PubMed

    Matheson, Sandra A

    2013-01-01

    Now as never before, familiar challenges require bold, novel approaches. Registered dietitians will benefit by cultivating an entrepreneurial mindset that involves being comfortable with uncertainty, learning to take calculated risks, and daring to just try it. An entrepreneur is someone who takes risks to create something new, usually in business. But the entrepreneurial mindset is available to anyone prepared to rely only on their own abilities for their economic security and expect no opportunity without first creating value for others.

  18. Contribution of domestic production records, Interbull estimated breeding values, and single nucleotide polymorphism genetic markers to the single-step genomic evaluation of milk production.

    PubMed

    Přibyl, J; Madsen, P; Bauer, J; Přibylová, J; Simečková, M; Vostrý, L; Zavadilová, L

    2013-03-01

    Estimated breeding values (EBV) for first-lactation milk production of Holstein cattle in the Czech Republic were calculated using a conventional animal model and by single-step prediction of the genomic enhanced breeding value. Two overlapping data sets of milk production data were evaluated: (1) calving years 1991 to 2006, with 861,429 lactations and 1,918,901 animals in the pedigree and (2) calving years 1991 to 2010, with 1,097,319 lactations and 1,906,576 animals in the pedigree. Global Interbull (Uppsala, Sweden) deregressed proofs of 114,189 bulls were used in the analyses. Reliabilities of Interbull values were equivalent to an average of 8.53 effective records, which were used in a weighted analysis. A total of 1,341 bulls were genotyped using the Illumina BovineSNP50 BeadChip V2 (Illumina Inc., San Diego, CA). Among the genotyped bulls were 332 young bulls with no daughters in the first data set but more than 50 daughters (88.41, on average) with performance records in the second data set. For young bulls, correlations of EBV and genomic enhanced breeding value before and after progeny testing, corresponding average expected reliabilities, and effective daughter contributions (EDC) were calculated. The reliability of prediction pedigree EBV of young bulls was 0.41, corresponding to EDC=10.6. Including Interbull deregressed proofs improved the reliability of prediction by EDC=13.4 and including genotyping improved prediction reliability by EDC=6.2. Total average expected reliability of prediction reached 0.67, corresponding to EDC=30.2. The combination of domestic and Interbull sources for both genotyped and nongenotyped animals is valuable for improving the accuracy of genetic prediction in small populations of dairy cattle. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. The mass spectra, hierarchy and cosmology of B-L MSSM heterotic compactifications

    DOE PAGES

    Ambroso, Michael; Ovrut, Burt A.

    2011-04-10

    The matter spectrum of the MSSM, including three right-handed neutrino supermultiplets and one pair of Higgs-Higgs conjugate superfields, can be obtained by compactifying the E₈ x E₈ heterotic string and M-theory on Calabi-Yau manifolds with specific SU(4) vector bundles. These theories have the standard model gauge group augmented by an additional gauged U(1) B-L. Their minimal content requires that the B-L gauge symmetry be spontaneously broken by a vacuum expectation value of at least one right-handed neutrino. In previous papers, we presented the results of a quasi-analytic renormalization group analysis showing that B-L gauge symmetry is indeed radiatively broken with an appropriate B-L/electroweak hierarchy. In this paper, we extend these results by 1) enlarging the initial parameter space and 2) explicitly calculating all renormalization group equations numerically. The regions of the initial parameter space leading to realistic vacua are presented and the B-L/electroweak hierarchy computed over these regimes. At representative points, the mass spectrum for all particles and Higgs fields is calculated and shown to be consistent with present experimental bounds. Some fundamental phenomenological signatures of a non-zero right-handed neutrino expectation value are discussed, particularly the cosmology and proton lifetime arising from induced lepton and baryon number violating interactions.

  20. In Pursuit of the Far-Infrared Spectrum of Cyanogen Iso-Thiocyanate Ncncs, Under the Influence of the Energy Level Dislocation due to Quantum Monodromy

    NASA Astrophysics Data System (ADS)

    Winnewisser, Manfred; Winnewisser, Brenda P.; Medvedev, Ivan R.; De Lucia, Frank C.; Ross, Stephen C.; Koput, Jacek

    2010-06-01

    Quantum Monodromy has a strong impact on the ro-vibrational energy levels of chain molecules whose bending potential energy function has the form of the bottom of a champagne bottle (i.e. with a hump or punt) around the linear configuration. NCNCS is a particularly good example of such a molecule and clearly exhibits a distinctive monodromy-induced dislocation of the energy level pattern at the top of the potential energy hump. The generalized semi-rigid bender (GSRB) wave functions are used to show that the expectation values of any physical quantity which varies with the large amplitude bending coordinate will also have monodromy-induced dislocations. This includes the electric dipole moment components. High level ab initio calculations not only provided the molecular equilibrium structure of NCNCS, but also the electric dipole moment components μa and μb as functions of the large-amplitude bending coordinate. The calculated expectation values of these quantities indicate large ro-vibrational transition moments that will be discussed in pursuit of possible far-infrared bands. To our knowledge there is no NCNCS infrared spectrum reported in the literature. B. P. Winnewisser, M. Winnewisser, I. R. Medvedev, F. C. De Lucia, S. C. Ross and J. Koput, Phys. Chem. Chem. Phys., 2010, DOI:10.1039/B922023B.

  1. Estimating the expected value of partial perfect information in health economic evaluations using integrated nested Laplace approximation.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2016-10-15

    The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making, used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that, despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
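
    A minimal sketch of the regression-based EVPPI estimator that such methods build on, with a simple polynomial smoother standing in for the GP/INLA step (data and strategies are illustrative):

        import numpy as np

        def evppi_regression(theta, nb, fit):
            """fit(theta, y) returns the fitted conditional mean net benefit
            given the parameter(s) of interest; EVPPI is the mean of the
            fitted per-sample maxima minus the best overall mean."""
            fitted = np.column_stack([fit(theta, nb[:, k]) for k in range(nb.shape[1])])
            return fitted.max(axis=1).mean() - nb.mean(axis=0).max()

        def poly_fit(x, y):  # stand-in smoother for the GP/INLA regression
            return np.polyval(np.polyfit(x, y, 3), x)

        rng = np.random.default_rng(2)
        theta = rng.normal(size=5000)                            # parameter of interest
        nb = np.column_stack([theta + rng.normal(0, 1, 5000),    # strategy 1
                              -theta + rng.normal(0, 1, 5000)])  # strategy 2
        print(evppi_regression(theta, nb, poly_fit))             # ~E|theta| ~ 0.8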

  2. Monophasic demyelination reduces brain growth in children

    PubMed Central

    Weier, Katrin; Longoni, Giulia; Fonov, Vladimir S.; Bar-Or, Amit; Marrie, Ruth Ann; Yeh, E. Ann; Narayanan, Sridar; Arnold, Douglas L.; Verhey, Leonard H.; Banwell, Brenda; Collins, D. Louis

    2017-01-01

    Objective: To investigate how monophasic acquired demyelinating syndromes (ADS) affect age-expected brain growth over time. Methods: We analyzed 83 pediatric patients imaged serially from initial demyelinating attack: 18 with acute disseminated encephalomyelitis (ADEM) and 65 with other monophasic ADS presentations (monoADS). We further subdivided the monoADS group by the presence (n = 33; monoADSlesion) or absence (n = 32; monoADSnolesion) of T2 lesions involving the brain at onset. We used normative data to compare brain volumes and calculate age- and sex-specific z scores, and used mixed-effect models to investigate their relationship with time from demyelinating illness. Results: Children with monophasic demyelination (ADEM, non-ADEM with brain lesions, and those without brain involvement) demonstrated reduced age-expected brain growth on serial images, driven by reduced age-expected white matter growth. Cortical gray matter volumes were not reduced at onset but demonstrated reduced age-expected growth afterwards in all groups. Brain volumes differed from age- and sex-expected values to the greatest extent in children with ADEM. All patient groups failed to recover age-expected brain growth trajectories. Conclusions: Brain volume, and more importantly age-expected brain growth, is negatively affected by acquired demyelination, even in the absence of chronicity, implicating factors other than active inflammation as operative in this process. PMID:28381515

  3. Monophasic demyelination reduces brain growth in children.

    PubMed

    Aubert-Broche, Bérengère; Weier, Katrin; Longoni, Giulia; Fonov, Vladimir S; Bar-Or, Amit; Marrie, Ruth Ann; Yeh, E Ann; Narayanan, Sridar; Arnold, Douglas L; Verhey, Leonard H; Banwell, Brenda; Collins, D Louis

    2017-05-02

    To investigate how monophasic acquired demyelinating syndromes (ADS) affect age-expected brain growth over time. We analyzed 83 pediatric patients imaged serially from initial demyelinating attack: 18 with acute disseminated encephalomyelitis (ADEM) and 65 with other monophasic ADS presentations (monoADS). We further subdivided the monoADS group by the presence (n = 33; monoADSlesion) or absence (n = 32; monoADSnolesion) of T2 lesions involving the brain at onset. We used normative data to compare brain volumes and calculate age- and sex-specific z scores, and used mixed-effect models to investigate their relationship with time from demyelinating illness. Children with monophasic demyelination (ADEM, non-ADEM with brain lesions, and those without brain involvement) demonstrated reduced age-expected brain growth on serial images, driven by reduced age-expected white matter growth. Cortical gray matter volumes were not reduced at onset but demonstrated reduced age-expected growth afterwards in all groups. Brain volumes differed from age- and sex-expected values to the greatest extent in children with ADEM. All patient groups failed to recover age-expected brain growth trajectories. Brain volume, and more importantly age-expected brain growth, is negatively affected by acquired demyelination, even in the absence of chronicity, implicating factors other than active inflammation as operative in this process. © 2017 American Academy of Neurology.

  4. Global Pattern of Potential Evaporation Calculated from the Penman-Monteith Equation Using Satellite and Assimilated Data

    NASA Technical Reports Server (NTRS)

    Choudhury, Bhaskar J.

    1997-01-01

    Potential evaporation (E(0)) has been found to be useful in many practical applications and in research for setting a reference level for actual evaporation. All previous estimates of regional or global E(0) are based upon empirical formulae using climatological meteorological measurements at isolated stations (i.e., point data). However, the Penman-Monteith equation provides a physically based approach for computing E(0), and by comparing 20 different methods of estimating E(0), Jensen et al. (1990) showed that the Penman-Monteith equation provides the most accurate estimate of monthly E(0) from well-watered grass or alfalfa. In the present study, monthly total E(0) for 24 months (January 1987 to December 1988) was calculated from the Penman-Monteith equation, with a prescribed albedo of 0.23 and surface resistance of 70 s/m, which are considered to be representative of actively growing, well-watered grass covering the ground. These calculations were done using spatially representative data derived from satellite observations and data assimilation results. Satellite observations were used to obtain solar radiation, fractional cloud cover, air temperature, and vapor pressure, while four-dimensional data assimilation results were used to calculate the aerodynamic resistance. Meteorologic data derived from satellite observations were compared with surface measurements to provide a measure of accuracy. The accuracy of the calculated E(0) values was assessed by comparison with lysimeter observations of evaporation from well-watered grass at 35 widely distributed locations, while recognizing that the period of the present calculations was not concurrent with the lysimeter measurements and that the spatial scales of the measurements and calculations are vastly different. These comparisons suggest that the error in the calculated E(0) values may not exceed, on average, 20% for any month or location, and is more likely to be about 15%. These uncertainties are difficult to quantify for mountainous areas or locations close to extensive water bodies. The difference between the calculated and observed E(0) is about 5% when all months and locations are considered. Errors are expected to be less than 15% for averages of E(0) over large areas or several months. Further comparisons with lysimeter observations could provide a better appraisal of the calculated values. The global pattern of E(0) is presented, together with zonal average values.
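
    For reference, a compact form of the Penman-Monteith calculation with the fixed surface resistance used in the study (rs = 70 s/m); the example inputs are illustrative, not the satellite-derived fields:

        import numpy as np

        def penman_monteith_mm_day(rn, g, t_air, vpd_pa, ra, rs=70.0):
            """Potential evaporation (mm/day). rn, g: net radiation and soil
            heat flux (W/m2); t_air: air temperature (deg C); vpd_pa: vapour
            pressure deficit (Pa); ra, rs: resistances (s/m)."""
            rho_air, cp = 1.2, 1005.0      # air density (kg/m3), specific heat (J/kg/K)
            gamma, lam = 66.0, 2.45e6      # psychrometric constant (Pa/K), latent heat (J/kg)
            es = 610.8 * np.exp(17.27 * t_air / (t_air + 237.3))  # sat. vapour pressure (Pa)
            delta = 4098.0 * es / (t_air + 237.3) ** 2            # slope of es(T) (Pa/K)
            le = (delta * (rn - g) + rho_air * cp * vpd_pa / ra) / (delta + gamma * (1 + rs / ra))
            return le / lam * 86400.0      # W/m2 -> mm/day (1 mm ~ 1 kg/m2)

        print(penman_monteith_mm_day(rn=150.0, g=10.0, t_air=25.0, vpd_pa=1200.0, ra=50.0))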

  5. Decreasing Kd uncertainties through the application of thermodynamic sorption models.

    PubMed

    Domènech, Cristina; García, David; Pękala, Marek

    2015-09-15

    Radionuclide retardation processes during transport are expected to play an important role in the safety assessment of subsurface disposal facilities for radioactive waste. The linear distribution coefficient (Kd) is often used to represent radionuclide retention, because analytical solutions to the classic advection-diffusion-retardation equation under simple boundary conditions are readily obtainable, and because numerical implementation of this approach is relatively straightforward. For these reasons, the Kd approach lends itself to the probabilistic calculations required by Performance Assessment (PA). However, it is widely recognised that Kd values derived from laboratory experiments generally have a narrow field of validity, and that the uncertainty of the Kd outside this field increases significantly. Mechanistic multicomponent geochemical simulators can be used to calculate Kd values under a wide range of conditions. This approach is powerful and flexible, but requires expert knowledge on the part of the user. The work presented in this paper aims to develop a simplified approach to estimating Kd values whose accuracy is comparable with that of fully-fledged geochemical simulators. The proposed approach consists of deriving simplified algebraic expressions by combining the relevant mass action equations. This approach was applied to three distinct geochemical systems involving surface complexation and ion-exchange processes. Within the bounds imposed by model simplifications, the presented approach allows radionuclide Kd values to be estimated as a function of key system-controlling parameters, such as pH and mineralogy. It could be used by PA professionals to assess the impact of key geochemical parameters on the variability of radionuclide Kd values, and could be relatively easily implemented in existing codes to represent the influence of temporal and spatial changes in geochemistry on Kd values. Copyright © 2015 Elsevier B.V. All rights reserved.
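
    As an illustration of the kind of simplified algebraic expression meant here, a sketch for trace-level, one-for-one ion exchange (the selectivity and CEC values are hypothetical, not the paper's systems):

        def kd_trace_exchange(k_sel, cec_eq_per_kg, major_cation_eq_per_l):
            """With the trace ion far from saturating the exchanger, the mass
            action law collapses to Kd (L/kg) ~ K_sel * CEC / [major cation]."""
            return k_sel * cec_eq_per_kg / major_cation_eq_per_l

        # e.g. a monovalent tracer competing against 0.01 eq/L of a major cation
        print(kd_trace_exchange(k_sel=20.0, cec_eq_per_kg=0.2,
                                major_cation_eq_per_l=0.01))   # 400 L/kg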

  6. 3D modeling inversion calculation of magnetic data using iterative reweighted least squares at the Lau basin, Southwest Pacific

    NASA Astrophysics Data System (ADS)

    Choi, S.; Kim, C.; Kim, H. R.; Park, C.; Park, H. Y.

    2015-12-01

    We performed marine magnetic and bathymetry surveys in the Lau basin in October 2009 to search for submarine hydrothermal deposits. Magnetic and bathymetry datasets were acquired using an Overhauser proton magnetometer, SeaSPY (Marine Magnetics Co.), and a multi-beam echo sounder, EM120 (Kongsberg Co.). Data processing yielded detailed seabed topography, the magnetic anomaly, and its reduction to the pole (RTP). The Lau basin is one of the youngest back-arc basins in the Southwest Pacific and hosts abundant hydrothermal activity and hydrothermal deposits; in particular, the Tofua Arc (TA) in the Lau basin consists of various complex stratovolcanoes (Massoth et al., 2007). We calculated the magnetic susceptibility distribution of the TA19-1 seamount (longitude 176°23.5'W, latitude 22°42.5'S) area from the RTP data by 3-D magnetic inversion, following Jung's previous study (2013). Based on the 2D 'compact gravity inversion' of Last & Kubik (1983), we extend the algorithm to 3D using an iteratively reweighted least squares method with weight matrices of two types: 1) the minimum gradient support (MGS), which controls the spatial distribution of the solution (Portniaguine and Zhdanov, 1999); and 2) a depth weight, applied according to the shape of the subsurface structures. From the modeling, we derived an appropriate scale factor for the depth weight and for setting the magnetic susceptibility. A very small stabilizing value must also be entered to control singular points in the inversion so that the model can be computed reliably, and separate weights were applied to recover the correct shape and depth of the magnetic source. The best model was selected by monitoring the convergence of the RMS misfit. The final modeled result and the RTP values in this study are generally similar to each other, but the input values and the modeled values differ slightly. This difference is likely caused by the varied and complex stratovolcanoes, incomplete knowledge of the regional geology, the modeling design, the limited vertical resolution arising from non-uniqueness in potential fields, and other factors. Better results are expected from improved modeling design informed by additional geological survey data.
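
    A toy sketch of the iteratively reweighted least squares idea with a minimum-support-style model weight (the depth weighting and real survey geometry are omitted; the matrix and parameters are illustrative):

        import numpy as np

        def compact_inversion(G, d, n_iter=10, beta=1e-6, lam=1e-2):
            """IRLS: each pass penalizes small model elements more strongly
            (w = 1/(m^2 + beta)), driving the solution toward a compact source."""
            m = np.linalg.lstsq(G, d, rcond=None)[0]
            for _ in range(n_iter):
                w = 1.0 / (m ** 2 + beta)
                m = np.linalg.solve(G.T @ G + lam * np.diag(w), G.T @ d)
            return m

        rng = np.random.default_rng(3)
        m_true = np.zeros(30); m_true[12:15] = 1.0    # compact magnetic source
        G = rng.normal(size=(20, 30))                 # stand-in sensitivity matrix
        d = G @ m_true + rng.normal(0, 0.01, 20)
        print(np.round(compact_inversion(G, d), 2))   # concentrates near indices 12-14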

  7. Genetic variation maintained in multilocus models of additive quantitative traits under stabilizing selection.

    PubMed Central

    Bürger, R; Gimelfarb, A

    1999-01-01

    Stabilizing selection for an intermediate optimum is generally considered to deplete genetic variation in quantitative traits. However, conflicting results from various types of models have been obtained. While classical analyses assuming a large number of independent additive loci with individually small effects indicated that no genetic variation is preserved under stabilizing selection, several analyses of two-locus models showed the contrary. We perform a complete analysis of a generalization of Wright's two-locus quadratic-optimum model and investigate numerically the ability of quadratic stabilizing selection to maintain genetic variation in additive quantitative traits controlled by up to five loci. A statistical approach is employed by choosing randomly 4000 parameter sets (allelic effects, recombination rates, and strength of selection) for a given number of loci. For each parameter set we iterate the recursion equations that describe the dynamics of gamete frequencies starting from 20 randomly chosen initial conditions until an equilibrium is reached, record the quantities of interest, and calculate their corresponding mean values. As the number of loci increases from two to five, the fraction of the genome expected to be polymorphic declines surprisingly rapidly, and the loci that are polymorphic increasingly are those with small effects on the trait. As a result, the genetic variance expected to be maintained under stabilizing selection decreases very rapidly with increased number of loci. The equilibrium structure expected under stabilizing selection on an additive trait differs markedly from that expected under selection with no constraints on genotypic fitness values. The expected genetic variance, the expected polymorphic fraction of the genome, as well as other quantities of interest, are only weakly dependent on the selection intensity and the level of recombination. PMID:10353920

  8. Content Specificity of Expectancy Beliefs and Task Values in Elementary Physical Education

    PubMed Central

    Chen, Ang; Martin, Robert; Ennis, Catherine D.; Sun, Haichun

    2015-01-01

    The curriculum may superimpose a content-specific context that mediates motivation (Bong, 2001). This study examined content specificity of the expectancy-value motivation in elementary school physical education. Students’ expectancy beliefs and perceived task values from a cardiorespiratory fitness unit, a muscular fitness unit, and a traditional skill/game unit were analyzed using constant comparison coding procedures, multivariate analysis of variance, χ2, and correlation analyses. There was no difference in the intrinsic interest value among the three content conditions. Expectancy belief, attainment, and utility values were significantly higher for the cardiorespiratory fitness curriculum. Correlations differentiated among the expectancy-value components of the content conditions, providing further evidence of content specificity in the expectancy-value motivation process. The findings suggest that expectancy beliefs and task values should be incorporated in the theoretical platform for curriculum development based on the learning outcomes that can be specified with enhanced motivation effect. PMID:18664044

  9. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. The Bayesian approach involves two distributions, the prior and the posterior; the posterior distribution is influenced by the choice of prior. Jeffreys' prior is a non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained as the expected values of the marginal posterior distributions; the marginal posteriors for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions that are difficult to evaluate analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution characteristics of each parameter, using a Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
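
    A minimal sketch of the Gibbs sampler described here, assuming the standard conditionals under Jeffreys' prior (names and test data are illustrative):

        import numpy as np
        from scipy.stats import invwishart

        def gibbs_mv_regression(X, Y, n_draws=1000, seed=0):
            """Y = X B + E, rows of E ~ N(0, Sigma), prior ~ |Sigma|^-(p+1)/2.
            Conditionals: Sigma | B ~ Inv-Wishart(n, (Y-XB)'(Y-XB));
                          vec(B) | Sigma ~ N(vec(Bhat), Sigma kron (X'X)^-1)."""
            rng = np.random.default_rng(seed)
            n, k = X.shape
            p = Y.shape[1]
            XtX_inv = np.linalg.inv(X.T @ X)
            B_hat = XtX_inv @ X.T @ Y
            L_row = np.linalg.cholesky(XtX_inv)
            B, draws = B_hat.copy(), []
            for _ in range(n_draws):
                resid = Y - X @ B
                Sigma = np.atleast_2d(invwishart.rvs(df=n, scale=resid.T @ resid,
                                                     random_state=rng))
                B = B_hat + L_row @ rng.normal(size=(k, p)) @ np.linalg.cholesky(Sigma).T
                draws.append((B, Sigma))
            return draws

        rng = np.random.default_rng(1)
        X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
        B_true = np.array([[1.0, 2.0], [0.5, -1.0], [0.0, 3.0]])
        Y = X @ B_true + rng.normal(size=(200, 2))
        B_mean = np.mean([b for b, _ in gibbs_mv_regression(X, Y)], axis=0)
        print(np.round(B_mean, 2))   # close to B_true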

  10. Expectancy-Value Theory of Achievement Motivation.

    PubMed

    Wigfield; Eccles

    2000-01-01

    We discuss the expectancy-value theory of motivation, focusing on an expectancy-value model developed and researched by Eccles, Wigfield, and their colleagues. Definitions of crucial constructs in the model, including ability beliefs, expectancies for success, and the components of subjective task values, are provided. These definitions are compared to those of related constructs, including self-efficacy, intrinsic and extrinsic motivation, and interest. Research is reviewed dealing with two issues: (1) change in children's and adolescents' ability beliefs, expectancies for success, and subjective values, and (2) relations of children's and adolescents' ability-expectancy beliefs and subjective task values to their performance and choice of activities. Copyright 2000 Academic Press.

  11. Newly synthesized MgAl2Ge2: A first-principles comparison with its silicide and carbide counterparts

    NASA Astrophysics Data System (ADS)

    Tanveer Karim, A. M. M.; Hadi, M. A.; Alam, M. A.; Parvin, F.; Naqib, S. H.; Islam, A. K. M. A.

    2018-06-01

    Using plane-wave pseudopotential density functional theory (DFT), first-principles calculations are performed to investigate, for the first time, the structural aspects, mechanical behaviors and electronic features of the newly synthesized CaAl2Si2-prototype intermetallic compound MgAl2Ge2, and the results are compared with those calculated for its silicide and carbide counterparts MgAl2Si2 and MgAl2C2. The calculated lattice constants agree fairly well with their corresponding experimental values. The estimated elastic tensors satisfy the mechanical stability conditions for MgAl2Ge2 along with MgAl2Si2 and MgAl2C2. The level of elastic anisotropy increases following the sequence of X-elements Ge → Si → C. MgAl2Ge2 and MgAl2Si2 are expected to be ductile and damage tolerant, while MgAl2C2 is a brittle one. MgAl2Ge2 and MgAl2Si2 should exhibit better thermal shock resistance and low thermal conductivity, and accordingly these can be used as thermal barrier coating (TBC) materials. The Debye temperature of MgAl2Ge2 is the lowest among the three intermetallic compounds. MgAl2Ge2 and MgAl2Si2 should exhibit metallic conductivity, while the dual character of a weak metal and a semiconductor is expected for MgAl2C2. The values of theoretical Vickers hardness for MgAl2Ge2, MgAl2Si2, and MgAl2C2 are 3.3, 2.7, and 7.7 GPa, respectively, indicating that these three intermetallics are soft and easily machinable.

  12. An extensive study of Bose-Einstein condensation in liquid helium using Tsallis statistics

    NASA Astrophysics Data System (ADS)

    Guha, Atanu; Das, Prasanta Kumar

    2018-05-01

    A realistic scenario can be represented by the generalized canonical ensemble far better than by the ideal one, provided proper parameter sets are involved. We study the Bose-Einstein condensation phenomena of liquid helium within the framework of Tsallis statistics. With a comparatively high value of the deformation parameter q (∼1.4), the theoretically calculated value of the critical temperature (Tc) of the phase transition of liquid helium is found to agree with the experimentally determined value (Tc = 2.17 K), although the two differ for q = 1 (the undeformed scenario). This throws light on the understanding of the phenomenon and qualitatively connects temperature fluctuations (non-equilibrium conditions) with the interactions between atoms. More interactions between atoms give rise to stronger non-equilibrium conditions, as expected.

  13. Connecting Expectations and Values: Students' Perceptions of Developmental Mathematics in a Computer-Based Learning Environment

    ERIC Educational Resources Information Center

    Jackson, Karen Latrice Terrell

    2014-01-01

    Students' perceptions influence their expectations and values. According to Expectations and Values Theory of Achievement Motivation (EVT-AM), students' expectations and values impact their behaviors (Eccles & Wigfield, 2002). This study seeks to find students' perceptions of developmental mathematics in a mastery learning computer-based…

  14. Influence of an external electric field on the potential-energy surface of alkali-metal-decorated C60

    NASA Astrophysics Data System (ADS)

    De, Deb Sankar; Saha, Santanu; Genovese, Luigi; Goedecker, Stefan

    2018-06-01

    We present a fully ab initio, unbiased structure search of the configurational space of decorated C60 fullerenes in the presence of an electric field. We observed that the potential-energy surface is significantly perturbed by an external electric field and that the energetic ordering of low-energy isomers differs with and without electric field. We identify the energetically lowest configuration for a varying number of decorating atoms (1 ≤n ≤12 ) for Li and (1 ≤n ≤6 ) for K on the C60 surface at different electric-field strengths. Using the correct geometric ground state in the electric field for the calculation of the dipole we obtain better agreement with the experimentally measured values than previous calculations based on the ground state in absence of an electric field. Since the lowest-energy structures are typically nearly degenerate in energy, a combination of different structures is expected to be found at room temperature. The experimentally measured dipole is therefore also expected to contain significant contributions from several low-energy structures.

  15. Size Determination of Y2O3 Crystallites in MgO Composite Using Mie Scattering

    DTIC Science & Technology

    2017-11-07

    particle size, and the path length through the material to generate an expected light transmission spectrum. These calculated curves were compared to...materials. In the current work, light transmission data are compared to the theoretical curves generated by the Mie scattering model in an attempt to...Since the authors wanted to compare the model's predictions to the experimental %T values, it seemed logical to start with Beer's Law with a surface-reflection correction, %T = (1 - R)^2 exp(-γL), where γ is the attenuation coefficient and L the path length.

  16. Continuum strong-coupling expansion of Yang-Mills theory: quark confinement and infra-red slavery

    NASA Astrophysics Data System (ADS)

    Mansfield, Paul

    1994-04-01

    We solve Schrödinger's equation for the ground-state of four-dimensional Yang-Mills theory as an expansion in inverse powers of the coupling. Expectation values computed with the leading-order approximation are reduced to a calculation in two-dimensional Yang-Mills theory which is known to confine. Consequently the Wilson loop in the four-dimensional theory obeys an area law to leading order and the coupling becomes infinite as the mass scale goes to zero.

  17. Continuous opacity from Ne^-

    NASA Astrophysics Data System (ADS)

    John, T. L.

    1996-04-01

    Free-free absorption coefficients of the negative neon ion are calculated by the phase-shift approximation based on multiconfiguration Hartree-Fock continuum wave functions. These wave functions accurately account for electron-neon correlation and polarization, and yield scattering cross-sections in excellent agreement with the latest experimental values. The coefficients are expected to give the best current estimates of Ne^- continuous absorption. We find that Ne^- makes only a small contribution (less than 0.3 per cent) to stellar opacities, including hydrogen-deficient stars with enhanced Ne abundances.

  18. The Free-Free Absorption Coefficients of the Negative Helium Ion

    NASA Astrophysics Data System (ADS)

    John, T. L.

    1994-08-01

    Free-free absorption coefficients of the negative helium ion are calculated by a phase-shift approximation, using continuum data that accurately account for electron-atom correlation and polarization. The approximation is considered to yield results within a few per cent of numerical values for wavelengths greater than 1 μm, over the temperature range 1400-10080 K. These coefficients are expected to give the best current estimates of He^- continuous absorption. Key words: atomic data - atomic processes - stars: atmospheres - infrared: general.

  19. Theoretical foundations for a quantitative approach to paleogenetics. I, II.

    NASA Technical Reports Server (NTRS)

    Holmquist, R.

    1972-01-01

    It is shown that by neglecting the phenomena of multiple hits, back mutation, and chance coincidence, errors larger than 100% can be introduced in the calculated value of the average number of nucleotide base differences to be expected between two homologous polynucleotides. Mathematical formulas are derived to correct quantitatively for these effects. It is pointed out that the effects materially change the quantitative aspects of phylogenies, such as the lengths of the legs of the trees. A number of problems are solved without approximation.

  20. Predictions for neutral K and B meson physics

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Savas; Hall, Lawrence J.; Raby, Stuart

    1992-12-01

    Using supersymmetric grand unified theories, we have recently invented a framework which allows the prediction of three quark masses, two of the parameters of the Kobayashi-Maskawa matrix, and tan β, the ratio of the two electroweak vacuum expectation values. These predictions are used to calculate ɛ and ɛ' in the kaon system, the mass mixing in the B^0_d and B^0_s systems, and the size of CP asymmetries in the decays of neutral B mesons to explicit final states of given CP.

  1. The hydrogen atom in D = 3 - 2ɛ dimensions

    NASA Astrophysics Data System (ADS)

    Adkins, Gregory S.

    2018-06-01

    The nonrelativistic hydrogen atom in D = 3 - 2ɛ dimensions is the reference system for perturbative schemes used in dimensionally regularized nonrelativistic effective field theories to describe hydrogen-like atoms. Solutions to the D-dimensional Schrödinger-Coulomb equation are given in the form of a double power series. Energies and normalization integrals are obtained numerically and also perturbatively in terms of ɛ. The utility of the series expansion is demonstrated by the calculation of the divergent expectation value ⟨(V′)²⟩.

  2. Senior nurses' control expectations and the development of pressure ulcers.

    PubMed

    Maylor, M

    The aim of this research was to establish whether the attitudes and expectations of senior nursing staff might adversely affect patient outcomes in the prevention of pressure ulcers. The hypothesis was that nursing locus of control affects clinical outcomes in patients. In particular, it affects departmental prevalence of pressure damage. A population of nurses (n = 439) in an acute and community NHS trust was surveyed to test knowledge, control beliefs and value of pressure ulcer prevention relative to prevalence. The research was designed to provide different data against which to test the hypothesis: first, to assess acceptability of nurses' knowledge of prevention and appropriate use of risk assessment and equipment; second, to calculate a mean departmental pressure ulcer prevalence; and third, to measure locus of control and value, which is the focus of this article. There were strong associations between departmental prevalence of pressure ulcers and attitudes of senior nursing staff. For example, the more that ward sisters believed they could control pressure ulcer prevention, the higher the prevalence of ulcers in their department. The more that sisters believed that they could not control prevalence, the lower the prevalence of ulcers. The study shows that failure to account for beliefs, values and expectations of staff could lead to patient harm. It is suggested that it might be counterproductive to put great effort into developing clinical guidelines and refinement of risk assessment methods. The findings have important implications for nursing, and challenge the assumption that nurse leaders are universally beneficial to patients.

  3. Long-term persistence of solar activity. [Abstract only

    NASA Technical Reports Server (NTRS)

    Ruzmaikin, Alexander; Feynman, Joan; Robinson, Paul

    1994-01-01

    The solar irradiance has been found to change by 0.1% over the recent solar cycle. A change of irradiance of about 0.5% is required to affect the Earth's climate. How frequently can a variation of this size be expected? We examine the question of the persistence of non-periodic variations in solar activity. The Hurst exponent, which characterizes the persistence of a time series (Mandelbrot and Wallis, 1969), is evaluated for the series of C-14 data for the time interval from about 6000 BC to 1950 AD (Stuiver and Pearson, 1986). We find a constant Hurst exponent, suggesting that solar activity in the frequency range from 100 to 3000 years includes an important continuum component in addition to the well-known periodic variations. The value we calculate, H approximately equal to 0.8, is significantly larger than the value of 0.5 that would correspond to variations produced by a white-noise process. This value is in good agreement with the results for the monthly sunspot data reported elsewhere, indicating that the physics that produces the continuum is a correlated random process (Ruzmaikin et al., 1992), and that it is the same type of process over a wide range of time interval lengths. We conclude that the time period over which an irradiance change of 0.5% can be expected to occur is significantly shorter than that which would be expected for variations produced by a white-noise process.
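
    A minimal rescaled-range (R/S) sketch of the kind of Hurst-exponent estimate described, assuming simple non-overlapping windows; white noise should give H near 0.5, whereas the abstract reports H ≈ 0.8 for the C-14 series.

    ```python
    import numpy as np

    def hurst_rs(series, window_sizes=None):
        """Estimate the Hurst exponent by rescaled-range (R/S) analysis.

        A sketch only: production analyses average many window placements
        and treat trends and small-window bias more carefully.
        """
        x = np.asarray(series, dtype=float)
        n = len(x)
        if window_sizes is None:
            window_sizes = np.unique(
                np.logspace(1, np.log10(n // 2), 10).astype(int))
        rs_vals = []
        for w in window_sizes:
            rs = []
            for start in range(0, n - w + 1, w):   # non-overlapping windows
                seg = x[start:start + w]
                dev = np.cumsum(seg - seg.mean())  # cumulative deviation
                r = dev.max() - dev.min()          # range
                s = seg.std()                      # standard deviation
                if s > 0:
                    rs.append(r / s)
            rs_vals.append(np.mean(rs))
        # H is the slope of log(R/S) against log(window size).
        slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_vals), 1)
        return slope

    rng = np.random.default_rng(0)
    print(hurst_rs(rng.standard_normal(4096)))  # roughly 0.5 for white noise
    ```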

  4. Radioiodine therapy of hyperfunctioning thyroid nodules: usefulness of an implemented dose calculation algorithm allowing reduction of radioiodine amount.

    PubMed

    Schiavo, M; Bagnara, M C; Pomposelli, E; Altrinetti, V; Calamia, I; Camerieri, L; Giusti, M; Pesce, G; Reitano, C; Bagnasco, M; Caputo, M

    2013-09-01

    Radioiodine is a common option for the treatment of hyperfunctioning thyroid nodules. Due to the expected selective radioiodine uptake by the adenoma, relatively high "fixed" activities are often used. Alternatively, the activity is individually calculated upon the prescription of a fixed value of target absorbed dose. We evaluated the use of an algorithm for personalized radioiodine activity calculation, which as a rule allows the administration of lower radioiodine activities. Seventy-five patients with a single hyperfunctioning thyroid nodule eligible for 131I treatment were studied. The activities of 131I to be administered were estimated by the method described by Traino et al. and developed for Graves' disease, assuming selective and homogeneous 131I uptake by the adenoma. The method takes into account the 131I uptake and its effective half-life, the target (adenoma) volume and its expected volume reduction during treatment. A comparison with the activities calculated by other dosimetric protocols and by the "fixed" activity method was performed. 131I uptake was measured by external counting, thyroid nodule volume by ultrasonography, and thyroid hormones and TSH by ELISA. Remission of hyperthyroidism was observed in all but one patient; the volume reduction of the adenoma was closely similar to that assumed by our model. The effective half-life was highly variable in different patients and critically affected the dose calculation. The administered activities were clearly lower with respect to the "fixed" activities and other protocols' prescriptions. The proposed algorithm proved to be effective also for single hyperfunctioning thyroid nodule treatment and allowed a significant reduction of administered 131I activities, without loss of clinical efficacy.

  5. Aerosol-Induced Radiative Flux Changes Off the United States Mid-Atlantic Coast: Comparison of Values Calculated from Sunphotometer and In Situ Data with Those Measured by Airborne Pyranometer

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Livingston, J. M.; Hignett, P.; Kinne, S.; Wong, J.; Chien, A.; Bergstrom, R.; Durkee, P.; Hobbs, P. V.

    2000-01-01

    The Tropospheric Aerosol Radiative Forcing Observational Experiment (TARFOX) measured a variety of aerosol radiative effects (including flux changes) while simultaneously measuring the chemical, physical, and optical properties of the responsible aerosol particles. Here we use TARFOX-determined aerosol and surface properties to compute shortwave radiative flux changes for a variety of aerosol situations, with midvisible optical depths ranging from 0.06 to 0.55. We calculate flux changes by several techniques with varying degrees of sophistication, in part to investigate the sensitivity of results to computational approach. We then compare computed flux changes to those determined from aircraft measurements. Calculations using several approaches yield downward and upward flux changes that agree with measurements. The agreement demonstrates closure (i.e., consistency) among the TARFOX-derived aerosol properties, modeling techniques, and radiative flux measurements. Agreement between calculated and measured downward flux changes is best when the aerosols are modeled as moderately absorbing (midvisible single-scattering albedos between about 0.89 and 0.93), in accord with independent measurements of the TARFOX aerosol. The calculated values for instantaneous daytime upwelling flux changes are in the range +14 to +48 W/sq m for midvisible optical depths between 0.2 and 0.55. These values are about 30 to 100 times the global-average direct forcing expected for the global-average sulfate aerosol optical depth of 0.04. The reasons for the larger flux changes in TARFOX include the relatively large optical depths and the focus on cloud-free, daytime conditions over the dark ocean surface. These are the conditions that produce major aerosol radiative forcing events and contribute to any global-average climate effect.

  6. Assessing the Internal Consistency of the Marine Carbon Dioxide System at High Latitudes: The Labrador Sea AR7W Line Study Case

    NASA Astrophysics Data System (ADS)

    Raimondi, L.; Azetsu-Scott, K.; Wallace, D.

    2016-02-01

    This work assesses the internal consistency of the ocean carbon dioxide system through the comparison of discrete measurements and calculated values of four analytical parameters of the inorganic carbon system: Total Alkalinity (TA), Dissolved Inorganic Carbon (DIC), pH and the Partial Pressure of CO2 (pCO2). The study is based on 486 seawater samples analyzed for TA, DIC and pH and 86 samples for pCO2, collected during the 2014 cruise along the AR7W line in the Labrador Sea. The internal consistency has been assessed using all combinations of input parameters and eight sets of thermodynamic constants (K1, K2) in calculating each parameter with the CO2SYS software. Residuals of each parameter have been calculated as the differences between measured and calculated values (reported as ΔTA, ΔDIC, ΔpH and ΔpCO2). Although differences between the selected sets of constants were observed, the largest residuals were obtained using different pairs of input parameters. As expected, the pH-pCO2 pair produced the poorest results, suggesting that measurements of either TA or DIC are needed to define the carbonate system accurately and precisely. To identify a signature of organic alkalinity, we isolated the residuals in the bloom area; therefore, only ΔTA values from surface waters (0-30 m) along the Greenland side of the basin were selected. The residuals showed that no measured value was higher than the calculated one, and therefore we could not observe the presence of organic bases in the shallower water column. The internal consistency in characteristic water masses of the Labrador Sea (Denmark Strait Overflow Water, North East Atlantic Deep Water, Newly-ventilated Labrador Sea Water, Greenland and Labrador Shelf waters) will also be discussed.

  7. Temperature dependent barrier height and ideality factor of electrodeposited n-CdSe/Cu Schottky barrier diode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahato, S., E-mail: som.phy.ism@gmail.com; Shiwakoti, N.; Kar, A. K.

    2015-06-24

    This article reports the measurement of the temperature-dependent barrier height and ideality factor of an n-CdSe/Cu Schottky barrier diode. The cadmium selenide (CdSe) thin films have been deposited by a simple electrodeposition technique. The XRD measurements reveal that the deposited single-phase CdSe films are highly oriented on the (002) plane, and the average particle size has been calculated to be ~18 nm. From the SEM characterization, it is clear that the surface of the CdSe thin films is continuous and homogeneous, and that the film is well adhered to the substrate and consists of fine grains which are irregular in shape and size. Current-voltage characteristics have been measured at different temperatures in the range 298 K - 353 K. The barrier height and ideality factor are found to be strongly temperature dependent. The inhomogeneous barrier height increases and the ideality factor decreases with increasing temperature. The expectation value of the barrier height has been calculated and is 0.30 eV.

  8. Nickel-63 microirradiators.

    PubMed

    Steeb, Jennifer; Josowicz, Mira; Janata, Jiri

    2009-03-01

    Here we report the fabrication of two types of microirradiators, consisting of a recessed disk and a protruding wire, with the low-beta-energy radionuclide Ni-63 electrodeposited onto a 25 μm diameter Pt wire. The Ni-63 is confined to a small surface area of the microelectrode; hence, this tool provides a means of delivering a localized, large dose density of beta radiation to the object but a minimal dose exposure to the user. The activity levels of Ni-63 emitted from the recessed disk and protruding wire are 0.25 and 1 Bq, respectively. The corresponding beta-particle flux levels emitted from the recessed disk and protruding wire are 51 and 11 kBq/cm(2), respectively. These values, measured experimentally using liquid scintillation counting, agree very well with the expected values of activity for each microirradiator, calculated considering the self-absorption effect typical for low-energy beta particles. In order to determine the optimal configuration, the dose rates for varying distances from the object were calculated.

  9. Infrastructure performance of irrigation canal to irrigation efficiency of irrigation area of Candi Limo in Mojokerto District

    NASA Astrophysics Data System (ADS)

    Kisnanto, S.; Hadiani, R. R. R.; Ikhsan, C.

    2018-03-01

    Performance is a measure of infrastructure success in delivering the benefits corresponding to its design. Discharge efficiency is the ratio of outflow discharge to inflow discharge. Irrigation canal performance is part of the overall performance of an irrigation area: the greater the canal performance, the better the canal is assumed to meet its planned benefits, so its relationship to discharge efficiency needs to be examined. The problem observed in the field is that the performance value of an irrigation canal is not always proportional to its discharge efficiency. This study was therefore conducted to describe the relationship between canal performance and canal discharge efficiency. The study was carried out at the Candi Limo Irrigation Area in Mojokerto District, under the authority of Pemerintahan Provinsi Jawa Timur. The primary and secondary canals were surveyed, and their physical condition forms the material of this study. Primary and secondary canal performance is assessed from the physical condition in the field, while inflow and outflow discharge measurements provide the data for the efficiency calculation. The instruments used include a current meter for discharge measurements (as a fallback where measuring structures in the field were damaged), a tape measure, and a camera. Permen PU No. 32 is used to determine the performance value of each canal, while the efficiency analysis computes the ratio of outflow to inflow discharge. Data processing consists of measuring and calculating canal performance, calculating the discharge efficiency, and graphing the relationship between performance and efficiency for each canal. The performance and discharge efficiency of the primary canal and of each of the five secondary canals are each expected to fall between 0 and 100%, and the relationship between them may be directly or inversely proportional, with magnitudes that can vary freely. The relationship between performance and discharge efficiency is graphed for each canal segment studied.

  10. A Quantitative Risk Analysis of Deficient Contractor Business System

    DTIC Science & Technology

    2012-04-30

    Mathematically, Jorion's concept of VaR looks like this: P(L > VaR) ≤ 1 − c (2) ... presents three models for calculating VaR. The local-valuation method determines the value of a portfolio once and uses mathematical derivatives ... management. In the insurance industry, actuarial data is applied to model risk and risk capital reserves are "held" to cover the expected values for
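
    A minimal historical-simulation sketch consistent with the inequality reconstructed above, where c is the confidence level and L the portfolio loss; the P&L sample is synthetic and purely illustrative.

    ```python
    import numpy as np

    def historical_var(pnl, confidence=0.99):
        """Historical-simulation VaR: the loss threshold such that
        P(loss > VaR) <= 1 - c, estimated from an empirical P&L sample."""
        losses = -np.asarray(pnl)   # sign flip: positive numbers are losses
        return np.quantile(losses, confidence)

    rng = np.random.default_rng(1)
    daily_pnl = rng.normal(0.0, 1e6, size=250)  # hypothetical daily P&L ($)
    print(f"99% one-day VaR: ${historical_var(daily_pnl):,.0f}")
    ```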

  11. Resolving the Tevatron Top Quark Forward-Backward Asymmetry Puzzle: Fully Differential Next-to-Next-to-Leading-Order Calculation.

    PubMed

    Czakon, Michal; Fiedler, Paul; Mitov, Alexander

    2015-07-31

    We determine the dominant missing standard model (SM) contribution to the top quark pair forward-backward asymmetry at the Tevatron. Contrary to past expectations, we find a large, around 27%, shift relative to the well-known value of the inclusive asymmetry in next-to-leading order QCD. Combining all known standard model corrections, we find that A(FB)(SM)=0.095±0.007. This value is in agreement with the latest DØ measurement [V. M. Abazov et al. (D0 Collaboration), Phys. Rev. D 90, 072011 (2014)] A(FB)(DØ)=0.106±0.03 and about 1.5σ below that of CDF [T. Aaltonen et al. (CDF Collaboration), Phys. Rev. D 87, 092002 (2013)] A(FB)(CDF)=0.164±0.047. Our result is derived from a fully differential calculation of the next-to-next-to-leading order (NNLO) QCD corrections to inclusive top pair production at hadron colliders and includes, without any approximation, all partonic channels contributing to this process. This is the first complete fully differential calculation in NNLO QCD of a two-to-two scattering process with all colored partons.

  12. Resolving the Tevatron Top Quark Forward-Backward Asymmetry Puzzle: Fully Differential Next-to-Next-to-Leading-Order Calculation

    NASA Astrophysics Data System (ADS)

    Czakon, Michal; Fiedler, Paul; Mitov, Alexander

    2015-07-01

    We determine the dominant missing standard model (SM) contribution to the top quark pair forward-backward asymmetry at the Tevatron. Contrary to past expectations, we find a large, around 27%, shift relative to the well-known value of the inclusive asymmetry in next-to-leading order QCD. Combining all known standard model corrections, we find that A_FB^SM = 0.095 ± 0.007. This value is in agreement with the latest DØ measurement [V. M. Abazov et al. (D0 Collaboration), Phys. Rev. D 90, 072011 (2014)] A_FB^DØ = 0.106 ± 0.03 and about 1.5σ below that of CDF [T. Aaltonen et al. (CDF Collaboration), Phys. Rev. D 87, 092002 (2013)] A_FB^CDF = 0.164 ± 0.047. Our result is derived from a fully differential calculation of the next-to-next-to-leading order (NNLO) QCD corrections to inclusive top pair production at hadron colliders and includes, without any approximation, all partonic channels contributing to this process. This is the first complete fully differential calculation in NNLO QCD of a two-to-two scattering process with all colored partons.

  13. Cosmogenic nuclides in cometary materials: Implications for rate of mass loss and exposure history

    NASA Astrophysics Data System (ADS)

    Herzog, G. F.; Englert, P. A. J.; Reedy, R. C.

    As planned, the Rosetta mission will return to Earth with a 10-kg core and a 1-kg surface sample from a comet. The selection of a comet with low current activity will maximize the chance of obtaining material altered as little as possible. Current temperature and level of activity, however, may not reliably indicate previous values. Fortunately, from measurements of the cosmogenic nuclide contents of cometary material, one may estimate a rate of mass loss in the past and perhaps learn something about the exposure history of the comet. Perhaps the simplest way to estimate the rate of mass loss is to compare the total inventories of several long-lived cosmogenic radionuclides with the values expected on the basis of model calculations. Although model calculations have become steadily more reliable, application to bodies with the composition of comets will require some extension beyond the normal range of use. In particular, the influence of light elements on the secondary particle cascade will need study, in part through laboratory irradiations of volatile-rich materials. In the analysis of cometary data, it would be valuable to test calculations against measurements of short-lived isotopes.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Bipasha; Davies, C. T. H.; Donald, G. C.

    Here, we compare correlators for pseudoscalar and vector mesons made from valence strange quarks using the clover quark and highly improved staggered quark (HISQ) formalisms in full lattice QCD. We use fully nonperturbative methods to normalise vector and axial-vector current operators made from HISQ quarks, clover quarks, and from combining HISQ and clover fields. This allows us to test expectations for the renormalisation factors based on perturbative QCD, with implications for the error budget of lattice QCD calculations of the matrix elements of clover-staggered $b$-light weak currents, as well as further HISQ calculations of the hadronic vacuum polarisation. We also compare the approach to the (same) continuum limit in the clover and HISQ formalisms for the mass and decay constant of the $\phi$ meson. Our final results for these parameters, using single-meson correlators and neglecting quark-line disconnected diagrams, are $m_\phi = 1.023(5)$ GeV and $f_\phi = 0.238(3)$ GeV, in good agreement with experiment. These results come from calculations in the HISQ formalism using gluon fields that include the effect of $u$, $d$, $s$ and $c$ quarks in the sea with three lattice spacing values and $m_{u/d}$ values going down to the physical point.

  15. Determination of aberration center of Ronchigram for automated aberration correctors in scanning transmission electron microscopy.

    PubMed

    Sannomiya, Takumi; Sawada, Hidetaka; Nakamichi, Tomohiro; Hosokawa, Fumio; Nakamura, Yoshio; Tanishiro, Yasumasa; Takayanagi, Kunio

    2013-12-01

    A generic method to determine the aberration center is established, which can be utilized for aberration calculation and axis alignment in aberration-corrected electron microscopes. In this method, decentering-induced secondary aberrations from inherent primary aberrations are minimized to find the appropriate axis center. The fitness function to find the optimal decentering vector for the axis was defined as a sum of decentering-induced secondary aberrations with properly distributed weight values according to the aberration order. Since the appropriate decentering vector is determined from the aberration values calculated at an arbitrary center axis, only one aberration measurement is in principle required to find the center, resulting in a very fast center search. This approach was tested for the Ronchigram-based aberration calculation method for aberration-corrected scanning transmission electron microscopy. Both in simulation and in experiments, the center search was confirmed to work well, although the convergence to find the best axis becomes slower with larger primary aberrations. Such aberration center determination is expected to fully automate the aberration correction procedures, which used to require pre-alignment by experienced users. This approach is also applicable to automated aperture positioning. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. A pleiotropy-informed Bayesian false discovery rate adapted to a shared control design finds new disease associations from GWAS summary statistics.

    PubMed

    Liley, James; Wallace, Chris

    2015-02-01

    Genome-wide association studies (GWAS) have been successful in identifying single nucleotide polymorphisms (SNPs) associated with many traits and diseases. However, at existing sample sizes, these variants explain only part of the estimated heritability. Leverage of GWAS results from related phenotypes may improve detection without the need for larger datasets. The Bayesian conditional false discovery rate (cFDR) constitutes an upper bound on the expected false discovery rate (FDR) across a set of SNPs whose p values for two diseases are both less than two disease-specific thresholds. Calculation of the cFDR requires only summary statistics and has several advantages over traditional GWAS analysis. However, existing methods require distinct control samples between studies. Here, we extend the technique to allow for some or all controls to be shared, increasing applicability. Several different SNP sets can be defined with the same cFDR value, and we show that the expected FDR across the union of these sets may exceed the expected FDR in any single set. We describe a procedure to establish an upper bound for the expected FDR among the union of such sets of SNPs. We apply our technique to pairwise analysis of p values from ten autoimmune diseases with variable sharing of controls, enabling discovery of 59 SNP-disease associations which do not reach GWAS significance after genomic control in individual datasets. Most of the SNPs we highlight have previously been confirmed using replication studies or larger GWAS, a useful validation of our technique; we report eight SNP-disease associations across five diseases not previously declared. Our technique extends and strengthens the previous algorithm, and establishes robust limits on the expected FDR. This approach can improve SNP detection in GWAS, and give insight into shared aetiology between phenotypically related conditions.
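
    A sketch of the basic empirical cFDR estimator, assuming the commonly used form cFDR(p, q) ≈ p · #{q_j ≤ q} / #{p_j ≤ p, q_j ≤ q}; the paper's adjustment for shared controls is not reproduced here.

    ```python
    import numpy as np

    def cfdr(p, q):
        """Empirical conditional FDR for each SNP given p values for the
        principal (p) and conditioning (q) phenotypes. Sketch of the basic
        estimator only; not the shared-control extension."""
        p, q = np.asarray(p), np.asarray(q)
        out = np.empty(len(p))
        for i in range(len(p)):
            n_q = np.sum(q <= q[i])                    # #{q_j <= q_i}
            n_pq = np.sum((p <= p[i]) & (q <= q[i]))   # joint count
            out[i] = min(1.0, p[i] * n_q / max(n_pq, 1))
        return out

    rng = np.random.default_rng(2)
    p1, p2 = rng.uniform(size=1000), rng.uniform(size=1000)
    print(cfdr(p1, p2)[:5])
    ```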

  17. Carbon 14 measurements in surface water CO2 from the Atlantic, Indian, and Pacific Oceans, 1965--1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nydal, R.; Brenkert, A.L.; Boden, T.A.

    1998-03-01

    In the 1960s, thermonuclear bomb tests released significant pulses of radioactive carbon-14 (14C) into the atmosphere. These major perturbations allowed scientists to study the dynamics of the global carbon cycle by calculating rates of isotope exchange between the atmosphere and ocean waters. A total of 950 ocean surface water observations were made from 1965 through 1994. The measurements were taken at 30 stations in the Atlantic Ocean, 14 stations in the Indian Ocean, and 38 stations in the Pacific Ocean. Thirty-two of the 950 samples were taken in the Atlantic Ocean during the R/V Andenes research cruise. 14C was measured in 871 of the 950 samples, and those measurements have been corrected (Δ14C) for isotopic fractionation and radioactive decay. The Δ14C values range between −113.3 and 280.9 per mille and have a mean value of 101.3 per mille. The highest yearly mean (146.5 per mille) was calculated for 1969, and the lowest for 1990 (67.9 per mille), illustrating a decrease over time. This decrease was to be expected as a result of the ban on atmospheric thermonuclear tests and the slow mixing of the ocean surface waters with the deeper layers.

  18. Transition Analysis for the Mars Science Laboratory Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Choudhari, Meelan M.; Hollis, Brian R.; Li, Fei

    2009-01-01

    Laminar-turbulent transition plays an important role in the design of the Mars Science Laboratory (MSL) entry vehicle. The lift-to-drag ratio required for the precision landing trajectory will be achieved via an angle of attack equal to 16 degrees. At this relatively high angle of attack, the boundary layer flow near the leeward meridian is expected to transition early in the trajectory, resulting in substantially increased heating loads. This paper presents stability calculations and transition correlations for a series of wind tunnel models of the MSL vehicle. Experimentally measured transition onset locations are used to correlate with the N-factor calculations for various wind tunnel conditions. Due to relatively low post-shock Mach numbers near the edge of the boundary layer, the dominant instability waves are found to be of the first-mode type. The N-factor values correlating with measured transition onset at selected test points from the Mach 6 conventional facility experiments fall between 3.5 and 4.5 and apparently vary linearly with the wind tunnel unit Reynolds number, indicating a strong receptivity effect. The small transition N value is consistent with previous correlations for second-mode dominant transition in the same wind tunnel facility. Stability calculations for stationary and traveling crossflow instability waves in selected configurations indicate that N values of 4 and 6, respectively, correlate reasonably well with transition onset discerned from one experimentally measured thermographic image.

  19. Derivatives of random matrix characteristic polynomials with applications to elliptic curves

    NASA Astrophysics Data System (ADS)

    Snaith, N. C.

    2005-12-01

    The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.

  20. Probing for the Multiplicative Term in Modern Expectancy-Value Theory: A Latent Interaction Modeling Study

    ERIC Educational Resources Information Center

    Trautwein, Ulrich; Marsh, Herbert W.; Nagengast, Benjamin; Ludtke, Oliver; Nagy, Gabriel; Jonkmann, Kathrin

    2012-01-01

    In modern expectancy-value theory (EVT) in educational psychology, expectancy and value beliefs additively predict performance, persistence, and task choice. In contrast to earlier formulations of EVT, the multiplicative term Expectancy x Value in regression-type models typically plays no major role in educational psychology. The present study…

  1. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
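
    A minimal sketch of a t-test-based stopping rule in the spirit of the abstract, assuming i.i.d. per-particle dose deposits and phrasing the criterion as the (1 - alpha) confidence interval shrinking below the error threshold; this is an illustration, not the authors' implementation.

    ```python
    import numpy as np
    from scipy import stats

    def mc_dose_with_stopping(sampler, tol, alpha=0.05,
                              batch=10_000, max_batches=100):
        """Run batches of particle histories until the t-based confidence
        interval on the mean voxel dose is narrower than tol.

        sampler(n) must return n i.i.d. per-particle dose deposits
        for one voxel (a stand-in for the transport code).
        """
        deposits = np.array([])
        for _ in range(max_batches):
            deposits = np.concatenate([deposits, sampler(batch)])
            sem = deposits.std(ddof=1) / np.sqrt(len(deposits))
            # Half-width of the (1 - alpha) CI from the t distribution.
            half_width = stats.t.ppf(1 - alpha / 2, len(deposits) - 1) * sem
            if half_width < tol:
                break
        return deposits.mean(), len(deposits)

    rng = np.random.default_rng(3)
    dose, n = mc_dose_with_stopping(lambda n: rng.exponential(1.0, n), tol=0.01)
    print(f"dose ~ {dose:.4f} after {n} histories")
    ```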

  2. A comparison of direct and indirect methods for the estimation of health utilities from clinical outcomes.

    PubMed

    Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb

    2014-10-01

    Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.

  3. Automated neurovascular tracing and analysis of the knife-edge scanning microscope Rat Nissl data set using a computing cluster.

    PubMed

    Sungjun Lim; Nowak, Michael R; Yoonsuck Choe

    2016-08-01

    We present a novel, parallelizable algorithm capable of automatically reconstructing and calculating anatomical statistics of cerebral vascular networks embedded in large volumes of Rat Nissl-stained data. In this paper, we report the results of our method using Rattus somatosensory cortical data acquired using Knife-Edge Scanning Microscopy. Our algorithm performs the reconstruction task with averaged precision, recall, and F2-score of 0.978, 0.892, and 0.902, respectively. Calculated anatomical statistics show some conformance to values previously reported. The results that can be obtained from our method are expected to help explicate the relationship between the structural organization of the microcirculation and normal (and abnormal) cerebral functioning.
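
    For reference, the F2-score weights recall more heavily than precision. Recomputing it from the averaged precision and recall gives a slightly different number than the reported 0.902, presumably because the paper averages per-volume scores rather than pooling.

    ```python
    def f_beta(precision, recall, beta=2.0):
        """F-beta score; beta = 2 weights recall above precision."""
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # Averaged values reported in the abstract.
    print(round(f_beta(0.978, 0.892), 3))
    ```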

  4. Gauge-invariant expectation values of the energy of a molecule in an electromagnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandal, Anirban; Hunt, Katharine L. C.

    In this paper, we show that the full Hamiltonian for a molecule in an electromagnetic field can be separated into a molecular Hamiltonian and a field Hamiltonian, both with gauge-invariant expectation values. The expectation value of the molecular Hamiltonian gives physically meaningful results for the energy of a molecule in a time-dependent applied field. In contrast, the usual partitioning of the full Hamiltonian into molecular and field terms introduces an arbitrary gauge-dependent potential into the molecular Hamiltonian and leaves a gauge-dependent form of the Hamiltonian for the field. With the usual partitioning of the Hamiltonian, this same problem of gauge dependence arises even in the absence of an applied field, as we show explicitly by considering a gauge transformation from zero applied field and zero external potentials to zero applied field, but non-zero external vector and scalar potentials. We resolve this problem and also remove the gauge dependence from the Hamiltonian for a molecule in a non-zero applied field and from the field Hamiltonian, by repartitioning the full Hamiltonian. It is possible to remove the gauge dependence because the interaction of the molecular charges with the gauge potential cancels identically with a gauge-dependent term in the usual form of the field Hamiltonian. We treat the electromagnetic field classically and treat the molecule quantum mechanically, but nonrelativistically. Our derivation starts from the Lagrangian for a set of charged particles and an electromagnetic field, with the particle coordinates, the vector potential, the scalar potential, and their time derivatives treated as the variables in the Lagrangian. We construct the full Hamiltonian using a Lagrange multiplier method originally suggested by Dirac, partition this Hamiltonian into a molecular term H_m and a field term H_f, and show that both H_m and H_f have gauge-independent expectation values. Any gauge may be chosen for the calculations; but following our partitioning, the expectation values of the molecular Hamiltonian are identical to those obtained directly in the Coulomb gauge. As a corollary of this result, the power absorbed by a molecule from a time-dependent, applied electromagnetic field is equal to the time derivative of the non-adiabatic term in the molecular energy, in any gauge.

  5. Experimental and Theoretical Studies of Interstellar Grains. Ph.D. Thesis - Maryland Univ., College Park, 1982

    NASA Technical Reports Server (NTRS)

    Nuth, J. A., III

    1981-01-01

    Steady state vibrational populations of SiO and CO in dilute black body radiation fields were calculated as a function of total pressure, kinetic temperature and chemical composition of the gas. Approximate calculations for polyatomic molecules are presented. Vibrational disequilibrium becomes increasingly significant as total pressure and radiation density decrease. Many regions of postulated grain formation are found to be far from thermal equilibrium before the onset of nucleation. Calculations based upon classical nucleation theory or equilibrium thermodynamics are expected to be of dubious value in such regions. Laboratory measurements of the extinction of small iron and magnetite grains were made from 195 nm to 830 nm and found to be consistent with predictions based upon published optical constants. This implies that small iron particles are not responsible for the 220 nm interstellar extinction features. Additional measurements are discussed.

  6. Neutral-atom electron binding energies from relaxed-orbital relativistic Hartree-Fock-Slater calculations for Z between 2 and 106

    NASA Technical Reports Server (NTRS)

    Huang, K.-N.; Aoyagi, M.; Mark, H.; Chen, M. H.; Crasemann, B.

    1976-01-01

    Electron binding energies in neutral atoms have been calculated relativistically, with the requirement of complete relaxation. Hartree-Fock-Slater wave functions served as zeroth-order eigenfunctions to compute the expectation of the total Hamiltonian. A first-order correction to the local approximation was thus included. Quantum-electrodynamic corrections were made. For all elements with atomic numbers ranging from 2 to 106, the following quantities are listed: total energies, electron kinetic energies, electron-nucleus potential energies, electron-electron potential energies consisting of electrostatic and Breit interaction (magnetic and retardation) terms, and vacuum polarization energies. Binding energies including relaxation are listed for all electrons in all atoms over the indicated range of atomic numbers. A self-energy correction is included for the 1s, 2s, and 2p(1/2) levels. Results for selected atoms are compared with energies calculated by other methods and with experimental values.

  7. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microseconds. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
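
    A sketch of the syndrome step for a (31,15) code over GF(2^5), assuming the primitive polynomial x^5 + x^2 + 1 (the paper does not say which polynomial the hardware used); a nonzero syndrome flags a detectable error.

    ```python
    # GF(2^5) log/antilog tables built from an assumed primitive polynomial.
    PRIM_POLY = 0b100101          # x^5 + x^2 + 1
    EXP, LOG = [0] * 62, [0] * 32
    x = 1
    for i in range(31):           # the multiplicative group has order 31
        EXP[i] = x
        LOG[x] = i
        x <<= 1
        if x & 0b100000:          # reduce modulo the primitive polynomial
            x ^= PRIM_POLY
    for i in range(31, 62):       # wrap-around to avoid a modulo in gf_mul
        EXP[i] = EXP[i - 31]

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def eval_poly(coeffs, point):
        acc = 0
        for c in coeffs:          # Horner's rule; addition in GF(2^m) is XOR
            acc = gf_mul(acc, point) ^ c
        return acc

    def syndromes(received, t=8):
        """S_i = r(alpha^i) for i = 1..2t; all zero iff no detectable error."""
        return [eval_poly(received, EXP[i]) for i in range(1, 2 * t + 1)]

    codeword = [0] * 31           # the all-zero word is a valid codeword
    codeword[4] ^= 7              # inject a single symbol error
    print(any(syndromes(codeword)))  # True: the error is detected
    ```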

  8. Experimental measurement and calculation of losses in planar radial magnetic bearings

    NASA Technical Reports Server (NTRS)

    Kasarda, M. E. F.; Allaire, P. E.; Hope, R. W.; Humphris, R. R.

    1994-01-01

    The loss mechanisms associated with magnetic bearings have yet to be adequately characterized or modeled analytically and thus pose a problem for the designer of magnetic bearings. This problem is particularly important for aerospace applications, where low power consumption of components is critical. Also, losses are expected to be large for high-speed operation. The iron losses in magnetic bearings can be divided into eddy current losses and hysteresis losses. While theoretical models for these losses exist for transformer and electric motor applications, they have not been verified for magnetic bearings. This paper presents the results from a low-speed experimental test rig and compares them to values calculated from existing theory. Experimental data were taken over a range of 90 to 2,800 rpm for several bias currents and two different pole configurations. With certain assumptions, agreement between measured and calculated power losses was within 16 percent for a number of test configurations.
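
    The transformer-style models referred to are typically the Steinmetz hysteresis law plus a classical eddy-current term; a sketch follows, with k_h, n, k_e and the pole-passing frequency all invented placeholders rather than values from the paper.

    ```python
    def iron_losses(f_hz, B_peak_T, k_h=120.0, n=1.8, k_e=0.8):
        """Per-unit-volume iron loss (W/m^3) from classical theory:
        hysteresis ~ f * B^n (Steinmetz), eddy current ~ f^2 * B^2."""
        p_hysteresis = k_h * f_hz * B_peak_T ** n
        p_eddy = k_e * (f_hz * B_peak_T) ** 2
        return p_hysteresis + p_eddy

    for rpm in (90, 1000, 2800):
        f = rpm / 60.0 * 8   # assumed 8 pole passings per revolution
        print(rpm, "rpm ->", round(iron_losses(f, 0.6), 1), "W/m^3")
    ```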

  9. An economic prediction of the finer resolution level wavelet coefficients in electronic structure calculations.

    PubMed

    Nagy, Szilvia; Pipek, János

    2015-12-21

    In wavelet based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to calculate with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer resolution coefficients in a very economic way. In the present contribution, we determine whether the method can be applied to predict not only the first but also the other, higher resolution level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine resolution limit.

  10. Comparison of monoenergetic photon organ dose rate coefficients for stylized and voxel phantoms submerged in air

    DOE PAGES

    Bellamy, Michael B.; Hiller, Mauritius M.; Dewji, Shaheen A.; ...

    2016-02-01

    As part of a broader effort to calculate effective dose rate coefficients for external exposure to photons and electrons emitted by radionuclides distributed in air, soil or water, age-specific stylized phantoms have been employed to determine dose coefficients relating dose rate to organs and tissues in the body. In this article, dose rate coefficients computed using the International Commission on Radiological Protection reference adult male voxel phantom are compared with values computed using the Oak Ridge National Laboratory adult male stylized phantom in an air submersion exposure geometry. Monte Carlo calculations for both phantoms were performed for monoenergetic source photons in the range of 30 keV to 5 MeV. These calculations largely result in differences under 10% for photon energies above 50 keV, and it can be expected that both models show comparable results for environmental sources of radionuclides.

  11. Efficient and accurate treatment of electron correlations with correlation matrix renormalization theory

    DOE PAGES

    Yao, Y. X.; Liu, J.; Liu, C.; ...

    2015-08-28

    We present an efficient method for calculating the electronic structure and total energy of strongly correlated electron systems. The method extends the traditional Gutzwiller approximation for one-particle operators to the evaluation of the expectation values of two-particle operators in the many-electron Hamiltonian. The method is free of adjustable Coulomb parameters, has no double-counting issues in the calculation of the total energy, and has the correct atomic limit. We demonstrate that the method describes well the bonding and dissociation behaviors of hydrogen and nitrogen clusters, as well as ammonia composed of hydrogen and nitrogen atoms. We also show that the method can satisfactorily tackle challenging problems faced by density functional theory that have recently been discussed in the literature. The computational workload of our method is similar to the Hartree-Fock approach, while the results are comparable to high-level quantum chemistry calculations.

  12. Multireference configuration interaction calculations of the first six ionization potentials of the uranium atom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bross, David H.; Parmar, Payal; Peterson, Kirk A., E-mail: kipeters@wsu.edu

    The first six ionization potentials (IPs) of the uranium atom have been calculated using multireference configuration interaction (MRCI+Q) with extrapolations to the complete basis set limit using new all-electron correlation consistent basis sets. The latter was carried out with the third-order Douglas-Kroll-Hess Hamiltonian. Correlation down through the 5s5p5d electrons has been taken into account, as well as contributions to the IPs due to the Lamb shift. Spin-orbit coupling contributions calculated at the 4-component Kramers restricted configuration interaction level, as well as the Gaunt term computed at the Dirac-Hartree-Fock level, were added to the best scalar relativistic results. The final ionization potentials are expected to be accurate to at least 5 kcal/mol (0.2 eV) and thus more reliable than the current experimental values of IP_3 through IP_6.

  13. Comparison of monoenergetic photon organ dose rate coefficients for stylized and voxel phantoms submerged in air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellamy, Michael B.; Hiller, Mauritius M.; Dewji, Shaheen A.

    As part of a broader effort to calculate effective dose rate coefficients for external exposure to photons and electrons emitted by radionuclides distributed in air, soil or water, age-specific stylized phantoms have been employed to determine dose coefficients relating dose rate to organs and tissues in the body. In this article, dose rate coefficients computed using the International Commission on Radiological Protection reference adult male voxel phantom are compared with values computed using the Oak Ridge National Laboratory adult male stylized phantom in an air submersion exposure geometry. Monte Carlo calculations for both phantoms were performed for monoenergetic source photons in the range of 30 keV to 5 MeV. These calculations largely result in differences under 10% for photon energies above 50 keV, and it can be expected that both models show comparable results for environmental sources of radionuclides.

  14. A model for calculating expected performance of the Apollo unified S-band (USB) communication system

    NASA Technical Reports Server (NTRS)

    Schroeder, N. W.

    1971-01-01

    A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.

  15. Predicting problem behaviors with multiple expectancies: expanding expectancy-value theory.

    PubMed

    Borders, Ashley; Earleywine, Mitchell; Huey, Stanley J

    2004-01-01

    Expectancy-value theory emphasizes the importance of outcome expectancies for behavioral decisions, but most tests of the theory focus on a single behavior and a single expectancy. However, the matching law suggests that individuals consider expected outcomes for both the target behavior and alternative behaviors when making decisions. In this study, we expanded expectancy-value theory to evaluate the contributions of two competing expectancies to adolescent behavior problems. One hundred twenty-one high school students completed measures of behavior problems, expectancies for both acting out and academic effort, and perceived academic competence. Students' self-reported behavior problems covaried mostly with perceived competence and academic expectancies and only nominally with problem behavior expectancies. We suggest that behavior problems may result from students perceiving a lack of valued or feasible alternative behaviors, such as studying. We discuss implications for interventions and suggest that future research continue to investigate the contribution of alternative expectancies to behavioral decisions.

  16. Digital Rock Physics Aplications: Visualisation Complex Pore and Porosity-Permeability Estimations of the Porous Sandstone Reservoir

    NASA Astrophysics Data System (ADS)

    Handoyo; Fatkhan; Del, Fourier

    2018-03-01

    Reservoir rock containing oil and gas generally has high porosity and permeability. High porosity is expected to accommodate hydrocarbon fluid in large quantities, and high permeability is associated with the rock's ability to let hydrocarbon fluid flow optimally. Porosity and permeability measurements of a rock sample are usually performed in the laboratory. Here, we estimate the porosity and permeability of sandstones digitally using images from μCT-scans. The method is non-destructive, can be applied to small rock fragments, and the model is easy to construct. The porosity values are calculated by comparing the pore volume of the digital image to the total volume of the sandstone, while the permeability values are calculated using Lattice Boltzmann simulations, which exploit conservation of mass and momentum at the particle level. To determine variations of the porosity and permeability, the main sandstone sample, with a dimension of 300 × 300 × 300 pixels, is divided into eight sub-cubes with a size of 150 × 150 × 150 pixels. The simulated fluid flow is visualized as streamlines. The sandstone porosity varies between 0.30 and 0.38, and the permeability lies in the range of 4000 mD to 6200 mD. The results of the calculations show that the sandstone sample in this research is highly porous and permeable. The method, combined with rock physics, can be a powerful tool for determining rock properties from small rock fragments.
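
    A minimal sketch of the porosity part of this workflow, assuming a segmented boolean voxel volume (True = pore) and the 300³-into-eight-150³ sub-cube split described; the synthetic segmentation stands in for real μCT data.

    ```python
    import numpy as np

    def porosity(binary_volume):
        """Porosity = pore voxels / total voxels (True = pore, False = grain)."""
        v = np.asarray(binary_volume, dtype=bool)
        return v.sum() / v.size

    def subcubes(volume, size=150):
        """Yield the eight sub-cubes of a (2*size)^3 cube."""
        for i in (0, size):
            for j in (0, size):
                for k in (0, size):
                    yield volume[i:i + size, j:j + size, k:k + size]

    rng = np.random.default_rng(4)
    # Stand-in segmentation with roughly 34% pore voxels.
    vol = rng.integers(0, 100, size=(300, 300, 300), dtype=np.int8) < 34
    print([round(porosity(s), 3) for s in subcubes(vol)])
    ```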

  17. Natal dispersal and genetic structure in a population of the European wild rabbit (Oryctolagus cuniculus).

    PubMed

    Webb, N J; Ibrahim, K M; Bell, D J; Hewitt, G M

    1995-04-01

    A combination of behavioural observation, DNA fingerprinting, and allozyme analysis were used to examine natal dispersal in a wild rabbit population. Rabbits lived in territorial, warren based social groups. Over a 6-year period, significantly more male than female rabbits moved to a new social group before the start of their first breeding season. This pattern of female philopatry and male dispersal was reflected in the genetic structure of the population. DNA fingerprint band-sharing coefficients were significantly higher for females within the same group than for females between groups, while this was not the case for males. Wright's inbreeding coefficients were calculated from fingerprint band-sharing values and compared to those obtained from allozyme data. There was little correlation between the relative magnitudes of the F-statistics calculated using the two techniques for comparisons between different social groups. In contrast, two alternative methods for calculating FST from DNA fingerprints gave reasonably concordant values although those based on band-sharing were consistently lower than those calculated by an 'allele' frequency approach. A negative FIS value was obtained from allozyme data. Such excess heterozygosity within social groups is expected even under random mating given the social structure and sex-biased dispersal but it is argued that the possibility of behavioural avoidance of inbreeding should not be discounted in this species. Estimates of genetic differentiation obtained from allozyme and DNA fingerprint data agreed closely with reported estimates for the yellow-bellied marmot, a species with a very similar social structure to the European rabbit.

  18. A revised method for calculation of life expectancy tables from individual death records which provides increased accuracy at advanced ages.

    PubMed

    Mathisen, R W; Mazess, R B

    1981-02-01

    The authors present a revised method for calculating life expectancy tables for populations where individual ages at death are known or can be estimated. The conventional and revised methods are compared using data for U.S. and Hungarian males in an attempt to determine the accuracy of each method in calculating life expectancy at advanced ages. Means of correcting errors caused by age rounding, age exaggeration, and infant mortality are presented.
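
    A sketch of the underlying idea, life expectancy at age x computed directly from individual ages at death; the corrections for age rounding, age exaggeration, and infant mortality that the paper introduces are not reproduced.

    ```python
    import numpy as np

    def life_expectancy_at(ages_at_death, x=0):
        """e_x = mean remaining years among individuals surviving to age x,
        computed from a list of individual ages at death."""
        d = np.asarray(ages_at_death, dtype=float)
        survivors = d[d >= x]
        return (survivors - x).mean()

    # Hypothetical ages at death for a small cohort.
    deaths = [0.4, 2, 35, 47, 61, 66, 70, 71, 74, 78, 80, 83, 85, 88, 92]
    print(round(life_expectancy_at(deaths, 0), 1))   # e_0
    print(round(life_expectancy_at(deaths, 65), 1))  # e_65
    ```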

  19. A Monte Carlo study of the impact of the choice of rectum volume definition on estimates of equivalent uniform doses and the volume parameter

    NASA Astrophysics Data System (ADS)

    Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav

    2004-08-01

    Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms that are based on one CT scan only, in terms of equivalent uniform dose. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study, the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume gave the best estimates of the volume parameter, but there were distinct differences in the mean values obtained.
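
    For context, a common scalar reduction of a dose distribution of the kind used here is the generalized equivalent uniform dose; the sketch below assumes the Niemierko form with an illustrative volume parameter, whereas the study estimates that parameter from data.

    ```python
    import numpy as np

    def geud(doses_gy, volumes, n=0.09):
        """Generalized equivalent uniform dose (Niemierko):
        gEUD = (sum_i v_i * D_i^(1/n))^n, with v_i fractional volumes.
        n = 0.09 is purely illustrative for a serial-like organ."""
        v = np.asarray(volumes, dtype=float)
        v = v / v.sum()                      # normalize to fractional volumes
        d = np.asarray(doses_gy, dtype=float)
        return (v * d ** (1.0 / n)).sum() ** n

    # Hypothetical differential DVH: dose bins (Gy) and relative volumes.
    print(round(geud([20, 40, 60, 70], [0.4, 0.3, 0.2, 0.1]), 1))
    ```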

  20. Mapping the Dark Matter with 6dFGS

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy R.; Magoulas, C.; Springob, C.; Colless, M.; Jones, H.; Lucey, J.; Erdogdu, P.; Campbell, L.

    2012-05-01

    Fundamental plane distances from the 6dF Galaxy Redshift Survey are fitted to a model of the density field within 200/h Mpc. Likelihood is maximized for a single value of the local galaxy density, as expected in linear theory for the relation between overdensity and peculiar velocity. The dipole of the inferred southern hemisphere early-type galaxy peculiar velocities is calculated within 150/h Mpc, before and after correction for the individual galaxy velocities predicted by the model. The former agrees with that obtained by other peculiar velocity studies (e.g. SFI++). The latter is only of order 150 km/s and consistent with the expectations of the standard cosmological model and recent forecasts of the cosmic Mach number, which show a linearly declining bulk flow with increasing scale.

  1. Galactic dual population models of gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Higdon, J. C.; Lingenfelter, R. E.

    1994-01-01

    We investigate in more detail the properties of two-population models for gamma-ray bursts in the galactic disk and halo. We calculate the gamma-ray burst statistical properties ⟨V/V_max⟩, ⟨cos Θ⟩, and ⟨sin² b⟩ as functions of the detection flux threshold for bursts coming from both Galactic disk and massive halo populations. We consider halo models inferred from the observational constraints on the large-scale Galactic structure and we compare the expected values of ⟨V/V_max⟩, ⟨cos Θ⟩, and ⟨sin² b⟩ with those measured by the Burst and Transient Source Experiment (BATSE) and other detectors. We find that the measured values are consistent with solely Galactic populations having a range of halo distributions, mixed with local disk distributions, which can account for as much as approximately 25% of the observed BATSE bursts. M31 does not contribute to these modeled bursts. We also demonstrate, contrary to recent arguments, that the size-frequency distributions of dual population models are quite consistent with the BATSE observations.
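
    A minimal sketch of the ⟨V/V_max⟩ statistic used here: for each detected burst, V/V_max = (F/F_lim)^(-3/2), and a homogeneous Euclidean population gives a mean of 0.5. The toy flux distribution below is an assumption chosen only to demonstrate that limit.

        import numpy as np

        def mean_v_vmax(peak_fluxes, flux_limit):
            # V/Vmax = (F / F_lim)^(-3/2) for each detected burst
            f = np.asarray(peak_fluxes, dtype=float)
            f = f[f >= flux_limit]
            return np.mean((f / flux_limit) ** -1.5)

        # toy homogeneous (Euclidean) population: P(F > f) ~ f^(-3/2)
        rng = np.random.default_rng(0)
        f = rng.uniform(0.0, 1.0, 100_000) ** (-2.0 / 3.0)
        print(mean_v_vmax(f, flux_limit=1.0))   # ~0.5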

  2. Combined calculi for photon orbital and spin angular momenta

    NASA Astrophysics Data System (ADS)

    Elias, N. M.

    2014-08-01

    Context. Wavelength, photon spin angular momentum (PSAM), and photon orbital angular momentum (POAM), completely describe the state of a photon or an electric field (an ensemble of photons). Wavelength relates directly to energy and linear momentum, the corresponding kinetic quantities. PSAM and POAM, themselves kinetic quantities, are colloquially known as polarization and optical vortices, respectively. Astrophysical sources emit photons that carry this information. Aims: PSAM characteristics of an electric field (intensity) are compactly described by the Jones (Stokes/Mueller) calculus. Similarly, I created calculi to represent POAM characteristics of electric fields and intensities in an astrophysical context. Adding wavelength dependence to all of these calculi is trivial. The next logical steps are to 1) form photon total angular momentum (PTAM = POAM + PSAM) calculi; 2) prove their validity using operators and expectation values; and 3) show that instrumental PSAM can affect measured POAM values for certain types of electric fields. Methods: I derive the PTAM calculi of electric fields and intensities by combining the POAM and PSAM calculi. I show how these quantities propagate from celestial sphere to image plane. I also form the PTAM operator (the sum of the POAM and PSAM operators), with and without instrumental PSAM, and calculate the corresponding expectation values. Results: Apart from the vector, matrix, dot product, and direct product symbols, the PTAM and POAM calculi appear superficially identical. I provide tables with all possible forms of PTAM calculi. I prove that PTAM expectation values are correct for instruments with and without instrumental PSAM. I also show that POAM measurements of "unfactored" PTAM electric fields passing through non-zero instrumental circular PSAM can be biased. Conclusions: The combined PTAM calculi provide insight into mathematically modeling PTAM sources and calibrating POAM- and PSAM-induced measurement errors.

  3. Determination of prescription dose for Cs-131 permanent implants using the BED formalism including resensitization correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Wei, E-mail: wei.luo@uky.edu; Molloy, Janelle; Aryal, Prakash

    2014-02-15

    Purpose: The current widely used biological equivalent dose (BED) formalism for permanent implants is based on the linear-quadratic model that includes cell repair and repopulation but not resensitization (redistribution and reoxygenation). The authors propose a BED formalism that includes all four biological effects (4Rs), and they propose how it can be used to calculate appropriate prescription doses for permanent implants with Cs-131. Methods: A resensitization correction was added to the BED calculation for permanent implants to account for 4Rs. Using the same BED, the prescription doses with Au-198, I-125, and Pd-103 were converted to the isoeffective Cs-131 prescription doses. The conversion factor F, the ratio of the Cs-131 dose to the equivalent dose with the other reference isotope (F_r: with resensitization, F_n: without resensitization), was thus derived and used for actual prescription. Different values of biological parameters such as α, β, and relative biological effectiveness for different types of tumors were used for the calculation. Results: Prescription doses with I-125, Pd-103, and Au-198 ranging from 10 to 160 Gy were converted into prescription doses with Cs-131. The difference in dose conversion factors with (F_r) and without (F_n) resensitization was significant but varied with different isotopes and different types of tumors. The conversion factors also varied with different doses. For I-125, the average values of F_r/F_n were 0.51/0.46 for fast growing tumors, and 0.88/0.77 for slow growing tumors. For Pd-103, the average values of F_r/F_n were 1.25/1.15 for fast growing tumors, and 1.28/1.22 for slow growing tumors. For Au-198, the average values of F_r/F_n were 1.08/1.25 for fast growing tumors, and 1.00/1.06 for slow growing tumors. Using the biological parameters for the HeLa/C4-I cells, the averaged value of F_r was 1.07/1.11 (rounded to 1.1), and the averaged value of F_n was 1.75/1.18. F_r of 1.1 has been applied to gynecological cancer implants, with acute reactions and outcomes as expected based on extensive experience with permanent implants. The calculation also gave the average Cs-131 dose of 126 Gy converted from the I-125 dose of 144 Gy for prostate implants. Conclusions: Inclusion of an allowance for resensitization led to significant dose corrections for Cs-131 permanent implants, and should be applied to prescription dose calculation. The adjustment of the Cs-131 prescription doses with resensitization correction for gynecological permanent implants was consistent with clinical experience and observations. However, the Cs-131 prescription doses converted from other implant doses can be further adjusted based on new experimental results, clinical observations, and clinical outcomes.

  5. Promoting Physical Activity in Hong Kong Chinese Young People: Factors Influencing Their Subjective Task Values and Expectancy Beliefs in Physical Activity

    ERIC Educational Resources Information Center

    Pang, Bonnie

    2014-01-01

    According to Eccles et al.'s (1983) Expectancy Value Model, the two major constructs that influence young people's activity choice are subjective task value and expectancy beliefs. Eccles et al. (1983) conceptually distinguished four dimensions of subjective task value: attainment value, intrinsic value, utility value and…

  6. Nuclear analysis of structural damage and nuclear heating on enhanced K-DEMO divertor model

    NASA Astrophysics Data System (ADS)

    Park, J.; Im, K.; Kwon, S.; Kim, J.; Kim, D.; Woo, M.; Shin, C.

    2017-12-01

    This paper addresses nuclear analysis of the Korean fusion demonstration reactor (K-DEMO) divertor to estimate the overall trend of nuclear heating values and displacement damage. The K-DEMO divertor model was created and converted by the CAD (Pro-Engineer™) and Monte Carlo automatic modeling programs as a 22.5° sector of the tokamak. The Monte Carlo neutron photon transport and ADVANTG codes were used in this calculation with the FENDL-2.1 nuclear data library. The calculation results indicate that the highest values appear on the upper outboard target (OT) area, which means the OT is exposed to the highest radiation conditions among the three plasma-facing parts (inboard, central and outboard) of the divertor. In particular, the lower part of the OT shows much lower nuclear heating values and displacement damage than the other regions. These results feed into thermal-hydraulic and thermo-mechanical analyses of the divertor, and they suggest that copper alloy materials, with their high thermal conductivity, may be partially used as a heat sink at the lower part of the OT instead of the reduced-activation ferritic-martensitic steel.

  7. Value-at-Risk analysis using ARMAX GARCHX approach for estimating risk of banking subsector stock returns

    NASA Astrophysics Data System (ADS)

    Dewi Ratih, Iis; Sutijo Supri Ulama, Brodjol; Prastuti, Mike

    2018-03-01

    Value at Risk (VaR) is one of the statistical methods used to measure market risk by estimating the worst loss over a given time period at a given confidence level. The accuracy of this measure is very important in determining the amount of capital that a company must hold to cope with possible losses, because greater risk implies greater potential losses at a given probability level. For this reason, VaR calculation is of particular concern to researchers and practitioners of the stock market, with the aim of obtaining more accurate estimates. In this research, a risk analysis of four banking subsector stocks is carried out: Bank Rakyat Indonesia, Bank Mandiri, Bank Central Asia and Bank Negara Indonesia. Stock returns are expected to be influenced by exogenous variables, namely the ICI and the exchange rate. Therefore, stock risk is estimated using the VaR ARMAX-GARCHX method. Calculating the VaR value with the ARMAX-GARCHX approach using a window of 500 gives more accurate results. Overall, Bank Central Asia is the only bank that had the estimated maximum loss in the 5% quantile.
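
    For orientation, VaR at confidence level 1−α is the α-quantile of the loss distribution. The sketch below shows generic historical and normal-parametric estimators; the paper instead drives the quantile from fitted ARMAX-GARCHX mean and volatility forecasts, which are not reproduced here.

        import numpy as np
        from scipy.stats import norm

        def var_historical(returns, alpha=0.05):
            # historical VaR: the loss exceeded with probability alpha
            return -np.quantile(returns, alpha)

        def var_parametric(mu, sigma, alpha=0.05):
            # normal-model VaR from a forecast mean and volatility
            # (an ARMAX-GARCHX fit would supply mu and sigma per day)
            return -(mu + sigma * norm.ppf(alpha))

        rng = np.random.default_rng(0)
        r = rng.normal(0.0005, 0.02, 1000)   # toy daily returns
        print(var_historical(r), var_parametric(r.mean(), r.std()))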

  8. WE-G-204-02: Utility of a Channelized Hotelling Model Observer Over a Large Range of Angiographic Exposure Levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fetterly, K; Favazza, C

    2015-06-15

    Purpose: Mathematical model observers provide a figure of merit that simultaneously considers a test object and the contrast, noise, and spatial resolution properties of an imaging system. The purpose of this work was to investigate the utility of a channelized Hotelling model observer (CHO) to assess system performance over a large range of angiographic exposure conditions. Methods: A 4 mm diameter disk shaped, iodine contrast test object was placed on a 20 cm thick Lucite phantom and 1204 image frames were acquired using fixed x-ray beam quality and for several detector target dose (DTD) values in the range 6 to 240 nGy. The CHO was implemented in the spatial domain utilizing 96 Gabor functions as channels. Detectability index (DI) estimates were calculated using the "resubstitution" and "holdout" methods to train the CHO. Also, DI values calculated using discrete subsets of the data were used to estimate a minimally biased DI as might be expected from an infinitely large dataset. The relationship between DI, independently measured CNR, and changes in results expected assuming a quantum limited detector were assessed over the DTD range. Results: CNR measurements demonstrated that the angiography system is not quantum limited due to relatively increasing contamination from electronic noise that reduces CNR for low DTD. Direct comparison of DI versus CNR indicates that the CHO relatively overestimates DI for low DTD and/or underestimates DI values for high DTD. The relative magnitude of the apparent bias error in the DI values was ∼20% over the 40x DTD range investigated. Conclusion: For the angiography system investigated, the CHO can provide a minimally biased figure of merit if implemented over a restricted exposure range. However, bias leads to overestimates of DI for low exposures. This work emphasizes the need to verify CHO model performance during real-world application.
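
    The CHO figure of merit reduces, in channel space, to a Hotelling detectability index. A minimal sketch, assuming the 96 Gabor channels are supplied as pixel-space column vectors (the random stand-ins in the demo are not real Gabor functions):

        import numpy as np

        def cho_detectability(signal_imgs, noise_imgs, channels):
            # channel outputs: (n_images, n_channels)
            vs = signal_imgs @ channels
            vn = noise_imgs @ channels
            dv = vs.mean(axis=0) - vn.mean(axis=0)   # mean channel-output difference
            S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
            w = np.linalg.solve(S, dv)               # Hotelling template
            return float(np.sqrt(dv @ w))            # detectability index

        # toy demo with random stand-ins for the channel matrix
        rng = np.random.default_rng(0)
        px, nch = 64 * 64, 96
        channels = rng.normal(size=(px, nch))
        noise_imgs = rng.normal(size=(200, px))
        signal_imgs = rng.normal(size=(200, px)) + 0.05   # faint flat "disk"
        print(cho_detectability(signal_imgs, noise_imgs, channels))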

  9. Variational Calculation of the Ground State of Closed-Shell Nuclei Up to A = 40

    DOE PAGES

    Lonardoni, Diego; Lovato, Alessandro; Pieper, Steven C.; ...

    2017-08-31

    Variational calculations of ground-state properties of 4He, 16O and 40Ca are carried out employing realistic phenomenological two- and three-nucleon potentials. The trial wave function includes two- and three-body correlations acting on a product of single-particle determinants. Expectation values are evaluated with a cluster expansion for the spin-isospin dependent correlations considering up to five-body cluster terms. The optimal wave function is obtained by minimizing the energy expectation value over a set of up to 20 parameters by means of a nonlinear optimization library. We present results for the binding energy, charge radius, point density, single-nucleon momentum distribution, charge form factor, and Coulomb sum rule. We find that the employed three-nucleon interaction becomes repulsive for A ≥ 16. In 16O the inclusion of such a force provides a better description of the properties of the nucleus. In 40Ca instead, the repulsive behavior of the three-body interaction fails to reproduce experimental data for the charge radius and the charge form factor. We find that the high-momentum region of the momentum distributions, determined by the short-range terms of nuclear correlations, exhibits a universal behavior independent of the particular nucleus. The comparison of the Coulomb sum rules for 4He, 16O, and 40Ca reported in this work will help elucidate in-medium modifications of the nucleon form factors.

  10. Magnetic field pitch angle and perpendicular velocity measurements from multi-point time-delay estimation of poloidal correlation reflectometry

    NASA Astrophysics Data System (ADS)

    Prisiazhniuk, D.; Krämer-Flecken, A.; Conway, G. D.; Happel, T.; Lebschy, A.; Manz, P.; Nikolaeva, V.; Stroth, U.; the ASDEX Upgrade Team

    2017-02-01

    In fusion machines, turbulent eddies are expected to be aligned with the direction of the magnetic field lines and to propagate in the perpendicular direction. Time delay measurements of density fluctuations can be used to calculate the magnetic field pitch angle α and perpendicular velocity v⊥ profiles. The method is applied to poloidal correlation reflectometry installed at ASDEX Upgrade and TEXTOR, which measures density fluctuations from poloidally and toroidally separated antennas. Validation of the method is achieved by comparing the perpendicular velocity (composed of the E×B drift and the phase velocity of turbulence, v⊥ = v_E×B + v_ph) with Doppler reflectometry measurements and with neoclassical v_E×B calculations. An important condition for the application of the method is the presence of turbulence with a sufficiently long decorrelation time. It is shown that at the shear layer the decorrelation time is reduced, limiting the application of the method. The magnetic field pitch angle measured by this method shows the expected dependence on the magnetic field, plasma current and radial position. The profile of the pitch angle reproduces the expected shape and values. However, comparison with the equilibrium reconstruction code CLISTE suggests an additional inclination of turbulent eddies at the pedestal position (2-3°). This additional angle decreases towards the core and at the edge.
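
    The core of the method is time-delay estimation between antenna pairs: the delay at the cross-correlation peak, combined with the known antenna separation, yields the propagation velocity (and, from poloidal versus toroidal pairs, the pitch angle). A minimal delay estimator, with an assumed sampling interval:

        import numpy as np

        def time_delay(sig_a, sig_b, dt):
            # delay of sig_b relative to sig_a from the cross-correlation peak
            a = sig_a - sig_a.mean()
            b = sig_b - sig_b.mean()
            xc = np.correlate(b, a, mode="full")
            lag = np.argmax(xc) - (len(a) - 1)
            return lag * dt

        rng = np.random.default_rng(1)
        s = rng.normal(size=4096)
        tau = time_delay(s[25:], s[:-25], dt=1e-6)   # second signal lags by 25 samples
        print(tau)                                   # ~ +2.5e-5 s
        # v_perp ~ antenna_separation / tau for a poloidally separated pair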

  11. Resolving the theory of planned behaviour's 'expectancy-value muddle' using dimensional salience.

    PubMed

    Newton, Joshua D; Ewing, Michael T; Burney, Sue; Hay, Margaret

    2012-01-01

    The theory of planned behaviour is one of the most widely used models of decision-making in the health literature. Unfortunately, the primary method for assessing the theory's belief-based expectancy-value models results in statistically uninterpretable findings, giving rise to what has become known as the 'expectancy-value muddle'. Moreover, existing methods for resolving this muddle are associated with various conceptual or practical limitations. This study addresses these issues by identifying and evaluating a parsimonious method for resolving the expectancy-value muddle. Three hundred and nine Australian residents aged 18-24 years rated the expectancy and value of 18 beliefs about posthumous organ donation. Participants also nominated their five most salient beliefs using a dimensional salience approach. Salient beliefs were perceived as being more likely to eventuate than non-salient beliefs, indicating that salient beliefs could be used to signify the expectancy component. The expectancy-value term was therefore represented by summing the value ratings of salient beliefs, an approach that predicted attitude (adjusted R2 = 0.21) and intention (adjusted R2 = 0.21). These findings suggest that the dimensional salience approach is a useful method for overcoming the expectancy-value muddle in applied research settings.
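
    The dimensional-salience composite described here is simple to operationalize: salience stands in for the expectancy component, so the score is just the sum of the value ratings over the nominated beliefs. A toy sketch (the ratings and belief IDs are invented):

        def expectancy_value_score(value_ratings, salient_ids):
            # sum value ratings over the beliefs nominated as salient;
            # salience represents the expectancy component
            return sum(value_ratings[i] for i in salient_ids)

        # invented ratings (-3..+3) for a few of the 18 beliefs, five nominated
        ratings = {1: 3, 2: -1, 5: 2, 7: 0, 9: 1, 12: -2}
        print(expectancy_value_score(ratings, [1, 2, 5, 9, 12]))   # 3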

  12. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    NASA Astrophysics Data System (ADS)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed as a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to be both fast and stable. Tests show that the method works for values of the combination parameter up to CFL, so any value of the parameter between zero and CFL is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special values of the parameter 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give a desirable accuracy. The errors and the CPU times of these schemes and of the Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
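
    To make the second-stage construction concrete, here is a one-dimensional sketch for linear advection u_t + a u_x = 0: a convex combination, with weight w, of the Lax-Friedrichs flux and the Richtmyer (two-step Lax-Wendroff) flux. The model equation and periodic boundaries are simplifying assumptions; the paper applies the construction to the two-phase equilibria instead.

        import numpy as np

        def step(u, a, dx, dt, w):
            # one step of the combined flux scheme for u_t + a u_x = 0, periodic BCs;
            # w = 0 recovers Lax-Friedrichs, larger w blends in the Richtmyer flux
            f = a * u
            up = np.roll(u, -1)                                  # u_{i+1}
            fp = a * up
            lf_flux = 0.5 * (f + fp) - 0.5 * dx / dt * (up - u)  # Lax-Friedrichs
            u_half = 0.5 * (u + up) - 0.5 * dt / dx * (fp - f)   # Richtmyer predictor
            rm_flux = a * u_half                                 # Richtmyer flux
            flux = (1 - w) * lf_flux + w * rm_flux
            return u - dt / dx * (flux - np.roll(flux, 1))

        # demo: advect a pulse once around a periodic grid
        n, a, cfl, w = 200, 1.0, 0.9, 0.5
        dx = 1.0 / n
        dt = cfl * dx / a
        x = np.arange(n) * dx
        u = np.exp(-200 * (x - 0.5) ** 2)
        for _ in range(int(1.0 / (a * dt))):
            u = step(u, a, dx, dt, w)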

  13. The value of seasonal forecasting and crop mix adaptation to climate variability for agriculture under climate change

    NASA Astrophysics Data System (ADS)

    Choi, H. S.; Schneider, U.; Schmid, E.; Held, H.

    2012-04-01

    Changes to climate variability and the frequency of extreme weather events are expected to impose damages on the agricultural sector. Seasonal forecasting and long-range prediction skills have received attention as an option to adapt to climate change because seasonal climate and yield predictions could improve farmers' management decisions. The value of seasonal forecasting skill is assessed with a crop mix adaptation option in Spain, where drought conditions are prevalent. Yield impacts of climate are simulated for six crops (wheat, barley, cotton, potato, corn and rice) with the EPIC (Environmental Policy Integrated Climate) model. Daily weather data over the period 1961 to 1990, generated by the regional climate model REMO, are used as the reference period for climate projection. Climate information and the resulting yield variability information are fed into the stochastic agricultural sector model to calculate the value of climate information in the agricultural market. Expected consumers' market surplus and producers' revenue are compared with and without employing climate forecast information. We find that seasonal forecasting benefits not only consumers but also producers if the latter adopt a strategic crop mix. This mix differs from historical crop mixes by having higher shares of crops which fare relatively well under climate change. The corresponding value of information is highly sensitive to farmers' crop mix choices.

  14. Calculation of effective atomic number and electron density of essential biomolecules for electron, proton, alpha particle and multi-energetic photon interactions

    NASA Astrophysics Data System (ADS)

    Kurudirek, Murat; Onaran, Tayfur

    2015-07-01

    Effective atomic numbers (Zeff) and electron densities (Ne) of some essential biomolecules have been calculated for total electron interaction, total proton interaction and total alpha particle interaction using an interpolation method in the energy region 10 keV-1 GeV. Also, the spectrum-weighted Zeff for multi-energetic photons has been calculated using the Auto-Zeff program. The biomolecules consist of fatty acids, amino acids, carbohydrates and the basic nucleotides of DNA and RNA. Variations of Zeff and Ne with the kinetic energy of ionizing charged particles and with the effective photon energies of heterogeneous sources have been studied for the given materials. Significant variations in Zeff and Ne have been observed throughout the entire energy region for electron, proton and alpha particle interactions. Non-uniform variation has been observed for protons and alpha particles in the low and intermediate energy regions, respectively. The maximum values of Zeff are found at higher energies for total electron interaction, whereas the maximum values are found at relatively low energies for total proton and total alpha particle interactions. For the multi-energetic photon sources, it has to be noted that the highest Zeff values were found in the low energy region, where photoelectric absorption is the predominant interaction process. The lowest values of Zeff are observed in biomolecules such as stearic acid, leucine, mannitol and thymine, which have the highest H content in their groups. The variation in Ne is more or less the same as the variation in Zeff for the given materials, as expected.

  15. A comparison of positive and negative alcohol expectancy and value and their multiplicative composite as predictors of post-treatment abstinence survivorship.

    PubMed

    Jones, B T; McMahon, J

    1996-01-01

    Within social learning theory, positive alcohol expectancies represent motivation to drink and negative expectancies, motivation to restrain. It is also recognized that a subjective evaluation of expectancies ought to moderate their impact, although the evidence for this in social drinkers is problematic. This paper addresses the speculation that the moderating effect will be more evident in clinical populations. This study shows that (i) both expectancy and value reliably, independently and equally predict clients' abstinence survivorship following discharge from a treatment programme (and that this is almost entirely confined to the negative rather than positive terms). When (ii) expectancy evaluations are processed against expectancy through multiplicative composites (i.e. expectancy x value), their predictive power is only equivalent to either expectancy or value on its own. However (iii) when the multiplicative composite is assessed following the statistical guidelines advocated by Evans (1991) (i.e. within the same model as its constituents, expectancy and value) the increase in outcome variance explained by its inclusion is negligible and casts doubt upon its use in alcohol research. This does not appear to apply to value, however, and its possible role in treatment is discussed.

  16. Quantum heating as an alternative of reheating

    NASA Astrophysics Data System (ADS)

    Akhmedov, Emil T.; Bascone, Francesco

    2018-02-01

    To model a realistic situation, we begin by considering massive real scalar ϕ4 theory in a (1+1)-dimensional asymptotically static Minkowski spacetime with an intermediate stage of expansion. To make analytic headway we assume that the scalars have a large mass. At past and future infinities of the background we have flat Minkowski regions which are joined by the inflationary expansion region. We use the tree-level Keldysh propagator in the theory in question to calculate the expectation value of the stress-energy tensor, which is thus due to the excitations of the zero-point fluctuations. Then we show that even for large mass, if the de Sitter expansion stage is long enough, the quantum loop corrections to the expectation value of the stress-energy tensor are not negligible in comparison with the tree-level contribution. This reveals itself via the excitation of the higher-point fluctuations of the exact modes: during the expansion stage a nonzero particle number density for the exact modes is generated. This density is not Planckian and serves as a quench which leads to thermalization in the out Minkowski stage.

  17. Performance of 3D-space-based atoms-in-molecules methods for electronic delocalization aromaticity indices.

    PubMed

    Heyndrickx, Wouter; Salvador, Pedro; Bultinck, Patrick; Solà, Miquel; Matito, Eduard

    2011-02-01

    Several definitions of an atom in a molecule (AIM) in three-dimensional (3D) space, including both fuzzy and disjoint domains, are used to calculate electron sharing indices (ESI) and related electronic aromaticity measures, namely, I(ring) and multicenter indices (MCI), for a wide set of cyclic planar aromatic and nonaromatic molecules of different ring size. The results obtained using the recent iterative Hirshfeld scheme are compared with those derived from the classical Hirshfeld method and from Bader's quantum theory of atoms in molecules. For bonded atoms, all methods yield ESI values in very good agreement, especially for C-C interactions. In the case of nonbonded interactions, there are relevant deviations, particularly between fuzzy and QTAIM schemes. These discrepancies directly translate into significant differences in the values and the trends of the aromaticity indices. In particular, the chemically expected trends are more consistently found when using disjoint domains. Careful examination of the underlying effects reveals the different reasons why the aromaticity indices investigated give the expected results for binary divisions of 3D space.

  18. Validation of a dynamic linked segment model to calculate joint moments in lifting.

    PubMed

    de Looze, M P; Kingma, I; Bussmann, J B; Toussaint, H M

    1992-08-01

    A two-dimensional dynamic linked segment model was constructed and applied to a lifting activity. Reactive forces and moments were calculated by an instantaneous approach involving the application of Newtonian mechanics to individual adjacent rigid segments in succession. The analysis started once at the feet and once at a hands/load segment. The model was validated by comparing predicted external forces and moments at the feet or at a hands/load segment to actual values, which were simultaneously measured (ground reaction force at the feet) or assumed to be zero (external moments at feet and hands/load and external forces, beside gravitation, at hands/load). In addition, results of both procedures, in terms of joint moments, including the moment at the intervertebral disc between the fifth lumbar and first sacral vertebra (L5-S1), were compared. A correlation of r = 0.88 between calculated and measured vertical ground reaction forces was found. The calculated external forces and moments at the hands showed only minor deviations from the expected zero level. The moments at L5-S1, calculated starting from feet compared to starting from hands/load, yielded a coefficient of correlation of r = 0.99. However, moments calculated from hands/load were 3.6% (averaged values) and 10.9% (peak values) higher. This difference is assumed to be due mainly to erroneous estimations of the positions of centres of gravity and joint rotation centres. The estimation of the location of L5-S1 rotation axis can affect the results significantly. Despite the numerous studies estimating the load on the low back during lifting on the basis of linked segment models, only a few attempts to validate these models have been made. This study is concerned with the validity of the presented linked segment model. The results support the model's validity. Effects of several sources of error threatening the validity are discussed.

  19. Calculation of Lung Cancer Volume of Target Based on Thorax Computed Tomography Images using Active Contour Segmentation Method for Treatment Planning System

    NASA Astrophysics Data System (ADS)

    Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur

    2017-06-01

    In this research, the lung cancer target volume was calculated from computed tomography (CT) thorax images, for use in a radiotherapy treatment planning system. The calculation covers the gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and organs at risk (OAR). Each target volume was calculated by adding the target areas on successive slices and then multiplying the result by the slice thickness. Areas were calculated with digital image processing techniques using the active contour segmentation method; this segmentation provides the contours from which the target volume is obtained. The calculated volumes were 577.2 cm3 for GTV, 769.9 cm3 for CTV, 877.8 cm3 for PTV, 618.7 cm3 for OAR 1, 1,162 cm3 for OAR 2 right, and 1,597 cm3 for OAR 2 left. These values indicate that the image processing techniques developed can be implemented to calculate the lung cancer target volume from CT thorax images. This research is expected to help doctors and medical physicists determine and contour the target volume quickly and precisely.
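
    The volume computation itself is the elementary step described above: sum the segmented area on each slice and multiply by the slice thickness. A sketch, assuming binary masks produced by a per-slice active-contour segmentation:

        import numpy as np

        def volume_from_masks(masks, pixel_area_mm2, slice_thickness_mm):
            # target volume = sum of per-slice areas * slice thickness
            areas_mm2 = np.array([m.sum() * pixel_area_mm2 for m in masks])
            return areas_mm2.sum() * slice_thickness_mm / 1000.0   # cm^3

        # demo: three 10 cm^2 slices, 3 mm apart -> 9 cm^3
        masks = [np.ones((100, 100), dtype=bool)] * 3   # toy masks
        print(volume_from_masks(masks, pixel_area_mm2=0.1, slice_thickness_mm=3.0))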

  20. Ex-vessel neutron dosimetry analysis for Westinghouse 4-loop XL pressurized water reactor plant using the RadTrack™ Code System with the 3D parallel discrete ordinates code RAPTOR-M3G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Alpan, F. A.; Fischer, G.A.

    2011-07-01

    Traditional two-dimensional (2D)/one-dimensional (1D) SYNTHESIS methodology has been widely used to calculate fast neutron (>1.0 MeV) fluence exposure to the reactor pressure vessel in the belt-line region. However, it is expected that this methodology cannot provide accurate fast neutron fluence calculations at elevations far above or below the active core region. A three-dimensional (3D) parallel discrete ordinates calculation for ex-vessel neutron dosimetry on a Westinghouse 4-Loop XL Pressurized Water Reactor has been done. It shows good agreement between the calculated results and measured results. Furthermore, the results show very different fast neutron flux values at some of the former plate locations and elevations above and below the active core than those calculated by a 2D/1D SYNTHESIS method. This indicates that for certain irregular reactor internal structures, where the fast neutron flux has a very strong local effect, a 3D transport method is required to calculate accurate fast neutron exposure. (authors)

  1. Children's motivation in elementary physical education: an expectancy-value model of achievement choice.

    PubMed

    Xiang, Ping; McBride, Ron; Guan, Jianmin; Solmon, Melinda

    2003-03-01

    This study examined children's motivation in elementary physical education within an expectancy-value model developed by Eccles and her colleagues. Four hundred fourteen students in second and fourth grades completed questionnaires assessing their expectancy-related beliefs, subjective task values, and intention for future participation in physical education. Results indicated that expectancy-related beliefs and subjective task values were clearly distinguishable from one another across physical education and throwing. The two constructs were related to each other positively. Children's intention for future participation in physical education was positively associated with their subjective task values and/or expectancy-related beliefs. Younger children had higher motivation for learning in physical education than older children. Gender differences emerged and the findings provided empirical evidence supporting the validity of the expectancy-value model in elementary physical education.

  2. Fabrication of nanotweezers and their remote actuation by magnetic fields.

    PubMed

    Iss, Cécile; Ortiz, Guillermo; Truong, Alain; Hou, Yanxia; Livache, Thierry; Calemczuk, Roberto; Sabon, Philippe; Gautier, Eric; Auffret, Stéphane; Buda-Prejbeanu, Liliana D; Strelkov, Nikita; Joisten, Hélène; Dieny, Bernard

    2017-03-27

    A new kind of nanodevice that acts like tweezers through remote actuation by an external magnetic field is designed. Such a device is meant to mechanically grab micrometric objects. The nanotweezers are built using a top-down approach and are made of two parallelepipedic microelements, at least one of them magnetic, bound by a flexible nanohinge. The presence of an external magnetic field induces a torque on the magnetic elements that competes with the elastic torque provided by the nanohinge. A model is established in order to evaluate the values of the balanced torques as a function of the tweezers' opening angles. The results of the calculations are compared with the expected values and validate the overall working principle of the magnetic nanotweezers.

  3. Fermionic Schwinger effect and induced current in de Sitter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashinaka, Takahiro; Fujita, Tomohiro

    We explore the Schwinger effect of spin 1/2 charged particles with a static electric field in 1+3 dimensional de Sitter spacetime. We analytically calculate the vacuum expectation value of the spinor current which is induced by the produced particles in the electric field. The renormalization is performed with the adiabatic subtraction scheme. We find that the current becomes negative, namely it flows in the direction opposite to the electric field, if the electric field is weaker than a certain threshold value depending on the fermion mass, which is also known to happen in the case of scalar charged particles in 1+3 de Sitter spacetime. Contrary to the scalar case, however, the IR hyperconductivity is absent in the spinor case.

  4. Methane, Black Carbon, and Ethane Emissions from Natural Gas Flares in the Bakken Shale, North Dakota.

    PubMed

    Gvakharia, Alexander; Kort, Eric A; Brandt, Adam; Peischl, Jeff; Ryerson, Thomas B; Schwarz, Joshua P; Smith, Mackenzie L; Sweeney, Colm

    2017-05-02

    Incomplete combustion during flaring can lead to production of black carbon (BC) and loss of methane and other pollutants to the atmosphere, impacting climate and air quality. However, few studies have measured flare efficiency in a real-world setting. We use airborne data of plume samples from 37 unique flares in the Bakken region of North Dakota in May 2014 to calculate emission factors for BC, methane, ethane, and combustion efficiency for methane and ethane. We find no clear relationship between emission factors and aircraft-level wind speed or between methane and BC emission factors. Observed median combustion efficiencies for methane and ethane are close to expected values for typical flares according to the US EPA (98%). However, we find that the efficiency distribution is skewed, exhibiting log-normal behavior. This suggests incomplete combustion from flares contributes almost 1/5 of the total field emissions of methane and ethane measured in the Bakken shale, more than double the expected value if 98% efficiency was representative. BC emission factors also have a skewed distribution, but we find lower emission values than previous studies. The direct observation for the first time of a heavy-tail emissions distribution from flares suggests the need to consider skewed distributions when assessing flare impacts globally.
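
    The skewness argument can be reproduced with a toy calculation: if per-flare combustion efficiency is log-normally distributed with a median near 98%, the field-wide mean efficiency is noticeably lower, so total slip exceeds the 2% the median implies. The distribution parameters below are illustrative assumptions, not the paper's fit.

        import numpy as np

        rng = np.random.default_rng(0)
        # fraction of gas escaping unburned per flare: log-normal with median 2%
        # (sigma = 1.0 is an assumed spread, chosen only to illustrate the skew)
        slip = np.clip(rng.lognormal(np.log(0.02), 1.0, 10_000), 0.0, 1.0)
        print(f"median efficiency: {1 - np.median(slip):.3f}")   # ~0.980
        print(f"mean efficiency:   {1 - slip.mean():.3f}")       # ~0.967: heavier total emissions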

  5. Values and depressive symptoms in American Indian youth of the Northern Plains: examining the potential moderating roles of outcome expectancies and perceived community values.

    PubMed

    Mousseau, Alicia C; Scott, Walter D; Estes, David

    2014-03-01

    Very little is known about processes contributing to depressive experiences in American Indian youth. We explored the relationship between value priorities and depressive symptoms among 183 (65% female) American Indian youth in grades 9-12. In addition, two potential moderators of this relationship were examined: value outcome expectations (i.e., whether one expects that values will be realized or not) and perceived community values. We found that American Indian youth who endorsed higher levels of tradition/benevolence values reported fewer depressive symptoms. However, the relationship between endorsing power/materialism values and depressive symptoms depended on the extent to which youth perceived their communities as valuing power/materialism. Finally, value outcome expectancies appeared to relate more strongly to depressive symptoms than did value priorities. Overall, these findings support tribal community efforts to impart tradition/benevolence values to American Indian youth but also emphasize the importance of attending to value outcome expectations and the perceived values of the community in understanding American Indian youth's depressive experiences.

  6. Expected net present value of pure and mixed sexed semen artificial insemination strategies in dairy heifers.

    PubMed

    Olynk, N J; Wolf, C A

    2007-05-01

    Sexed semen has been a long-anticipated tool for dairy farmers to obtain more heifer calves, but challenges exist for integrating sexed semen into commercial dairy farm reproduction programs. The decreased conception rates (CR) experienced with sexed semen make virgin heifers better suited for insemination with sexed semen than lactating dairy cows. This research sought to identify when various sexed semen breeding strategies provided a higher expected net present value (NPV) than conventional artificial insemination (AI) breeding schemes, indicating which breeding scheme is advisable under various scenarios. Budgets were developed to calculate the expected NPV of various AI breeding strategies incorporating conventional (non-sexed) and sexed semen. In the base budgets, heifer and bull calf values were held constant at $500 and $110, respectively. The percentage of heifers expected to be born after breeding with conventional and sexed semen used was 49.2 and 90%, respectively. Breeding costs per AI were held constant at $15.00 per AI for conventional semen and $45.00 per AI for sexed semen of approximately the same genetic value. Conventional semen CR of 58 and 65% were used, and the AI submission rate was set at 100%. Breeding strategies with sexed semen were assessed for breakeven heifer calf values and sexed semen costs to obtain an NPV equal to that achieved with conventional semen. Breakeven heifer calf values for pure sexed semen strategies with a constant 58 and 65% base CR, in which sexed semen achieved 53% of the base CR, are $732.11 and $664.26, respectively. Breakeven sexed semen costs per AI of $17.16 and $22.39, compared with $45.00 per AI, were required to obtain an NPV equal to that obtained with pure conventional semen for base CR of 58 and 65%, respectively. The strategy employing purely sexed semen, with base CR of both 58 and 65%, yielded a lower NPV than purely conventional semen in all but the best-case scenario in which sexed semen provides 90% of the CR of conventional semen. Other potential advantages of sexed semen that were not quantified in the scenarios include biosecurity-related concerns, decreased dystocia due to increased numbers of heifer calves, and implications for internal herd growth.
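
    A stripped-down version of the expected-value bookkeeping behind these budgets, per pregnancy, ignoring the discounting, rebreeding delays and rearing costs that the paper's full NPV model includes:

        def expected_calf_value(p_heifer, heifer_value=500.0, bull_value=110.0):
            # expected revenue from one calf given the heifer probability
            return p_heifer * heifer_value + (1 - p_heifer) * bull_value

        def toy_margin(semen_cost_per_ai, conception_rate, p_heifer):
            # services per conception = 1/CR; this toy omits the time costs
            # (days open, rebreeding limits) of the paper's full model
            return expected_calf_value(p_heifer) - semen_cost_per_ai / conception_rate

        print(toy_margin(15.0, 0.58, 0.492))         # conventional semen
        print(toy_margin(45.0, 0.58 * 0.53, 0.90))   # sexed at 53% of base CR
        # the omitted time costs are what push sexed semen below conventional
        # in the paper's full NPV comparison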

  7. Comprehensive analysis of statistical and model-based overlay lot disposition methods

    NASA Astrophysics Data System (ADS)

    Crow, David A.; Flugaur, Ken; Pellegrini, Joseph C.; Joubert, Etienne L.

    2001-08-01

    Overlay lot disposition algorithms in lithography occupy some of the highest leverage decision points in the microelectronic manufacturing process. In a typical large volume sub-0.18 μm fab the lithography lot disposition decision is made about 500 times per day. Each decision will send a lot of wafers either to the next irreversible process step or back to rework in an attempt to improve unacceptable overlay performance. In the case of rework, the intention is that the reworked lot will represent better yield (and thus more value) than the original lot and that the enhanced lot value will exceed the cost of rework. Given that the estimated cost of reworking a critical-level lot is around $10,000 (based upon the opportunity cost of consuming time on a state-of-the-art DUV scanner), we are faced with the implication that the lithography lot disposition decision process impacts up to $5 million per day in decisions. That means that a 1% error rate in this decision process represents over $18 million per year lost in profit for a representative site. Remarkably, despite this huge leverage, the lithography lot disposition decision algorithm usually receives minimal attention. In many cases, this lack of attention has resulted in the retention of sub-optimal algorithms from earlier process generations and a significant negative impact on the economic output of many high-volume manufacturing sites. An ideal lot-dispositioning algorithm would be one that results in the best economic decision being made every time - lots would only be reworked where the expected value (EV) of the reworked lot minus the expected value of the original lot exceeds the cost of the rework: EV(reworked lot) − EV(original lot) > COST(rework process). Calculating the above expected values in real-time has generally been deemed too complicated and maintenance-intensive to be practical for fab operations, so a simplified rule is typically used.
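
    The dispositioning criterion quoted above is a one-line rule once expected values are available; estimating EV in real time is the hard part. A sketch with invented numbers:

        def should_rework(ev_reworked, ev_original, rework_cost):
            # rework only if the expected value gained exceeds the rework cost
            return ev_reworked - ev_original > rework_cost

        # invented numbers: EV = predicted yield * lot value
        print(should_rework(0.95 * 200_000, 0.88 * 200_000, 10_000))   # True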

  8. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during an operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
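
    A compact sketch of the disclosed pipeline: modeling error, its statistical significance, and a persistence test. The z-score significance and run-length persistence below are assumed stand-ins for the patent's specific formulations.

        import numpy as np

        def detect_anomaly(measured, expected, sigma, z_thresh=3.0, persist_n=5):
            error = np.asarray(measured) - np.asarray(expected)   # modeling error signal
            significant = np.abs(error / sigma) > z_thresh        # error significance
            run = 0
            for flag in significant:                              # persistence check
                run = run + 1 if flag else 0
                if run >= persist_n:
                    return True                                   # structural anomaly
            return False

        # toy strain channel that drifts after sample 50
        m = np.concatenate([np.zeros(50), np.full(10, 4.0)])
        print(detect_anomaly(m, np.zeros(60), sigma=1.0))   # True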

  9. Blood transfusion-acquired hemoglobin C.

    PubMed

    Suarez, A A; Polski, J M; Grossman, B J; Johnston, M F

    1999-07-01

    Unexpected and confusing laboratory test results can occur if a blood sample is inadvertently collected following a blood transfusion. A potential for transfusion-acquired hemoglobinopathy exists because heterozygous individuals show no significant abnormalities during the blood donor screening process. Such spurious results are infrequently reported in the medical literature. We report a case of hemoglobin C passively transferred during a red blood cell transfusion. The proper interpretation in our case was assisted by calculations comparing expected hemoglobin C concentration with the measured value. A review of the literature on transfusion-related preanalytic errors is provided.
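
    The comparison described presumably rests on a simple dilution estimate; a sketch under assumed values (one red cell unit ~200 ml, HbC fraction ~40% in an AC-trait donor), since the case report's calculation details are not given here:

        def expected_variant_fraction(patient_rbc_ml, donor_rbc_ml, donor_fraction):
            # simple dilution of donor red cells into the recipient red cell mass
            return donor_fraction * donor_rbc_ml / (patient_rbc_ml + donor_rbc_ml)

        # assumed values: recipient red cell mass ~1800 ml
        print(expected_variant_fraction(1800.0, 200.0, 0.40))   # 0.04 -> ~4% HbC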

  10. A binomial stochastic kinetic approach to the Michaelis-Menten mechanism

    NASA Astrophysics Data System (ADS)

    Lente, Gábor

    2013-05-01

    This Letter presents a new method that gives an analytical approximation of the exact solution of the stochastic Michaelis-Menten mechanism without computationally demanding matrix operations. The method is based on solving the deterministic rate equations and then using the results as guiding variables of calculating probability values using binomial distributions. This principle can be generalized to a number of different kinetic schemes and is expected to be very useful in the evaluation of measurements focusing on the catalytic activity of one or a few individual enzyme molecules.
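
    The principle is direct to implement: integrate the deterministic rate equation, then use the deterministic survival fraction as the success probability of a binomial distribution over substrate copies. A sketch with the quasi-steady-state rate law and invented parameters:

        import numpy as np
        from scipy.integrate import odeint
        from scipy.stats import binom

        def mm_substrate(t, s0, e0, kcat, km):
            # deterministic Michaelis-Menten substrate decay (guiding variable)
            rate = lambda s, _t: -kcat * e0 * s / (km + s)
            return odeint(rate, s0, t)[:, 0]

        # P(k substrate molecules remain at time t) ~ Binomial(n0, s(t)/s0)
        t = np.linspace(0.0, 50.0, 6)
        s = mm_substrate(t, s0=1.0, e0=0.05, kcat=1.0, km=0.5)   # invented parameters
        n0 = 100                                                 # initial copy number
        print(binom.pmf(50, n0, s / s[0]))                       # P(half remain) over time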

  11. Efficiency at maximum power for an isothermal chemical engine with particle exchange at varying chemical potential

    NASA Astrophysics Data System (ADS)

    Koning, Jesper; Koga, Kenichiro; Indekeu, Joseph O.

    2017-02-01

    We calculate the efficiency at maximum power (EMP) of an isothermal chemical cycle in which particle uptake occurs at a fixed chemical potential but particle release takes place at varying chemical potential. We obtain the EMP as a function of Δμ/kT, where Δμ is the difference between the highest and lowest reservoir chemical potentials and T is the absolute temperature. In the linear response limit, Δμ ≪ kT, the EMP tends to the expected universal value 1/2.

  12. The Relationship between Saccadic Choice and Reaction Times with Manipulations of Target Value

    PubMed Central

    Milstein, David M.; Dorris, Michael C.

    2011-01-01

    Choosing the option with the highest expected value (EV; reward probability × reward magnitude) maximizes the intake of reward under conditions of uncertainty. However, human economic choices indicate that our value calculation has a subjective component whereby probability and reward magnitude are not linearly weighted. Using a similar economic framework, our goal was to characterize how subjective value influences the generation of simple motor actions. Specifically, we hypothesized that attributes of saccadic eye movements could provide insight into how rhesus monkeys, a well-studied animal model in cognitive neuroscience, subjectively value potential visual targets. In the first experiment, monkeys were free to choose by directing a saccade toward one of two simultaneously displayed targets, each of which had an uncertain outcome. In this task, choices were more likely to be allocated toward the higher valued target. In the second experiment, only one of the two possible targets appeared on each trial. In this task, saccadic reaction times (SRTs) decreased toward the higher valued target. Reward magnitude had a much stronger influence on both choices and SRTs than probability, whose effect was observed only when reward magnitude was similar for both targets. Across EV blocks, a strong relationship was observed between choice preferences and SRTs. However, choices tended to maximize at skewed values whereas SRTs varied more continuously. Lastly, SRTs were unchanged when all reward magnitudes were 1×, 1.5×, and 2× their normal amount, indicating that saccade preparation was influenced by the relative value of the targets rather than the absolute value of any single target. We conclude that value is not only an important factor for deliberative decision making in primates, but also for the selection and preparation of simple motor actions, such as saccadic eye movements. More precisely, our results indicate that, under conditions of uncertainty, saccade choices and reaction times are influenced by the relative expected subjective value of potential movements. PMID:22028681

  13. Coordinating bracket torque and incisor inclination : Part 3: Validity of bracket torque values in achieving norm inclinations.

    PubMed

    Zimmer, Bernd; Sino, Hiba

    2018-03-19

    To analyze common values of bracket torque (Andrews, Roth, MBT, Ricketts) for their validity in achieving incisor inclinations that are considered normal by different cephalometric standards. Using the equations developed in part 1, eU1(BOP) = 90° − BT(U1) − TCA(U1) + α1 − α2 and eL1(BOP) = 90° − BT(L1) − TCA(L1) + β1 − β2 (abbreviations: see part 1), and the mean values (± SD) obtained as statistical measures in parts 1 and 2 of the study (α1 and β1 [1.7° ± 0.7°], α2 [3.6° ± 0.3°], β2 [3.2° ± 0.4°], TCA(U1) [24.6° ± 3.6°] and TCA(L1) [22.9° ± 4.3°]), expected (= theoretically anticipated) values were calculated for upper and lower incisors (U1 and L1) and compared to targeted (= cephalometric norm) values. For U1, there was no overlap between the ranges of expected and targeted values, as the lowest targeted value (58.3°; Ricketts) was higher than the highest expected value (56.5°; Andrews) relative to the bisected occlusal plane (BOP). Thus all of these torque systems will aim for flatter inclinations than prescribed by any of the norm values. Depending on target values, the various bracket systems fell short by 1.8-5.5° (Andrews), 6.8-10.5° (Roth), 11.8-15.5° (MBT), or 16.8-20.5° (Ricketts). For L1, there was good agreement of the MBT system with the Ricketts and Björk target values (Δ0.1° and Δ−0.8°, respectively), and both the Roth and Ricketts systems came close to the Bergen target value (both Δ2.3°). Depending on target values, the ranges of deviation for L1 were 6.3-13.2° for Andrews (Class II prescription), 2.3-9.2° for Roth, −3.7 to −3.2° for MBT, and 2.3-9.2° for Ricketts. Common values of upper incisor bracket torque do not have acceptable validity in achieving normal incisor inclinations. A careful selection of lower bracket torque may provide satisfactory matching with some of the targeted norm values.
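
    The part 1 equation can be evaluated directly; with the mean parameters quoted above, an Andrews upper incisor torque of +7° (the prescription value is assumed here, not stated in the abstract) reproduces the 56.5° expected inclination cited in the text.

        def expected_inclination(bracket_torque, tca, a1=1.7, a2=3.6):
            # part 1 equation: e(BOP) = 90 - BT - TCA + a1 - a2 (degrees)
            return 90.0 - bracket_torque - tca + a1 - a2

        # assumed Andrews U1 torque of +7 deg with the mean TCA(U1) of 24.6 deg
        print(expected_inclination(7.0, 24.6))   # 56.5, the highest expected value cited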

  14. The Effective Correlation Theory for Liquid 3He

    NASA Astrophysics Data System (ADS)

    Puoskari, M.; Kallio, A.

    1981-09-01

    We show that when the antisymmetry of liquid 3He is treated with the effective correlation theory of Lado, the optimal HNC solution gives very good agreement with the optimal FHNC theory when in the latter the long wavelength properties due to Fermi cancellations are treated properly. When in addition the elementary diagrams are calculated with the Padé approximation, we obtain ground state energies that agree quite well with the Monte Carlo results of Ceperley, Chester and Kalos and of Levesque, especially at low densities. In addition we calculate the contribution of the three-body factors in the variational wave function. For the expectation value of the ground state energy we obtain altogether −1.62 ± 0.15 K at a saturation density of 0.015 ± 0.001 Å⁻³.

  15. Nucleon, Δ and Ω excited states in N_f = 2+1 lattice QCD

    DOE PAGES

    Bulava, John; Edwards, Robert G.; Engelson, Eric; ...

    2010-07-22

    The energies of the excited states of the Nucleon, Δ and Ω are computed in lattice QCD, using two light quarks and one strange quark on anisotropic lattices. The calculation is performed at three values of the light quark mass, corresponding to pion masses m_π = 392(4), 438(3) and 521(3) MeV. We employ the variational method with a large basis of interpolating operators enabling six energies in each irreducible representation of the lattice to be distinguished clearly. We compare our calculation with the low-lying experimental spectrum, with which we find reasonable agreement in the pattern of states. In addition, the need to include operators that couple to the expected multi-hadron states in the spectrum is clearly identified.

  16. First principles studies of structure stability and lithium intercalation of ZnCo2O4

    NASA Astrophysics Data System (ADS)

    Zhang, Yanning; Liu, Weiwei; Beijing Computational Science Research Center Team

    Among the metal oxides, which are the most widely investigated alternative anodes for use in lithium ion batteries (LIBs), binary and ternary transition metal oxides have received special attention due to their high capacity values. ZnCo2O4 is a promising candidate as anode for LIB, and one can expect a total capacity corresponding to 7.0 - 8.33 mol of recyclable Li per mole of ZnCo2O4. Here we studied the structural stability, electronic properties, lithium intercalation and diffusion barrier of ZnCo2O4 through density functional calculations. The calculated structural and energetic parameters are comparable with experiments. Our theoretical studies provide insights in understanding the mechanism of lithium ion displacement reactions in this ternary metal oxide.

  17. Calculating the momentum enhancement factor for asteroid deflection studies

    DOE PAGES

    Heberling, Tamra; Gisler, Galen; Plesko, Catherine; ...

    2017-10-17

    The possibility of kinetic-impact deflection of threatening near-Earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving NASA's DART (Double Asteroid Redirection Test). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos at a speed of 5 to 7 km/s is expected to alter the mutual orbit by an observable amount. Furthermore, the velocity transferred to the secondary depends largely on the momentum enhancement factor, typically referred to as beta. Here, we use two hydrocodes developed at Los Alamos, RAGE and PAGOSA, to calculate an approximate value for beta in laboratory-scale benchmark experiments. Convergence studies comparing the two codes show the importance of mesh size in estimating this crucial parameter.
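
    For readers unfamiliar with beta: the momentum delivered to the target is the impactor momentum multiplied by beta, with the excess carried off by impact ejecta. A toy calculation under assumed masses and an assumed beta (none of these numbers are DART mission values):

    ```python
    # Momentum enhancement factor beta for a kinetic impactor (head-on case):
    #   delta_p_target = beta * m_imp * U,  beta = 1 + p_ejecta / p_impactor,
    # so beta > 1 whenever ejecta carry net momentum back toward the
    # impactor's direction of origin.  All numbers below are illustrative.
    m_imp = 500.0        # impactor mass (kg), assumed
    U = 6.0e3            # impact speed (m/s), mid-range of the 5-7 km/s quoted
    beta = 2.0           # assumed enhancement factor
    M_target = 5.0e9     # target (secondary) mass (kg), assumed

    delta_v = beta * m_imp * U / M_target   # change in target speed (m/s)
    print(f"delta-v = {delta_v * 1000:.2f} mm/s")   # ~1.20 mm/s for these inputs
    ```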

  19. Evaluating the Value of Information in the Presence of High Uncertainty

    DTIC Science & Technology

    2013-06-01

    in this hierarchy is subsumed in the Knowledge and Information layers. If information with high expected value is identified, it then passes up...be, the higher is its value. Based on the idea of expected utility of asking a question [36], Nelson [31] discusses different approaches for...18] formalizes the expected value of a sample of information using the concept of pre-posterior analysis as the expected increase in utility by

  20. Exact finite volume expectation values of $\overline{\Psi}\Psi$ in the massive Thirring model from light-cone lattice correlators

    NASA Astrophysics Data System (ADS)

    Hegedűs, Árpád

    2018-03-01

    In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator $\overline{\Psi}\Psi$ between pure fermion states in the massive Thirring model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators located at neighboring sites of the lattice. The operator $\overline{\Psi}\Psi$ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can also be computed from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.

  1. How often should we expect to be wrong? Statistical power, P values, and the expected prevalence of false discoveries.

    PubMed

    Marino, Michael J

    2018-05-01

    There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
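
    The expected prevalence of false discoveries in this idealized setting follows from Bayes' rule applied to significance testing. A minimal sketch, assuming a prior probability pi that a tested effect is real (the pi value below is an assumption for illustration, not the paper's simulation):

    ```python
    # Of all "significant" results, the expected fraction that are false is
    #   FDR = alpha*(1 - pi) / (alpha*(1 - pi) + power*pi)
    # where pi is the prior probability that a tested effect is real.
    def expected_fdr(alpha, power, pi):
        false_pos = alpha * (1.0 - pi)   # rate of false positives
        true_pos = power * pi            # rate of true positives
        return false_pos / (false_pos + true_pos)

    for power in (0.2, 0.5, 0.8):
        print(f"power={power:.1f}: FDR={expected_fdr(0.05, power, pi=0.25):.1%}")
    # -> 42.9%, 23.1%, 15.8%: low-powered studies are wrong far more often,
    # which is the motivation for requiring 1 - beta = 0.80.
    ```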

  2. A framework for estimating health state utility values within a discrete choice experiment: modeling risky choices.

    PubMed

    Robinson, Angela; Spencer, Anne; Moffatt, Peter

    2015-04-01

    There has been recent interest in using the discrete choice experiment (DCE) method to derive health state utilities for use in quality-adjusted life year (QALY) calculations, but challenges remain. We set out to develop a risk-based DCE approach to derive utility values for health states that allowed 1) utility values to be anchored directly to normal health and death and 2) worse than dead health states to be assessed in the same manner as better than dead states. Furthermore, we set out to estimate alternative models of risky choice within a DCE model. A survey was designed that incorporated a risk-based DCE and a "modified" standard gamble (SG). Health state utility values were elicited for 3 EQ-5D health states assuming "standard" expected utility (EU) preferences. The DCE model was then generalized to allow for rank-dependent expected utility (RDU) preferences, thereby allowing for probability weighting. A convenience sample of 60 students was recruited and data collected in small groups. Under the assumption of "standard" EU preferences, the utility values derived within the DCE corresponded fairly closely to the mean results from the modified SG. Under the assumption of RDU preferences, the utility values estimated are somewhat lower than under the assumption of standard EU, suggesting that the latter may be biased upward. Applying the correct model of risky choice is important whether a modified SG or a risk-based DCE is deployed. It is, however, possible to estimate a probability weighting function within a DCE and estimate "unbiased" utility values directly, which is not possible within a modified SG. We conclude by setting out the relative strengths and weaknesses of the 2 approaches in this context. © The Author(s) 2014.
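
    Rank-dependent expected utility differs from standard EU by transforming outcome probabilities with a weighting function before they are applied. A sketch using the common one-parameter Tversky-Kahneman weighting function, which is an assumption here; the paper estimates its own weighting function from the DCE data:

    ```python
    # RDU for a binary gamble: utility u_best with probability p, u_worst
    # otherwise.  The better-ranked outcome is weighted by w(p), not p.
    def w(p, gamma=0.61):
        # Tversky-Kahneman (1992) weighting function; gamma=0.61 is their
        # median estimate for gains, used here purely as an assumption.
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    def rdu(u_best, u_worst, p, gamma=0.61):
        return w(p, gamma) * u_best + (1 - w(p, gamma)) * u_worst

    # SG-style gamble: full health (u=1) with p, death (u=0) otherwise.
    # Under standard EU (gamma=1) the indifference probability IS the
    # utility; with probability weighting the inferred utility is lower.
    p_indiff = 0.85
    print("EU value: ", rdu(1.0, 0.0, p_indiff, gamma=1.0))   # 0.85
    print("RDU value:", rdu(1.0, 0.0, p_indiff))              # ~0.654
    ```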

  3. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  4. Predicting Success in an Online Course Using Expectancies, Values, and Typical Mode of Instruction

    ERIC Educational Resources Information Center

    Zimmerman, Whitney Alicia

    2017-01-01

    Expectancies of success and values were used to predict success in an online undergraduate-level introductory statistics course. Students who identified as primarily face-to-face learners were compared to students who identified as primarily online learners. Expectancy value theory served as a model. Expectancies of success were operationalized as…

  5. SU-E-T-261: Plan Quality Assurance of VMAT Using Fluence Images Reconstituted From Log-Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuta, Y; Shimizu, E; Matsunaga, K

    2014-06-01

    Purpose: A successful VMAT plan delivery includes precise modulation of dose rate, gantry rotation, and multi-leaf collimator (MLC) shapes. One of the main problems in plan quality assurance is that dosimetric errors associated with leaf-positional errors are difficult to analyze, because they vary with the MU delivered and the leaf number. In this study, we calculated an integrated fluence error image (IFEI) from log-files and evaluated plan quality in the areas scanned by all and by individual MLC leaves. Methods: The log-file reported the expected and actual positions for the inner 20 MLC leaves and the dose fraction every 0.25 seconds during prostate VMAT on an Elekta Synergy. These data were imported into in-house software developed to calculate expected and actual fluence images from the difference of opposing leaf trajectories and the dose fraction at each time step. The IFEI was obtained by summing the absolute values of the differences between corresponding expected and actual fluence images; a sketch of this calculation follows below. Results: In the area scanned by all MLC leaves, the average and root mean square (rms) of the IFEI were 2.5 and 3.6 MU, the areas with errors below 10, 5 and 3 MU were 98.5, 86.7 and 68.1%, and 95% of the area was covered with an error of less than 7.1 MU. In the areas scanned by individual MLC leaves, the average and rms values were 2.1-3.0 and 3.1-4.0 MU, the areas with errors below 10, 5 and 3 MU were 97.6-99.5, 81.7-89.5 and 51.2-72.8%, and 95% of the area was covered with an error of less than 6.6-8.2 MU. Conclusion: Analysis of the IFEI reconstituted from log-files provided detailed information about the delivery in the areas scanned by all and by individual MLC leaves.
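
    A compact version of the IFEI computation described in the Methods, with random stand-in fluence images in place of the log-file-derived ones:

    ```python
    import numpy as np

    # IFEI: accumulate |expected - actual| fluence over all log-file
    # samples.  The per-sample images here are random stand-ins; in
    # practice each is built from opposing MLC leaf trajectories and the
    # delivered dose fraction at that 0.25 s time step.
    rng = np.random.default_rng(0)
    n_samples, shape = 400, (20, 80)   # 20 leaf pairs, 80 bins along travel

    ifei = np.zeros(shape)
    for _ in range(n_samples):
        expected = rng.uniform(0, 1, shape)              # stand-in fluence (MU)
        actual = expected + rng.normal(0, 0.02, shape)   # small leaf-position error
        ifei += np.abs(expected - actual)

    print(f"mean={ifei.mean():.2f} MU, rms={np.sqrt((ifei**2).mean()):.2f} MU")
    print(f"area with error < 5 MU: {(ifei < 5).mean():.1%}")
    ```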

  6. Separate Populations of Neurons in Ventral Striatum Encode Value and Motivation

    PubMed Central

    Gentry, Ronny N.; Goldstein, Brandon L.; Hearn, Taylor N.; Barnett, Brian R.; Kashtelyan, Vadim; Roesch, Matthew R.

    2013-01-01

    Neurons in the ventral striatum (VS) fire to cues that predict differently valued rewards. It is unclear whether this activity represents the value associated with the expected reward or the level of motivation induced by reward anticipation. To distinguish between the two, we trained rats on a task in which we varied value independently from motivation by manipulating the size of the reward expected on correct trials and the threat of punishment expected upon errors. We found that separate populations of neurons in VS encode expected value and motivation. PMID:23724077

  7. Assessing the Expected Value of Research Studies in Reducing Uncertainty and Improving Implementation Dynamics.

    PubMed

    Grimm, Sabine E; Dixon, Simon; Stevens, John W

    2017-07-01

    With low implementation of cost-effective health technologies being a problem in many health systems, it is worth considering the potential effects of research on implementation at the time of health technology assessment. Meaningful and realistic implementation estimates must be of dynamic nature. To extend existing methods for assessing the value of research studies in terms of both reduction of uncertainty and improvement in implementation by considering diffusion based on expert beliefs with and without further research conditional on the strength of evidence. We use expected value of sample information and expected value of specific implementation measure concepts accounting for the effects of specific research studies on implementation and the reduction of uncertainty. Diffusion theory and elicitation of expert beliefs about the shape of diffusion curves inform implementation dynamics. We illustrate use of the resulting dynamic expected value of research in a preterm birth screening technology and results are compared with those from a static analysis. Allowing for diffusion based on expert beliefs had a significant impact on the expected value of research in the case study, suggesting that mistakes are made where static implementation levels are assumed. Incorporating the effects of research on implementation resulted in an increase in the expected value of research compared to the expected value of sample information alone. Assessing the expected value of research in reducing uncertainty and improving implementation dynamics has the potential to complement currently used analyses in health technology assessments, especially in recommendations for further research. The combination of expected value of research, diffusion theory, and elicitation described in this article is an important addition to the existing methods of health technology assessment.

  8. Predicting Problem Behaviors with Multiple Expectancies: Expanding Expectancy-Value Theory

    ERIC Educational Resources Information Center

    Borders, Ashley; Earleywine, Mitchell; Huey, Stanley J.

    2004-01-01

    Expectancy-value theory emphasizes the importance of outcome expectancies for behavioral decisions, but most tests of the theory focus on a single behavior and a single expectancy. However, the matching law suggests that individuals consider expected outcomes for both the target behavior and alternative behaviors when making decisions. In this…

  9. A novel patient-centered "intention-to-treat" metric of U.S. lung transplant center performance.

    PubMed

    Maldonado, Dawn A; RoyChoudhury, Arindam; Lederer, David J

    2018-01-01

    Despite the importance of pretransplantation outcomes, 1-year posttransplantation survival is typically considered the primary metric of lung transplant center performance in the United States. We designed a novel lung transplant center performance metric that incorporates both pre- and posttransplantation survival time. We performed an ecologic study of 12 187 lung transplant candidates listed at 56 U.S. lung transplant centers between 2006 and 2012. We calculated an "intention-to-treat" survival (ITTS) metric as the percentage of waiting list candidates surviving at least 1 year after transplantation. The median center-level 1-year posttransplantation survival rate was 84.1%, and the median center-level ITTS was 66.9% (mean absolute difference 19.6%, 95% limits of agreement 4.3 to 35.1%). All but 10 centers had ITTS values that were significantly lower than 1-year posttransplantation survival rates. Observed ITTS was significantly lower than expected ITTS for 7 centers. These data show that one third of lung transplant candidates do not survive 1 year after transplantation, and that 12% of centers have lower than expected ITTS. An "intention-to-treat" survival metric may provide a more realistic expectation of patient outcomes at transplant centers and may be of value to transplant centers and policymakers. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.

  10. A benefit-cost analysis of retrofitting diesel vehicles with particulate filters in the Mexico City metropolitan area.

    PubMed

    Stevens, Gretchen; Wilson, Andrew; Hammitt, James K

    2005-08-01

    In the Mexico City metropolitan area, poor air quality is a public health concern. Diesel vehicles contribute significantly to the emissions that are most harmful to health. Harmful diesel emissions can be reduced by retrofitting vehicles with one of several technologies, including diesel particulate filters. We quantified the social costs and benefits, including health benefits, of retrofitting diesel vehicles in Mexico City with catalyzed diesel particulate filters, actively regenerating diesel particulate filters, or diesel oxidation catalysts, either immediately or in 2010, when capital costs are expected to be lower. Retrofit with either type of diesel particulate filter or an oxidation catalyst is expected to provide net benefits to society beginning immediately and in 2010. At current prices, retrofit with an oxidation catalyst provides greatest net benefits. However, as capital costs decrease, retrofit with diesel particulate filters is expected to provide greater net benefits. In both scenarios, retrofit of older, dirtier vehicles that circulate only within the city provides greatest benefits, and retrofit with oxidation catalysts provides greater health benefits per dollar spent than retrofit with particulate filters. Uncertainty about the magnitude of net benefits of a retrofit program is significant. Results are most sensitive to values used to calculate benefits, such as the concentration-response coefficient, intake fraction (a measure of exposure), and the monetary value of health benefits.

  11. Electron imaging with Medipix2 hybrid pixel detector.

    PubMed

    McMullan, G; Cattermole, D M; Chen, S; Henderson, R; Llopart, X; Summerfield, C; Tlustos, L; Faruqi, A R

    2007-01-01

    The electron imaging performance of Medipix2 is described. Medipix2 is a hybrid pixel detector composed of two layers. It has a sensor layer and a layer of readout electronics, in which each 55 microm x 55 microm pixel has upper and lower energy discrimination and MHz rate counting. The sensor layer consists of a 300 microm slab of pixellated monolithic silicon and this is bonded to the readout chip. Experimental measurement of the detective quantum efficiency, DQE(0) at 120 keV shows that it can reach approximately 85% independent of electron exposure, since the detector has zero noise, and the DQE(Nyquist) can reach approximately 35% of that expected for a perfect detector (4/pi(2)). Experimental measurement of the modulation transfer function (MTF) at Nyquist resolution for 120 keV electrons using a 60 keV lower energy threshold, yields a value that is 50% of that expected for a perfect detector (2/pi). Finally, Monte Carlo simulations of electron tracks and energy deposited in adjacent pixels have been performed and used to calculate expected values for the MTF and DQE as a function of the threshold energy. The good agreement between theory and experiment allows suggestions for further improvements to be made with confidence. The present detector is already very useful for experiments that require a high DQE at very low doses.
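
    The perfect-detector reference values quoted above make the comparison concrete; a short worked calculation restating the abstract's fractions:

    ```python
    import math

    # Perfect-detector references for a square-pixel counting detector at
    # the Nyquist frequency: MTF_perfect = 2/pi and DQE_perfect = 4/pi^2.
    mtf_perfect = 2 / math.pi       # ~0.637
    dqe_perfect = 4 / math.pi**2    # ~0.405

    mtf_measured = 0.50 * mtf_perfect      # "50% of a perfect detector"
    dqe_nyq_measured = 0.35 * dqe_perfect  # "~35% of that expected"
    print(f"MTF(Nyquist) ~ {mtf_measured:.2f}")      # ~0.32
    print(f"DQE(Nyquist) ~ {dqe_nyq_measured:.2f}")  # ~0.14
    ```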

  12. Comparison of simple additive weighting (SAW) and composite performance index (CPI) methods in employee remuneration determination

    NASA Astrophysics Data System (ADS)

    Karlitasari, L.; Suhartini, D.; Benny

    2017-01-01

    The process of determining employee remuneration at PT Sepatu Mas Idaman currently relies on a Microsoft Excel-based spreadsheet containing the criteria values that must be calculated for every employee. This introduces doubt during the assessment process and makes the process take much longer. Employee remuneration is determined by an assessment team based on predetermined criteria: ability to work, human relations, job responsibility, discipline, creativity, work, achievement of targets, and absence. To make the determination of employee remuneration more efficient and effective, the Simple Additive Weighting (SAW) method is used. The SAW method supports decision making by selecting, as the best alternative, the option whose weighted calculation yields the greatest value. The SAW results were also compared with the composite performance index (CPI) method, another decision-making calculation based on a performance index. The SAW method was faster than the CPI method by 89-93%. This application is therefore expected to serve as evaluation material for training and development needs, so that employee performance can be optimized.
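
    A minimal sketch of the SAW calculation described above, with hypothetical employees, criteria scores, and weights:

    ```python
    import numpy as np

    # Simple Additive Weighting (SAW): normalize each benefit criterion by
    # its column maximum, then rank alternatives by the weighted sum.
    # Employees, scores, and weights below are hypothetical.
    scores = np.array([        # criteria: ability, discipline, targets, absence
        [80, 90, 70, 95],      # employee A
        [85, 70, 90, 80],      # employee B
        [75, 85, 80, 90],      # employee C
    ], dtype=float)
    weights = np.array([0.35, 0.25, 0.25, 0.15])   # must sum to 1

    normalized = scores / scores.max(axis=0)   # benefit-type normalization
    totals = normalized @ weights
    best = totals.argmax()
    print(totals.round(3), "-> best alternative:", "ABC"[best])
    ```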

  13. CFD Analysis to Calculate the Optimal Air Velocity in Drying Green Tea Process Using Fluidized Bed Dryer

    NASA Astrophysics Data System (ADS)

    Yohana, Eflita; Nugraha, Afif Prasetya; Diana, Ade Eva; Mahawan, Ilham; Nugroho, Sri

    2018-02-01

    Tea processing is generally divided into three types: black tea, green tea, and oolong tea. Green tea is processed by heating and drying the leaves. Green tea factories in Indonesia generally dry the leaves by panning; fluidization is recommended instead to speed up the drying process while maintaining the quality of the tea. Bubbling fluidization, in which bubbles form within the fluidized bed, is expected to occur in this research. To improve the effectiveness of the drying process in a fluidized bed dryer, a CFD simulation method is used to verify that umf < u < ut, i.e. that the average velocity lies between the minimum and maximum fluidization velocities calculated from the experimental data. The minimum and maximum fluidization velocities are 0.96 m/s and 8.2 m/s, respectively. The simulation gives an average velocity of 1.81 m/s in the upper part of the bed. From these results, it can be concluded that the calculation and simulation data are consistent with bubbling fluidization in the fluidized bed dryer.

  14. Nonperturbative comparison of clover and highly improved staggered quarks in lattice QCD and the properties of the Φ meson

    DOE PAGES

    Chakraborty, Bipasha; Davies, C. T. H.; Donald, G. C.; ...

    2017-10-02

    Here, we compare correlators for pseudoscalar and vector mesons made from valence strange quarks using the clover quark and highly improved staggered quark (HISQ) formalisms in full lattice QCD. We use fully nonperturbative methods to normalise vector and axial vector current operators made from HISQ quarks, clover quarks and from combining HISQ and clover fields. This allows us to test expectations for the renormalisation factors based on perturbative QCD, with implications for the error budget of lattice QCD calculations of the matrix elements of clover-staggered $b$-light weak currents, as well as further HISQ calculations of the hadronic vacuum polarisation. We also compare the approach to the (same) continuum limit in clover and HISQ formalisms for the mass and decay constant of the $\phi$ meson. Our final results for these parameters, using single-meson correlators and neglecting quark-line disconnected diagrams, are: $m_{\phi} = 1.023(5)$ GeV and $f_{\phi} = 0.238(3)$ GeV, in good agreement with experiment. These results come from calculations in the HISQ formalism using gluon fields that include the effect of $u$, $d$, $s$ and $c$ quarks in the sea with three lattice spacing values and $m_{u/d}$ values going down to the physical point.

  15. Crystal field parameters in UCl4: Experiment versus theory

    NASA Astrophysics Data System (ADS)

    Zolnierek, Z.; Gajek, Z.; Malek, Ch. Khan

    1984-08-01

    The crystal field effect on the U4+ ion with the 3H4 ground term in the tetragonal ligand field of UCl4 has been studied in detail. Crystal field parameters determined experimentally from optical spectroscopy and magnetic susceptibility are in good agreement with CFP sets derived from the modified point charge model and the ab initio method. Theoretical calculations lead to overestimating the A44⟨r4⟩ and lowering the A02⟨r2⟩ values in comparison to those found in the experiments. The discrepancies are, however, within the accuracy of the calculations. A large reduction of the expectation values of the magnetic moment operator for the eigenvectors of the lowest CF levels (17.8%), determined from magnetic susceptibility, cannot be attributed to the overlap and covalency effects only. Detailed calculations have shown that the latter effects provide about a 4.6% reduction of the respective matrix elements, and the applied J-J mixing procedure increases this factor up to 6.5%. Since a reduction factor similar to that in UCl4 (≈15%) has already been observed in a number of different uranium compounds, it seems likely that this feature is involved in the intrinsic properties of the U4+ ion. We endeavor to explain this effect in terms of configuration interaction mechanisms.

  16. Magnetotransport of single crystalline YSb

    DOE PAGES

    Ghimire, N. J.; Botana, A. S.; Phelan, D.; ...

    2016-05-10

    Here, we report magnetic field dependent transport measurements on a single crystal of cubic YSb together with first principles calculations of its electronic structure. The transverse magnetoresistance does not saturate up to 9 T and attains a value of 75 000% at 1.8 K. The Hall coefficient is electron-like at high temperature, changes sign to hole-like between 110 and 50 K, and again becomes electron-like below 50 K. First principles calculations show that YSb is a compensated semimetal with a qualitatively similar electronic structure to that of isostructural LaSb and LaBi, but with larger Fermi surface volume. The measured electron carrier density and Hall mobility calculated at 1.8 K, based on a single band approximation, are 6.5 × 10²⁰ cm⁻³ and 6.2 × 10⁴ cm² V⁻¹ s⁻¹, respectively. These values are comparable with those reported for LaBi and LaSb. Like LaBi and LaSb, YSb undergoes a magnetic field-induced metal-insulator-like transition below a characteristic temperature T_m, with resistivity saturation below 13 K. Thickness dependent electrical resistance measurements show a deviation of the resistance behavior from that expected for a normal metal; however, they do not unambiguously establish surface conduction as the mechanism for the resistivity plateau.

  17. Dosimetric characteristics of electron beams produced by a mobile accelerator for IORT.

    PubMed

    Pimpinella, M; Mihailescu, D; Guerra, A S; Laitano, R F

    2007-10-21

    Energy and angular distributions of electron beams with different energies were simulated by Monte Carlo calculations. These beams were generated by the NOVAC7 system (Hitesys, Italy), a mobile electron accelerator specifically dedicated to intra-operative radiation therapy (IORT). The electron beam simulations were verified by comparing the measured dose distributions with the corresponding calculated distributions. As expected, a considerable difference was observed in the energy and angular distributions between the IORT beams studied in the present work and the electron beams produced by conventional accelerators for non-IORT applications. It was also found that significant differences exist between the IORT beams used in this work and other IORT beams with different collimation systems. For example, the contribution from the scattered electrons to the total dose was found to be up to 15% higher in the NOVAC7 beams. The water-to-air stopping power ratios of the IORT beams used in this work were calculated on the basis of the beam energy distributions obtained by the Monte Carlo simulations. These calculated stopping power ratios, s(w,air), were compared with the corresponding s(w,air) values recommended by the TRS-381 and TRS-398 IAEA dosimetry protocols in order to estimate the deviations between a dosimetry based on generic parameters and a dosimetry based on parameters specifically obtained for the actual IORT beams. The deviations in the s(w,air) values were found to be as large as up to about 1%. Therefore, we recommend that a preliminary analysis should always be made when dealing with IORT beams in order to assess to what extent the possible differences in the s(w,air) values have to be accounted for or may be neglected on the basis of the specific accuracy needed in clinical dosimetry.

  18. Specific net present value: an improved method for assessing modularisation costs in water services with growing demand.

    PubMed

    Maurer, M

    2009-05-01

    A specific net present value (SNPV) approach is introduced as a criterion in economic engineering decisions. The SNPV expresses average costs, including the growth rate and plant utilisation over the planning horizon, factors that are excluded from a standard net present value approach. The use of SNPV favours alternatives that are cheaper per service unit and are therefore closer to the costs that a user has to cover. It also shows that demand growth has a similar influence on average costs as an economy of scale. In a high growth scenario, solutions providing less idle capacity can have higher present value costs and still be economically favourable. The SNPV approach is applied in two examples to calculate acceptable additional costs for modularisation and comparable costs for on-site treatment (OST) as an extreme form of modularisation. The calculations show that: (i) the SNPV approach is suitable for quantifying the comparable costs of an OST system in a different scenario; (ii) small systems with projected high demand growth rates and high real interest rates are the most probable entry market for OST water treatment systems; (iii) operating expenses are currently the main economic weakness of membrane-based wastewater OST systems; and (iv) when high growth in demand is expected, up to 100% can be additionally invested in modularisation and staging the expansion of a treatment plant.
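
    A sketch of the SNPV idea as described: discount both costs and service units delivered, then take their ratio, so that growing utilization is credited. All numbers are illustrative, not taken from the paper:

    ```python
    # Specific net present value (SNPV): discounted lifetime cost divided
    # by discounted service units, giving an average cost per unit that
    # accounts for demand growth and plant utilisation.
    def snpv(capex, opex_per_year, units_year0, growth, rate, years):
        pv_cost = capex + sum(opex_per_year / (1 + rate) ** t
                              for t in range(1, years + 1))
        pv_units = sum(units_year0 * (1 + growth) ** t / (1 + rate) ** t
                       for t in range(1, years + 1))
        return pv_cost / pv_units   # average cost per service unit

    # Higher demand growth lowers the average cost per unit of the same plant:
    print(f"{snpv(1e6, 5e4, 1e4, growth=0.00, rate=0.05, years=20):.2f}")  # ~13.02
    print(f"{snpv(1e6, 5e4, 1e4, growth=0.05, rate=0.05, years=20):.2f}")  # ~8.12
    ```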

  19. Value of information in natural resource management: technical developments and application to pink-footed geese

    USGS Publications Warehouse

    Williams, Byron K.; Johnson, Fred A.

    2015-01-01

    The “value of information” (VOI) is a generic term for the increase in value resulting from better information to guide management, or alternatively, the value foregone under uncertainty about the impacts of management (Yokota and Thompson, Medical Decision Making 2004;24: 287). The value of information can be characterized in terms of several metrics, including the expected value of perfect information and the expected value of partial information. We extend the technical framework for the value of information by further developing the relationship between value metrics for partial and perfect information and describing patterns of their performance. We use two different expressions for the expected value of partial information to highlight its relationship to the expected value of perfect information. We also develop the expected value of partial information for hierarchical uncertainties. We highlight patterns in the value of information for the Svalbard population of the pink-footed goose (Anser brachyrhynchus), a population that is subject to uncertainty in both reproduction and survival functions. The framework for valuing information is seen as having widespread potential in resource decision making, and serves as a motivation for resource monitoring, assessment, and collaboration.
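
    The expected value of perfect information can be estimated by Monte Carlo as the gap between deciding after versus before the uncertainty is resolved. A sketch with two hypothetical management actions and an assumed distribution for the uncertain parameter:

    ```python
    import numpy as np

    # EVPI = E_theta[ max_d V(d, theta) ] - max_d E_theta[ V(d, theta) ].
    # Two hypothetical actions whose value depends on an uncertain
    # parameter theta (e.g. a survival rate); all numbers are assumed.
    rng = np.random.default_rng(1)
    theta = rng.normal(0.5, 0.1, size=100_000)   # uncertain state of nature
    values = np.stack([
        10 * theta,      # action 0: pays off when theta is high
        8 - 6 * theta,   # action 1: pays off when theta is low
    ])

    ev_current_info = values.mean(axis=1).max()  # commit to one action now
    ev_perfect_info = values.max(axis=0).mean()  # choose per realized theta
    print(f"EVPI = {ev_perfect_info - ev_current_info:.3f}")   # ~0.64 here
    ```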

  20. Heat flux measurements on ceramics with thin film thermocouples

    NASA Technical Reports Server (NTRS)

    Holanda, Raymond; Anderson, Robert C.; Liebert, Curt H.

    1993-01-01

    Two methods were devised to measure heat flux through a thick ceramic using thin film thermocouples. The thermocouples were deposited on the front and back face of a flat ceramic substrate. The heat flux was applied to the front surface of the ceramic using an arc lamp Heat Flux Calibration Facility. Silicon nitride and mullite ceramics were used; two thicknesses of each material were tested, with ceramic temperatures up to 1500 °C. Heat flux ranged from 0.05-2.5 MW/m². One method for heat flux determination used an approximation technique to calculate instantaneous values of heat flux vs time; the other method used an extrapolation technique to determine the steady state heat flux from a record of transient data. Neither method measures heat flux in real time, but the techniques may easily be adapted for quasi-real time measurement. In cases where a significant portion of the transient heat flux data is available, the calculated transient heat flux is seen to approach the extrapolated steady state heat flux value as expected.

  2. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE PAGES

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    2014-10-10

    Here, we calculate the bipartite Rényi entanglement entropy of an L x L x 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L x L planes, and (ii) one that divides the lattice into two equal-size (L x L/2 x 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L^2, while the latter scales with L with possible logarithmic corrections. The onset of the antiferromagnet to singlet transition shows up by a saturation of the former to a maximal value and the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  3. Luminescence isochron dating: a new approach using different grain sizes.

    PubMed

    Zhao, H; Li, S H

    2002-01-01

    A new approach to isochron dating is described using different sizes of quartz and K-feldspar grains. The technique can be applied to sites with time-dependent external dose rates. It is assumed that any underestimation of the equivalent dose (De) using K-feldspar is by a factor F, which is independent of grain size (90-350 microm) for a given sample. Calibration of the beta source for different grain sizes is discussed, and then the sample ages are calculated using the differences between quartz and K-feldspar De from grains of similar size. Two aeolian sediment samples from north-eastern China are used to illustrate the application of the new method. It is confirmed that the observed values of De derived using K-feldspar underestimate the expected doses (based on the quartz De) but, nevertheless, these K-feldspar De values correlate linearly with the calculated internal dose rate contribution, supporting the assumption that the underestimation factor F is independent of grain size. The isochron ages are also compared with the results obtained using quartz De and the measured external dose rates.

  4. Sorption influenced transport of ionizable pharmaceuticals onto a natural sandy aquifer sediment at different pH.

    PubMed

    Schaffer, Mario; Boxberger, Norman; Börnick, Hilmar; Licha, Tobias; Worch, Eckhard

    2012-04-01

    The pH-dependent transport of eight selected ionizable pharmaceuticals was investigated using saturated column experiments. Seventy-eight different breakthrough curves on a natural sandy aquifer material were produced and compared for three different pH levels at otherwise constant conditions. The experimentally obtained K(OC) data were compared with calculated K(OC) values derived from two different logK(OW)-logK(OC) correlation approaches. A significant pH-dependence of sorption was observed for all compounds with pK(a) in the considered pH range. Strong retardation was measured for several compounds despite their hydrophilic character. Besides an overall underestimation of K(OC), the comparison between calculated and measured values only yields meaningful results for the acidic and neutral compounds. Basic compounds retarded much more strongly than expected, particularly at low pH when their cationic species dominated. This is caused by additional ionic interactions, such as cation exchange processes, which are insufficiently considered in the applied K(OC) correlations. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Tables for simplifying calculations of activities produced by thermal neutrons

    USGS Publications Warehouse

    Senftle, F.E.; Champion, W.R.

    1954-01-01

    The method of calculation described is useful for the types of work of which examples are given. It is also useful in making rapid comparison of the activities that might be expected from several different elements. For instance, suppose it is desired to know which of the three elements, cobalt, nickel, or vanadium is, under similar conditions, activated to the greatest extent by thermal neutrons. If reference is made to a cross-section table only, the values may be misleading unless properly interpreted by a suitable comparison of half-lives and abundances. In this table all the variables have been combined, and the desired information can be obtained directly from the tabulated values of the activity produced per gram per second of irradiation, under the stated conditions. Hence, it is easily seen that, under similar circumstances of irradiation, vanadium is most easily activated even though the cross section of one of the cobalt isotopes is nearly five times that of vanadium and the cross section of one of the nickel isotopes is three times that of vanadium. © 1954 Società Italiana di Fisica.
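
    The tabulated quantity can be reconstructed from standard activation arithmetic. A sketch comparing vanadium and cobalt, using approximate textbook nuclear data and an assumed flux (these are not the paper's tabulated values):

    ```python
    import math

    # Induced activity per gram after a short thermal-neutron irradiation:
    #   A = (N_A * f / M) * sigma * phi * (1 - exp(-lambda * t))
    # Abundance f, cross section sigma, and product half-life are
    # approximate textbook values; phi is an assumed flux.
    N_A, phi, t = 6.022e23, 1e12, 1.0   # /mol, n/cm^2/s, 1 s of irradiation

    def activity_per_gram(f, M, sigma_barn, half_life_s):
        lam = math.log(2) / half_life_s
        n_targets = N_A * f / M   # target nuclei per gram of element
        return n_targets * sigma_barn * 1e-24 * phi * (1 - math.exp(-lam * t))

    A_V = activity_per_gram(0.9975, 50.94, 4.9, 3.74 * 60)       # 51V -> 52V
    A_Co = activity_per_gram(1.0, 58.93, 37.0, 5.27 * 3.156e7)   # 59Co -> 60Co
    print(f"V: {A_V:.2e} Bq/g,  Co: {A_Co:.2e} Bq/g")
    # Vanadium wins by ~5 orders of magnitude despite its smaller cross
    # section, because the short 52V half-life builds activity quickly.
    ```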

  6. Consequences of neglecting the interannual variability of the solar resource: A case study of photovoltaic power among the Hawaiian Islands

    DOE PAGES

    Bryce, Richard; Losada Carreno, Ignacio; Kumler, Andrew; ...

    2018-04-05

    The interannual variability of the solar irradiance and meteorological conditions are often ignored in favor of single-year data sets for modeling power generation and evaluating the economic value of photovoltaic (PV) power systems. Yet interannual variability significantly impacts the generation from one year to another of renewable power systems such as wind and PV. Consequently, the interannual variability of power generation corresponds to the interannual variability of capital returns on investment. The penetration of PV systems within the Hawaiian Electric Companies' portfolio has rapidly accelerated in recent years and is expected to continue to increase given the state's energy objectives laid out by the Hawaii Clean Energy Initiative. We use the National Solar Radiation Database (1998-2015) to characterize the interannual variability of the solar irradiance and meteorological conditions across the State of Hawaii. These data sets are passed to the National Renewable Energy Laboratory's System Advisory Model (SAM) to calculate an 18-year PV power generation data set to characterize the variability of PV power generation. We calculate the interannual coefficient of variability (COV) for annual average global horizontal irradiance (GHI) on the order of 2% and COV for annual capacity factor on the order of 3% across the Hawaiian archipelago. Regarding the interannual variability of seasonal trends, we calculate the COV for monthly average GHI values on the order of 5% and COV for monthly capacity factor on the order of 10%. We model residential-scale and utility-scale PV systems and calculate the economic returns of each system via the payback period and the net present value. We demonstrate that studies based on single-year data sets for economic evaluations reach conclusions that deviate from the true values realized by accounting for interannual variability.
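
    The interannual COV figures are simply the standard deviation of the annual series divided by its mean. A minimal sketch with a synthetic 18-year record standing in for the NSRDB data:

    ```python
    import numpy as np

    # Interannual coefficient of variability: COV = std / mean of the
    # annual averages.  The 18 annual GHI values here are synthetic
    # stand-ins for the 1998-2015 NSRDB record.
    rng = np.random.default_rng(42)
    annual_ghi = rng.normal(5.5, 0.11, size=18)   # kWh/m^2/day, assumed

    cov = annual_ghi.std(ddof=1) / annual_ghi.mean()
    print(f"interannual COV of GHI: {cov:.1%}")   # on the order of 2%
    ```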

  8. Understanding Students' Motivation in Sport and Physical Education: From the Expectancy-Value Model and Self-Efficacy Theory Perspectives

    ERIC Educational Resources Information Center

    Gao, Zan; Lee, Amelia M.; Harrison, Louis, Jr.

    2008-01-01

    In this article, the roles of individuals' expectancy beliefs and incentives (i.e., task value, outcome expectancy) in sport and physical education are examined from expectancy-value model and self-efficacy theory perspectives. Overviews of the two theoretical frameworks and the conceptual and measurement issues are provided, followed by a review…

  9. Effects of expected-value information and display format on recognition of aircraft subsystem abnormalities

    NASA Technical Reports Server (NTRS)

    Palmer, Michael T.; Abbott, Kathy H.

    1994-01-01

    This study identifies improved methods to present system parameter information for detecting abnormal conditions and to identify system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information onto traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value-ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.

  10. Evaluation of algorithms for geological thermal-inertia mapping

    NASA Technical Reports Server (NTRS)

    Miller, S. H.; Watson, K.

    1977-01-01

    The errors incurred in producing a thermal inertia map are of three general types: measurement, analysis, and model simplification. To emphasize the geophysical relevance of these errors, they were expressed in terms of uncertainty in thermal inertia and compared with the thermal inertia values of geologic materials. Thus the applications and practical limitations of the technique were illustrated. All errors were calculated using the parameter values appropriate to a site at the Raft River, Id. Although these error values serve to illustrate the magnitudes that can be expected from the three general types of errors, extrapolation to other sites should be done using parameter values particular to the area. Three surface temperature algorithms were evaluated: linear Fourier series, finite difference, and Laplace transform. In terms of resulting errors in thermal inertia, the Laplace transform method is the most accurate (260 TIU), the forward finite difference method is intermediate (300 TIU), and the linear Fourier series method the least accurate (460 TIU).

  11. Burden of suicide in Poland in 2012: how could it be measured and how big is it?

    PubMed

    Orlewska, Katarzyna; Orlewska, Ewa

    2018-04-01

    The aim of our study was to estimate the health-related and economic burden of suicide in Poland in 2012 and to demonstrate the effects of using different assumptions on the disease burden estimation. Years of life lost (YLL) were calculated by multiplying the number of deaths by the remaining life expectancy. Local expected YLL (LEYLL) and standard expected YLL (SEYLL) were computed using Polish life expectancy tables and WHO standards, respectively. In the base case analysis LEYLL and SEYLL were computed with 3.5 and 0% discount rates, respectively, and no age-weighting. Premature mortality costs were calculated using a human capital approach, with discounting at 5%, and are reported in Polish zloty (PLN) (1 euro = 4.3 PLN). The impact of applying different assumptions on base-case estimates was tested in sensitivity analyses. The total LEYLLs and SEYLLs due to suicide were 109,338 and 279,425, respectively, with 88% attributable to male deaths. The cost of male premature mortality (2,808,854,532 PLN) was substantially higher than for females (177,852,804 PLN). Discounting and age-weighting have a large effect on the base case estimates of LEYLLs. The greatest impact on the estimates of suicide-related premature mortality costs was due to the value of the discount rate. Our findings provide quantitative evidence on the burden of suicide. In our opinion each of the demonstrated methods brings something valuable to the evaluation of the impact of suicide on a given population, but LEYLLs and premature mortality costs estimated according to national guidelines have the potential to be useful for local public health policymakers.
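
    A sketch of the discounted-YLL arithmetic, using the continuous-discounting form common in burden-of-disease work; the death count and remaining life expectancy below are illustrative, not the Polish figures:

    ```python
    import math

    # Discounted years of life lost: each death contributes the discounted
    # stream of its remaining life expectancy L,
    #   YLL = deaths * (1 - exp(-r * L)) / r   (r > 0).
    def discounted_yll(deaths, life_expectancy, rate):
        if rate == 0:
            return deaths * life_expectancy
        return deaths * (1 - math.exp(-rate * life_expectancy)) / rate

    deaths, L = 1000, 40.0   # hypothetical deaths with 40 years remaining
    print(discounted_yll(deaths, L, rate=0.0))    # 40000.0 undiscounted
    print(discounted_yll(deaths, L, rate=0.035))  # ~21526 at 3.5%; discounting
    # roughly halves the burden estimate, as the sensitivity analyses show
    ```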

  12. Participation motives in physical education: an expectancy-value approach.

    PubMed

    Goudas, Marios; Dermitzaki, Irini

    2004-12-01

    This study applied an expectancy-value approach in examining participation motives of students in physical education. As predicted, outcome expectancy, a variable formed by the combination of outcome value and outcome likelihood, correlated significantly more highly with motivational indices than either of these two factors alone.

  13. Using the Expectancy-Value Theory of Motivation to Predict Behavioral and Emotional Risk among High School Students

    ERIC Educational Resources Information Center

    Dever, Bridget V.

    2016-01-01

    Within the expectancy-value framework, much work has been done linking expectancies and task values to academic outcomes such as performance, persistence, and choice. Research on the associations between student motivation (including efficacy and task values) and behavioral and emotional problems, however, is nascent. The present study examined a…

  14. Development of an index to rank dairy females on expected lifetime profit.

    PubMed

    Kelleher, M M; Amer, P R; Shalloo, L; Evans, R D; Byrne, T J; Buckley, F; Berry, D P

    2015-06-01

    The objective of this study was to develop an index to rank dairy females on expected profit for the remainder of their lifetime, taking cognizance of both additive and nonadditive genetic merit, permanent environmental effects, and current states of the animal including the most recent calving date and cow parity. The cow own worth (COW) index is intended to be used for culling the expected least profitable females in a herd, as well as inform purchase and pricing decisions for trading of females. The framework of the COW index consisted of the profit accruing from (1) the current lactation, (2) future lactations, and (3) net replacement cost differential. The COW index was generated from estimated performance values (sum of additive genetic merit, nonadditive genetic merit, and permanent environmental effects) of traits, their respective net margin values, and transition probability matrices for month of calving, survival, and somatic cell count; the transition matrices were to account for predicted change in a cow's state in the future. Transition matrices were generated from 3,156,109 lactation records from the Irish national database between the years 2010 and 2013. Phenotypic performance records for 162,981 cows in the year 2012 were used to validate the COW index. Genetic and permanent environmental effects (where applicable) were available for these cows from the 2011 national genetic evaluations and used to calculate the COW index and their national breeding index values (includes only additive genetic effects). Cows were stratified per quartile within herd, based on their COW index value and national breeding index value. The correlation between individual animal COW index value and national breeding index value was 0.65. Month of calving of the cow in her current lactation explained 18% of the variation in the COW index, with the parity of the cow explaining an additional 3 percentage units of the variance in the COW index. Females ranking higher on the COW index yielded more milk and milk solids and calved earlier in the calving season than their lower ranking contemporaries. The difference in phenotypic performance between the best and worst quartiles was larger for cows ranked on COW index than cows ranked on the national breeding index. The COW index is useful to rank females before culling or purchasing decisions on expected profit and is complementary to the national breeding index, which identifies the most suitable females for breeding replacements. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. Towards the Application of Structure-Property Relationship Modeling in Materials Science: Predicting the Seebeck Coefficient for Ionic Liquid/Redox Couple Systems.

    PubMed

    Sosnowska, Anita; Barycki, Maciej; Gajewicz, Agnieszka; Bobrowski, Maciej; Freza, Sylwia; Skurski, Piotr; Uhl, Stefanie; Laux, Edith; Journot, Tony; Jeandupeux, Laure; Keppner, Herbert; Puzyn, Tomasz

    2016-06-03

    This work focuses on determining the influence of both ionic-liquid (IL) type and redox couple concentration on the Seebeck coefficient values of such a system. The quantitative structure-property relationship (QSPR) and read-across techniques are proposed as methods to identify structural features of ILs (mixed with the LiI/I2 redox couple) which have the most influence on the Seebeck coefficient (Se) values of the system. ILs consisting of small, symmetric cations and anions with high values of vertical electron binding energy are recognized as those with the highest values of Se. In addition, the QSPR model enables the values of Se to be predicted for each IL that belongs to the applicability domain of the model. The influence of the redox-couple concentration on the values of Se is also quantitatively described. Thus, it is possible to calculate how the value of Se will change with changing redox-couple concentration. The presence of the LiI/I2 redox couple in lower concentrations increases the values of Se, as expected. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Measuring the quality of haemophilia care across different settings: a set of performance indicators derived from demographics data.

    PubMed

    Iorio, A; Stonebraker, J S; Brooker, M; Soucie, J M

    2017-01-01

    Haemophilia is a rare disease for which quality of care varies around the world. We propose data-driven indicators as surrogate measures for the provision of haemophilia care across countries and over time. The guiding criteria for selection of possible indicators were ease of calculation and direct applicability to a wide range of countries with basic data collection capacities. General population epidemiological data and haemophilia A population data from the World Federation of Hemophilia (WFH) Annual Global Survey (AGS) for the years 2013 and 2010 in a sample of 10 countries were used for this pilot exercise. Three indicators were identified: (i) the percentage difference between the observed and the expected haemophilia A incidence, which would be close to null when all of the people with haemophilia A (PWHA) theoretically expected in a country would be known and reported to the AGS; (ii) the percentage of the total number of PWHA with severe disease; and (iii) the ratio of adults to children among PWHA standardized to the ratio of adults to children for males in the general population, which would be close to one if the survival of PWHA is equal to that of the general population. Country-specific values have been calculated for the 10 countries. We have identified and evaluated three promising indicators of quality of care in haemophilia. Further evaluation on a wider set of data from the AGS will be needed to confirm their value and further explore their measurement properties. © 2016 John Wiley & Sons Ltd.
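
    A sketch of the three indicators computed from hypothetical registry counts; the expected-prevalence assumption below is ours for illustration, not the paper's derivation:

    ```python
    # Three demographic quality-of-care indicators from hypothetical counts.
    # The expected count assumes a haemophilia A prevalence of ~1 per
    # 10,000 males, which is an assumption made for this example.
    males = 20_000_000                # male population, assumed
    observed_pwha = 1_400             # PWHA reported to the AGS, assumed
    severe_pwha = 500
    adults_pwha, children_pwha = 1_000, 400
    adult_child_ratio_pop = 3.2       # males >=18 / males <18, assumed

    expected_pwha = males / 10_000
    i1 = 100 * (observed_pwha - expected_pwha) / expected_pwha   # % difference
    i2 = 100 * severe_pwha / observed_pwha                       # % severe
    i3 = (adults_pwha / children_pwha) / adult_child_ratio_pop   # standardized ratio
    print(f"i1={i1:.0f}%  i2={i2:.0f}%  i3={i3:.2f}")
    # i1 near 0 indicates complete case finding; i3 near 1 indicates
    # survival of PWHA comparable to the general male population.
    ```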

  18. Quantum Spectra and Dynamics

    NASA Astrophysics Data System (ADS)

    Arce, Julio Cesar

    1992-01-01

    This work focuses on time-dependent quantum theory and methods for the study of the spectra and dynamics of atomic and molecular systems. Specifically, we have addressed the following two problems: (i) Development of a time-dependent spectral method for the construction of spectra of simple quantum systems. This includes the calculation of eigenenergies, the construction of bound and continuum eigenfunctions, and the calculation of photo cross-sections. Computational applications include the quadrupole photoabsorption spectra and dissociation cross-sections of molecular hydrogen from various vibrational states in its ground electronic potential-energy curve. This method is seen to provide an advantageous alternative, from both the computational and the conceptual points of view, to existing standard methods. (ii) Explicit time-dependent formulation of photoabsorption processes. Analytical solutions of the time-dependent Schrödinger equation are constructed and employed for the calculation of probability densities, momentum distributions, fluxes, transition rates, expectation values and correlation functions. These quantities are seen to establish the link between the dynamics and the calculated, or measured, spectra and cross-sections, and to clarify the dynamical nature of the excitation, transition and ejection processes. Numerical calculations on atomic and molecular hydrogen corroborate and complement the previous results, allowing the identification of different regimes during the photoabsorption process.

  19. Cost-effectiveness analysis of treatment alternatives for beef bulls with preputial prolapse.

    PubMed

    Kasari, T R; McGrann, J M; Hooper, R N

    1997-10-01

    To develop an economic model for comparing cost-effectiveness of medical and surgical treatment versus replacement of beef bulls with preputial prolapse. Economic analysis. Estimates determined from medical records of bulls treated for preputial prolapse at our hospital and from information about treatment of bulls published elsewhere. Annual depreciation cost for treatment (ADC(T)) and replacement (ADC(R)) were calculated. Total investment for an injured bull equaled the sum of salvage value, maintenance cost, and expected cost of the treatment option under consideration. Total investment for a replacement bull was purchase price. Net present value of cost was calculated for each year of bull use. Sensitivity analyses were constructed to determine the value that would warrant treatment of an injured bull. The decision to treat was indicated when ADC(T) was less than ADC(R). In our example, it was more cost-effective for owners to cull an injured bull. The ADC(R) was $97 less than ADC(T) for medical treatment ($365 vs $462) and $280 less than ADC(T) for surgical treatment ($365 vs $645). Likewise, net present value of cost values indicated that it was more cost-effective for owners to cull an injured bull. Sensitivity analysis indicated treatment decisions were justified on the basis of replacement value or planned number of breeding seasons remaining for the bull. The model described here can be used by practitioners to provide an objective basis to guide decision making of owners who seek advice on whether to treat or replace bulls with preputial prolapse.
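
    As a rough illustration of the annualized-cost rule described above, the sketch below computes ADC(T) and ADC(R) and applies the treat-versus-replace decision; all dollar amounts and years of use are made-up placeholders, not figures from the study.

    ```python
    def annual_depreciation_cost(total_investment, salvage_value, years_of_use):
        # Straight-line annual depreciation over the remaining years of use
        return (total_investment - salvage_value) / years_of_use

    # Treatment: investment = salvage value + maintenance + expected treatment cost
    adc_t = annual_depreciation_cost(900 + 200 + 400, salvage_value=900, years_of_use=3)
    # Replacement: investment = purchase price of a replacement bull
    adc_r = annual_depreciation_cost(2500, salvage_value=900, years_of_use=5)

    # Decision rule from the abstract: treat only if ADC(T) < ADC(R)
    print("treat" if adc_t < adc_r else "replace")
    ```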

  20. Ground states of larger nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pieper, S.C.; Wiringa, R.B.; Pandharipande, V.R.

    1995-08-01

    The methods used for the few-body nuclei require operations on the complete spin-isospin vector; the size of this vector makes such methods impractical for nuclei with A > 8. During the last few years we developed cluster expansion methods that do not require operations on the complete vector. We use the same Hamiltonians as for the few-body nuclei and variational wave functions of form similar to the few-body wave functions. The cluster expansions are made for the noncentral parts of the wave functions and for the operators whose expectation values are being evaluated. The central pair correlations in the wave functions are treated exactly, and this requires the evaluation of 3A-dimensional integrals which are done with Monte Carlo techniques. Most of our effort was on 16O, other p-shell nuclei, and 40Ca. In 1993 the Mathematics and Computer Science Division acquired a 128-processor IBM SP which has a theoretical peak speed of 16 gigaflops (GFLOPS). We converted our program to run on this machine. Because of the large memory on each node of the SP, it was easy to convert the program to parallel form with very low communication overhead. Considerably more effort was needed to restructure the program from one oriented towards long vectors for the Cray computers at NERSC to one that makes efficient use of the cache of the RS6000 architecture. The SP made possible complete five-body cluster calculations of 16O for the first time; previously we could only do four-body cluster calculations. These calculations show that the expectation value of the two-body potential is converging less rapidly than we had thought, while that of the three-body potential is more rapidly convergent; the net result is no significant change to our predicted binding energy for 16O using the new Argonne v18 potential and the Urbana IX three-nucleon potential. This result is in good agreement with experiment.

  1. 43 CFR 11.84 - Damage determination phase-implementation guidance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...

  2. 43 CFR 11.84 - Damage determination phase-implementation guidance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...

  3. 43 CFR 11.84 - Damage determination phase-implementation guidance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...

  4. Investigation into the semimagic nature of the tin isotopes through electromagnetic moments

    DOE PAGES

    Allmond, J. M.; Stuchbery, A. E.; Galindo-Uribarri, A.; ...

    2015-10-19

    A complete set of electromagnetic moments, B(E2; 0+1 → 2+1), Q(2+1), and g(2+1), has been measured from Coulomb excitation of semi-magic 112,114,116,118,120,122,124Sn (Z = 50) on natural carbon and titanium targets. The magnitudes of the B(E2) values, measured to a precision of ~4%, disagree with a recent lifetime study [Phys. Lett. B 695, 110 (2011)] that employed the Doppler-shift attenuation method. The B(E2) values show an overall enhancement compared with recent theoretical calculations and a clear asymmetry about midshell, contrary to naive expectations. A new static electric quadrupole moment, Q(2+1), has been measured for 114Sn. The static quadrupole moments are generally consistent with zero but reveal an enhancement near midshell; this had not been previously observed. The magnetic dipole moments are consistent with previous measurements and show a near monotonic decrease in value with neutron number. The current theory calculations fail to reproduce the electromagnetic moments of the tin isotopes. The role of 2p-2h and 4p-4h intruders, which are lowest in energy at midshell and outside of current model spaces, needs to be investigated in the future.

  5. Electronic properties of 3R-CuAlO2 under pressure: Three theoretical approaches

    NASA Astrophysics Data System (ADS)

    Christensen, N. E.; Svane, A.; Laskowski, R.; Palanivel, B.; Modak, P.; Chantis, A. N.; van Schilfgaarde, M.; Kotani, T.

    2010-01-01

    The pressure variation in the structural parameters, u and c/a, of the delafossite CuAlO2 is calculated within the local-density approximation (LDA). Further, the electronic structures as obtained by different approximations are compared: LDA, LDA+U, and a recently developed “quasiparticle self-consistent GW” (QSGW) approximation. The structural parameters obtained by the LDA agree very well with experiments but, as expected, gaps in the formal band structure are underestimated as compared to optical experiments. The Cu 3d states, which lie too high in the LDA, can be shifted down by LDA+U. The magnitude of the electric field gradient (EFG) as obtained within the LDA is far too small. It can be “fitted” to experiments in LDA+U, but a simultaneous adjustment of the EFG and the gap cannot be obtained with a single U value. QSGW yields reasonable values for both quantities. LDA and QSGW yield significantly different values for some of the band-gap deformation potentials, but calculations within both approximations predict that 3R-CuAlO2 remains an indirect-gap semiconductor at all pressures in its stability range 0-36 GPa, although the smallest direct gap has a negative pressure coefficient.

  6. Development of In-Fiber Reflective Bragg Gratings as Shear Stress Monitors in Aerodynamic Facilities

    NASA Technical Reports Server (NTRS)

    Parmar, Devendra S.; Sprinkle, Danny R.; Singh, Jag J.

    1998-01-01

    Bragg gratings centered at nominal wavelengths of 1290 nm and 1300 nm were inscribed in a 9/125 micron germano-silicate optical fiber, using continuous-wave frequency-doubled Ar+ laser radiation at 244 nm. Such gratings have been used extensively as temperature and strain monitors in smart structures. They have, however, never been used for measuring aerodynamic shear stresses. As a test of their sensitivity as shear stress monitors, a Bragg fiber attached to a metal plate was subjected to laminar flows in a glass pipe. An easily measurable, large flow-induced shift (Δλ_B) was observed in the Bragg reflected wavelength. Thereafter, the grating was calibrated by making one-time, simultaneous measurements of Δλ_B and the coefficient of skin friction (C_f) with a skin friction balance, as a function of flow rates in a subsonic wind tunnel. Onset of fan-induced transition in the tunnel flow provided a unique flow rate for correlating the Δλ_B and C_f values needed for computing the effective modulus of rigidity (N_eff) of the fiber attached to the metal plate. This value of N_eff is expected to remain constant throughout the elastic stress range expected during the Bragg grating aerodynamic tests. It has been used for calculating the value of C_f at various tunnel speeds, on the basis of measured values of Bragg wavelength shifts at those speeds.

  7. Non-Markovianity quantifier of an arbitrary quantum process

    NASA Astrophysics Data System (ADS)

    Debarba, Tiago; Fanchini, Felipe F.

    2017-12-01

    Calculating the degree of non-Markovianity of a quantum process for a high-dimensional system is a difficult task, given the complex maximization problems involved. Focusing on the entanglement-based measure of non-Markovianity, we propose a numerically feasible quantifier for finite-dimensional systems. We define the non-Markovianity measure in terms of a class of entanglement quantifiers named witnessed entanglement, which allows us to write several entanglement-based measures of non-Markovianity in a unique formalism. In this formalism, we show that the non-Markovianity in a given time interval can be witnessed by calculating the expectation value of an observable, making it attractive for experimental investigations. Following this property, we introduce a quantifier based on the entanglement witness over an interval of time and show that it is a bona fide measure of non-Markovianity. In our example, we use the generalized robustness of entanglement, an entanglement measure that can be readily calculated by a semidefinite programming method, to study impurity atoms coupled to a Bose-Einstein condensate.

  8. Carbonyls in the metropolitan area of Mexico City: calculation of the total photolytic rate constants Kp(s(-1)) and photolytic lifetime (tau) of ambient formaldehyde and acetaldehyde.

    PubMed

    Báez, Armando P; Torres, Ma del Carmen B; García, Rocío M; Padilla, Hugo G

    2002-01-01

    A great number of studies on the ambient levels of formaldehyde and other carbonyls in urban, rural and maritime atmospheres have been published because of their chemical and toxicological characteristics and adverse health effects. Due to their toxicological effects, it was considered necessary to measure these compounds at different sites in the metropolitan area of Mexico City, and to calculate the total photolytic rate constants and the photolytic lifetimes of formaldehyde and acetaldehyde. Four sites were chosen. Sampling was carried out in different seasons and atmospheric conditions. The results indicated that formaldehyde was the most abundant carbonyl, followed by acetone and acetaldehyde. Data sets obtained from the 4 sites were chosen to calculate the total photolytic rate constant and the photolytic lifetime for formaldehyde and acetaldehyde. Maximum photolytic rate values were obtained at the maximum actinic fluxes, as was to be expected.
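
    The abstract does not give the formula used, but a total photolysis rate constant is conventionally assembled by integrating the absorption cross-section, quantum yield and actinic flux over wavelength; the sketch below shows that structure with placeholder arrays, purely as an assumption-laden illustration.

    ```python
    import numpy as np

    # Placeholder spectra on a coarse UV grid (not the paper's data)
    wavelengths_nm = np.linspace(290, 360, 8)                 # nm
    sigma = np.full_like(wavelengths_nm, 2.0e-20)             # cm^2 molecule^-1 (assumed)
    phi = np.full_like(wavelengths_nm, 0.7)                   # quantum yield (assumed)
    actinic_flux = np.full_like(wavelengths_nm, 1.0e14)       # photons cm^-2 s^-1 nm^-1 (assumed)

    dlam = np.gradient(wavelengths_nm)                        # nm
    k_p = np.sum(sigma * phi * actinic_flux * dlam)           # total photolytic rate constant, s^-1
    tau = 1.0 / k_p                                           # photolytic lifetime, s
    ```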

  9. Multireference configuration interaction calculations of the first six ionization potentials of the uranium atom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bross, David H.; Parmar, Payal; Peterson, Kirk A.

    The first 6 ionization potentials (IPs) of the uranium atom have been calculated using multireference configuration interaction (MRCI+Q) with extrapolations to the complete basis set (CBS) limit using new all-electron correlation consistent basis sets. The latter were carried out with the third-order Douglas-Kroll-Hess Hamiltonian. Correlation down through the 5s5p5d electrons has been taken into account, as well as contributions to the IPs due to the Lamb shift. Spin-orbit coupling contributions calculated at the 4-component Kramers restricted configuration interaction level, as well as the Gaunt term computed at the Dirac-Hartree-Fock level, were added to the best scalar relativistic results. As a result, the final ionization potentials are expected to be accurate to within 5 kcal/mol (0.2 eV), and thus more reliable than the current experimental values of IP 3 through IP 6.

  10. Cost-effectiveness and value of information analysis of nutritional support for preventing pressure ulcers in high-risk patients: implement now, research later.

    PubMed

    Tuffaha, Haitham W; Roberts, Shelley; Chaboyer, Wendy; Gordon, Louisa G; Scuffham, Paul A

    2015-04-01

    Pressure ulcers are a major cause of mortality, morbidity, and increased healthcare cost. Nutritional support may reduce the incidence of pressure ulcers in hospitalised patients who are at risk of pressure ulcer and malnutrition. To evaluate the cost-effectiveness of nutritional support in preventing pressure ulcers in high-risk hospitalised patients, and to assess the value of further research to inform the decision to implement this intervention, using value-of-information (VOI) analysis. The analysis was from the perspective of Queensland Health, Australia, using a decision model with evidence derived from a systematic review and meta-analysis. Resources were valued using 2014 prices and the time horizon of the analysis was one year. Monte Carlo simulation was used to estimate net monetary benefits (NB) and to calculate VOI measures. Compared with standard hospital diet, nutritional support was cost saving at AU$425 per patient, and more effective with an average 0.005 quality-adjusted life years (QALYs) gained. At a willingness-to-pay of AU$50,000 per QALY, the incremental NB was AU$675 per patient, with a probability of 87% that nutritional support is cost-effective. The expected value of perfect information was AU$5 million, and the expected value of perfect parameter information was highest for the relative risk of developing a pressure ulcer, at AU$2.5 million. For a future trial investigating the relative effectiveness of the interventions, the expected net benefit of research would be maximised at AU$100,000 with 1,200 patients in each arm if nutritional support was perfectly implemented. The opportunity cost of withholding the decision to implement the intervention until the results of the future study are available would be AU$14 million. Nutritional support is cost-effective in preventing pressure ulcers in high-risk hospitalised patients compared with standard diet. Future research to reduce decision uncertainty is worthwhile; however, given the opportunity losses associated with delaying the implementation, "implement and research" is the approach recommended for this intervention.
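
    The expected value of perfect information reported above has a standard simulation form, EVPI = E[max over options of NB] − max over options of E[NB]; the sketch below shows it with placeholder distributions that are not the study's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_sim, wtp = 100_000, 50_000          # willingness-to-pay per QALY (AU$), as in the study

    # Placeholder uncertainty in incremental effects and costs of nutritional support
    qaly_gain = rng.normal(0.005, 0.002, n_sim)   # incremental QALYs (assumed spread)
    cost_saving = rng.normal(425, 150, n_sim)     # incremental cost saving in AU$ (assumed spread)

    inb = wtp * qaly_gain + cost_saving           # incremental net benefit vs standard diet
    nb = np.column_stack([np.zeros(n_sim), inb])  # NB of: standard diet (reference 0), support

    prob_cost_effective = np.mean(inb > 0)
    evpi_per_patient = np.mean(np.max(nb, axis=1)) - np.max(np.mean(nb, axis=0))
    ```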

  11. More Value through Greater Differentiation: Gender Differences in Value Beliefs about Math

    ERIC Educational Resources Information Center

    Gaspard, Hanna; Dicke, Anna-Lena; Flunger, Barbara; Schreier, Brigitte; Häfner, Isabelle; Trautwein, Ulrich; Nagengast, Benjamin

    2015-01-01

    Expectancy-value theory (Eccles et al., 1983) is a prominent approach to explaining gender differences in math-related academic choices, with value beliefs acting as an important explanatory factor. Expectancy-value theory defines 4 value components: intrinsic value, attainment value, utility value, and cost. The present study followed up on…

  12. ‘Transport to Where?’: Reflections on the problem of value and time à propos an awkward practice in medical research

    PubMed Central

    Geissler, P. Wenzel

    2011-01-01

    Based upon Kenyan ethnography, this article examines the gap between the bioethics aversion to value transfers in clinical trials, and research participants’ and researchers’ expectations of these. This article focuses upon so-called ‘transport reimbursement’ (TR): monetary payments to participants that are framed as mere refund of transport expenses, but which are of considerable value to recipients. The interest in this case lies not so much in the unsurprising gap between regulatory norms and poor study subjects’ lives, but in the way in which this discrepancy between bioethical discourse and materialities of survival is silenced. In spite of the general awareness that TR indeed is about the material value of research, about value calculation, and expectations of return, it is not publicly discussed as such – unless ironically, in jest, or in private. This double-blindness around ‘reimbursement’ has provoked discussions among ethicists and anthropologists, some of which propose that the work that generates scientific value should be recognised as labour and participants, accordingly, paid. Here, this paper argues that such a re-vision of trial participation as work rather than as a gift for the public good, risks abrogating the possibility of ‘the public’ that is not only a precondition of public medical science, but also its potential product. The supposedly radical solution of tearing away the veils of misrecognition that ‘free’ gifting ideology lays upon the realities of free labour, though analytically plausible, fails to recognise the utopian openings within clinical trial transactions that point beyond the present – towards larger forms of social association, and towards future alignments of scientific possibilities and human lives. PMID:23914253

  13. Erodibility of selected soils and estimates of sediment yields in the San Juan Basin, New Mexico

    USGS Publications Warehouse

    Summer, Rebecca M.

    1981-01-01

    Onsite rainfall-simulation experiments were conducted to derive field-erodibility indexes for rangeland soils and soils disturbed by mining in coal fields of northwestern New Mexico. Mean indexes on rangeland soils range from 0 grams (of detached soil) on dune soil to 121 grams on wash-transport zones. Mean field-erodibility-index values of soils disturbed by mining range from 16 to 32 grams; they can be extrapolated to nearby coal fields where future mining is expected. Because field-erodibility-index data allow differentiation of erodibilities across a variable landscape, these indexes were used to adjust values of K, the erodibility factor of the Universal Soil Loss Equation. Estimates of soil loss and sediment yield were then calculated for a small basin following mining. (USGS)

  14. Automatic low-order aberration correction based on geometrical optics for slab lasers.

    PubMed

    Yu, Xin; Dong, Lizhi; Lai, Boheng; Yang, Ping; Liu, Yong; Kong, Qingfeng; Yang, Kangjian; Tang, Guomao; Xu, Bing

    2017-02-20

    In this paper, we present a method based on geometrical optics to simultaneously correct low-order aberrations and reshape the beams of slab lasers. A coaxial optical system with three lenses is adopted. The positions of the three lenses are directly calculated from the beam parameters detected by wavefront sensors. The initial size of the input beams is 1.8 mm × 11 mm, and peak-to-valley (PV) values of the wavefront range up to several tens of microns. After automatic correction, the dimensions reach nearly 22 mm × 22 mm, as expected, and PV values of the wavefront are less than 2 μm. The effectiveness and precision of this method are verified with experiments.

  15. Live animal measurements, carcass composition and plasma hormone and metabolite concentrations in male progeny of sires differing in genetic merit for beef production.

    PubMed

    Clarke, A M; Drennan, M J; McGee, M; Kenny, D A; Evans, R D; Berry, D P

    2009-07-01

    In genetic improvement programmes for beef cattle, the effect of selecting for a given trait or index on other economically important traits, or their predictors, must be quantified to ensure no deleterious consequential effects go unnoticed. The objective was to compare live animal measurements, carcass composition and plasma hormone and metabolite concentrations of male progeny of sires selected on an economic index in Ireland. This beef carcass index (BCI) is expressed in euros and based on weaning weight, feed intake, carcass weight and carcass conformation and fat scores. The index is used to aid in the genetic comparison of animals for the expected profitability of their progeny at slaughter. A total of 107 progeny from beef sires of high (n = 11) or low (n = 11) genetic merit for the BCI were compared in either a bull (slaughtered at 16 months of age) or steer (slaughtered at 24 months of age) production system, following purchase after weaning (8 months of age) from commercial beef herds. Data were analysed as a 2 × 2 factorial design (two levels of genetic merit by two production systems). Progeny of high BCI sires had heavier carcasses, greater (P < 0.01) muscularity scores after weaning, greater (P < 0.05) skeletal scores and scanned muscle depth pre-slaughter, higher (P < 0.05) plasma insulin concentrations and greater (P < 0.01) animal value (obtained by multiplying carcass weight by carcass value, which was based on the weight of meat in each cut by its commercial value) than progeny of low BCI sires. Regression of progeny performance on sire genetic merit was also undertaken across the entire data set. In steers, the effect of BCI on carcass meat proportion, calculated carcass value (c/kg) and animal value was positive (P < 0.01), while a negative association was observed for scanned fat depth pre-slaughter and carcass fat proportion (P < 0.01), but there was no effect in bulls. The effect of sire expected progeny difference (EPD) for carcass weight followed the same trends as BCI. Muscularity scores, carcass meat proportion and calculated carcass value increased, whereas scanned fat depth, carcass fat and bone proportions decreased with increasing sire EPD for conformation score. The opposite association was observed for sire EPD for fat score. Results from this study show that selection using the BCI had positive effects on live animal muscularity, carcass meat proportion, proportions of high-value cuts and carcass value in steer progeny, which are desirable traits in beef production.

  16. The mixing length parameter alpha. [in stellar structure calculations

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1990-01-01

    The standard mixing length theory (MLT) treats turbulent eddies as if they were isotropic, while the largest eddies, which carry most of the flux, are highly anisotropic. Recently, an anisotropic MLT was constructed and the relevant equations derived. It is shown that these new equations can actually be cast in a form that is formally identical to that of the standard isotropic MLT, provided the mixing length parameter derived from stellar structure calculations is interpreted as an intermediate, auxiliary function alpha(x), where x, the degree of anisotropy, is given as a function of the thermodynamic variables of the problem. The relation between alpha(x) and the physically relevant alpha(l = Hp) is also given. Once the value of alpha is deduced, it is found to be a function of the local thermodynamic quantities, as expected.

  17. Expectations and Values of University Students in Transition: Evidence from an Australian Classroom

    ERIC Educational Resources Information Center

    Pearson, Cecil A. L.; Chatterjee, Samir R.

    2004-01-01

    Reforms in the functioning and purpose of higher education during the past 2 decades have created profound changes in the expectations and values of university students worldwide. Indeed, the values of entrepreneurship, vocational relevance, and commercial success have considerably displaced the traditional expectations of knowledge acquisition…

  18. College Students' Motivation toward Weight Training: An Application of Expectancy-Value Model

    ERIC Educational Resources Information Center

    Gao, Zan; Xiang, Ping

    2008-01-01

    Guided by an expectancy-value model of achievement choice (Eccles et al., 1983; Wigfield & Eccles, 2000), the relationships among expectancy-related beliefs, subjective task values (importance, interest, and usefulness), and achievement outcomes (intention, engagement, and performance) were examined in a college-level beginning weight training…

  19. Content Specificity of Expectancy Beliefs and Task Values in Elementary Physical Education

    ERIC Educational Resources Information Center

    Chen, Ang; Martin, Robert; Ennis, Catherine D.; Sun, Haichun

    2008-01-01

    The curriculum may superimpose a content-specific context that mediates motivation (Bong, 2001). This study examined content specificity of the expectancy-value motivation in elementary school physical education. Students' expectancy beliefs and perceived task values from a cardiorespiratory fitness unit, a muscular fitness unit, and a traditional…

  20. Adolescent Expectancy-Value Motivation, Achievement in Physical Education, and Physical Activity Participation

    ERIC Educational Resources Information Center

    Zhu, Xihe; Chen, Ang

    2013-01-01

    This study examined the relation between adolescent expectancy-value motivation, achievements, and after-school physical activity participation. Adolescents (N = 854) from 12 middle schools completed an expectancy-value motivation questionnaire, pre and posttests in psychomotor skill and health-related fitness knowledge tests, and a three-day…

  1. Expected rate of fisheries-induced evolution is slow.

    PubMed

    Andersen, Ken H; Brander, Keith

    2009-07-14

    Commercial fisheries exert high mortalities on the stocks they exploit, and the consequent selection pressure leads to fisheries-induced evolution of growth rate, age and size at maturation, and reproductive output. Productivity and yields may decline as a result, but little is known about the rate at which such changes are likely to occur. Fisheries-induced evolution of exploited populations has recently become a subject of concern for policy makers, fisheries managers, and the general public, with prominent calls for mitigating management action. We make a general evolutionary impact assessment of fisheries by calculating the expected rate of fisheries-induced evolution and the consequent changes in yield. Rates of evolution are expected to be approximately 0.1-0.6% per year, and the consequent reductions in fisheries yield are <0.7% per year. These rates are at least a factor of 5 lower than published values based on experiments and analyses of population time series, and we explain why the published rates may be overestimates. Dealing with evolutionary effects of fishing is less urgent than reducing the direct detrimental effects of overfishing on exploited stocks and on their marine ecosystems.

  2. Optimization of Multiple Related Negotiation through Multi-Negotiation Network

    NASA Astrophysics Data System (ADS)

    Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi

    In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not execute MRN optimally in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use a MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on a MNN. Secondly, by employing a MNID, an agent's possible decision on each related negotiation is reflected by the value of its expected utility. Lastly, by comparing expected utilities between all possible policies for conducting MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful-end scenario, and avoid unnecessary losses in an unsuccessful-end scenario.
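
    A toy version of the policy comparison sketched in the abstract: enumerate joint policies across the related negotiations, score each by joint success rate times joint utility, and keep the best. The action names and numbers are invented for illustration.

    ```python
    from itertools import product

    # (success rate, utility if the negotiation succeeds), per action, per negotiation
    negotiations = [
        {"accept": (0.9, 2.0), "counter": (0.6, 5.0)},
        {"accept": (0.8, 3.0), "counter": (0.5, 6.0)},
    ]

    def expected_utility(policy):
        # Joint success rate as the product of per-negotiation rates,
        # joint utility as the sum of per-negotiation utilities
        p, u = 1.0, 0.0
        for negotiation, action in zip(negotiations, policy):
            rate, utility = negotiation[action]
            p *= rate
            u += utility
        return p * u

    best_policy = max(product(["accept", "counter"], repeat=len(negotiations)),
                      key=expected_utility)
    ```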

  3. Acute oral and percutaneous toxicity of pesticides to mallards: Correlations with mammalian toxicity data

    USGS Publications Warehouse

    Hudson, R.H.; Haegele, M.A.; Tucker, R.K.

    1979-01-01

    Acute oral (po) and 24-hr percutaneous (perc) LD50 values for 21 common pesticides (19 anticholinesterases, of which 18 were organophosphates and one was a carbamate; one was an organochlorine central nervous system stimulant; and one was an organonitrogen pneumotoxicant) were determined in mallards (Anas platyrhynchos). Three of the pesticides tested were more toxic percutaneously than orally. An index to the percutaneous hazard of a pesticide, the dermal toxicity index (DTI = po LD50/perc LD50 × 100), was also calculated for each pesticide. These toxicity values in mallards were compared with toxicity data for rats from the literature. Significant positive correlations were found between log po and log percutaneous LD50 values in mallards (r = 0.65, p 0.10). Variations in percutaneous methodologies are discussed with reference to interspecies variation in toxicity values. It is recommended that a mammalian DTI value approaching 30 be used as a guideline for the initiation of percutaneous toxicity studies in birds, when the po LD50 and/or projected percutaneous LD50 are less than expected field exposure levels.
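
    The dermal toxicity index defined in the abstract is a one-line calculation; here it is in code, with the suggested mammalian screening threshold of 30 noted in the docstring.

    ```python
    def dermal_toxicity_index(oral_ld50, percutaneous_ld50):
        """DTI = oral LD50 / percutaneous LD50 x 100; a mammalian DTI
        approaching 30 was suggested as a trigger for avian dermal testing."""
        return oral_ld50 / percutaneous_ld50 * 100
    ```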

  4. Scalar field vacuum expectation value induced by gravitational wave background

    NASA Astrophysics Data System (ADS)

    Jones, Preston; McDougall, Patrick; Ragsdale, Michael; Singleton, Douglas

    2018-06-01

    We show that a massless scalar field in a gravitational wave background can develop a non-zero vacuum expectation value. We draw comparisons to the generation of a non-zero vacuum expectation value for a scalar field in the Higgs mechanism and with the dynamical Casimir vacuum. We propose that this vacuum expectation value, generated by a gravitational wave, can be connected with particle production from gravitational waves and may have consequences for the early Universe where scalar fields are thought to play an important role.

  5. Effect of storage conditions on the calorific value of municipal solid waste.

    PubMed

    Nzioka, Antony Mutua; Hwang, Hyeon-Uk; Kim, Myung-Gyun; Yan, Cao Zheng; Lee, Chang-Soo; Kim, Young-Ju

    2017-08-01

    Storage conditions are considered to be an important factor as far as waste material characteristics are concerned. This experimental investigation was conducted using municipal solid waste (MSW) with a high moisture content and varying composition of organic waste. The objective of this study was to understand the effect of storage conditions and temperature on the moisture content and calorific value of the waste. Samples were subjected to two different storage conditions and investigated at specified temperatures. The composition of sample materials investigated was varied for each storage condition and temperature respectively. Gross calorific value was determined experimentally while net calorific value was calculated using empirical formulas proposed by other researchers. Results showed minimal changes in moisture content as well as in gross and net calorific values when the samples were subjected to sealed storage conditions. Moisture content reduced due to the ventilation process and the rate of moisture removal increased with a rise in storage temperature. As expected, rate of moisture removal had a positive effect on gross and net calorific values. Net calorific values also increased at varying rates with a simultaneous decrease in moisture content. Experimental investigation showed the effectiveness of ventilation in improving the combustion characteristics of the waste.
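
    The abstract calculates net calorific value from empirical formulas without naming them; a commonly used moisture/latent-heat correction is sketched below as an assumption, not as the formula the authors actually applied.

    ```python
    def net_calorific_value(gcv_dry_mj_per_kg, moisture_fraction):
        # Subtract the latent heat of the water evaporated during combustion
        latent_heat_mj_per_kg = 2.443  # vaporisation of water at 25 °C
        return (gcv_dry_mj_per_kg * (1.0 - moisture_fraction)
                - latent_heat_mj_per_kg * moisture_fraction)
    ```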

  6. Axisymmetric Eigenmodes of Spheroidal Pure Electron Plasmas

    NASA Astrophysics Data System (ADS)

    Kawai, Yosuke; Saitoh, Haruhiko; Yoshida, Zensho; Kiwamoto, Yasuhito

    2010-11-01

    The axisymmetric electrostatic eigenmodes of spheroidal pure electron plasmas have been studied experimentally. It is confirmed that the observed spheroidal plasma attains a theoretically expected equilibrium density distribution, with the exception of a low-density halo distribution surrounding the plasma. When the eigenmode frequency observed for the plasma is compared with the frequency predicted by the dispersion relation derived under ideal conditions wherein the temperature is zero and the boundary is located at an infinite distance from the plasma, it is observed that the absolute value of the observed frequency is systematically higher than the theoretical prediction. Experimental examinations and numerical calculations indicate that the upward shift of the eigenmode frequency cannot be accounted for solely by the finite temperature effect, but is significantly affected by image charges induced on the conducting boundary and the resulting distortion of the density profile from the theoretical expectation.

  7. Economic feasibility of the sugar beet-to-ethylene value chain.

    PubMed

    Althoff, Jeroen; Biesheuvel, Kees; De Kok, Ad; Pelt, Henk; Ruitenbeek, Matthijs; Spork, Ger; Tange, Jan; Wevers, Ronald

    2013-09-01

    As part of a long-term strategy toward renewable feedstocks, a feasibility study was carried out into options for the production of bioethylene by integrating the sugar beet-to-ethanol-to-ethylene value chain. Seven business cases were studied and tested for the actual economic feasibility of alternative sugar-to-ethanol-to-ethylene routes in comparison to fossil-fuel alternatives. An elaborate model was developed to assess the relevant operational and financial aspects of each business case. The calculations indicate that bioethylene from sugar beet is not commercially viable under current market conditions. In light of expected global energy and feedstock prices, it is also reasonable to expect that this will not change in the near future. For biorenewable sources to be considered as starting materials, they need to be low in cost (compared to sugar beets) and also require less capital- and energy-intensive methods for conversion to chemicals. In general, European sugar prices will be too high for many chemical applications. Future efforts in sugar-to-chemicals routes should, therefore, focus on integrated process routes and process intensification and/or on products that contain a significant part of the original carbohydrate backbone. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
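
    For concreteness, a classical double-loop Monte Carlo estimator of the expected information gain for a toy linear-Gaussian model is sketched below; it shows the nested evidence estimate whose cost and underflow problems motivate the Laplace-based importance sampling, and it is not the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_lik(y, theta, sigma=0.5):
        # Gaussian log-likelihood of observation y given parameter theta
        return -0.5 * ((y - theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    N, M = 2000, 2000                           # outer and inner sample sizes
    theta_outer = rng.normal(0.0, 1.0, N)       # draws from the prior
    y = theta_outer + rng.normal(0.0, 0.5, N)   # simulated experiment outcomes

    # Inner loop: evidence log p(y_i) ~ logsumexp_m log p(y_i | theta_m) - log M
    theta_inner = rng.normal(0.0, 1.0, M)
    ll = log_lik(y[:, None], theta_inner[None, :])
    log_evidence = np.logaddexp.reduce(ll, axis=1) - np.log(M)

    # EIG = E[ log p(y | theta) - log p(y) ]
    eig = np.mean(log_lik(y, theta_outer) - log_evidence)
    ```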

  9. Immunogenetic and population genetic analyses of Iberian cattle.

    PubMed

    Kidd, K K; Stone, W H; Crimella, C; Carenzi, C; Casati, M; Rognoni, G

    1980-01-01

    Blood samples were collected from more than 100 animals in each of 2 Spanish cattle breeds (Retinto and De Lidia), 2 Portuguese breeds (Alentejana and Mertolenga), and American Longhorn cattle. All samples for the 4 Iberian breeds were tested for 20 polymorphic systems; American Longhorn were tested for 19 of the 20. For each breed an average inbreeding coefficient was estimated by a comparison of the observed and expected heterozygosity at 7 or 8 codominant systems tested. All breeds had positive values but only 3 breeds had estimates of inbreeding that were statistically significantly different from 0: De Lidia with f = 0.17, Retinto with f = 0.08 and Mertolenga with f = 0.05. The De Lidia breed especially may be suffering from inbreeding depression since this high value is greater than expected if all of the animals were progeny of half-sib matings. Genetic distances were calculated from the gene frequency data on these 5 breeds plus 9 other European breeds. Analyses of these distances show a closely related group of the 4 Iberian breeds and American Longhorn, confirming the close relationships among the Iberian breeds and the Iberian, probably Portuguese, origin of American Longhorn cattle.
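
    The inbreeding estimates above compare observed with expected heterozygosity; one standard estimator consistent with that description, f = 1 − Ho/He averaged over the codominant systems, is sketched below with placeholder numbers.

    ```python
    def inbreeding_coefficient(observed_het, expected_het):
        # f = 1 - Ho/He; positive values indicate a deficit of heterozygotes
        return 1.0 - observed_het / expected_het

    # (Ho, He) per codominant system -- illustrative values only
    systems = [(0.30, 0.36), (0.42, 0.50), (0.25, 0.27)]
    f = sum(inbreeding_coefficient(ho, he) for ho, he in systems) / len(systems)
    ```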

  10. Radiation dose to workers due to the inhalation of dust during granite fabrication.

    PubMed

    Zwack, L M; McCarthy, W B; Stewart, J H; McCarthy, J F; Allen, J G

    2014-03-01

    There has been very little research conducted to determine internal radiation doses resulting from worker exposure to ionising radiation in granite fabrication shops. To address this issue, we estimated the effective radiation dose of granite workers in US fabrication shops who were exposed to the maximum respirable dust and silica concentrations allowed under current US regulations, and also to concentrations reported in the literature. Radiation doses were calculated using standard methods developed by the International Commission on Radiological Protection. The calculated internal doses were very low, and below both US occupational standards (50 mSv yr⁻¹) and limits applicable to the general public (1 mSv yr⁻¹). Workers exposed to respirable granite dust concentrations at the US Occupational Safety and Health Administration (OSHA) respirable dust permissible exposure limit (PEL) of 5 mg m⁻³ over a full year had an estimated radiation dose of 0.062 mSv yr⁻¹. Workers exposed to respirable granite dust concentrations at the OSHA silica PEL and at the American Conference of Governmental Industrial Hygienists Threshold Limit Value for a full year had expected radiation doses of 0.007 mSv yr⁻¹ and 0.002 mSv yr⁻¹, respectively. Using data from studies of respirable granite dust and silica concentrations measured in granite fabrication shops, we calculated median expected radiation doses that ranged from <0.001 to 0.101 mSv yr⁻¹.
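
    The structure of an ICRP-style inhalation dose estimate is simple even though the coefficients require care; the sketch below multiplies dust concentration, breathing rate, exposure time, dust activity and a dose coefficient, with every numeric value a placeholder rather than a figure from the study or from ICRP tables.

    ```python
    # All values below are assumptions for illustration only
    dust_conc_g_per_m3 = 5.0e-3     # OSHA respirable dust PEL, 5 mg/m^3
    breathing_rate_m3_per_h = 1.2   # light work (assumed)
    hours_per_year = 2000           # full-time exposure (assumed)
    activity_bq_per_g = 0.05        # U/Th-chain activity of granite dust (assumed)
    dose_coeff_sv_per_bq = 5.0e-6   # composite inhalation dose coefficient (assumed)

    intake_bq = (dust_conc_g_per_m3 * breathing_rate_m3_per_h
                 * hours_per_year * activity_bq_per_g)
    dose_msv_per_year = intake_bq * dose_coeff_sv_per_bq * 1000.0
    ```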

  11. Generation of Squeezed Light Using Photorefractive Degenerate Two-Wave Mixing

    NASA Technical Reports Server (NTRS)

    Lu, Yajun; Wu, Meijuan; Wu, Ling-An; Tang, Zheng; Li, Shiqun

    1996-01-01

    We present a quantum nonlinear model of two-wave mixing in a lossless photorefractive medium. A set of equations describing the quantum nonlinear coupling for the field operators is obtained. It is found that, to the second power term, the commutation relationship is maintained. The expectation values for the photon number concur with those of the classical electromagnetic theory when the initial intensities of the two beams are strong. We also calculate the quantum fluctuations of the two beams initially in the coherent state. With an appropriate choice of phase, quadrature squeezing or number state squeezing can be produced.

  12. The Quantum Phase-Dynamical Properties of the Squeezed Vacuum State Intensity-Couple Interacting with the Atom

    NASA Technical Reports Server (NTRS)

    Fan, An-Fu; Sun, Nian-Chun; Zhou, Xin

    1996-01-01

    The phase-dynamical properties of the squeezed vacuum state intensity-couple interacting with a two-level atom in an ideal cavity are studied using the Hermitian phase operator formalism. Exact general expressions for the phase distribution and the associated expectation value and variance of the phase operator have been derived. We have also obtained analytic results for the phase variance in two special cases: a weakly and a strongly squeezed vacuum. The results, calculated numerically, show that squeezing has a significant effect on the phase properties of the squeezed vacuum.

  13. Estimates of production and structure of nuclei with Z = 119

    NASA Astrophysics Data System (ADS)

    Adamian, G. G.; Antonenko, N. V.; Lenske, H.

    2018-02-01

    A comparative analysis of the hot fusion reactions 50Ti + 247-249Bk and 51V + 246-248Cm for the synthesis of element 119 is made with the dinuclear system model and the nuclear properties predicted by the microscopic-macroscopic approach, in which a closed proton shell at Z ≥ 120 is expected. The quasiparticle structures of nuclei in the α-decay chain of 295119 and a possible spread of alpha energies are studied. The calculated values of Qα are compared with available experimental data. The termination of the α-decay chain of 295119 is revealed.

  14. Bimolecular reactions of carbenes: Proton transfer mechanism

    NASA Astrophysics Data System (ADS)

    Abu-Saleh, Abd Al-Aziz A.; Almatarneh, Mansour H.; Poirier, Raymond A.

    2018-04-01

    Here we report the bimolecular reaction of trifluoromethylhydroxycarbene conformers and the water-mediated mechanism of the 1,2-proton shift for the unimolecular trans-conformer, using quantum chemical calculations. The CCSD(T)/cc-pVTZ//MP2/cc-pVDZ potential-energy profile of the bimolecular reaction of cis- and trans-trifluoromethylhydroxycarbene shows the lowest gas-phase barrier height of 13 kJ mol⁻¹, compared to the recently reported value of 128 kJ mol⁻¹ for the unimolecular reaction. We expect that bimolecular reactions of carbene stereoisomers will open a valuable field for new and useful synthetic strategies.

  15. Vacuum fluctuations of the supersymmetric field in curved background

    NASA Astrophysics Data System (ADS)

    Bilić, Neven; Domazet, Silvije; Guberina, Branko

    2012-01-01

    We study a supersymmetric model in curved background spacetime. We calculate the effective action and the vacuum expectation value of the energy momentum tensor using a covariant regularization procedure. A soft supersymmetry breaking induces a nonzero contribution to the vacuum energy density and pressure. Assuming the presence of a cosmic fluid in addition to the vacuum fluctuations of the supersymmetric field an effective equation of state is derived in a self-consistent approach at one loop order. The net effect of the vacuum fluctuations of the supersymmetric fields in the leading adiabatic order is a renormalization of the Newton and cosmological constants.

  16. Comparative effects of pH and Vision herbicide on two life stages of four anuran amphibian species.

    PubMed

    Edginton, Andrea N; Sheridan, Patrick M; Stephenson, Gerald R; Thompson, Dean G; Boermans, Herman J

    2004-04-01

    Vision, a glyphosate-based herbicide containing a 15% (weight:weight) polyethoxylated tallow amine surfactant blend, and the concurrent factor of pH were tested to determine their interactive effects on early life-stage anurans. Ninety-six-hour laboratory static renewal studies, using the embryonic and larval life stages (Gosner 25) of Rana clamitans, R. pipiens, Bufo americanus, and Xenopus laevis, were performed under a central composite rotatable design. Mortality and the prevalence of malformations were modeled using generalized linear models with a profile deviance approach for obtaining confidence intervals. There was a significant (p < 0.05) interaction of pH with Vision concentration in all eight models, such that the toxicity of Vision was amplified by elevated pH. The surfactant is the major toxic component of Vision and is hypothesized, in this study, to be the source of the pH interaction. Larvae of B. americanus and R. clamitans were 1.5 to 3.8 times more sensitive than their corresponding embryos, whereas X. laevis and R. pipiens larvae were 6.8 to 8.9 times more sensitive. At pH values above 7.5, the Vision concentrations expected to kill 50% of the test larvae in 96 h (the 96-h lethal concentration [LC50]) were predicted to be below the expected environmental concentration (EEC) as calculated by Canadian regulatory authorities. The EEC value represents a worst-case scenario for aerial Vision application and is calculated assuming an application of the maximum label rate (2.1 kg acid equivalents [a.e.]/ha) into a pond 15 cm in depth. The EEC of 1.4 mg a.e./L (4.5 mg/L Vision) was not exceeded by 96-h LC50 values for the embryo test. The larvae of the four species were comparable in sensitivity. Field studies should be completed using the more sensitive larval life stage to test for Vision toxicity at actual environmental concentrations.

  17. Efficient Research Design: Using Value-of-Information Analysis to Estimate the Optimal Mix of Top-down and Bottom-up Costing Approaches in an Economic Evaluation alongside a Clinical Trial.

    PubMed

    Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee

    2016-04-01

    In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial, identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £35,000 increase in the expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only, or to collect limited data on most and detailed data on a subset. © The Author(s) 2016.
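
    The preposterior idea above can be caricatured in a few lines for jointly normal processes: observing the cheap process B shrinks the uncertainty about the dear process A in proportion to their squared prior correlation. The variances, correlation and noise below are assumptions, not the paper's data.

    ```python
    def preposterior_var_a(var_a, var_b, rho, n_b, noise_b):
        # Posterior variance of B's mean after n_b observations with noise noise_b
        posterior_var_b = 1.0 / (1.0 / var_b + n_b / noise_b)
        # Part of A's prior variance resolved through the correlation with B
        explained = rho ** 2 * var_a * (1.0 - posterior_var_b / var_b)
        return var_a - explained

    for n_b in (0, 50, 200):
        print(n_b, preposterior_var_a(var_a=1.0, var_b=1.0, rho=0.8, n_b=n_b, noise_b=4.0))
    ```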

  18. External validation of a prehospital risk score for critical illness.

    PubMed

    Kievlan, Daniel R; Martin-Gill, Christian; Kahn, Jeremy M; Callaway, Clifton W; Yealy, Donald M; Angus, Derek C; Seymour, Christopher W

    2016-08-11

    Identification of critically ill patients during prehospital care could facilitate early treatment and aid in the regionalization of critical care. Tools to consistently identify those in the field with, or at higher risk of developing, critical illness do not exist. We sought to validate a prehospital critical illness risk score that uses objective clinical variables in a contemporary cohort of geographically and temporally distinct prehospital encounters. We linked prehospital encounters at 21 emergency medical services (EMS) agencies to inpatient electronic health records at nine hospitals in southwestern Pennsylvania from 2010 to 2012. The primary outcome was critical illness during hospitalization, defined as an intensive care unit stay with delivery of organ support (mechanical ventilation or vasopressor use). We calculated the prehospital risk score using demographics and first vital signs from eligible EMS encounters, and we tested the association between score variables and critical illness using multivariable logistic regression. Discrimination was assessed using the AUROC curve, and calibration was determined by plotting observed versus expected events across score values. Operating characteristics were calculated at score thresholds. Among 42,550 nontrauma, non-cardiac arrest adult EMS patients, 1926 (4.5%) developed critical illness during hospitalization. We observed moderate discrimination of the prehospital critical illness risk score (AUROC 0.73, 95% CI 0.72-0.74) and adequate calibration based on observed versus expected plots. At a score threshold of 2, sensitivity was 0.63 (95% CI 0.61-0.75), specificity was 0.73 (95% CI 0.72-0.73), negative predictive value was 0.98 (95% CI 0.98-0.98), and positive predictive value was 0.10 (95% CI 0.09-0.10). The risk score performance was greater with alternative definitions of critical illness, including in-hospital mortality (AUROC 0.77, 95% CI 0.7-0.78). In an external validation cohort, a prehospital risk score using objective clinical data had moderate discrimination for critical illness during hospitalization.
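
    The operating characteristics quoted above follow from a 2x2 confusion matrix at the chosen threshold; the counts below are back-calculated from the cohort size and rates reported in the abstract, rounded for illustration.

    ```python
    def operating_characteristics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)   # positive predictive value
        npv = tn / (tn + fn)   # negative predictive value
        return sensitivity, specificity, ppv, npv

    # Score >= 2 flagged as high risk: ~63% of 1926 critical and ~73% of
    # 40,624 non-critical patients classified correctly
    print(operating_characteristics(tp=1213, fp=10968, fn=713, tn=29656))
    ```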

  19. Frequency-Dependent Viscosity of Xenon Near the Critical Point

    NASA Technical Reports Server (NTRS)

    Berg, Robert F.; Moldover, Michael R.; Zimmerli, Gregory A.

    1999-01-01

    We used a novel, overdamped oscillator aboard the Space Shuttle to measure the viscosity η of xenon near its critical density ρ_c and temperature T_c. In microgravity, useful data were obtained within 0.1 mK of T_c, corresponding to a reduced temperature t = (T − T_c)/T_c = 3 × 10⁻⁷. The data extend two decades closer to T_c than the best ground measurements, and they directly reveal the expected power-law behavior η ∝ t^(−νz_η). Here ν is the correlation length exponent, and our result for the small viscosity exponent is z_η = 0.0690 ± 0.0006. (All uncertainties are one standard uncertainty.) Our value for z_η depends only weakly on the form of the viscosity crossover function, and it agrees with the value 0.067 ± 0.002 obtained from a recent two-loop perturbation expansion. The measurements spanned the frequency range 2 Hz ≤ f ≤ 12 Hz and revealed viscoelasticity when t ≤ 10⁻¹, further from T_c than predicted. The viscoelasticity scales as Afτ, where τ is the fluctuation-decay time. The fitted value of the viscoelastic time-scale parameter A is 2.0 ± 0.3 times the result of a one-loop perturbation calculation. Near T_c, the xenon's calculated time constant for thermal diffusion exceeded days. Nevertheless, the viscosity results were independent of the xenon's temperature history, indicating that the density was kept near ρ_c by judicious choices of the temperature-versus-time program. Deliberately bad choices led to large density inhomogeneities. At t > 10⁻⁵, the xenon approached equilibrium much faster than expected, suggesting that convection driven by microgravity and by electric fields slowly stirred the sample.

  20. NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda, starting from the initial value condition of the Poisson distribution where n=0 and taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
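
    A sketch of the iteration NEWTPOIS describes, written here in Python rather than the program's original C: given n and the cumulative probability p of at most n events, solve sum_{k<=n} exp(-lambda) lambda^k / k! = p for lambda by Newton's method, starting from the exact n = 0 solution. The step-damping safeguard is an addition of ours, not a documented feature of NEWTPOIS.

    ```python
    import math

    def poisson_cdf(n, lam):
        # Sum of Poisson probabilities for k = 0..n, built up term by term
        term = total = math.exp(-lam)
        for k in range(1, n + 1):
            term *= lam / k
            total += term
        return total

    def newtpois(n, p, eps=1e-10):
        lam = -math.log(p)                 # exact solution for n = 0
        for _ in range(200):
            f = poisson_cdf(n, lam) - p
            # d/d(lam) CDF(n; lam) = -pmf(n; lam), so Newton's step is explicit
            fprime = -math.exp(-lam) * lam ** n / math.factorial(n)
            step = f / fprime
            while lam - step <= 0.0:       # damp steps that would make lam negative
                step /= 2.0
            lam -= step
            if abs(step) < eps:
                return lam
        raise RuntimeError("Newton iteration did not converge")

    print(newtpois(n=3, p=0.5))            # lambda with P(X <= 3) = 0.5
    ```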

  1. The artificial pancreas: evaluating risk of hypoglycaemia following errors that can be expected with prolonged at-home use.

    PubMed

    Wolpert, H; Kavanagh, M; Atakov-Castillo, A; Steil, G M

    2016-02-01

    Artificial pancreas systems show benefit in closely monitored at-home studies, but these studies may not have sufficient power to assess safety during infrequent, but expected, system or user errors. The aim of this study was to assess the safety of an artificial pancreas system emulating the β-cell when the glucose value used for control is improperly calibrated and participants forget to administer pre-meal insulin boluses. Artificial pancreas control was performed in a clinic research centre on three separate occasions, each lasting from 10 p.m. to 2 p.m. Sensor glucose values normally used for artificial pancreas control were replaced with scaled blood glucose values calculated to be 20% lower than, equal to, or 33% higher than the true blood glucose. Safe control was defined as blood glucose between 3.9 and 8.3 mmol/l. Artificial pancreas control resulted in fasting scaled blood glucose values not different from the target (6.67 mmol/l) at any scaling factor. Meal control with scaled blood glucose 33% higher than blood glucose required supplemental carbohydrate to prevent hypoglycaemia in four of six participants during breakfast, and in one participant during the night. In all instances, the scaled blood glucose indicated that blood glucose was in the safe range. Outpatient trials evaluating artificial pancreas performance based on sensor glucose may not detect hypoglycaemia when sensor glucose reads higher than blood glucose. Because these errors are expected to occur, in-hospital artificial pancreas studies using supplemental carbohydrate in anticipation of hypoglycaemia, which allow safety to be assessed in a controlled environment, should be considered as an alternative. Inpatient studies provide a definitive alternative to model-based computer simulations and can be conducted in parallel with closely monitored outpatient artificial pancreas studies used to assess benefit. © 2015 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.

  2. Probabilistic and deterministic evaluation of uncertainty in a local scale multi-risk analysis

    NASA Astrophysics Data System (ADS)

    Lari, S.; Frattini, P.; Crosta, G. B.

    2009-04-01

    We performed a probabilistic multi-risk analysis (QPRA) at the local scale for a 420 km2 area surrounding the town of Brescia (Northern Italy). We calculated the expected annual loss in terms of economic damage and loss of life, for a set of risk scenarios of flood, earthquake and industrial accident with different occurrence probabilities and different intensities. The territorial unit used for the study was the census parcel, of variable area, for which a large amount of data was available. Because of the lack of information for evaluating the hazards, the value of the exposed elements (e.g., residential and industrial area, population, lifelines, sensitive elements such as schools and hospitals) and the process-specific vulnerability, and because of gaps in knowledge of the processes themselves (floods, industrial accidents, earthquakes), we assigned an uncertainty to the input variables of the analysis. For some variables a homogeneous uncertainty was assigned over the whole study area, for instance for the number of buildings of various typologies and for the event occurrence probability. In other cases, as for phenomenon intensity (e.g., depth of water during a flood) and probability of impact, the uncertainty was defined in relation to the census parcel area. In fact, by assuming some variables to be homogeneously distributed or averaged over a census parcel, we introduce a larger error for larger parcels. We propagated the uncertainty through the analysis using three different models, describing the reliability of the output (risk) as a function of the uncertainty of the inputs (scenarios and vulnerability functions). We developed a probabilistic approach based on Monte Carlo simulation, and two deterministic models, namely First Order Second Moment (FOSM) and Point Estimate (PE). In general, similar values of expected losses are obtained with the three models. The uncertainty of the final risk value is in all three cases around 30% of the expected value. Each of the models, nevertheless, requires different assumptions and computational effort, and provides results with a different level of detail.
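    As a concrete illustration of the Monte Carlo variant, the sketch below propagates assumed input distributions for a single census parcel to an expected annual loss and its relative uncertainty; every distribution and value is ours and purely illustrative, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(seed=0)
    N = 10_000                            # realisations, matching the study's run count

    # Hypothetical inputs for one census parcel (all distributions assumed):
    p_event = rng.normal(0.01, 0.002, N).clip(1e-4, 1.0)     # annual occurrence probability
    exposed_value = rng.normal(5e6, 1e6, N).clip(0.0, None)  # value of exposed elements
    vulnerability = rng.beta(2.0, 5.0, N)                    # fraction of value lost

    annual_loss = p_event * exposed_value * vulnerability
    print(f"expected annual loss: {annual_loss.mean():,.0f}")
    print(f"relative uncertainty: {annual_loss.std() / annual_loss.mean():.0%}")
    ```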

  3. Achievement, Motivation, and Educational Choices: A Longitudinal Study of Expectancy and Value Using a Multiplicative Perspective

    ERIC Educational Resources Information Center

    Guo, Jiesi; Parker, Philip D.; Marsh, Herbert W.; Morin, Alexandre J. S.

    2015-01-01

    Drawing on the expectancy-value model, the present study explored individual and gender differences in university entry and selection of educational pathway (e.g., science, technology, engineering, and mathematics [STEM] course selection). In particular, we examined the multiplicative effects of expectancy and task values on educational outcomes…

  4. Adolescent Expectancy-Value Motivation and Learning: A Disconnected Case in Physical Education

    ERIC Educational Resources Information Center

    Zhu, Xihe; Chen, Ang

    2010-01-01

    This study reports adolescent expectancy-value motivation, and its relation to fitness knowledge and psychomotor skill learning in physical education. Students (N = 854) from 12 middle schools provided data on expectancy-value motivation, fitness knowledge and psychomotor skill learning. Results from dependent t-test and MANOVA indicated that 8th…

  5. Expectancy-Value and Cognitive Process Outcomes in Mathematics Learning: A Structural Equation Analysis

    ERIC Educational Resources Information Center

    Phan, Huy P.

    2014-01-01

    Existing research has yielded evidence to indicate that the expectancy-value theoretical model predicts students' learning in various achievement contexts. Achievement values and self-efficacy expectations, for example, have been found to exert positive effects on cognitive process and academic achievement outcomes. We tested a conceptual model…

  6. Children's Perceived Cost for Exercise: Application of an Expectancy-Value Paradigm

    ERIC Educational Resources Information Center

    Chiang, Evelyn S.; Byrd, Sandra P.; Molin, Ashley J.

    2011-01-01

    Expectancy-value models of motivation have been applied to understanding children's choices in areas such as academics and sports. Here, an expectancy-value paradigm is applied to exercising (defined as engaging in physical activity). The notion of perceived cost is highlighted in particular. Two hundred twenty children in third, fourth, and fifth…

  7. Ninth Graders' Energy Balance Knowledge and Physical Activity Behavior: An Expectancy-Value Perspective

    ERIC Educational Resources Information Center

    Chen, Senlin; Chen, Ang

    2012-01-01

    Expectancy beliefs and task values are two essential motivators in physical education. This study was designed to identify the relation between the expectancy-value constructs (Eccles & Wigfield, 1995) and high school students' physical activity behavior as associated with their energy balance knowledge. High school students (N = 195) in two…

  8. Measurement Invariance of Expectancy-Value Questionnaire in Physical Education

    ERIC Educational Resources Information Center

    Zhu, Xihe; Sun, Haichun; Chen, Ang; Ennis, Catherine

    2012-01-01

    Expectancy-Value Questionnaire (EVQ) measures student expectancy beliefs and task values of the domain content (Eccles & Wigfield, 1995). In this study the authors examine measurement invariance of EVQ in the domain of physical education between elementary and middle-school students. Participants included 811 students (3rd-5th grades) from 13…

  9. Adolescents' Expectancies for Success and Achievement Task Values during the Middle and High School Years.

    ERIC Educational Resources Information Center

    Wigfield, Allan; Tonks, Stephen

    This chapter discusses the development of achievement motivation during adolescence from the perspective of expectancy-value theory, and explains how adolescents' expectancies for success and achievement values change during adolescence, particularly during educational transitions such as that from elementary to middle school and from middle to…

  10. Cost Perception and the Expectancy-Value Model of Achievement Motivation.

    ERIC Educational Resources Information Center

    Anderson, Patricia N.

    The expectancy-value model of achievement motivation, first described by J. Atkinson (1957) and refined by J. Eccles and her colleagues (1983, 1992, 1994) predicts achievement motivation based on expectancy for success and perceived task value. Cost has been explored very little. To explore the possibility that cost is different from expectancy…

  11. Psychometric evaluation of dietary self-efficacy and outcome expectation scales in female college freshmen.

    PubMed

    Kedem, Leia E; Evans, Ellen M; Chapman-Novakofski, Karen

    2014-11-01

    Lifestyle interventions commonly measure psychosocial beliefs as precursors to positive behavior change, but often overlook questionnaire validation. This can affect measurement accuracy if the survey has been developed for a different population, as differing behavioral influences may affect instrument validity. The present study aimed to explore the psychometric properties of self-efficacy and outcome expectation scales-originally developed for younger children-in a population of female college freshmen (N = 268). Exploratory principal component analysis was used to investigate underlying data patterns and assess the validity of previously published subscales. Composite scores for reliable subscales (Cronbach's α ≥ .70) were calculated to help characterize self-efficacy and outcome expectation beliefs in this population. The outcome expectation factor structure clearly comprised positive (α = .81-.90) and negative outcomes (α = .63-.67). The self-efficacy factor structure included themes of motivation and effort (α = .75-.94), but items pertaining to hunger and availability cross-loaded often. Based on cross-loading patterns and low Cronbach's alpha values, respectively, the self-efficacy items regarding barriers to healthy eating and the negative outcome expectation items should be refined to improve reliability. Composite scores suggested that eating healthfully was associated with positive outcomes, but self-efficacy to do so was lower. Thus, dietary interventions for college students may be more successful by including skill-building activities to enhance self-efficacy and increase the likelihood of behavior change. © The Author(s) 2014.

  12. Could CT screening for lung cancer ever be cost effective in the United Kingdom?

    PubMed Central

    Whynes, David K

    2008-01-01

    Background: The absence of trial evidence makes it impossible to determine whether or not mass screening for lung cancer would be cost effective and, indeed, whether a clinical trial to investigate the problem would be justified. Attempts have been made to resolve this issue by modelling, although the complex models developed to date have required more real-world data than are currently available. Being founded on unsubstantiated assumptions, they have produced estimates with wide confidence intervals and of uncertain relevance to the United Kingdom. Method: I develop a simple, deterministic model of a screening regimen potentially applicable to the UK. The model includes only a limited number of parameters, for the majority of which values have already been established in non-trial settings. The component costs of screening are derived from government guidance and from published audits, whilst the values for test parameters are derived from clinical studies. The expected health gains as a result of screening are calculated by combining published survival data for screened and unscreened cohorts with data from Life Tables. When a degree of uncertainty over a parameter value exists, I use a conservative estimate, i.e. one likely to make screening appear less, rather than more, cost effective. Results: The incremental cost effectiveness ratio of a single screen amongst a high-risk male population is calculated to be around £14,000 per quality-adjusted life year gained. The average cost of this screening regimen per person screened is around £200. It is possible that, when obtained experimentally in any future trial, parameter values will be found to differ from those previously obtained in non-trial settings. On the basis both of differing assumptions about evaluation conventions and of reasoned speculations as to how test parameters and costs might behave under screening, the model generates cost effectiveness ratios as high as around £20,000 and as low as around £7,000. Conclusion: It is evident that eventually being able to identify a cost effective regimen of CT screening for lung cancer in the UK is by no means an unreasonable expectation. PMID:18302756
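    The headline figure is ordinary incremental cost-effectiveness arithmetic. The sketch below reproduces the shape of the calculation with assumed round numbers, not the paper's parameter values.

    ```python
    # Illustrative ICER arithmetic (all inputs assumed, not taken from the model):
    cost_screened, cost_unscreened = 200.0, 0.0    # mean cost per person (GBP)
    qaly_screened, qaly_unscreened = 10.014, 10.0  # mean QALYs per person

    icer = (cost_screened - cost_unscreened) / (qaly_screened - qaly_unscreened)
    print(f"{icer:,.0f} GBP per QALY gained")      # roughly 14,000 with these inputs
    ```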

  13. Multiple isotope analyses of the pike tapeworm Triaenophorus nodulosus reveal peculiarities in consumer-diet discrimination patterns.

    PubMed

    Behrmann-Godel, J; Yohannes, E

    2015-03-01

    Previous studies of dietary isotope discrimination have led to the general expectation that a consumer will exhibit enriched stable isotope levels relative to its diet. Parasite-host systems are specific consumer-diet pairs in which the consumer (parasite) feeds exclusively on one dietary source: host tissue. However, the small number of studies previously carried out on isotopic discrimination in parasite-host (ΔXP-HT) systems have yielded controversial results, showing some parasites to be isotopically depleted relative to their food source, while others are enriched or in equilibrium with their hosts. Although the mechanism for these deviations from expectations remains to be understood, possible influences of specific feeding niche or selection for only a few nutritional components by the parasite are discussed. ΔXP-HT values for multiple isotopes (δ13C, δ15N, δ34S) were measured in the pike tapeworm Triaenophorus nodulosus and two of its life-cycle fish hosts, perch Perca fluviatilis and pike Esox lucius, within which T. nodulosus occupies different feeding locations. Variability in the value of ΔXP-HT calculated for the parasite and its different hosts indicates an influence of feeding location on isotopic discrimination. In perch liver, ΔXP-HT was relatively more negative for all three stable isotopes. In pike gut, ΔXP-HT was more positive for δ13C, as expected in conventional consumer-diet systems. For parasites feeding on pike gut, however, the δ15N and δ34S isotope values were comparable with those of the host. We discuss potential causes of these deviations from expectations, including the effect of specific parasite feeding niches, and conclude that ΔXP-HT should be critically evaluated for trophic interactions between parasite and host before general patterns are assumed.

  14. Expectancy-Value and Children's Science Achievement: Parents Matter

    ERIC Educational Resources Information Center

    Thomas, Julie A.; Strunk, Kamden K.

    2017-01-01

    This longitudinal study explored the ways parents' and teachers' expectancy for success influences 3rd-5th grade children's expectancy for success and achievement in science. Guided by an open-systems perspective and functional (Ballantine & Roberts, 2007) and expectancy-value (Eccles, 2005, 2007) theories, we focused on school-related socialization…

  15. Time and expected value of sample information wait for no patient.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2008-01-01

    The expected value of sample information (EVSI) from prospective trials has previously been modeled as the product of EVSI per patient and the number of patients across the relevant time horizon less those "used up" in trials. However, this implicitly assumes that the eligible patient population to which information from a trial can be applied across a time horizon is independent of the time required for trial accrual, follow-up and analysis. This article demonstrates that in calculating the EVSI of a trial, the number of patients who benefit from trial information should be reduced by those treated outside as well as within the trial over the time until trial evidence is updated, including time for accrual, follow-up and analysis. Accounting for time is shown to reduce the eligible patient population: 1) independent of the size of the trial, in allowing for time of follow-up and analysis, and 2) dependent on the size of the trial, for time of accrual, where the patient accrual rate is less than incidence. Consequently, the EVSI and expected net gain (ENG) at any given trial size are shown to be lower when accounting for time, with the reduction in ENG reinforced, for trials undertaken while decisions are delayed, by the additional opportunity costs of time. Appropriately accounting for time reduces the EVSI of a trial design and increases the opportunity costs of trials undertaken with delay, leading to a lower likelihood of trialing being optimal and to smaller trial designs where it is optimal.
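    A compressed version of that accounting can be written out directly; the symbols and all numbers below are ours, chosen only to illustrate the argument.

    ```python
    # Patients who can benefit = incidence x horizon, minus everyone treated
    # (inside or outside the trial) before the evidence is updated.
    incidence = 10_000                  # eligible patients per year (assumed)
    horizon_years = 10                  # horizon for applying trial information
    accrual_rate = 1_000                # trial accrual per year, below incidence
    n_trial = 2_000                     # patients accrued into the trial
    followup_years, analysis_years = 1.0, 0.5
    evsi_per_patient = 50.0             # assumed, in monetary units

    time_to_update = n_trial / accrual_rate + followup_years + analysis_years
    treated_before_update = incidence * time_to_update
    beneficiaries = incidence * horizon_years - treated_before_update
    print(f"EVSI of the trial: {evsi_per_patient * beneficiaries:,.0f}")
    ```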

  16. Spectrophotometric Measurements of the Carbonate Ion Concentration: Aragonite Saturation States in the Mediterranean Sea and Atlantic Ocean.

    PubMed

    Fajar, Noelia M; García-Ibáñez, Maribel I; SanLeón-Bartolomé, Henar; Álvarez, Marta; Pérez, Fiz F

    2015-10-06

    Measurements of ocean pH, alkalinity, and carbonate ion concentrations ([CO3(2-)]) during three cruises in the Atlantic Ocean and one in the Mediterranean Sea were used to assess the reliability of the recent spectrophotometric [CO3(2-)] methodology and to determine aragonite saturation states. Measurements of [CO3(2-)] along the Atlantic Ocean showed high consistency with the [CO3(2-)] values calculated from pH and alkalinity, with negligible biases (0.4 ± 3.4 μmol·kg(-1)). In the warm, salty, high alkalinity and high pH Mediterranean waters, the spectrophotometric [CO3(2-)] methodology underestimates the measured [CO3(2-)] (4.0 ± 5.0 μmol·kg(-1)), with anomalies positively correlated to salinity. These waters also exhibited high in situ [CO3(2-)] compared to the expected aragonite saturation. The very high buffering capacity allows the Mediterranean Sea waters to remain over the saturation level of aragonite for long periods of time. Conversely, the relatively thick layer of undersaturated waters between 500 and 1000 m depths in the Tropical Atlantic is expected to progress to even more negative undersaturation values. Moreover, the northern North Atlantic presents [CO3(2-)] slightly above the level of aragonite saturation, and the expected anthropogenic acidification could result in reductions of the aragonite saturation levels during future decades, acting as a stressor for the large population of cold-water-coral communities.

  17. Expectancy-value theory in persistence of learning effects in schizophrenia: role of task value and perceived competency.

    PubMed

    Choi, Jimmy; Fiszdon, Joanna M; Medalia, Alice

    2010-09-01

    Expectancy-value theory, a widely accepted model of motivation, posits that expectations of success on a learning task and the individual value placed on the task are central determinants of motivation to learn. This is supported by research in healthy controls suggesting that beliefs of self-and-content mastery can be so influential they can predict the degree of improvement on challenging cognitive tasks even more so than general cognitive ability. We examined components of expectancy-value theory (perceived competency and task value), along with baseline arithmetic performance and neuropsychological performance, as possible predictors of learning outcome in a sample of 70 outpatients with schizophrenia randomized to 1 of 2 different arithmetic learning conditions and followed up after 3 months. Results indicated that as with nonpsychiatric samples, perceived self-competency for the learning task was significantly related to perceptions of task value attributed to the learning task. Baseline expectations of success predicted persistence of learning on the task at 3-month follow-up, even after accounting for variance attributable to different arithmetic instruction, baseline arithmetic ability, attention, and self-reports of task interest and task value. We also found that expectation of success is a malleable construct, with posttraining improvements persisting at follow-up. These findings support the notion that expectancy-value theory is operative in schizophrenia. Thus, similar to the nonpsychiatric population, treatment benefits may be enhanced and better maintained if remediation programs also focus on perceptions of self-competency for the training tasks. Treatment issues related to instilling self-efficacy in cognitive recovery programs are discussed.

  18. Comparison of two equation-of-state models for partially ionized aluminum: Zel'dovich and Raizer's model versus the activity expansion code

    NASA Astrophysics Data System (ADS)

    Harrach, Robert J.; Rogers, Forest J.

    1981-09-01

    Two equation-of-state (EOS) models for multiply ionized matter are evaluated for the case of an aluminum plasma in the temperature range from about one eV to several hundred eV, spanning conditions of weak to strong ionization. Specifically, the simple analytical model of Zel'dovich and Raizer and the more comprehensive model comprised by Rogers' plasma physics activity expansion code (ACTEX) are used to calculate the specific internal energy ɛ and average degree of ionization Z̄*, as functions of temperature T and density ρ. In the absence of experimental data, these results are compared against each other, covering almost five orders-of-magnitude variation in ɛ and the full range of Z̄*. We find generally good agreement between the two sets of results, especially for low densities and for temperatures near the upper end of the range. Calculated values of ɛ(T) agree to within ±30% over nearly the full range in T for densities below about 1 g/cm3. Similarly, the two models predict values of Z̄*(T) which track each other fairly well; above 20 eV the discrepancy is less than ±20% for ρ≲1 g/cm3. Where the calculations disagree, we expect the ACTEX code to be more accurate than Zel'dovich and Raizer's model, by virtue of its more detailed physics content.

  19. Efficient and precise calculation of the b-matrix elements in diffusion-weighted imaging pulse sequences.

    PubMed

    Zubkov, Mikhail; Stait-Gardner, Timothy; Price, William S

    2014-06-01

    Precise NMR diffusion measurements require detailed knowledge of the cumulative dephasing effect caused by the numerous gradient pulses present in most NMR pulse sequences. This effect, which ultimately manifests itself as the diffusion-related NMR signal attenuation, is usually described by the b-value or, in the case of multidirectional diffusion weighting, the b-matrix, the latter being common in diffusion-weighted NMR imaging. Neglecting some of the gradient pulses introduces an error in the calculated diffusion coefficient reaching in some cases 100% of the expected value. Therefore, ensuring the b-matrix calculation includes all the known gradient pulses leads to significant error reduction. Calculation of the b-matrix for simple gradient waveforms is rather straightforward, yet it grows cumbersome when complexly shaped and/or numerous gradient pulses are introduced. Making three broad assumptions about the gradient pulse arrangement in a sequence results in an efficient framework for the calculation of b-matrices, as well as providing some insight into optimal gradient pulse placement. The framework allows accounting for the diffusion-sensitising effect of complexly shaped gradient waveforms with modest computational time and power. This is achieved by using the b-matrix elements of the simple unmodified pulse sequence and minimising the integration of the complexly shaped gradient waveform in the modified sequence. Such re-evaluation of the b-matrix elements retains all the analytical relevance of the straightforward approach, yet at least halves the amount of symbolic integration required. The application of the framework is demonstrated with the evaluation of the expression describing the diffusion-sensitising effect caused by different bipolar gradient pulse modules. Copyright © 2014 Elsevier Inc. All rights reserved.
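    For orientation, the simplest special case is worth writing out: for a single pair of rectangular gradient pulses the b-value reduces to the classical Stejskal-Tanner expression (standard NMR background, not a result of this paper).

    ```latex
    % b-value for one rectangular gradient pair: amplitude G, duration \delta,
    % leading-edge separation \Delta, gyromagnetic ratio \gamma; signal S = S_0 e^{-bD}
    b = \gamma^{2} G^{2} \delta^{2} \left( \Delta - \frac{\delta}{3} \right)
    ```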

  20. Food Web Bioaccumulation Model for Resident Killer Whales from the Northeastern Pacific Ocean as a Tool for the Derivation of PBDE-Sediment Quality Guidelines.

    PubMed

    Alava, Juan José; Ross, Peter S; Gobas, Frank A P C

    2016-01-01

    Resident killer whale populations in the NE Pacific Ocean are at risk due to the accumulation of pollutants, including polybrominated diphenyl ethers (PBDEs). To assess the impact of PBDEs in water and sediments in killer whale critical habitat, we developed a food web bioaccumulation model. The model was designed to estimate PBDE concentrations in killer whales based on PBDE concentrations in sediments and the water column throughout a lifetime of exposure. Calculated and observed PBDE concentrations exceeded the only toxicity reference value available for PBDEs in marine mammals (1500 μg/kg lipid) in southern resident killer whales but not in northern resident killer whales. Temporal trends (1993-2006) for PBDEs observed in southern resident killer whales showed a doubling time of ≈5 years. If current sediment quality guidelines available in Canada for polychlorinated biphenyls are applied to PBDEs, it can be expected that PBDE concentrations in killer whales will exceed available toxicity reference values by a large margin. Model calculations suggest that a PBDE concentration in sediments of approximately 1.0 μg/kg dw produces PBDE concentrations in resident killer whales that are below the current toxicity reference value for 95 % of the population, with this value serving as a precautionary benchmark for a management-based approach to reducing PBDE health risks to killer whales. The food web bioaccumulation model may be a useful risk management tool in support of regulatory protection for killer whales.

  1. Motivational beliefs, values, and goals.

    PubMed

    Eccles, Jacquelynne S; Wigfield, Allan

    2002-01-01

    This chapter reviews the recent research on motivation, beliefs, values, and goals, focusing on developmental and educational psychology. The authors divide the chapter into four major sections: theories focused on expectancies for success (self-efficacy theory and control theory), theories focused on task value (theories focused on intrinsic motivation, self-determination, flow, interest, and goals), theories that integrate expectancies and values (attribution theory, the expectancy-value models of Eccles et al., Feather, and Heckhausen, and self-worth theory), and theories integrating motivation and cognition (social cognitive theories of self-regulation and motivation, the work by Winne & Marx, Borkowski et al., Pintrich et al., and theories of motivation and volition). The authors end the chapter with a discussion of how to integrate theories of self-regulation and expectancy-value models of motivation and suggest new directions for future research.

  2. A Personal Value-Based Model of College Students' Aptitudes and Expected Choice Behavior Regarding Retailing Careers.

    ERIC Educational Resources Information Center

    Shim, Soyeon; Warrington, Patti; Goldsberry, Ellen

    1999-01-01

    A study of 754 retail management students developed a value-based model of career attitude and expected choice behavior. Findings indicate that personal values had an influence on all aspects of retail career attitudes, which then had a direct effect on expected choice behavior. (Contains 55 references.) (Author/JOW)

  3. Is Recess an Achievement Context? An Application of Expectancy-Value Theory to Playground Choices

    ERIC Educational Resources Information Center

    Spencer-Cavaliere, Nancy; Dunn, Janice Causgrove; Watkinson, E. Jane

    2009-01-01

    This study investigated the application of an expectancy-value model to children's activity choices on the playground at recess. The purpose was to test the prediction that expectancies for success and subjective task values are related to decisions to engage in specific recess activities such as climbing, playing soccer, or skipping rope.…

  4. Frame of Reference Effects on Values in Mathematics: Evidence from German Secondary School Students

    ERIC Educational Resources Information Center

    Cambria, Jenna; Brandt, Holger; Nagengast, Benjamin; Trautwein, Ulrich

    2017-01-01

    Expectancy-value theory of achievement motivation identifies two classes of beliefs that are important predictors of educational choices and achievement: expectancies and values. It is well known that high achieving peers can have a negative impact on self-concept and other measures of expected success: holding individual achievement constant,…

  5. Spreadsheet Modeling of (Q,R) Inventory Policies

    ERIC Educational Resources Information Center

    Cobb, Barry R.

    2013-01-01

    This teaching brief describes a method for finding an approximately optimal combination of order quantity and reorder point in a continuous review inventory model using a discrete expected shortage calculation. The technique is an alternative to a model where expected shortage is calculated by integration, and can allow students who have not had a…
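    A discrete expected-shortage calculation of the kind described fits in a few lines; the sketch below assumes Poisson lead-time demand, which is our illustrative choice rather than the brief's spreadsheet setup.

    ```python
    from math import exp

    def expected_shortage(reorder_point, lam, dmax=200):
        """E[max(D - R, 0)] for demand D ~ Poisson(lam), summed up to dmax."""
        pmf = exp(-lam)                    # P(D = 0)
        total = 0.0
        for d in range(1, dmax + 1):
            pmf *= lam / d                 # P(D = d) from P(D = d - 1)
            if d > reorder_point:
                total += (d - reorder_point) * pmf
        return total

    print(expected_shortage(reorder_point=12, lam=10.0))  # expected units short per cycle
    ```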

  6. [Assessment of psychometric properties of the academic involvement questionnaire, expectations version].

    PubMed

    Pérez V, Cristhian; Ortiz M, Liliana; Fasce H, Eduardo; Parra P, Paula; Matus B, Olga; McColl C, Peter; Torres A, Graciela; Meyer K, Andrea; Márquez U, Carolina; Ortega B, Javiera

    2015-11-01

    The Academic Involvement Questionnaire, Expectations version (CIA-A), assesses expectations of involvement in studies, a relevant predictor of student success. However, evidence of its validity and reliability in Chile is scarce, and for medical students there is no evidence at all. The aim was to evaluate the factorial structure and internal consistency of the CIA-A in Chilean medical school freshmen. The survey was applied to 340 medicine freshmen, chosen by non-probability quota sampling. They answered a version of the CIA-A back-translated from Portuguese to Spanish, plus a sociodemographic questionnaire. For the psychometric analysis of the CIA-A, an exploratory factor analysis was carried out, the reliability of the factors was calculated, a descriptive analysis was conducted and their correlations were assessed. Five factors were identified: vocational, institutional and social involvement, use of resources and student participation. Their reliabilities ranged from Cronbach's alpha values of 0.71 to 0.87. The factors also showed statistically significant correlations with each other. The identified factor structure is theoretically consistent with the structure of the original version, differing in only one factor. In addition, the factors' internal consistency was adequate for research use. This supports the construct validity and reliability of the CIA-A for assessing involvement expectations in medical school freshmen.

  7. Maternal ethnicity and variation of fetal femur length calculations when screening for Down syndrome.

    PubMed

    Kovac, Christine M; Brown, Jennifer A; Apodaca, Christina C; Napolitano, Peter G; Pierce, Brian; Patience, Troy; Hume, Roderick F; Calhoun, Byron C

    2002-07-01

    To determine whether current methods for detecting Down syndrome based on fetal femur length calculations are influenced by ethnicity. The study population consisted of all fetuses scanned between 14 and 20 completed weeks' gestation from April 1, 1997, to January 1, 2000. The expected femur length was calculated from the biparietal diameter. The variance from the expected femur length for a given biparietal diameter was calculated, and the mean variations were compared by maternal race. Ethnic-specific formulas for expected femur length were derived by simple regression. There was a statistically significant difference in femur length in the Asian group compared with all other groups, as well as in the white group compared with the black and Asian groups (P < .05). However, there was no significant difference between the black and Hispanic groups or the white and Hispanic groups. The Asian group had the largest variation, with the measured femur length being less than the expected femur length. All groups studied had a mean measured femur length less than the mean expected femur length. On the basis of the ethnic-specific formulas for femur length, there was a significant decrease in the number of patients who would undergo further evaluation for Down syndrome. There is a significant difference in the mean expected femur length by biparietal diameter among fetuses in the second trimester with regard to ethnicity. Using ethnic-specific formulas for expected femur length can have a considerable impact on the use of sonographic risk factors for Down syndrome screening. Further data are required for use of femur length as a screening tool in the genetic sonogram.

  8. An economic evaluation of planned immediate versus delayed birth for preterm prelabour rupture of membranes: findings from the PPROMT randomised controlled trial.

    PubMed

    Lain, S J; Roberts, C L; Bond, D M; Smith, J; Morris, J M

    2017-03-01

    This study is an economic evaluation of immediate birth compared with expectant management in women with preterm prelabour rupture of the membranes near term (PPROMT). A cost-effectiveness analysis alongside the PPROMT randomised controlled trial. Obstetric departments in 65 hospitals across 11 countries. Women with a singleton pregnancy with ruptured membranes between 34+0 and 36+6 weeks' gestation. Women were randomly allocated to immediate birth or expectant management. Costs to the health system were identified and valued. National hospital costing data from both the UK and Australia were used. The average cost per recruit in each arm was calculated, and 95% confidence intervals were estimated using bootstrap re-sampling. Average costs during antenatal care, delivery and postnatal care, and by country, were estimated. The main outcome was the total mean cost difference between the immediate birth and expectant management arms of the trial. From 11 countries, 923 women were randomised to immediate birth and 912 to expectant management. Total mean costs per recruit were £8852 for immediate birth and £8740 for expectant management, resulting in a mean difference in costs of £112 (95% CI: -431 to 662). The expectant management arm had significantly higher antenatal costs, whereas the immediate birth arm had significantly higher delivery and neonatal costs. There was large variation between total mean costs by country. This economic evaluation found no evidence that expectant management was more or less costly than immediate birth. Outpatient management may offer opportunities for cost savings for those women with delayed delivery. For women with preterm prelabour rupture of the membranes, the relative benefits and harms of immediate and expectant management should inform counselling, as costs are similar. © 2016 Royal College of Obstetricians and Gynaecologists.

  9. Predicting who will major in a science discipline: Expectancy-value theory as part of an ecological model for studying academic communities

    NASA Astrophysics Data System (ADS)

    Sullins, Ellen S.; Hernandez, Delia; Fuller, Carol; Shiro Tashiro, Jay

    Research on factors that shape recruitment and retention in undergraduate science majors currently is highly fragmented and in need of an integrative research framework. Such a framework should incorporate analyses of the various levels of organization that characterize academic communities (i.e., the broad institutional level, the departmental level, and the student level), and should also provide ways to study the interactions occurring within and between these structural levels. We propose that academic communities are analogous to ecosystems, and that the research paradigms of modern community ecology can provide the necessary framework, as well as new and innovative approaches to a very complex area. This article also presents the results of a pilot study that demonstrates the promise of this approach at the student level. We administered a questionnaire based on expectancy-value theory to undergraduates enrolled in introductory biology courses. Itself an integrative approach, expectancy-value theory views achievement-related behavior as a joint function of the person's expectancy of success in the behavior and the subjective value placed on such success. Our results indicated: (a) significant gender differences in the underlying factor structures of expectations and values related to the discipline of biology, (b) expectancy-value factors significantly distinguished biology majors from nonmajors, and (c) expectancy-value factors significantly predicted students' intent to enroll in future biology courses. We explore the expectancy-value framework as an operationally integrative framework in our ecological model for studying academic communities, especially in the context of assessing the underrepresentation of women and minorities in the sciences. Future research directions as well as practical implications are also discussed.

  10. Time lag estimates for nitrate travel through the vadose zone in Southland, New Zealand

    NASA Astrophysics Data System (ADS)

    Wilson, Scott; Chanut, Pierre; Ledgard, George; Rissmann, Clint

    2014-05-01

    A regional-scale study was carried out to calculate the travel time of a nitrate particle from the ground surface into shallow groundwater. The aim of the study was to obtain preliminary answers to two questions. Firstly, if leaching limits are set, how long would it take to see an improvement in shallow groundwater quality? Secondly, have groundwater nitrate concentrations reached equilibrium from recent dairy expansion in the region, or could we expect future increases? We applied a methodology that balances the detail and generalisation required for a regional-scale study. Steady-state advective transport through the vadose zone was modelled with water retention curves. These curves enable an estimate of the average volumetric water content of the vadose zone. The percentage saturation can then be used to calculate the vadose zone transit time if effective porosity, depth to the water table and annual average soil drainage are known. A time for mixing in the uppermost part of the aquifer has also been calculated. Two different vadose zone water retention curve models were used for comparison, the Brooks-Corey (1964) and the Van Genuchten (1980) methods. The water retention curves were parameterised by sediment texture via the Rawls and Brakensiek (1985) pedotransfer functions. Hydraulic properties were derived by positioning sediment textural descriptions on the Folk textural triangle, estimates of effective porosity from the literature, and hydraulic conductivity values from aquifer tests. Uncertainty in the parameter estimates was included by assigning standard deviations and appropriate probability distributions. Vadose zone saturation was modelled at 6,450 sites across the region with a Monte Carlo simulation involving 10,000 realisations. This generated a probability distribution of saturation for each site. Average volumetric water content of the vadose zone ranged from 8.5 to 40.7% for the Brooks-Corey model and from 12.9 to 36.3% for the Van Genuchten model. The large number of 1-D calculations allows the results to be presented spatially. About 80% of the region is expected to have a transit time of less than two years, and 90% less than five years. Longer transit times are associated with mid-Pleistocene outwash gravels. These deposits have lower permeability and are also located at higher elevations above the rivers. The results indicate that shallow groundwater beneath properties in most of Southland will respond rapidly to a reduction in leaching rates. Large future increases in nitrate concentrations are expected only in discrete areas beneath older, more elevated outwash gravel deposits. Preliminary validation of the modelled values has been carried out by comparison with tritium ages at the top of the aquifer, and the results are encouraging.
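    Once the average water content is known, the travel-time step reduces to simple advective arithmetic; the values below are illustrative, not the study's parameters.

    ```python
    # Steady-state advective transit time through the vadose zone (values assumed):
    depth_to_water_table_m = 5.0
    volumetric_water_content = 0.25      # from the water retention curves
    soil_drainage_m_per_yr = 0.35        # annual average recharge

    transit_years = depth_to_water_table_m * volumetric_water_content / soil_drainage_m_per_yr
    print(f"{transit_years:.1f} years")  # about 3.6 with these inputs
    ```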

  11. Association of Malignancy Prevalence With Test Properties and Performance of the Gene Expression Classifier in Indeterminate Thyroid Nodules.

    PubMed

    Al-Qurayshi, Zaid; Deniwar, Ahmed; Thethi, Tina; Mallik, Tilak; Srivastav, Sudesh; Murad, Fadi; Bhatia, Parisha; Moroz, Krzysztof; Sholl, Andrew B; Kandil, Emad

    2017-04-01

    It is crucial for clinicians to know the malignancy prevalence within each indeterminate cytologic category to estimate the performance of the gene expression classifier (GEC). To examine the variability in the performance of the GEC. This retrospective cohort study of patients with Bethesda category III and IV thyroid nodules used single-institution data from January 1, 2013, through February 29, 2016. Expected negative predictive value (NPV) was calculated by adopting published sensitivity and specificity. Observed NPV was calculated based on the true-negative rate. Outcomes were compared with pooled data from 11 studies published January 1, 2010, to January 31, 2016. A total of 145 patients with 154 thyroid nodules were included in the study (mean [SD] age, 56.0 [16.2] years; 106 females [73.1%]). Malignancy prevalence was 45%. On the basis of this prevalence, the expected NPV is 85% and the observed NPV is 69%. If the prevalence is assumed to be 25%, the expected NPV would be 94%, whereas the observed NPV would be 85%. Pooled data analysis of 11 studies comprising 1303 participants revealed a malignancy prevalence of 31% (95% CI, 29%-34%) and a pooled NPV of 92% (95% CI, 87%-96%). In this study, variability in the performance of the GEC was not solely a function of malignancy prevalence and may have been attributable to intrinsic variability of the test sensitivity and specificity. The utility of the GEC in practice is elusive because of this variability. A better definition of the GEC's intrinsic properties is needed.
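    The expected NPV follows from standard predictive-value arithmetic; the sketch below uses assumed round test properties (the paper adopts published values) to show how prevalence alone moves the number, roughly matching the figures quoted above.

    ```python
    def npv(sensitivity, specificity, prevalence):
        """Negative predictive value from test properties and malignancy prevalence."""
        true_neg = specificity * (1.0 - prevalence)
        false_neg = (1.0 - sensitivity) * prevalence
        return true_neg / (true_neg + false_neg)

    sens, spec = 0.90, 0.50               # assumed GEC-like properties
    print(f"{npv(sens, spec, 0.45):.0%}") # 86% at 45% prevalence (abstract: 85%)
    print(f"{npv(sens, spec, 0.25):.0%}") # 94% at 25% prevalence
    ```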

  12. Broadband Spectroscopy Using Two Suzaku Observations of the HMXB GX 301-2

    NASA Technical Reports Server (NTRS)

    Suchy, Slawomir; Fuerst, Felix; Pottschmidt, Katja; Caballero, Isabel; Kreykenbohm, Ingo; Wilms, Joern; Markowitz, Alex; Rothschild, Richard E.

    2012-01-01

    We present the analysis of two Suzaku observations of GX 301-2 at two orbital phases after the periastron passage. Variations in the column density of the line-of-sight absorber are observed, consistent with accretion from a clumpy wind. In addition to a cyclotron resonance scattering feature (CRSF), multiple fluorescence emission lines were detected in both observations. The variations in the pulse profiles and the CRSF throughout the pulse phase have a signature of a magnetic dipole field. Using a simple dipole model we calculated the expected magnetic field values for different pulse phases and were able to extract a set of geometrical angles, loosely constraining the dipole geometry in the neutron star. From the variation of the CRSF width and energy, we found a geometrical solution for the dipole, making the inclination consistent with previously published values.

  13. Broadband Spectroscopy Using Two Suzaku Observations of the HMXB GX 301-2

    NASA Astrophysics Data System (ADS)

    Suchy, Slawomir; Fürst, Felix; Pottschmidt, Katja; Caballero, Isabel; Kreykenbohm, Ingo; Wilms, Jörn; Markowitz, Alex; Rothschild, Richard E.

    2012-02-01

    We present the analysis of two Suzaku observations of GX 301-2 at two orbital phases after the periastron passage. Variations in the column density of the line-of-sight absorber are observed, consistent with accretion from a clumpy wind. In addition to a cyclotron resonance scattering feature (CRSF), multiple fluorescence emission lines were detected in both observations. The variations in the pulse profiles and the CRSF throughout the pulse phase have a signature of a magnetic dipole field. Using a simple dipole model we calculated the expected magnetic field values for different pulse phases and were able to extract a set of geometrical angles, loosely constraining the dipole geometry in the neutron star. From the variation of the CRSF width and energy, we found a geometrical solution for the dipole, making the inclination consistent with previously published values.

  14. Gamma-ray Transition Matrix Elements in ^21Na: First TIGRESS Radioactive Beam Experiment

    NASA Astrophysics Data System (ADS)

    Hackman, Greg

    2007-04-01

    Modern shell model calculations should be expected to reliably reproduce the properties of the deformed five-particle nucleus ^21Na. However the lowest-lying B(E2) value deduced from lifetime and mixing ratio measurements disagrees with models by an unacceptably large factor of two. To measure the B(E2) values directly, a beam of ^21Na at 1.7 MeV/u from the TRIUMF ISAC facility was directed upon a 0.5 mg/cm^2 ^natTi target. Gamma-ray yield in coincidence with inelastically scattered heavy ions was measured with two TIGRESS high energy- and position-resolution germanium detector units and the BAMBINO highly segmented silicon detector system. The result resolves the discrepancy between the shell model and prior measurements. This represents the first radioactive in-beam experiment with TIGRESS.

  15. SU-G-BRC-08: Evaluation of Dose Mass Histogram as a More Representative Dose Description Method Than Dose Volume Histogram in Lung Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J; Eldib, A; Ma, C

    2016-06-15

    Purpose: Dose-volume histogram (DVH) is widely used for plan evaluation in radiation treatment. The concept of dose-mass histogram (DMH) is expected to provide a more representative description as it accounts for heterogeneity in tissue density. This study is intended to assess the difference between DVH and DMH for evaluating treatment planning quality. Methods: 12 lung cancer treatment plans were exported from the treatment planning system. DVHs for the planning target volume (PTV), the normal lung and other structures of interest were calculated. DMHs were calculated in a similar way as DVHs except that the voxel density converted from the CT number was used in tallying the dose histogram bins. The equivalent uniform dose (EUD) was calculated based on voxel volume and mass, respectively. The normal tissue complication probability (NTCP) in relation to the EUD was calculated for the normal lung to provide a quantitative comparison of DVHs and DMHs for evaluating the radiobiological effect. Results: Large differences were observed between DVHs and DMHs for lungs and PTVs. For PTVs with dense tumor cores, DMHs are higher than DVHs due to larger mass weighting in the high-dose conformal core regions. For the normal lungs, DMHs can either be higher or lower than DVHs depending on the target location within the lung. When the target is close to the lower lung, DMHs show higher values than DVHs because the lower lung has higher density than the central portion or the upper lung. DMHs are lower than DVHs for targets in the upper lung. The calculated NTCPs showed a large range of difference between DVHs and DMHs. Conclusion: The heterogeneity of the lung can be well considered using DMH for evaluating target coverage and normal lung pneumonitis. Further studies are warranted to quantify the benefits of DMH over DVH for plan quality evaluation.
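    The difference between the two histograms is only the per-voxel weight: volume for DVH, density times volume (mass) for DMH. A toy sketch with invented numbers makes the contrast concrete.

    ```python
    import numpy as np

    # Toy dose and density grids for one structure (values invented):
    dose = np.array([10.0, 20.0, 30.0, 40.0])   # Gy per voxel
    density = np.array([0.3, 0.3, 1.0, 1.0])    # g/cm^3, from CT numbers
    voxel_volume = 1.0                          # cm^3, uniform grid

    def cumulative_histogram(dose, weight, bins):
        """Fraction of total weight (volume or mass) receiving >= each dose level."""
        return np.array([weight[dose >= b].sum() for b in bins]) / weight.sum()

    bins = np.array([0.0, 15.0, 25.0, 35.0])
    dvh = cumulative_histogram(dose, np.full_like(dose, voxel_volume), bins)
    dmh = cumulative_histogram(dose, density * voxel_volume, bins)
    print(dvh)  # volume-weighted: [1.00, 0.75, 0.50, 0.25]
    print(dmh)  # mass-weighted:   [1.00, 0.88, 0.77, 0.38]
    ```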

  16. Maternity care: a narrative overview of what women expect across their care continuum.

    PubMed

    Clark, Kim; Beatty, Shelley; Reibel, Tracy

    2015-04-01

    to provide a narrative overview of the values schema underpinning women's expectations of public maternity-care services using an episodes-of-care framework. focus-group discussions and in-depth interviews were undertaken with Western Australian women who had opted for public maternity care to determine the values schema apparent in their expectations of their care. public maternity-care services in metropolitan (i.e. Armadale, Osborne Park and Rockingham) and regional (i.e. Broome, Geraldton, Bunbury) Western Australia. women interviewed were found to have consistent values schema underpinning their maternity-care expectations and evaluations. the current study suggests that while women's choices and experiences of maternity care may differ on a range of dimensions, the values schema underlying their care expectations and subsequent evaluations are similar. The study findings resonate with past Australian research regarding women's expectations of public maternity care, but complement it by providing a coherent narrative of core underpinning stage-specific values schema. These may assist maternity-care policy makers, practitioners and researchers seeking to better understand and comprehensively respond to women's maternity-care expectations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. 26 CFR 1.170A-12 - Valuation of a remainder interest in real property for contributions made after July 31, 1969.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... expected useful life of 45 years, at the end of which time it is expected to have a value of $10,000, and... is $50,000 (the value of the house ($60,000) less its expected value at the end of 45 years ($10,000... section. If the remainder interest that has been contributed follows a term for years, the value of the...

  18. 26 CFR 1.170A-12 - Valuation of a remainder interest in real property for contributions made after July 31, 1969.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... expected useful life of 45 years, at the end of which time it is expected to have a value of $10,000, and... is $50,000 (the value of the house ($60,000) less its expected value at the end of 45 years ($10,000... section. If the remainder interest that has been contributed follows a term for years, the value of the...

  19. Estimating Seismic Hazards from the Catalog of Taiwan Earthquakes from 1900 to 2014 in Terms of Maximum Magnitude

    NASA Astrophysics Data System (ADS)

    Chen, Kuei-Pao; Chang, Wen-Yen

    2017-04-01

    Maximum expected earthquake magnitude is an important parameter when designing mitigation measures for seismic hazards. This study calculated the maximum magnitude of potential earthquakes for each cell in a 0.1° × 0.1° grid of Taiwan. Two zones vulnerable to maximum magnitudes of Mw ≥ 6.0, which will cause extensive building damage, were identified: one extends from Hsinchu southward to Taichung, Nantou, Chiayi, and Tainan in western Taiwan; the other extends from Ilan southward to Hualian and Taitung in eastern Taiwan. These zones are also characterized by low b values, which are consistent with high peak ground shaking. We also employed an innovative method to calculate (at intervals of Mw 0.5) the bounds and median of recurrence time for earthquakes of magnitude Mw 6.0-8.0 in Taiwan.

  20. Rare-Earth Fourth-Order Multipole Moment in Cubic ErCo2 Probed by Linear Dichroism in Core-Level Photoemission

    NASA Astrophysics Data System (ADS)

    Abozeed, Amina A.; Kadono, Toshiharu; Sekiyama, Akira; Fujiwara, Hidenori; Higashiya, Atsushi; Yamasaki, Atsushi; Kanai, Yuina; Yamagami, Kohei; Tamasaku, Kenji; Yabashi, Makina; Ishikawa, Tetsuya; Andreev, Alexander V.; Wada, Hirofumi; Imada, Shin

    2018-03-01

    We developed a method to experimentally quantify the fourth-order multipole moment of the rare-earth 4f orbital. Linear dichroism (LD) in the Er 3d5/2 core-level photoemission spectra of cubic ErCo2 was measured using bulk-sensitive hard X-ray photoemission spectroscopy. Theoretical calculation reproduced the observed LD and showed that the observation does not contradict the suggested Γ8(3) ground state. Theoretical calculation further showed a linear relationship between the LD size and the size of the fourth-order multipole moment of the Er3+ ion, which is proportional to the expectation value ⟨O₄⁰ + 5O₄⁴⟩, where Oₙᵐ are the Stevens operators. These analyses indicate that the LD in 3d photoemission spectra can be used to quantify the average fourth-order multipole moment of rare-earth atoms in a cubic crystal electric field.

  1. Band head spin assignment of superdeformed bands in Hg isotopes through power index formula

    NASA Astrophysics Data System (ADS)

    Sharma, Honey; Mittal, H. M.

    2018-05-01

    The power index formula has been used to obtain the band head spin (I0) of all the superdeformed (SD) bands in Hg isotopes. A least-squares fitting approach is used. The root mean square deviations between the determined and the observed transition energies are calculated by extracting the model parameters with the power index formula. Whenever definite spins are available, the determined and the observed transition energies are in agreement with each other. The computed values of the dynamic moment of inertia J(2) obtained using the power index formula and their variation with rotational frequency are also studied. Excellent agreement is found between the calculated and the experimental results for J(2) versus rotational frequency. Hence, the power index formula works very well for all the SD bands in Hg isotopes except for 195Hg(2, 3, 4).

  2. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In the conventional tool-positioning technique, sensors embedded in the motion stages provide accurate tool-position information. This paper describes a machine vision based system and an image processing technique for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge-coupled device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of the errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc., was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.

  3. Maternal body weight and first trimester screening for chromosomal anomalies.

    PubMed

    Khambalia, Amina Z; Roberts, Christine L; Morris, Jonathan; Tasevski, Vitomir; Nassar, Natasha

    2014-10-01

    Prenatal risk ratios for Down syndrome adjust for maternal weight because maternal serum biomarker levels decrease with increasing maternal weight. This is accomplished by converting serum biomarker values into a multiple of the expected median (MoM) for women of the same gestational age. Weight is frequently not recorded, and the impact of using MoMs not adjusted for weight when calculating risk ratios is unknown. The aim of this study is to examine the effect of missing weight on first trimester Down syndrome risk ratios by comparing risk ratios calculated using weight-unadjusted and weight-adjusted MoMs. Findings at the population level indicate that not adjusting for maternal weight in first trimester screening for chromosomal anomalies would lead to under-identification of 84 per 10,000 pregnancies. © 2014 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
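    The MoM conversion itself is a one-line division; the sketch below uses invented medians (real ones come from large reference cohorts and weight-regression models).

    ```python
    # MoM arithmetic (all medians assumed for illustration):
    observed_level = 28.0            # serum biomarker level for this pregnancy
    median_for_ga = 35.0             # expected median at this gestational age
    mom_unadjusted = observed_level / median_for_ga                  # 0.80

    # Weight adjustment rescales the expected median, e.g. via a published
    # regression of median level on maternal weight (value below assumed):
    median_for_ga_and_weight = 31.0
    mom_weight_adjusted = observed_level / median_for_ga_and_weight  # about 0.90
    print(mom_unadjusted, mom_weight_adjusted)
    ```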

  4. ON CRITICAL MASS ANALYSIS OF JRR-2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1961-01-01

    The critical mass of the JRR-2 was found to be 15 fuel elements, instead of 8 as expected, when the reactor reached criticality. The critical mass was analyzed by AMF and JAERI a few years ago, but some modifications to the structure, for example for reinforcement, were made afterwards during construction. The critical mass is recalculated and the difference between 15 and 8 fuel elements is discussed. The deviation of the critical mass is mainly caused by the effects of control rods, fuel elements, grid plate, etc., in the reflector; only heavy water or light water was considered as the reflector in the previous calculation. A simple method is used to calculate the critical mass. The effective multiplication factor for the core with 15 fuel elements is obtained as about 2% higher than the experimental value. This difference is also discussed in detail. (auth)

  5. Investigating and improving student understanding of the expectation values of observables in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Marshman, Emily; Singh, Chandralekha

    2017-07-01

    The expectation value of an observable is an important concept in quantum mechanics since measurement outcomes are, in general, probabilistic and we only have information about the probability distribution of measurement outcomes in a given quantum state of a system. However, we find that upper-level undergraduate and PhD students in physics have both conceptual and procedural difficulties when determining the expectation value of a physical observable in a given quantum state in terms of the eigenstates and eigenvalues of the corresponding operator, especially when using Dirac notation. Here we first describe the difficulties that these students have with determining the expectation value of an observable in Dirac notation. We then discuss how the difficulties found via student responses to written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the expectation value. The QuILT strives to help students integrate conceptual understanding and procedural skills to develop a coherent understanding of the expectation value. We discuss the effectiveness of the QuILT in helping students learn this concept from in-class evaluations.

  6. Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.

    2000-06-01

    The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta and kidneys. The expected size, shape, topology and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and then matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing the included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (S.D. plus or minus 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 to 252.1 cc/kidney, and the mean ratio of right to left kidney volume was 0.96 (S.D. plus or minus 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.

  7. Survey Evidence on the Willingness of U.S. Consumers to Pay for Automotive Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, David L; Evans, David H; Hiestand, John

    2013-01-01

    Prospect theory, which was awarded the Nobel Prize in Economics in 2002, holds that human beings faced with a risky bet will tend to value potential losses about twice as much as potential gains. Previous research has demonstrated that prospect theory could be sufficient to explain an energy paradox in the market for automotive fuel economy. This paper analyzes data from four random sample surveys of 1,000 U.S. households each in 2004, 2011, 2012 and 2013. Households were asked about willingness to pay for future fuel savings as well as the annual fuel savings necessary to justify a given upfront payment. Payback periods inferred from household responses are consistent over time and across different formulations of questions. Mean calculated payback periods are short, about 3 years, but there is substantial dispersion among individual responses. Calculated payback periods do not appear to be correlated with the attributes of respondents. Respondents were able to quantitatively describe their uncertainty about both vehicle fuel economy and future fuel prices. Simulations of loss-averse behavior based on this stated uncertainty illustrate how loss aversion could lead consumers to substantially undervalue future fuel savings relative to their expected value.
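
    A minimal simulation of the loss-aversion mechanism invoked above, with every number hypothetical (the price premium, the savings distribution, and the factor-of-two loss weight taken from the prospect-theory rule of thumb):

        import numpy as np

        rng = np.random.default_rng(0)
        LOSS_WEIGHT = 2.0     # losses count twice as much as gains
        upfront = 1200.0      # hypothetical price premium ($)

        # Draws of uncertain net-present fuel savings ($), illustrative only.
        savings = rng.normal(loc=1500.0, scale=800.0, size=100_000)
        net = savings - upfront

        expected_value = net.mean()   # risk-neutral valuation
        loss_averse_value = np.where(net >= 0, net, LOSS_WEIGHT * net).mean()

        print(f"expected value:    {expected_value:7.1f}")
        print(f"loss-averse value: {loss_averse_value:7.1f}")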

  8. Atomistic study of the electronic contact resistivity between the half-Heusler alloys (HfCoSb, HfZrCoSb, HfZrNiSn) and the metal Ag

    NASA Astrophysics Data System (ADS)

    He, Yuping; Léonard, François; Spataru, Catalin D.

    2018-06-01

    Half-Heusler (HH) alloys have shown promising thermoelectric properties in the medium- and high-temperature range. To harness these material properties for thermoelectric applications, it is important to realize electrical contacts with low electrical contact resistivity. However, little is known about the detailed structural and electronic properties of such contacts and the expected values of contact resistivity. Here, we employ atomistic ab initio calculations to study electrical contacts in a subclass of HH alloys consisting of the compounds HfCoSb, HfZrCoSb, and HfZrNiSn. By using Ag as a prototypical metal, we show that the termination of the HH material critically determines the presence or absence of strong deformations at the interface. Our study includes contacts to doped materials, and the results indicate that the p-type materials generally form ohmic contacts while the n-type materials have a small Schottky barrier. We calculate the temperature dependence of the contact resistivity in the low- to medium-temperature range and provide quantitative values that set lower limits for these systems.

  9. Stress-strain relationship of PDMS micropillar for force measurement application

    NASA Astrophysics Data System (ADS)

    Johari, Shazlina; Shyan, L. Y.

    2017-11-01

    There is an increasing interest in using polydimethylsiloxane (PDMS) based materials as bio-transducers for force measurements in the order of micro to nano Newton. The accuracy of these devices relies on appropriate material characterization of PDMS and on modelling to convert the micropillar deformations into the corresponding forces. Previously, we have reported on a fabricated PDMS micropillar that acts as a cylindrical cantilever and was experimentally used to measure the force of the nematode C. elegans. In this research, similar PDMS micropillars are designed and simulated using ANSYS software. The simulation investigates two main factors that are expected to affect the force measurement performance: pillar height and diameter. Results show that the deformation increases when pillar height is increased and that the deformation is inversely proportional to the pillar diameter. The maximum deformation obtained is 713 um with a pillar diameter of 20 um and a pillar height of 100 um. Results for stress and strain show a similar pattern, where their values decrease as pillar diameter and height are increased. The simulated results are also compared with the calculated displacement. The trend for both calculated and simulated values is similar, with a 13% average difference.
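
    For converting a measured tip deflection into a force, the standard small-deflection formula for an end-loaded cylindrical cantilever, F = 3*E*I*delta/L^3 with I = pi*d^4/64, is a plausible sketch of the kind of model such pillars use; the PDMS modulus below is a typical literature value, not one reported in this paper, and deflections as large as those simulated above would leave the linear regime.

        import math

        def pillar_force(deflection_m, diameter_m, height_m, E_pdms=750e3):
            # End-loaded cylindrical cantilever, small deflections:
            #   F = 3 * E * I * delta / L^3,  I = pi * d^4 / 64
            # E_pdms (~750 kPa) is an assumed, typical PDMS modulus.
            I = math.pi * diameter_m**4 / 64.0
            return 3.0 * E_pdms * I * deflection_m / height_m**3

        # Example: 20 um diameter, 100 um tall pillar deflected by 1 um.
        print(f"{pillar_force(1e-6, 20e-6, 100e-6) * 1e9:.1f} nN")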

  10. Developing hybrid approaches to predict pKa values of ionizable groups

    PubMed Central

    Witham, Shawn; Talley, Kemper; Wang, Lin; Zhang, Zhe; Sarkar, Subhra; Gao, Daquan; Yang, Wei

    2011-01-01

    Accurate predictions of pKa values of titratable groups require taking into account all relevant processes associated with ionization/deionization. Frequently, however, the ionization does not involve significant structural changes and the dominating effects are purely electrostatic in origin, allowing accurate predictions to be made from the electrostatic energy difference between the ionized and neutral forms alone using a static structure. On the other hand, if the change of the charge state is accompanied by a structural reorganization of the target protein, then the relevant conformational changes have to be taken into account in the pKa calculations. Here we report a hybrid approach that first identifies the titratable groups whose ionization is expected to cause conformational changes, termed "problematic" residues, and then applies a special protocol to them, while the rest of the pKa's are predicted with the rigid-backbone approach as implemented in the multi-conformation continuum electrostatics (MCCE) method. The backbone representative conformations for "problematic" groups are generated either with molecular dynamics simulations with charged and uncharged amino acids or with ab-initio local segment modeling. The corresponding ensembles are then used to calculate the pKa of the "problematic" residues, and the results are averaged. PMID:21744395

  11. Verification of monitor unit calculations for non-IMRT clinical radiotherapy: report of AAPM Task Group 114.

    PubMed

    Stern, Robin L; Heaton, Robert; Fraser, Martin W; Goddu, S Murty; Kirby, Thomas H; Lam, Kwok Leung; Molineu, Andrea; Zhu, Timothy C

    2011-01-01

    The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
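
    One common convention for such a verification check is a percent difference between the primary and verification calculations compared against an action level; the sketch below is illustrative only, since TG-114 tabulates action levels that depend on the calculation geometry and algorithm, and both the 2% level and the MU numbers here are invented.

        def mu_agreement(mu_primary, mu_verify, action_level_pct):
            # Percent difference, normalized to the verification value.
            diff_pct = 100.0 * (mu_primary - mu_verify) / mu_verify
            return diff_pct, abs(diff_pct) <= action_level_pct

        diff, ok = mu_agreement(mu_primary=212.0, mu_verify=208.0,
                                action_level_pct=2.0)
        print(f"{diff:+.2f}% -> {'accept' if ok else 'investigate'}")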

  12. Load controller and method to enhance effective capacity of a photovoltaic power supply

    DOEpatents

    Perez, Richard

    2000-01-01

    A load controller and method are provided for maximizing the effective capacity of a non-controllable, renewable power supply coupled to a variable electrical load that is also coupled to a conventional power grid. Effective capacity is enhanced by monitoring the power output of the renewable supply and the loading, and comparing the loading against the power output and a load adjustment threshold determined from an expected peak loading. A value for a load adjustment parameter is calculated by subtracting the renewable supply output and the load adjustment threshold from the current load. This value is then employed to control the variable load in an amount proportional to the value of the load adjustment parameter when the parameter is within a predefined range. By so controlling the load, the effective capacity of the non-controllable, renewable power supply is increased without any attempt at operational feedback control of the renewable supply. The renewable supply may comprise, for example, a photovoltaic power supply or a wind-based power supply.
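
    Read that way, the control logic reduces to a few lines. The sketch below is a loose paraphrase of the patent's description, not its claimed implementation; the proportional gain, the shedding cap, and all the kW figures are invented for illustration.

        def load_control(current_load_kw, pv_output_kw, threshold_kw,
                         gain=1.0, max_shed_kw=50.0):
            # Load adjustment parameter: how far net load (load minus
            # renewable output) exceeds the threshold derived from the
            # expected peak loading.
            adjustment = current_load_kw - pv_output_kw - threshold_kw
            if adjustment <= 0.0:
                return 0.0                      # below threshold: no action
            return min(gain * adjustment, max_shed_kw)  # proportional shed

        shed = load_control(current_load_kw=480.0, pv_output_kw=120.0,
                            threshold_kw=300.0)
        print(f"shed {shed:.0f} kW of controllable load")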

  13. Visual assessment of the radiation distribution in the ISS Lab module: visualization in the human body

    NASA Technical Reports Server (NTRS)

    Saganti, P. B.; Zapp, E. N.; Wilson, J. W.; Cucinotta, F. A.

    2001-01-01

    The US Lab module of the International Space Station (ISS) is a primary working area where the crewmembers are expected to spend the majority of their time. Because of the directionality of the radiation fields caused by the Earth shadow, the trapped-radiation pitch angle distribution, and inherent variations in the ISS shielding, a model is needed to account for these local variations in the radiation distribution. We present calculated radiation dose (rem/yr) values for over 3,000 different points in the working area of the Lab module and estimated radiation dose values for over 25,000 different points in the human body for a given ambient radiation environment. These estimated radiation dose values are presented in a three-dimensional animated interactive visualization format. Such interactive animated visualization of the radiation distribution can be generated in near real-time to track changes in the radiation environment during the orbit precession of the ISS.

  14. Revisiting Wiedemann-Franz law through Boltzmann transport equations and ab-initio density functional theory

    NASA Astrophysics Data System (ADS)

    Nag, Abhinav; Kumari, Anuja; Kumar, Jagdish

    2018-05-01

    We have investigated the structural, electronic and transport properties of the alkali metals using ab-initio density functional theory. The electron energy dispersions are found to be parabolic and free-electron-like, as expected for alkali metals. The lattice constants for all the studied metals are also in good agreement (within 2%) with experiments. We have further computed their transport properties using semi-classical Boltzmann transport equations, with special focus on the electrical and thermal conductivity. Our objective was to test the Wiedemann-Franz law and hence obtain the Lorenz number. The motivation for these calculations is to see how the incorporation of different interactions, such as the electron-lattice and electron-electron interactions, affects the Wiedemann-Franz law. By solving the Boltzmann transport equations, we have obtained the electrical conductivity (σ/τ) and thermal conductivity (κ0/τ) at different temperatures and then calculated the Lorenz number using L = κ0/(σT). The obtained value of the Lorenz number matches the value derived for the free-electron Fermi gas, 2.44 × 10⁻⁸ WΩK⁻². Our results show that the Wiedemann-Franz law as derived for the free-electron gas does not change much for alkali metals, even when one incorporates the interaction of electrons with atomic nuclei and other electrons. However, at lower temperatures the Lorenz number was found to deviate from its theoretical value.
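
    The final step, forming the Lorenz number from the computed transport coefficients, is a one-line ratio in which the unknown relaxation time τ cancels; the coefficient values below are illustrative placeholders, not numbers from the paper.

        L0 = 2.44e-8   # Sommerfeld value, W Ohm K^-2

        def lorenz_number(kappa0_over_tau, sigma_over_tau, T):
            # L = kappa0 / (sigma * T); tau cancels in the ratio.
            return kappa0_over_tau / (sigma_over_tau * T)

        L = lorenz_number(kappa0_over_tau=7.3e14, sigma_over_tau=1.0e20, T=300.0)
        print(f"L = {L:.3e} W Ohm K^-2, L/L0 = {L/L0:.3f}")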

  15. Realising the Real Benefits of Outsourcing: Measurement Excellence and Its Importance in Achieving Long Term Value

    NASA Astrophysics Data System (ADS)

    Oshri, Ilan; Kotlarsky, Julia

    These days firms are, more than ever, pressed to demonstrate returns on their investment in outsourcing. While the initial returns can always be associated with one-off cost cutting, outsourcing arrangements are complex, often involving inter-related high-value activities, which makes the realisation of long-term benefits from outsourcing ever more challenging. Executives in client firms are no longer satisfied with the same level of service delivery through the outsourcing lifecycle. They seek to achieve business transformation and innovation in their present and future services, beyond satisfying service level agreements (SLAs). Clearly the business world is facing a new challenge: an outsourcing delivery system of high-value activities that demonstrates value over time and across business functions. However, despite such expectations, many client firms are in the dark when trying to measure and quantify the return on outsourcing investments: results of this research show that less than half of all CIOs and CFOs (43%) have attempted to calculate the financial impact of outsourcing to their bottom line, indicating that the financial benefits are difficult to quantify (51%).

  16. Low-Visibility Visual Simulation with Real Fog

    NASA Technical Reports Server (NTRS)

    Chase, Wendell D.

    1982-01-01

    An environmental fog simulation (EFS) attachment was developed to aid in the study of natural low-visibility visual cues and subsequently used to examine the realism effect upon the aircraft simulator visual scene. A review of the basic fog equations indicated that two major factors must be accounted for in the simulation of low visibility: one due to atmospheric attenuation and one due to veiling luminance. These factors are compared systematically by (1) comparing actual measurements to those computed from the fog equations, and (2) comparing runway-visual-range-related visual-scene contrast values with the calculated values. These values are also compared with the simulated equivalent equations and with contrast measurements obtained from a current electronic fog synthesizer to help identify areas in which improvements are needed. These differences in technique, the measured values, the features of both systems, a pilot opinion survey of the EFS fog, and improvements (by combining features of both systems) that are expected to significantly increase the potential as well as flexibility for producing a very high-fidelity, low-visibility visual simulation are discussed.

  17. Low-visibility visual simulation with real fog

    NASA Technical Reports Server (NTRS)

    Chase, W. D.

    1981-01-01

    An environmental fog simulation (EFS) attachment was developed to aid in the study of natural low-visibility visual cues and subsequently used to examine the realism effect upon the aircraft simulator visual scene. A review of the basic fog equations indicated that two major factors must be accounted for in the simulation of low visibility - one due to atmospheric attenuation and one due to veiling luminance. These factors are compared systematically by (1) comparing actual measurements to those computed from the fog equations, and (2) comparing runway-visual-range-related visual-scene contrast values with the calculated values. These values are also compared with the simulated equivalent equations and with contrast measurements obtained from a current electronic fog synthesizer to help identify areas in which improvements are needed. These differences in technique, the measured values, the features of both systems, a pilot opinion survey of the EFS fog, and improvements (by combining features of both systems) that are expected to significantly increase the potential as well as flexibility for producing a very high-fidelity low-visibility visual simulation are discussed.

  18. The IPEA dilemma in CASPT2

    PubMed Central

    Zobel, J. Patrick

    2017-01-01

    Multi-configurational second order perturbation theory (CASPT2) has become a very popular method for describing excited-state properties since its development in 1990. To account for systematic errors found in the calculation of dissociation energies, an empirical correction applied to the zeroth-order Hamiltonian, called the IPEA shift, was introduced in 2004. The errors were attributed to an unbalanced description of open-shell versus closed-shell electronic states and are believed also to lead to an underestimation of excitation energies. Here we show that the use of the IPEA shift is not justified and that the IPEA shift should not be used to calculate excited states, at least for organic chromophores. This conclusion is the result of three extensive analyses. Firstly, we survey the literature for excitation energies of organic molecules that have been calculated with the unmodified CASPT2 method. We find that the excitation energies of 356 reference values are negligibly underestimated, by 0.02 eV. This value is an order of magnitude smaller than the error expected on the basis of the dissociation-energy calculations. Secondly, we perform benchmark full configuration interaction calculations on 137 states of 13 di- and triatomic molecules and compare the results with CASPT2. In this case as well, the excited states are underestimated by only 0.05 eV. Finally, we perform CASPT2 calculations with different IPEA shift values on 309 excited states of 28 small and medium-sized organic chromophores. We demonstrate that the size of the IPEA correction scales with the amount of dynamical correlation energy (and thus with the size of the system), and already becomes excessive for the molecules considered here, leading to an overestimation of the excitation energies. It is also found that the IPEA correction strongly depends on the size of the basis set. The dependence on both the size of the system and the size of the basis set contradicts the idea of a universal IPEA shift able to compensate for systematic CASPT2 errors in the calculation of excited states. PMID:28572908

  19. A Practical Measure of Student Motivation: Establishing Validity Evidence for the Expectancy-Value-Cost Scale in Middle School

    ERIC Educational Resources Information Center

    Kosovich, Jeff J.; Hulleman, Chris S.; Barron, Kenneth E.; Getty, Steve

    2015-01-01

    We present validity evidence for the Expectancy-Value-Cost (EVC) Scale of student motivation. Using a brief, 10-item scale, we measured middle school students' expectancy, value, and cost for their math and science classes in the Fall and Winter of the same academic year. Confirmatory factor analyses supported the three-factor structure of the EVC…

  20. Crystal structure and magnetic properties of α″-Fe₁₆N₂ containing residual α-Fe prepared by low-temperature ammonia nitridation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamashita, S.; Masubuchi, Y.; Nakazawa, Y.

    2012-10-15

    A slight enhancement of the saturation magnetization, to 219 A m² kg⁻¹ from 199 A m² kg⁻¹ for the original α-Fe, was observed for the intermediate nitrided mixture of α″-Fe₁₆N₂ with residual α-Fe among the low-temperature ammonia nitridation products, under a 5 T magnetic field at room temperature. The value did not change linearly with the yield, as had been expected. Crystal structure refinement indicated that the phase similar to α″-Fe₁₆N₂ deviated in its lattice constants and positional parameters from previously reported values for α″-Fe₁₆N₂. Spin-polarized total energy calculations were performed using the projector-augmented wave method as implemented in the Vienna ab-initio simulation package (VASP) to calculate the magnetic moment for the refined crystal structure of the intermediate α″-Fe₁₆N₂. The calculations supported the observed magnetization enhancement in the intermediate nitridation product. Graphical abstract: Crystal structural parameters in the intermediate nitrided α″-Fe₁₆N₂ change slightly from those in α″-Fe₁₆N₂, producing the magnetization maximum in the mixture of α″-Fe₁₆N₂ and residual α-Fe. Highlights: A larger magnetization than the value of Fe₁₆N₂ was observed for its intermediate nitrided mixture with residual α-Fe. The enhancement was related to the crystal structural deviation from Fe₁₆N₂ in the intermediate nitride. This was supported by spin-polarized total energy calculations using the deviated structure.

  1. Quantum Monte Carlo calculations of electromagnetic transitions in ⁸Be with meson-exchange currents derived from chiral effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastore, S.; Wiringa, Robert B.; Pieper, Steven C.

    2014-08-01

    We report quantum Monte Carlo calculations of electromagnetic transitions in ⁸Be. The realistic Argonne v₁₈ two-nucleon and Illinois-7 three-nucleon potentials are used to generate the ground state and nine excited states, with energies that are in excellent agreement with experiment. A dozen M1 and eight E2 transition matrix elements between these states are then evaluated. The E2 matrix elements are computed only in impulse approximation, with those transitions from broad resonant states requiring special treatment. The M1 matrix elements include two-body meson-exchange currents derived from chiral effective field theory, which typically contribute 20-30% of the total expectation value. Many of the transitions are between isospin-mixed states; the calculations are performed for isospin-pure states and then combined with the empirical mixing coefficients to compare to experiment. In general, we find that transitions between states that have the same dominant spatial symmetry are in decent agreement with experiment, but those transitions between different spatial symmetries are often significantly underpredicted.

  2. Initial conditions in high-energy collisions

    NASA Astrophysics Data System (ADS)

    Petreska, Elena

    This thesis is focused on the initial stages of high-energy collisions in the saturation regime. We start by extending the McLerran-Venugopalan distribution of color sources in the initial wave-function of nuclei in heavy-ion collisions. We derive a fourth-order operator in the action and discuss its relevance for the description of color charge distributions in protons in high-energy experiments. We calculate the dipole scattering amplitude in proton-proton collisions with the quartic action and find agreement with experimental data. We also obtain a modification to the fluctuation parameter of the negative binomial distribution of particle multiplicities in proton-proton experiments. The result implies that the fourth-order action approaches a Gaussian form as the energy is increased. Finally, we calculate perturbatively the expectation value of the magnetic Wilson loop operator in the first moments of heavy-ion collisions. For the magnetic flux we obtain a first non-trivial term that is proportional to the square of the area of the loop. The result is close to numerical calculations for small-area loops.

  3. Design and implementation of the modified signed digit multiplication routine on a ternary optical computer.

    PubMed

    Xu, Qun; Wang, Xianchao; Xu, Chao

    2017-06-01

    Multiplication with traditional electronic computers suffers from low calculation accuracy and long computation delays. To overcome these problems, the modified signed digit (MSD) multiplication routine is established based on the MSD system and the carry-free adder. Its parallel algorithm and optimization techniques are also studied in detail. With the help of a ternary optical computer's characteristics, the structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results fully comply with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and shorter calculation delays.
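
    As a minimal illustration of the MSD number system itself (not of the optical routine), the sketch below converts a non-negative integer into one valid signed-digit encoding, the non-adjacent form with radix-2 digits in {-1, 0, 1}, and back; it is this redundancy that makes carry-free addition possible.

        def to_msd(n):
            # Non-adjacent form, least significant digit first (n >= 0).
            digits = []
            while n != 0:
                if n % 2:
                    d = 2 - (n % 4)   # pick +1 or -1 so the next bit is 0
                    digits.append(d)
                    n -= d
                else:
                    digits.append(0)
                n //= 2
            return digits

        def from_msd(digits):
            return sum(d * (1 << i) for i, d in enumerate(digits))

        assert from_msd(to_msd(2017)) == 2017
        print(to_msd(7))   # [-1, 0, 0, 1], i.e. -1 + 8 = 7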

  4. Notes on the ExactPack Implementation of the DSD Rate Stick Solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaul, Ann

    It has been shown above that the discretization scheme implemented in the ExactPack solver for the DSD Rate Stick equation is consistent with the Rate Stick PDE. In addition, a stability analysis has provided a CFL condition for a stable time step. Together, consistency and stability imply convergence of the scheme, which is expected to be close to first-order in time and second-order in space. It is understood that the nonlinearity of the underlying PDE will affect this rate somewhat. In the solver I implemented in ExactPack, I used the one-sided boundary condition described above at the outer boundary. In addition, I used 80% of the time step calculated in the stability analysis above. By making these two changes, I was able to implement a solver that calculates the solution without any arbitrary limits placed on the values of the curvature at the boundary. Thus, the calculation is driven directly by the conditions at the boundary as formulated in the DSD theory. The chosen scheme is completely coherent and defensible from a mathematical standpoint.

  5. Normal Values for Heart Electrophysiology Parameters of Healthy Swine Determined on Electrophysiology Study.

    PubMed

    Noszczyk-Nowak, Agnieszka; Cepiel, Alicja; Janiszewski, Adrian; Pasławski, Robert; Gajek, Jacek; Pasławska, Urszula; Nicpoń, Józef

    2016-01-01

    Swine are a well-recognized animal model for human cardiovascular diseases. Despite the widespread use of the porcine model in experimental electrophysiology, no reference values for intracardiac electrical activity and conduction parameters determined during an invasive electrophysiology study (EPS) have been developed in this species thus far. The aim of the study was to develop a set of normal values for intracardiac electrical activity and conduction parameters determined during an invasive EPS of swine. The study included 36 healthy domestic swine (24-40 kg body weight). EPS was performed under general anesthesia with midazolam, propofol and isoflurane. The reference values for intracardiac electrical activity and conduction parameters were calculated as arithmetic means ± 2 standard deviations. Reference values were determined for the AH, HV and PA intervals, interatrial conduction time during intrinsic and paced rhythm, sinus node recovery time (SNRT), corrected sinus node recovery time (CSNRT), anterograde and retrograde Wenckebach points, and atrial, atrioventricular node and ventricular refractory periods. No significant correlations were found between the body weight and heart rate of the examined pigs and their electrophysiological parameters. The reference values presented here can be helpful in comparing the results of various studies, as well as in more accurately estimating the values of electrophysiological parameters that can be expected in a given experiment.
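
    The mean ± 2 SD reference-interval computation is simple enough to state directly; the CSNRT values below are invented for illustration and are not data from the study.

        import statistics

        def reference_interval(values, k=2.0):
            # Arithmetic mean +/- k standard deviations (here k = 2).
            m = statistics.mean(values)
            s = statistics.stdev(values)
            return m - k * s, m + k * s

        csnrt_ms = [95, 110, 102, 88, 121, 99, 107, 115, 93, 104]  # hypothetical
        low, high = reference_interval(csnrt_ms)
        print(f"CSNRT reference range: {low:.0f}-{high:.0f} ms")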

  6. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    PubMed

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

    When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (δ), that is, the hypothesized difference in means (Δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for Δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of Δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of Δ and σ as the averaging weight, is used, and the value of δ is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of δ found using this method may be expressed as a function of the prior means of Δ and σ and their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size that the study has a high probability of correctly detecting, based on the available prior information on the difference Δ and the standard deviation σ, provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
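
    The averaging step can be sketched as a Monte Carlo average of the classical power curve over prior draws; the normal priors, the normal-approximation power formula, and every number below are illustrative assumptions, not the paper's specification.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def conditional_expected_power(n_per_arm, mu_delta, sd_delta,
                                       mu_sigma, sd_sigma,
                                       alpha=0.05, draws=20_000):
            # Average the two-sample power curve over prior draws of the
            # mean difference (Delta) and common SD (sigma).
            delta = rng.normal(mu_delta, sd_delta, draws)
            sigma = np.abs(rng.normal(mu_sigma, sd_sigma, draws))
            se = sigma * np.sqrt(2.0 / n_per_arm)
            z = stats.norm.ppf(1.0 - alpha / 2.0)
            return stats.norm.cdf(np.abs(delta) / se - z).mean()

        print(conditional_expected_power(n_per_arm=64,
                                         mu_delta=0.5, sd_delta=0.2,
                                         mu_sigma=1.0, sd_sigma=0.1))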

  7. SU-C-204-06: Monte Carlo Dose Calculation for Kilovoltage X-Ray-Psoralen Activated Cancer Therapy (X-PACT): Preliminary Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mein, S; Gunasingha, R; Nolan, M

    Purpose: X-PACT is an experimental cancer therapy where kV x-rays are used to photo-activate anti-cancer therapeutics through phosphor intermediaries (phosphors that absorb x-rays and re-radiate as UV light). Clinical trials in pet dogs are currently underway (NC State College of Veterinary Medicine) and an essential component is the ability to model the kV dose in these dogs. Here we report the commissioning and characterization of a Monte Carlo (MC) treatment planning simulation tool to calculate X-PACT radiation doses in canine trials. Methods: The FLUKA multi-particle MC simulation package was used to simulate a standard X-PACT radiation treatment beam of 80 kVp with the Varian OBI x-ray source geometry. The beam quality was verified by comparing measured and simulated attenuation of the beam by various thicknesses of aluminum (2-4.6 mm) under narrow beam conditions (HVL). The beam parameters at commissioning were then corroborated using MC, characterized and verified with empirically collected commissioning data, including: percent depth dose curves (PDD), back-scatter factors (BSF), collimator scatter factor(s), and heel effect, etc. All simulations were conducted for N=30M histories at M=100 iterations. Results: HVL and PDD simulation data agreed with an average percent error of 2.42%±0.33 and 6.03%±1.58, respectively. The mean square error (MSE) values for HVL and PDD (0.07% and 0.50%) were low, as expected; however, longer simulations are required to validate convergence to the expected values. Qualitatively, pre- and post-filtration source spectra matched well with 80 kVp references generated via SPEKTR software. Further validation of commissioning data simulation is underway in preparation for first-time 3D dose calculations with canine CBCT data. Conclusion: We have prepared a Monte Carlo simulation capable of accurate dose calculation for use with ongoing X-PACT canine clinical trials. Preliminary results show good agreement with measured data and hold promise for accurate quantification of dose for this novel psoralen X-ray therapy. Funding Support, Disclosures, & Conflict of Interest: The Monte Carlo simulation work was not funded; Drs. Adamson & Oldham have received funding from Immunolight LLC for X-PACT research.

  8. Laharz_py: GIS tools for automated mapping of lahar inundation hazard zones

    USGS Publications Warehouse

    Schilling, Steve P.

    2014-01-01

    Laharz_py is written in the Python programming language as a suite of tools for use in the ArcMap Geographic Information System (GIS). Primarily, Laharz_py is a computational model that uses statistical descriptions of areas inundated by past mass-flow events to forecast areas likely to be inundated by hypothetical future events. The forecasts use physically motivated and statistically calibrated power-law equations, each of the form A = cV^(2/3), relating mass-flow volume (V) to planimetric or cross-sectional areas (A) inundated by an average flow as it descends a given drainage. Calibration of the equations utilizes logarithmic transformation and linear regression to determine the best-fit values of c. The software uses values of V, an algorithm for identifying mass-flow source locations, and digital elevation models of topography to portray forecast hazard zones for lahars, debris flows, or rock avalanches on maps. Laharz_py offers two methods to construct areas of potential inundation for lahars: (1) selection of a range of plausible V values results in a set of nested hazard zones showing areas likely to be inundated by a range of hypothetical flows; and (2) the user selects a single volume and a confidence interval for the prediction. In either case, Laharz_py calculates the mean expected A and B values from each user-selected value of V. However, for the second case, a single value of V yields two additional results representing the upper and lower values of the confidence interval of prediction. Calculation of these two bounding predictions requires the statistically calibrated prediction equations, a user-specified level of confidence, and t-distribution statistics to calculate the standard error of regression, standard error of the mean, and standard error of prediction. The portrayal of results from these two methods on maps compares the range of inundation areas due to prediction uncertainties with uncertainties in the selection of V values. The Open-File Report document contains an explanation of how to install and use the software. The Laharz_py software includes an example data set for Mount Rainier, Washington. The second part of the documentation describes how to use all of the Laharz_py tools with an example dataset at Mount Rainier, Washington.
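
    Method (1), the nested hazard zones, reduces to evaluating the calibrated power laws over a range of volumes. The sketch below uses the commonly cited lahar calibrations (cross-sectional A = 0.05 V^(2/3), planimetric B = 200 V^(2/3)); coefficients for other flow types differ, and the confidence-interval machinery of method (2) is omitted.

        def inundated_areas(volume_m3, c_cross=0.05, c_plan=200.0):
            # Power laws of the form area = c * V^(2/3).
            v23 = volume_m3 ** (2.0 / 3.0)
            return c_cross * v23, c_plan * v23

        # Nested hazard zones from a range of plausible volumes:
        for v in (1e5, 1e6, 1e7):
            a, b = inundated_areas(v)
            print(f"V={v:8.0e} m^3  A={a:10.0f} m^2  B={b:12.0f} m^2")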

  9. Evaluating convex roof entanglement measures.

    PubMed

    Tóth, Géza; Moroder, Tobias; Gühne, Otfried

    2015-04-24

    We show a powerful method to compute entanglement measures based on convex roof constructions. In particular, our method is applicable to measures that, for pure states, can be written as low order polynomials of operator expectation values. We show how to compute the linear entropy of entanglement, the linear entanglement of assistance, and a bound on the dimension of the entanglement for bipartite systems. We discuss how to obtain the convex roof of the three-tangle for three-qubit states. We also show how to calculate the linear entropy of entanglement and the quantum Fisher information based on partial information or device independent information. We demonstrate the usefulness of our method by concrete examples.

  10. Representation of complex probabilities and complex Gibbs sampling

    NASA Astrophysics Data System (ADS)

    Salcedo, Lorenzo Luis

    2018-03-01

    Complex weights appear in physics problems that are beyond a straightforward importance sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.

  11. Precision holography for N = 2* on S⁴ from type IIB supergravity

    NASA Astrophysics Data System (ADS)

    Bobev, Nikolay; Gautason, Friðrik Freyr; van Muiden, Jesse

    2018-04-01

    We find a new supersymmetric solution of type IIB supergravity which is holographically dual to the planar limit of the four-dimensional N = 2* supersymmetric Yang-Mills theory on S⁴. We study a probe fundamental string in this background which is dual to a supersymmetric Wilson loop in the N = 2* theory. Using holography we calculate the expectation value of this line operator to leading order in the 't Hooft coupling. The result is a non-trivial function of the mass parameter of the N = 2* theory that precisely matches the result from supersymmetric localization.

  12. Nuclear Data Sheets for A=247

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesaraja, C.D.

    Available information pertaining to the nuclear structure of all nuclei with mass number A=247 is presented. Various decay and reaction data are evaluated and compared. Adopted data, levels, spin, parity and configuration assignments are given. When there are insufficient data, expected values from systematics of nuclear properties and/or theoretical calculations are quoted. Unexpected or discrepant experimental results are also noted. A summary and compilation of the discovery of various isotopes in this mass region is given in 2013Fr02 (247Pu, 247Am, 247Cm, 247Bk, 247Cf), 2011Me01 (247Es), and 2013Th02 (247Fm, 247Md)

  13. Effect of attochirp on attosecond streaking time delay in photoionization of atoms

    NASA Astrophysics Data System (ADS)

    Goldsmith, C.; Jaroń-Becker, A.; Becker, A.

    2018-01-01

    We present a theoretical analysis of the effect of the attochirp on the streaking time delay intrinsic to photoionization of an atom by an attosecond laser pulse at extreme ultraviolet wavelengths superposed with a femtosecond streaking pulse. To this end, we determine the expectation value of the delay in a chirped pulse using a recently developed model formula. Results of our calculations show that the attochirp can be relevant for photoemission from the 3p shell of the argon atom at frequencies near the Cooper minimum, while it is negligible if the photoionization cross section as a function of frequency varies smoothly.

  14. Shapes and stability of algebraic nuclear models

    NASA Technical Reports Server (NTRS)

    Lopez-Moreno, Enrique; Castanos, Octavio

    1995-01-01

    A generalization of the procedure to study shapes and stability of algebraic nuclear models introduced by Gilmore is presented. One calculates the expectation value of the Hamiltonian with respect to the coherent states of the algebraic structure of the system. Then the equilibrium configurations of the resulting energy surface, which depends in general on state variables and a set of parameters, are classified through catastrophe theory. For one- and two-body interactions in the Hamiltonian of the interacting boson model-1, the critical points are organized through the cusp catastrophe. As an example, we apply this separatrix to describe the energy surfaces associated with the ruthenium and samarium isotopes.

  15. Nuclear Data Sheets for A=243

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesaraja, Caroline D; McCutchan, Elizabeth A.

    2014-09-30

    We present available information pertaining to the nuclear structure of all nuclei with mass number A=243. Various decay and reaction data are evaluated and compared. Adopted data, levels, spin, parity and configuration assignments are given. When there are insufficient data, expected values from systematics of nuclear properties and/or theoretical calculations are quoted. Unexpected or discrepant experimental results are also noted. A summary and compilation of the discovery of various isotopes in this mass region is given in 2013Fr02 (243Np, 243Pu, 243Am, 243Cm, 243Bk, and 243Cf), 2011Me01 (243Es), and 2013Th02 (243Fm).

  16. Formation of Schrödinger-cat states in the Morse potential: Wigner function picture.

    PubMed

    Foldi, Peter; Czirjak, Attila; Molnar, Balazs; Benedict, Mihaly

    2002-04-22

    We investigate the time evolution of Morse coherent states in the potential of the NO molecule. We present animated wave functions and Wigner functions of the system exhibiting spontaneous formation of Schrödinger-cat states at certain stages of the time evolution. These nonclassical states are coherent superpositions of two localized states corresponding to two different positions of the center of mass. We analyze the degree of nonclassicality as a function of the expectation value of the position in the initial state. Our numerical calculations are based on a novel, essentially algebraic treatment of the Morse potential.

  17. Optimal focal-plane restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1989-01-01

    Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
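
    The acquisition-time operation itself is just a convolution with a small kernel; in the sketch below the 3x3 sharpening kernel is purely illustrative, whereas the paper derives the minimum expected mean-square-error kernel values from the end-to-end frequency analysis.

        import numpy as np
        from scipy.ndimage import convolve

        kernel = np.array([[ 0.0, -0.5,  0.0],
                           [-0.5,  3.0, -0.5],
                           [ 0.0, -0.5,  0.0]])   # illustrative only

        image = np.random.default_rng(2).random((64, 64))
        restored = convolve(image, kernel, mode="nearest")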

  18. Gigantic Dzyaloshinskii-Moriya interaction in the MnBi ultrathin films

    NASA Astrophysics Data System (ADS)

    Yu, Jie-Xiang; Zang, Jiadong; Zang's Team

    The magnetic skyrmion, a swirling-like spin texture with nontrivial topology, is driven by the strong Dzyaloshinskii-Moriya (DM) interaction originating from spin-orbit coupling in inversion-symmetry-breaking systems. Here, based on first-principles calculations, we predict a new material, the MnBi ultrathin film, with gigantic DM interactions. The ratio of the DM interaction to the Heisenberg exchange is about 0.3, exceeding any value reported so far. Its high Curie temperature, high coercivity, and large perpendicular magnetoanisotropy make MnBi a good candidate for future spintronics studies. Topologically nontrivial spin textures are emergent in this system. We expect further experimental efforts will be devoted to this system.

  19. Numerical modeling of heat and mass transfer in the human eye under millimeter wave exposure.

    PubMed

    Karampatzakis, Andreas; Samaras, Theodoros

    2013-05-01

    Human exposure to millimeter wave (MMW) radiation is expected to increase in the next several years. In this work, we present a thermal model of the human eye under MMW illumination. The model takes into account the fluid dynamics of the aqueous humor and predicts a frequency-dependent reversal of its flow that also depends on the incident power density. The calculated maximum fluid velocity in the anterior chamber and the temperature rise at the corneal apex are reported for frequencies from 40 to 100 GHz and different values of incident power density. Copyright © 2013 Wiley Periodicals, Inc.

  20. Evaluation of non-animal methods for assessing skin sensitisation hazard: A Bayesian Value-of-Information analysis.

    PubMed

    Leontaridou, Maria; Gabbert, Silke; Van Ierland, Ekko C; Worth, Andrew P; Landsiedel, Robert

    2016-07-01

    This paper offers a Bayesian Value-of-Information (VOI) analysis for guiding the development of non-animal testing strategies, balancing information gains from testing with the expected social gains and costs from the adoption of regulatory decisions. Testing is assumed to have value if, and only if, the information revealed from testing triggers a welfare-improving decision on the use (or non-use) of a substance. As an illustration, our VOI model is applied to a set of five individual non-animal prediction methods used for skin sensitisation hazard assessment, seven battery combinations of these methods, and 236 sequential 2-test and 3-test strategies. Their expected values are quantified and compared to the expected value of the local lymph node assay (LLNA) as the animal method. We find that battery and sequential combinations of non-animal prediction methods reveal a significantly higher expected value than the LLNA. This holds for the entire range of prior beliefs. Furthermore, our results illustrate that the testing strategy with the highest expected value does not necessarily have to follow the order of key events in the sensitisation adverse outcome pathway (AOP). 2016 FRAME.

  1. Calculation of amorphous silica solubilities at 25° to 300°C and apparent cation hydration numbers in aqueous salt solutions using the concept of effective density of water

    USGS Publications Warehouse

    Fournier, Robert O.; Williams, Marshall L.

    1983-01-01

    The solubility of amorphous silica in aqueous salt solutions at 25° to 300°C can be calculated using information on its solubility in pure water and a model in which the activity of water in the salt solution is defined to equal the effective density, ρe, of "free" water in that solution. At temperatures of 100°C and above, ρe closely equals the product of the density of the solution times the weight fraction of water in the solution. At 25°C, a correction parameter must be applied to ρe that incorporates a term called the apparent cation hydration number, h. Because of the many assumptions and other uncertainties involved in determining values of h by the model used here, the reported numbers are not necessarily real hydration numbers even though they do agree with some published values determined by activity and diffusion methods. Whether or not h is a real hydration number, it would appear to be useful as part of a more extensive activity coefficient term that describes the departure of silica solubilities in concentrated salt solutions from the behavior expected according to the model presented here. Values of h can be calculated from measured amorphous silica solubilities in salt solutions at 25°C provided there is no complexing of dissolved silica with the dissolved salt, or if the degree of complexing is known. The previously postulated aqueous silica-sulfate complexing in aqueous Na2SO4 solutions is supported by results of the present effective-density-of-water model.

  2. Structural, electronic, and elastic properties of CuFeS2: first-principles study

    NASA Astrophysics Data System (ADS)

    Zhou, Meng; Gao, Xiang; Cheng, Yan; Chen, Xiangrong; Cai, Lingcang

    2015-03-01

    The structural, electronic, and elastic properties of CuFeS2 have been investigated using the generalized gradient approximation (GGA), GGA + U (on-site Coulomb repulsion energy), the local density approximation (LDA), and the LDA + U approach in the framework of density functional theory. It is shown that when the GGA + U formalism is used with a U value of 3 eV for the 3d state of Fe, the calculated lattice constants agree well with the available experimental and other theoretical data. Our GGA + U calculations indicate that CuFeS2 is a semiconductor with a band gap of 0.552 eV and a magnetic moment of 3.64 µB per Fe atom, consistent with the experimental results. Combined with the density of states, the band structure characteristics of CuFeS2 have been analyzed and their origins specified, revealing hybridization between the Fe-3d, Cu-3s, and S-3p states. The charge and Mulliken population analyses indicate that CuFeS2 is a covalent crystal. Moreover, the calculated elastic constants show that CuFeS2 is mechanically stable but anisotropic. The bulk modulus obtained from the elastic constants is 87.1 GPa, which agrees well with the experimental value of 91 ± 15 GPa, and better than the theoretical bulk modulus of 74 GPa obtained with the GGA method by Lazewski et al. The obtained shear modulus and Debye temperature are 21.0 GPa and 287 K, respectively, and the latter accords well with the available experimental value. We expect that our work can provide useful information for further investigation of CuFeS2 from both the experimental and theoretical sides.

  3. Glacial conditions in the Red Sea

    NASA Astrophysics Data System (ADS)

    Rohling, Eelco J.

    1994-10-01

    In this paper, results from previous studies on planktonic foraminifera, δ18O, and global sea level are combined to discuss climatic conditions in the Red Sea during the last glacial maximum (18,000 B.P.). First, the influence of the 120-m sea level lowering on the exchange transport through the strait of Bab-el-Mandab is considered. This strait is the only natural connection of the Red Sea to the open ocean. Next, the glacial Red Sea outflow salinity is estimated (about 48 parts per thousand) from the foraminiferal record. Combined, these results yield an estimate of the glacial net water deficit, which appears to have been quite similar to the present (about 2 m yr-1). Finally, a budget calculation of δ18O fluxes suggests that the glacial δ18O value of evaporation was about 50% of the present value. This is considered to have resulted from substantially increased mean wind speeds over the glacial Red Sea, which would have caused a rapid drop in the kinematic fractionation factor for 18O. The sensitivity of the calculated values for water deficit and isotopic fractionation to the various assumptions and estimates is evaluated in the discussion. Improvements are to be expected especially through research on the glacial salinity contrast between the Red Sea and the Gulf of Aden. It is argued, however, that such future improvement will likely result in a worsening of the isotopic discrepancy, thus increasing the need for an additional mechanism that influenced fractionation (such as mean wind speed). This study demonstrates the need for caution when calculating paleosalinities from δ18O records under the assumption that the modern S:δ18O relation has remained constant through time. Previously overlooked factors, such as mean wind speed, may have significantly altered that relation in the past.

  4. Do High-Ability Students Disidentify with Science? A Descriptive Study of U.S. Ninth Graders in 2009

    ERIC Educational Resources Information Center

    Andersen, Lori; Chen, Jason A.

    2016-01-01

    The present study describes science expectancy-value motivation classes within a nationally representative sample of students who were U.S. ninth graders in 2009. An expectancy-value model was the basis for science-specific profile indicators (self-efficacy, attainment value, utility value, interest-enjoyment value). Using exploratory latent class…

  5. Using Expectancy Value Theory as a Framework to Reduce Student Resistance to Active Learning: A Proof of Concept.

    PubMed

    Cooper, Katelyn M; Ashley, Michael; Brownell, Sara E

    2017-01-01

    There has been a national movement to transition college science courses from passive lectures to active learning environments. Active learning has been shown to be a more effective way for students to learn, yet there is concern that some students are resistant to active learning approaches. Although there is much discussion about student resistance to active learning, few studies have explored this topic. Furthermore, a limited number of studies have applied theoretical frameworks to student engagement in active learning. We propose using a theoretical lens of expectancy value theory to understand student resistance to active learning. In this study, we examined student perceptions of active learning after participating in 40 hours of active learning. We used the principal components of expectancy value theory to probe student experience in active learning: student perceived self-efficacy in active learning, value of active learning, and potential cost of participating in active learning. We found that students showed positive changes in the components of expectancy value theory and reported high levels of engagement in active learning, which provide proof of concept that expectancy value theory can be used to boost student perceptions of active learning and their engagement in active learning classrooms. From these findings, we have built a theoretical framework of expectancy value theory applied to active learning.

  6. Using Expectancy Value Theory as a Framework to Reduce Student Resistance to Active Learning: A Proof of Concept

    PubMed Central

    Cooper, Katelyn M.; Ashley, Michael; Brownell, Sara E.

    2017-01-01

    There has been a national movement to transition college science courses from passive lectures to active learning environments. Active learning has been shown to be a more effective way for students to learn, yet there is concern that some students are resistant to active learning approaches. Although there is much discussion about student resistance to active learning, few studies have explored this topic. Furthermore, a limited number of studies have applied theoretical frameworks to student engagement in active learning. We propose using a theoretical lens of expectancy value theory to understand student resistance to active learning. In this study, we examined student perceptions of active learning after participating in 40 hours of active learning. We used the principal components of expectancy value theory to probe student experience in active learning: student perceived self-efficacy in active learning, value of active learning, and potential cost of participating in active learning. We found that students showed positive changes in the components of expectancy value theory and reported high levels of engagement in active learning, which provide proof of concept that expectancy value theory can be used to boost student perceptions of active learning and their engagement in active learning classrooms. From these findings, we have built a theoretical framework of expectancy value theory applied to active learning. PMID:28861130

  7. An Algorithm for the Hierarchical Organization of Path Diagrams and Calculation of Components of Expected Covariance.

    ERIC Educational Resources Information Center

    Boker, Steven M.; McArdle, J. J.; Neale, Michael

    2002-01-01

    Presents an algorithm for the production of a graphical diagram from a matrix formula in such a way that its components are logically and hierarchically arranged. The algorithm, which relies on the matrix equations of J. McArdle and R. McDonald (1984), calculates the individual path components of expected covariance between variables and…
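
    The McArdle–McDonald matrix equations the annotation refers to express expected covariance in closed form, commonly written as Σ = F(I − A)⁻¹S(I − A)⁻ᵀFᵀ. Below is a minimal NumPy sketch of that formula for a hypothetical one-factor, two-indicator path model; all variable names and path values are invented for illustration:

    ```python
    import numpy as np

    # RAM matrices for a toy model: latent factor L -> observed x1, x2.
    # Variables ordered [x1, x2, L]; all values are illustrative only.
    A = np.array([[0.0, 0.0, 0.8],    # asymmetric (directed) paths
                  [0.0, 0.0, 0.6],
                  [0.0, 0.0, 0.0]])
    S = np.diag([0.36, 0.64, 1.0])    # symmetric paths: residual/latent variances
    F = np.array([[1.0, 0.0, 0.0],    # filter matrix selecting observed variables
                  [0.0, 1.0, 0.0]])

    I = np.eye(3)
    E = np.linalg.inv(I - A)          # total effects along all directed paths
    sigma = F @ E @ S @ E.T @ F.T     # expected covariance of observed variables
    print(sigma)                      # cov(x1, x2) = 0.8 * 0.6 = 0.48
    ```

    Each entry of the resulting matrix decomposes into path components (here, the covariance of x1 and x2 is the product of their loadings on L), which is the quantity the algorithm arranges hierarchically in the diagram.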

  8. Hamiltonian approach to Ehrenfest expectation values and Gaussian quantum states

    PubMed Central

    Bonet-Luz, Esther

    2016-01-01

    The dynamics of quantum expectation values is considered in a geometric setting. First, expectation values of the canonical observables are shown to be equivariant momentum maps for the action of the Heisenberg group on quantum states. Then, the Hamiltonian structure of Ehrenfest’s theorem is shown to be Lie–Poisson for a semidirect-product Lie group, named the Ehrenfest group. The underlying Poisson structure produces classical and quantum mechanics as special limit cases. In addition, quantum dynamics is expressed in the frame of the expectation values, in which the latter undergo canonical Hamiltonian motion. In the case of Gaussian states, expectation values dynamics couples to second-order moments, which also enjoy a momentum map structure. Eventually, Gaussian states are shown to possess a Lie–Poisson structure associated with another semidirect-product group, which is called the Jacobi group. This structure produces the energy-conserving variant of a class of Gaussian moment models that have previously appeared in the chemical physics literature. PMID:27279764
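
    For orientation, the Ehrenfest relations whose geometric structure the paper analyzes take the standard textbook form (this is the general result, not a formula specific to the paper):

    ```latex
    \frac{d}{dt}\langle A \rangle = \frac{1}{i\hbar}\,\langle [A, H] \rangle,
    \qquad
    \frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
    \qquad
    \frac{d}{dt}\langle \hat{p} \rangle = -\,\langle V'(\hat{x}) \rangle .
    ```

    For non-quadratic potentials the right-hand sides depend on the full quantum state rather than on ⟨x̂⟩ and ⟨p̂⟩ alone, which is why the coupling of first moments to second moments, and its closure for Gaussian states, is central to the paper.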

  9. Impaired Expected Value Computations Coupled With Overreliance on Stimulus-Response Learning in Schizophrenia.

    PubMed

    Hernaus, Dennis; Gold, James M; Waltz, James A; Frank, Michael J

    2018-04-03

    While many have emphasized impaired reward prediction error signaling in schizophrenia, multiple studies suggest that some decision-making deficits may arise from overreliance on stimulus-response systems together with a compromised ability to represent expected value. Guided by computational frameworks, we formulated and tested two scenarios in which maladaptive representations of expected value should be most evident, thereby delineating conditions that may evoke decision-making impairments in schizophrenia. In a modified reinforcement learning paradigm, 42 medicated people with schizophrenia and 36 healthy volunteers learned to select the most frequently rewarded option in a 75-25 pair: once when presented with a more deterministic (90-10) pair and once when presented with a more probabilistic (60-40) pair. Novel and old combinations of choice options were presented in a subsequent transfer phase. Computational modeling was employed to elucidate contributions from stimulus-response systems (actor-critic) and expected value (Q-learning). People with schizophrenia showed robust performance impairments with increasing value difference between two competing options, which strongly correlated with decreased contributions from expected value-based learning (Q-learning). Moreover, a subtle yet consistent contextual choice bias for the probabilistic 75 option was present in people with schizophrenia, which could be accounted for by a context-dependent reward prediction error in the actor-critic. We provide evidence that decision-making impairments in schizophrenia increase monotonically with demands placed on expected value computations. A contextual choice bias is consistent with overreliance on stimulus-response learning, which may signify a deficit secondary to the maladaptive representation of expected value. These results shed new light on conditions under which decision-making impairments may arise. Copyright © 2018 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
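
    The two model classes the study contrasts can be illustrated with minimal update rules. The sketch below is a generic single-state (bandit-style) implementation under assumed learning rates, not the authors' fitted computational model:

    ```python
    import random

    alpha = 0.1   # learning rate (hypothetical value)

    # Q-learning: tracks the expected value of each option directly.
    Q = {"opt75": 0.0, "opt25": 0.0}

    def q_update(action, reward):
        Q[action] += alpha * (reward - Q[action])

    # Actor-critic: a critic tracks overall state value; the actor learns
    # action preferences from the reward prediction error (RPE).
    V = 0.0                                   # critic's state value
    pref = {"opt75": 0.0, "opt25": 0.0}       # actor's action preferences

    def ac_update(action, reward):
        global V
        rpe = reward - V          # context-dependent RPE: the same reward gives
        V += alpha * rpe          # a smaller RPE in a richer context, which can
        pref[action] += alpha * rpe  # bias choice, as described in the abstract

    for _ in range(1000):
        a = "opt75" if random.random() < 0.5 else "opt25"
        r = 1.0 if random.random() < (0.75 if a == "opt75" else 0.25) else 0.0
        q_update(a, r)
        ac_update(a, r)

    print(Q, pref)
    ```

    The key distinction exploited by the study is visible here: Q-values converge toward the options' true expected values, while actor preferences are shaped by prediction errors relative to the context's average reward.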

  10. Comparing multistate expected damages, option price and cumulative prospect measures for valuing flood protection

    NASA Astrophysics Data System (ADS)

    Farrow, Scott; Scott, Michael

    2013-05-01

    Floods are risky events ranging from small to catastrophic. Although expected flood damages are frequently used for economic policy analysis, alternative measures such as option price (OP) and cumulative prospect value exist. The empirical magnitudes of these measures, whose theoretical ranking is ambiguous, are investigated using case study data from Baltimore City. The base-case OP measure increases mean willingness to pay over the expected damage value by about 3%; this gap widens with greater risk aversion, narrows with increased wealth, and is only slightly altered by higher limits of integration. The base cumulative prospect measure is about 46% less than expected damages, with estimates declining when alternative parameters are used. The method of aggregation is shown to be important in the cumulative prospect case and can lead to an estimate up to 41% larger than expected damages. Expected damages remain a plausible, and the most easily computed, measure for analysts.
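
    The mechanics of the first and third measures can be sketched numerically: expected damages are a probability-weighted sum, while the cumulative prospect value reweights cumulative probabilities with a weighting function such as Tversky and Kahneman's. The damage distribution and parameters below are hypothetical, not the Baltimore City data, and as the abstract notes the CPT result is highly sensitive to parameters and aggregation:

    ```python
    # Hypothetical flood-damage distribution: (annual probability, damage in $M).
    events = [(0.10, 1.0), (0.02, 10.0), (0.002, 100.0)]

    expected_damages = sum(p * d for p, d in events)

    def w(p, gamma=0.69):
        """Tversky-Kahneman probability weighting for losses (gamma assumed)."""
        return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

    # Cumulative prospect value over losses: rank outcomes from worst to best
    # and weight each by differences of the weighted cumulative probability.
    cpt_value, cum = 0.0, 0.0
    for p, d in sorted(events, key=lambda e: -e[1]):   # worst loss first
        cpt_value += (w(cum + p) - w(cum)) * d          # linear value fn assumed
        cum += p

    print(f"expected damages: {expected_damages:.2f}  CPT value: {cpt_value:.2f}")
    ```

    With these invented numbers the CPT value exceeds expected damages because probability weighting overweights the rare catastrophic loss; with the paper's parameters and value-function curvature the ordering reverses, which is precisely the sensitivity the abstract reports.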

  11. Iodine intake by adult residents of a farming area in Iwate Prefecture, Japan, and the accuracy of estimated iodine intake calculated using the Standard Tables of Food Composition in Japan.

    PubMed

    Nakatsuka, Haruo; Chiba, Keiko; Watanabe, Takao; Sawatari, Hideyuki; Seki, Takako

    2016-11-01

    Iodine intake by adults in farming districts in northeastern Japan was evaluated by two methods: (1) calculation based on the government-approved food composition tables and (2) instrumental measurement. The correlation between the two values and a regression model for calibrating calculated values are presented. Iodine intake was calculated, using the values in the Japan Standard Tables of Food Composition (FCT), through the analysis of duplicate samples of complete 24-h food consumption for 90 adult subjects. Where no value for iodine content was available in the FCT, it was assumed to be zero for that food item (calculated values). Iodine content was also measured by ICP-MS (measured values). Calculated and measured values yielded geometric means (GM) of 336 and 279 μg/day, respectively; the difference was not statistically significant (p > 0.05). The correlation coefficient was 0.646 (p < 0.05). Given this correlation, a simple regression line can be applied to estimate the measured value from the calculated value. A survey of the literature suggests that the values in this study are similar to those previously reported for Japan and higher than those for other countries in Asia. In summary, the iodine intake of Japanese adults was 336 μg/day (GM, calculated) and 279 μg/day (GM, measured). The two values correlated well enough (r = 0.646) that a regression model (Y = 130.8 + 1.9479X, where X and Y are the measured and calculated values, respectively) could be used to calibrate calculated values.
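
    Since the fitted line is stated as Y = 130.8 + 1.9479X with Y the calculated and X the measured value, calibrating a calculated value back to an estimated measured intake means inverting that line. A minimal sketch (the coefficients are from the abstract; the example input is invented):

    ```python
    def estimate_measured(calculated_ug_per_day: float) -> float:
        """Invert Y = 130.8 + 1.9479 * X  (Y = calculated, X = measured)."""
        return (calculated_ug_per_day - 130.8) / 1.9479

    # Hypothetical calculated intake of 400 ug/day:
    print(f"{estimate_measured(400.0):.0f} ug/day")   # ~138 ug/day
    # Note: the line was fit to individual subjects, so group geometric
    # means will not map exactly through this inversion.
    ```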

  12. Implications for Planetary System Formation from Interstellar Object 1I/2017 U1 (‘Oumuamua)

    NASA Astrophysics Data System (ADS)

    Trilling, David E.; Robinson, Tyler; Roegge, Alissa; Chandler, Colin Orion; Smith, Nathan; Loeffler, Mark; Trujillo, Chad; Navarro-Meza, Samuel; Glaspie, Lori M.

    2017-12-01

    The recently discovered minor body 1I/2017 U1 (‘Oumuamua) is the first known object in our solar system that is not bound by the Sun’s gravity. Its hyperbolic orbit (eccentricity greater than unity) strongly suggests that it originated outside our solar system; its red color is consistent with substantial space weathering experienced over a long interstellar journey. We carry out a simple calculation of the probability of detecting such an object. We find that the observed detection rate of 1I-like objects can be satisfied if the average mass of ejected material from nearby stars during the process of planetary formation is ~20 Earth masses, similar to the expected value for our solar system. The current detection rate of such interstellar interlopers is estimated to be 0.2 yr⁻¹, and the expected number of detections over the past few years is almost exactly one. When the Large Synoptic Survey Telescope begins its wide, fast, deep all-sky survey, the detection rate will increase to 1 yr⁻¹. Those expected detections will provide further constraints on nearby planetary system formation through a better estimate of the number and properties of interstellar objects.
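
    The "almost exactly one" expected detection follows from treating detections as a Poisson process at the stated rate. A minimal sketch, assuming an effective survey baseline of ~5 years (the baseline value is an assumption here, not stated in the abstract):

    ```python
    import math

    rate = 0.2        # detections per year (from the abstract)
    years = 5.0       # assumed effective survey duration

    lam = rate * years                      # expected number of detections
    p_at_least_one = 1.0 - math.exp(-lam)   # Poisson P(N >= 1)

    print(f"expected detections: {lam:.2f}, P(>=1): {p_at_least_one:.2f}")
    # expected detections: 1.00, P(>=1): 0.63
    ```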

  13. [Estimation of infant mortality and life expectancy in the time of the Roman Empire: a methodological examination].

    PubMed

    Langner, G

    1998-01-01

    "The first available written source in human history relating to the description of the life expectancy of a living population is a legal text which originates from the Roman jurist Ulpianus (murdered in AD 228). In contrast to the prevailing opinion in demography, I not only do consider the text to be of ¿historical interest'...but to be a document of inestimable worth for evaluating the population survival probability in the Roman empire. The criteria specified by Ulpianus are in line with the ¿pan-human' survival function as described by modern model life tables, when based on adulthood. Values calculated from tomb inscriptions follow the lowest level of the model life tables as well and support Ulpianus' statements. The specifications by Ulpianus for the population of the Roman world empire as a whole in the ¿best fit' with modern life tables lead to an average level of 20 years of life expectancy. As a consequence a high infant mortality rate of almost 400 [per thousand] can be concluded resulting in no more than three children at the age of five in an average family in spite of a high fertility rate." (EXCERPT)

  14. Interpreting the Strongly Lensed Supernova iPTF16geu: Time Delay Predictions, Microlensing, and Lensing Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, Anupreeta; Oguri, Masamune; More, Surhud

    2017-02-01

    We present predictions for time delays between multiple images of the gravitationally lensed supernova iPTF16geu, which was recently discovered by the intermediate Palomar Transient Factory (iPTF). As the supernova is of Type Ia, for which the intrinsic luminosity is usually well known, accurately measured time delays of the multiple images could provide tight constraints on the Hubble constant. According to our lens mass models constrained by the Hubble Space Telescope F814W image, we expect the maximum relative time delay to be less than a day, which is consistent with the maximum of 100 hr reported by Goobar et al. but places a stringent upper limit. Furthermore, the fluxes of most of the supernova images depart from expected values, suggesting that they are affected by microlensing. The microlensing timescales are small enough that they may pose significant problems for measuring the time delays reliably. Our lensing rate calculation indicates that the occurrence of a lensed SN in iPTF is likely. However, the observed total magnification of iPTF16geu is larger than expected given its redshift. This may be a further indication of ongoing microlensing in this system.

  15. Expected rate of fisheries-induced evolution is slow

    PubMed Central

    Andersen, Ken H.; Brander, Keith

    2009-01-01

    Commercial fisheries exert high mortalities on the stocks they exploit, and the consequent selection pressure leads to fisheries-induced evolution of growth rate, age and size at maturation, and reproductive output. Productivity and yields may decline as a result, but little is known about the rate at which such changes are likely to occur. Fisheries-induced evolution of exploited populations has recently become a subject of concern for policy makers, fisheries managers, and the general public, with prominent calls for mitigating management action. We make a general evolutionary impact assessment of fisheries by calculating the expected rate of fisheries-induced evolution and the consequent changes in yield. Rates of evolution are expected to be ≈0.1–0.6% per year, and the consequent reductions in fisheries yield are <0.7% per year. These rates are at least a factor of 5 lower than published values based on experiments and analyses of population time series, and we explain why the published rates may be overestimates. Dealing with evolutionary effects of fishing is less urgent than reducing the direct detrimental effects of overfishing on exploited stocks and on their marine ecosystems. PMID:19564596
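
    The abstract does not spell out its calculation here, but expected rates of evolution of this kind are conventionally built on the breeder's equation, response = heritability × selection differential. The sketch below uses that textbook relation with invented parameter values, purely to show how a per-year percentage like the quoted ≈0.1–0.6% arises; it is not the authors' assessment model:

    ```python
    # Breeder's equation sketch: R = h2 * S, per generation, converted to a
    # percent change in the trait per year. All parameter values are invented.
    h2 = 0.2                 # heritability of the trait (assumed)
    S = 0.05                 # selection differential, as a fraction of trait mean
    generation_time = 4.0    # years per generation (assumed)

    R_per_generation = h2 * S
    percent_per_year = 100.0 * R_per_generation / generation_time
    print(f"expected evolutionary rate: {percent_per_year:.2f}% per year")
    # -> 0.25% per year, inside the 0.1-0.6% range quoted above
    ```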

  16. Gravitational lensing by an ensemble of isothermal galaxies

    NASA Technical Reports Server (NTRS)

    Katz, Neal; Paczynski, Bohdan

    1987-01-01

    Calculations of 28,000 models of gravitational lensing of a distant quasar by an ensemble of randomly placed galaxies, each having a singular isothermal mass distribution, are reported. The average surface mass density was 0.2 of the critical value in all models. It is found that the surface mass density averaged over the area of the smallest circle that encompasses the multiple images is 0.82, only slightly smaller than expected from the simple analytical model of Turner et al. (1984). The probability of obtaining multiple images is also as large as expected analytically. Gravitational lensing is dominated by the matter in the beam, i.e., by the beam convergence. Cases where the multiple imaging is due to asymmetry in the mass distribution (i.e., due to shear) are very rare. Therefore, the observed gravitational-lens candidates for which no lensing object has been detected between the images cannot be a result of an asymmetric mass distribution outside the images, at least in a model with randomly distributed galaxies. A surprisingly large number of large separations between the multiple images is found: up to 25 percent of multiple images have angular separations 2 to 4 times larger than expected from a simple analytical model.

  17. Overcoming Learning Aversion in Evaluating and Managing Uncertain Risks.

    PubMed

    Cox, Louis Anthony Tony

    2015-10-01

    Decision biases can distort cost-benefit evaluations of uncertain risks, leading to risk management policy decisions with predictably high retrospective regret. We argue that well-documented decision biases encourage learning aversion, or predictably suboptimal learning and premature decision making in the face of high uncertainty about the costs, risks, and benefits of proposed changes. Biases such as narrow framing, overconfidence, confirmation bias, optimism bias, ambiguity aversion, and hyperbolic discounting of the immediate costs and delayed benefits of learning contribute to deficient individual and group learning, avoidance of information seeking, underestimation of the value of further information, and hence to needlessly inaccurate risk-cost-benefit estimates and suboptimal risk management decisions. In practice, such biases can create predictable regret in the selection of potential risk-reducing regulations. Low-regret learning strategies based on computational reinforcement learning models can potentially overcome some of these suboptimal decision processes by replacing aversion to uncertain probabilities with actions calculated to balance exploration (deliberate experimentation and uncertainty reduction) and exploitation (taking actions to maximize the sum of expected immediate reward, expected discounted future reward, and the value of information). We discuss the proposed framework for understanding and overcoming learning aversion, and for implementing low-regret learning strategies, using the regulation of air pollutants with uncertain health effects as an example. © 2015 Society for Risk Analysis.
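
    The exploration-exploitation balance invoked here can be made concrete with a standard low-regret bandit rule such as UCB1, which adds an uncertainty bonus to each action's estimated reward so that under-explored actions keep getting tried. This is a generic illustration of low-regret learning, not the regulatory decision model proposed in the paper:

    ```python
    import math
    import random

    # UCB1: pick the action maximizing estimated reward + exploration bonus.
    true_means = [0.4, 0.55, 0.5]          # unknown to the learner (hypothetical)
    counts = [0, 0, 0]
    estimates = [0.0, 0.0, 0.0]

    for t in range(1, 5001):
        if 0 in counts:                     # try every action once first
            a = counts.index(0)
        else:
            ucb = [estimates[i] + math.sqrt(2 * math.log(t) / counts[i])
                   for i in range(3)]
            a = ucb.index(max(ucb))         # exploit + explore in one criterion
        reward = 1.0 if random.random() < true_means[a] else 0.0
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]

    print(counts, [round(e, 3) for e in estimates])
    ```

    The bonus term shrinks as an action is sampled, so the rule deliberately pays a bounded short-term cost of experimentation in exchange for avoiding the large long-run regret of premature commitment, which is the behavior the paper contrasts with learning aversion.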

  18. Ground-state phase diagram of the repulsive fermionic t -t' Hubbard model on the square lattice from weak coupling

    NASA Astrophysics Data System (ADS)

    Šimkovic, Fedor; Liu, Xuan-Wen; Deng, Youjin; Kozik, Evgeny

    2016-08-01

    We obtain a complete ground-state phase diagram, numerically exact in the weak-coupling limit (U → 0), of the repulsive fermionic Hubbard model on the square lattice for filling factors 0…

  19. Attitudes and values expected of public health nursing students at graduation: A delphi study.

    PubMed

    Okura, Mika; Takizawa, Hiroko

    2018-06-01

    The skills and knowledge components of the competencies expected of public health nursing (PHN) students at graduation have been clarified; however, the expected attitudes and values have not yet been studied in Japan. The objective of this study was to identify, and reach an expert consensus on, the attitudes and values expected of PHN students at graduation. The survey was conducted as a two-stage Delphi study. We selected the following experts: 248 university faculty members in public health nursing as academic experts, and 250 public health nurses who were also experienced clinical instructors as clinical experts. The Round 1 mailed survey used a questionnaire about the necessity and importance of attitudes and values; 211 experts responded (42.4%; clinical: n = 124, academic: n = 87). The Round 2 survey included 60.2% of the Round 1 participants (clinical: n = 73, academic: n = 54). Descriptive statistics were calculated, with multiple imputation used for missing data. We identified a total of 13 attitudes and values expected of PHN students and reached ≥90% consensus on all but one item. Regarding the expected achievement level at graduation, there was no difference between clinical and academic experts except for one item. Consensus was thus clearly achieved on 13 attitudes and values expected of PHN students, as well as on their importance and the expected achievement level at graduation. In the future, it will be important to examine strategies that can effectively develop these attitudes and values through basic and continuing education. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Theoretical Accuracy of Along-Track Displacement Measurements from Multiple-Aperture Interferometry (MAI)

    PubMed Central

    Jung, Hyung-Sup; Lee, Won-Jin; Zhang, Lei

    2014-01-01

    Precise along-track displacement measurements can be made with multiple-aperture interferometry (MAI). The empirical accuracies of MAI measurements are about 6.3 and 3.57 cm for ERS and ALOS data, respectively. However, these empirical accuracies cannot be generalized to arbitrary interferometric pairs because they depend strongly on the processing parameters and on the coherence of the SAR data used. A theoretical formula exists to calculate the expected MAI measurement accuracy from the system and processing parameters and the interferometric coherence. In this paper, we investigate the expected MAI measurement accuracy on the basis of this theoretical formula for existing X-, C-, and L-band satellite SAR systems, and test the agreement between the expected and empirical MAI measurement accuracies. Expected accuracies of about 2–3 cm and 3–4 cm (γ = 0.8) are calculated for the X- and L-band SAR systems, respectively. For the C-band systems, the expected accuracy of Radarsat-2 ultra-fine is about 3–4 cm and that of Sentinel-1 IW is about 27 cm (γ = 0.8). The results indicate that the expected MAI measurement accuracy of a given interferometric pair can be easily calculated using the theoretical formula. PMID:25251408
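
    The kind of formula the paper evaluates can be sketched as follows: multilook interferometric phase noise falls roughly as σ_φ = sqrt(1 − γ²) / (γ·sqrt(2N)), and MAI converts phase to along-track displacement through a factor l/(4πn), with l the effective antenna length and n the fractional sub-aperture separation. The sketch below uses that commonly cited approximation with a sqrt(2) penalty for differencing the forward- and backward-looking interferograms; the exact expression and all parameter values are assumptions here, not taken verbatim from the paper:

    ```python
    import math

    def mai_accuracy(gamma, n_looks, ant_len_m, split_frac,
                     diff_factor=math.sqrt(2)):
        """Approximate expected MAI along-track accuracy in meters.

        Phase noise uses the standard multilook bound; diff_factor accounts
        for differencing the two sub-aperture interferograms. This is an
        assumed form, not the paper's exact formula.
        """
        sigma_phi = math.sqrt(1 - gamma**2) / (gamma * math.sqrt(2 * n_looks))
        return ant_len_m / (4 * math.pi * split_frac) * diff_factor * sigma_phi

    # Illustrative L-band case: ~8.9 m antenna, half-aperture split,
    # gamma = 0.8, 1000 effective looks (all values assumed).
    print(f"{100 * mai_accuracy(0.8, 1000, 8.9, 0.5):.1f} cm")   # ~3.4 cm
    ```

    With these assumed inputs the result lands in the 3–4 cm range quoted for L-band at γ = 0.8, illustrating how accuracy scales with coherence and the number of looks.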
