Moayeri, Foruhar; Hsueh, Ya-Seng Arthur; Clarke, Philip; Hua, Xinyang; Dunt, David
2016-06-01
Chronic obstructive pulmonary disease (COPD) has a considerable impact on the quality of life and well-being of patients. The health state utility value (HSUV) is a recognized measure for health economic appraisals and is extensively used as an indicator in decision-making studies. This study is a systematic review of the literature that aimed to estimate the mean utility value in COPD using meta-analysis and to explore the degree of heterogeneity in utility values across a variety of clinical and study characteristics. The literature review covers studies that used the EQ-5D to estimate utility values for patient-level research in COPD. Studies that reported utility values elicited by the EQ-5D in COPD patients were selected for random-effects meta-analysis addressing inter-study heterogeneity, with subgroup analyses. Thirty-two studies were included in the general utility meta-analysis. The estimated general utility value was 0.673 (95% CI 0.653 to 0.693). Meta-analyses of utility values by COPD stage showed the influence of airway obstruction on utility values, which ranged from 0.820 (95% CI 0.767 to 0.872) for stage I to 0.624 (95% CI 0.571 to 0.677) for stage IV. There was substantial heterogeneity in utility values: I² = 97.7%. More accurate measurement of utility values in COPD is needed to refine valid and generalizable HSUV scores. Given the limited success of the factors studied in reducing heterogeneity, an approach needs to be developed for how best to use mean utility values for COPD in health economic evaluation.
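The random-effects pooling used in this kind of review can be sketched with the DerSimonian-Laird estimator and an I² heterogeneity statistic. The study means and standard errors below are illustrative, not the review's data:

```python
import math

def random_effects_pool(means, ses):
    """DerSimonian-Laird random-effects pooling of per-study utility estimates."""
    w = [1.0 / se**2 for se in ses]                          # inverse-variance weights
    fixed = sum(wi * m for wi, m in zip(w, means)) / sum(w)  # fixed-effect mean
    q = sum(wi * (m - fixed)**2 for wi, m in zip(w, means))  # Cochran's Q
    df = len(means) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                            # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0      # I² heterogeneity (%)
    w_re = [1.0 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * m for wi, m in zip(w_re, means)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2

# Hypothetical per-study mean EQ-5D utilities and standard errors
means = [0.65, 0.70, 0.62, 0.71, 0.68]
ses = [0.02, 0.015, 0.025, 0.03, 0.02]
pooled, ci, i2 = random_effects_pool(means, ses)
```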
Photovoltaics as a terrestrial energy source. Volume 2: System value
NASA Technical Reports Server (NTRS)
Smith, J. L.
1980-01-01
Assumptions and techniques employed by the electric utility industry and other electricity planners to make estimates of the future value of photovoltaic (PV) systems interconnected with U.S. electric utilities were examined. Existing estimates of PV value and their interpretation and limitations are discussed. PV value is defined as the marginal private savings accruing to potential PV owners. For utility-owned PV systems, these values are shown to be the after-tax savings in conventional fuel and capacity displaced by the PV output. For non-utility-owned (distributed) systems, the utility's savings in fuel and capacity must first be translated through the electric rate structure (prices) to the potential PV system owner. Base-case estimates of the average value of PV systems to U.S. utilities are presented. The relationship of these results to the PV Program price goals and current energy policy is discussed; the usefulness of PV output quantity goals is also reviewed.
Gries, Katharine S; Regier, Dean A; Ramsey, Scott D; Patrick, Donald L
2017-06-01
To develop a statistical model generating utility estimates for prostate cancer-specific health states, using preference weights derived from the perspectives of prostate cancer patients, men at risk for prostate cancer, and society. Utility estimate values were calculated using standard gamble (SG) methodology. Study participants valued 18 prostate-specific health states defined by five attributes: sexual function, urinary function, bowel function, pain, and emotional well-being. The appropriateness of each candidate model (linear regression, mixed effects, or generalized estimating equation) for generating prostate cancer utility estimates was determined by paired t-tests comparing observed and predicted values. SG utility estimates corrected for loss aversion were calculated based on prospect theory. In total, 132 study participants assigned values to the health states (n = 40 men at risk for prostate cancer; n = 43 men with prostate cancer; n = 49 general population), yielding 792 valuations (six health states for each of the 132 participants). The most appropriate model for the classification system was a mixed effects model; correlations between the mean observed and predicted utility estimates were greater than 0.80 for each perspective. Developing a health-state classification system with preference weights for three different perspectives demonstrates the relative importance of main effects between populations. The predicted values for men with prostate cancer support the hypothesis that patients experiencing the disease state assign higher utility estimates to health states and that there is a difference between valuations made by patients and by the general population.
Robinson, Angela; Spencer, Anne; Moffatt, Peter
2015-04-01
There has been recent interest in using the discrete choice experiment (DCE) method to derive health state utilities for use in quality-adjusted life year (QALY) calculations, but challenges remain. We set out to develop a risk-based DCE approach to derive utility values for health states that allowed 1) utility values to be anchored directly to normal health and death and 2) worse than dead health states to be assessed in the same manner as better than dead states. Furthermore, we set out to estimate alternative models of risky choice within a DCE model. A survey was designed that incorporated a risk-based DCE and a "modified" standard gamble (SG). Health state utility values were elicited for 3 EQ-5D health states assuming "standard" expected utility (EU) preferences. The DCE model was then generalized to allow for rank-dependent expected utility (RDU) preferences, thereby allowing for probability weighting. A convenience sample of 60 students was recruited and data collected in small groups. Under the assumption of "standard" EU preferences, the utility values derived within the DCE corresponded fairly closely to the mean results from the modified SG. Under the assumption of RDU preferences, the utility values estimated are somewhat lower than under the assumption of standard EU, suggesting that the latter may be biased upward. Applying the correct model of risky choice is important whether a modified SG or a risk-based DCE is deployed. It is, however, possible to estimate a probability weighting function within a DCE and estimate "unbiased" utility values directly, which is not possible within a modified SG. We conclude by setting out the relative strengths and weaknesses of the 2 approaches in this context. © The Author(s) 2014.
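The rank-dependent correction described above amounts to replacing the SG indifference probability with a weighted probability, which lowers high-probability utility estimates. A minimal sketch assuming the Tversky-Kahneman weighting function with the commonly cited γ = 0.61 (the abstract does not specify which weighting function was estimated, so this is illustrative):

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# An SG indifference probability of 0.9 implies u = 0.9 under expected
# utility, but u = w(0.9) < 0.9 once probability weighting is allowed.
p = 0.9
u_eu = p              # "standard" EU reading of the SG response
u_rdu = tk_weight(p)  # RDU reading: lower, consistent with the abstract
```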
Estimating the relative utility of screening mammography.
Abbey, Craig K; Eckstein, Miguel P; Boone, John M
2013-05-01
The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trials (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
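The slope-to-utility relation the study relies on can be rearranged to recover relative utility from disease prevalence and the ROC slope at the assumed-optimal operating point. A sketch under that assumption; the slope value here is illustrative, chosen near unity as the abstract reports:

```python
def relative_utility(prevalence, roc_slope):
    """Relative utility implied by the ROC slope at the assumed-optimal
    operating point: U_rel = (1 - prevalence) / (prevalence * slope)."""
    return (1 - prevalence) / (prevalence * roc_slope)

# DMIST-scale prevalence (0.59% at 365-day verification) with an
# optimal ROC slope near unity reproduces a relative utility near 162.
u_rel = relative_utility(0.0059, 1.04)
```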
Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J
2008-01-01
Background: Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods: Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results: Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion: Use of a statistical model such as conditional (multinomial) regression to anchor quality-of-life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory.
This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358
Petrou, Stavros; Kwon, Joseph; Madan, Jason
2018-05-10
Economic analysts are increasingly likely to rely on systematic reviews and meta-analyses of health state utility values to inform the parameter inputs of decision-analytic modelling-based economic evaluations. Beyond the context of economic evaluation, evidence from systematic reviews and meta-analyses of health state utility values can be used to inform broader health policy decisions. This paper provides practical guidance on how to conduct a systematic review and meta-analysis of health state utility values. The paper outlines a number of stages in conducting a systematic review, including identifying the appropriate evidence, study selection, data extraction and presentation, and quality and relevance assessment. The paper outlines three broad approaches that can be used to synthesise multiple estimates of health utilities for a given health state or condition, namely fixed-effect meta-analysis, random-effects meta-analysis and mixed-effects meta-regression. Each approach is illustrated by a synthesis of utility values for a hypothetical decision problem, and software code is provided. The paper highlights a number of methodological issues pertinent to the conduct of meta-analysis or meta-regression. These include the importance of limiting synthesis to 'comparable' utility estimates, for example those derived using common utility measurement approaches and sources of valuation; the effects of reliance on limited or poorly reported published data from primary utility assessment studies; the use of aggregate outcomes within analyses; approaches to generating measures of uncertainty; handling of median utility values; challenges surrounding the disentanglement of utility estimates collected serially within the context of prospective observational studies or prospective randomised trials; challenges surrounding the disentanglement of intervention effects; and approaches to measuring model validity. 
Areas of methodological debate and avenues for future research are highlighted.
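One of the synthesis approaches the guidance names, meta-regression, reduces in its simplest form to inverse-variance weighted least squares of utility estimates on a study-level covariate. A minimal sketch; the studies, covariate, and values are hypothetical:

```python
def wls_meta_regression(y, x, ses):
    """Inverse-variance weighted least-squares meta-regression of
    study-level utility estimates (y) on a single covariate (x)."""
    w = [1.0 / se**2 for se in ses]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx)**2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical studies: y = mean utility, x = 1 if valued with TTO, 0 if VAS
y = [0.70, 0.72, 0.60, 0.62, 0.59]
x = [1, 1, 0, 0, 0]
ses = [0.02, 0.02, 0.03, 0.02, 0.03]
intercept, slope = wls_meta_regression(y, x, ses)
```

The slope estimates the systematic difference between valuation methods, which is the kind of 'comparability' issue the paper warns about when pooling.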
Hodgson, Robert; Reason, Timothy; Trueman, David; Wickstead, Rose; Kusel, Jeanette; Jasilek, Adam; Claxton, Lindsay; Taylor, Matthew; Pulikottil-Jacob, Ruth
2017-10-01
The estimation of utility values for the economic evaluation of therapies for wet age-related macular degeneration (AMD) is a particular challenge. Previous economic models in wet AMD have been criticized for failing to capture the bilateral nature of wet AMD by modelling visual acuity (VA) and utility values associated with the better-seeing eye only. Here we present a de novo regression analysis using generalized estimating equations (GEE) applied to a previous dataset of time trade-off (TTO)-derived utility values from a sample of the UK population that wore contact lenses to simulate visual deterioration in wet AMD. This analysis allows utility values to be estimated as a function of VA in both the better-seeing eye (BSE) and worse-seeing eye (WSE). VAs in both the BSE and WSE were found to be statistically significant (p < 0.05) when regressed separately. When included without an interaction term, only the coefficient for VA in the BSE was significant (p = 0.04), but when an interaction term between VA in the BSE and WSE was included, only the constant term (mean TTO utility value) was significant, potentially a result of the collinearity between the VA of the two eyes. The lack of both formal model fit statistics from the GEE approach and theoretical knowledge to support the superiority of one model over another make it difficult to select the best model. Limitations of this analysis arise from the potential influence of collinearity between the VA of both eyes, and the use of contact lenses to reflect VA states to obtain the original dataset. Whilst further research is required to elicit more accurate utility values for wet AMD, this novel regression analysis provides a possible source of utility values to allow future economic models to capture the quality of life impact of changes in VA in both eyes. Novartis Pharmaceuticals UK Limited.
Valuing SF-6D Health States Using a Discrete Choice Experiment.
Norman, Richard; Viney, Rosalie; Brazier, John; Burgess, Leonie; Cronin, Paula; King, Madeleine; Ratcliffe, Julie; Street, Deborah
2014-08-01
SF-6D utility weights are conventionally produced using a standard gamble (SG). SG-derived weights consistently demonstrate a floor effect not observed with other elicitation techniques. Recent advances in discrete choice methods have allowed estimation of utility weights. The objective was to produce Australian utility weights for the SF-6D and to explore the application of discrete choice experiment (DCE) methods in this context. We hypothesized that weights derived using this method would reflect the largely monotonic construction of the SF-6D. We designed an online DCE and administered it to an Australia-representative online panel (n = 1017). A range of specifications investigating nonlinear preferences with respect to additional life expectancy were estimated using a random-effects probit model. The preferred model was then used to estimate a preference index such that full health and death were valued at 1 and 0, respectively, to provide an algorithm for Australian cost-utility analyses. Physical functioning, pain, mental health, and vitality were the largest drivers of utility weights. Combining levels to remove illogical orderings did not lead to a poorer model fit. Relative to international SG-derived weights, the range of utility weights was larger, with 5% of health states valued below zero. DCEs can be used to investigate preferences for health profiles and to estimate utility weights for multi-attribute utility instruments. Australian cost-utility analyses can now use domestic SF-6D weights. The comparability of DCE results to those using other elicitation methods for estimating utility weights for quality-adjusted life-year calculations should be further investigated. © The Author(s) 2013.
Mapping from disease-specific measures to health-state utility values in individuals with migraine.
Gillard, Patrick J; Devine, Beth; Varon, Sepideh F; Liu, Lei; Sullivan, Sean D
2012-05-01
The objective of this study was to develop empirical algorithms that estimate health-state utility values from disease-specific quality-of-life scores in individuals with migraine. Data from a cross-sectional, multicountry study were used. Individuals with episodic and chronic migraine were randomly assigned to training or validation samples. Spearman's correlation coefficients between paired EuroQol five-dimensional (EQ-5D) questionnaire utility values and both Headache Impact Test (HIT-6) scores and Migraine-Specific Quality-of-Life Questionnaire version 2.1 (MSQ) domain scores (role restrictive, role preventive, and emotional function) were examined. Regression models were constructed to estimate EQ-5D questionnaire utility values from the HIT-6 score or the MSQ domain scores. Preferred algorithms were confirmed in the validation samples. In episodic migraine, the preferred HIT-6 and MSQ algorithms explained 22% and 25% of the variance (R(2)) in the training samples, respectively, and had similar prediction errors (root mean square errors of 0.30). In chronic migraine, the preferred HIT-6 and MSQ algorithms explained 36% and 45% of the variance in the training samples, respectively, and had similar prediction errors (root mean square errors 0.31 and 0.29). In episodic and chronic migraine, no statistically significant differences were observed between the mean observed and the mean estimated EQ-5D questionnaire utility values for the preferred HIT-6 and MSQ algorithms in the validation samples. The relationship between the EQ-5D questionnaire and the HIT-6 or the MSQ is adequate to use regression equations to estimate EQ-5D questionnaire utility values. The preferred HIT-6 and MSQ algorithms will be useful in estimating health-state utilities in migraine trials in which no preference-based measure is present. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
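Mapping algorithms of this kind are, at their simplest, fitted regressions from the disease-specific score to the utility value, validated by prediction error. A sketch using simple OLS on hypothetical HIT-6/EQ-5D pairs; the coefficients produced here are not the study's published algorithm:

```python
import math

def fit_simple_ols(x, y):
    """Ordinary least squares y = a + b*x, the basic form of a mapping
    from a disease-specific score to a utility value."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx)**2 for xi in x)
    a = my - b * mx
    return a, b

def rmse(pred, obs):
    """Root mean square error, the validation metric used in the study."""
    return math.sqrt(sum((p - o)**2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical training pairs: HIT-6 score (higher = worse) vs EQ-5D utility
hit6 = [50, 55, 60, 64, 68, 72]
eq5d = [0.90, 0.85, 0.75, 0.70, 0.55, 0.50]
a, b = fit_simple_ols(hit6, eq5d)
pred = [a + b * s for s in hit6]
err = rmse(pred, eq5d)
```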
Kharroubi, Samer A
2018-06-05
Experimental studies to develop valuations of health state descriptive systems like the EQ-5D or SF-6D need to be conducted in different countries, because social and cultural differences are likely to lead to systematically different valuations. There is scope to utilize the evidence from one country to help with the design and analysis of a study in another, enabling utility estimates for the second country to be generated much more precisely than would be possible by collecting and analyzing that country's data alone. We analyze SF-6D valuation data elicited from representative samples of the Hong Kong (HK) and United Kingdom (UK) general adult populations through the use of the standard gamble technique to value 197 and 249 health states, respectively. We apply a nonparametric Bayesian model to estimate a HK value set, using the UK dataset as an informative prior to improve its estimation. Estimates are compared with a HK value set estimated using HK values alone, using mean predictions and root mean square error. The novel method of modelling utility functions permitted the UK valuations to contribute significant prior information to the Hong Kong analysis. The results suggest that using HK data alongside the existing UK data produces better HK utility estimates than using the HK study data by itself. These promising results suggest that existing preference data could be combined with a valuation study in a new country to generate preference weights, making own-country value sets more achievable for low- and middle-income countries. Further research is encouraged.
Li, Li; Nguyen, Kim-Huong; Comans, Tracy; Scuffham, Paul
2018-04-01
Several utility-based instruments have been applied in cost-utility analysis to assess health state values for people with dementia. Nevertheless, concerns and uncertainty regarding their performance for people with dementia have been raised. The aims were to assess the performance of available utility-based instruments for people with dementia by comparing their psychometric properties, and to explore factors that cause variations in the reported health state values generated from those instruments by conducting meta-regression analyses. A literature search was conducted and psychometric properties were synthesized to demonstrate the overall performance of each instrument. When available, health state values and variables such as the type of instrument and cognitive impairment levels were extracted from each article. A meta-regression analysis was undertaken and available covariates were included in the models. A total of 64 studies providing preference-based values were identified and included. The EuroQol five-dimension questionnaire demonstrated the best combination of feasibility, reliability, and validity. Meta-regression analyses suggested that significant differences exist between instruments, types of respondent, and modes of administration, and that the resulting variations in estimated utility values influence incremental quality-adjusted life-year calculations. This review finds that the EuroQol five-dimension questionnaire is the most valid utility-based instrument for people with dementia, but that it may need to be replaced by other instruments under certain circumstances. Although no pooled utility estimates are reported in the article, the meta-regression analyses show that variations in the utility estimates produced by different instruments have an impact on cost-utility analysis, potentially altering the decision-making process in some circumstances. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Review of utility values for economic modeling in type 2 diabetes.
Beaudet, Amélie; Clegg, John; Thuresson, Per-Olof; Lloyd, Adam; McEwan, Phil
2014-06-01
Economic analysis in type 2 diabetes mellitus (T2DM) requires an assessment of the effect of a wide range of complications. The objective of this article was to identify a set of utility values consistent with the National Institute for Health and Care Excellence (NICE) reference case and to critically discuss and illustrate challenges in creating such a utility set. A systematic literature review was conducted to identify studies reporting utility values for relevant complications. The methodology of each study was assessed for consistency with the NICE reference case. A suggested set of utility values applicable to modeling was derived, giving preference to studies reporting multiple complications and correcting for comorbidity. The review considered 21 relevant diabetes complications. A total of 16,574 articles were identified; after screening, 61 articles were assessed for methodological quality. Nineteen articles met NICE criteria, reporting utility values for 20 of 21 relevant complications. For renal transplant, because no articles meeting NICE criteria were identified, two articles using other methodologies were included. Index value estimates for T2DM without complication ranged from 0.711 to 0.940. Utility decrement associated with complications ranged from 0.014 (minor hypoglycemia) to 0.28 (amputation). Limitations associated with the selection of a utility value for use in economic modeling included variability in patient recruitment, heterogeneity in statistical analysis, large variability around some point estimates, and lack of recent data. A reference set of utility values for T2DM and its complications in line with NICE requirements was identified. This research illustrates the challenges associated with systematically selecting utility data for economic evaluations. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
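A utility set like this is typically applied in a model by subtracting complication decrements from the complication-free baseline. A minimal sketch using the additive-decrement convention (one common choice; multiplicative approaches also exist). The baseline and the stroke decrement are illustrative; the hypoglycemia and amputation decrements match the range quoted above:

```python
# Baseline T2DM utility and per-complication decrements (illustrative)
BASELINE = 0.785
DECREMENTS = {"minor_hypoglycemia": 0.014,
              "stroke": 0.164,
              "amputation": 0.280}

def patient_utility(baseline, complications, decrements):
    """Additive-decrement approach: subtract each complication's utility
    decrement from the complication-free baseline, floored at zero."""
    u = baseline - sum(decrements[c] for c in complications)
    return max(u, 0.0)

u = patient_utility(BASELINE, ["minor_hypoglycemia", "stroke"], DECREMENTS)
```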
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
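The precision measure tracked here, RMSD between estimated and "true" item difficulties, can be illustrated with a toy simulation in which estimation error shrinks roughly as 1/√n. The error model is an assumption for illustration only, not the Rasch estimator itself:

```python
import math, random

def rmsd(est, true):
    """Root mean squared deviation between estimated and true parameters."""
    return math.sqrt(sum((e - t)**2 for e, t in zip(est, true)) / len(true))

random.seed(1)
true_b = [-1.0, -0.5, 0.0, 0.5, 1.0]  # "true" item difficulties (logits)

def noisy_estimates(n_examinees):
    # Toy assumption: estimation error is Gaussian with sd ~ 1/sqrt(n)
    sd = 1.0 / math.sqrt(n_examinees)
    return [b + random.gauss(0, sd) for b in true_b]

small_sample_rmsd = rmsd(noisy_estimates(100), true_b)
large_sample_rmsd = rmsd(noisy_estimates(10000), true_b)
```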
Rentz, Anne M; Kowalski, Jonathan W; Walt, John G; Hays, Ron D; Brazier, John E; Yu, Ren; Lee, Paul; Bressler, Neil; Revicki, Dennis A
2014-03-01
Understanding how individuals value health states is central to patient-centered care and to health policy decision making. Generic preference-based measures of health may not effectively capture the impact of ocular diseases. Recently, 6 items from the National Eye Institute Visual Function Questionnaire-25 were used to develop the Visual Function Questionnaire-Utility Index health state classification, which defines visual function health states. To describe elicitation of preferences for health states generated from the Visual Function Questionnaire-Utility Index health state classification and development of an algorithm to estimate health preference scores for any health state. Nonintervention, cross-sectional study of the general community in 4 countries (Australia, Canada, United Kingdom, and United States). A total of 607 adult participants were recruited from local newspaper advertisements. In the United Kingdom, an existing database of participants from previous studies was used for recruitment. Eight of 15,625 possible health states from the Visual Function Questionnaire-Utility Index were valued using time trade-off technique. A θ severity score was calculated for Visual Function Questionnaire-Utility Index-defined health states using item response theory analysis. Regression models were then used to develop an algorithm to assign health state preference values for all potential health states defined by the Visual Function Questionnaire-Utility Index. Health state preference values for the 8 states ranged from a mean (SD) of 0.343 (0.395) to 0.956 (0.124). As expected, preference values declined with worsening visual function. Results indicate that the Visual Function Questionnaire-Utility Index describes states that participants view as spanning most of the continuum from full health to dead. 
Visual Function Questionnaire-Utility Index health state classification produces health preference scores that can be estimated in vision-related studies that include the National Eye Institute Visual Function Questionnaire-25. These preference scores may be of value for estimating utilities in economic and health policy analyses.
Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M
2017-04-01
Parameter uncertainty in the value sets of multiattribute utility-based instruments (MAUIs) has previously received little attention; ignoring it creates false precision and leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty in MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults (n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effects model (i.e., incorporating parameter uncertainty in the value set) with the standard error of the estimated mean utilities based on multiple imputation and the standard error using the conventional approach of applying the MAUI value set (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effects model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring the uncertainty of predicted health utilities derived from MAUIs can lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
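The multiple-imputation correction described above can be sketched with Rubin's rules: each imputation scores the sample under a different draw of the value set, and the between-imputation variance captures the value-set uncertainty. The numbers below are illustrative, chosen near the magnitudes reported in the abstract:

```python
import math, random

def rubin_pool(means, variances):
    """Rubin's rules: pool M per-imputation means and within-variances."""
    m = len(means)
    qbar = sum(means) / m
    within = sum(variances) / m
    between = sum((q - qbar)**2 for q in means) / (m - 1)
    total_var = within + (1 + 1 / m) * between
    return qbar, math.sqrt(total_var)

random.seed(0)
# Each "imputation" scores the same sample under a value set drawn from
# the posterior of the scoring algorithm (draws are simulated here)
imputed_means = [0.827 + random.gauss(0, 0.010) for _ in range(20)]
within_vars = [0.003**2] * 20   # sampling variance within each imputation
mean_u, se_u = rubin_pool(imputed_means, within_vars)
```

The pooled SE exceeds the conventional within-imputation SE of 0.003, reproducing the paper's point that the conventional approach understates uncertainty.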
Comparing and transforming PROMIS utility values to the EQ-5D.
Hartman, John D; Craig, Benjamin M
2018-03-01
Summarizing patient-reported outcomes (PROs) on a quality-adjusted life year (QALY) scale is an essential component of any economic evaluation comparing alternative medical treatments. While multiple studies have compared PRO items and instruments based on their psychometric properties, no study has compared the preference-based summaries of the EQ-5D-3L and Patient Reported Outcomes Measurement Information System (PROMIS-29) instruments. As part of this comparison, a major aim of this manuscript is to transform PROMIS-29 utility values to an EQ-5D-3L scale. A nationally representative survey of 2623 US adults completed the 29-item PROMIS health profile instrument (PROMIS-29) and the 3-level version of the EQ-5D instrument (EQ-5D-3L). Their responses were summarized on a health utility scale using published estimates. Using regression analysis, PROMIS-29 and EQ-5D-3L utility weights were compared with each other as well as with self-reported general health. PROMIS-29 utility weights were much lower than the EQ-5D-3L weights. However, a correlation coefficient of 0.769 between the utility values of the two instruments suggests that the main discordance is simply a difference in scale between the measures. It is also possible to map PROMIS-29 utility weights onto an EQ-5D-3L scale: EQ-5D-3L losses = 0.1784 × (PROMIS-29 losses)^0.7286. The published estimates of the PROMIS-29 produce lower utility values than many other health instruments. Mapping the PROMIS-29 estimates to an EQ-5D-3L scale alleviates this issue and allows for a more straightforward comparison between the PROMIS-29 and other common health instruments.
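The published mapping is a power function on utility losses, which makes the transformation a one-liner (coefficients taken from the abstract; the example input is illustrative):

```python
def promis_to_eq5d(promis_utility, a=0.1784, b=0.7286):
    """Map a PROMIS-29 utility onto the EQ-5D-3L scale via the fitted
    power function on utility losses: EQ5D_loss = a * PROMIS_loss**b."""
    promis_loss = 1.0 - promis_utility
    eq5d_loss = a * promis_loss ** b
    return 1.0 - eq5d_loss

u = promis_to_eq5d(0.50)   # a PROMIS-29 value of 0.50 maps to ~0.89
```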
McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal
2017-11-01
In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimates the expected value of perfect information but not the partial expected value of perfect information (owing to the computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating the partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations of cancer drugs that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost-effective at the submitted price, if concerns existed regarding the validity of the applicants' submission, or if cost-effectiveness model functionality did not allow the required modifications to be made. For each included drug (n = 14), the value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis; the partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value of at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised: collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million, and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively.
This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
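The regression-based partial-EVPI estimator the abstract describes can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: a polynomial smoother stands in for the flexible non-parametric regression used in practice, and the input arrays are assumed to come from an existing probabilistic analysis.

```python
import numpy as np

def evppi_regression(nb, theta, degree=3):
    """Partial EVPI for a single parameter, estimated by regressing
    net benefit on that parameter's probabilistic-analysis samples.

    nb    : (n_samples, n_options) net-benefit samples per strategy
    theta : (n_samples,) samples of the parameter of interest
    """
    nb = np.asarray(nb, dtype=float)
    theta = np.asarray(theta, dtype=float)
    fitted = np.empty_like(nb)
    for d in range(nb.shape[1]):
        # Polynomial smoother approximating E[NB_d | theta]; a GAM or
        # Gaussian process would typically be used in practice.
        coefs = np.polyfit(theta, nb[:, d], degree)
        fitted[:, d] = np.polyval(coefs, theta)
    # EVPPI = E_theta[ max_d E(NB_d | theta) ] - max_d E(NB_d)
    return float(fitted.max(axis=1).mean() - nb.mean(axis=0).max())
```

Because the whole calculation reuses the existing probabilistic samples, it runs in seconds to minutes, which is the computational advantage the study reports.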
Essays in applied microeconomics
NASA Astrophysics Data System (ADS)
Davis, Lucas William
2005-11-01
The first essay measures the impact of an outbreak of pediatric leukemia on local housing values. A model of residential location choice is used to describe conditions under which the gradient of the hedonic price function with respect to health risk is equal to household marginal willingness to pay to avoid pediatric leukemia risk. This equalizing differential is estimated using property-level sales records from a county in Nevada where residents have recently experienced a severe increase in pediatric leukemia. Housing values are compared before and after the increase, with a nearby county acting as a control group. The results indicate that housing values decreased 15.6% during the period of maximum risk. Results are similar for alternative measures of risk and across houses of different sizes. With risk estimates derived using a Bayesian learning model, the results imply a statistical value of pediatric leukemia of $5.6 million. The results from the paper provide some of the first market-based estimates of the value of health for children. The second essay evaluates the cost-effectiveness of public incentives that encourage households to purchase high-efficiency durable goods. The demand for durable goods and the demand for energy and other inputs are modeled jointly as the solution to a household production problem. The empirical analysis focuses on the case of clothes washers. The production technology and utilization decision are estimated using household-level data from field trials in which participants received front-loading clothes washers free of charge. The estimation strategy exploits this quasi-random replacement of washers to derive robust estimates of the utilization decision. The results indicate a price elasticity of -0.06 that is statistically different from zero across specifications. The parameters from the utilization decision are used to estimate the purchase decision using data from the Consumer Expenditure Survey, 1994-2002. 
Households consider optimal utilization levels, purchase prices, water rates, energy rates and other factors when deciding which clothes washer to purchase. The complete model is used to simulate the effects of rebate programs and other policies on adoption patterns of clothes washers and household demand for water and energy.
Investing in health: is social housing value for money? A cost-utility analysis.
Lawson, K D; Kearns, A; Petticrew, M; Fenwick, E A L
2013-10-01
There is a healthy public policy agenda investigating the health impacts of improving living conditions. However, there are few economic evaluations, to date, assessing value for money. We conducted the first cost-effectiveness analysis of a nationwide intervention transferring social and private tenants to new-build social housing, in Scotland. A quasi-experimental prospective study was undertaken involving 205 intervention households and 246 comparison households, over 2 years. A cost-utility analysis assessed the average cost per change in health utility (a single score summarising overall health-related quality of life), generated via the SF-6D algorithm. Construction costs for new builds were included. Analysis was conducted for all households, and by family, adult and elderly households, with estimates adjusted for baseline confounders. Outcomes were annuitised and discounted at 3.5%. The average discounted cost was £18,708 per household, at a national programme cost of £28.4 million. The average changes in health utility scores in the intervention group attributable to the intervention were +0.001 for all households, +0.001 for family households, -0.04 for adult households and -0.03 for elderly households. All estimates were statistically insignificant. At face value, the interventions were not value for money in health terms. However, because the policy rationale was the amenity provision of housing for disadvantaged groups, impacts extend beyond health and may be fully realised over the long term. Before making general value-for-money inferences, economic evaluation should attempt to estimate the full social value of interventions, model long-term impacts and explicitly incorporate equity considerations.
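The annuitise-and-discount arithmetic underlying a cost-utility analysis like this one is simple to sketch. The following is illustrative only: the 30-year asset life in the example is an assumption, not a figure from the study, and only the £18,708 per-household cost and 3.5% rate come from the abstract.

```python
def equivalent_annual_cost(capital, life_years, rate=0.035):
    """Spread a one-off capital cost over an asset's life using the
    standard equivalent-annual-cost (annuity) formula."""
    annuity_factor = (1 - (1 + rate) ** -life_years) / rate
    return capital / annuity_factor

def cost_per_utility_change(annual_cost, utility_change):
    """Average annual cost per unit change in health utility; not
    meaningful when the utility change is zero or negative."""
    if utility_change <= 0:
        return float("inf")
    return annual_cost / utility_change

# Hypothetical: annuitise the 18,708 GBP per-household cost over an
# assumed 30-year life at the stated 3.5 percent discount rate.
annual = equivalent_annual_cost(18708, 30)
```

With utility gains near zero (as the study found), the cost per unit of utility gained diverges, which is why the intervention could not show value for money in purely health terms.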
Zou, Haidong; Xu, Xun; Zhang, Xi
2015-01-01
Background This study aimed to evaluate and compare the utility values associated with diabetic retinopathy (DR) in a sample of Chinese patients and ophthalmologists. Methods Utility values were evaluated by both the time trade-off (TTO) and rating scale (RS) methods for 109 eligible patients with DR and 2 experienced ophthalmologists. Patients were stratified by Snellen best-corrected visual acuity (BCVA) in the better-seeing eye. The correlations between the utility values and general vision-related health status measures were analyzed. These utility values were compared with data from two other studies. Results The mean utility values elicited from the patients themselves with the TTO (0.81; SD 0.10) and RS (0.81; SD 0.11) methods were both statistically lower than the mean utility values assessed by ophthalmologists. Significant predictors of patients' TTO and RS utility values were both LogMAR BCVA in the affected eye and average weighted LogMAR BCVA. DR grade and duration of visual dysfunction were also variables that significantly predicted patients' TTO utility values. For ophthalmologists, patients' LogMAR BCVA in the affected eye and in the better eye were the variables that significantly predicted both the TTO and RS utility values. Patients' education level was also a variable that significantly predicted RS utility values. Moreover, both diabetic macular edema and employment status were significant predictors of TTO and RS utility values, whether from patients or ophthalmologists. There was no difference in mean TTO utility values compared with those of the American and Canadian patients in the two comparison studies. Conclusions DR caused a substantial decrease in Chinese patients' utility values, and ophthalmologists substantially underestimated its effect on patient quality of life. PMID:26630653
Valuing Informal Care Experience: Does Choice of Measure Matter?
ERIC Educational Resources Information Center
Mentzakis, Emmanouil; McNamee, Paul; Ryan, Mandy; Sutton, Matthew
2012-01-01
Well-being equations are often estimated to generate monetary values for non-marketed activities. In such studies, utility is often approximated by either life satisfaction or General Health Questionnaire scores. We estimate and compare monetary valuations of informal care for the first time in the UK employing both measures, using longitudinal…
Conjoint analysis of nature tourism values in Bahia, Brazil
Thomas Holmes; Chris Zinkhan; Keith Alger; D. Evan Mercer
1996-01-01
This paper uses conjoint analysis to estimate the value of nature tourism attributes in a threatened forest ecosystem in northeastern Brazil. Computerized interviews were conducted using a paired comparison design. An ordinal interpretation of the rating scale was used and marginal utilities were estimated using ordered probit. The empirical results showed that the...
Detailed Project Report. Small Beach Erosion Control Project. Broadkill Beach, Delaware.
1972-02-01
this study. [Table 3, Estimated Property Values in Broadkill Beach (July 1971): present fair value of beach front property (excluding the beach area) $1,221,000; entire community $2,866,000.] ...between the 14th and 50th year reflect only the land, houses and utilities (minus salvage value, estimated at 25% of the fair value) that are located... 11. The water entering Delaware Bay from Delaware River is polluted, but the degree of
Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard
2009-07-22
Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. Review of studies providing both a direct and indirect utility estimate. Papers reporting comparisons of utilities obtained directly (standard gamble or time trade-off) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patient. PubMed and Tufts database of utilities. Sign test for paired comparisons between direct and indirect utilities; least squares regression to describe average relations between the different methods. Mean utility scores (or median if means unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time trade-off in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time trade-off) for the direct methods; for the indirect methods: 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2) and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.
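The paired sign test used for such direct-versus-indirect comparisons can be computed exactly from the binomial distribution. A minimal sketch (a generic implementation of the test, not the authors' analysis code):

```python
from math import comb

def sign_test_p(diffs):
    """Two-sided exact sign test for paired direct-minus-indirect
    utility differences; ties (zero differences) are dropped."""
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    n = pos + neg
    if n == 0:
        return 1.0
    k = min(pos, neg)
    # Exact binomial tail probability under H0: P(positive) = 0.5
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, if all 10 paired differences favour the direct method, the two-sided p-value is 2/1024, i.e. about 0.002.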
Lloyd, Andrew; Kerr, Cicely; Breheny, Katie; Brazier, John; Ortiz, Aurora; Borg, Emma
2014-03-01
Condition-specific preference-based measures can offer utility data where they would not otherwise be available or where generic measures may lack sensitivity, although they lack comparability across conditions. This study aimed to develop an algorithm for estimating utilities from the short bowel syndrome health-related quality of life scale (SBS-QoL™). SBS-QoL™ items were selected based on factor and item performance analysis of a European SBS-QoL™ dataset and consultation with 3 SBS clinical experts. Six-dimension health states were developed using 8 SBS-QoL™ items (2 dimensions combined 2 SBS-QoL™ items). SBS health states were valued by a UK general population sample (N = 250) using the lead-time time trade-off method. Preference weights or 'utility decrements' for each severity level of each dimension were estimated by regression models and used to develop the scoring algorithm. Mean utilities for the SBS health states ranged from -0.46 (worst health state, very much affected on all dimensions) to 0.92 (best health state, not at all affected on all dimensions). The random effects model with maximum likelihood estimation regression had the best predictive ability and lowest root mean squared error and mean absolute error, and was used to develop the scoring algorithm. The preference-weighted scoring algorithm for the SBS-QoL™ developed is able to estimate a wide range of utility values from patient-level SBS-QoL™ data. This allows estimation of SBS HRQL impact for the purpose of economic evaluation of SBS treatment benefits.
Ip, Queeny; Malone, Daniel C; Chong, Jenny; Harris, Robin B; Labiner, David M
2018-03-01
Epilepsy is most prevalent among older individuals, and its economic impact is substantial. The development of economic burden estimates that account for known confounders, and using percent incremental costs may provide meaningful comparison across time and different health systems. The first objective of the current study was to estimate the percent incremental healthcare costs and the odds ratio (OR) for inpatient utilization for older Medicare beneficiaries with epilepsy and without epilepsy. The second objective was to estimate the percent incremental healthcare costs and the OR for inpatient utilization associated with antiepileptic drug (AED) nonadherence among Medicare beneficiaries with epilepsy. The OR of inpatient utilization for cases compared with controls (i.e., non-cases) were 2.4 (95% CI 2.3 to 2.6, p-value<0.0001) for prevalent epilepsy and 3.6 (95% CI 3.2 to 4.0, p-value<0.0001) for incident epilepsy. With respect to total health care costs, prevalent cases incurred 61.8% (95% CI 56.6 to 67.1%, p-value<0.0001) higher costs than controls while incident cases incurred 71.2% (95% CI 63.2 to 79.5%, p-value <0.0001) higher costs than controls. The nonadherence rates were 33.6 and 32.9% for prevalent and incident cases, respectively. Compared to nonadherent cases, the OR of inpatient utilization for adherent prevalent cases was 0.66 (95% CI 0.55 to 0.81, p-value <0.0001). The cost saving for a prevalent case adherent to AEDs was 13.2% (95% CI 6.6 to 19.4%, p-value=0.0001) compared to a nonadherent case. An incident case adherent to AEDs spent 16.4% (95% CI 6.5 to 25.2%, p-value=0.002) less than a nonadherent incident case on health care. Epilepsy is associated with higher health care costs and utilization. Older Medicare beneficiaries with epilepsy incur higher total health care spending and have higher inpatient utilization than those without epilepsy. 
Total health care spending is less for older Medicare beneficiaries who have prevalent or incident epilepsy if they are adherent to AEDs. Copyright © 2018 Elsevier Inc. All rights reserved.
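Percent incremental costs of the kind reported above typically come from a cost regression with a log link, where a coefficient b on case status translates to 100·(exp(b) − 1) percent higher costs. A minimal sketch of that mapping (an illustration of the general technique, not the study's estimation code):

```python
import math

def pct_incremental(b):
    """Percent incremental cost implied by a log-link regression
    coefficient b on case status: 100 * (exp(b) - 1)."""
    return 100.0 * (math.exp(b) - 1.0)

def coef_for_pct(pct):
    """Inverse mapping: the coefficient implied by a reported
    percent incremental cost."""
    return math.log(1.0 + pct / 100.0)
```

On this reading, the 61.8% figure for prevalent cases corresponds to a log-scale coefficient of about 0.48.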
A modified utilization gauge for western range grasses
Earl F. Aldon; Richard E. Francis
1984-01-01
Accurate, low cost measurements of forage utilization by livestock are essential in range management and the evaluation of grazing systems. However, because of difficulty in making these measurements, visual estimates often are substituted for measured values. To help land managers better determine use, range utilization calculating charts (Crafts 1938, NRCAB 1962)...
Recovery of stranded costs under electric deregulation: The Winstar doctrine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Person, J.C.
This paper explores the applicability of the Winstar doctrine to the recovery of stranded costs arising from the deregulation of the electric utility industry. Such stranded costs, which have been widely estimated to be in the $100--200 billion range, represent those utility assets whose book value exceeds their market value. Not addressed in this paper are the ongoing state and federal legislative initiatives to allow for the recovery of some or all of a utility's stranded costs, such as through the assessment of competitive transmission charges (CTCs) or through stranded cost securitization. Rather, this paper presents one of several legal arguments that could be utilized in those situations where a legislative solution either does not exist or does not allow for full book value recovery.
Estimation of utilities in attention-deficit hyperactivity disorder for economic evaluations.
Lloyd, Andrew; Hodgkins, Paul; Sasane, Rahul; Akehurst, Ron; Sonuga-Barke, Edmund J S; Fitzgerald, Patrick; Nixon, Annabel; Erder, Haim; Brazier, John
2011-01-01
Attempts to estimate the cost effectiveness of attention-deficit hyperactivity disorder (ADHD) treatments in the past have relied on classifying ADHD patients as responders or non-responders to treatment. Responder status has been associated with a small gain in health-related quality of life (HR-QOL) [or utility, as measured using the generic QOL measure EQ-5D] of 0.06 (on a scale from 0 being dead to 1.0 being full health). The goal of the present study was to develop and validate several ADHD-related health states, and to estimate utility values measured amongst the general public for those states and to re-estimate utility values associated with responder status. Detailed qualitative interview data were collected from 20 young ADHD patients to characterize their HR-QOL. In addition, item-by-item clinical and HR-QOL data from a clinical trial were used to define and describe four health states (normal; borderline to mildly ill; moderately to markedly ill; and severely ill). ADHD experts assessed the content validity of the descriptions. The states were rated by 100 members of the UK general public using the time trade-off (TTO) interview and visual analog scale. Statistical mapping was also undertaken to estimate Clinical Global Impression-Improvement (CGI-I) utilities (i.e. response status) from Clinical Global Impression-Severity (CGI-S) defined states. The mapping work estimated changes in utilities from study baseline to last visit for patients with a CGI-I score of ≤ 2 or ≤ 3. The validity of the four health states developed in this study was supported by in-depth interviews with ADHD experts and patients, and clinical trial data. TTO-derived utilities for the four health states ranged from 0.839 (CGI-S state 'normal') to 0.444 (CGI-S state 'severely ill'). From the mapping work, the change in utility for treatment responders was 0.19 for patients with a CGI-I score of ≤ 2 and 0.15 for patients with a CGI-I score of ≤ 3. 
The present study provides utilities for different severity levels of ADHD estimated in a TTO study. This approach provides a more granular assessment of the impact of ADHD on HR-QOL than binary approaches employed in previous economic analyses. Change in utility for responders and non-responders at different levels of CGI-I was estimated, and thus these utilities may be used to compare health gains of different ADHD interventions.
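The time trade-off scores reported in studies like this reduce to a simple ratio: if a respondent is indifferent between x years in full health and t years in the health state, the utility is x/t. A minimal sketch (the example numbers are hypothetical, not taken from the study):

```python
def tto_utility(indifference_years, time_horizon):
    """Time trade-off utility: the respondent judges
    `indifference_years` in full health equivalent to
    `time_horizon` years in the health state."""
    if not 0 <= indifference_years <= time_horizon:
        raise ValueError("indifference point must lie within the horizon")
    return indifference_years / time_horizon

# Hypothetical: indifferent between 8.4 healthy years and a 10-year
# horizon in the state -> utility 0.84
```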
Reductions in Diagnostic Imaging With High Deductible Health Plans.
Zheng, Sarah; Ren, Zhong Justin; Heineke, Janelle; Geissler, Kimberley H
2016-02-01
Diagnostic imaging utilization grew rapidly over the past 2 decades. It remains unclear whether patient cost-sharing is an effective policy lever to reduce imaging utilization and spending. Using 2010 commercial insurance claims data of >21 million individuals, we compared diagnostic imaging utilization and standardized payments between High Deductible Health Plan (HDHP) and non-HDHP enrollees. Negative binomial models were used to estimate associations between HDHP enrollment and utilization, and were repeated for standardized payments. A hurdle model was used to estimate associations between HDHP enrollment and whether an enrollee had diagnostic imaging, and then the magnitude of associations for enrollees with imaging. Models with interaction terms were used to estimate associations between HDHP enrollment and imaging by risk score tercile. All models included controls for patient age, sex, geographic location, and health status. HDHP enrollment was associated with a 7.5% decrease in the number of imaging studies and a 10.2% decrease in standardized imaging payments. HDHP enrollees were 1.8 percentage points less likely to use imaging; once an enrollee had at least 1 imaging study, differences in utilization and associated payments were small. Associations between HDHP and utilization were largest in the lowest (least sick) risk score tercile. Increased patient cost-sharing may contribute to reductions in diagnostic imaging utilization and spending. However, increased cost-sharing may not encourage patients to differentiate between high-value and low-value diagnostic imaging services; better patient awareness and education may be a crucial part of any reductions in diagnostic imaging utilization.
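The hurdle structure described above separates utilization into two parts: the probability of any use, and the intensity of use among users. A minimal sketch of that decomposition on raw counts (illustrative only; the study itself fits regression models for each part, not simple means):

```python
import numpy as np

def hurdle_decompose(counts):
    """Split mean utilization into the two hurdle-model parts:
    P(any use) and the mean count conditional on any use.
    Their product recovers the overall mean."""
    counts = np.asarray(counts, dtype=float)
    p_any = float((counts > 0).mean())
    cond_mean = float(counts[counts > 0].mean()) if p_any > 0 else 0.0
    return p_any, cond_mean, p_any * cond_mean
```

The study's finding maps onto this decomposition directly: the HDHP difference shows up almost entirely in the first part (whether any imaging occurs), not the second (how much, once imaging starts).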
Tejwani, Rohit; Wang, Hsin-Hsiao S; Lloyd, Jessica C; Kokorowski, Paul J; Nelson, Caleb P; Routh, Jonathan C
2017-03-01
The advent of online task distribution has opened a new avenue for efficiently gathering community perspectives needed for utility estimation. Methodological consensus for estimating pediatric utilities is lacking, with disagreement over whom to sample, what perspective to use (patient vs parent) and whether instrument induced anchoring bias is significant. We evaluated what methodological factors potentially impact utility estimates for vesicoureteral reflux. Cross-sectional surveys using a time trade-off instrument were conducted via the Amazon Mechanical Turk® (https://www.mturk.com) online interface. Respondents were randomized to answer questions from child, parent or dyad perspectives on the utility of a vesicoureteral reflux health state and 1 of 3 "warm-up" scenarios (paralysis, common cold, none) before a vesicoureteral reflux scenario. Utility estimates and potential predictors were fitted to a generalized linear model to determine what factors most impacted utilities. A total of 1,627 responses were obtained. Mean respondent age was 34.9 years. Of the respondents 48% were female, 38% were married and 44% had children. Utility values were uninfluenced by child/personal vesicoureteral reflux/urinary tract infection history, income or race. Utilities were affected by perspective and were higher in the child group (34% lower in parent vs child, p <0.001, and 13% lower in dyad vs child, p <0.001). Vesicoureteral reflux utility was not significantly affected by the presence or type of time trade-off warm-up scenario (p = 0.17). Time trade-off perspective affects utilities when estimated via an online interface. However, utilities are unaffected by the presence, type or absence of warm-up scenarios. These findings could have significant methodological implications for future utility elicitations regarding other pediatric conditions. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Surrogate utility estimation by long-term partners and unfamiliar dyads.
Tunney, Richard J; Ziegler, Fenja V
2015-01-01
To what extent are people able to make predictions about other people's preferences and values? We report two experiments that present a novel method for assessing some of the basic processes in surrogate decision-making, namely surrogate utility estimation. In each experiment participants formed dyads who were asked to assign utilities to health-related items and commodity items, and to predict their partner's utility judgments for the same items. In experiment one we showed that older adults in long-term relationships were able to accurately predict their partner's wishes. In experiment two we showed that younger adults who were relatively unfamiliar with one another were also able to predict other people's wishes. Crucially, we demonstrated that these judgments remained accurate even after partialling out each participant's own preferences, indicating that people make surrogate utility estimations by perspective-taking rather than by simple anchoring and adjustment; this suggests that utility estimation is not the cause of inaccuracy in surrogate decision-making. The data and implications are discussed with respect to theories of surrogate decision-making.
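The partialling-out step reported above can be illustrated with a partial correlation: residualize both the surrogate predictions and the partner's actual ratings on the predictor's own ratings, then correlate the residuals. A sketch of the general technique, not the authors' analysis code:

```python
import numpy as np

def partial_corr(pred, actual, own):
    """Correlate surrogate predictions with the partner's actual
    utilities after partialling out the predictor's own ratings
    (residuals from simple least-squares fits)."""
    def residualize(y, x):
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)
    own = np.asarray(own, dtype=float)
    r_pred = residualize(np.asarray(pred, dtype=float), own)
    r_actual = residualize(np.asarray(actual, dtype=float), own)
    return float(np.corrcoef(r_pred, r_actual)[0, 1])
```

A high partial correlation indicates genuine perspective-taking; if predictions were pure anchoring on one's own preferences, the residual correlation would collapse toward zero.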
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagnon, Pieter J; Stoll, Brady; Mai, Trieu T
Misforecasting the adoption of customer-owned distributed photovoltaics (DPV) can have operational and financial implications for utilities - forecasting capabilities can be improved, but generally at a cost. This paper informs this decision-space by quantifying the costs of misforecasting across a wide range of DPV growth rates and misforecast severities. Using a simplified probabilistic method presented within, an analyst can make a first-order estimate of the financial benefit of improving a utility's forecasting capabilities, and thus be better informed about whether to make such an investment. For example, we show that a utility with 10 TWh per year of retail electric sales who initially estimates that the increase in DPV's contribution to total generation could range from 2 to 7.5 percent over the next 15 years could expect total present-value savings of approximately $4 million if they could keep the severity of successive five-year misforecasts within plus or minus 25 percent. We also have more general discussions about how misforecasting DPV impacts the buildout and operation of the bulk power system - for example, we observed that misforecasting DPV most strongly influenced the amount of utility-scale PV that gets built, due to the similarity in the energy and capacity services offered by the two solar technologies.
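A first-order version of the simplified probabilistic method described can be sketched as follows: probability-weight a severity-to-cost curve over candidate misforecast severities and discount to present value. The cost curve and discounting assumptions below are the analyst's inputs; nothing here reproduces the paper's own capacity-expansion cost model.

```python
import numpy as np

def expected_misforecast_cost(severities, probs, cost_fn,
                              rate=0.05, years_out=5):
    """First-order expected present-value cost of DPV misforecasting:
    weight a severity-to-cost curve by scenario probabilities and
    discount costs assumed to land `years_out` years in the future."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()          # normalize scenario weights
    costs = np.array([cost_fn(s) for s in severities], dtype=float)
    return float((probs * costs).sum() / (1 + rate) ** years_out)
```

Comparing this expected cost with the price of better forecasting tools gives the go/no-go signal the paper describes.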
USDA-ARS?s Scientific Manuscript database
Background: The utility of glycemic index (GI) values for chronic disease risk management remains controversial. While absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value de...
Healthy-years equivalent: wounded but not yet dead.
Hauber, A Brett
2009-06-01
The quality-adjusted life-year (QALY) has become the dominant measure of health value in health technology assessment in recent decades despite some well-known and fundamental flaws in the preference-elicitation methods used to construct health-state utility weights and the strong assumptions required to construct QALYs as a measure of health value using these utility weights. The healthy-years equivalent (HYE) was proposed as an alternative measure of health value that was purported to overcome many of the limitations of the QALY. The primary argument against the HYE is that it is difficult to estimate and, therefore, impractical. After much debate in the literature, the QALY appears to have won the battle; however, the HYE is not yet dead. Empirical research and recent advances in methods continue to offer evidence of the feasibility of using the HYE as a measure of health value, and also address some of the criticisms surrounding the preference-elicitation methods used to estimate the HYE. This article provides a brief review of empirical applications of the HYE and identifies recent advances in empirical estimation that may breathe new life into a valiant, but wounded, measure.
Estimating health state utility values for comorbid health conditions using SF-6D data.
Ara, Roberta; Brazier, John
2011-01-01
When health state utility values for comorbid health conditions are not available, data from cohorts with single conditions are used to estimate scores. The methods used can produce very different results and there is currently no consensus on which is the most appropriate approach. The objective of the current study was to compare the accuracy of five different methods within the same dataset. Data collected during five Welsh Health Surveys were subgrouped by health status. Mean short-form 6 dimension (SF-6D) scores for cohorts with a specific health condition were used to estimate mean SF-6D scores for cohorts with comorbid conditions using the additive, multiplicative, and minimum methods, the adjusted decrement estimator (ADE), and a linear regression model. The mean SF-6D for subgroups with comorbid health conditions ranged from 0.4648 to 0.6068. The linear model produced the most accurate scores for the comorbid health conditions with 88% of values accurate to within the minimum important difference for the SF-6D. The additive and minimum methods underestimated or overestimated the actual SF-6D scores respectively. The multiplicative and ADE methods both underestimated the majority of scores. However, both methods performed better when estimating scores smaller than 0.50. Although the range in actual health state utility values (HSUVs) was relatively small, our data covered the lower end of the index and the majority of previous research has involved actual HSUVs at the upper end of possible ranges. Although the linear model gave the most accurate results in our data, additional research is required to validate our findings. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
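The three simple estimators compared in this abstract can be written down directly; the ADE and the linear regression model require fitted parameters from data, so this sketch covers only the closed-form methods.

```python
def combine_utilities(u1, u2, method="multiplicative"):
    """Estimate a comorbid health-state utility from two
    single-condition utilities using one of the simple estimators:
      additive       : 1 - [(1 - u1) + (1 - u2)]  (sums the decrements)
      multiplicative : u1 * u2
      minimum        : min(u1, u2)
    """
    if method == "additive":
        return 1.0 - ((1.0 - u1) + (1.0 - u2))
    if method == "multiplicative":
        return u1 * u2
    if method == "minimum":
        return min(u1, u2)
    raise ValueError(f"unknown method: {method}")
```

For u1 = 0.8 and u2 = 0.7, the three methods give 0.50, 0.56 and 0.70 respectively, which illustrates the abstract's point that the choice of method can produce very different results for the same inputs.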
Value drivers: an approach for estimating health and disease management program savings.
Phillips, V L; Becker, Edmund R; Howard, David H
2013-12-01
Health and disease management (HDM) programs have faced challenges in documenting savings related to their implementation. The objective of this study was to describe OptumHealth's (Optum) methods for estimating anticipated savings from HDM programs using Value Drivers. Optum's general methodology was reviewed, along with details of 5 high-use Value Drivers. The results showed that the Value Driver approach offers an innovative method for estimating savings associated with HDM programs. The authors demonstrated how real-time savings can be estimated for 5 Value Drivers commonly used in HDM programs: (1) use of beta-blockers in treatment of heart disease, (2) discharge planning for high-risk patients, (3) decision support related to chronic low back pain, (4) obesity management, and (5) securing transportation for primary care. The validity of savings estimates is dependent on the type of evidence used to gauge the intervention effect, generating changes in utilization and, ultimately, costs. The savings estimates derived from the Value Driver method are generally reasonable to conservative and provide a valuable framework for estimating financial impacts from evidence-based interventions.
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.
Kim, David D; Arterburn, David E; Sullivan, Sean D; Basu, Anirban
2018-05-12
Designing optimal insurance is important to ensure access to care for individuals who are most likely to benefit. We examined the potential impact of lowering patient cost-sharing for bariatric procedures. After defining 10 subgroups by body mass index (BMI) and type 2 diabetes mellitus (T2DM), we analyzed the National Health and Nutrition Examination Survey datasets to estimate the prevalence of each subgroup. The MarketScan claims database provided utilization rates and costs of bariatric procedures. Using an existing cost-effectiveness model, we estimated the economic value of bariatric procedures under various cost-sharing levels (0%-25%) with 2 frameworks: (1) a traditional cost-effectiveness analysis and (2) a new approach that incorporates utilization effects across subgroups. The utilization rate was higher among individuals with T2DM than those without T2DM (90.4 vs. 59.1 cases per 100,000) for bariatric procedures, which were more cost-effective for those with T2DM and a higher BMI. After accounting for utilization effects, the economic value of bariatric surgery was $177 and $63 per individual from a lifetime and a 5-year time horizon, respectively. Under no patient cost-sharing for individuals with BMI≥40 and T2DM, utilization rates were expected to increase by 21 cases per 100,000, resulting in an additional $2 of realized value per patient and $7.07 million in returns at the US population level. Cost-sharing is a barrier to uptake of a clinical and cost-effective treatment for severe obesity. Reducing cost-sharing for patients with severe obesity and T2DM could potentially increase the utilization of bariatric procedures and result in greater economic value to payers.
Fast Noncircular 2D-DOA Estimation for Rectangular Planar Array
Xu, Lingyun; Wen, Fangqing
2017-01-01
A novel scheme is proposed for direction finding with a uniform rectangular planar array. First, the characteristics of noncircular signals and Euler's formula are exploited to construct new real-valued rectangular array data. Then, the rotational invariance relations for the real-valued signal space are derived in a new way. Finally, the real-valued propagator method is utilized to estimate the paired two-dimensional directions of arrival (2D-DOA). The proposed algorithm provides better angle estimation performance and can discern more sources than the 2D propagator method. At the same time, it has very close angle estimation performance to the noncircular propagator method (NC-PM) with reduced computational complexity. PMID:28417926
Estimating time-varying RSA to examine psychophysiological linkage of marital dyads.
Gates, Kathleen M; Gatzke-Kopp, Lisa M; Sandsten, Maria; Blandon, Alysia Y
2015-08-01
One of the primary tenets of polyvagal theory dictates that parasympathetic influence on heart rate, often estimated by respiratory sinus arrhythmia (RSA), shifts rapidly in response to changing environmental demands. The current standard analytic approach of aggregating RSA estimates across time to arrive at one value fails to capture this dynamic property within individuals. By utilizing recent methodological developments that enable precise RSA estimates at smaller time intervals, we demonstrate the utility of computing time-varying RSA for assessing psychophysiological linkage (or synchrony) in husband-wife dyads using time-locked data collected in a naturalistic setting. © 2015 Society for Psychophysiological Research.
[Value-based medicine in ophthalmology].
Hirneiss, C; Neubauer, A S; Tribus, C; Kampik, A
2006-06-01
Value-based medicine (VBM) unifies costs and patient-perceived value (improvement in quality of life, length of life, or both) of an intervention. Value-based ophthalmology is of increasing importance for decisions in eye care. The methods of VBM are explained and definitions for a specific terminology in this field are given. The cost-utility analysis as part of health care economic analyses is explained. VBM exceeds evidence-based medicine by incorporating parameters of cost and benefits from an ophthalmological intervention. The benefit of the intervention is defined as an increase or maintenance of visual quality of life and can be determined by utility analysis. The time trade-off method is valid and reliable for utility analysis. The resources expended for the value gained in VBM are measured with cost-utility analysis in terms of cost per quality-adjusted life years gained (euros/QALY). Numerous cost-utility analyses of different ophthalmological interventions have been published. The fundamental instrument of VBM is cost-utility analysis. The results in cost per QALY allow estimation of cost effectiveness of an ophthalmological intervention. Using the time trade-off method for utility analysis allows the comparison of ophthalmological cost-utility analyses with those of other medical interventions. VBM is important for individual medical decision making and for general health care.
Dritsaki, Melina; Achana, Felix; Mason, James; Petrou, Stavros
2017-05-01
Trial-based cost-utility analyses require health-related quality of life data that generate utility values in order to express health outcomes in terms of quality-adjusted life years (QALYs). Assessments of baseline health-related quality of life are problematic where trial participants are incapacitated or critically ill at the time of randomisation. This review aims to identify and critique methods for handling non-availability of baseline health-related quality of life data in trial-based cost-utility analyses within emergency and critical illness settings. A systematic literature review was conducted, following PRISMA guidelines, to identify trial-based cost-utility analyses of interventions within emergency and critical care settings. Databases searched included the National Institute for Health Research (NIHR) Journals Library (1991-July 2016), Cochrane Library (all years); National Health Service (NHS) Economic Evaluation Database (all years) and Ovid MEDLINE/Embase (without time restriction). Strategies employed to handle non-availability of baseline health-related quality of life data in final QALY estimations were identified and critiqued. A total of 4224 published reports were screened, 19 of which met the study inclusion criteria (mean trial size 1670): 14 (74 %) from the UK, four (21%) from other European countries and one (5%) from India. Twelve studies (63%) were based in emergency departments and seven (37%) in intensive care units. Only one study was able to elicit patient-reported health-related quality of life at baseline. To overcome the lack of baseline data when estimating QALYs, eight studies (42%) assigned a fixed utility weight corresponding to either death, an unconscious health state or a country-specific norm to patients at baseline, four (21%) ignored baseline utilities, three (16%) applied values from another study, one (5%) generated utility values via retrospective recall and one (5%) elicited utilities from experts. 
A preliminary exploration of these methods shows that incremental QALY estimation is unlikely to be biased if balanced trial allocation is achieved and subsequent collection of health-related quality of life data occurs at the earliest possible opportunity following commencement of treatment, followed by an adequate number of follow-up assessments. Trial-based cost-utility analyses within emergency and critical illness settings have applied different methods for QALY estimation, employing disparate assumptions about the health-related quality of life of patients at baseline. Where baseline measurement is not practical, measurement at the earliest opportunity following commencement of treatment should minimise bias in QALY estimation.
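As a concrete illustration of how these baseline assumptions feed into QALY estimation, the sketch below uses made-up utilities and assessment times (not data from any of the reviewed trials) to compute area-under-the-curve QALYs twice: once assigning a fixed baseline utility weight of zero (e.g., an unconscious health state) and once carrying the earliest post-treatment measurement back to baseline.

```python
# Hypothetical sketch: QALY estimation via the trapezoidal (area-under-the-curve)
# rule when baseline health-related quality of life is unobserved and a baseline
# utility must be assumed. All timings and utility values are illustrative.

def qalys(times_years, utilities):
    """Trapezoidal area under the utility curve; times are in years."""
    total = 0.0
    for i in range(1, len(times_years)):
        total += (utilities[i] + utilities[i - 1]) / 2.0 * \
                 (times_years[i] - times_years[i - 1])
    return total

# Follow-up EQ-5D utilities measured at 1, 3, 6, and 12 months post-treatment.
follow_up_times = [1 / 12, 3 / 12, 6 / 12, 1.0]
follow_up_utils = [0.45, 0.60, 0.70, 0.75]

# Strategy 1: assign a fixed baseline weight of 0 at time zero.
qaly_zero_baseline = qalys([0.0] + follow_up_times, [0.0] + follow_up_utils)

# Strategy 2: carry the earliest post-treatment measurement back to baseline.
qaly_first_obs = qalys([0.0] + follow_up_times,
                       [follow_up_utils[0]] + follow_up_utils)

print(round(qaly_zero_baseline, 5), round(qaly_first_obs, 5))
```

Because the first follow-up here comes only one month after treatment starts, the two strategies differ by under 0.02 QALYs, illustrating why early post-treatment measurement limits the bias from the baseline assumption.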
Macek, Mark D; Manski, Richard J; Vargas, Clemencia M; Moeller, John F
2002-04-01
To compare estimates of dental visits among adults using three national surveys. Cross-sectional data from the National Health Interview Survey (NHIS), National Health and Nutrition Examination Survey (NHANES), and National Health Expenditure surveys (NMCES, NMES, MEPS). This secondary data analysis assessed whether overall estimates and stratum-specific trends differ across surveys. Dental visit data are age standardized via the direct method to the 1990 population of the United States. Point estimates, standard errors, and test statistics are generated using SUDAAN. Sociodemographic, stratum-specific trends are generally consistent across surveys; however, overall estimates differ (NHANES III [364-day estimate] vs. 1993 NHIS: -17.5 percent difference, Z = 7.27, p value < 0.001; NHANES III [365-day estimate] vs. 1993 NHIS: 5.4 percent difference, Z = -2.50, p value = 0.006; MEPS vs. 1993 NHIS: -29.8 percent difference, Z = 16.71, p value < 0.001). MEPS is the least susceptible to intrusion, telescoping, and social desirability. Possible explanations for the discrepancies include different reference periods, lead-in statements, question formats, and social desirability of responses. Choice of survey should depend on the hypothesis. If trends are necessary, choice of survey should not matter; if health status or expenditure associations are necessary, then surveys that contain these variables should be used; and if accurate overall estimates are necessary, then MEPS should be used. A validation study should be conducted to establish "true" utilization estimates.
The Utility of Selection for Military and Civilian Jobs
1989-07-01
parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small...ability, spatial ability, introversion, anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not
Utilization of bone impedance for age estimation in postmortem cases.
Ishikawa, Noboru; Suganami, Hideki; Nishida, Atsushi; Miyamori, Daisuke; Kakiuchi, Yasuhiro; Yamada, Naotake; Wook-Cheol, Kim; Kubo, Toshikazu; Ikegaya, Hiroshi
2015-11-01
In the field of forensic medicine, the number of unidentified cadavers has increased due to natural disasters and international terrorism, and age estimation is very important for identification of the victims. The degree of sagittal suture closure is one such age estimation method; however, it is not widely accepted as a reliable method. In this study, we examined whether the impedance values (z-values) of the sagittal suture of the skull are related to age in men and women, and we discuss the possibility of using bone impedance for age estimation. Bone impedance values increased with age and decreased after the age of 64.5. We then compared age estimation by the conventional visual method with the proposed bone impedance measurement technique. The results suggest that the bone impedance measuring technique may be of value to forensic science as a method of age estimation. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
An Australian discrete choice experiment to value eq-5d health states.
Viney, Rosalie; Norman, Richard; Brazier, John; Cronin, Paula; King, Madeleine T; Ratcliffe, Julie; Street, Deborah
2014-06-01
Conventionally, generic quality-of-life health states, defined within multi-attribute utility instruments, have been valued using a Standard Gamble or a Time Trade-Off. Both are grounded in expected utility theory but impose strong assumptions about the form of the utility function. Preference elicitation tasks for both are complicated, limiting the number of health states that each respondent can value and, therefore, that can be valued overall. The usual approach has been to value a set of the possible health states and impute values for the remainder. Discrete Choice Experiments (DCEs) offer an attractive alternative, allowing investigation of more flexible specifications of the utility function and greater coverage of the response surface. We designed a DCE to obtain values for EQ-5D health states and implemented it in an Australia-representative online panel (n = 1,031). A range of specifications investigating non-linear preferences with respect to time and interactions between EQ-5D levels were estimated using a random-effects probit model. The results provide empirical support for a flexible utility function, including at least some two-factor interactions. We then constructed a preference index such that full health and death were valued at 1 and 0, respectively, to provide a DCE-based algorithm for Australian cost-utility analyses. Copyright © 2013 John Wiley & Sons, Ltd.
Models based on value and probability in health improve shared decision making.
Ortendahl, Monica
2008-10-01
Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. In clinical decision making, both value and probability are usually estimates; therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, as usually pertains in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially at every step of the decision-making process. Introducing decision-analytic modelling gives a more complete picture of the variables that influence the decisions made by the doctor and the patient. A model revised to reflect the values and probabilities perceived by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1994-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which in turn requires accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach: first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of increasingly complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found.
Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples. The design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures. The results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used to first estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.
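The core estimation idea described above — combining a mathematical model of the system with measured temperature data and minimizing the sum of squared residuals — can be sketched with a deliberately simplified model. The lumped-capacitance cooling law, the time constant, and all numbers below are illustrative stand-ins, not the LaRC experiments:

```python
# Generic illustration (not the LaRC procedure): statistically based property
# estimation fits a model T(t) = T_inf + (T0 - T_inf) * exp(-t / tau) to
# measured temperatures by least squares. The time constant tau bundles the
# thermal properties being sought; the "measurements" here are synthetic.

import math

T_INF, T0 = 20.0, 100.0   # ambient and initial temperatures, deg C
TRUE_TAU = 50.0           # seconds; the value the fit should recover

times = [10.0 * i for i in range(11)]
measured = [T_INF + (T0 - T_INF) * math.exp(-t / TRUE_TAU) for t in times]

def sse(tau):
    """Sum of squared residuals between model and measured temperatures."""
    return sum((T_INF + (T0 - T_INF) * math.exp(-t / tau) - m) ** 2
               for t, m in zip(times, measured))

# Grid search over candidate time constants, a stand-in for a
# gradient-based minimizer such as Gauss-Newton.
best = min((tau / 10.0 for tau in range(100, 1001)), key=sse)
print(best)  # recovers 50.0
```

With noisy rather than synthetic data, the curvature of the residual surface around the minimum is what yields the confidence regions mentioned in the abstract.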
Manca, Andrea; Hawkins, Neil; Sculpher, Mark J
2005-05-01
In trial-based cost-effectiveness analysis baseline mean utility values are invariably imbalanced between treatment arms. A patient's baseline utility is likely to be highly correlated with their quality-adjusted life-years (QALYs) over the follow-up period, not least because it typically contributes to the QALY calculation. Therefore, imbalance in baseline utility needs to be accounted for in the estimation of mean differential QALYs, and failure to control for this imbalance can result in a misleading incremental cost-effectiveness ratio. This paper discusses the approaches that have been used in the cost-effectiveness literature to estimate absolute and differential mean QALYs alongside randomised trials, and illustrates the implications of baseline mean utility imbalance for QALY calculation. Using data from a recently conducted trial-based cost-effectiveness study and a micro-simulation exercise, the relative performance of alternative estimators is compared, showing that widely used methods to calculate differential QALYs provide incorrect results in the presence of baseline mean utility imbalance regardless of whether these differences are formally statistically significant. It is demonstrated that multiple regression methods can be usefully applied to generate appropriate estimates of differential mean QALYs and an associated measure of sampling variability, while controlling for differences in baseline mean utility between treatment arms in the trial. Copyright 2004 John Wiley & Sons, Ltd
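The regression adjustment the paper advocates can be illustrated with a toy ANCOVA-style calculation (synthetic numbers, not the trial data): the naive mean QALY difference is corrected by subtracting the pooled within-arm slope of QALYs on baseline utility multiplied by the baseline imbalance.

```python
# Illustrative sketch with made-up data: adjusting differential QALYs for
# baseline utility imbalance. The pooled within-group slope of QALYs on
# baseline utility corrects the naive between-arm mean difference.

def mean(xs):
    return sum(xs) / len(xs)

def adjusted_qaly_difference(base_t, qaly_t, base_c, qaly_c):
    """Return (adjusted, naive) treatment-control QALY differences."""
    num = den = 0.0
    for base, qaly in ((base_t, qaly_t), (base_c, qaly_c)):
        bx, by = mean(base), mean(qaly)
        num += sum((x - bx) * (y - by) for x, y in zip(base, qaly))
        den += sum((x - bx) ** 2 for x in base)
    slope = num / den                       # pooled within-arm slope
    naive = mean(qaly_t) - mean(qaly_c)
    imbalance = mean(base_t) - mean(base_c)
    return naive - slope * imbalance, naive

base_c = [0.5, 0.6, 0.7, 0.8]
qaly_c = [0.60, 0.68, 0.76, 0.84]
base_t = [0.6, 0.7, 0.8, 0.9]   # treatment arm starts 0.10 higher at baseline
qaly_t = [0.78, 0.86, 0.94, 1.02]

adjusted, naive = adjusted_qaly_difference(base_t, qaly_t, base_c, qaly_c)
print(round(naive, 3), round(adjusted, 3))
```

In this constructed example the naive difference of 0.18 QALYs overstates the true treatment effect of 0.10 because 0.08 of it is attributable to the baseline imbalance, which is the bias mechanism the paper describes.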
Toward Empirical Estimation of the Total Value of Protecting Rivers
NASA Astrophysics Data System (ADS)
Sanders, Larry D.; Walsh, Richard G.; Loomis, John B.
1990-07-01
The purpose of this paper is to develop and apply a procedure to estimate a statistical demand function for the protection of rivers in the Rocky Mountains of Colorado. Other states and nations around the world face a similar problem of estimating how much they can afford to pay for the protection of rivers. The results suggest that in addition to the direct consumption benefits of onsite recreation, total value includes offsite consumption of the flow of information about these activities and resources consumed as preservation benefits. A sample of the general population of the state reported a willingness to pay rather than forgo both types of utility. We recommend that offsite values be added to the value of onsite recreation use to determine the total value of rivers to society.
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
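A minimal sketch of the weighted averages approach, using one made-up chemical gradient (BOD) and three hypothetical taxa rather than the study's six constituents and observed macroinvertebrate data: each taxon's optimum is its abundance-weighted mean position along the gradient, and the optima are then rescaled onto the 0-10 tolerance range.

```python
# Hypothetical sketch of the weighted averages approach. A taxon abundant
# where BOD is low gets a low (sensitive) tolerance value; a taxon abundant
# where BOD is high gets a high (tolerant) one. All numbers are invented.

bod = [1.0, 2.0, 5.0, 8.0]   # BOD (mg/L) at four illustrative sites

abundance = {
    "sensitive_taxon":    [10, 5, 1, 0],
    "intermediate_taxon": [2, 5, 5, 2],
    "tolerant_taxon":     [0, 1, 5, 10],
}

def weighted_optimum(counts, gradient):
    """Abundance-weighted mean of the gradient values."""
    return sum(n * g for n, g in zip(counts, gradient)) / sum(counts)

optima = {taxon: weighted_optimum(counts, bod)
          for taxon, counts in abundance.items()}

# Rescale the optima so tolerance values span 0 (most sensitive)
# to 10 (most tolerant), matching the Ping tolerance value range.
lo, hi = min(optima.values()), max(optima.values())
tolerance = {taxon: 10 * (u - lo) / (hi - lo) for taxon, u in optima.items()}

for taxon, tv in sorted(tolerance.items(), key=lambda kv: kv[1]):
    print(f"{taxon}: {tv:.2f}")
```

In the actual study the same averaging would be repeated over each of the six chemical constituents before combining, but the single-gradient case shows the mechanics.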
Unchained Melody: Revisiting the Estimation of SF-6D Values
Craig, Benjamin M.
2015-01-01
Purpose In the original SF-6D valuation study, the analytical design inherited conventions that detrimentally affected its ability to predict values on a quality-adjusted life year (QALY) scale. Our objective is to estimate UK values for SF-6D states using the original data and multi-attribute utility (MAU) regression after addressing its limitations and to compare the revised SF-6D and EQ-5D value predictions. Methods Using the unaltered data (611 respondents, 3503 SG responses), the parameters of the original MAU model were re-estimated under 3 alternative error specifications, known as the instant, episodic, and angular random utility models. Value predictions on a QALY scale were compared to EQ-5D3L predictions using the 1996 Health Survey for England. Results Contrary to the original results, the revised SF-6D value predictions range below 0 QALYs (i.e., worse than death) and agree largely with EQ-5D predictions after adjusting for scale. Although a QALY is defined as a year in optimal health, the SF-6D sets a higher standard for optimal health than the EQ-5D-3L; therefore, it has larger units on a QALY scale by construction (20.9% more). Conclusions Much of the debate in health valuation has focused on differences between preference elicitation tasks, sampling, and instruments. After correcting errant econometric practices and adjusting for differences in QALY scale between the EQ-5D and SF-6D values, the revised predictions demonstrate convergent validity, making them more suitable for UK economic evaluations compared to original estimates. PMID:26359242
Shrinkage regression-based methods for microarray missing value imputation.
Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng
2013-01-01
Missing values commonly occur in microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
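A toy single-predictor sketch of the pipeline described above (the published methods use many similar genes and a principled shrinkage estimator; the expression data and the shrinkage factor here are invented): select the complete gene most correlated with the target by Pearson correlation, fit least squares on the observed samples, shrink the slope, and impute the missing entry.

```python
# Illustrative sketch of shrinkage regression-based imputation for a single
# missing microarray entry. Toy data, one predictor gene, and an assumed
# shrinkage factor of 0.9 stand in for the full multi-gene procedure.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def impute(target, candidates, missing_idx, shrink=0.9):
    obs = [i for i in range(len(target)) if i != missing_idx]
    y = [target[i] for i in obs]
    # Select the candidate gene most correlated with the target (observed samples).
    best = max(candidates, key=lambda g: abs(pearson([g[i] for i in obs], y)))
    x = [best[i] for i in obs]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    b *= shrink                  # shrink the least squares slope toward zero
    a = my - b * mx              # refit the intercept after shrinkage
    return a + b * best[missing_idx]

target = [1.0, 2.0, 3.0, None, 5.0]   # expression profile with one missing value
gene_a = [1.1, 2.0, 3.1, 3.9, 5.0]    # nearly collinear with the target
gene_b = [5.0, 1.0, 4.0, 2.0, 3.0]    # weakly related

value = impute(target, [gene_a, gene_b], missing_idx=3)
print(round(value, 3))
```

Shrinking the slope trades a little bias for reduced variance, which is the rationale for the shrinkage estimation approach the abstract describes.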
Doble, Brett; Lorgelly, Paula
2016-04-01
To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms estimated via response mapping and ordinary least-squares regression using dummy variables performed well on a number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across ten tumour type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
Kirkham, Amy A; Pauhl, Katherine E; Elliott, Robyn M; Scott, Jen A; Doria, Silvana C; Davidson, Hanan K; Neil-Sztramko, Sarah E; Campbell, Kristin L; Camp, Pat G
2015-01-01
To determine the utility of equations that use 6-minute walk test (6MWT) results to estimate peak oxygen uptake (V̇O2) and peak work rate in patients with chronic obstructive pulmonary disease (COPD) in a clinical setting. This study included a systematic review to identify published equations estimating peak V̇O2 and peak work rate in watts in COPD patients, and a retrospective chart review of data from a hospital-based pulmonary rehabilitation program. The following variables were abstracted from the records of 42 consecutively enrolled COPD patients: measured peak V̇O2 and peak work rate achieved during a cycle ergometer cardiopulmonary exercise test, 6MWT distance, age, sex, weight, height, forced expiratory volume in 1 second, forced vital capacity, and lung diffusion capacity. Peak V̇O2 and peak work rate were estimated from 6MWT distance using the published equations. The error associated with using estimated peak V̇O2 or peak work rate to prescribe aerobic exercise intensities of 60% and 80% was calculated. Eleven equations from 6 studies were identified. Agreement between estimated and measured values was poor to moderate (intraclass correlation coefficients = 0.11-0.63). The error associated with using estimated peak V̇O2 or peak work rate to prescribe exercise intensities of 60% and 80% of measured values ranged from mean differences of 12 to 35 and 16 to 47 percentage points, respectively. There is poor to moderate agreement between measured peak V̇O2 and peak work rate and estimations from equations that use 6MWT distance, and use of the estimated values for prescription of aerobic exercise intensity would result in large error. Equations estimating peak V̇O2 and peak work rate from the 6MWT are therefore of low utility for prescribing exercise intensity in pulmonary rehabilitation programs.
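The practical consequence of estimation error can be seen with simple arithmetic. The equation coefficients and patient values below are invented for illustration, not taken from any of the six reviewed studies:

```python
# Toy illustration of how error in an estimated peak work rate propagates into
# exercise prescription: prescribing at 60% of an overestimated peak delivers
# a higher-than-intended fraction of the patient's true measured peak.

def estimated_peak_watts(distance_m):
    # Hypothetical linear 6MWT equation of the general form a + b * distance.
    return -20.0 + 0.2 * distance_m

measured_peak = 60.0   # watts, from a cycle ergometer cardiopulmonary test
distance = 450.0       # 6MWT distance in metres

estimated_peak = estimated_peak_watts(distance)   # overestimates the true peak
prescribed = 0.60 * estimated_peak                # intended 60% intensity
actual_fraction = prescribed / measured_peak      # realised intensity

error_points = (actual_fraction - 0.60) * 100     # percentage-point error
print(f"prescribed {prescribed:.0f} W = {actual_fraction:.0%} of measured peak "
      f"({error_points:.0f} percentage points high)")
```

Here a 10 W overestimate of peak work rate turns an intended 60% prescription into 70% of the true peak, the kind of 12-47 percentage-point discrepancy the chart review quantified.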
Environmental degradation and remediation: is economics part of the problem?
Dore, Mohammed H I; Burton, Ian
2003-01-01
It is argued that standard environmental economics and 'ecological economics' have the same fundamentals of valuation in terms of money, based on a demand curve derived from utility maximization. But this approach leads to three different measures of value, and an invariant measure of value exists only if the consumer has 'homothetic preferences'. In order to obtain a numerical estimate of value, specific functional forms are necessary, but typically these estimates do not converge, because the underlying economic model is not structurally stable. According to neoclassical economics, any environmental remediation can be justified only in terms of increases in consumer satisfaction, balancing marginal gains against marginal costs. It is not surprising that the optimal policy obtained from this approach suggests only small reductions in greenhouse gases. We show that a unidimensional metric of consumer's utility measured in dollar terms can only trivialize the problem of global climate change.
2012-01-01
Background To estimate utility values for different levels of migraine pain severity from a United Kingdom (UK) sample of migraineurs. Methods One hundred and six migraineurs completed the EQ-5D to evaluate their health status for mild, moderate and severe levels of migraine pain severity for a recent migraine attack, and for current health defined as health status within seven days post-migraine attack. Statistical tests were used to evaluate differences in mean utility scores by migraine severity. Results Utility scores for each health state were significantly different from 1.0 (no problems on any EQ-5D dimension) (p < 0.0001) and one another (p < 0.0001). The lowest mean utility, -0.20 (95% confidence interval [CI]: -0.27 to -0.13), was for severe migraine pain. The smallest difference in mean utility was between mild and moderate migraine pain (0.13) and the largest difference in mean utility was between current health (without migraine) and severe migraine pain (1.07). Conclusions Results indicate that all levels of migraine pain are associated with significantly reduced utility values. As severity worsened, utility decreased, and severe migraine pain was considered a health state worse than death. Results can be used in cost-utility models examining the relative economic value of therapeutic strategies for migraine in the UK. PMID:22691697
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kihm, Steve; Satchwell, Andrew; Cappers, Peter
This technical brief identifies conditions under which utility regulators should consider implementing policy approaches that seek to mitigate negative outcomes due to an increase in interest rates. Interest rates are a key factor in determining a utility's cost of equity, and investors find value when returns exceed the cost of equity. Through historical observations of periods of rising and falling interest rates and application of a pro forma financial tool, we identify the key drivers of utility stock valuations and estimate the degree to which those valuations might be affected by increasing interest rates. We also analyze the efficacy of responses by utility regulators to mitigate potential negative financial impacts. We find that regulators have several possible approaches to mitigate a decline in value in an environment of increasing interest rates, though regulators must weigh the tradeoffs of improving investor value with potential increases in customer costs. Furthermore, the range of approaches reflects today's many different electric utility regulatory models, and regulatory responses to a decline in investor value will fit within state-specific models.
Valuation Diagramming and Accounting of Transactive Energy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Hammerstrom, Donald J.; Huang, Qiuhua
Transactive energy (TE) systems support both economic and technical objectives of a power system, including efficiency and reliability. TE systems utilize value-driven mechanisms to coordinate and balance responsive supply and demand in the power system. Economic performance of TE systems cannot be assessed without estimating their value. Estimating the potential value of transactive energy systems requires a systematic valuation methodology that can capture value exchanges among different stakeholders (i.e., actors) and ultimately estimate the impact of one TE design and compare it against another. Such a methodology can help decision makers choose the alternative that results in preferred outcomes. This paper presents a valuation methodology developed to assess the value of TE systems. A TE use-case example is discussed, and metrics identified in the valuation process are quantified using a TE simulation program.
Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb
2014-10-01
Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.
Brazier, John; Rowen, Donna; Karimi, Milad; Peasgood, Tessa; Tsuchiya, Aki; Ratcliffe, Julie
2017-10-11
In the estimation of population value sets for health state classification systems such as the EuroQol five dimensions questionnaire (EQ-5D), there is increasing interest in asking respondents to value their own health state, sometimes referred to as "experience-based utility values" or, more correctly, own rather than hypothetical health states. Own health state values differ from hypothetical health state values, and this difference may be attributable to many reasons. This paper critically examines whose values matter; why there is a difference between own and hypothetical values; how to measure own health state values; and why to use own health state values. Finally, the paper examines other ways that own health state values can be taken into account, such as the use of informed general population preferences that may better reflect experience-based values.
Mapping of the DLQI scores to EQ-5D utility values using ordinal logistic regression.
Ali, Faraz Mahmood; Kay, Richard; Finlay, Andrew Y; Piguet, Vincent; Kupfer, Joerg; Dalgard, Florence; Salek, M Sam
2017-11-01
The Dermatology Life Quality Index (DLQI) and the European Quality of Life-5 Dimension (EQ-5D) are separate measures that may be used to gather health-related quality of life (HRQoL) information from patients. The EQ-5D is a generic measure from which health utility estimates can be derived, whereas the DLQI is a specialty-specific measure to assess HRQoL. To reduce the burden of administering multiple measures and to enable a more disease-specific calculation of health utility estimates, we explored an established mathematical technique, ordinal logistic regression (OLR), to develop a model mapping DLQI data to EQ-5D-based health utility estimates. Retrospective data from 4010 patients were randomly divided five times into two groups for the derivation and testing of the mapping model. Split-half cross-validation was utilized, resulting in a total of ten ordinal logistic regression models for each of the five EQ-5D dimensions against age, sex, and all ten items of the DLQI. Using Monte Carlo simulation, predicted health utility estimates were derived and compared against those observed. This method was repeated for both OLR and a previously tested mapping methodology based on linear regression. The model was shown to be highly predictive, and its repeated fitting demonstrated a stable model using OLR as well as linear regression. The mean differences between OLR-predicted and observed health utility estimates ranged from 0.0024 to 0.0239 across the ten modeling exercises, with an average overall difference of 0.0120 (a 1.6% underestimate, not of clinical importance). The modeling framework developed in this study will enable researchers to calculate EQ-5D health utility estimates from a specialty-specific study population, reducing patient and economic burden.
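The second stage of such a mapping — turning per-dimension level probabilities into an expected utility by Monte Carlo simulation — can be sketched as below. The tariff decrements and fitted probabilities are illustrative placeholders, not the published UK tariff or the study's OLR output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (not the published UK) tariff decrements for levels 2 and 3
# on the five EQ-5D dimensions; level 1 carries no decrement.
decrements = {
    "mobility":   (0.069, 0.314),
    "self-care":  (0.104, 0.214),
    "activities": (0.036, 0.094),
    "pain":       (0.123, 0.386),
    "anxiety":    (0.071, 0.236),
}
CONSTANT = 0.081  # applied once if any dimension is above level 1

# Hypothetical fitted OLR probabilities P(level 1..3) for one patient profile.
probs = {dim: [0.6, 0.3, 0.1] for dim in decrements}

def expected_tariff(n_draws=5000):
    """Monte Carlo expectation of the tariff score over sampled level profiles."""
    total = 0.0
    for _ in range(n_draws):
        score, any_problem = 1.0, False
        for dim, (d2, d3) in decrements.items():
            level = rng.choice(3, p=probs[dim])  # 0-indexed severity level
            if level == 1:
                score -= d2; any_problem = True
            elif level == 2:
                score -= d3; any_problem = True
        if any_problem:
            score -= CONSTANT
        total += score
    return total / n_draws

e_tariff = expected_tariff()
```

Sampling whole level profiles, rather than plugging in expected levels, preserves the nonlinearity introduced by the any-problem constant.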
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
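Because the semi-nonparametric likelihood nests the MNL/MDCEV model, the comparison reduces to a standard likelihood-ratio test. A minimal sketch, using hypothetical log-likelihoods and assuming the flexible model adds two distributional parameters (so the statistic is chi-square with 2 degrees of freedom):

```python
import math

def lr_test_stat(loglik_restricted, loglik_general):
    """Likelihood-ratio statistic for a restricted model (e.g., standard Gumbel
    error terms) nested within a more flexible semi-nonparametric model."""
    return 2.0 * (loglik_general - loglik_restricted)

def chi2_sf_2df(x):
    """Survival function of the chi-square distribution with 2 degrees of
    freedom (two extra distributional parameters): P(X > x) = exp(-x / 2)."""
    return math.exp(-x / 2.0)

# Hypothetical log-likelihoods: the flexible error distribution fits better.
stat = lr_test_stat(-1540.2, -1528.7)  # 2 * 11.5 = 23.0
p_value = chi2_sf_2df(stat)            # reject the Gumbel restriction if < 0.05
```

With more extra parameters one would use the general chi-square survival function (e.g., `scipy.stats.chi2.sf`); the 2-df case has this simple closed form.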
Kharroubi, Samer A; O'Hagan, Anthony; Brazier, John E
2010-07-10
Cost-effectiveness analysis of alternative medical treatments relies on having a measure of effectiveness, and many regard the quality adjusted life year (QALY) to be the current 'gold standard.' In order to compute QALYs, we require a suitable system for describing a person's health state, and a utility measure to value the quality of life associated with each possible state. There are a number of different health state descriptive systems, and we focus here on one known as the EQ-5D. Data for estimating utilities for different health states have a number of features that mean care is necessary in statistical modelling. There is interest in the extent to which valuations of health may differ between different countries and cultures, but few studies have compared preference values of health states obtained from different countries. This article applies a nonparametric model to estimate and compare EQ-5D health state valuation data obtained from two countries using Bayesian methods. The data set is the US and UK EQ-5D valuation studies where a sample of 42 states defined by the EQ-5D was valued by representative samples of the general population from each country using the time trade-off technique. We estimate a utility function across both countries which explicitly accounts for the differences between them, and is estimated using the data from both countries. The article discusses the implications of these results for future applications of the EQ-5D and for further work in this field. Copyright 2010 John Wiley & Sons, Ltd.
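The time trade-off elicitation underlying these valuation data reduces to a simple ratio at the point of indifference. A minimal sketch with a hypothetical respondent:

```python
def tto_utility(years_full_health, years_in_state):
    """Time trade-off: at the point of indifference between x years in full
    health and t years in the health state, the state's utility is x / t."""
    return years_full_health / years_in_state

# Hypothetical respondent: indifferent between 7 years in full health
# and 10 years in the EQ-5D state being valued.
u = tto_utility(7.0, 10.0)
```

States considered worse than dead require a modified protocol and yield negative values, which is one of the data features that complicates statistical modelling.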
Smoking as a decision among pregnant and non-pregnant women.
Ortendahl, Monica
2006-10-01
The purpose was to examine values and beliefs related to smoking, and to test the validity of a decision model based on the product of the value of smoking-related events and states and the belief that these will occur (labeled Expected Utility, or EU, in decision research). Over a two-week period, eighty women, divided into subgroups of pregnant vs. non-pregnant women and those intending vs. not intending to quit smoking, evaluated values and beliefs for the two conditions of quitting and not quitting smoking. For both pregnant and non-pregnant women, the expected utility of smoking was negative. Of the four groups, pregnant women not intending to quit smoking estimated the expected utility of smoking as least negative. A decision-analytic approach is applicable to describing the addictive behavior of smoking. Values as well as beliefs about smoking should be stressed in smoking cessation programs, especially among pregnant women.
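The expected-utility product the model rests on can be computed directly. The outcomes, beliefs, and value scale below are hypothetical, chosen only to show the arithmetic:

```python
def expected_utility(outcomes):
    """Expected utility: sum of (subjective probability x value) over outcomes."""
    return sum(belief * value for belief, value in outcomes)

# Hypothetical evaluation by one respondent for "continue smoking":
# (belief that the event occurs, value of the event on a -10..10 scale)
smoking = [(0.7, -6.0),   # health deterioration
           (0.9,  2.0),   # short-term stress relief
           (0.4, -8.0)]   # harm to the fetus
eu_smoking = expected_utility(smoking)  # negative overall
```

Comparing this sum for the quitting and not-quitting conditions is what the study's decision model does across its four subgroups.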
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
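For context, the sample-size dependence the authors correct for also affects simpler nonparametric estimators from the same family as ACE, such as Chao1. A minimal sketch of Chao1 (not the authors' curve-fitting procedure), with toy clone-library counts:

```python
def chao1(abundances):
    """Chao1 nonparametric richness estimate: S_obs + F1^2 / (2 * F2),
    where F1 and F2 are the counts of singleton and doubleton species."""
    s_obs = len(abundances)
    f1 = sum(1 for a in abundances if a == 1)
    f2 = sum(1 for a in abundances if a == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected form
    return s_obs + f1 * f1 / (2.0 * f2)

# Toy clone counts: many rare taxa push the estimate above the observed count.
counts = [1, 1, 1, 1, 2, 2, 3, 5, 8, 13]
richness = chao1(counts)  # 10 observed + 4^2 / (2 * 2) = 14.0
```

Because F1 and F2 keep changing as more clones are sequenced, such estimates grow with library size — the bias the proposed extrapolation approach is designed to remove.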
Health Auctions: a Valuation Experiment (HAVE) study protocol.
Kularatna, Sanjeewa; Petrie, Dennis; Scuffham, Paul A; Byrnes, Joshua
2016-04-07
Quality-adjusted life years are derived using health state utility weights, which adjust for the relative value of living in each health state compared with living in perfect health. Various techniques are used to estimate health state utility weights, including time trade-off and standard gamble. These methods have exhibited limitations in terms of complexity, validity and reliability. A new composite approach using experimental auctions to value health states is introduced in this protocol. A pilot study will test the feasibility and validity of using experimental auctions to value health states in monetary terms. A convenience sample (n=150) from a population of university staff and students will be invited to participate in 30 auction sets with a group of 5 people in each set. The 9 health states auctioned in each auction set will come from the commonly used EQ-5D-3L instrument. Each participant may purchase at most 2 health states, and the participant who acquires the 2 'best' health states on average will keep the amount of money they do not spend in acquiring those health states. The value (highest bid and average bid) of each of the 24 health states will be compared across auctions to test for reliability across auction groups and across auctioneers. A test-retest will be conducted for 10% of the sample to assess the reliability of responses to health state auctions. The feasibility of conducting experimental auctions to value health states will also be examined. The validity of estimated health state values will be compared with published utility estimates from other methods. This pilot study will explore the feasibility, reliability and validity of using experimental auctions for valuing health states. Ethical clearance was obtained from the Griffith University ethics committee. The results will be disseminated in peer-reviewed journals and at major international conferences. Published by the BMJ Publishing Group Limited.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagnon, Pieter; Barbose, Galen L.; Stoll, Brady
Misforecasting the adoption of customer-owned distributed photovoltaics (DPV) can have operational and financial implications for utilities; forecasting capabilities can be improved, but generally at a cost. This paper informs this decision-space by using a suite of models to explore the capacity expansion and operation of the Western Interconnection over a 15-year period across a wide range of DPV growth rates and misforecast severities. The system costs under a misforecast are compared against the costs under a perfect forecast, to quantify the costs of misforecasting. Using a simplified probabilistic method applied to these modeling results, an analyst can make a first-order estimate of the financial benefit of improving a utility’s forecasting capabilities, and thus be better informed about whether to make such an investment. For example, under our base assumptions, a utility with 10 TWh per year of retail electric sales who initially estimates that DPV growth could range from 2% to 7.5% of total generation over the next 15 years could expect total present-value savings of approximately $4 million if they could reduce the severity of misforecasting to within ±25%. Utility resource planners can compare those savings against the costs needed to achieve that level of precision, to guide their decision on whether to make an investment in tools or resources.
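The first-order estimate described above amounts to a probability-weighted cost difference across growth scenarios. A minimal sketch; all probabilities and present-value costs below are illustrative, not the paper's modeled results:

```python
# Hypothetical first-order estimate of the value of better DPV forecasting:
# probability-weighted present-value cost difference between operating under
# a misforecast and under an improved forecast.
scenarios = [
    # (probability, PV of system costs with misforecast $M,
    #  PV of system costs with improved forecast $M)
    (0.25, 130.0, 126.0),
    (0.50, 210.0, 205.0),
    (0.25, 300.0, 297.0),
]
expected_savings = sum(p * (c_mis - c_better)
                       for p, c_mis, c_better in scenarios)
# Compare this expected savings ($M) against the cost of the investment
# in improved forecasting tools or resources.
```

The planner invests when the expected savings exceed the cost of achieving the tighter forecast band.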
NASA Astrophysics Data System (ADS)
Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena; Uspensky, Sergey
2014-05-01
At present, physical-mathematical modeling of water and heat exchange between vegetation-covered land surfaces and the atmosphere is the most appropriate method for describing the formation of the water and heat regime over large territories. The developed model of these processes (Land Surface Model, LSM) is intended for calculating evaporation, transpiration by vegetation, soil water content, and other water and heat regime characteristics, as well as the depth distributions of soil temperature and moisture, utilizing satellite remote sensing data on the land surface and meteorological conditions. The model parameters are soil and vegetation characteristics, and its input variables are meteorological characteristics. Their values have been determined from ground-based observations or from satellite measurements by the radiometers AVHRR/NOAA, MODIS/EOS Terra and Aqua, and SEVIRI/Meteosat-9 and -10. The case study covers part of the agricultural Central Black Earth region (49.5-54 deg. N, 31-43 deg. E; total area 227,300 km2), located in the steppe-forest zone of European Russia, for the vegetation seasons of 2009-2012. From AVHRR data, estimates have been derived for three types of land surface temperature (LST): land surface skin temperature Tsg, air-foliage temperature Ta, and effective radiation temperature Ts.eff, as well as for emissivity E, the normalized difference vegetation index NDVI, vegetation cover fraction B, leaf area index LAI, cloudiness, and precipitation. From MODIS data, estimates of LST Tls, E, NDVI, and LAI have been obtained. The SEVIRI data have been used to build estimates of Tls, Ta, E, LAI, and precipitation. The previously developed method and technology for producing the above AVHRR-derived estimates have been improved and adapted to the study area.
To check the reliability of the Ts.eff and Ta estimates for these seasons, the error statistics of their determination have been analyzed through comparison with observations at agricultural meteorological stations of the study region. The MODIS-based remote sensing products for the same vegetation seasons have been built using data downloaded from the LP DAAC (NASA) website. The reliability of the MODIS-derived Tls estimates has been confirmed by comparison with similar estimates from synchronous AVHRR, SEVIRI, and ground-based data. To retrieve Tls and E from SEVIRI data at daytime and nighttime, a method and technology have been developed for thematic processing of these data in IR channels 9 and 10 (10.8 and 12.0 um) at three successive times under cloud-free conditions, without using exact values of E. This technology has also been adapted to the study area. The reliability of the Tls estimation has been analyzed by comparison with synchronous SEVIRI-derived Tls estimates obtained at the Land Surface Analysis Satellite Applications Facility (LSA SAF, Lisbon, Portugal) and with MODIS-derived Tls estimates. In the first comparison, daily- or monthly-averaged RMS deviations did not exceed 2 deg. C for various dates and months during the 2009-2012 vegetation seasons. The RMS deviation of Tls(SEVIRI) from Tls(MODIS) was in the range of 1.0-3.0 deg. C. A method and technology have also been developed and tested to derive Ta values from SEVIRI data at daytime and nighttime, based on satellite-derived estimates of Tls and a regression relationship between Tls and ground-measured values of Ta. Comparison of satellite-based Ta estimates with synchronous standard-term ground-based observations at the meteorological station network of the study area for the summer periods of 2009-2012 gives RMS deviations in the range of 1.8-3.0 deg. C.
The archive of satellite products has also been supplemented with an array of LAI estimates retrieved from SEVIRI data at LSA SAF for the study area and the 2011-2012 growing seasons. The developed Multi-Threshold Method (MTM) is shown to be usable for generating AVHRR- and SEVIRI-based estimates of daily and monthly precipitation amounts for the region of interest. The MTM provides cloud detection and identification of cloud types, estimation of maximum liquid water content and cloud layer water content, allocation of precipitation zones, and determination of instantaneous maximum precipitation intensities at the pixel scale around the clock throughout the year, independently of the land surface type. In developing procedures for utilizing satellite precipitation estimates in the model during the vegetation season, algorithms and programs have been built for the transition from estimated rainfall intensities to their daily values. The daily, monthly, and seasonal AVHRR- and SEVIRI-derived precipitation sums have been compared with similar values retrieved from network ground-based observations using a weighted interpolation procedure; the agreement of all three evaluations is satisfactory. To assimilate remote sensing products into the model, special techniques have been developed, including: 1) replacement of ground-measured model parameters LAI and B by their satellite-derived estimates.
The possibility of such replacement has been confirmed through comparisons of: a) the behavior of ground- and satellite-derived LAI values; b) modeled values of Ts and Tf, satellite-based estimates of Ts.eff, Tls, and Ta, and ground-based measurements of LST; c) modeled and measured values of soil water content W and evapotranspiration Ev; 2) utilization of satellite-derived values of the LSTs Ts.eff, Tls, and Ta, and of precipitation estimates, as input model variables instead of the respective ground-measured temperatures and rainfall when assessing the accuracy of soil water content, evapotranspiration, and soil temperature calculations; and 3) accounting for the spatial variability of satellite-based LAI, B, LST, and precipitation estimates by entering their area-distributed values into the model. For the 2009-2012 vegetation seasons, the water and heat regime characteristics of the region under investigation have been calculated utilizing satellite estimates of vegetation characteristics, LST, and precipitation in the model. The calculation results show that the discrepancies between evapotranspiration and soil water content values are within acceptable limits.
3-D transient hydraulic tomography in unconfined aquifers with fast drainage response
NASA Astrophysics Data System (ADS)
Cardiff, M.; Barrash, W.
2011-12-01
We investigate, through numerical experiments, the viability of three-dimensional transient hydraulic tomography (3DTHT) for identifying the spatial distribution of groundwater flow parameters (primarily, hydraulic conductivity K) in permeable, unconfined aquifers. To invert the large amount of transient data collected from 3DTHT surveys, we utilize an iterative geostatistical inversion strategy in which outer iterations progressively increase the number of data points fitted and inner iterations solve the quasi-linear geostatistical formulas of Kitanidis. In order to base our numerical experiments around realistic scenarios, we utilize pumping rates, geometries, and test lengths similar to those attainable during 3DTHT field campaigns performed at the Boise Hydrogeophysical Research Site (BHRS). We also utilize hydrologic parameters that are similar to those observed at the BHRS and in other unconsolidated, unconfined fluvial aquifers. In addition to estimating K, we test the ability of 3DTHT to estimate both average storage values (specific storage Ss and specific yield Sy) as well as spatial variability in storage coefficients. The effects of model conceptualization errors during unconfined 3DTHT are investigated including: (1) assuming constant storage coefficients during inversion and (2) assuming stationary geostatistical parameter variability. Overall, our findings indicate that estimation of K is slightly degraded if storage parameters must be jointly estimated, but that this effect is quite small compared with the degradation of estimates due to violation of "structural" geostatistical assumptions. Practically, we find for our scenarios that assuming constant storage values during inversion does not appear to have a significant effect on K estimates or uncertainty bounds.
Estimating the Cost-Effectiveness of Implementation: Is Sufficient Evidence Available?
Whyte, Sophie; Dixon, Simon; Faria, Rita; Walker, Simon; Palmer, Stephen; Sculpher, Mark; Radford, Stefanie
2016-01-01
Timely implementation of recommended interventions can provide health benefits to patients and cost savings to the health service provider. Effective approaches to increase the implementation of guidance are needed. Since investment in activities that improve implementation competes for funding against other health-generating interventions, it should be assessed in terms of its costs and benefits. In 2010, the National Institute for Health and Care Excellence released a clinical guideline recommending natriuretic peptide (NP) testing in patients with suspected heart failure. However, its implementation in practice was variable across the National Health Service in England. This study demonstrates the use of multi-period analysis together with diffusion curves to estimate the value of investing in implementation activities to increase uptake of NP testing. Diffusion curves were estimated based on historic data to produce predictions of future utilization. The value of an implementation activity (given its expected costs and effectiveness) was estimated. Both a static population and a multi-period analysis were undertaken. The value of implementation interventions encouraging the utilization of NP testing is shown to decrease over time as natural diffusion occurs. Sensitivity analyses indicated that the value of the implementation activity depends on its efficacy and on the population size. Value of implementation can help inform policy decisions of how to invest in implementation activities even in situations in which data are sparse. Multi-period analysis is essential to accurately quantify the time profile of the value of implementation given the natural diffusion of the intervention and the incidence of the disease. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
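The core mechanism — value of implementation shrinking as natural diffusion catches up — can be sketched with a logistic uptake curve. All parameters (ceiling, rate, population, per-patient net benefit, uptake boost) are hypothetical, not the NP-testing estimates:

```python
import math

CEILING = 0.9  # assumed long-run utilization share

def logistic_uptake(t, rate=0.5, midpoint=5.0):
    """Logistic diffusion curve: predicted utilization share at time t (years).
    Parameters are illustrative, not fitted to the NP-testing data."""
    return CEILING / (1.0 + math.exp(-rate * (t - midpoint)))

def value_of_implementation(pop, net_benefit_per_patient, boost, years):
    """Extra net benefit earned by an activity that lifts uptake by `boost`
    (capped at the ceiling), summed over the remaining diffusion path."""
    value = 0.0
    for t in range(years):
        natural = logistic_uptake(t)
        boosted = min(natural + boost, CEILING)
        value += pop * net_benefit_per_patient * (boosted - natural)
    return value

# The incremental gain shrinks as natural diffusion approaches the ceiling,
# which is why the value of an implementation activity declines over time.
v = value_of_implementation(pop=10000, net_benefit_per_patient=50.0,
                            boost=0.2, years=10)
```

Summing over years is the multi-period analysis; evaluating a single year would be the static-population case.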
Potential impacts of Brayton and Stirling cycle engines
NASA Astrophysics Data System (ADS)
Heft, R. C.
1980-11-01
Two engine technologies (Brayton cycle and Stirling cycle) are examined for their potential economic impact and fuel utilization. An economic analysis of the expected response of buyers to the attributes of the alternative engines was performed. Hedonic coefficients for vehicle fuel efficiency, performance and size were estimated for domestic cars based upon historical data. The marketplace value of the fuel efficiency enhancement provided by Brayton or Stirling engines was estimated. Under the assumptions of 10 years for plant conversions and 1990 and 1995 as the introduction dates for turbine and Stirling engines respectively, the comparative fuel savings and present value of the future savings in fuel costs were estimated.
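A hedonic analysis of this kind regresses price on vehicle attributes so that each coefficient gives the implicit marketplace value of one attribute unit. A minimal sketch on synthetic, noise-free data (all figures hypothetical, not the study's estimates):

```python
import numpy as np

# Hypothetical vehicle data: price regressed on fuel efficiency (mpg),
# performance (0-60 mph time, s), and size (interior volume, cu ft).
mpg  = np.array([18.0, 22.0, 27.0, 31.0, 35.0, 40.0])
perf = np.array([ 8.5,  9.5, 10.5, 11.5, 12.0, 13.0])
size = np.array([112., 104., 101.,  93.,  96.,  85.])
price = 4000 + 120 * mpg - 150 * perf + 25 * size  # synthetic, noise-free

# Hedonic regression: coefficients are implicit prices of the attributes.
X = np.column_stack([np.ones_like(mpg), mpg, perf, size])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
value_per_mpg = coef[1]  # marketplace value of one extra mpg
```

The marketplace value of, say, a 5 mpg efficiency gain from a Brayton or Stirling engine would then be `5 * value_per_mpg` per vehicle.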
A systematic review of health economic models and utility estimation methods in schizophrenia.
Németh, Bertalan; Fasseeh, Ahmad; Molnár, Anett; Bitter, István; Horváth, Margit; Kóczián, Kristóf; Götze, Árpád; Nagy, Balázs
2018-06-01
There is a growing need for economic evaluations describing the disease course, as well as the costs and clinical outcomes related to the treatment of schizophrenia. Areas covered: A systematic review of studies describing health economic models in schizophrenia and a targeted literature review on utility mapping algorithms in schizophrenia were carried out. Models found in the review were collated and assessed in detail according to their type and various other attributes. Fifty-nine studies were included in the review. Modeling techniques varied from simple decision trees to complex simulation models. The models used various clinical endpoints as value drivers: 47% of the models used quality-adjusted life years and 8% used disability-adjusted life years to measure benefits, while others applied various clinical outcomes. Most models considered patients switching between therapies, and therapeutic adherence, compliance or persistence. The targeted literature review identified four main approaches to map PANSS scores to utility values. Expert commentary: Health economic models developed for schizophrenia showed great variability, with simulation models becoming more frequently used in the last decade. Using PANSS scores as the basis of utility estimations is justifiable.
Benefit/cost comparison for utility SMES applications
NASA Astrophysics Data System (ADS)
Desteese, J. G.; Dagle, J. E.
1991-08-01
This paper summarizes eight case studies that account for the benefits and costs of superconducting magnetic energy storage (SMES) in system-specific utility applications. Four of these scenarios are hypothetical SMES applications in the Pacific Northwest, where relatively low energy costs impose a stringent test on the viability of the concept. The other four scenarios address SMES applications on high-voltage, direct-current (HVDC) transmission lines. While estimated SMES benefits are based on a previously reported methodology, this paper presents results of an improved cost-estimating approach that includes an assumed reduction in the cost of the power conditioning system (PCS) from approximately $160/kW to $80/kW. The revised approach results in all the SMES scenarios showing higher benefit/cost ratios than those reported earlier. However, in all but two cases, the value of any single benefit is still less than the unit's levelized cost. This suggests, as a general principle, that the total value of multiple benefits should always be considered if SMES is to appear cost effective in many utility applications. These results should offer utilities further encouragement to conduct more detailed analyses of SMES benefits in scenarios that apply to individual systems.
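The paper's central observation — a single benefit falls short of levelized cost, but stacked benefits can exceed it — is a simple ratio. The dollar figures below are illustrative, not the case-study results:

```python
def benefit_cost_ratio(benefit_values, levelized_cost):
    """Ratio of the total (stacked) value of SMES benefits to the unit's
    levelized cost, in consistent units (e.g., $/kW-yr)."""
    return sum(benefit_values) / levelized_cost

# Hypothetical $/kW-yr values for one utility scenario.
single  = benefit_cost_ratio([45.0], 60.0)              # one benefit alone
stacked = benefit_cost_ratio([45.0, 20.0, 10.0], 60.0)  # multiple benefits
```

Here the single benefit yields a ratio below 1 while the stacked total exceeds 1, mirroring the paper's conclusion that multiple benefits must be counted for SMES to appear cost effective.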
Yurimoto, Terumi; Hara, Shintaro; Isoyama, Takashi; Saito, Itsuro; Ono, Toshiya; Abe, Yusuke
2016-09-01
Estimation of pressure and flow has been an important subject in the development of implantable artificial hearts. To realize real-time viscosity-adjusted estimation of pressure head and pump flow for a total artificial heart, we propose a table estimation method with quasi-pulsatile modulation of a rotary blood pump, in which systolic high-flow and diastolic low-flow phases are generated. The table estimation method utilizes three kinds of tables: viscosity, pressure, and flow tables. Viscosity is estimated from the characteristic that the differential value in motor speed between the systolic and diastolic phases varies depending on viscosity. The potential of this estimation method was investigated using a mock circulation system. Glycerin solution diluted with salty water was used to adjust the viscosity of the fluid. In verification of this method using continuous flow data, fairly good estimation was possible when the differential pulse-width modulation (PWM) value of the motor between the systolic and diastolic phases was high. In estimation under quasi-pulsatile conditions, inertia correction was applied, and fairly good estimation was again possible when the differential PWM value was high, consistent with the verification results using continuous flow data. In the experiment on real-time estimation applying a moving-average method to the estimated viscosity, fair estimation was possible when the differential PWM value was high, showing that real-time viscosity-adjusted estimation of pressure head and pump flow would be possible with this novel estimation method when the differential PWM value is set high.
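The two-stage table lookup the method relies on — from the systolic/diastolic motor difference to viscosity, then from viscosity to pressure head — can be sketched with interpolation over calibration tables. All table values are hypothetical placeholders for the pump's measured calibration data:

```python
import numpy as np

# Hypothetical calibration tables for one pump operating point: differential
# PWM value between systolic and diastolic phases vs. fluid viscosity (mPa*s),
# and viscosity vs. pressure head (mmHg) at a fixed motor state.
pwm_diff_table  = np.array([10.0, 14.0, 18.0, 22.0])
viscosity_table = np.array([ 2.0,  2.8,  3.6,  4.4])
head_table      = np.array([60.0, 72.0, 84.0, 96.0])

def estimate(pwm_diff):
    """Two-stage table lookup: PWM difference -> viscosity -> pressure head."""
    visc = np.interp(pwm_diff, pwm_diff_table, viscosity_table)
    head = np.interp(visc, viscosity_table, head_table)
    return visc, head

visc, head = estimate(16.0)  # measurement midway between two table rows
```

In the real device a third table would map the same state to pump flow, and the moving average described above would smooth the viscosity estimate over time.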
Yeung, Kai; Basu, Anirban; Hansen, Ryan N; Watkins, John B; Sullivan, Sean D
2017-02-01
Value-based benefit design has been suggested as an effective approach to managing the high cost of pharmaceuticals in health insurance markets. Premera Blue Cross, a large regional health plan, implemented a value-based formulary (VBF) for pharmaceuticals in 2010 that explicitly used cost-effectiveness analysis (CEA) to inform medication copayments. The objective of the study was to determine the impact of the VBF. Interrupted time series of employer-sponsored plans from 2006 to 2013. Intervention group: 5235 beneficiaries exposed to the VBF. Control group: 11,171 beneficiaries in plans without any changes in pharmacy benefits. The VBF assigned medications with lower value (estimated by CEA) to higher copayment tiers and medications with higher value to lower copayment tiers. Primary outcome was medication expenditures from member, health plan, and member plus health plan perspectives. Secondary outcomes were medication utilization, emergency department visits, hospitalizations, office visits, and nonmedication expenditures. In the intervention group after VBF implementation, member medication expenditures increased by $2 per member per month (PMPM) [95% confidence interval (CI), $1-$3] or 9%, whereas health plan medication expenditures decreased by $10 PMPM (CI, $18-$2) or 16%, resulting in a net decrease of $8 PMPM (CI, $15-$2) or 10%, which translates to a net savings of $1.1 million. Utilization of medications moved into lower copayment tiers increased by 1.95 days' supply (CI, 1.29-2.62) or 17%. Total medication utilization, health services utilization, and nonmedication expenditures did not change. Cost-sharing informed by CEA reduced overall medication expenditures without negatively impacting medication utilization, health services utilization, or nonmedication expenditures.
Economic value of treating lumbar disc herniation in Brazil.
Falavigna, Asdrubal; Scheverin, Nicolas; Righesso, Orlando; Teles, Alisson R; Gullo, Maria Carolina; Cheng, Joseph S; Riew, K Daniel
2016-04-01
Lumbar discectomy is one of the most common surgical spine procedures. In order to understand the value of this surgical care, it is important to understand the costs to the health care system and patient for good results. The objective of this study was to evaluate, for the first time in Latin America, the cost-effectiveness of lumbar discectomy in terms of cost per quality-adjusted life-year (QALY) gained for patients in Brazil. The authors performed a prospective cohort study involving 143 consecutive patients who underwent open discectomy for lumbar disc herniation (LDH). Patient-reported outcomes were assessed utilizing the SF-6D, which is derived from a 12-month variation of the SF-36. Direct medical costs included medical reimbursement, costs of hospital care, and overall resource consumption. Disability losses were considered indirect costs. A 4-year horizon with 3% discounting was applied to health-utility estimates. Sensitivity analysis was performed by varying utility gain by 20%. The costs were expressed in Reais (R$) and US dollars ($), applying an exchange rate of 2.4:1 (the rate at the time of manuscript preparation). The direct and indirect costs of open lumbar discectomy were estimated at an average of R$3426.72 ($1427.80) and R$2027.67 ($844.86), respectively. The mean total cost of treatment was estimated at R$5454.40 ($2272.66) (SD R$2709.17 [$1128.82]). The SF-6D utility gain was 0.044 (95% CI 0.03197-0.05923, p = 0.017) at 12 months. The 4-year discounted QALY gain was 0.176928. The estimated cost-utility ratio was R$30,828.35 ($12,845.14) per QALY gained. The sensitivity analysis showed a range of R$25,690.29 ($10,714.28) to R$38,535.44 ($16,056.43) per QALY gained. The use of open lumbar discectomy to treat LDH is associated with a significant improvement in patient outcomes as measured by the SF-6D.
Open lumbar discectomy performed in the Brazilian supplementary health care system provides a cost-utility ratio of R$30,828.35 ($12,845.14) per QALY. The value of acceptable cost-effectiveness will vary by country and region.
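The cost-utility arithmetic in this kind of study can be sketched generically. Note the discounting convention matters: the simple end-of-year convention below, applied to the abstract's headline inputs, yields a discounted QALY gain near, but not identical to, the study's reported 0.176928, so treat this as an illustration rather than a reproduction.

```python
def discounted_qalys(annual_utility_gain, years, rate):
    """Sum a constant yearly utility gain, discounting each future year
    (end-of-year convention)."""
    return sum(annual_utility_gain / (1 + rate) ** t
               for t in range(1, years + 1))

total_cost = 5454.40  # R$, mean direct + indirect cost from the abstract
qaly_gain = discounted_qalys(0.044, years=4, rate=0.03)
cost_per_qaly = total_cost / qaly_gain
print(round(qaly_gain, 4), round(cost_per_qaly, 2))
```

Swapping in beginning-of-year discounting, half-cycle correction, or a utility gain that itself varies by year changes the denominator, which is why published figures rarely match a naive recomputation exactly.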
A dynamic programming approach to estimate the capacity value of energy storage
Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul
2013-09-17
Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
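A toy version of the idea can be sketched: propagate a probability distribution over the storage unit's state of charge (SOC), letting a system shortage event force an extra discharge in that period. The hourly loss-of-load probabilities and the fixed dispatch schedule below are invented; the paper's actual dynamic program optimizes the dispatch rather than taking it as given.

```python
import numpy as np

soc_levels = np.arange(0, 5)     # discrete SOC levels, 0..4 energy units
lolp = [0.0, 0.01, 0.05, 0.02]   # hypothetical hourly loss-of-load probabilities
schedule = [+1, 0, -1, 0]        # planned charge(+1)/idle(0)/discharge(-1)

dist = np.zeros(len(soc_levels))
dist[2] = 1.0                    # start at SOC = 2 with certainty

for p, move in zip(lolp, schedule):
    new = np.zeros_like(dist)
    for s, prob in enumerate(dist):
        if prob == 0.0:
            continue
        s_norm = min(max(s + move, 0), 4)  # scheduled dispatch, clipped
        s_out = max(s_norm - 1, 0)         # shortage forces one more unit out
        new[s_norm] += prob * (1 - p)      # no shortage this hour
        new[s_out] += prob * p             # shortage this hour
    dist = new

# dist[s] is the probability of ending the horizon at SOC level s; the mass
# below the scheduled trajectory plays the role of a forced outage rate.
print(dist)
```

The real method replaces the fixed schedule with the dynamic program's optimized dispatch and feeds the resulting distribution into standard reliability calculations.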
Life satisfaction, QALYs, and the monetary value of health.
Huang, Li; Frijters, Paul; Dalziel, Kim; Clarke, Philip
2018-06-18
The monetary value of a quality-adjusted life-year (QALY) is frequently used to assess the benefits of health interventions and inform funding decisions. However, there is little consensus on methods for the estimation of this monetary value. In this study, we use life satisfaction as an indicator of 'experienced utility', and estimate the dollar equivalent value of a QALY using a fixed effect model with instrumental variable estimators. Using a nationally representative longitudinal survey of 28,347 individuals followed during 2002-2015 in Australia, we estimate that individuals' willingness to pay for one QALY is approximately A$42,000-A$67,000, and the willingness to pay for not having a long-term condition approximately A$2000 per year. As the estimates are derived using population-level data and a wellbeing measure of life satisfaction, the approach has the advantage of being socially inclusive and recognizes the significant meaning of people's subjective valuations of health. The method could be particularly useful for nations where QALY thresholds are not yet validated or established. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Joore, Manuela; Brunenberg, Danielle; Nelemans, Patricia; Wouters, Emiel; Kuijpers, Petra; Honig, Adriaan; Willems, Danielle; de Leeuw, Peter; Severens, Johan; Boonen, Annelies
2010-01-01
This article investigates whether differences in utility scores based on the EQ-5D and the SF-6D have an impact on the incremental cost-utility ratios in five distinct patient groups. We used five empirical data sets of trial-based cost-utility studies that included patients with different disease conditions and severity (musculoskeletal disease, cardiovascular pulmonary disease, and psychological disorders) to calculate differences in quality-adjusted life-years (QALYs) based on EQ-5D and SF-6D utility scores. We compared incremental QALYs, incremental cost-utility ratios, and the probability that the incremental cost-utility ratio was acceptable within and across the data sets. We observed small differences in incremental QALYs, but large differences in the incremental cost-utility ratios and in the probability that these ratios were acceptable at a given threshold, in the majority of the presented cost-utility analyses. More specifically, in the patient groups with relatively mild health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the EQ-5D to estimate utility, whereas in the patient groups with worse health conditions it was considerably larger when using the SF-6D. Much of the appeal of using QALYs as the measure of effectiveness in economic evaluations lies in their comparability across conditions and interventions. The incomparability of the results of cost-utility analyses that use different instruments to estimate a single index value for health severely undermines this aspect and reduces the credibility of incremental cost-utility ratios for decision-making.
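The mechanics behind such comparisons are simple to sketch: QALYs are the area under a utility-over-time curve, so the same trial can yield different incremental QALYs, and hence different cost-utility ratios, depending on which instrument supplies the utilities. All numbers below are invented for illustration.

```python
def qalys(times_years, utilities):
    """Trapezoidal area under the utility curve (QALYs)."""
    area = 0.0
    for (t0, u0), (t1, u1) in zip(zip(times_years, utilities),
                                  zip(times_years[1:], utilities[1:])):
        area += (u0 + u1) / 2 * (t1 - t0)
    return area

t = [0.0, 0.5, 1.0]  # baseline, 6 and 12 months (in years)
# hypothetical utilities from two instruments for treatment vs control
eq5d_tx, eq5d_ctrl = [0.60, 0.75, 0.80], [0.60, 0.65, 0.65]
sf6d_tx, sf6d_ctrl = [0.65, 0.72, 0.74], [0.65, 0.67, 0.68]

inc_eq5d = qalys(t, eq5d_tx) - qalys(t, eq5d_ctrl)
inc_sf6d = qalys(t, sf6d_tx) - qalys(t, sf6d_ctrl)
cost_diff = 1500.0   # same incremental cost in both analyses
print(inc_eq5d, inc_sf6d)                              # different QALY gains
print(cost_diff / inc_eq5d, cost_diff / inc_sf6d)      # different ICERs
```

With an identical cost difference, the instrument choice alone moves the incremental cost-utility ratio, which is the phenomenon the article documents empirically.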
Ghosh-Jerath, Suparna; Singh, Archna; Kamboj, Preeti; Goldberg, Gail; Magsumbol, Melina S.
2015-01-01
Traditional knowledge and nutritional value of indigenous foods of the Oraon tribal community in Jharkhand, India was explored. Focus group discussions were conducted with adult members to identify commonly consumed indigenous foods. Taxonomic classification and quantitative estimation of nutritive value were conducted in laboratories or drew on the Indian food composition database. More than 130 varieties of indigenous foods were identified, many of which were rich sources of micronutrients such as calcium, iron, vitamin A, and folic acid. Some were reported to have medicinal properties. Utilization and ease of assimilation of indigenous foods into routine diets can be leveraged to address malnutrition in tribal communities. PMID:25902000
K-ε Turbulence Model Parameter Estimates Using an Approximate Self-similar Jet-in-Crossflow Solution
DeChant, Lawrence; Ray, Jaideep; Lefantzi, Sophia; ...
2017-06-09
The k-ε turbulence model has been described as perhaps “the most widely used complete turbulence model.” This family of heuristic Reynolds Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by demanding the satisfaction of well-established canonical flows such as homogeneous shear flow, log-law behavior, etc. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model that is capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than nominal parameter values. In this paper, we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method utilizes a near field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements. They are also closer to the Bayesian estimates than the nominal parameters. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurement when utilizing the analytical derived turbulence model coefficients. Finally, the close agreement between the turbulence model coefficients obtained via Bayesian calibration and the analytically estimated coefficients derived in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.
Comparison of molecular breeding values based on within- and across-breed training in beef cattle.
Kachman, Stephen D; Spangler, Matthew L; Bennett, Gary L; Hanford, Kathryn J; Kuehn, Larry A; Snelling, Warren M; Thallman, R Mark; Saatchi, Mahdi; Garrick, Dorian J; Schnabel, Robert D; Taylor, Jeremy F; Pollak, E John
2013-08-16
Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. 
For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set.
Pharmacoeconomics and macular degeneration.
Brown, Gary C; Brown, Melissa M; Brown, Heidi; Godshalk, Ashlee N
2007-05-01
To describe pharmacoeconomics and its relationship to drug interventions. Pharmacoeconomics is the branch of economics which applies cost-minimization, cost-benefit, cost-effectiveness and cost-utility analyses to compare the economics of different pharmaceutical products or to compare drug therapy to other treatments. Among the four instruments, cost-utility analysis is the most sophisticated, relevant and clinically applicable as it measures the value conferred by drugs for the monies expended. Value-based medicine incorporates cost-utility principles but with strict standardization of all input and output parameters to allow the comparability of analyses, unlike the current situation in the healthcare literature. Pharmacoeconomics is assuming an increasingly important role with regard to whether drugs are listed on the drug formulary of a country or province. It has been estimated that the application of standardized, value-based medicine drug analyses can save over 35% from a public healthcare insurer drug formulary while maintaining or improving patient care.
Dan Loeffler; David E. Calkin; Robin P. Silverstein
2006-01-01
Utilizing timber harvest residues (biomass) for renewable energy production provides an alternative disposal method to onsite burning that may improve the economic viability of hazardous fuels treatments. Due to the relatively low value of biomass, accurate estimates of biomass volumes and costs of collection and delivery are essential if investment in renewable energy...
Crump, R Trafford; Lai, Ernest; Liu, Guiping; Janjua, Arif; Sutherland, Jason M
2017-05-01
Chronic rhinosinusitis (CRS) is a common condition for which there are numerous medical and surgical treatments. The 22-item Sino-Nasal Outcome Test (SNOT-22) is a patient-reported outcome measure often used with patients diagnosed with CRS. However, there are no utility values associated with the SNOT-22, limiting its use in comparative effectiveness research. The purpose of this study was to establish utilities for the SNOT-22 by mapping responses to utility values associated with the EuroQol-5-dimensional questionnaire-3-level version (EQ-5D-3L). This study used data collected from patients diagnosed with CRS awaiting bilateral endoscopic sinus surgery in Vancouver, Canada. Study participants completed both the SNOT-22 and the EQ-5D-3L. Ordinary least squares was used for 3 models that estimated the EQ-5D-3L utility values as a function of the SNOT-22 items. A total of 232 participants completed both the SNOT-22 and the EQ-5D-3L. As expected, there was a negative relationship between the SNOT-22 global scores and EQ-5D-3L utility values. Adjusted R^2 for the 3 models ranged from 0.28 to 0.33, and root mean squared errors between 0.23 and 0.24. A nonparametric bootstrap analysis demonstrated robustness of the findings. This study successfully developed a mapping model to associate utility values with responses to the SNOT-22. This model could be used to conduct comparative effectiveness research in CRS to evaluate the various interventions available for treating this condition. © 2017 ARS-AAOA, LLC.
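A minimal sketch of this kind of mapping ("crosswalk") exercise, using simulated data rather than the study's SNOT-22/EQ-5D-3L records; the fitted coefficients are therefore not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 232
snot_global = rng.uniform(0, 110, n)  # hypothetical SNOT-22 global scores

# Simulate a negative score-utility relationship plus noise
utility = 0.90 - 0.004 * snot_global + rng.normal(0, 0.05, n)

# Ordinary least squares: utility ~ intercept + SNOT-22 global score
X = np.column_stack([np.ones(n), snot_global])
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)

pred = X @ beta
ss_res = np.sum((utility - pred) ** 2)
ss_tot = np.sum((utility - utility.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(ss_res / n)
print(beta, r2, rmse)  # the score coefficient should come out negative
```

The study fits three such models on the SNOT-22 items rather than the global score alone; the item-level version just widens the design matrix X.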
Estimating Evapotranspiration Of Orange Orchards Using Surface Renewal And Remote Sensing Techniques
NASA Astrophysics Data System (ADS)
Consoli, S.; Russo, A.; Snyder, R.
2006-08-01
Surface renewal (SR) analysis was utilized to calculate sensible heat flux density (H) from high-frequency temperature measurements above orange orchard canopies during 2005 in eastern Sicily (Italy). The H values were employed to estimate latent heat flux density (LE) using measured net radiation (Rn) and soil heat flux density (G) in the energy balance (EB) equation. Crop coefficients were determined by calculating the ratio Kc = ETa/ETo, with reference ETo derived from the daily Penman-Monteith equation. The estimated daily Kc values averaged about 0.75 for canopy covers with about 70% ground shading and 80% PAR light interception. Remote sensing estimates of Kc and ET fluxes were compared with those measured by SR-EB. IKONOS satellite estimates of Kc and NDVI were linearly correlated for the orchard stands.
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Mcmaster, K. M.
1981-01-01
The methodology presented is a derivation of the utility-owned solar electric systems model. The net present value of the system is determined by consideration of all financial benefits and costs, including a specified return on investment. Life cycle costs, life cycle revenues, and residual system values are obtained. Break-even values of system parameters are estimated by setting the net present value to zero.
Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques
Shyu, Conrad; Ytreberg, F. Marty
2010-01-01
This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy of ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
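The core idea, fitting a polynomial to dU/dλ samples taken at non-equidistant λ values and integrating the fit analytically, can be sketched on a synthetic curve with a known answer (this is a generic illustration, not the report's molecular data).

```python
import numpy as np

# Synthetic "true" slope: dF/dλ = 3λ² − 2λ, so ΔF = ∫₀¹ dF/dλ dλ = 1 − 1 = 0
lam = np.array([0.0, 0.1, 0.3, 0.6, 0.85, 1.0])  # non-equidistant λ values
dfdl = 3 * lam**2 - 2 * lam                      # sampled slope data

# Fit a degree-2 polynomial to the samples, then integrate it analytically
coeffs = np.polynomial.polynomial.polyfit(lam, dfdl, deg=2)
antideriv = np.polynomial.polynomial.polyint(coeffs)
dF = (np.polynomial.polynomial.polyval(1.0, antideriv)
      - np.polynomial.polynomial.polyval(0.0, antideriv))
print(dF)  # ≈ 0 for this noiseless test case
```

With noisy simulation data, regression (low-degree fit) smooths statistical error, while interpolation (degree matched to the sample count, or splines) reproduces the samples exactly; the report compares both regimes.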
Carreon, Leah Y.; Anderson, Paul A.; McDonough, Christine M.; Djurasovic, Mladen; Glassman, Steven D.
2010-01-01
Study Design Cross-sectional cohort Objective This study aims to provide an algorithm to estimate SF-6D utilities using data from the NDI and numeric rating scales for neck and arm pain. Summary of Background Data Although cost-utility analysis is increasingly used to provide information about the relative value of alternative interventions, health state values or utilities are rarely available from clinical trial data. The Neck Disability Index (NDI) and numeric rating scales for neck and arm pain are widely used disease-specific measures of symptoms, function, and disability in patients with cervical degenerative disorders. The purpose of this study is to provide an algorithm to allow estimation of SF-6D utilities using data from the NDI and numeric rating scales for neck and arm pain. Methods SF-36, NDI, and neck and arm pain rating scale scores were prospectively collected preoperatively and at 12 and 24 months postoperatively in 2080 patients undergoing cervical fusion for degenerative disorders. SF-6D utilities were computed, and Spearman correlation coefficients were calculated for paired observations from multiple time points between NDI, neck and arm pain scores, and SF-6D utility scores. SF-6D scores were estimated from the NDI, neck, and arm pain scores using a linear regression model. Using a separate, independent dataset of 396 patients in which NDI scores were available, the SF-6D was estimated for each subject and compared with their actual SF-6D. Results The mean age for those in the development sample was 50.4 ± 11.0 years and 33% were male. In the validation sample the mean age was 53.1 ± 9.9 years and 35% were male. Correlations between the SF-6D and the NDI, neck, and arm pain scores were statistically significant (p<0.0001), with correlation coefficients of 0.82, 0.62, and 0.50, respectively. The regression equation using NDI alone to predict SF-6D had an R^2 of 0.66 and a root mean square error (RMSE) of 0.056.
In the validation analysis, there was no statistically significant difference (p=0.961) between actual mean SF-6D (0.49 ± 0.08) and the estimated mean SF-6D score (0.49 ± 0.08) using the NDI regression model. Conclusion This regression-based algorithm may be a useful tool to predict SF-6D scores in studies of cervical degenerative disease that have collected NDI but not utility scores. PMID:20847713
Depth inpainting by tensor voting.
Kulkarni, Mandar; Rajagopalan, Ambasamudram N
2013-06-01
Depth maps captured by range scanning devices or by optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized to synthesize missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using the plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of candidate depth estimates using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.
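As a simplified stand-in for the local-plane step (the paper fits planes via 3D tensor voting; ordinary least squares is used here instead), missing depths in a small patch can be filled from a fitted plane z = a·x + b·y + c. The depth values are synthetic.

```python
import numpy as np

# Synthetic 4x4 depth patch with a two-pixel hole (NaN = missing)
depth = np.array([[10., 11., 12., 13.],
                  [11., 12., 13., 14.],
                  [12., np.nan, np.nan, 15.],
                  [13., 14., 15., 16.]])

ys, xs = np.mgrid[0:4, 0:4]
valid = ~np.isnan(depth)

# Least-squares plane fit z = a*x + b*y + c over the valid pixels
A = np.column_stack([xs[valid], ys[valid], np.ones(valid.sum())])
(a, b, c), *_ = np.linalg.lstsq(A, depth[valid], rcond=None)

# Fill the hole from the plane equation
hole = np.isnan(depth)
depth[hole] = a * xs[hole] + b * ys[hole] + c
print(depth[2, 1], depth[2, 2])  # plane-consistent fill values
```

Tensor voting additionally weights each neighbor's vote by proximity and surface-orientation agreement, which makes the fill robust near depth discontinuities where a single global plane would smear the edge.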
Trogdon, Justin G.; Ekwueme, Donatus U.; Chamiec-Case, Linda; Guy, Gery P.
2018-01-01
Introduction Little is known about the effect of breast cancers on health-related quality of life among women diagnosed between age 18 and 44 years. The goal of this study is to estimate the effect of breast cancer on health state utility by age at diagnosis (18–44 years versus ≥45 years) and by race/ethnicity. Methods The analytic sample, drawn from the 2009 and 2010 Behavioral Risk Factor Surveillance System and analyzed in 2013, included women diagnosed with breast cancer between age 18 and 44 years (n=1,389) and age ≥45 years (n=6,037). Health state utility values were estimated using Healthy Days variables and a published algorithm. Regression analysis was conducted separately by age at diagnosis and race/ethnicity. Results The breast cancer health state utility decrement within 1 year from date of diagnosis was larger for women diagnosed at age 18–44 years than for women diagnosed at age ≥45 years (−0.116 vs −0.070, p<0.05). Within the younger age-at-diagnosis group, Hispanic women 2–4 years after diagnosis had the largest health state utility decrement (−0.221, p<0.01), followed by non-Hispanic white women within 1 year of diagnosis (−0.126, p<0.01). Conclusions This study is the first to report estimates of health state utility values for breast cancer by age at diagnosis and race/ethnicity from a nationwide sample. The results highlight the need for separate quality of life adjustments for women by age at diagnosis and race/ethnicity when conducting cost-effectiveness analysis of breast cancer prevention, detection, and treatment. PMID:26775905
NASA Astrophysics Data System (ADS)
Weng, Weifeng
This thesis presents papers on three areas of study within resource and environmental economics. "Demand Systems For Energy Forecasting" provides some practical considerations for estimating a Generalized Logit model. The main reason for using this demand system for energy and other factors is that the derived price elasticities are robust when expenditure shares are small. The primary objective of the paper is to determine the best form of the cross-price weights, and a simple inverse function of the expenditure share is selected. A second objective is to demonstrate that the estimated elasticities are sensitive to the units specified for the prices, and to show how price scales can be estimated as part of the model. "To Borrow or Not to Borrow: A Variation on the MacDougal-Kemp Theme" studies the impact of international capital movements on the conditional convergence of economies differing from each other only in initial wealth. We find that convergence in assets, income, consumption, and utility obtains if, and only if, international capital movement is absent. When a rich country invests in a poor country, the balance of debt increases forever. Asset ownership is increased in all periods for the lender, and asset ownership of the borrower is decreased. Also, capital investment decreases the lender's utility in early periods but increases it forever after a crossover point; in contrast, the borrower's utility increases in early periods but then decreases forever. "Valuing Reduced Risk for Households with Children or the Retired" presents a theoretical model of how families value risk and then examines family automobile purchases to impute the average Value of a Statistical Life (VSL) for each type of family. Data for fatal accidents are used to estimate survival rates for individuals in different types of accidents, and the probabilities of having accidents for different types of vehicle. 
These models are used to determine standardized risks for vehicles in hedonic models of the purchase price and fuel efficiency. The hedonic models determine the marginal capital and operating costs of reducing the risk of mortality. We find that the imputed VSL for households with children is much higher than the $2 million average, while that for households with seniors is below average.
An Updated TRMM Composite Climatology of Tropical Rainfall and Its Validation
NASA Technical Reports Server (NTRS)
Wang, Jian-Jian; Adler, Robert F.; Huffman, George; Bolvin, David
2013-01-01
An updated 15-yr Tropical Rainfall Measuring Mission (TRMM) composite climatology (TCC) is presented and evaluated. This climatology is based on a combination of individual rainfall estimates made with data from the primary TRMM instruments: the TRMM Microwave Imager (TMI) and the precipitation radar (PR). This combination climatology of passive microwave retrievals, radar-based retrievals, and an algorithm using both instruments simultaneously provides a consensus TRMM-based estimate of mean precipitation. The dispersion of the three estimates, as indicated by the standard deviation sigma among the estimates, is presented as a measure of confidence in the final estimate and as an estimate of the uncertainty thereof. The procedures utilized by the compositing technique, including adjustments and quality-control measures, are described. The results give a mean value of the TCC of 4.3 mm day^-1 for the deep tropical ocean belt between 10 deg N and 10 deg S, with lower values outside that band. In general, the TCC values confirm ocean estimates from the Global Precipitation Climatology Project (GPCP) analysis, which is based on passive microwave results adjusted for sampling by infrared-based estimates. The pattern of uncertainty estimates shown by sigma is seen to be useful to indicate variations in confidence. Examples include differences between the eastern and western portions of the Pacific Ocean and high values in coastal and mountainous areas. Comparison of the TCC values (and the input products) to gauge analyses over land indicates the value of the radar-based estimates (small biases) and the limitations of the passive microwave algorithm (relatively large biases). Comparison with surface gauge information from western Pacific Ocean atolls shows a negative bias (16%) for all the TRMM products, although the representativeness of the atoll gauges of open-ocean rainfall is still in question.
Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, Michael; Schellenberg, Josh; Blundell, Marshall
2015-01-01
This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have to provide less information about customer characteristics, and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.
From clinically relevant outcome measures to quality of life in epilepsy: A time trade-off study.
de Kinderen, Reina J A; Wijnen, Ben F M; van Breukelen, Gerard; Postulart, Debby; Majoie, Marian H J M; Aldenkamp, Albert P; Evers, Silvia M A A
2016-09-01
A proposed method for bridging the gap between clinically relevant epilepsy outcome measures and quality-adjusted life years is to derive utility scores for epilepsy health states. The aim of this study is to develop such a utility-function and to investigate the impact of the epilepsy outcome measures on utility. Health states, based on clinically important epilepsy attributes (e.g. seizure frequency, seizure severity, side-effects), were valued by a sample of the Dutch population (N=525) based on the time trade-off method. In addition to standard demographics, every participant was asked to rate 10 or 11 different health state scenarios. A multilevel regression analysis was performed to account for the nested structure of the data. Results show that the best health state (no seizures and no side-effects) is estimated at 0.89 and the worst state (seizures type 5 twice a day plus severe side-effects) at 0.22 (scale: 0-1). An increase in seizure frequency, occurrence of side-effects, and seizure severity were all significantly associated with lower utility values. Furthermore, seizure severity has the largest impact on quality of life compared with seizure frequency and side-effects. This study provides a utility-function for transforming clinically relevant epilepsy outcome measures into utility estimates. We advise using our utility-function in economic evaluations, when quality of life is not directly measured in a study and hence, no health state utilities are available, or when there is convincing empirical evidence of the insensitivity of a generic quality-of-life-instrument within epilepsy. Copyright © 2016 Elsevier B.V. All rights reserved.
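The time trade-off valuation underlying these health-state scores can be illustrated in a few lines. This is an illustrative sketch, not code from the study; the indifference times below are hypothetical.

```python
def tto_utility(time_in_full_health, time_in_health_state):
    """Time trade-off: utility = x / t, where a respondent is indifferent
    between x years in full health and t years in the health state."""
    if time_in_health_state <= 0:
        raise ValueError("time in health state must be positive")
    return time_in_full_health / time_in_health_state

# Hypothetical indifference point: 8.9 years healthy vs. 10 years in the
# best epilepsy state (no seizures, no side-effects)
print(tto_utility(8.9, 10))  # 0.89, on the same scale as the study's estimates
```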
The Value Estimation of an HFGW Frequency Time Standard for Telecommunications Network Optimization
NASA Astrophysics Data System (ADS)
Harper, Colby; Stephenson, Gary
2007-01-01
The emerging technology of gravitational wave control is used to augment a communication system using a development roadmap suggested in Stephenson (2003) for applications emphasized in Baker (2005). In the present paper, consideration is given to the value of a High Frequency Gravitational Wave (HFGW) channel purely as a method of frequency and time reference distribution for use within conventional Radio Frequency (RF) telecommunications networks. Specifically, the native value of conventional telecommunications networks may be optimized by using an unperturbed frequency time standard (FTS) to (1) improve terminal navigation and Doppler estimation performance via improved time difference of arrival (TDOA) from a universal time reference, and (2) improve acquisition speed, coding efficiency, and dynamic bandwidth efficiency through the use of a universal frequency reference. A model utilizing a discounted cash flow technique provides an estimate of the additional value that HFGW FTS technology could bring to a mixed-technology HFGW/RF network. By applying a simple net present value analysis with supporting reference valuations to such a network, it is demonstrated that an HFGW FTS could create a sizable improvement within an otherwise conventional RF telecommunications network. Our conservative model establishes a low-side value estimate of approximately 50B USD net present value for an HFGW FTS service, with reasonable potential high-side values at significant multiples of this low-side floor.
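The discounted-cash-flow logic behind such a net present value estimate can be sketched minimally; the discount rate and cash flows below are hypothetical illustrations, not figures from the paper.

```python
def npv(rate, cash_flows):
    """Net present value of annual cash flows: cash_flows[0] occurs now,
    cash_flows[t] at the end of year t, discounted at `rate` per year."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical flows (USD billions): -10 build-out now, then 20/yr for 4 years
print(round(npv(0.10, [-10, 20, 20, 20, 20]), 2))  # 53.4
```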
Mapping between 6 Multiattribute Utility Instruments.
Chen, Gang; Khan, Munir A; Iezzi, Angelo; Ratcliffe, Julie; Richardson, Jeff
2016-02-01
Cost-utility analyses commonly employ a multiattribute utility (MAU) instrument to estimate the health state utilities, which are needed to calculate quality-adjusted life years. Different MAU instruments predict significantly different utilities, which makes comparison of results from different evaluation studies problematical. This article presents mapping functions ("crosswalks") from 6 MAU instruments (EQ-5D-5L, SF-6D, Health Utilities Index 3 [HUI 3], 15D, Quality of Well-Being [QWB], and Assessment of Quality of Life 8D [AQoL-8D]) to each of the other 5 instruments in the study: a total of 30 mapping functions. Data were obtained from a multi-instrument comparison survey of the public and patients in 7 disease areas conducted in 6 countries (Australia, Canada, Germany, Norway, United Kingdom, and United States). The 8022 respondents were administered each of the 6 study instruments. Mapping equations between each instrument pair were estimated using 4 econometric techniques: ordinary least squares, generalized linear model, censored least absolute deviations, and, for the first time, a robust MM-estimator. Goodness-of-fit indicators for each of the results are within the range of published studies. Transformations reduced discrepancies between predicted utilities. Incremental utilities, which determine the value of quality-related health benefits, are almost perfectly aligned at the sample means. Transformations presented here align the measurement scales of MAU instruments. Their use will increase confidence in the comparability of evaluation studies, which have employed different MAU instruments. © The Author(s) 2015.
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict products' reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, and the latter is the more popular. However, limitations remain, such as an imprecise solution process and inaccurate estimation of the degradation rate, which may affect the accuracy of the acceleration model and the extrapolated value. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with updating and iteration of the estimated values; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
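The Wiener-process degradation model referenced above can be sketched with simulated data and the closed-form drift and diffusion estimators. The parameter values and the simple grid-sampling scheme are assumptions for illustration, not the paper's Bayesian procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a Wiener degradation path Y(t) = mu*t + sigma*B(t)
mu_true, sigma_true = 2.0, 0.5      # drift (degradation rate) and diffusion
dt, n_steps = 0.1, 1000
increments = mu_true * dt + sigma_true * np.sqrt(dt) * rng.standard_normal(n_steps)
y = np.concatenate([[0.0], np.cumsum(increments)])

# Closed-form estimators for a Wiener process observed on a regular grid
T = dt * n_steps
mu_hat = y[-1] / T                              # drift: total change / total time
sigma_hat = np.sqrt(np.var(np.diff(y)) / dt)   # diffusion: increment variance / dt

# Mean first-passage time to a failure threshold D is D / mu
D = 50.0
print(mu_hat, sigma_hat, D / mu_hat)
```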
The valuation of the EQ-5D in Portugal.
Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Oppe, Mark
2014-03-01
The EQ-5D is a preference-based measure widely used in cost-utility analysis (CUA). Several countries have conducted surveys to derive value sets, but this was not the case for Portugal. The purpose of this study was to estimate a value set for the EQ-5D for Portugal using the time trade-off (TTO). A representative sample of the Portuguese general population (n = 450) stratified by age and gender valued 24 health states. Face-to-face interviews were conducted by trained interviewers. Each respondent ranked and valued seven health states using the TTO. Several models were estimated at both the individual and aggregated levels to predict health state valuations. Alternative functional forms were considered to account for the skewed distribution of these valuations. The models were analyzed in terms of their coefficients, overall fit and the ability for predicting the TTO values. Random effects models were estimated using generalized least squares and were robust across model specification. The results are generally consistent with other value sets. This research provides the Portuguese EQ-5D value set based on the preferences of the Portuguese general population as measured by the TTO. This value set is recommended for use in CUA conducted in Portugal.
Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E
2017-05-01
Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predictions in the EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent from improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
Weather adjustment using seemingly unrelated regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noll, T.A.
1995-05-01
Seemingly unrelated regression (SUR) is a system estimation technique that accounts for time-contemporaneous correlation between individual equations within a system of equations. SUR is suited to weather adjustment estimations when the estimation is: (1) composed of a system of equations and (2) the system of equations represents either different weather stations, different sales sectors or a combination of different weather stations and different sales sectors. SUR utilizes the cross-equation error values to develop more accurate estimates of the system coefficients than are obtained using ordinary least-squares (OLS) estimation. SUR estimates can be generated using a variety of statistical software packages, including MicroTSP and SAS.
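The two-step feasible-GLS logic of SUR can be sketched directly: fit each equation by OLS, estimate the cross-equation error covariance from the residuals, then re-estimate the stacked system by GLS. The two-equation setup and all numbers below are synthetic illustrations, not a MicroTSP or SAS workflow.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two "sales sector" equations with contemporaneously correlated errors
x1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # eq. 1 regressors
x2 = np.column_stack([np.ones(n), rng.normal(size=n)])   # eq. 2 regressors
b1_true, b2_true = np.array([1.0, 2.0]), np.array([-1.0, 0.5])
cov = np.array([[1.0, 0.8], [0.8, 1.0]])                 # rho = 0.8 across equations
e = rng.multivariate_normal([0, 0], cov, size=n)
y1 = x1 @ b1_true + e[:, 0]
y2 = x2 @ b2_true + e[:, 1]

# Step 1: equation-by-equation OLS to obtain residuals
def ols(x, y):
    return np.linalg.lstsq(x, y, rcond=None)[0]

r1 = y1 - x1 @ ols(x1, y1)
r2 = y2 - x2 @ ols(x2, y2)
sigma = np.cov(np.column_stack([r1, r2]).T)   # 2x2 cross-equation covariance

# Step 2: feasible GLS on the stacked system, Var(errors) = Sigma (x) I_n
X = np.block([[x1, np.zeros_like(x2)], [np.zeros_like(x1), x2]])
y = np.concatenate([y1, y2])
omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
beta = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
print(beta)   # [b1_intercept, b1_slope, b2_intercept, b2_slope]
```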
Hatswell, Anthony J; Thompson, Gwilym J; Maroudas, Penny A; Sofrygin, Oleg; Delea, Thomas E
2017-01-01
Ofatumumab (Arzerra®, Novartis) is a treatment for chronic lymphocytic leukemia refractory to fludarabine and alemtuzumab [double refractory (DR-CLL)]. Ofatumumab was licensed on the basis of an uncontrolled Phase II study, Hx-CD20-406, in which patients receiving ofatumumab survived for a median of 13.9 months. However, the lack of an internal control arm presents an obstacle for the estimation of comparative effectiveness. The objective of the study was to present a method to estimate the cost effectiveness of ofatumumab in the treatment of DR-CLL. As no suitable historical control was available for modelling, the outcomes of non-responders to ofatumumab were used to model the effect of best supportive care (BSC). This was done via a Cox regression to control for differences in baseline characteristics between groups. This analysis was included in a partitioned survival model built in Microsoft® Excel with utilities and costs taken from published sources; costs and quality-adjusted life years (QALYs) were discounted at a rate of 3.5% per annum. Using the outcomes seen in non-responders, ofatumumab is expected to add approximately 0.62 life years (1.50 vs. 0.88). Using published utility values this translates to an additional 0.30 QALYs (0.77 vs. 0.47). At the list price, ofatumumab had a cost per QALY of £130,563 and a cost per life year of £63,542. The model was sensitive to changes in assumptions regarding overall survival estimates and utility values. This study demonstrates the potential of using data for non-responders to model outcomes for BSC in cost-effectiveness evaluations based on single-arm trials. Further research is needed on the estimation of comparative effectiveness using uncontrolled clinical studies.
A method for estimating fall adult sex ratios from production and survival data
Wight, H.M.; Heath, R.G.; Geis, A.D.
1965-01-01
This paper presents a method of utilizing data relating to the production and survival of a bird population to estimate a basic fall adult sex ratio. This basic adult sex ratio is an average value derived from average production and survival rates. It is an estimate of the average sex ratio about which the fall adult ratios will fluctuate according to annual variations in production and survival. The basic fall adult sex ratio has been calculated as an asymptotic value which is the limit of an infinite series wherein average population characteristics are used as constants. Graphs are provided that allow the determination of basic sex ratios from production and survival data of a population. Where the respective asymptote has been determined, it may be possible to estimate various production and survival rates by use of variations of the formula for estimating the asymptote.
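The asymptotic series described above can be illustrated numerically. Under the simplifying assumption of constant recruitment r and annual survival s, fall adults follow N_n = s·N_{n-1} + r, whose limit is r / (1 - s); the survival and production rates below are hypothetical, not values from the paper.

```python
def asymptotic_adults(recruits_per_year, annual_survival, years=200):
    """Fall adults as the limit of the series N_n = s*N_{n-1} + r,
    which converges to r / (1 - s) when 0 <= s < 1."""
    n = 0.0
    for _ in range(years):
        n = annual_survival * n + recruits_per_year
    return n

# Hypothetical rates: males survive better but are produced less often
males = asymptotic_adults(recruits_per_year=45.0, annual_survival=0.6)
females = asymptotic_adults(recruits_per_year=55.0, annual_survival=0.4)
print(males / females)   # basic fall adult sex ratio (males per female)
```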
Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data
NASA Astrophysics Data System (ADS)
Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad
2018-01-01
The most productive scale size (MPSS) is a measurement that states how resources should be organized and utilized to achieve optimal results. MPSS can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate MPSS, each decision making unit (DMU) should pay attention to its level of input-output efficiency. With the data envelopment analysis (DEA) method, a DMU can identify the units used as references, which helps to find the causes of, and solutions to, inefficiencies and to optimize productivity; this is the main advantage in managerial applications. Therefore, DEA is chosen for estimating MPSS, focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating MPSS with integer-valued input data using the DEA method.
Prioritizing investments in health technology assessment. Can we assess potential value for money?
Davies, L; Drummond, M; Papanikolaou, P
2000-01-01
The objective was to develop an economic prioritization model to assist those involved in the selection and prioritization of health technology assessment topics and commissioning of HTA projects. The model used decision analytic techniques to estimate the expected costs and benefits of the health care interventions that were the focus of the HTA question(s) considered by the NHS Health Technology Assessment Programme in England. Initial estimation of the value for money of HTA was conducted for several topics considered in 1997 and 1998. The results indicate that, using information routinely available in the literature and from the vignettes, it was not possible to estimate the absolute value of HTA with any certainty for this stage of the prioritization process. Overall, the results were uncertain for 65% of the HTA questions or topics analyzed. The relative costs of the interventions or technologies compared to existing costs of care and likely levels of utilization were critical factors in most of the analyses. The probability that the technology was effective with the HTA and the impact of the HTA on utilization rates were also key determinants of expected costs and benefits. The main conclusion was that it is feasible to conduct ex ante assessments of the value for money of HTA for specific topics. However, substantial work is required to ensure that the methods used are valid, reliable, consistent, and an efficient use of valuable research time.
Estimating QALY gains in applied studies: a review of cost-utility analyses published in 2010.
Wisløff, Torbjørn; Hagen, Gunhild; Hamidi, Vida; Movik, Espen; Klemp, Marianne; Olsen, Jan Abel
2014-04-01
Reimbursement agencies in several countries now require health outcomes to be measured in terms of quality-adjusted life-years (QALYs), leading to an immense increase in publications reporting QALY gains. However, there is a growing concern that the various 'multi-attribute utility' (MAU) instruments designed to measure the Q in the QALY yield disparate values, implying that results from different instruments are incommensurable. By reviewing cost-utility analyses published in 2010, we aim to contribute to improved knowledge on how QALYs are currently calculated in applied analyses; how transparently QALY measurement is presented; and how large the expected incremental QALY gains are. We searched Embase, MEDLINE and NHS EED for all cost-utility analyses published in 2010. All analyses that had estimated QALYs gained from health interventions were included. Of the 370 studies included in this review, 48% were pharmacoeconomic evaluations. Active comparators were used in 71% of studies. The median incremental QALY gain was 0.06, which translates to 3 weeks in best imaginable health. The EQ-5D-3L is the dominant instrument used. However, reporting of how QALY gains are estimated is generally inadequate. In 55% of the studies there was no reference to which MAU instrument or direct valuation method QALY data came from. The methods used for estimating expected QALY gains are not transparently reported in published papers. Given the wide variation in utility scores that different methodologies may assign to an identical health state, it is important for journal editors to require a more transparent way of reporting the estimation of incremental QALY gains.
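The conversion quoted above, a median gain of 0.06 QALYs to roughly three weeks of best imaginable health, is simple arithmetic:

```python
# 0.06 quality-adjusted life-years, expressed in weeks of full health
weeks = 0.06 * 52
print(round(weeks, 1))  # 3.1
```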
NASA Astrophysics Data System (ADS)
Yanti, Apriwida; Susilo, Bowo; Wicaksono, Pramaditya
2016-11-01
Gajahmungkur reservoir is administratively located in Wonogiri Regency, Central Java, and its main function is flood control in the upstream reaches of the Bengawan Solo River. Other functions of the reservoir are hydroelectric power generation (PLTA), water supply, irrigation, fisheries and tourism. The economic utilization of the reservoir is projected to last 100 years, but this is beginning to be threatened by silting. Eroded material entering the water body is suspended and accumulates. Suspended material, or total suspended solids (TSS), increases the turbidity of the water, which can affect water quality and silt up the reservoir. Remote sensing technology can be used to determine the spatial distribution of TSS. The purposes of this study were to 1) utilize and compare the accuracy of single Landsat 8 OLI bands for mapping the spatial distribution of TSS and 2) estimate the TSS in Gajahmungkur reservoir surface waters down to a depth of 30 cm. The method used for modelling the TSS spatial distribution is empirical modelling, which integrates image pixel values and field data using correlation and regression analysis. The data used in the empirical modelling are single visible, NIR, and SWIR bands of Landsat 8 OLI imagery acquired on 8 May 2016, and field-measured TSS values collected on 12 April 2016. The results revealed that the distribution and estimated value of TSS in Gajahmungkur reservoir can be mapped most accurately using band 4 (the red band). The determination coefficient between field TSS and image-derived TSS using band 4 is 0.5431, and the standard error (SE) of the predicted TSS value is 16.16 mg/L. The results also showed that the estimated total TSS for May 2016 according to band 4 is 1,087.56 tons. The average estimated TSS value down to the depth of 30 cm is 61.61 mg/L. The highest TSS concentrations are in the northern parts, dominated by eroded material from the Keduang River.
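The empirical single-band model described above is, in essence, an ordinary least-squares fit of field TSS on pixel values, summarized by a determination coefficient and a standard error of estimate. The reflectance-TSS pairs below are hypothetical, not the study's field data.

```python
import numpy as np

# Hypothetical pairs of band-4 (red) reflectance and field-measured TSS (mg/L)
reflectance = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.21, 0.25])
tss = np.array([30.0, 42.0, 51.0, 55.0, 80.0, 95.0, 118.0])

# Empirical model: least-squares fit of TSS on pixel value
slope, intercept = np.polyfit(reflectance, tss, 1)
predicted = slope * reflectance + intercept

# Goodness of fit: determination coefficient and standard error of estimate
ss_res = np.sum((tss - predicted) ** 2)
ss_tot = np.sum((tss - tss.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
se = np.sqrt(ss_res / (len(tss) - 2))
print(r_squared, se)
```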
Zhou, Zai Ming; Yang, Yan Ming; Chen, Ben Qing
2016-12-01
The effective management and utilization of the resources and ecological environment of coastal wetlands requires high-precision investigation and analysis of the fractional vegetation cover of the invasive species Spartina alterniflora. In this study, Sansha Bay was selected as the experimental region, and visible and multi-spectral images obtained by low-altitude UAV over the region were used to monitor the fractional vegetation cover of S. alterniflora. Fractional vegetation cover parameters in the multi-spectral images were then estimated with an NDVI index model, and the accuracy was tested against the visible images as references. Results showed that vegetation cover of S. alterniflora in the image area was mainly at medium-high (40%-60%) and high (60%-80%) levels. The root mean square error (RMSE) between the NDVI model estimates and the true values was 0.06, while the determination coefficient R² was 0.92, indicating good consistency between the estimated and true values.
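The NDVI index and the accuracy measures used here (RMSE, R²) are standard and can be sketched directly; the reflectance and cover values below are hypothetical, not Sansha Bay measurements.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def rmse(estimated, observed):
    """Root mean square error between estimated and reference values."""
    return np.sqrt(np.mean((np.asarray(estimated) - np.asarray(observed)) ** 2))

# Hypothetical fractional-cover check against visible-image reference values
estimated_cover = [0.45, 0.62, 0.71, 0.55]
reference_cover = [0.50, 0.60, 0.75, 0.52]
print(ndvi(0.60, 0.10), rmse(estimated_cover, reference_cover))
```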
Moham P. Tiruveedhula; Joseph Fan; Ravi R. Sadasivuni; Surya S. Durbha; David L. Evans
2010-01-01
The accumulation of small diameter trees (SDTs) is becoming a nationwide concern. Forest management practices such as fire suppression and selective cutting of high grade timber have contributed to an overabundance of SDTs in many areas. Alternative value-added utilization of SDTs (for composite wood products and biofuels) has prompted the need to estimate their...
Crowley, Max; Jones, Damon
2017-12-01
Restricted public budgets and increasing efforts to link the impact of community interventions to public savings have increased the use of economic evaluation. While this type of evaluation can be important for program planning, it also raises important ethical issues about how we value the time of local stakeholders who support community interventions. In particular, researchers navigate issues of scientific accuracy, institutional inequality, and research utility in their pursuit of even basic cost estimates. We provide an example of how we confronted these issues when estimating the costs of a large-scale community-based intervention. Principles for valuing community members' time and conducting economic evaluations of community programs are discussed. © Society for Community Research and Action 2017.
Solar thermal technologies - Potential benefits to U.S. utilities and industry
NASA Technical Reports Server (NTRS)
Terasawa, K. L.; Gates, W. R.
1983-01-01
Solar energy systems that complement nuclear and coal technologies were investigated as a means of reducing U.S. dependence on imported petroleum. Solar Thermal Energy Systems (STES) represent an important category of solar energy technologies. STES can be utilized in a broad range of applications servicing a variety of economic sectors, and they can be deployed in both near-term and long-term markets. The net present value of the energy cost savings attributable to electric utility and industrial process heat (IPH) applications of STES was estimated for a variety of future energy cost scenarios and levels of R&D success. This analysis indicated that the expected net benefits of developing an STES option are significantly greater than the expected costs of completing the required R&D. In addition, transportable fuels and chemical feedstocks represent a substantial future potential market for STES. Due to the basic nature of this R&D activity, however, it is currently impossible to estimate the value of STES in these markets. Despite this fact, private investment in STES R&D is not anticipated due to the high level of uncertainty characterizing the expected payoffs. Previously announced in STAR as N83-10547
Comparison of molecular breeding values based on within- and across-breed training in beef cattle
2013-01-01
Background Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Methods Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. Results With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Conclusions Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. 
For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set. PMID:23953034
Beyond Happiness and Satisfaction: Toward Well-Being Indices Based on Stated Preference*
Benjamin, Daniel J.; Kimball, Miles S.; Heffetz, Ori; Szembrot, Nichole
2014-01-01
This paper proposes foundations and a methodology for survey-based tracking of well-being. First, we develop a theory in which utility depends on “fundamental aspects” of well-being, measurable with surveys. Second, drawing from psychologists, philosophers, and economists, we compile a comprehensive list of such aspects. Third, we demonstrate our proposed method for estimating the aspects’ relative marginal utilities—a necessary input for constructing an individual-level well-being index—by asking ~4,600 U.S. survey respondents to state their preference between pairs of aspect bundles. We estimate high relative marginal utilities for aspects related to family, health, security, values, freedom, happiness, and life satisfaction. PMID:25404760
KERNELHR: A program for estimating animal home ranges
Seaman, D.E.; Griffith, B.; Powell, R.A.
1998-01-01
Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
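A fixed-kernel utilization distribution of the kind KERNELHR computes can be sketched with a plain Gaussian kernel density estimate. The simulated relocations and bandwidth below are illustrative only; this is not KERNELHR's adaptive-kernel or bandwidth-selection machinery.

```python
import numpy as np

def kernel_density(points, grid, bandwidth):
    """Fixed-kernel (Gaussian) utilization distribution on a 2-D grid.
    points: (n, 2) animal relocations; grid: (m, 2) evaluation points."""
    n = len(points)
    diff = grid[:, None, :] - points[None, :, :]          # (m, n, 2)
    sq = np.sum(diff ** 2, axis=2) / bandwidth ** 2
    dens = np.exp(-0.5 * sq).sum(axis=1)
    return dens / (n * 2 * np.pi * bandwidth ** 2)

rng = np.random.default_rng(1)
fixes = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))  # simulated fixes

# Density is highest at the activity centre and falls off away from it
centre = kernel_density(fixes, np.array([[0.0, 0.0]]), bandwidth=0.5)[0]
edge = kernel_density(fixes, np.array([[3.0, 3.0]]), bandwidth=0.5)[0]
print(centre, edge)
```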
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, are known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, unknown and changing plant parameters, and the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs.
An approach to estimating unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and unknown lateral force inputs. To estimate the tire-road friction coefficient at each individual tire of the vehicle, algorithms to estimate the longitudinal force and slip ratio at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with the industry-standard software CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle, and with experimental tests on a 1/8th-scale vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
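The recursive least squares idea referenced in the abstract can be sketched in a few lines; the linear low-slip force model, the noise levels, and all numeric values below are illustrative assumptions, not the dissertation's actual formulation:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One recursive least squares step for the scalar model y ~ theta * x,
    with forgetting factor lam so the estimate can track a changing parameter."""
    k = P * x / (lam + x * P * x)        # gain
    theta = theta + k * (y - theta * x)  # parameter update
    P = (P - k * x * P) / lam            # covariance update
    return theta, P

# Synthetic low-slip data: normalized longitudinal force grows roughly
# linearly with slip ratio; the slope carries the friction information.
rng = np.random.default_rng(0)
true_slope = 12.0
theta, P = 0.0, 1e3
for _ in range(500):
    kappa = rng.uniform(0.0, 0.05)                 # slip ratio
    fx = true_slope * kappa + rng.normal(0.0, 0.01)
    theta, P = rls_update(theta, P, kappa, fx)
print(round(theta, 1))
```

The estimator converges to the assumed slope; in the dissertation's setting the estimated slope at each tire would feed the friction-coefficient calculation.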
Leidl, Reiner; Schweikert, Bernd; Hahmann, Harry; Steinacker, Juergen M; Reitmeir, Peter
2016-03-22
Quality of life as an endpoint in a clinical study may be sensitive to the value set used to derive a single score. Focusing on patients' actual valuations in a clinical study, we compare different value sets for the EQ-5D-3L and assess how well they reproduce patients' reported results. A clinical study comparing inpatient (n = 98) and outpatient (n = 47) rehabilitation of patients after an acute coronary event is re-analyzed. Value sets include: 1. Given health states and time trade-off valuation (GHS-TTO), rendering economic utilities; 2. Experienced health states and valuation by visual analog scale (EHS-VAS). Valuations are compared with patient-reported VAS ratings. Accuracy is assessed by mean absolute error (MAE) and by Pearson's correlation ρ. External validity is tested by correlation with established MacNew global scores. Drivers of differences between value sets and VAS are analyzed using repeated measures regression. EHS-VAS had smaller MAEs and higher ρ in all patients and in the inpatient group, and correlated best with the MacNew global score. Quality-adjusted survival was more accurately reflected by EHS-VAS. Younger, better educated patients reported lower VAS at admission than the EHS-based value set. EHS-based estimates were mostly able to reproduce patient-reported valuation. Economic utility measurement is conceptually different, produced results less strongly related to patients' reports, and resulted in about 20% longer quality-adjusted survival. Decision makers should take into account the impact of choosing value sets on effectiveness results. When transferring results for heart rehabilitation patients from another country or from another valuation method, the EHS-based value set offers a promising estimation option for decision makers who prioritize patient-reported valuation. Yet, EHS-based estimates may not fully reflect patient-reported VAS in all situations.
Noninvasive estimation of assist pressure for direct mechanical ventricular actuation
NASA Astrophysics Data System (ADS)
An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing
2018-02-01
Direct mechanical ventricular actuation is effective in reestablishing ventricular function with no blood contact. Due to the energy loss within the driveline of the direct cardiac compression device, it is necessary to acquire an accurate value of the assist pressure acting on the heart surface. To avoid myocardial trauma induced by invasive sensors, a noninvasive estimation method is developed and an experimental device is designed to measure the sample data for fitting the estimation models. Judged by goodness of fit, both numerically and graphically, the polynomial model performs best among the four candidate models. Meanwhile, to verify the effect of the noninvasive estimation, a simplified lumped-parameter model is utilized to calculate the pre-support and post-support left ventricular pressure. Furthermore, when the driving pressure is adjusted beyond the range of the sample data, the estimated assist pressure retains a similar waveform and the post-support left ventricular pressure approaches the value of a healthy adult heart, indicating the good generalization ability of the noninvasive estimation method.
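The model-comparison step described above can be illustrated by fitting polynomials of increasing degree to hypothetical driving-pressure/assist-pressure samples and comparing them by R²; the data-generating curve and all values below are invented for the example:

```python
import numpy as np

# Hypothetical sample data: driving pressure (kPa) vs. measured assist
# pressure at the heart surface (kPa), with an assumed driveline loss.
drive = np.linspace(10.0, 60.0, 12)
assist = 0.8 * drive - 0.004 * drive**2 - 1.5

def fit_r2(x, y, deg):
    """Fit a degree-`deg` polynomial and return (coefficients, R^2)."""
    c = np.polyfit(x, y, deg)
    resid = y - np.polyval(c, x)
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean())**2)
    return c, 1.0 - ss_res / ss_tot

for deg in (1, 2, 3):
    _, r2 = fit_r2(drive, assist, deg)
    print(deg, round(r2, 4))
```

With these assumed samples the quadratic already fits essentially perfectly, mirroring the paper's selection of the best model by numerical and graphical goodness of fit.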
Estimating the Value of Life, Injury, and Travel Time Saved Using a Stated Preference Framework.
Niroomand, Naghmeh; Jenkins, Glenn P
2016-06-01
The incidence of fatalities from automobile accidents in North Cyprus over the period 2010-2014 was 2.75 times the EU average. With the prospect of North Cyprus entering the EU, many investments will need to be undertaken to improve road safety in order to reach EU benchmarks. The objective of this study is to provide local estimates of the value of a statistical life and of injury, along with the value of time savings. These are among the parameter values needed to evaluate the change in the expected incidence of automotive accidents and the time savings brought about by such projects. In this study we conducted a stated choice experiment to identify the preferences and tradeoffs of automobile drivers in North Cyprus for improved travel times, travel costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that drivers choose. These estimates were used to assess individuals' willingness to pay (WTP) to avoid fatalities and injuries and to save travel time. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of an injury (VI) prevented, and the value per hour of travel time saved. The estimates for the VSL range from €315,293 to €1,117,856 and the estimates of VI from €5,603 to €28,186. After adjusting for differences in incomes, these values are consistent with the median results of similar studies for EU countries. Copyright © 2016 Elsevier Ltd. All rights reserved.
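The core willingness-to-pay logic of such choice experiments is the ratio of marginal utilities: WTP for an attribute is its coefficient divided by the cost coefficient. A minimal sketch with made-up coefficients (not the study's estimates):

```python
# Illustrative (made-up) marginal utilities from a route-choice model;
# in a mixed logit these would be the estimated attribute coefficients.
beta_cost = -0.045    # utility per euro of travel cost
beta_time = -0.012    # utility per minute of travel time
beta_risk = -1.8      # utility per unit of accident risk on the route

# WTP for an attribute = marginal rate of substitution against cost.
value_of_time_per_hour = 60.0 * beta_time / beta_cost   # euros per hour saved
wtp_risk = beta_risk / beta_cost                        # euros per unit of risk

print(round(value_of_time_per_hour, 2), round(wtp_risk, 2))
```

Community-wide VSL figures are then obtained by scaling such individual WTP-for-risk estimates by the size of the exposed population.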
A case study in electricity regulation: Theory, evidence, and policy
NASA Astrophysics Data System (ADS)
Luk, Stephen Kai Ming
This research provides a thorough empirical analysis of the problem of excess capacity found in the electricity supply industry in Hong Kong. I utilize a cost-function-based temporary equilibrium framework to investigate empirically whether the current regulatory scheme encourages the two utilities to overinvest in capital, and how much consumers would have saved if the underutilized capacity were eliminated. The research is divided into two main parts. The first part attempts to find evidence of over-investment in capital. As a point of departure from traditional analysis, I treat physical capital as quasi-fixed, which implies a restricted cost function to represent the firm's short-run cost structure. Under this specification, the firm minimizes the cost of employing variable factor inputs subject to predetermined levels of the quasi-fixed factors. Using a transcendental logarithmic restricted cost function, I estimate the cost-side equivalent of the marginal product of capital, commonly referred to as the "shadow value" of capital. The estimation results suggest that the two electric utilities consistently over-invest in generation capacity. The second part of this research focuses on the economies of capital utilization and the estimation of the distortion cost in capital investment. Again, I utilize a translog specification of the cost function to estimate the actual cost of the excess capacity, and to find out how much consumers could have saved if the underutilized generation capacity were brought closer to the international standard. Estimation results indicate that an increase in the utilization rate can significantly reduce the costs of both utilities. If the current excess capacity were reduced to the international standard, the combined savings in costs for both firms would reach 4.4 billion. This amount of savings, if redistributed to all consumers evenly, would translate into a 650 rebate per capita.
Finally, two policy recommendations are discussed: a more stringent policy toward capacity expansion and the creation of a reimbursement program.
Chen, Xiyuan; Wang, Xiying; Xu, Yuan
2014-01-01
This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system subject to model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and, based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has a clear accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124
Kharroubi, Samer A
2017-10-06
Valuations of health state descriptors such as the EQ-5D or SF-6D have been conducted in different countries. There is scope to use the results from one country as informative priors in the analysis of a study in another, enabling better estimation in the new country than analyzing its data separately. Data from 2 EQ-5D valuation studies that used the time trade-off technique were analyzed, in which values for 42 health states were elicited from representative samples of the UK and US populations. A Bayesian non-parametric approach was applied to predict the health utilities of the US population, with the UK results used as informative priors in the model to improve the estimation. The findings showed that employing additional information from the UK data helped produce US utility estimates much more precise than would have been possible using the US study data alone. This method is likely to prove useful in countries where conducting large valuation studies is not feasible.
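The borrowing-of-strength idea can be illustrated with a simple conjugate normal-normal update, a much simpler stand-in for the Bayesian non-parametric model actually used; all numbers below are hypothetical:

```python
import math

def posterior_normal(prior_mu, prior_var, xbar, sigma2, n):
    """Conjugate normal-normal update: prior N(prior_mu, prior_var) combined
    with n observations of sample mean xbar and known variance sigma2."""
    post_prec = 1.0 / prior_var + n / sigma2
    post_var = 1.0 / post_prec
    post_mu = post_var * (prior_mu / prior_var + n * xbar / sigma2)
    return post_mu, post_var

# UK estimate of one state's utility as an informative prior for the US data.
uk_mu, uk_var = 0.52, 0.02**2               # hypothetical UK posterior
us_xbar, us_sigma2, us_n = 0.47, 0.09, 30   # hypothetical US TTO responses
mu, var = posterior_normal(uk_mu, uk_var, us_xbar, us_sigma2, us_n)
print(round(mu, 3), round(math.sqrt(var), 3))
```

The posterior mean sits between the UK prior and the US sample mean, with a smaller variance than either source alone, which is the precision gain the abstract describes.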
Utility of Reference Change Values for Delta Check Limits.
Ko, Dae-Hyun; Park, Hae-Il; Hyun, Jungwon; Kim, Hyun Soo; Park, Min-Jeong; Shin, Dong Hoon
2017-10-01
To assess the utility of reference change values (RCVs) as delta check limits. A total of 1,650,518 paired results for 23 general chemistry tests from June 1, 2014, to October 31, 2016, were analyzed. The RCVs for each analyte were calculated from the analytical imprecision and the within-subject biological variation. The percent differences between two consecutive results in one patient were categorized into one of four groups: outpatient, inpatient, emergency care, and general health care. For each group, the 2.5th and 97.5th percentile values were computed and compared with the corresponding RCVs. The distributions were assessed for normality using the Kolmogorov-Smirnov test. Most of the estimated limits were larger than the corresponding RCVs, with notable differences across the groups. Patients in the emergency care group usually demonstrated larger delta percent values than those in the other groups. None of the distributions of the percent differences passed the Kolmogorov-Smirnov test of normality. Comparison of estimated RCVs and real-world patient data revealed the pitfalls of applying RCVs in clinical laboratories. Laboratory managers should be aware of the limitations of RCVs and exercise caution when using them. © American Society for Clinical Pathology, 2017. All rights reserved.
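The RCV itself follows a standard formula combining analytical imprecision and within-subject biological variation. A minimal sketch with illustrative CVs (not the study's analyte values):

```python
import math

def rcv(cv_analytical, cv_within_subject, z=1.96):
    """Reference change value (%) for two serial results:
    RCV = sqrt(2) * Z * sqrt(CV_A^2 + CV_I^2),
    with Z = 1.96 for 95% bidirectional significance."""
    return math.sqrt(2.0) * z * math.sqrt(cv_analytical**2 + cv_within_subject**2)

# Illustrative percent CVs for a general chemistry analyte.
limit = rcv(2.0, 5.0)
print(round(limit, 1))
```

A delta between consecutive results exceeding this percentage would flag the pair for review; the study's point is that empirical patient-group limits often sit well above such theoretically derived RCVs.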
Chinese time trade-off values for EQ-5D health states.
Liu, Gordon G; Wu, Hongyan; Li, Minghui; Gao, Chen; Luo, Nan
2014-07-01
To generate a Chinese general population-based three-level EuroQol five-dimension (EQ-5D-3L) social value set using the time trade-off method. The study sample was drawn from five cities in China: Beijing, Guangzhou, Shenyang, Chengdu, and Nanjing, using a quota sampling method. Utility values for a subset of 97 health states defined by the EQ-5D-3L descriptive system were directly elicited from the study sample using a modified Measurement and Valuation of Health protocol, with each respondent valuing 13 of the health states. The utility values for all 243 EQ-5D-3L health states were estimated on the basis of econometric models at both the individual and aggregate levels. Various linear regression models using different model specifications were examined to determine the best model using predefined model selection criteria. The N3 model based on ordinary least squares regression at the aggregate level yielded the best model fit, with a mean absolute error of 0.020 and with 7 and 0 states, respectively, for which prediction errors exceeded 0.05 and 0.10 in absolute magnitude. This model passed tests for model misspecification (F = 2.7; P = 0.0509, Ramsey Regression Equation Specification Error Test), heteroskedasticity (χ(2) = 0.97; P = 0.3254, Breusch-Pagan/Cook-Weisberg test), and normality of the residuals (χ(2) = 1.285; P = 0.5259, Jarque-Bera test). The range of the predicted values (-0.149 to 0.887) was similar to those estimated in other countries. The study successfully developed Chinese utility values for EQ-5D-3L health states using the time trade-off method. It is the first attempt to develop a standardized instrument for quantifying quality-adjusted life-years in China. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
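The aggregate-level regression described above can be sketched with ordinary least squares on level dummies plus an N3 term (1 if any dimension is at level 3); the coefficients and simulated responses below are invented, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each state: levels 1-3 on the five EQ-5D dimensions (MO, SC, UA, PD, AD).
states = rng.integers(1, 4, size=(97, 5))

def design_row(state):
    """Dummies for level>=2 and level==3 per dimension, plus the N3 term,
    as in common EQ-5D-3L value-set models."""
    row = []
    for lev in state:
        row += [1.0 if lev >= 2 else 0.0, 1.0 if lev == 3 else 0.0]
    row.append(1.0 if (state == 3).any() else 0.0)  # N3
    return row

X = np.array([design_row(s) for s in states])
true_beta = np.array([.05, .10, .04, .08, .05, .09, .06, .12, .07, .11, .15])
disutility = X @ true_beta + rng.normal(0.0, 0.02, size=97)   # simulated data
beta_hat, *_ = np.linalg.lstsq(X, disutility, rcond=None)
pred = 1.0 - X @ beta_hat          # predicted utility = 1 - total decrement
mae = float(np.mean(np.abs((1.0 - disutility) - pred)))
print(round(mae, 3))
```

Fitting the decrements by least squares and subtracting them from full health reproduces the structure of the N3 model; the study selected among such specifications by MAE and the count of large prediction errors.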
ERIC Educational Resources Information Center
Beach, Steven R. H.; Whisman, Mark A.
2012-01-01
Depression is a heterogeneous disorder with lifetime prevalence of "major depressive disorder" estimated to be 16.2%. Although the disorder is common and impairs functioning, it often goes untreated, with less than adequate response even when treated. We review research indicating the likely value of utilizing currently available, well-validated,…
Financial risk protection from social health insurance.
Barnes, Kayleigh; Mukherji, Arnab; Mullen, Patrick; Sood, Neeraj
2017-09-01
This paper estimates the impact of social health insurance on financial risk by utilizing data from a natural experiment created by the phased roll-out of a social health insurance program for the poor in India. We estimate the distributional impact of insurance on out-of-pocket costs and incorporate these results into a stylized expected utility model to compute the associated welfare effects. We adjust the standard model to account for conditions of developing countries by incorporating consumption floors, informal borrowing, and asset selling, which allows us to separate the value of financial risk reduction from consumption smoothing and asset protection. Results show that insurance reduces out-of-pocket costs, particularly in the higher quantiles of the distribution. We find reductions in the frequency and amount of money borrowed for health reasons. Finally, we find that the value of financial risk reduction outweighs the total per-household costs of the insurance program by two to five times. Copyright © 2017. Published by Elsevier B.V.
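The welfare logic of valuing financial risk reduction can be sketched with a toy expected-utility model that includes a consumption floor; the CRRA form, parameter values, and cost scenarios below are illustrative assumptions, not the paper's calibration:

```python
def u(c):
    """CRRA utility with relative risk aversion 2: u(c) = 1 - 1/c."""
    return 1.0 - 1.0 / c

def cert_equiv(eu):
    """Inverse of u: the sure consumption level with utility eu."""
    return 1.0 / (1.0 - eu)

# Illustrative household: income 100, 10% chance of illness, a consumption
# floor of 20 (informal safety nets), out-of-pocket cost 60 if uninsured
# and 30 if insured. All numbers are made up for the sketch.
income, p_ill, floor = 100.0, 0.1, 20.0

def expected_u(oop):
    c_ill = max(income - oop, floor)
    return p_ill * u(c_ill) + (1.0 - p_ill) * u(income)

# Value of the risk reduction = gain in certainty-equivalent consumption.
value = cert_equiv(expected_u(30.0)) - cert_equiv(expected_u(60.0))
print(round(value, 1))
```

Comparing certainty equivalents with and without insurance isolates the welfare gain from reduced out-of-pocket risk, which the paper then sets against program costs.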
Pak, N; Vera, G; Araya, H
1985-03-01
The purpose of the present study was to evaluate the amino acid score adjusted by digestibility for estimating protein quality and utilizable protein in foods and diets, considering net protein utilization (NPU) as the biological reference method. Ten foods of vegetable origin and ten of animal origin, as well as eight mixtures of foods of vegetable and animal origin, were studied. When all the foods were considered, a positive (r = 0.83) and highly significant (p less than 0.001) correlation between NPU and the digestibility-adjusted amino acid score was found. When the foods were separated according to their origin, this correlation was positive (r = 0.93) and statistically significant (p less than 0.001) only for the foods of vegetable origin. Also, only in those foods were similar values found between NPU and the digestibility-adjusted amino acid score, as well as in utilizable protein estimated by both methods. Caution is required in interpreting protein quality and utilizable protein values of foods of animal origin, and of mixtures of foods of vegetable and animal origin, when the digestibility-adjusted amino acid score method, or NPU, is utilized.
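The digestibility-adjusted amino acid score can be sketched as follows; the amino acid profile, reference pattern, protein content and digestibility below are hypothetical:

```python
def adjusted_score(aa_content, reference, digestibility):
    """Amino acid score adjusted by digestibility: the ratio of the most
    limiting amino acid to the reference pattern, capped at 1, multiplied
    by true protein digestibility (PDCAAS-style)."""
    ratio = min(aa_content[aa] / reference[aa] for aa in reference)
    return min(ratio, 1.0) * digestibility

# Hypothetical cereal-like profile (mg amino acid per g protein) against
# a reference scoring pattern; lysine is limiting here.
content = {"lysine": 26, "threonine": 28, "tryptophan": 11, "met+cys": 36}
reference = {"lysine": 58, "threonine": 34, "tryptophan": 11, "met+cys": 25}

score = adjusted_score(content, reference, digestibility=0.90)
utilizable_protein = 12.0 * score   # g per 100 g of a food with 12 g protein
print(round(score, 2), round(utilizable_protein, 1))
```

Utilizable protein is then the food's protein content scaled by this score, which is the quantity the study compares against NPU-based estimates.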
Wallmo, Kristy; Lew, Daniel K
2011-07-01
Non-market valuation research has produced value estimates for over forty threatened and endangered (T&E) species, including mammals, fish, birds, and crustaceans. Increasingly, Stated Preference Choice Experiments (SPCE) are utilized for valuation, as the format offers flexibility for policy analysis and may reduce certain types of response biases relative to the more traditional Contingent Valuation method. Additionally, SPCE formats can allow respondents to make trade-offs among multiple species, providing information on the distinctiveness of preferences for different T&E species. In this paper we present results of an SPCE involving three U.S. Endangered Species Act (ESA)-listed species: the Puget Sound Chinook salmon, the Hawaiian monk seal, and the smalltooth sawfish. We estimate willingness-to-pay (WTP) values for improving each species' ESA listing status and statistically compare these values between the three species using a method of convolutions approach. Our results suggest that respondents have distinct preferences for the three species, and that WTP estimates differ depending on the species and the level of improvement to their ESA status. Our results should be of interest to researchers and policy-makers, as we provide value estimates for three species that have limited, if any, estimates available in the economics literature, as well as new information about the way respondents make trade-offs among three taxonomically different species. Copyright © 2011 Elsevier Ltd. All rights reserved.
Salomon, Joshua A
2003-01-01
Background In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. 
Conclusions Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups. PMID:14687419
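The random utility model for rankings described above treats an observed ranking as a sequence of independent best-choices from the remaining states (the rank-ordered or "exploded" logit). A minimal sketch of its log-likelihood, with invented latent utilities for four EQ-5D states:

```python
import math

def ranking_loglik(utilities, ranking):
    """Log-likelihood of one observed ranking under the rank-ordered logit:
    at each stage the top remaining state is chosen with logit probability
    proportional to exp(utility)."""
    ll, remaining = 0.0, list(ranking)
    while len(remaining) > 1:
        top = remaining[0]
        denom = sum(math.exp(utilities[s]) for s in remaining)
        ll += utilities[top] - math.log(denom)
        remaining.pop(0)
    return ll

# Hypothetical latent utilities for four EQ-5D states (labels are the
# five-digit level profiles).
u = {"11111": 2.0, "11121": 1.2, "21232": 0.1, "33333": -1.5}
ll = ranking_loglik(u, ["11111", "11121", "21232", "33333"])
print(round(ll, 3))
```

Maximizing this likelihood over utilities parameterized by EQ-5D domain scores recovers interval-scaled valuations from purely ordinal data; a ranking that contradicts the utilities receives a much lower log-likelihood.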
Rolf, Megan M; Taylor, Jeremy F; Schnabel, Robert D; McKay, Stephanie D; McClure, Matthew C; Northcutt, Sally L; Kerley, Monty S; Weaber, Robert L
2010-04-19
Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed, however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
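A genomic relationship matrix of the kind described can be sketched from a matrix of SNP allele counts; the simulated genotypes below are illustrative, and the scaling follows the common VanRaden-style construction, which may differ in detail from the study's:

```python
import numpy as np

def grm(genotypes):
    """VanRaden-style genomic relationship matrix from an animals-x-SNPs
    matrix of 0/1/2 allele counts: centre each SNP by twice its allele
    frequency and scale by 2 * sum(p * (1 - p))."""
    p = genotypes.mean(axis=0) / 2.0      # estimated allele frequencies
    Z = genotypes - 2.0 * p               # centred genotypes
    scale = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / scale

rng = np.random.default_rng(2)
M = rng.binomial(2, 0.5, size=(10, 2500)).astype(float)  # 10 animals, 2,500 SNPs
G = grm(M)
print(G.shape)
```

With 2,500 or more SNPs (the robustness threshold the bootstrap analyses identified), diagonal elements for non-inbred animals cluster near 1 and off-diagonals estimate realized relationships, replacing the pedigree-based numerator relationship matrix in breeding value estimation.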
A Weak Value Based QKD Protocol Robust Against Detector Attacks
NASA Astrophysics Data System (ADS)
Troupe, James
2015-03-01
We propose a variation of the BB84 quantum key distribution protocol that utilizes the properties of weak values to ensure the validity of the quantum bit error rate estimates used to detect an eavesdropper. The protocol is shown theoretically to be secure against recently demonstrated attacks utilizing detector blinding and control, and it should also be robust against all detector-based hacking. Importantly, the new protocol promises to achieve this additional security without negatively impacting the secure key generation rate as compared to that originally promised by the standard BB84 scheme. Implementation of the weak measurements needed by the protocol should be very feasible using standard quantum optical techniques.
A data fusion-based methodology for optimal redesign of groundwater monitoring networks
NASA Astrophysics Data System (ADS)
Hosseini, Marjan; Kerachian, Reza
2017-09-01
In this paper, a new data fusion-based methodology is presented for spatio-temporal (S-T) redesign of Groundwater Level Monitoring Networks (GLMNs). The kriged maps of three different criteria (i.e. marginal entropy of water table levels, estimation error variances of mean water table levels, and estimates of long-term changes in water level) are combined to determine monitoring sub-areas of high and low priority, in order to consider different spatial patterns for each sub-area. The best spatial sampling scheme is selected by applying a new method, in which a regular hexagonal gridding pattern and the Thiessen polygon approach are utilized in sub-areas of high and low monitoring priority, respectively. An Artificial Neural Network (ANN) and an S-T kriging model are used to simulate water level fluctuations. To improve the accuracy of the predictions, results of the ANN and S-T kriging models are combined using a data fusion technique. The concept of Value of Information (VOI) is utilized to determine the two stations with maximum information value in the sub-areas of high and low monitoring priority. The observed groundwater level data of these two stations are considered for the power of trend detection and for estimating periodic fluctuations and mean values of the stationary components, which are used to determine non-uniform sampling frequencies for the sub-areas. The proposed methodology is applied to the Dehgolan plain in northwestern Iran. The results show that a new sampling configuration with 35 and 7 monitoring stations and sampling intervals of 20 and 32 days, respectively, in the sub-areas of high and low monitoring priority, leads to a more efficient monitoring network than the existing one containing 52 monitoring stations and monthly temporal sampling.
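One of the three criteria, the marginal entropy of water table levels, can be sketched from a histogram over fixed bins; the two synthetic wells below are illustrative:

```python
import numpy as np

def marginal_entropy(levels, bin_edges):
    """Shannon marginal entropy (nats) of a water-table-level record,
    estimated from a histogram over fixed bin edges."""
    counts, _ = np.histogram(levels, bins=bin_edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(3)
steady = rng.normal(30.0, 0.2, 500)     # well with little variability
variable = rng.normal(30.0, 3.0, 500)   # well with strong fluctuations

edges = np.linspace(20.0, 40.0, 21)     # common 1 m bins for both wells
h_steady = marginal_entropy(steady, edges)
h_variable = marginal_entropy(variable, edges)
print(h_steady < h_variable)
```

Stations whose levels carry higher entropy are more informative, so kriged maps of this quantity help delineate the high-priority monitoring sub-areas.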
Development of a Portfolio Management Approach with Case Study of the NASA Airspace Systems Program
NASA Technical Reports Server (NTRS)
Neitzke, Kurt W.; Hartman, Christopher L.
2012-01-01
A portfolio management approach was developed for the National Aeronautics and Space Administration's (NASA's) Airspace Systems Program (ASP). The purpose was to help inform ASP leadership regarding future investment decisions related to its existing portfolio of advanced technology concepts and capabilities (C/Cs) currently under development and to potentially identify new opportunities. The portfolio management approach is general in form and is extensible to other advanced technology development programs. It focuses on individual C/Cs and consists of three parts: 1) concept of operations (con-ops) development, 2) safety impact assessment, and 3) benefit-cost-risk (B-C-R) assessment. The first two parts are recommendations to ASP leaders and will be discussed only briefly, while the B-C-R part relates to the development of an assessment capability and will be discussed in greater detail. The B-C-R assessment capability enables estimation of the relative value of each C/C as compared with all other C/Cs in the ASP portfolio. Value is expressed in terms of a composite weighted utility function (WUF) rating, based on estimated benefits, costs, and risks. Benefit utility is estimated relative to achieving key NAS performance objectives, which are outlined in the ASP Strategic Plan.1 Risk utility focuses on C/C development and implementation risk, while cost utility focuses on the development and implementation portions of overall C/C life-cycle costs. Initial composite ratings of the ASP C/Cs were successfully generated; however, the limited availability of B-C-R information, which serves as input to the WUF model, reduced the meaningfulness of these initial investment ratings. Development of this approach, however, defined specific information-generation requirements for ASP C/C developers that will increase the meaningfulness of future B-C-R ratings.
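The composite WUF rating reduces to a weighted sum of scaled utilities. A minimal sketch with invented weights and scores (the actual ASP weights and criteria values are not given in the abstract):

```python
def wuf_rating(utilities, weights):
    """Composite weighted-utility-function rating: weighted sum of benefit,
    cost, and risk utilities, each pre-scaled to [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights must sum to 1
    return sum(weights[k] * utilities[k] for k in weights)

# Hypothetical concept/capability scored against B-C-R criteria.
weights = {"benefit": 0.5, "cost": 0.3, "risk": 0.2}
concept = {"benefit": 0.8, "cost": 0.4, "risk": 0.6}
rating = wuf_rating(concept, weights)
print(round(rating, 2))
```

Ranking every C/C in the portfolio by such a rating gives the relative-value comparison the B-C-R assessment capability provides.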
NASA Astrophysics Data System (ADS)
Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Vasilenko, Eugene; Volkova, Elena; Kukharsky, Alexander
2017-04-01
The model of water and heat exchange between vegetation-covered territory and the atmosphere (LSM, Land Surface Model) for the vegetation season has been developed to calculate soil water content, evapotranspiration, infiltration of water into the soil, vertical latent and sensible heat fluxes and other water and heat balance components, as well as soil surface and vegetation cover temperatures and depth distributions of moisture and temperature. The LSM is suited to utilizing satellite-derived estimates of precipitation, land surface temperature, vegetation characteristics and soil surface humidity for each pixel. Vegetation and meteorological characteristics, being the model parameters and input variables, correspondingly, have been estimated from ground observations and from thematic processing of measurement data of the scanning radiometers AVHRR/NOAA, SEVIRI/Meteosat-9, -10 (MSG-2, -3) and MSU-MR/Meteor-M № 2. Values of soil surface humidity have been calculated from remote sensing data of the scatterometers ASCAT/MetOp-A, -B. The case study has been carried out for part of the agricultural Central Black Earth Region of European Russia, with an area of 227,300 km2, located in the forest-steppe zone, for the 2012-2015 vegetation seasons. The main objectives of the study have been: - to build estimates of precipitation, land surface temperature (LST) and vegetation characteristics from MSU-MR measurement data using the refined technologies (including algorithms and programs) of thematic processing of satellite information matured on AVHRR and SEVIRI data, with all technologies adapted to the area of interest; - to investigate the possibility of utilizing satellite-derived estimates of the values above in the LSM, including verification of the obtained estimates and development of a procedure for inputting them into the model.
From the AVHRR data there have been built estimates of precipitation, three types of LST: land skin temperature Tsg, air temperature at the level of the vegetation cover (taken as the vegetation temperature) Ta and effective radiation temperature Ts.eff, as well as land surface emissivity E, normalized difference vegetation index NDVI, vegetation cover fraction B, and leaf area index LAI. The SEVIRI-based retrievals have included precipitation, LST Tls and Ta, E at daytime and nighttime, LAI (daily), and B. From the MSU-MR data the values of all the same characteristics as from the AVHRR data have been retrieved. The MSU-MR-based daily and monthly sums of precipitation have been calculated using the earlier developed and modified Multi-Threshold Method (MTM), intended for round-the-clock cloud detection and identification of cloud types, as well as for allocation of precipitation zones and determination of instantaneous maximum rainfall intensities for each pixel; the transition from assessing instantaneous rainfall intensities to estimating their daily values is a key element of the MTM. Measurement data from 3 IR MSU-MR channels (3.8, 11 and 12 μm), as well as their differences, have been used in the MTM as predictors. The correctness of the MSU-MR-derived rainfall estimates has been controlled by comparison with analogous AVHRR- and SEVIRI-based retrievals and with precipitation amounts measured at the agricultural meteorological station of the study region. The probability of rainfall zone determination from the MSU-MR data, matched against the actual ones, has been 75-85%, as for the AVHRR and SEVIRI data. The time behaviors of satellite-derived and ground-measured daily and monthly precipitation sums for the vegetation season and the year, correspondingly, have been in good agreement with each other, although the former have been smoother than the latter.
Discrepancies have existed for a number of local maxima, for which satellite-derived precipitation estimates have been lower than ground-measured values. This may be due to the different spatial scales of the areal satellite-derived and point ground-based estimates. Some spatial displacement of the satellite-determined rainfall maxima and minima relative to the ground-based data can be explained by the discrepancy between the cloud location on satellite images and in reality at high satellite sighting angles and considerable cloud-top altitudes. The reliability of the MSU-MR-derived rainfall estimates at each time step obtained using the MTM has been verified by comparing their values, determined from the MSU-MR, AVHRR, and SEVIRI measurements and distributed over the study area, with similar estimates obtained by interpolation of ground observation data. The MSU-MR-derived estimates of the temperatures Tsg, Ts.eff, and Ta have been obtained using a computational algorithm developed on the basis of the MTM and matured on AVHRR and SEVIRI data for the region under investigation. Since the MSU-MR instrument is similar to the AVHRR radiometer, the methods developed for satellite estimation of Tsg, Ts.eff, and Ta from AVHRR data could easily be transferred to the MSU-MR data. Comparison of the ground-measured and MSU-MR-, AVHRR-, and SEVIRI-derived LSTs has shown that the differences between all the estimates for the vast majority of observation times have not exceeded the RMSE of these quantities built from the AVHRR data. A similar conclusion has also been drawn from the time behavior of the MSU-MR-derived LAI over the vegetation season. Satellite-based estimates of precipitation, LST, LAI, and B have been utilized in the model with the help of specially developed procedures that replace the values determined from observations at agricultural meteorological stations with their satellite-derived counterparts, taking into account the spatial heterogeneity of their fields.
The adequacy of this replacement has been confirmed by comparing modeled and ground-measured values of soil moisture content W and evapotranspiration Ev. Discrepancies between the modeled and ground-measured values of W and Ev have been in the ranges of 10-15% and 20-25%, correspondingly, which may be considered an acceptable result. The resulting products of the model calculations using satellite data have been spatial fields of W, Ev, vertical sensible and latent heat fluxes, and other water and heat regime characteristics for the region of interest over the 2012-2015 vegetation seasons. Thus, the possibility has been shown of utilizing MSU-MR/Meteor-M № 2 data jointly with those of other satellites in the LSM to calculate characteristics of the water and heat regimes for the area under consideration. In addition, the first trial estimates of soil surface moisture from ASCAT scatterometer data for the study region have been obtained for the 2014-2015 vegetation seasons; they have been compared with modeling results, based on ground and satellite data, for several agricultural meteorological stations of the region; and specific requirements for the obtained information have been formulated. To date, estimates of surface moisture built from ASCAT data can be used for selecting the model soil parameter values and the initial soil moisture conditions for the vegetation season.
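A single-layer "bucket" water balance illustrates the kind of day-by-day accounting such a land surface model performs for soil water content and evapotranspiration. This is a generic sketch with invented parameters, not the LSM described above (which also resolves heat fluxes and depth profiles):

```python
def simulate_bucket(precip, pet, w0, w_max):
    """One-layer soil water balance (all quantities in mm).

    precip : daily precipitation
    pet    : daily potential evapotranspiration
    w0     : initial soil water content
    w_max  : water-holding capacity of the layer
    Returns (soil_water, actual_et) as day-by-day lists.
    """
    w = w0
    soil_water, actual_et = [], []
    for p, e in zip(precip, pet):
        ev = e * (w / w_max)        # ET scaled by relative soil wetness
        w = max(w + p - ev, 0.0)
        if w > w_max:               # excess water leaves as runoff/drainage
            w = w_max
        soil_water.append(w)
        actual_et.append(ev)
    return soil_water, actual_et
```

Satellite-derived precipitation and LST would enter such a scheme as the forcing inputs (`precip`, and via `pet`), which is the substitution procedure the abstract describes.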
Clinical variables impacting on the estimation of utilities in chronic obstructive pulmonary disease
Miravitlles, Marc; Huerta, Alicia; Valle, Manuel; García-Sidro, Patricia; Forné, Carles; Crespo, Carlos; López-Campos, José Luis
2015-01-01
Purpose Health utilities are widely used in health economics as a measurement of an individual’s preference and show the value placed on different health states over a specific period. Thus, health utilities are used as a measure of the benefits of health interventions in terms of quality-adjusted life years. This study aimed to determine the demographic and clinical variables significantly associated with health utilities for chronic obstructive pulmonary disease (COPD) patients. Patients and methods This was a multicenter, observational, cross-sectional study conducted between October 2012 and April 2013. Patients were aged ≥40 years, with spirometrically confirmed COPD. Utility values were derived from the preference-based generic questionnaire EQ-5D-3L applying weighted Spanish societal preferences. Demographic and clinical variables associated with utilities were assessed by univariate and multivariate linear regression models. Results Three hundred and forty-six patients were included, of whom 85.5% were male. The mean age was 67.9 (standard deviation [SD] =9.7) years and the mean forced expiratory volume in 1 second (%) was 46.2% (SD =15.5%); 80.3% were former smokers, and the mean smoking history was 54.2 (SD =33.2) pack-years. Median utilities (interquartile range) were 0.81 (0.26) with a mean value of 0.73 (SD =0.29); 22% of patients had a utility value of 1 (ceiling effect) and 3.2% had a utility value lower than 0. The factors associated with utilities in the multivariate analysis were sex (beta = −0.084, 95% confidence interval [CI]: −0.154; −0.013 for females), number of exacerbations in the previous year (−0.027, 95% CI: −0.044; −0.010), and modified Medical Research Council Dyspnea Scale (mMRC) score (−0.123 [95% CI: −0.185; −0.061], −0.231 [95% CI: −0.301; −0.161], and −0.559 [95% CI: −0.660; −0.458] for mMRC scores 2, 3, and 4 versus 1), all P<0.05.
Conclusion Multivariate analysis showed that female sex, frequent exacerbations, and an increased level of dyspnea were the main factors associated with reduced utility values in patients with COPD. PMID:25733826
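As an illustration of how regression coefficients like the ones above are obtained, the sketch below fits a simple least-squares line of utility against exacerbation count. The four data points are hypothetical, chosen only to yield a slope of the same order as the study's −0.027 per exacerbation; this is not the study's data or its multivariate model:

```python
def ols_fit(x, y):
    """Ordinary least squares with one predictor: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical data: EQ-5D utility versus exacerbations in the previous year.
exacerbations = [0, 1, 2, 3]
utility = [0.85, 0.82, 0.80, 0.76]
slope, intercept = ols_fit(exacerbations, utility)  # slope is the utility decrement per exacerbation
```

The multivariate version adds further predictors (sex, mMRC score) and solves the corresponding normal equations jointly rather than one predictor at a time.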
48 CFR 1452.237-71 - Utilization of Woody Biomass.
Code of Federal Regulations, 2011 CFR
2011-10-01
... timber/vegetative sales contract. Payment under the timber/vegetative sales contract must be at a price... appropriate payment specified in the related timber/vegetative sales contract before removal may be authorized... sales notice and/or prospectus, including volume estimates, appraised value and any appropriate special...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyer, James M.; Erdman, Bill; Iannucci, Joseph J., Jr.
2005-03-01
This report describes Phase III of a project entitled Innovative Applications of Energy Storage in a Restructured Electricity Marketplace. For this study, the authors assumed that it is feasible to operate an energy storage plant simultaneously for two primary applications: (1) energy arbitrage, i.e., buy-low-sell-high, and (2) reducing peak loads in utility "hot spots" so that the utility can defer the need to upgrade transmission and distribution (T&D) equipment. The benefits from the combined arbitrage and T&D deferral applications were estimated for five cases based on the specific requirements of two large utilities operating in the Eastern U.S. A number of parameters were estimated for the storage plant ratings required to serve the combined application: power output (capacity) and energy discharge duration (energy storage). In addition to estimating the various financial expenditures and the value of electricity that could be realized in the marketplace, the technical characteristics required for grid-connected distributed energy storage used for capacity deferral were also explored.
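The arbitrage component of the value can be sketched as a buy-low-sell-high margin adjusted for round-trip efficiency. The prices, plant size, and efficiency below are illustrative assumptions, not figures from the report:

```python
def cycle_margin(buy_price, sell_price, discharge_mwh, efficiency):
    """Gross margin of one storage cycle ($): buy discharge_mwh/efficiency
    at the off-peak price, sell discharge_mwh at the on-peak price."""
    charge_cost = (discharge_mwh / efficiency) * buy_price
    discharge_revenue = discharge_mwh * sell_price
    return discharge_revenue - charge_cost

# Assumed example: 10 MWh discharged per cycle at 75% round-trip efficiency,
# buying at $20/MWh off-peak and selling at $60/MWh on-peak.
margin = cycle_margin(20.0, 60.0, 10.0, 0.75)
```

In the combined application the report studies, this per-cycle margin would be summed over the year and added to the avoided (deferred) T&D upgrade cost to estimate total benefit.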
Wittenberg, Eve; Bray, Jeremy W.; Aden, Brandon; Gebremariam, Achamyeleh; Nosyk, Bohdan; Schackman, Bruce R.
2016-01-01
Aims To understand how the general public views the quality of life effects of opioid misuse and opioid use disorder on an individual and his/her spouse, measured in terms used in economic evaluations. Design Cross-sectional internet survey of a US-population-representative respondent panel conducted December 2013-January 2014. Setting USA. Participants 2,054 randomly-selected adults; 51% male (before weighting). Measurements Mean (95% CI) and median health “utility” for 6 opioid misuse and treatment outcomes: active injection misuse; active prescription misuse; methadone maintenance therapy at initiation, and when stabilized in treatment; and buprenorphine therapy at initiation, and when stabilized. Utility is a numerical representation of health-related quality of life used in economic evaluations to “adjust” estimated survival to include people's preferences for health states. Utilities are determined by surveying the general population to estimate the value they assign to particular health states—on a scale where 0=the value of being dead, and 1.0=the value of being in perfect health. Spouse spillover utility is assigned to a spouse of an individual who is in a particular health state. Findings Mean individual utility ranged from 0.574 (95% CI: 0.538, 0.611) for active injection opioid misuse to 0.766 for stabilized buprenorphine therapy (95% CI: 0.738, 0.795), with other states in between. Female respondents assigned higher utility to the active prescription misuse and buprenorphine therapy at initiation states than did males (p<0.05); all other states did not differ by respondent gender. Mean spousal utilities were significantly lower than 1.0 but mostly higher than individual utility, and were similar between male and female respondents.
Conclusions In the opinion of the US public, injection opioid misuse results in worse health-related quality of life than prescription misuse, and methadone therapy results in worse health-related quality of life than buprenorphine therapy. Spouses are negatively affected by their partner's opioid misuse and early treatment. PMID:26498740
Higgins, A; Barnett, J; Meads, C; Singh, J; Longworth, L
2014-12-01
To systematically review the existing literature on the value associated with convenience in health care delivery, independent of health outcomes, and to try to estimate the likely magnitude of any value found. A systematic search was conducted for previously published studies that reported preferences for convenience-related aspects of health care delivery in a manner that was consistent with either cost-utility analysis or cost-benefit analysis. Data were analyzed in terms of the methodologies used, the aspects of convenience considered, and the values reported. Literature searches generated 4715 records. Following a review of abstracts or full-text articles, 27 were selected for inclusion. Twenty-six studies reported some evidence of convenience-related process utility, in the form of either a positive utility or a positive willingness to pay. The aspects of convenience valued most often were mode of administration (n = 11) and location of treatment (n = 6). The most common valuation methodology was a discrete-choice experiment containing a cost component (n = 15). A preference for convenience-related process utility exists, independent of health outcomes. Given the diverse methodologies used to calculate it, and the range of aspects being valued, however, it is difficult to assess how large such a preference might be, or how it may be effectively incorporated into an economic evaluation. Increased consistency in reporting these preferences is required to assess these issues more accurately. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Kolo, Matthew Tikpangi; Khandaker, Mayeen Uddin; Amin, Yusoff Mohd; Abdullah, Wan Hasiah Binti
2016-01-01
Following the increasing demand of coal for power generation, activity concentrations of primordial radionuclides were determined in Nigerian coal using the gamma spectrometric technique with the aim of evaluating the radiological implications of coal utilization and exploitation in the country. Mean activity concentrations of 226Ra, 232Th, and 40K were 8.18±0.3, 6.97±0.3, and 27.38±0.8 Bq kg^-1, respectively. These values were compared with those of similar studies reported in the literature. The mean estimated radium equivalent activity was 20.26 Bq kg^-1 with corresponding average external hazard index of 0.05. Internal hazard index and representative gamma index recorded mean values of 0.08 and 0.14, respectively. These values were lower than their respective precautionary limits set by UNSCEAR. Average excess lifetime cancer risk was calculated to be 0.04×10^-3, which was insignificant compared with 0.05 prescribed by ICRP for low level radiation. Pearson correlation matrix showed significant positive relationship between 226Ra and 232Th, and with other estimated hazard parameters. Cumulative mean occupational dose received by coal workers via the three exposure routes was 7.69×10^-3 mSv y^-1, with inhalation pathway accounting for about 98%. All radiological hazard indices evaluated showed values within limits of safety. There is, therefore, no likelihood of any immediate radiological health hazards to coal workers, final users, and the environment from the exploitation and utilization of Maiganga coal.
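The radium equivalent activity and external hazard index quoted above follow the standard UNSCEAR-style formulas (Raeq = C_Ra + 1.43·C_Th + 0.077·C_K, and Hex = C_Ra/370 + C_Th/259 + C_K/4810). A minimal sketch reproducing the reported values from the reported mean activities:

```python
def radium_equivalent(c_ra, c_th, c_k):
    """Radium equivalent activity (Bq/kg): 232Th and 40K activities are
    weighted by their gamma output relative to 226Ra."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

def external_hazard_index(c_ra, c_th, c_k):
    """Dimensionless external hazard index; values below 1 indicate that
    the external gamma dose stays under the recommended limit."""
    return c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0

# Mean activities reported in the abstract (Bq/kg): 226Ra, 232Th, 40K.
raeq = radium_equivalent(8.18, 6.97, 27.38)      # ~20.26 Bq/kg
hex_ = external_hazard_index(8.18, 6.97, 27.38)  # ~0.05
```

Both computed values match the abstract's reported 20.26 Bq/kg and 0.05 to two decimals.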
Courtney, T K; Clancy, E A
1998-08-01
Information on the frequency and cost of OSHA enforcement penalties for musculoskeletal disorders (MSD) in the literature is limited. Such information would be of value to organizations in estimating the likelihood and financial impact of enforcement activity in their operations. This descriptive study utilized data from federal Occupational Safety and Health Administration (OSHA) inspections to examine the distribution of penalty costs arising from inspections with MSD-related citations from January 1985 to June 1994 and to estimate the probability of OSHA inspection in general and OSHA citation for MSD hazards from October 1985 to September 1993. The mean and median values of proposed penalties were $47,707 and $3600 respectively. A substantial influence of 1991 changes to the penalty structure was noted with decreasing mean and increasing median penalty values. Penalty values increased with establishment size and were higher for unplanned than for planned inspections. The probability of a federal OSHA inspection for any establishment ranged from 1:50 in 1986 to 1:100 in 1993 whereas the estimated probability of an inspection with MSD-related citations ranged from 1:167,000 in 1996 to as much as 1:38,000 during the peak of enforcement activity in 1990. The probability of an inspection with MSD citations for the largest establishments during the period was more than 1000 times greater than that for the smallest. The results of this study may be utilized by organizations seeking to demonstrate the advantages of reducing musculoskeletal morbidity in the workplace.
NASA Astrophysics Data System (ADS)
Suzuki, Ryosuke; Nishimura, Motoki; Yuan, Lee Chang; Kamahara, Hirotsugu; Atsuta, Yoichi; Daimon, Hiroyuki
2017-10-01
Utilization of sewage sludge via anaerobic digestion has been promoted for decades. However, it is still relatively uncommon, especially in Japan. As an approach to promoting the utilization of sewage sludge using anaerobic digestion, an integrated system that combines anaerobic digestion with a greenhouse, composting, and seaweed cultivation was proposed. Under this integrated system concept, not only can sewage sludge be treated by anaerobic digestion to create green energy, but by-products such as CO2 and heat produced during the process can also be utilized for crop production. In this study, the potential of such an integrated system was assessed by estimating a possible commercial scale and by comparing energy consumption with the conventional approach to sewage sludge treatment, which is incineration. The estimation of the possible commercial scale was based on the carbon flow of the system. Results showed that 25% of the current total electricity use of the wastewater treatment plant can be covered by the energy produced through anaerobic digestion of sewage sludge. The total energy consumption of the integrated system was estimated to be 14% lower than that of the incineration approach. Beyond the large amount of crops that can be produced, this study aims to showcase the potential of sewage sludge as a biomass resource through the proposed integrated system. The added value of producing crops by utilizing CO2 and heat can serve as a stimulus to the public, leading to greater interest in implementing sewage sludge utilization via anaerobic digestion.
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-squared (χ²) statistic. Utilizes a nonlinear optimization algorithm to calculate the best statistically weighted values of the parameters of the fitting function such that χ² is minimized. Provides the user with statistical information such as goodness of fit and the estimated parameter values producing the highest degree of correlation between the experimental data and the mathematical model. Written in FORTRAN 77.
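NLINEAR's source is not shown here, but the underlying idea, iteratively minimizing χ² via a local quadratic expansion, can be sketched with a small Gauss-Newton fit. The exponential model, data, and starting values in the test are illustrative assumptions, not part of the NASA program:

```python
import math

def fit_exponential(xs, ys, a, b, iters=100):
    """Gauss-Newton fit of y = a*exp(b*x), minimizing chi-squared with
    unit weights. Each step solves the 2x2 normal equations
    (J^T J) d = J^T r from the local quadratic expansion of chi-squared."""
    for _ in range(iters):
        resid = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
        jac = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        s11 = sum(ja * ja for ja, _ in jac)
        s12 = sum(ja * jb for ja, jb in jac)
        s22 = sum(jb * jb for _, jb in jac)
        g1 = sum(ja * r for (ja, _), r in zip(jac, resid))
        g2 = sum(jb * r for (_, jb), r in zip(jac, resid))
        det = s11 * s22 - s12 * s12
        a += (s22 * g1 - s12 * g2) / det   # parameter updates from the solve
        b += (s11 * g2 - s12 * g1) / det
    return a, b
```

A production fitter would add per-point statistical weights (the chi-squared weighting NLINEAR uses), a convergence test, and step damping for robustness far from the optimum.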
Weernink, Marieke G M; Groothuis-Oudshoorn, Catharina G M; IJzerman, Maarten J; van Til, Janine A
2016-01-01
The objective of this study was to compare treatment profiles including both health outcomes and process characteristics in Parkinson disease using best-worst scaling (BWS), time trade-off (TTO), and visual analogue scales (VAS). From the model comprising seven attributes with three levels, six unique profiles were selected representing process-related factors and health outcomes in Parkinson disease. A Web-based survey (N = 613) was conducted in a general population to estimate process-related utilities using profile-based BWS (case 2), multiprofile-based BWS (case 3), TTO, and VAS. The rank order of the six profiles was compared, convergent validity among methods was assessed, and individual analysis focused on the differentiation between pairs of profiles with the methods used. The aggregated health-state utilities for the six treatment profiles were highly comparable for all methods and no rank reversals were identified. On the individual level, the convergent validity between all methods was strong; however, respondents differentiated less in the utility of closely related treatment profiles with a VAS or TTO than with BWS. For TTO and VAS, this resulted in nonsignificant differences in mean utilities for closely related treatment profiles. This study suggests that all methods are equally able to measure process-related utility when the aim is to estimate the overall value of treatments. On an individual level, such as in shared decision making, BWS allows for better prioritization of treatment alternatives, especially if they are closely related. The decision-making problem and the need for explicit trade-off between attributes should determine the choice of method. Copyright © 2016. Published by Elsevier Inc.
Assessing the resolution-dependent utility of tomograms for geostatistics
Day-Lewis, F. D.; Lane, J.W.
2004-01-01
Geophysical tomograms are used increasingly as auxiliary data for geostatistical modeling of aquifer and reservoir properties. The correlation between tomographic estimates and hydrogeologic properties is commonly based on laboratory measurements, co-located measurements at boreholes, or petrophysical models. The inferred correlation is assumed uniform throughout the interwell region; however, tomographic resolution varies spatially due to acquisition geometry, regularization, data error, and the physics underlying the geophysical measurements. Blurring and inversion artifacts are expected in regions traversed by few or only low-angle raypaths. In the context of radar traveltime tomography, we derive analytical models for (1) the variance of tomographic estimates, (2) the spatially variable correlation with a hydrologic parameter of interest, and (3) the spatial covariance of tomographic estimates. Synthetic examples demonstrate that tomograms of qualitative value may have limited utility for geostatistics; moreover, the imprint of regularization may preclude inference of meaningful spatial statistics from tomograms.
Documentary evidence of past floods in Europe and their utility in flood frequency estimation
NASA Astrophysics Data System (ADS)
Kjeldsen, T. R.; Macdonald, N.; Lang, M.; Mediero, L.; Albuquerque, T.; Bogdanowicz, E.; Brázdil, R.; Castellarin, A.; David, V.; Fleig, A.; Gül, G. O.; Kriauciuniene, J.; Kohnová, S.; Merz, B.; Nicholson, O.; Roald, L. A.; Salinas, J. L.; Sarauskiene, D.; Šraj, M.; Strupczewski, W.; Szolgay, J.; Toumazis, A.; Vanneuville, W.; Veijalainen, N.; Wilson, D.
2014-09-01
This review outlines the use of documentary evidence of historical flood events in contemporary flood frequency estimation in European countries. The study shows that despite widespread consensus in the scientific literature on the utility of documentary evidence, the actual migration from academic to practical application has been limited. A detailed review of flood frequency estimation guidelines from different countries showed that the value of historical data is generally recognised, but practical methods for systematic and routine inclusion of this type of data into risk analysis are in most cases not available. Studies of historical events were identified in most countries, and good examples of national databases attempting to collate the available information were identified. The conclusion is that there is considerable potential for improving the reliability of the current flood risk assessments by harvesting the valuable information on past extreme events contained in the historical data sets.
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
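A minimal dynamic program in the spirit of the approach described (not the authors' MAPSlope implementation, which also estimates the model parameters): discretize the slope values, penalize slope switches as a simple uniform breakpoint prior, and recover the MAP slope sequence by a Viterbi-style recursion over the first differences of the data:

```python
def map_slopes(y, slope_grid, change_penalty):
    """MAP estimate of a piecewise-constant slope sequence for samples y
    at unit spacing. slope_grid discretizes the admissible slope values;
    change_penalty is the cost of each slope switch (breakpoint prior).
    Returns one slope per increment y[t+1]-y[t]."""
    diffs = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    states = range(len(slope_grid))
    cost = [(diffs[0] - s) ** 2 for s in slope_grid]
    back = []
    for d in diffs[1:]:
        best_prev = min(states, key=lambda j: cost[j])
        new_cost, ptr = [], []
        for k, s in enumerate(slope_grid):
            keep, switch = cost[k], cost[best_prev] + change_penalty
            ptr.append(k if keep <= switch else best_prev)
            new_cost.append(min(keep, switch) + (d - s) ** 2)
        cost = new_cost
        back.append(ptr)
    # Backtrack from the cheapest final state.
    k = min(states, key=lambda j: cost[j])
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    path.reverse()
    return [slope_grid[k] for k in path]
```

Because the switch penalty is uniform, comparing each state only against the globally cheapest predecessor is equivalent to the full Viterbi minimization over all predecessors, which keeps the inner loop O(1) per state.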
Estimating the value of life and injury for pedestrians using a stated preference framework.
Niroomand, Naghmeh; Jenkins, Glenn P
2017-09-01
The incidence of pedestrian death over the period 2010 to 2014 per 1,000,000 in North Cyprus is about 2.5 times that of the EU, with 10.5 times more pedestrian road injuries than deaths. With the prospect of North Cyprus entering the EU, many investments need to be undertaken to improve road safety in order to reach EU benchmarks. We conducted a stated choice experiment to identify the preferences and tradeoffs of pedestrians in North Cyprus for improved walking times, pedestrian costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers chose. These were used to estimate individuals' willingness to pay (WTP) to save walking time and to avoid pedestrian fatalities and injuries. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of an injury (VI) prevented, and the value per hour of walking time saved. The estimate of the VSL was €699,434 and the estimate of the VI was €20,077. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries. The estimated value of time to pedestrians is €7.20 per person-hour. The ratio of deaths to injuries is much higher for pedestrians than for road accidents generally, which is consistent with the higher estimated WTP to avoid a pedestrian accident than to avoid a car accident. The value of time of €7.20 is quite high relative to the wages earned. The findings provide a set of information on the value of risk reduction (VRR) for fatalities and injuries and on the value of pedestrian time that is critical for conducting ex ante appraisals of investments to improve pedestrian safety. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
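WTP estimates of this kind follow from ratios of marginal utilities in the fitted choice model. A minimal sketch, with hypothetical coefficients chosen only to match the order of magnitude of the €7.20/hour value of walking time reported above (they are not the study's estimates):

```python
def monetary_value(beta_attribute, beta_cost):
    """Money value of a one-unit attribute change implied by a discrete
    choice model: the ratio of the two marginal utilities."""
    return beta_attribute / beta_cost

# Hypothetical coefficients: a disutility of -0.12 per minute walked and
# -1.0 per euro of route cost imply a value of walking time of
# 0.12 EUR/minute, i.e. 7.20 EUR/hour.
value_of_time_per_hour = 60 * monetary_value(-0.12, -1.0)
```

The VSL and VI follow the same logic, dividing the coefficient on fatality (or injury) risk by the cost coefficient and rescaling to a whole statistical life or injury.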
García-Armesto, Sandra; Angulo-Pueyo, Ester; Martínez-Lizaga, Natalia; Mateus, Céu; Joaquim, Inês; Bernal-Delgado, Enrique
2015-02-01
Although C-section is a highly effective procedure, the literature abounds with evidence of overuse, and particularly misuse in lower-value indications such as low-risk deliveries. This study aims to quantify utilization of C-section in low-risk cases, mapping out areas showing excess usage in each country, and to estimate excess expenditure as a proxy of the opportunity cost borne by healthcare systems. Observational, ecologic study on deliveries in 913 sub-national administrative areas of five European countries (Denmark, England, Portugal, Slovenia and Spain) from 2002 to 2009. The study includes a cross-sectional analysis with 2009 data and a time-trend analysis for the whole period. Main endpoints: age-standardized utilization rates of C-section in low-risk pregnancies and deliveries, per 100 deliveries. Secondary endpoints: estimated excess cases per geographical unit of analysis in two scenarios of minimized utilization. C-section is widely used in all examined countries (ranging from 19% of Slovenian deliveries to 33% of deliveries in Portugal). With the exception of Portugal, there are no systematic variations in intensity of use across areas in the same country. Cross-country comparison of lower-value C-section leaves Denmark, with 10%, and Portugal, with 2%, as the highest and lowest. This behaviour was stable over the period of analysis. Within each country, the scattered geographical patterns of use intensity suggest that local drivers play a major role within the national trend. The analysis conducted suggests considerable room for enhancing value in obstetric care, and equity in women's access to it, within the countries studied. The analysis of geographical variations in lower-value care can constitute a powerful screening tool. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
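Age-standardized utilization rates such as the study's main endpoint are computed by direct standardization: stratum-specific rates are averaged with the weights of a common reference population so that areas with different maternal age mixes become comparable. A sketch with invented strata counts and reference weights:

```python
def age_standardized_rate(cases, deliveries, ref_weights):
    """Directly standardized rate per 100 deliveries.

    cases       : C-sections per maternal age stratum
    deliveries  : deliveries per stratum
    ref_weights : reference-population share of each stratum (sums to 1)
    """
    strata_rates = [c / d for c, d in zip(cases, deliveries)]
    return 100.0 * sum(r * w for r, w in zip(strata_rates, ref_weights))

# Invented example: two age strata with crude rates 10% and 20%, weighted
# equally in the reference population, give a standardized rate of 15.
rate = age_standardized_rate([10, 20], [100, 100], [0.5, 0.5])
```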
Net merit as a measure of lifetime profit: 2010 revision
USDA-ARS?s Scientific Manuscript database
The 2010 revision of net merit (NM$) updates a number of key economic values as well as milk utilization statistics. Members of Project S-1040, Genetic Selection and Crossbreeding To Enhance Reproduction and Survival of Dairy Cattle, provided updated incomes and expenses used to estimate lifetime pr...
Genetic evaluation of Angus cattle for carcass marbling using ultrasound and genomic indicators
USDA-ARS?s Scientific Manuscript database
Objectives were to estimate genetic parameters needed to elucidate the relationships of a molecular breeding value for marbling (MBV), intramuscular fat of yearling bulls measured with ultrasound (IMF) and marbling score of harvested steers (MRB), and to assess the utility of MBV and IMF in predicti...
Estimating envelope thermal characteristics from single point in time thermal images
NASA Astrophysics Data System (ADS)
Alshatshati, Salahaldin Faraj
Energy efficiency programs implemented nationally in the U.S. by utilities have yielded savings at an average cost of $0.03/kWh. This cost is still well below generation costs. However, as the lowest-cost energy efficiency measures are adopted, the cost effectiveness of further investment declines. Thus there is a need to find savings opportunities more effectively, regionally and nationally, so that the greatest cost effectiveness in implementing energy efficiency can be achieved. Integral to this process are at-scale energy audits. However, on-site building energy audits are expensive, in the range of US$1.29/m2-$5.37/m2, and there are an insufficient number of professionals to perform them. Energy audits that can be conducted at scale and at low cost are needed. Research is presented that addresses community-wide characterization of building envelope thermal characteristics via drive-by and fly-over GPS-linked thermal imaging. A central question drives this research: can single point-in-time thermal images be used to infer U-values and thermal capacitances of walls and roofs? Previous efforts to use thermal images to estimate U-values have been limited to rare steady exterior weather conditions. The approaches posed here are based upon the development of two models. The first is a dynamic model of a building envelope component with unknown U-value and thermal capacitance. The weather conditions prior to the thermal image are used as inputs to the model, which is solved for the exterior surface temperature, ultimately predicting the temperature at the time of the thermal measurement. The model U-value and thermal capacitance are tuned to force the error between the predicted surface temperature and the measured surface temperature from thermal imaging to near zero. This model is developed simply to show that such a model cannot be relied upon to accurately estimate the U-value.
The second is a data-based methodology. This approach integrates the exterior surface temperature measurements, historical utility data, and easily accessible (or potentially easily accessible) housing data. A Random Forest model is developed from a training subset of residences for which the envelope U-value is known. This model is used to predict the envelope U-value for a validation set of houses with unknown U-values. Demonstrated is an ability to estimate the wall and roof U-values with R-squared values of 0.97 and 0.96, respectively, using as few as 9 and 24 training houses for wall and ceiling U-value estimation, respectively. The implication of this research is significant, offering the possibility of auditing residences remotely at scale via aerial and drive-by thermal imaging.
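A minimal sketch of this data-based pipeline, assuming synthetic stand-ins for the imagery, utility, and housing features (the abstract does not give the real feature set or data, so the column names and values below are illustrative assumptions):

```python
# Sketch: predict envelope U-values with a Random Forest trained on a small
# subset of houses, as in the data-based methodology described above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 40
# Assumed predictors: exterior surface temperature (C) from a thermal image,
# normalized utility consumption (kWh/m2), and house age (years).
X = np.column_stack([
    rng.uniform(-5, 5, n),     # surface temperature
    rng.uniform(50, 250, n),   # utility consumption
    rng.uniform(0, 80, n),     # house age
])
# Synthetic "true" wall U-value (W/m2K), loosely tied to the predictors.
y = 0.3 + 0.02 * X[:, 0] + 0.004 * X[:, 1] + 0.005 * X[:, 2]

# Train on a small subset (the work reports as few as 9 houses for walls),
# then predict for the remaining "unknown" houses.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:9], y[:9])
pred = model.predict(X[9:])
```

Since Random Forest predictions are averages of training targets, the predicted U-values necessarily stay within the range seen in the training houses, which is one reason a representative training subset matters.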
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple missing values. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is especially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These were initially prototyped in Perl, and their accuracy was evaluated on yeast expression time series data using several different parameter settings. The experiments showed that the two-pass algorithm consistently outperforms the neighborhood-wise and position-wise algorithms, in particular for datasets with a higher level of missing entries. The performance of the two-pass DTWimpute algorithm was further benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former proved superior to the latter. Motivated by these findings, which clearly indicate the added value of DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm.
The software also provides for a choice between three different initial rough imputation methods.
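The core idea, DTW distance ranking candidate profiles for imputation, can be sketched as follows. This is a simplified single-pass, nearest-profile fill for illustration, not the published two-pass DTWimpute algorithm:

```python
# Minimal sketch of DTW-based imputation for a gene expression time series:
# rank complete candidate profiles by DTW distance over the observed
# positions, then fill each missing position from the closest profile.
import math

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def impute(profile, candidates):
    """Fill None entries using the DTW-nearest complete candidate profile."""
    obs_idx = [i for i, x in enumerate(profile) if x is not None]
    observed = [profile[i] for i in obs_idx]
    # Rank candidates by DTW distance restricted to the observed positions.
    best = min(candidates, key=lambda c: dtw(observed, [c[i] for i in obs_idx]))
    return [best[i] if x is None else x for i, x in enumerate(profile)]

complete = [[0.1, 0.5, 0.9, 0.5], [1.0, 0.2, 0.1, 0.0]]
filled = impute([0.2, 0.6, None, 0.4], complete)  # -> [0.2, 0.6, 0.9, 0.4]
```

The first candidate warps far more cheaply onto the observed values than the second, so its value at the missing position is used.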
Dishonest Academic Conduct: From the Perspective of the Utility Function.
Sun, Ying; Tian, Rui
Dishonest academic conduct has aroused extensive attention in academic circles. To explore how scholars make decisions according to the principle of maximal utility, the authors constructed a general utility function based on expected utility theory. The concrete utility functions of three types of scholars were deduced: risk-neutral, risk-averse, and risk-preferring. Following this, the assignment method was adopted to analyze and compare the scholars' utilities of academic conduct. It was concluded that changing the values of risk costs, internal condemnation costs, academic benefits, and the subjective estimation of penalties following dishonest academic conduct can lead to changes in the utility of academic dishonesty. The results of the current study suggest that within scientific research, measures to prevent and govern dishonest academic conduct should be formulated according to the various effects of the above four variables.
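The expected-utility comparison at the heart of this argument can be sketched as below. The variable names, numeric values, and the linear utility function are illustrative assumptions, not the paper's actual specification:

```python
# Illustrative expected utility of dishonest conduct for a scholar.
def expected_utility(benefit, internal_cost, penalty, p_detect, u=lambda x: x):
    """u is linear for a risk-neutral scholar; a concave u would model risk
    aversion, a convex u risk preference."""
    caught = u(benefit - internal_cost - penalty)
    not_caught = u(benefit - internal_cost)
    return p_detect * caught + (1 - p_detect) * not_caught

# Raising the subjective probability of detection lowers the expected utility
# of dishonesty, illustrating one of the policy levers discussed above.
low_risk = expected_utility(benefit=10, internal_cost=2, penalty=20, p_detect=0.1)
high_risk = expected_utility(benefit=10, internal_cost=2, penalty=20, p_detect=0.5)
```

Here the same conduct switches from positive to negative expected utility as the perceived detection probability rises, which mirrors the paper's conclusion that penalty estimation and risk costs shift the utility of dishonesty.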
Rencz, F; Brodszky, V; Stalmeier, P F M; Tamási, B; Kárpáti, S; Péntek, M; Baji, P; Mitev, A Z; Gulácsi, L
2016-09-01
Health-related quality of life (HRQoL) in pemphigus has been widely investigated; nevertheless, utility values for economic evaluations are still lacking. To estimate health utilities for hypothetical pemphigus vulgaris (PV) and pemphigus foliaceus (PF) health states in a general population sample. Three health states (uncontrolled PV, uncontrolled PF and controlled pemphigus) were developed based on a systematic literature review of HRQoL studies in pemphigus. Utilities were obtained from a convenience sample of 108 adults using a visual analogue scale (VAS) and 10-year time trade-off (TTO). Lead-time TTO was applied for health states regarded as worse than dead, with a lead time to disease time ratio of 1:1. The mean VAS utility scores for PV, PF and controlled pemphigus were 0.25 ± 0.15, 0.37 ± 0.17 and 0.63 ± 0.16, respectively. Corresponding TTO utilities were as follows: 0.34 ± 0.38, 0.51 ± 0.32 and 0.75 ± 0.31. Overall, 14% and 6% judged PV and PF as being worse than dead. For both VAS and TTO values, significant differences were observed between all health states (P < 0.001). VAS utilities were rated significantly lower compared with TTO in each health state (P < 0.001). This is the first study that reports health utility values for PV and PF. Successful treatment of pemphigus might result in significant utility gain (0.24-0.41). These empirical findings with respect to three health states in pemphigus may serve as anchor points for further utility studies and cost-effectiveness analyses. © 2016 British Association of Dermatologists.
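Assuming the standard conversion formulas for a 10-year TTO and for lead-time TTO with a 1:1 lead-to-disease ratio (the study's exact elicitation wording may differ), the mapping from a respondent's answer to a utility can be sketched as:

```python
# x_full_health is the number of years in full health the respondent judges
# equivalent to the fixed horizon spent in the health state.
def tto_utility(x_full_health, horizon=10.0):
    """Conventional TTO: utility = x / horizon, in [0, 1]."""
    return x_full_health / horizon

def lead_time_tto_utility(x_full_health, lead=10.0, disease=10.0):
    """Lead-time TTO for states possibly worse than dead, with a
    lead-time-to-disease-time ratio of lead:disease (1:1 here):
    utility = (x - lead) / disease, ranging from -1 to +1."""
    return (x_full_health - lead) / disease
```

With a 1:1 ratio, trading away the entire 10-year lead time (x = 10) gives a utility of 0, and accepting less than the lead time signals a state judged worse than dead, as 14% of respondents did for PV.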
Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca
2018-05-08
Values of the mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. Clinical laboratories must therefore provide results as accurately as possible, and measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with intermediate imprecision (using long-term internal quality control data) and bias (utilizing a certified reference material). Next, we combined them with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but considering data from internal and external quality control schemes to estimate the uncertainty related to bias. The estimated expanded uncertainties for single laboratory validation and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This would confirm that either of the two approaches could be used to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
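The single-laboratory-validation combination described above follows the usual quadrature rule: component standard uncertainties are combined as a root sum of squares and expanded with a coverage factor k = 2 (approximately 95% coverage). A sketch with illustrative relative uncertainties (not the study's data):

```python
import math

def expanded_uncertainty(u_imprecision, u_bias, u_calibrator, k=2.0):
    """Combine standard uncertainties in quadrature, then expand with k."""
    u_combined = math.sqrt(u_imprecision**2 + u_bias**2 + u_calibrator**2)
    return k * u_combined

# Illustrative relative standard uncertainties, in %:
U_rel = expanded_uncertainty(u_imprecision=4.5, u_bias=2.5, u_calibrator=2.0)
```

In the proficiency-testing variant only the bias component changes (it is derived from quality control scheme data instead of a certified reference material), which is why the two approaches land on such similar expanded uncertainties.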
NASA Astrophysics Data System (ADS)
Orans, Ren
1990-10-01
Existing procedures used to develop marginal costs for electric utilities were not designed for applications in an increasingly competitive market for electric power. The utility's value of receiving power, or the costs of selling power, however, depend on the exact location of the buyer or seller, the magnitude of the power and the period of time over which the power is used. Yet no electric utility in the United States has disaggregate marginal costs that reflect differences in costs due to the time, size or location of the load associated with their power or energy transactions. The existing marginal costing methods used by electric utilities were developed in response to the Public Utilities Regulatory Policy Act (PURPA) in 1978. The "ratemaking standards" (Title 1) established by PURPA were primarily concerned with the appropriate segmentation of total revenues to various classes-of-service, designing time-of-use rating periods, and the promotion of efficient long-term resource planning. By design, the methods were very simple and inexpensive to implement. Now, more than a decade later, the costing issues facing electric utilities are becoming increasingly complex, and the benefits of developing more specific marginal costs will outweigh the costs of developing this information in many cases. This research develops a framework for estimating total marginal costs that vary by the size, timing, and the location of changes in loads within an electric distribution system. To complement the existing work at the Electric Power Research Institute (EPRI) and Pacific Gas and Electric Company (PGandE) on estimating disaggregate generation and transmission capacity costs, this dissertation focuses on the estimation of distribution capacity costs. 
While the costing procedure is suitable for the estimation of total (generation, transmission and distribution) marginal costs, the empirical work focuses on the geographic disaggregation of marginal costs related to electric utility distribution investment. The study makes use of data from an actual distribution planning area, located within PGandE's service territory, to demonstrate the important characteristics of this new costing approach. The most significant result of this empirical work is that geographic differences in the cost of capacity in distribution systems can be as much as four times larger than the current system average utility estimates. Furthermore, lumpy capital investment patterns can lead to significant cost differences over time.
Fine-tuning satellite-based rainfall estimates
NASA Astrophysics Data System (ADS)
Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.
2018-05-01
Rainfall datasets are available from various sources, including satellite estimates and ground observations. Ground observation locations are sparsely scattered, so the use of satellite estimates is advantageous, because they can provide data for places where ground observations are not available. In general, however, satellite estimates contain bias, since they are the product of algorithms that transform sensor responses into rainfall values. Another cause may be the limited number of ground observations used by the algorithms as the reference in determining rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations not previously used by the algorithm. The bias correction was performed by applying a Quantile Mapping procedure between ground observation data and satellite estimates. Since Quantile Mapping requires the mean and standard deviation of both the reference data and the data being corrected, an Inverse Distance Weighting scheme was first applied to the mean and standard deviation of the observation data to provide a spatial composition of these originally scattered values. It was therefore possible to provide a reference data point at the same location as each satellite estimate. The results show that the new dataset represents the rainfall values recorded by the ground observations statistically better than the previous dataset.
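The two-step chain described above can be sketched as follows: inverse distance weighting interpolates the station mean and standard deviation to the satellite pixel, then a mean/variance scaling (a simple parametric form of quantile mapping, consistent with the moments the paper says the procedure requires) adjusts the satellite value. Station values and distances are illustrative:

```python
def idw(values, distances, power=2.0):
    """Inverse-distance-weighted average of station values."""
    w = [1.0 / d**power for d in distances]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

def scale_correct(sat_value, sat_mean, sat_std, obs_mean, obs_std):
    """Re-express the satellite value's z-score under the satellite
    climatology in terms of the interpolated observed climatology."""
    return obs_mean + (sat_value - sat_mean) * obs_std / sat_std

# Interpolate observed mean/std (mm/day) from three stations at 10/20/40 km.
obs_mean = idw([6.0, 8.0, 7.0], [10.0, 20.0, 40.0])
obs_std = idw([3.0, 4.0, 3.5], [10.0, 20.0, 40.0])
corrected = scale_correct(sat_value=9.0, sat_mean=10.0, sat_std=5.0,
                          obs_mean=obs_mean, obs_std=obs_std)
```

The nearest station dominates the interpolated moments, so the corrected rainfall is pulled toward the distribution observed closest to the pixel.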
Mavrodi, A; Aletras, V; Spanou, A; Niakas, D
2017-12-01
Contingent valuation is widely used to determine individuals' willingness to pay (WTP) for a health gain. Our study aimed to elicit an empirical estimate of the monetary value of a quality-adjusted life year (QALY) in a Greek outpatient setting in times of economic austerity and to assess the impact of patients' characteristics on their valuations. We used a questionnaire as a survey tool to determine the maximum WTP for a health gain from a hypothetical therapy and to evaluate patients' health-related quality of life (EuroQoL-5D-3L) and demographic and socioeconomic characteristics. EuroQoL tariffs were used to estimate health utilities. Mean WTP values were computed, and ordinary least squares regressions were performed on Box-Cox-transformed and log-transformed WTP per QALY dependent variables to remedy observed skewness. Analyses were performed for 167 patients with utility values less than unity. Mean WTP per QALY reported was similar for both payment vehicles examined: payments made out-of-pocket (€2629) and payments made through new tax imposition (€2407). Regression results showed that higher net monthly family income was associated with higher WTP per QALY for both payment vehicles. Moreover, the presence of a chronic condition and higher level of education were associated with higher out-of-pocket WTP per QALY and WTP per QALY through taxes, respectively. The very low WTP per QALY estimates could be explained by the recent severe economic depression and austerity in Greece. In fact, family income was found to be a significant predictor of WTP per QALY. Since these estimates deviate significantly from the cost-effectiveness thresholds still employed in economic evaluations in this country, research should be undertaken promptly to further examine this important issue using a nationwide representative sample of the general population along with WTP and other methodologies.
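The skewness remedy mentioned above relies on the one-parameter Box-Cox transform, which for positive y is y(lambda) = (y**lambda - 1)/lambda, tending to log(y) as lambda approaches 0. A minimal sketch (the study's estimated lambda is not reported here, so 0.5 below is only an example):

```python
import math

def box_cox(y, lam):
    """One-parameter Box-Cox transform for positive y."""
    if y <= 0:
        raise ValueError("Box-Cox requires positive values")
    return math.log(y) if lam == 0 else (y**lam - 1.0) / lam

# Compress a heavily right-skewed set of WTP values with lambda = 0.5:
transformed = [box_cox(w, 0.5) for w in (100.0, 2500.0, 40000.0)]
```

Note the log transform used in the study's second specification is exactly the lambda = 0 limit of this family, which is why the two dependent variables are natural companions.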
Health State Utility Impact of Breast Cancer in U.S. Women Aged 18-44 Years.
Brown, Derek S; Trogdon, Justin G; Ekwueme, Donatus U; Chamiec-Case, Linda; Guy, Gery P; Tangka, Florence K; Li, Chunyu; Trivers, Katrina F; Rodriguez, Juan L
2016-02-01
Breast cancer affects women's health-related quality of life negatively, but little is known about how breast cancer affects this in younger women aged 18-44 years. This study measures preference-based health state utility (HSU) values, a scaled index of health-related quality of life for economic evaluation, for younger women with breast cancer and compares these values with same-age women with other cancers and older women (aged ≥45 years) with breast cancer. Data from the 2009 and 2010 Behavioral Risk Factor Surveillance System were analyzed in 2014. The sample included 218,852 women; 7,433 and 18,577 had histories of breast and other cancers. HSU values were estimated using Healthy Days survey questions and a published mapping algorithm. Linear regression models for HSU were estimated by age group (18-44 and ≥45 years). The adjusted breast cancer HSU impact was four times larger for younger women than for older women (-0.097 vs -0.024, p<0.001). For younger women, the effect of breast cancer on HSU was 70% larger than that of other cancers (-0.097 vs -0.057, p=0.024). Younger breast cancer survivors reported lower HSU values than older survivors, highlighting the impact of breast cancer on the physical and mental health of younger women. The estimates may be used to evaluate quality-adjusted life-years or expectancy for prevention or treatment of breast cancer. This study also indicates that separate quality of life adjustments for women by age group are important for economic analysis of public health breast cancer interventions. Copyright © 2016 American Journal of Preventive Medicine. All rights reserved.
Economics of wild salmon ecosystems: Bristol Bay, Alaska
John W. Duffield; Christopher J. Neher; David A. Patterson; Oliver S. Goldsmith
2007-01-01
This paper provides an estimate of the economic value of wild salmon ecosystems in the major watershed of Bristol Bay, Alaska. The analysis utilizes both regional economic and social benefit-cost accounting frameworks. Key sectors analyzed include subsistence, commercial fishing, sport fishing, hunting, and nonconsumptive wildlife viewing and tourism. The mixed cash-...
Potential benefits from a successful solar thermal program
NASA Technical Reports Server (NTRS)
Terasawa, K. L.; Gates, W. R.
1982-01-01
Solar energy systems were investigated which complement nuclear and coal technologies as a means of reducing U.S. dependence on imported petroleum. Solar Thermal Energy Systems (STES) represent an important category of solar energy technologies. STES can be utilized in a broad range of applications servicing a variety of economic sectors, and they can be deployed in both near-term and long-term markets. The net present value of the energy cost savings attributable to electric utility and industrial process heat (IPH) applications of STES was estimated for a variety of future energy cost scenarios and levels of R&D success. This analysis indicated that the expected net benefits of developing an STES option are significantly greater than the expected costs of completing the required R&D. In addition, transportable fuels and chemical feedstocks represent a substantial potential future market for STES. Due to the basic nature of this R&D activity, however, it is currently impossible to estimate the value of STES in these markets. Despite this fact, private investment in STES R&D is not anticipated, due to the high level of uncertainty characterizing the expected payoffs.
Estimating the Aqueous Solubility of Pharmaceutical Hydrates
Franklin, Stephen J.; Younis, Usir S.; Myrdal, Paul B.
2016-01-01
Estimation of crystalline solute solubility is well documented throughout the literature. However, these models typically consider the anhydrous crystal form, which is not always the most stable crystal form in water. In this study, an equation that predicts the aqueous solubility of a hydrate is presented. This research attempts to extend the utility of the ideal solubility equation by incorporating the desolvation energetics of the hydrated crystal. Similar to the ideal solubility equation, which accounts for the energetics of melting, this model approximates the energy of dehydration using the entropy of vaporization of water. Aqueous solubilities, dehydration and melting temperatures, and log P values were collected experimentally and from the literature. The data set includes different hydrate types and a range of log P values. Three models were evaluated; the most accurate approximates the entropy of dehydration (ΔSd) by the entropy of vaporization (ΔSvap) of water, and utilizes onset dehydration and melting temperatures in combination with log P. With this model, the average absolute error for the prediction of the solubility of 14 compounds was 0.32 log units. PMID:27238488
Averbeck, Márcio Augusto; Krassioukov, Andrei; Thiruchelvam, Nikesh; Madersbacher, Helmut; Bøgelund, Mette; Igawa, Yasuhiko
2018-06-08
Intermittent catheterization (IC) is the gold standard for bladder management in patients with chronic urinary retention. Despite its medical benefits, IC users experience a negative impact on their quality of life (QoL). For health-economic decision making, this impact is normally measured using generic QoL measures (such as EQ-5D) that estimate a single utility score, which can be used to calculate quality-adjusted life years (QALYs). However, these generic measures may not be sensitive to all relevant aspects of QoL affected by intermittent catheters. This study used alternative methods to estimate the health state utilities associated with different scenarios: using a multiple-use catheter, a one-time-use catheter, a pre-lubricated one-time-use catheter, and a pre-lubricated one-time-use catheter with one less urinary tract infection (UTI) per year. Health state utilities were elicited through an internet-based Time Trade-Off (TTO) survey in adult volunteers representing the general population in Canada and the UK. Health states were developed to represent the catheters based on the following four attributes: steps needed for the IC process, time needed, pain, and the frequency of UTIs. The survey was completed by 956 respondents. One-time-use catheters, pre-lubricated one-time-use catheters and ready-to-use catheters were preferred to multiple-use catheters. The utility gains were associated with the following features: one-time use (Canada: +0.013, UK: +0.021), ready to use (all: +0.017), and one less UTI/year (all: +0.011). Internet-based survey respondents may have valued health states differently than the rest of the population; this might be a source of bias. Steps and time needed for the IC process, pain related to IC, and the frequency of UTIs have a significant impact on IC-related utilities. These values could be incorporated into a cost-utility analysis.
X-Ray Detection and Processing Models for Spacecraft Navigation and Timing
NASA Technical Reports Server (NTRS)
Sheikh, Suneel; Hanson, John
2013-01-01
The current primary method of deep-space navigation is the NASA Deep Space Network (DSN). High-performance navigation is achieved using Delta Differential One-Way Range techniques that utilize simultaneous observations from multiple DSN sites, and incorporate observations of quasars near the line of sight to a spacecraft in order to improve the range and angle measurement accuracies. Over the past four decades, x-ray astronomers have identified a number of x-ray pulsars with pulsed emissions having stabilities comparable to atomic clocks. The x-ray pulsar-based navigation and time determination (XNAV) system uses phase measurements from these sources to establish autonomously the position of the detector, and thus the spacecraft, relative to a known reference frame, much as the Global Positioning System (GPS) uses phase measurements from radio signals from several satellites to establish the position of the user relative to an Earth-centered fixed frame of reference. While a GPS receiver uses an antenna to detect the radio signals, XNAV uses a detector array to capture the individual x-ray photons from the x-ray pulsars. The navigation solution relies on detailed x-ray source models, signal processing, navigation and timing algorithms, and analytical tools that form the basis of an autonomous XNAV system. Through previous XNAV development efforts, some techniques have been established to utilize a pulsar pulse time-of-arrival (TOA) measurement to correct a position estimate. One well-studied approach, based upon Kalman filter methods, optimally adjusts a dynamic orbit propagation solution based upon the offset in measured and predicted pulse TOA. In this delta position estimator scheme, previously estimated values of spacecraft position and velocity are utilized from an onboard orbit propagator.
Using these estimated values, the detected arrival times at the spacecraft of pulses from a pulsar are compared to the predicted arrival times defined by the pulsar's pulse timing model. A discrepancy provides an estimate of the spacecraft position offset, since an error in position will relate to the measured time offset of a pulse along the line of sight to the pulsar. XNAV researchers have been developing additional enhanced approaches to process the photon TOAs to arrive at an estimate of spacecraft position, including those using maximum-likelihood estimation, digital phase-locked loops, and "single photon processing" schemes that utilize all available time data associated with each photon. Using pulsars from separate, non-coplanar locations provides range and range-rate measurements in each pulsar's direction. Combining these different pulsar measurements solves for offsets in position and velocity in three dimensions, and provides accurate overall navigation for deep space vehicles.
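The geometric core of the delta-position scheme above can be sketched as follows: each pulsar's measured-minus-predicted TOA, scaled by the speed of light, is the position offset projected onto that pulsar's line of sight, so three or more non-coplanar pulsars let a least-squares solve recover the full 3-D offset. The pulsar directions and timing residuals below are made up for illustration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position_offset(unit_vectors, toa_residuals):
    """unit_vectors: (n, 3) pulsar line-of-sight directions;
    toa_residuals: (n,) measured-minus-predicted TOAs in seconds.
    Solves n_i . dx = c * dt_i for the position offset dx."""
    A = np.asarray(unit_vectors, dtype=float)
    b = C * np.asarray(toa_residuals, dtype=float)
    dx, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx

n_hat = np.eye(3)                          # three orthogonal pulsar directions
true_dx = np.array([300.0, -150.0, 75.0])  # metres
dt = n_hat @ true_dx / C                   # implied timing residuals
recovered = solve_position_offset(n_hat, dt)
```

With more than three pulsars the same least-squares solve averages down measurement noise, which is one motivation for the maximum-likelihood and single-photon processing schemes mentioned above.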
Chilean population norms derived from the health-related quality of Life SF-6D.
Garcia-Gordillo, Miguel A; Collado-Mateo, Daniel; Olivares, Pedro R; Adsuar, José C
2018-06-01
The Health-Related Quality of Life Short Form 6D (HRQoL SF-6D) provides utility values for health status. The utilities generated have a number of potentially valuable applications in economic evaluation, not least ensuring comparability between studies. Reference values can be useful to estimate the effect of interventions on patients' HRQoL in the absence of control groups. Thus, the purpose of this study was to provide normative values for the SF-6D in the Chilean population. A cross-sectional study was conducted evaluating 5293 people. SF-6D utilities were derived from the SF-12 questions. The mean SF-6D utility index for the whole sample was 0.74. It was higher for men (0.78) than for women (0.71). The ceiling effect was much higher for men (11.16%) than for women (5.31%). Women were more likely to show problems in any dimension than were men. Chilean population norms for the SF-6D help in the decision-making process around health policies. Men reported higher health status than women in all subcategories analyzed. Likewise, men also reported higher scores than women in overall SF-6D dimensions.
NASA Astrophysics Data System (ADS)
Cope, Robert Frank, III
1998-12-01
The electric utility industry in the United States is currently experiencing a new and different type of growing pain. It is the pain of having to restructure itself into a competitive business. Many industry experts are trying to explain how the nation as a whole, as well as individual states, will implement restructuring and handle its numerous "transition problems." One significant transition problem for federal and state regulators rests with determining a utility's stranded costs. Stranded generation facilities are assets that would be uneconomic in a competitive environment, or assets whose regulated book value is greater than market value. At issue is the methodology which will be used to estimate stranded costs. The two primary methods are known as "Top-Down" and "Bottom-Up." The "Top-Down" approach simply determines the present value of the losses in revenue as the market price for electricity changes over a period of time into the future. The problem with this approach is that it does not take into account technical issues associated with the generation and wheeling of electricity. The "Bottom-Up" approach computes the present value of specific strandable generation facilities and compares the resulting valuations with their historical costs. It is regarded as a detailed and difficult, but more precise, approach to identifying stranded assets and their associated costs. This dissertation develops a "Bottom-Up" quantitative, optimization-based approach to electric power wheeling within the state of Louisiana. It optimally evaluates all production capabilities and coordinates the movement of bulk power through transmission interconnections of competing companies in and around the state. Sensitivity analysis of this approach is performed by varying seasonal consumer demand, electric power imports, and transmission interconnection cost parameters.
Generation facility economic dispatch and transmission interconnection bulk power transfers, specific to each set of parameters, lead to the identification of stranded generation facilities. Stranded costs of non-dispatched and uneconomically dispatched generation facilities can then be estimated to indicate, arguably, the largest portion of restructuring transition costs as the industry is transformed from its present monopolistic structure to a competitive one.
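The "Top-Down" calculation that this "Bottom-Up" work contrasts itself against reduces to a present-value sum of the yearly revenue shortfall (regulated revenue minus revenue at market prices) over a transition horizon. A sketch with illustrative figures, not data from the dissertation:

```python
def stranded_cost_top_down(regulated_price, market_price, annual_mwh,
                           years, discount_rate):
    """Present value of the annual revenue shortfall over the horizon."""
    shortfall = (regulated_price - market_price) * annual_mwh
    return sum(shortfall / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# e.g. $60/MWh regulated vs $40/MWh market, 1 TWh/yr, 10 years at 8%:
cost = stranded_cost_top_down(60.0, 40.0, 1_000_000, 10, 0.08)
```

The sketch makes the "Top-Down" method's limitation visible: nothing in it depends on which plants run, or on transmission constraints, which is exactly the technical detail the "Bottom-Up" dispatch model restores.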
Valuing reduced antibiotic use for pediatric acute otitis media.
Meropol, Sharon B
2008-04-01
The 2004 American Academy of Pediatrics acute otitis media guidelines urge parents to weigh the benefits of reduced antibiotic use, adverse drug events, and future resistance versus risks of extra costs and sick days resulting from guideline use. The value of decreased antibiotic resistance has not been quantified. The objective was to perform cost-utility analysis, estimating the resistance value of implementing the guidelines for acute otitis media treatment for children <2 years of age. Outcomes were described with a common denominator and the value of avoiding resistance was estimated using a parental perspective. Decision analysis results were used for outcome probabilities. Published utilities were used to describe outcomes in quality-adjusted life-day units. The minimum resistance benefit value, where the benefits of the American Academy of Pediatrics guidelines would at least balance their costs, was defined as the guidelines' incremental costs minus their other benefits. For a child 2 to <6 months of age presenting to a primary care physician with possible otitis media, parents would need to value the resistance benefit at 0.77 quality-adjusted life-days per antibiotic prescription avoided for the guidelines' benefits to balance their costs. For the 6- to <24-month-old group, results were 0.67 quality-adjusted life-days per prescription avoided. Results were sensitive to the dollar cost utility; when willingness to pay ranged from $20,000 to $200,000 per quality-adjusted life-year, results ranged from 0.36 and 0.30 quality-adjusted life-days up to 4.10 and 3.57 quality-adjusted life-days for the 2- to <6-month-old and 6- to <24-month-old groups, respectively. Costs were driven by missed parent work days. From a societal perspective, trading 0.30 to 4 quality-adjusted life-days to avoid 1 antibiotic course might be desirable; from a parental perspective, this may not be as desirable. 
Parent demand for antibiotics may be rational when driven by the value of parent time. Other approaches that have the potential to reduce antibiotic use, such as wider use of influenza vaccine and improved rapid viral diagnostic techniques, might be more successful.
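The break-even logic described above can be sketched as follows: the minimum value parents must place on avoided resistance, per prescription avoided, is the guidelines' incremental cost converted into quality-adjusted life-days (QALDs) via a willingness-to-pay threshold, minus the guidelines' other benefits. The numeric inputs are illustrative assumptions, not the paper's data:

```python
def min_resistance_benefit_qald(incremental_cost, other_benefits_qald,
                                wtp_per_qaly):
    """Minimum resistance benefit (QALDs per prescription avoided) at which
    the guidelines' benefits balance their costs."""
    qald_value = wtp_per_qaly / 365.0   # dollar value of one QALD
    return incremental_cost / qald_value - other_benefits_qald

# Higher willingness to pay per QALY shrinks the QALD-equivalent of the cost,
# so fewer QALDs of resistance benefit are needed to break even.
v = min_resistance_benefit_qald(incremental_cost=50.0,
                                other_benefits_qald=0.1,
                                wtp_per_qaly=50_000.0)
```

This dependence on the dollar-per-QALY conversion is why the paper's results swing from roughly 0.3 to 4 QALDs as willingness to pay ranges from $20,000 to $200,000 per QALY.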
Appari, Ajit; Johnson, M Eric; Anthony, Denise L
2018-01-01
To determine whether the use of information technology (IT), measured by Meaningful Use capability, is associated with lower rates of inappropriate utilization of imaging services in hospital outpatient settings. A retrospective cross-sectional analysis of 3332 nonfederal U.S. hospitals using data from Hospital Compare (2011 outpatient imaging efficiency measures), HIMSS Analytics (2009 health IT), and the Health Indicator Warehouse (market characteristics). Hospitals were categorized by their health IT infrastructure, including EHR Stage-1 capability and three advanced imaging functionalities/systems: an integrated picture archiving and communication system (PACS), Web-based image distribution, and clinical decision support (CDS) with physician pathways. Three imaging efficiency measures suggesting inappropriate utilization during 2011 included: the percentage of "combined" (with and without contrast) computed tomography (CT) studies out of all CT studies for the abdomen and chest, respectively, and the percentage of magnetic resonance imaging (MRI) studies of the lumbar spine without antecedent conservative therapy within 60 days. For each measure, three separate regression models (GLM with gamma-log link function, and the denominator of the imaging measure as exposure) were estimated, adjusting for hospital characteristics, market characteristics, and state fixed effects. Additionally, Heckman's Inverse Mills Ratio and the propensity for Stage-1 EHR capability were used to account for selection bias. We find support for an association of each of the four health IT capabilities with inappropriate utilization rates of one or more imaging modalities. Stage-1 EHR capability is associated with lower inappropriate utilization rates for chest CT (incidence rate ratio IRR=0.72, p-value <0.01) and lumbar MRI (IRR=0.87, p-value <0.05). Integrated PACS is associated with a lower inappropriate utilization rate for abdomen CT (IRR=0.84, p-value <0.05).
Web-based image distribution capability is associated with lower inappropriate utilization rates for chest CT (IRR=0.66, p-value <0.05) and lumbar MRI (IRR=0.86, p-value <0.05). CDS with physician pathways is associated with lower inappropriate utilization rates for abdomen CT (IRR=0.87, p-value <0.01) and lumbar MRI (IRR=0.90, p-value <0.05). All other cases showed no association. The study offers mixed results. Taken together, the results suggest that the use of Stage-1 Meaningful Use capable EHR systems along with advanced imaging-related functionalities could have a beneficial impact on reducing some of the inappropriate utilization of outpatient imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
Estimating a WTP-based value of a QALY: the 'chained' approach.
Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam
2013-09-01
A major issue in health economic evaluation is that of the value to place on a quality adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in developing more sound methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. We developed within EuroVaQ an adaptation to the 'chained approach' that attempts to overcome problems documented previously (in particular the tendency to arrive at exceedingly high WTP per QALY values). The survey was administered via Internet panels in each participating country and almost 22,000 responses were achieved. Estimates of the value of a QALY varied across questions and were, if anything, on the low side, with the (trimmed) 'all country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges do still exist, however, and there is scope for further refinement. Copyright © 2013 Elsevier Ltd. All rights reserved.
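The two-step 'chained' logic described in this abstract can be illustrated with a small calculation; the numbers below are invented for the example and are not EuroVaQ estimates:

```python
# Step 1: a utility u for the health state is elicited (e.g. by SG or TTO).
# Step 2: WTP to avoid spending d years in that state is elicited; the QALY
# loss avoided is (1 - u) * d, so WTP per QALY is the ratio of the two.

def wtp_per_qaly(wtp_to_avoid, utility, duration_years):
    qaly_loss = (1.0 - utility) * duration_years
    return wtp_to_avoid / qaly_loss

print(wtp_per_qaly(2000.0, 0.8, 0.5))  # → 20000.0
```

One documented source of the exceedingly high values the paper mentions is visible in the denominator: as the elicited utility approaches 1, the QALY loss avoided shrinks towards zero and the implied WTP per QALY explodes.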
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization
Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan
2017-01-01
In this paper, we consider the direction of arrival (DOA) estimation problem for noncircular (NC) sources in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of the signals are used to double the virtual array aperture, and real-valued data are obtained by unitary transformation. Then a real-valued block-sparse model is established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization to achieve enhanced sparsity of the solution. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because it uses the noncircular properties of the signals to extend the virtual array aperture and the real-valued structure to suppress noise, the proposed method provides better performance than conventional sparse-recovery-based algorithms. Furthermore, the proposed method can handle the case of underdetermined DOA estimation. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770
Do Health Care Delivery System Reforms Improve Value? The Jury Is Still Out.
Korenstein, Deborah; Duan, Kevin; Diaz, Manuel J; Ahn, Rosa; Keyhani, Salomeh
2016-01-01
Widespread restructuring of health delivery systems is underway in the United States to reduce costs and improve the quality of health care. To describe studies evaluating the impact of system-level interventions (incentives and delivery structures) on the value of US health care, defined as the balance between quality and cost. We identified articles in PubMed (2003 to July 2014) using keywords identified through an iterative process, with reference and author tracking. We searched tables of contents of relevant journals from August 2014 through 11 August 2015 to update our sample. We included prospective or retrospective studies of system-level changes, with a control, reporting both quality and either cost or utilization of resources. Data about study design, study quality, and outcomes were extracted by one reviewer and checked by a second. Thirty reports of 28 interventions were included. Interventions included patient-centered medical home implementations (n=12), pay-for-performance programs (n=10), and mixed interventions (n=6); no other intervention types were identified. Most reports (n=19) described both cost and utilization outcomes. Quality, cost, and utilization outcomes varied widely; many improvements were small and process outcomes predominated. Improved value (improved quality with stable or lower cost/utilization or stable quality with lower cost/utilization) was seen in 23 reports; 1 showed decreased value, and 6 showed unchanged, unclear, or mixed results. Study limitations included variability among specific endpoints reported, inconsistent methodologies, and lack of full adjustment in some observational trials. Lack of standardized MeSH terms was also a challenge in the search. On balance, the literature suggests that health system reforms can improve value. However, this finding is tempered by the varying outcomes evaluated across studies with little documented improvement in outcome quality measures.
Standardized measures of value would facilitate assessment of the impact of interventions across studies and better estimates of the broad impact of system change.
Craig, Benjamin M; Busschbach, Jan JV
2009-01-01
Background To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. Methods First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM. For the interpretation of time trade-off (TTO) responses, we show that the episodic model implies a coefficient estimator, and the instant model implies a mean slope estimator. Second, we demonstrate these estimators and the differences between the estimates for 42 health states using TTO responses from the seminal Measurement and Valuation in Health (MVH) study conducted in the United Kingdom. Mean slopes are estimated with and without Dolan's transformation of worse-than-death (WTD) responses. Finally, we demonstrate an exploded probit estimator, an extension of the coefficient estimator for discrete choice data that accommodates both TTO and rank responses. Results By construction, mean slopes are less than or equal to coefficients, because slopes are fractions and, therefore, magnify downward errors in WTD responses. The Dolan transformation of WTD responses causes mean slopes to increase in similarity to coefficient estimates, yet they are not equivalent (i.e., absolute mean difference = 0.179). Unlike mean slopes, coefficient estimates demonstrate strong concordance with rank-based predictions (Lin's rho = 0.91). Combining TTO and rank responses under the exploded probit model improves the identification of health state values, decreasing the average width of confidence intervals from 0.057 to 0.041 compared to TTO-only results. Conclusion The episodic RUM expands upon the theoretical framework underlying health state valuation and contributes to health econometrics by motivating the selection of coefficient and exploded probit estimators for the analysis of TTO and rank responses.
In future MVH surveys, sample size requirements may be reduced through the incorporation of multiple responses under a single estimator. PMID:19144115
NASA Astrophysics Data System (ADS)
Sumarto; Aprianty, D.; Bachtiar, R. A.; Kristiana, L.
2018-01-01
Manonjaya salacca (snake fruit) is an original Indonesian variety that is currently declining because its taste is less favoured than that of the snake fruit commonly on the market. There is concern that this variety may be lost in the future, so its utilization needs to be revitalized by diversifying processed products, one of them being baked food products made from Manonjaya salacca flour. The purpose of this research was to determine the acceptance level of baked food products made from Manonjaya salacca flour, assessed organoleptically, and to estimate their nutritional value. The research method was observational with a descriptive explanation. The panellists in this study were 61 consumers. Organoleptically, respondents tended to rate the cake, muffin, cookies, and flakes favourably on every color, flavor, taste, and texture parameter. Nutritional values per 100 g of the baked food products made from salacca flour (cake, muffin, cookies, flakes) were: energy 287.5-479.0 kcal, water 0.8-3.8 g, protein 6.0-6.7 g, fat 0.8-31.0 g, carbohydrates 45.0-98.8 g, and fiber 1.1-4.6 g. Panellists accepted the organoleptic characteristics of the baked food products made from Manonjaya variety salacca flour, and the estimated nutritional values varied.
NASA Astrophysics Data System (ADS)
Sun, Yong; Ma, Zilin; Tang, Gongyou; Chen, Zheng; Zhang, Nong
2016-07-01
Since the main power source of a hybrid electric vehicle (HEV) is the power battery, prediction of battery performance, especially state-of-charge (SOC) estimation, has attracted great attention in the HEV field. However, imprecise SOC estimates significantly degrade the running performance of an HEV. A variable structure extended Kalman filter (VSEKF)-based estimation method, which can be used to analyze the SOC of a lithium-ion battery under a fixed driving condition, is presented. First, a general lower-order battery equivalent circuit model (GLM), which includes a column accumulation model, an open-circuit voltage model, and the SOC output model, is established, and the off-line and online model parameters are calculated from hybrid pulse power characterization (HPPC) test data. Next, a VSEKF estimation method for SOC, which integrates the ampere-hour (Ah) integration method and the extended Kalman filter (EKF) method, is executed with different adaptive weighting coefficients, determined according to the different values of open-circuit voltage obtained in the corresponding charging or discharging processes. According to the experimental analysis, the VSEKF method yields faster convergence and more accurate simulation results for HEV operation. The SOC estimation error with the VSEKF method falls in the range of 5% to 10%, compared with 20% to 30% for the EKF method and the Ah integration method. In summary, the accuracy of SOC estimation for a lithium-ion cell and for a lithium-ion battery pack obtained with the VSEKF method is significantly improved relative to the Ah integration method and the EKF method, and the VSEKF method can be widely used for SOC estimation in the lithium-ion packs of HEVs under practical driving conditions.
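A minimal sketch of the idea of combining ampere-hour counting with a Kalman voltage correction follows. This is not the paper's VSEKF or its GLM battery model: the linear OCV curve, internal resistance, and noise variances below are invented for illustration, and there is no variable-structure weighting.

```python
import numpy as np

CAP_AH = 2.0     # cell capacity [Ah] (illustrative)
R_INT = 0.01     # internal resistance [ohm] (illustrative)

def ocv(soc):    # hypothetical monotone OCV(SOC) curve [V]
    return 3.2 + 1.0 * soc

def ekf_soc(currents, voltages, dt, soc0):
    """Scalar EKF: predict by coulomb counting, update with the voltage residual."""
    soc, P, q, r = soc0, 1e-2, 1e-7, 1e-3
    estimates = []
    for i_k, v_k in zip(currents, voltages):
        # Predict: ampere-hour integration (discharge current positive)
        soc -= i_k * dt / (3600.0 * CAP_AH)
        P += q
        # Update: residual against the measurement model v = OCV(soc) - R*i
        H = 1.0                      # dOCV/dSOC for the linear curve above
        K = P * H / (H * P * H + r)  # Kalman gain
        soc += K * (v_k - (ocv(soc) - R_INT * i_k))
        P *= (1.0 - K * H)
        estimates.append(soc)
    return np.array(estimates)

# Simulated 1 A discharge for 1000 s; the filter starts from a wrong SOC
# and is pulled back to the truth by the voltage correction.
dt, n = 1.0, 1000
true_soc = 0.9 - np.arange(1, n + 1) * dt / (3600.0 * CAP_AH)
rng = np.random.default_rng(0)
volts = ocv(true_soc) - R_INT * 1.0 + rng.normal(0.0, 0.005, n)
est = ekf_soc(np.ones(n), volts, dt, soc0=0.7)
```

Pure Ah integration would carry the initial 0.2 SOC error forever; the EKF update removes it, which is the behaviour the paper's blended approach exploits.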
The Development, Validation, and Utility of the Diabetes Prevention Trial-Type 1 Risk Score (DPTRS)
Sosenko, Jay M.; Skyler, Jay S.; Palmer, Jerry P.
2016-01-01
This report details the development, validation, and utility of the Diabetes Prevention Trial-Type 1 (DPT-1) Risk Score (DPTRS) for type 1 diabetes (T1D). Proportional hazards regression was used to develop the DPTRS model which includes the glucose and C-peptide sums from oral glucose tolerance tests at 30, 60, 90, and 120 minutes, the log fasting C-peptide, age, and the log BMI. The DPTRS was externally validated in the TrialNet Natural History Study cohort (TNNHS). In a study of the application of the DPTRS, the findings showed that it could be used to identify normoglycemic individuals who were at a similar risk for T1D as those with dysglycemia. The DPTRS could also be used to identify lower risk dysglycemic individuals. Risk estimates of individuals deemed to be at higher risk according to DPTRS values did not differ significantly between the DPT-1 and the TNNHS, whereas the risk estimates for those with dysglycemia were significantly higher in DPT-1. Individuals with very high DPTRS values were found to be at such marked risk for T1D that they could reasonably be considered to be in a pre-diabetic state. The findings indicate that the DPTRS has utility in T1D prevention trials and for identifying pre-diabetic individuals. PMID:26077017
The development, validation, and utility of the Diabetes Prevention Trial-Type 1 Risk Score (DPTRS).
Sosenko, Jay M; Skyler, Jay S; Palmer, Jerry P
2015-08-01
This report details the development, validation, and utility of the Diabetes Prevention Trial-Type 1 (DPT-1) Risk Score (DPTRS) for type 1 diabetes (T1D). Proportional hazards regression was used to develop the DPTRS model which includes the glucose and C-peptide sums from oral glucose tolerance tests at 30, 60, 90, and 120 min, the log fasting C-peptide, age, and the log BMI. The DPTRS was externally validated in the TrialNet Natural History Study cohort (TNNHS). In a study of the application of the DPTRS, the findings showed that it could be used to identify normoglycemic individuals who were at a similar risk for T1D as those with dysglycemia. The DPTRS could also be used to identify lower risk dysglycemic individuals. Risk estimates of individuals deemed to be at higher risk according to DPTRS values did not differ significantly between the DPT-1 and the TNNHS; whereas, the risk estimates for those with dysglycemia were significantly higher in DPT-1. Individuals with very high DPTRS values were found to be at such marked risk for T1D that they could reasonably be considered to be in a pre-diabetic state. The findings indicate that the DPTRS has utility in T1D prevention trials and for identifying pre-diabetic individuals.
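The structure of a DPTRS-style score, a linear predictor from a proportional hazards fit over the inputs the abstract lists, can be sketched as follows. The coefficients are placeholders for illustration only, not the published DPT-1 estimates:

```python
import math

def risk_score(glucose_sum, cpeptide_sum, fasting_cpeptide, age, bmi,
               b=(0.002, -0.05, -0.3, -0.02, 1.0)):
    # Linear predictor over the DPTRS inputs: OGTT glucose and C-peptide
    # sums, log fasting C-peptide, age, and log BMI. The weights b are
    # hypothetical; the real ones come from the proportional hazards fit.
    return (b[0] * glucose_sum + b[1] * cpeptide_sum
            + b[2] * math.log(fasting_cpeptide)
            + b[3] * age + b[4] * math.log(bmi))
```

With a positive glucose weight, a higher OGTT glucose sum raises the score, matching the direction of risk the abstract describes.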
The social costs of dangerous products: an empirical investigation.
Shapiro, Sidney; Ruttenberg, Ruth; Leigh, Paul
2009-01-01
Defective consumer products impose significant costs on consumers and third parties when they cause fatalities and injuries. This Article develops a novel approach to measuring the true extent of such costs, which may not be accurately captured under current methods of estimating the cost of dangerous products. Current analysis rests on a narrowly defined set of costs, excluding certain types of costs. The cost-of-injury estimates utilized in this Article address this omission by quantifying and incorporating these costs to provide a more complete picture of the true impact of defective consumer products. The new estimates help to gauge the true value of the civil liability system.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
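The algorithm the abstract outlines, linearizing the fitting function around the current parameter estimate and solving the resulting linear system, is the Gauss-Newton scheme. A minimal modern sketch (not the Fortran program itself):

```python
import numpy as np

def gauss_newton_fit(f, jac, x, y, sigma, p0, n_iter=50):
    # Minimize chi^2 = sum(((y - f(x, p)) / sigma)^2) by repeatedly
    # linearizing f around p (retaining only the first Taylor term) and
    # solving the resulting linear least-squares system for the step dp.
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = (y - f(x, p)) / sigma            # weighted residuals
        J = jac(x, p) / sigma[:, None]       # weighted Jacobian
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + dp
        if np.max(np.abs(dp)) < 1e-12:       # converged
            break
    chi2 = float(np.sum(((y - f(x, p)) / sigma) ** 2))
    return p, chi2

# Example: fit y = a * exp(b * x) to noisy data
def model(x, p):
    return p[0] * np.exp(p[1] * x)

def model_jac(x, p):
    e = np.exp(p[1] * x)
    return np.column_stack([e, p[0] * x * e])

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 40)
sigma = np.full_like(x, 0.05)
y = model(x, [2.0, -1.3]) + rng.normal(0.0, 0.05, x.size)
p_hat, chi2 = gauss_newton_fit(model, model_jac, x, y, sigma, p0=[1.0, -1.0])
```

As the abstract notes, meaningful starting values (`p0` here) are required for convergence; a poor guess can make the linearization step diverge.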
Assessment of 2001 NLCD percent tree and impervious cover estimates
Eric Greenfield; David J. Nowak; Jeffrey T. Walton
2009-01-01
The 2001 National Land Cover Database (NLCD) tree and impervious cover maps provide an opportunity to extract basic land-cover information helpful for natural resource assessments. To determine the potential utility and limitations of the 2001 NLCD data, this exploratory study compared 2001 NLCD-derived values of overall percent tree and impervious cover within...
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled using copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both the marginal probability distributions and the copula functions. A case study with 54 years of observed data illustrates the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probabilities of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could benefit cost-benefit based non-stationary bivariate design flood estimation across the world.
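The kind of bivariate dependence a copula captures can be illustrated with the Gumbel-Hougaard family, a common choice for dependent flood variables such as peak and volume. The family and parameter values here are illustrative; the abstract does not specify which copula NSCOBE uses:

```python
import math

def gumbel_copula(u, v, theta):
    # Gumbel-Hougaard copula C(u, v); theta = 1 is independence,
    # larger theta means stronger upper-tail dependence.
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    # P(U > u and V > v) from the copula by inclusion-exclusion
    return 1.0 - u - v + gumbel_copula(u, v, theta)
```

At theta = 1 the copula reduces to the product u*v, so the joint exceedance of two 10-year levels is 0.1 * 0.1 = 0.01; with dependence (theta > 1) the same pair of levels is exceeded jointly more often, which is why modeling the dependence, and its change over time, matters for design floods.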
Comparing UK, USA and Australian values for EQ-5D as a health utility measure of oral health.
Brennan, D S; Teusner, D N
2015-09-01
Using generic measures to examine outcomes of oral disorders can add additional information relating to health utility. However, different algorithms are available to generate health states. The aim was to assess UK-, US- and Australian-based algorithms for the EuroQol (EQ-5D) in relation to their discriminative and convergent validity. Methods: Data were collected from adults in Australia aged 30-61 years by mailed survey in 2009-10, including the EQ-5D and a range of self-reported oral health variables, and self-rated oral and general health. Responses were collected from n=1,093 persons (response rate 39.1%). UK-based EQ-5D estimates were lower (0.85) than the USA and Australian estimates (0.91). EQ-5D was associated (p<0.01) with all seven oral health variables, with differences in utility scores ranging from 0.03 to 0.06 for the UK, from 0.04 to 0.07 for the USA, and from 0.05 to 0.08 for the Australian-based estimates. The effect sizes (ESs) of the associations with all seven oral health variables were similar for the UK (ES=0.26 to 0.49), USA (ES=0.31 to 0.48) and Australian-based (ES=0.31 to 0.46) estimates. EQ-5D was correlated with global dental health for the UK (rho=0.29), USA (rho=0.30) and Australian-based estimates (rho=0.30), and correlations with global general health were the same (rho=0.42) for the UK, USA and Australian-based estimates. EQ-5D exhibited equivalent discriminative validity and convergent validity in relation to oral health variables for the UK, USA and Australian-based estimates.
Lichtenberg, Frank R; Tatar, Mehtap; Çalışkan, Zafer
2014-09-01
We investigate the impact of pharmaceutical innovation on longevity, hospitalization and medical expenditure in Turkey during the period 1999-2010 using longitudinal, disease-level data. From 1999 to 2008, mean age at death increased by 3.6 years, from 63.0 to 66.6 years. We estimate that in the absence of any pharmaceutical innovation, mean age at death would have increased by only 0.6 years. Hence, pharmaceutical innovation is estimated to have increased mean age at death in Turkey by 3.0 years during the period 1999-2008. We also examine the effect of pharmaceutical innovation on hospital utilization. We estimate that pharmaceutical innovation has reduced the number of hospital days by approximately 1% per year. We use our estimates of the effect of pharmaceutical innovation on age at death, hospital utilization and pharmaceutical expenditure to assess the incremental cost-effectiveness of pharmaceutical innovation, i.e., the cost per life-year gained from the introduction of new drugs. The baseline estimate of the cost per life-year gained from pharmaceutical innovation is $2776. Even this figure is a very small fraction of leading economists' estimates of the value of (or consumers' willingness to pay for) a one-year increase in life expectancy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
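The incremental cost-effectiveness calculation the abstract describes has this simple shape; the numbers below are illustrative, not the Turkish estimates:

```python
# Cost per life-year gained = (extra pharmaceutical spending minus
# offsetting hospital savings) divided by the life-years gained from
# the new drugs.

def cost_per_life_year(extra_drug_cost, hospital_savings, life_years_gained):
    return (extra_drug_cost - hospital_savings) / life_years_gained

print(cost_per_life_year(10000.0, 1000.0, 3.0))  # → 3000.0
```

Netting out the hospital-utilization savings is what can pull the ratio well below the gross drug spending per life-year, as in the paper's $2776 baseline.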
Hispanic valuation of the EQ-5D health states: a social value set for Latin Americans.
Zarate, Victor; Kind, Paul; Chuang, Ling-Hsiang
2008-12-01
Cost-effectiveness analysis has been recommended by national health agencies worldwide. In the United Kingdom, the National Institute of Health and Clinical Excellence supports the use of generic health-related quality of life instruments such as EuroQol EQ-5D when quality-adjusted life-years are used to measure health benefits. Despite the urgent need for appropriate methodologies to improve the use of scarce resources in Latin American countries, little is known about how health is valued. A national population survey was conducted in the United States in 2002, based on a sample of 1603 non-Hispanic nonblacks and 1115 Hispanics. Participants provided time trade-off utilities for a subset of 42 EQ-5D health states. Hispanic respondents were grouped according to their language preferences (Spanish or English). Mean utilities were compared for each health state. A random-effects model was used to determine whether real population differences exist after adjusting for sociodemographic characteristics. A population value set for all 243 EQ-5D health states was developed using only the data from Spanish-speaking Hispanics. Mean valuations differed slightly between non-Hispanic nonblacks and English-speaking Hispanics. Spanish-speaking Hispanics, however, tended to give higher valuations than non-Hispanic nonblacks (P < 0.05) corresponding to an average of 0.034 point. A regression model was developed for Spanish-speaking Hispanics with a mean absolute error of 0.031. Values estimated using this model show marked differences when compared with corresponding values estimated using the UK (N3) and US (D1) models. The availability of a Hispanic model for EQ-5D valuations represents a significant new option for decision-makers, providing a set of social preference weights for use in Latin American countries that presently lack their own domestic value set.
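Value-set models such as the UK (N3) and US (D1) models mentioned above share an additive-decrement structure: start from full health (1.0) and subtract a constant plus per-dimension decrements, with an extra term when any dimension is at the worst level. The constant, decrements, and N3 term below are placeholders for illustration, not the published Hispanic, UK, or US coefficients:

```python
# Placeholder coefficients -- NOT a published EQ-5D value set.
DECREMENTS = {  # per dimension: (level-2 decrement, level-3 decrement)
    "mobility":   (0.05, 0.15),
    "self_care":  (0.04, 0.12),
    "activities": (0.03, 0.10),
    "pain":       (0.06, 0.18),
    "anxiety":    (0.05, 0.14),
}
CONSTANT = 0.08   # applied once if any dimension is above level 1
N3_TERM = 0.10    # extra decrement if any dimension is at level 3

def eq5d_utility(state):
    # state: 5 characters in {1,2,3}, e.g. "21232"
    levels = [int(c) for c in state]
    u = 1.0
    if any(l > 1 for l in levels):
        u -= CONSTANT
    for (d2, d3), lvl in zip(DECREMENTS.values(), levels):
        if lvl == 2:
            u -= d2
        elif lvl == 3:
            u -= d3
    if any(l == 3 for l in levels):
        u -= N3_TERM
    return round(u, 3)
```

Two value sets differ only in these coefficients, which is how the Hispanic model can produce systematically different utilities for the same 243 health states.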
Kang, Hee-Jin; Kang, Eunjeong; Jo, Min-Woo; Park, Eun-Ja; Yoon, Seonyoung; Lee, Eui-Kyung
2014-07-01
This study aimed to measure utilities, which are quantitative terms incorporating preferences, for various health states of epilepsy with partial seizure in the general population in South Korea. It also aimed to find socio-demographic characteristics associated with the utility scores. Utility scores using Time Trade-Off (TTO), Visual Analog Scale (VAS), and EuroQol five Dimension (EQ-5D) were obtained from 300 people aged 16 and over by face-to-face interviews. We measured utilities for three hypothetical health states of epilepsy for which scenarios were defined based on the frequency of partial seizure: seizure-free, seizure reduction, and withdrawal. We compared utilities with varying seizure frequency using a repeated-measures ANOVA, and analyzed the association between utilities and socio-demographic characteristics using a generalized estimating equation (GEE). The mean utility scores for withdrawal state, seizure reduction state, and seizure-free state were 0.303, 0.493, and 0.899, respectively, when measured by TTO. VAS yielded the mean utility scores of 0.211, 0.424, and 0.752 for respective health states, and corresponding scores with EQ-5D were 0.261, 0.645, and 0.959. The utility scores for the three health states were statistically different in TTO, VAS, and EQ-5D. The withdrawal state had the lowest utility scores. There were differences in mean utilities for the three health states across the three methods. Utilities by EQ-5D tended to have higher values than those by TTO and VAS. Utilities by VAS had the lowest values. In GEE analysis, the severity of epilepsy and household income were significantly related to utility scores. The withdrawal state of epilepsy had the lowest utility value and the seizure-free state had the highest by all three techniques of utility measurement used. There were significant differences in utilities between one severity level of epilepsy and another. 
Utility was associated with household income and the severity of disease. Utility scores for distinct epilepsy states obtained in this study could facilitate health economic analyses of epilepsy treatments and thus help decision making in resource allocation. Copyright © 2014 Elsevier B.V. All rights reserved.
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).
Estimating missing daily temperature extremes in Jaffna, Sri Lanka
NASA Astrophysics Data System (ADS)
Thevakaran, A.; Sonnadara, D. U. J.
2018-04-01
The accuracy of reconstructing missing daily temperature extremes in the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature than for daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values; for daily minimum temperature, the percentage is about 92%. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
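The standard-departure method described above can be sketched as follows: convert each neighbouring station's same-day observation to a z-score against its own climatology, average the z-scores, and rescale with the target station's climatology. The station climatologies and observations below are invented for the example:

```python
import numpy as np

def estimate_missing_temperature(target_clim, neighbor_obs, neighbor_clims):
    # target_clim and each entry of neighbor_clims: (mean, std) of the
    # station's temperature record for that calendar day/month.
    z = [(x - m) / s for x, (m, s) in zip(neighbor_obs, neighbor_clims)]
    return target_clim[0] + target_clim[1] * float(np.mean(z))

# Jaffna estimated from four neighbours (Mannar, Anuradhapura, Puttalam,
# Trincomalee) -- the climatologies here are made up for illustration.
jaffna = (30.0, 2.0)
clims = [(31.0, 1.5), (32.0, 2.0), (31.5, 1.8), (30.5, 1.6)]
obs = [32.5, 34.0, 33.3, 32.1]
print(estimate_missing_temperature(jaffna, obs, clims))  # → 32.0
```

In the example every neighbour sits exactly one standard deviation above its own mean, so the reconstruction places Jaffna one standard deviation above its mean as well.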
Optimal heavy tail estimation - Part 1: Order selection
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred; Bermejo, Miguel A.
2017-12-01
The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
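The estimation step whose order the paper selects can be sketched with the classical Hill estimator of the characteristic exponent; here the order k is simply fixed by hand, whereas choosing k data-adaptively is the paper's contribution:

```python
import numpy as np

def hill_estimator(x, k):
    # Hill estimator of the tail index alpha from the k largest values.
    xs = np.sort(np.asarray(x, dtype=float))
    tail = xs[-k:]                 # the k largest order statistics
    threshold = xs[-k - 1]         # the (k+1)-th largest value
    return k / float(np.sum(np.log(tail / threshold)))

# Pareto-distributed data with true characteristic exponent alpha = 1.5,
# matching the heavy-tail regime of the River Elbe example (1 < alpha < 2:
# finite mean, infinite variance).
rng = np.random.default_rng(42)
x = rng.pareto(1.5, 20000) + 1.0
alpha_hat = hill_estimator(x, k=500)
```

Too small a k wastes tail information and inflates variance; too large a k includes non-tail data and biases the estimate, which is exactly the trade-off the order selector addresses.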
Dranitsaris, George; Truter, Ilse; Lubbe, Martie S; Sriramanakoppa, Nitin N; Mendonca, Vivian M; Mahagaonkar, Sangameshwar B
2011-10-01
Decision analysis (DA) is commonly used to perform economic evaluations of new pharmaceuticals. Using multiples of Malaysia's per capita 2010 gross domestic product (GDP) as the threshold for economic value as suggested by the World Health Organization (WHO), DA was used to estimate a price per dose for bevacizumab, a drug that provides a 1.4-month survival benefit in patients with metastatic colorectal cancer (mCRC). A decision model was developed to simulate progression-free and overall survival in mCRC patients receiving chemotherapy with and without bevacizumab. Costs for chemotherapy and management of side effects were obtained from public and private hospitals in Malaysia. Utility estimates, measured as quality-adjusted life years (QALYs), were determined by interviewing 24 oncology nurses using the time trade-off technique. The price per dose was then estimated using a target threshold of US$44 400 per QALY gained, which is 3 times the Malaysian per capita GDP. A cost-effective price for bevacizumab could not be determined because the survival benefit provided was insufficient According to the WHO criteria, if the drug was able to improve survival from 1.4 to 3 or 6 months, the price per dose would be $567 and $1258, respectively. The use of decision modelling for estimating drug pricing is a powerful technique to ensure value for money. Such information is of value to drug manufacturers and formulary committees because it facilitates negotiations for value-based pricing in a given jurisdiction.
Composite Multilinearity, Epistemic Uncertainty and Risk Achievement Worth
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Borgonovo; C. L. Smith
2012-10-01
Risk Achievement Worth is one of the most widely utilized importance measures. RAW is defined as the ratio of the risk metric value attained when a component has failed over the base case value of the risk metric. Traditionally, both the numerator and denominator are point estimates. Relevant literature has shown that inclusion of epistemic uncertainty i) induces notable variability in the point estimate ranking and ii) causes the expected value of the risk metric to differ from its nominal value. We obtain the conditions under which the equality holds between the nominal and expected values of a reliability risk metric. Among these conditions, separability and state-of-knowledge independence emerge. We then study how the presence of epistemic uncertainty affects RAW and the associated ranking. We propose an extension of RAW (called ERAW) which allows one to obtain a ranking robust to epistemic uncertainty. We discuss the properties of ERAW and the conditions under which it coincides with RAW. We apply our findings to a probabilistic risk assessment model developed for the safety analysis of NASA lunar space missions.
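A two-component toy system makes the point-estimate RAW definition concrete. The component failure probabilities are invented for illustration.

```python
# Minimal illustration of Risk Achievement Worth (RAW) for a two-component
# parallel (redundant) system.

def system_unavailability(p_a, p_b):
    # Parallel redundancy: the system fails only if both components fail.
    return p_a * p_b

base = system_unavailability(0.01, 0.02)
# RAW for component A: assume A has failed (probability 1) and take the
# ratio of the resulting risk metric to the base case value.
raw_a = system_unavailability(1.0, 0.02) / base
```

Here `raw_a` is 100: the risk metric rises a hundredfold when component A is assumed failed, which is the "achievement worth" the measure quantifies.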
Avulova, Svetlana; Allen, Clayton; Morgans, Alicia; Moses, Kelvin A
2018-05-10
Risk of recurrent disease for men with clinical stage 1 high-risk nonseminomatous germ cell testicular cancer (CS1 NSGCT) with lymphovascular invasion (LVI) after orchiectomy is 50%, and the current treatment options (surveillance [S], retroperitoneal lymph node dissection [RPLND], or 1 cycle of BEP [BEP ×1]) are all associated with a 99% disease-specific survival; practice patterns therefore vary. We performed a decision analysis using updated data on long-term complications for men with CS1 NSGCT with LVI to quantify and assess relative treatment values. The decision analysis included previously defined utilities (via standard gamble) for posttreatment states of living from 0 (death from disease) to 1 (alive in perfect health) and updated morbidity probabilities. We quantified the values of S, RPLND, and BEP ×1 via the rollback method. Sensitivity analyses including a range of orchiectomy cure rates and utility values were performed. Estimated probabilities favoring treatment with RPLND (0.97) or BEP ×1 (0.97) were equivalent and superior to surveillance (0.88). Sensitivity analysis of orchiectomy cure rates (50%-100%) failed to find a cure rate that favored S over BEP ×1 or RPLND. Varying utility values for cure after S from 0.92 (previously defined utility) to 1 (perfect health) failed to find a viable utility state favoring S over BEP ×1 or RPLND. An orchiectomy cure rate of ≥82% would be required for S to equal treatment of either type. We demonstrate that for surveillance to be superior to treatment with BEP ×1 or RPLND, the orchiectomy cure rate must be at least 82%, which is not expected in a patient population with high-risk CS1 NSGCT. Copyright © 2018 Elsevier Inc. All rights reserved.
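The rollback computation used above is probability-weighted utility averaging at each chance node. A sketch using the 50% relapse risk quoted in the abstract together with an invented post-relapse utility of 0.76 (not the study's actual input), chosen so the rolled-back value lands near the reported surveillance value:

```python
# Rollback of a single chance node in a decision tree: the strategy's value
# is the probability-weighted utility of its outcome branches.

def rollback(branches):
    """branches: list of (probability, utility) pairs for one strategy."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * u for p, u in branches)

surveillance = rollback([(0.50, 1.00),    # cured by orchiectomy alone
                         (0.50, 0.76)])   # relapse, salvage treatment (invented utility)
```

A full analysis rolls back nested chance nodes from the leaves toward the root before comparing strategies at the decision node.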
Norton, Giulia; McDonough, Christine M; Cabral, Howard; Shwartz, Michael; Burgess, James F
2015-05-15
Markov cost-utility model. To evaluate the cost-utility of cognitive behavioral therapy (CBT) for the treatment of persistent nonspecific low back pain (LBP) from the perspective of US commercial payers. CBT is widely deemed clinically effective for LBP treatment. The evidence is suggestive of cost-effectiveness. We constructed and validated a Markov intention-to-treat model to estimate the cost-utility of CBT, with 1-year and 10-year time horizons. We applied likelihood of improvement and utilities from a randomized controlled trial assessing CBT to treat LBP. The trial randomized subjects to treatment but subjects freely sought health care services. We derived the cost of equivalent rates and types of services from US commercial claims for LBP for a similar population. For the 10-year estimates, we derived recurrence rates from the literature. The base case included medical and pharmaceutical services and assumed gradual loss of skill in applying CBT techniques. Sensitivity analyses assessed the distribution of service utilization, utility values, and rate of LBP recurrence. We compared health plan designs. Results are based on 5000 iterations of each model and expressed as an incremental cost per quality-adjusted life-year. The incremental cost-utility of CBT was $7197 per quality-adjusted life-year in the first year and $5855 per quality-adjusted life-year over 10 years. The results are robust across numerous sensitivity analyses. No change of parameter estimate resulted in a difference of more than 7% from the base case for either time horizon. Including chiropractic and/or acupuncture care did not substantively affect cost-effectiveness. The model with medical but no pharmaceutical costs was more cost-effective ($5238 for 1 yr and $3849 for 10 yr). CBT is a cost-effective approach to manage chronic LBP among commercial health plan members. Cost-effectiveness is demonstrated for multiple plan designs.
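The incremental cost-utility ratio (ICER) behind figures like those above is a simple quotient. The numbers here are invented, chosen only to land near the reported first-year figure of $7197 per QALY; they are not the model's inputs.

```python
# Incremental cost per quality-adjusted life-year gained.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

value = icer(cost_new=3600.0, qaly_new=0.75,
             cost_old=2880.0, qaly_old=0.65)
# 720 extra dollars buy 0.10 extra QALY -> $7200 per QALY gained.
```

In a probabilistic model like the one above, this quotient is computed from costs and QALYs averaged over the 5000 Monte Carlo iterations.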
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monsabert, S. de; Lemmer, H.; Dinwiddie, D.
1995-10-01
In the past, most buildings, structures, and ship visits were not metered, and flat estimates were calculated based on various estimating techniques. The decomposition process was further complicated by the fact that many of the meters monitor consumption values only and do not provide demand or time-of-use data. This method of billing provides no incentives to the PWC customers to implement energy conservation programs, including load shedding, Energy Monitoring and Control Systems (EMCS), building shell improvements, low-flow toilets and shower heads, efficient lighting systems, or other energy-saving alternatives. Similarly, the method had no means of adjustment for seasonal or climatic variations outside of the norm. As an alternative to flat estimates, the Customized Utility Billing Integrated Control (CUBIC) system and the Graphical Data Input System (GDIS) were developed to better allocate utility billings to the major claimant area users based on utility usage factors, building size, weather data, and hours of operation. GDIS is a graphical database that assists PWC engineers in the development and maintenance of single-line utility diagrams of the facilities and meters. It functions as a drawing associate system and is written in AutoLISP for AutoCAD version 12. GDIS interprets the drawings and provides the facility-to-meter and meter-to-meter hierarchy data that are used by CUBIC to allocate the billings. This paper reviews the design, development and implementation aspects of CUBIC/GDIS and discusses the benefits of this improved utilities management system.
Da, Yang
2015-12-18
The amount of functional genomic information has been growing rapidly but remains largely unused in genomic selection. Genomic prediction and estimation using haplotypes in genome regions with functional elements such as all genes of the genome can be an approach to integrate functional and structural genomic information for genomic selection. Towards this goal, this article develops a new haplotype approach for genomic prediction and estimation. A multi-allelic haplotype model treating each haplotype as an 'allele' was developed for genomic prediction and estimation based on the partition of a multi-allelic genotypic value into additive and dominance values. Each additive value is expressed as a function of h - 1 additive effects, where h = number of alleles or haplotypes, and each dominance value is expressed as a function of h(h - 1)/2 dominance effects. For a sample of q individuals, the limit number of effects is 2q - 1 for additive effects and is the number of heterozygous genotypes for dominance effects. Additive values are factorized as a product between the additive model matrix and the h - 1 additive effects, and dominance values are factorized as a product between the dominance model matrix and the h(h - 1)/2 dominance effects. Genomic additive relationship matrix is defined as a function of the haplotype model matrix for additive effects, and genomic dominance relationship matrix is defined as a function of the haplotype model matrix for dominance effects. Based on these results, a mixed model implementation for genomic prediction and variance component estimation that jointly use haplotypes and single markers is established, including two computing strategies for genomic prediction and variance component estimation with identical results. 
The multi-allelic genetic partition fills a theoretical gap in genetic partition by providing general formulations for partitioning multi-allelic genotypic values and provides a haplotype method based on the quantitative genetics model towards the utilization of functional and structural genomic information for genomic prediction and estimation.
NASA Astrophysics Data System (ADS)
Ghosh, S. M.; Behera, M. D.
2017-12-01
Forest aboveground biomass (AGB) is an important factor in the preparation of global policy decisions to tackle the impact of climate change. Several previous studies have concluded that remote sensing methods are more suitable for estimating forest biomass on a regional scale. Among the available remote sensing data and methods, Synthetic Aperture Radar (SAR) data in combination with decision tree based machine learning algorithms have shown better promise in estimating higher biomass values. Few studies have addressed biomass estimation for dense Indian tropical forests with high biomass density. In this study, aboveground biomass was estimated for two major tree species, Sal (Shorea robusta) and Teak (Tectona grandis), of Katerniaghat Wildlife Sanctuary, a tropical forest situated in northern India. Biomass was estimated by combining C-band SAR data from the Sentinel-1A satellite, vegetation indices produced using Sentinel-2A data, and ground inventory plots. Along with SAR backscatter values, SAR texture images were also used as input, as earlier studies had found that image texture correlates with vegetation biomass. Decision tree based nonlinear machine learning algorithms were used in place of parametric regression models for establishing the relationship between field-measured values and remotely sensed parameters. A random forest model using vegetation indices combined with SAR backscatter as predictor variables gave the best result for the Sal forest, with a coefficient of determination of 0.71 and an RMSE of 105.027 t/ha. For the Teak forest, the same predictor combination also performed best, but with a stochastic gradient boosting model, yielding a coefficient of determination of 0.6 and an RMSE of 79.45 t/ha. These results are mostly better than those of other studies of similar kinds of forests.
This study shows that Sentinel-series satellite data have strong capabilities in estimating dense-forest AGB, and that machine learning algorithms are better suited to this task than parametric regression models.
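The modelling setup above can be illustrated with a random forest regressor mapping SAR backscatter, an image-texture measure, and a vegetation index to plot biomass. The synthetic data below merely stand in for the Sentinel-1/2 features and field plots; feature ranges and the biomass relationship are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 200
backscatter = rng.uniform(-15, -5, n)   # sigma0 in dB (synthetic)
texture = rng.uniform(0, 1, n)          # GLCM-style texture measure (synthetic)
ndvi = rng.uniform(0.3, 0.9, n)         # vegetation index (synthetic)
# Synthetic biomass (t/ha) with a nonlinear dependence plus noise:
agb = 300 * ndvi + 10 * backscatter + 50 * texture ** 2 + rng.normal(0, 20, n)

X = np.column_stack([backscatter, texture, ndvi])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, agb)
r2_train = model.score(X, agb)   # in-sample R^2 only; cross-validation would be lower
```

A study like the one above would report cross-validated R² and RMSE against held-out field plots rather than the optimistic in-sample fit computed here.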
Carosi, Andrea
2017-02-01
This data article provides cross-sectionals on the local values of the coefficients of ROE, R&D-TO-SALES, and TOTAL ASSET as regressors of the MARKET-TO-BOOK ratio and is related to the research article entitled "Do Local Causations Matter? The Effect of Firm Location on the Relations of ROE, R&D, and Firm Size with Market-to-Book" (A. Carosi, 2016) [1]. The data are aggregated at the regional level (NUTS2). The reported data are the regional average values of the coefficients of ROE, R&D-TO-SALES, and LN(TOTAL ASSET) on LN(MARKET-TO-BOOK), estimated upon the Italian non-financial listed firms in 1999-2007. Local coefficient estimates for family firms and utilities are also provided.
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, the sensor data is transmitted only when the absolute value of difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
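The send-on-delta rule described above is easy to state in code: a sample is transmitted only when it deviates from the last transmitted value by more than the threshold. The sample sequence and threshold below are invented.

```python
def send_on_delta(samples, delta):
    """Return the (index, value) pairs that would actually be transmitted."""
    sent = [(0, samples[0])]            # first sample is always transmitted
    last = samples[0]
    for i, v in enumerate(samples[1:], start=1):
        if abs(v - last) > delta:       # deviation from last *transmitted* value
            sent.append((i, v))
            last = v
    return sent

tx = send_on_delta([0.0, 0.05, 0.2, 0.25, 0.9, 0.85], delta=0.1)
# Only samples 0, 2, and 4 cross the threshold relative to the last transmission.
```

The resource saving comes from the suppressed samples; the fault isolation filter must then cope with the irregularly timed measurements that result.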
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model which correlates the received and determined utility parameter values with the corresponding weather parameter values.
Grid-connected distributed solar power systems
NASA Astrophysics Data System (ADS)
Moyle, R.; Chernoff, H.; Schweizer, T.
This paper discusses some important, though often ignored, technical and economic issues of distributed solar power systems. Protection of the utility system and nonsolar customers requires suitable interface equipment. Purchase criteria must mirror reality: most analyses use life-cycle costing with low discount rates, whereas most buyers use short payback periods. Distributing, installing, and marketing small, distributed solar systems is more costly than most analyses estimate. Results show that certain local conditions and uncommon purchase considerations can combine to make small, distributed solar power attractive, but lower interconnect costs (per kW), lower marketing and product-distribution costs, and more favorable purchase criteria make large, centralized solar energy more attractive. Specifically, the value of dispersed solar systems to investors and utilities can be higher than $2000/kW. However, typical residential owners place a value of well under $1000 on the installed system.
Accuracy of Transcutaneous CO2 Values Compared With Arterial and Capillary Blood Gases.
Lambert, Laura L; Baldwin, Melissa B; Gonzalez, Cruz Velasco; Lowe, Gary R; Willis, J Randy
2018-05-08
Transcutaneous monitors are utilized to monitor a patient's respiratory status. Some patients have similar values when comparing transcutaneous carbon dioxide (PtcCO2) values with blood gas analysis, whereas others show extreme variability. A retrospective review of data was performed to determine how accurately PtcCO2 correlated with CO2 values obtained by arterial blood gas (ABG) or capillary blood gas. To determine whether PtcCO2 values correlated with ABG or capillary blood gas values, subjects' records were retrospectively reviewed. Data collected included the PtcCO2 value at the time of blood gas procurement and the ABG or capillary blood gas PCO2 value. Agreement of pairs of methods (ABG vs PtcCO2 and capillary blood gas vs PtcCO2) was assessed with the Bland-Altman approach, with limits of agreement estimated with a mixed model to account for serial measurements per subject. A total of 912 pairs of ABG/PtcCO2 values on 54 subjects and 307 pairs of capillary blood gas/PtcCO2 values on 34 subjects were analyzed. The PCO2 range for ABG was 24-106 mm Hg, and PtcCO2 values were 27-133 mm Hg. The PCO2 range for capillary blood gas was 29-108 mm Hg, and PtcCO2 values were 30-103 mm Hg. For ABG/PtcCO2 comparisons, the Pearson correlation coefficient was 0.82, 95% CI was 0.80-0.84, and P was <.001. For capillary blood gas/PtcCO2 comparisons, the Pearson correlation coefficient was 0.77, 95% CI was 0.72-0.81, and P was <.001. For ABG/PtcCO2, the estimated difference ± SD was -6.79 ± 7.62 mm Hg, and limits of agreement were -22.03 to 8.45. For capillary blood gas/PtcCO2, the estimated difference ± SD was -1.61 ± 7.64 mm Hg, and limits of agreement were -16.88 to 13.66. The repeatability coefficient was about 30 mm Hg. Based on these data, capillary blood gas comparisons showed less variation and a slightly lower correlation with PtcCO2 than did ABG comparisons.
After accounting for serial measurements per patient, due to the wide limits of agreement and poor repeatability, the utility of relying on P tcCO 2 readings for this purpose is questionable. Copyright © 2018 by Daedalus Enterprises.
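Bland-Altman limits of agreement, as used in the comparison above, are computed here from a small invented set of paired measurements. This is the simple version with one measurement per subject; the study used a mixed model to account for serial measurements, which widens the limits.

```python
import statistics

def limits_of_agreement(method_a, method_b):
    """Mean difference (bias) and the 95% Bland-Altman limits of agreement."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

abg = [40.0, 55.0, 62.0, 48.0, 70.0]   # invented paired PCO2 values, mm Hg
ptc = [46.0, 60.0, 70.0, 55.0, 79.0]
bias, (lo, hi) = limits_of_agreement(abg, ptc)
```

The interval (lo, hi) is expected to contain about 95% of the differences between the two methods; clinical acceptability depends on whether that interval is narrow enough for the intended use.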
Estimating the Aqueous Solubility of Pharmaceutical Hydrates.
Franklin, Stephen J; Younis, Usir S; Myrdal, Paul B
2016-06-01
Estimation of crystalline solute solubility is well documented throughout the literature. However, the anhydrous crystal form is typically considered with these models, which is not always the most stable crystal form in water. In this study, an equation which predicts the aqueous solubility of a hydrate is presented. This research attempts to extend the utility of the ideal solubility equation by incorporating desolvation energetics of the hydrated crystal. Similar to the ideal solubility equation, which accounts for the energetics of melting, this model approximates the energy of dehydration to the entropy of vaporization for water. Aqueous solubilities, dehydration and melting temperatures, and log P values were collected experimentally and from the literature. The data set includes different hydrate types and a range of log P values. Three models are evaluated, the most accurate model approximates the entropy of dehydration (ΔSd) by the entropy of vaporization (ΔSvap) for water, and utilizes onset dehydration and melting temperatures in combination with log P. With this model, the average absolute error for the prediction of solubility of 14 compounds was 0.32 log units. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Lamers, Leida M; Uyl-de Groot, Carin A; Buijt, Ivonne
2007-01-01
The Functional Assessment of Cancer Therapy-Lung (FACT-L) is a validated, sensitive and reliable patient questionnaire that evaluates and quantifies quality of life (QOL) across several domains, including lung cancer-related symptoms. The FACT-L was not designed for use in economic evaluation and does not incorporate preferences into its scoring system. To derive a set of Dutch preference weights for FACT-L health states that can be used to convert FACT-L into a single value that can be used in cost-utility analyses. A representative sample of the Dutch population (n = 1076) directly valued an orthogonal set of eight FACT-L health states on a 100-point rating scale with the anchor points 'worst imaginable health state' and 'best imaginable health state'. Eleven FACT-L items were selected to describe the FACT-L health states that were directly valued. Regression analysis was used to interpolate values for all other possible health states. Scores were transformed into values on a scale where 0 indicated dead and 1 indicated full health. The estimated values for FACT-L health states ranged from 0.08 to 0.93. The estimated value sets were applied to FACT-L data of lung cancer patients participating in a clinical study. Significant differences in the mean value and mean gain of 0.12 and 0.07, respectively, were found between patients in remission and patients with progressive disease at 4 weeks' follow-up. Our results reaffirmed that the methodology used here is a feasible option to convert data collected with a disease-specific outcome measure into preferences. We concluded that the sensitivity of the derived set of societal preferences to capture differences and changes in clinical health states is an indication of its construct validity.
Health State Utilities Associated with Glucose Monitoring Devices.
Matza, Louis S; Stewart, Katie D; Davies, Evan W; Hellmund, Richard; Polonsky, William H; Kerr, David
2017-03-01
Glucose monitoring is important for patients with diabetes treated with insulin. Conventional glucose monitoring requires a blood sample, typically obtained by pricking the finger. A new sensor-based system called "flash glucose monitoring" monitors glucose levels with a sensor worn on the arm, without requiring blood samples. To estimate the utility difference between these two glucose monitoring approaches for use in cost-utility models. In time trade-off interviews, general population participants in the United Kingdom (London and Edinburgh) valued health states that were drafted and refined on the basis of literature, clinician input, and a pilot study. The health states had identical descriptions of diabetes and insulin treatment, differing only in glucose monitoring approach. A total of 209 participants completed the interviews (51.7% women; mean age = 42.1 years). Mean utilities were 0.851 ± 0.140 for conventional monitoring and 0.882 ± 0.121 for flash monitoring (significant difference between the mean utilities; t = 8.3; P < 0.0001). Of the 209 participants, 78 (37.3%) had a higher utility for flash monitoring, 2 (1.0%) had a higher utility for conventional monitoring, and 129 (61.7%) had the same utility for both health states. The flash glucose monitoring system was associated with a significantly greater utility than the conventional monitoring system. This difference may be useful in cost-utility models comparing the value of glucose monitoring devices for patients with diabetes. This study adds to the literature on treatment process utilities, suggesting that time trade-off methods may be used to quantify preferences among medical devices. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
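Time trade-off scoring, as used in the interviews above, reduces to a ratio: if a respondent is indifferent between t years in the health state and x years in full health, the state's utility is x/t. The indifference points below are invented, chosen only to land near the reported means.

```python
def tto_utility(years_full_health, years_in_state):
    """Utility from a time trade-off indifference point."""
    return years_full_health / years_in_state

u_conventional = tto_utility(8.5, 10.0)   # near the reported 0.851
u_flash = tto_utility(8.8, 10.0)          # near the reported 0.882
```

The reported difference of about 0.03 between the two monitoring approaches is the quantity a cost-utility model would multiply by time spent in the state.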
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fetterly, K; Favazza, C
2015-06-15
Purpose: Mathematical model observers provide a figure of merit that simultaneously considers a test object and the contrast, noise, and spatial resolution properties of an imaging system. The purpose of this work was to investigate the utility of a channelized Hotelling model observer (CHO) to assess system performance over a large range of angiographic exposure conditions. Methods: A 4 mm diameter disk-shaped, iodine contrast test object was placed on a 20 cm thick Lucite phantom and 1204 image frames were acquired using fixed x-ray beam quality and for several detector target dose (DTD) values in the range 6 to 240 nGy. The CHO was implemented in the spatial domain utilizing 96 Gabor functions as channels. Detectability index (DI) estimates were calculated using the "resubstitution" and "holdout" methods to train the CHO. Also, DI values calculated using discrete subsets of the data were used to estimate a minimally biased DI as might be expected from an infinitely large dataset. The relationship between DI, independently measured CNR, and changes in results expected assuming a quantum limited detector were assessed over the DTD range. Results: CNR measurements demonstrated that the angiography system is not quantum limited due to relatively increasing contamination from electronic noise that reduces CNR for low DTD. Direct comparison of DI versus CNR indicates that the CHO relatively overestimates DI for low DTD and/or underestimates DI values for high DTD. The relative magnitude of the apparent bias error in the DI values was ∼20% over the 40x DTD range investigated. Conclusion: For the angiography system investigated, the CHO can provide a minimally biased figure of merit if implemented over a restricted exposure range. However, bias leads to overestimates of DI for low exposures. This work emphasizes the need to verify CHO model performance during real-world application.
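In channel space, the CHO detectability index reduces to a Hotelling statistic on the channel outputs. A toy sketch with surrogate channel data; a real implementation would first project each angiographic frame onto the 96 Gabor channels, and the "signal" here is an artificial uniform shift rather than a channelized disk response.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_channels = 600, 10
# Surrogate channel outputs for signal-absent frames:
signal_absent = rng.normal(0.0, 1.0, (n_frames, n_channels))
# Toy signal-present outputs: the same noise plus a uniform channel-mean shift.
signal_present = signal_absent + 0.4

# Hotelling detectability in channel space: d^2 = dv' S^{-1} dv,
# with dv the mean-difference vector and S the pooled channel covariance.
delta = signal_present.mean(axis=0) - signal_absent.mean(axis=0)
cov = 0.5 * (np.cov(signal_present.T) + np.cov(signal_absent.T))
d_index = float(np.sqrt(delta @ np.linalg.solve(cov, delta)))
```

The resubstitution/holdout distinction in the abstract concerns whether `delta` and `cov` are estimated from the same frames used to score the observer; estimating both from one finite dataset is what introduces the bias the paper quantifies.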
Estimating ice-affected streamflow by extended Kalman filtering
Holtschlag, D.J.; Grewal, M.S.
1998-01-01
An extended Kalman filter was developed to automate the real-time estimation of ice-affected streamflow on the basis of routine measurements of stream stage and air temperature and on the relation between stage and streamflow during open-water (ice-free) conditions. The filter accommodates three dynamic modes of ice effects: sudden formation/ablation, stable ice conditions, and eventual elimination. The utility of the filter was evaluated by applying it to historical data from two long-term streamflow-gauging stations, St. John River at Dickey, Maine and Platte River at North Bend, Nebr. Results indicate that the filter was stable and that parameters converged for both stations, producing streamflow estimates that are highly correlated with published values. For the Maine station, logarithms of estimated streamflows are within 8% of the logarithms of published values 87.2% of the time during periods of ice effects and within 15% 96.6% of the time. Similarly, for the Nebraska station, logarithms of estimated streamflows are within 8% of the logarithms of published values 90.7% of the time and within 15% 97.7% of the time. In addition, the correlation between temporal updates and published streamflows on days of direct measurements at the Maine station was 0.777 and 0.998 for ice-affected and open-water periods, respectively; for the Nebraska station, corresponding correlations were 0.864 and 0.997.
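The core of any such filter is the scalar predict/update recursion sketched below. This is the plain linear case with a random-walk state model and invented values; the paper's extended filter adds nonlinear stage-discharge relations and the three ice-effect modes described above.

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle: random-walk state, direct observation.

    x, p : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances
    """
    x_pred, p_pred = x, p + q            # predict (identity dynamics)
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (z - x_pred)    # update with the measurement
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 100.0, 25.0                       # initial streamflow estimate and variance
for z in [102.0, 101.0, 99.0]:           # invented daily measurements
    x, p = kalman_step(x, p, z, q=1.0, r=4.0)
```

Each update shrinks the estimate's variance `p`, which is how the filter's published-value correlations improve as direct measurements accumulate.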
Chuback, Jennifer; Yarascavitch, Blake; Yarascavitch, Alec; Kaur, Manraj Nirmal; Martin, Stuart; Thoma, Achilleas
2015-11-01
In an otherwise healthy patient with severe facial disfigurement secondary to burns, composite tissue allotransplantation (CTA) results in life-long immunosuppressive therapy and its associated risk. In this study, we assess the net gain of CTA of the face (in terms of utilities) from the perspectives of the patient, the general public, and medical experts, in comparison to the risks. Using the standard gamble (SG) and time trade-off (TTO) techniques, utilities were obtained from members of the general public, patients with facial burns, and medical experts (n=25 for each group). The gain (or loss) in utility and quality-adjusted life-years (QALYs) was estimated using face-to-face interviews. A sensitivity analysis using variable life expectancy was conducted. From the patient perspective, severe facial burn was associated with a health utility value of 0.53 and 27.1 QALYs as calculated by SG, and a health utility value of 0.57 and 28.9 QALYs as calculated by TTO. In comparison, CTA of the face was associated with a health utility value of 0.64 and 32.3 QALYs (or 18.2 QALYs per sensitivity analysis) as calculated by SG, and a health utility value of 0.67 and 34.1 QALYs (or 19.2 QALYs per sensitivity analysis) as calculated by TTO. However, a loss of 8.9 QALYs (by the SG method) to 9.5 QALYs (by the TTO method) was observed when the life expectancy was decreased in the sensitivity analysis. Similar results were obtained from the general public and medical expert perspectives. We found that severe facial disfigurement is associated with a significant reduction in health-related quality of life, and CTA has the potential to improve this. Further, we found that a trade-off exists between life expectancy and gain in QALYs, i.e. if life expectancy following CTA of the face is reduced, the gain in QALYs is also diminished. This trade-off needs to be validated in future studies. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
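The QALY figures above follow from multiplying a state's utility by the years expected to be lived in it. The life expectancy below is a notional value chosen to reproduce the reported figure from the abstract's 0.53 utility; it is not stated in the source.

```python
def qalys(utility, life_expectancy_years):
    """Quality-adjusted life-years: utility weight times years in the state."""
    return utility * life_expectancy_years

burn_qalys = qalys(0.53, 51.2)   # ~27.1 QALYs, matching the SG estimate
```

The sensitivity analysis in the study amounts to re-running this product with a shortened life expectancy after CTA, which is why the QALY gain shrinks when post-transplant survival is assumed to be reduced.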
Single-Step BLUP with Varying Genotyping Effort in Open-Pollinated Picea glauca.
Ratcliffe, Blaise; El-Dien, Omnia Gamal; Cappa, Eduardo P; Porth, Ilga; Klápště, Jaroslav; Chen, Charles; El-Kassaby, Yousry A
2017-03-10
Maximization of genetic gain in forest tree breeding programs is contingent on the accuracy of the predicted breeding values and precision of the estimated genetic parameters. We investigated the effect of the combined use of contemporary pedigree information and genomic relatedness estimates on the accuracy of predicted breeding values and precision of estimated genetic parameters, as well as rankings of selection candidates, using single-step genomic evaluation (HBLUP). In this study, two traits with diverse heritabilities [tree height (HT) and wood density (WD)] were assessed at various levels of family genotyping efforts (0, 25, 50, 75, and 100%) from a population of white spruce (Picea glauca) consisting of 1694 trees from 214 open-pollinated families, representing 43 provenances in Québec, Canada. The results revealed that HBLUP bivariate analysis is effective in reducing the known bias in heritability estimates of open-pollinated populations, as it exposes hidden relatedness, potential pedigree errors, and inbreeding. The addition of genomic information in the analysis considerably improved the accuracy in breeding value estimates by accounting for both Mendelian sampling and historical coancestry that were not captured by the contemporary pedigree alone. Increasing family genotyping efforts were associated with continuous improvement in model fit, precision of genetic parameters, and breeding value accuracy. Yet, improvements were observed even at minimal genotyping effort, indicating that even modest genotyping effort is effective in improving genetic evaluation. The combined utilization of both pedigree and genomic information may be a cost-effective approach to increase the accuracy of breeding values in forest tree breeding programs where shallow pedigrees and large testing populations are the norm. Copyright © 2017 Ratcliffe et al.
Development and Applications of a Stage Stacking Procedure
NASA Technical Reports Server (NTRS)
Kulkarni, Sameer; Celestina, Mark L.; Adamczyk, John J.
2012-01-01
The preliminary design of multistage axial compressors in gas turbine engines is typically accomplished with mean-line methods. These methods, which rely on empirical correlations, estimate compressor performance well near the design point, but may become less reliable off-design. For land-based applications of gas turbine engines, off-design performance estimates are becoming increasingly important, as turbine plant operators desire peaking or load-following capabilities and hot-day operability. The current work develops a one-dimensional stage stacking procedure, including a newly defined blockage term, which is used to estimate the off-design performance and operability range of a 13-stage axial compressor used in a power generating gas turbine engine. The new blockage term is defined to give mathematical closure on static pressure, and values of blockage are shown to collapse to curves as a function of stage inlet flow coefficient and corrected shaft speed. In addition to these blockage curves, the stage stacking procedure utilizes stage characteristics of ideal work coefficient and adiabatic efficiency. These curves are constructed using flow information extracted from computational fluid dynamics (CFD) simulations of groups of stages within the compressor. Performance estimates resulting from the stage stacking procedure are shown to match the results of CFD simulations of the entire compressor to within 1.6% in overall total pressure ratio and within 0.3 points in overall adiabatic efficiency. Utility of the stage stacking procedure is demonstrated by estimation of the minimum corrected speed which allows stable operation of the compressor. Further utility of the stage stacking procedure is demonstrated with a bleed sensitivity study, which estimates a bleed schedule to expand the compressor's operating range.
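At its simplest, stacking chains per-stage conditions front to back: each stage's exit state becomes the next stage's inlet, and the overall pressure ratio is the product of the per-stage ratios. The per-stage values below are invented; the paper's procedure additionally stacks work-coefficient, efficiency, and blockage characteristics that vary with flow coefficient and corrected speed.

```python
# Stage stacking in its most reduced form: multiply per-stage total
# pressure ratios through the machine to get the overall ratio.

stage_pr = [1.35, 1.32, 1.30, 1.28]   # invented per-stage total pressure ratios

overall_pr = 1.0
for pr in stage_pr:                    # front stage to rear stage
    overall_pr *= pr
```

In the full procedure, each stage's pressure ratio is not a constant but is read from its characteristic at the local inlet flow coefficient, which is how off-design mismatching between front and rear stages emerges from the stacking.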
NASA Astrophysics Data System (ADS)
Li, Xingmin; Lu, Ling; Yang, Wenfeng; Cheng, Guodong
2012-07-01
Estimating surface evapotranspiration is extremely important for the study of water resources in arid regions. Data from the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (NOAA/AVHRR), meteorological observations, and data obtained from the Watershed Allied Telemetry Experimental Research (WATER) project in 2008 are applied to the evaporative fraction model to estimate evapotranspiration over the Heihe River Basin. The calculation method for the parameters used in the model and the evapotranspiration estimation results are analyzed and evaluated. The results observed within the oasis and along the banks of the river suggest that more evapotranspiration occurs in the inland river basin in the arid region from May to September. Evapotranspiration values for the oasis, where land surface and vegetation types are highly variable, are relatively small and heterogeneous. In the Gobi desert and other deserts with little vegetation, evapotranspiration remains at its lowest level during this period. These results reinforce the conclusion that rational utilization of water resources in the oasis is essential to manage the water resources in the inland river basin. In the remote sensing-based evapotranspiration model, the accuracy of the parameter estimates directly affects the accuracy of the evapotranspiration results; more accurate parameter values yield more precise values for evapotranspiration. However, when using the evaporative fraction to estimate regional evapotranspiration, good results can be achieved only if the evaporative fraction is constant during the daytime.
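The evaporative-fraction assumption the abstract closes on can be illustrated with a short calculation: if EF is constant over the daytime, an instantaneous EF scales the daily available energy into daily evapotranspiration. The numbers below are invented for illustration:

```python
# Latent heat of vaporization of water, J/kg.
LAMBDA = 2.45e6

def daily_et_mm(evaporative_fraction, rn_minus_g_daily):
    """Daily ET (mm) from evaporative fraction and daytime available
    energy Rn - G in J/m^2/day, assuming EF is constant in the daytime."""
    latent_flux = evaporative_fraction * rn_minus_g_daily  # J/m^2/day
    # 1 kg of water per m^2 equals 1 mm of evaporated depth.
    return latent_flux / LAMBDA

# EF of 0.6 with ~12 MJ/m^2 of daily available energy:
et = daily_et_mm(0.6, 12e6)
```

This is only a sketch of the scaling step; the actual model also requires estimating EF and Rn − G from the AVHRR and meteorological inputs.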
Parents' willingness to pay for biologic treatments in juvenile idiopathic arthritis.
Burnett, Heather F; Ungar, Wendy J; Regier, Dean A; Feldman, Brian M; Miller, Fiona A
2014-12-01
Biologic therapies are considered the standard of care for children with the most severe forms of juvenile idiopathic arthritis (JIA). Inconsistent and inadequate drug coverage, however, prevents many children from receiving timely and equitable access to the best treatment. The objective of this study was to evaluate parents' willingness to pay (WTP) for biologic and nonbiologic disease-modifying antirheumatic drugs (DMARDs) used to treat JIA. Utility weights from a discrete choice experiment were used to estimate the WTP for treatment characteristics including child-reported pain, participation in daily activities, side effects, days missed from school, drug treatment, and cost. Conditional logit regression was used to estimate utilities for each attribute level, and expected compensating variation was used to estimate the WTP. Bootstrapping was used to generate 95% confidence intervals for all WTP estimates. Parents had the highest marginal WTP for improved participation in daily activities and pain relief followed by the elimination of side effects of treatment. Parents were willing to pay $2080 (95% confidence interval $698-$4065) more for biologic DMARDs than for nonbiologic DMARDs if the biologic DMARD was more effective. Parents' WTP indicates their preference for treatments that reduce pain and improve daily functioning without side effects by estimating the monetary equivalent of utility for drug treatments in JIA. In addition to evidence of safety and efficacy, assessments of parents' preferences provide a broader perspective to decision makers by helping them understand the aspects of drug treatments in JIA that are most valued by families. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Peak, Jasmine; Goranitis, Ilias; Day, Ed; Copello, Alex; Freemantle, Nick; Frew, Emma
2018-05-30
Economic evaluation normally requires information to be collected on outcome improvement using utility values. This is often not collected during the treatment of substance use disorders, making cost-effectiveness evaluations of therapy difficult. One potential solution is the use of mapping to generate utility values from clinical measures. This study develops and evaluates mapping algorithms that could be used to predict the EuroQol-5D (EQ-5D-5L) and the ICEpop CAPability measure for Adults (ICECAP-A) from three commonly used clinical measures: the CORE-OM, the LDQ, and the TOP. Models were estimated using pilot trial data of heroin users in opiate substitution treatment. In the trial, the EQ-5D-5L, ICECAP-A, CORE-OM, LDQ, and TOP were administered at baseline, three, and twelve months. Mapping was conducted using estimation and validation datasets. The normal estimation dataset, which comprised baseline sample data, used ordinary least squares (OLS) and tobit regression methods. Data from the baseline and three-month time periods were combined to create a pooled estimation dataset. Cluster and mixed regression methods were used to map from this dataset. Predictive accuracy of the models was assessed using the root mean square error (RMSE) and the mean absolute error (MAE). Algorithms were validated using sample data from the follow-up time periods. Mapping algorithms can be used to predict the ICECAP-A and the EQ-5D-5L in the context of opiate dependence. Although both measures can be predicted, the ICECAP-A was better predicted by the clinical measures. There was no advantage to pooling the data. The 6 chosen mapping algorithms had MAE scores ranging from 0.100 to 0.138 and RMSE scores ranging from 0.134 to 0.178. It is possible to predict the scores of the ICECAP-A and the EQ-5D-5L with the use of mapping. In the context of opiate dependence, these algorithms provide the possibility of generating utility values from clinical measures, thus enabling economic evaluation of alternative therapy options. ISRCTN22608399. Date of registration: 27/04/2012. Date of first randomisation: 14/08/2012.
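A minimal sketch of the direct-mapping workflow described above: fit OLS from a clinical score to a utility value on an estimation set, then score MAE and RMSE on a validation set. The data are synthetic, since the trial data are not public, and the linear relationship is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for a clinical score and a utility value; the
# relationship is invented purely to illustrate the mapping workflow.
clinical = rng.uniform(0, 40, 200)
utility = np.clip(0.95 - 0.012 * clinical + rng.normal(0, 0.05, 200), -0.2, 1.0)

# OLS mapping fit on an estimation set, scored on a validation set,
# mirroring the estimation/validation split described above.
X = np.column_stack([np.ones_like(clinical), clinical])
train, test = slice(0, 150), slice(150, 200)
beta, *_ = np.linalg.lstsq(X[train], utility[train], rcond=None)
pred = X[test] @ beta

mae = float(np.mean(np.abs(pred - utility[test])))
rmse = float(np.sqrt(np.mean((pred - utility[test]) ** 2)))
```

RMSE is never smaller than MAE, which is why the paper reports both: the gap between them signals how much large individual errors dominate.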
NASA Astrophysics Data System (ADS)
Wells, Aaron Raymond
This research focuses on the Emory and Obed Watersheds in the Cumberland Plateau in Central Tennessee and the Lower Hatchie River Watershed in West Tennessee. A framework based on market and nonmarket valuation techniques was used to empirically estimate economic values for environmental amenities and negative externalities in these areas. The specific techniques employed include a variation of hedonic pricing and discrete choice conjoint analysis (i.e., choice modeling), in addition to geographic information systems (GIS) and remote sensing. Microeconomic models of agent behavior, including random utility theory and profit maximization, provide the principal theoretical foundation linking valuation techniques and econometric models. The generalized method of moments estimator for a first-order spatial autoregressive function and mixed logit models are the principal econometric methods applied within the framework. The dissertation is subdivided into three separate chapters written in a manuscript format. The first chapter provides the necessary theoretical and mathematical conditions that must be satisfied in order for a forest amenity enhancement program to be implemented. These conditions include utility, value, and profit maximization. The second chapter evaluates the effect of forest land cover and information about future land use change on respondent preferences and willingness to pay for alternative hypothetical forest amenity enhancement options. Land use change information and the amount of forest land cover significantly influenced respondent preferences, choices, and stated willingness to pay. Hicksian welfare estimates for proposed enhancement options ranged from 57.42 to 25.53, depending on the policy specification, information level, and econometric model. The third chapter presents economic values for negative externalities associated with channelization that affect the productivity and overall market value of forested wetlands. 
Results of robust, generalized moments estimation of a double logarithmic first-order spatial autoregressive error model (inverse distance weights with spatial dependence up to 1500 m) indicate that the implicit cost of damages to forested wetlands caused by channelization equaled -$5,438 per hectare. Collectively, the results of this dissertation provide economic measures of the damages to and benefits of environmental assets, help private landowners and policy makers identify the amenity attributes preferred by the public, and improve the management of natural resources.
Esposito, Felice; Cappabianca, Paolo; Angileri, Filippo F; Cavallo, Luigi M; Priola, Stefano M; Crimi, Salvatore; Solari, Domenico; Germanò, Antonino F; Tomasello, Francesco
2016-07-26
Gelatin-thrombin hemostatic matrix (FloSeal®) use is associated with shorter surgical times and less blood loss, parameters that are highly valued in neurosurgical procedures. We aimed to assess the effectiveness of gelatin-thrombin in neurosurgical procedures and estimate its economic value. In a 6-month retrospective evaluation at 2 hospitals, intraoperative and postoperative information was collected from patients undergoing neurosurgical procedures where bleeding was controlled with gelatin-thrombin matrix or according to local bleeding control guidelines (control group). Study endpoints were: length of surgery, estimated blood loss, hospitalization duration, blood units utilized, intensive care unit days, postoperative complications, and time-to-recovery. Statistical methods compared endpoints between the gelatin-thrombin and control groups, and resource utilization costs were estimated. Seventy-eight patients (38 gelatin-thrombin; 40 control) were included. Gelatin-thrombin was associated with a shorter surgery duration than control (166±40 versus 185±55 min; p=0.0839); a lower estimated blood loss (185±80 versus 250±95 ml; p=0.0017); a shorter hospital stay (10±3 versus 13±3 days; p<0.001); fewer intensive care unit days (10 days/3 patients versus 20 days/4 patients); and a shorter time-to-recovery (3±2.2 versus 4±2.8 weeks; p=0.0861). Fewer gelatin-thrombin patients experienced postoperative complications (3 minor) than control patients (5 minor; 3 major). No gelatin-thrombin patient required blood transfusion; 5 units were administered in the control group. The cost of gelatin-thrombin (€268.40/unit) was offset by the shorter surgery duration (a difference of 19 minutes at €858 per hour) and the economic value of the improvements in the other endpoints (ie, shorter hospital stay, less blood loss/lack of need for transfusion, fewer intensive care unit days, and fewer complications).
Gelatin-thrombin hemostatic matrix use in patients undergoing neurosurgical procedures was associated with better intra- and post-operative parameters than conventional hemostasis methods, with these parameters having substantial economic benefits.
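The cost-offset claim can be checked with back-of-envelope arithmetic using the two figures the abstract gives (€268.40 per unit, and 19 minutes saved at €858 per operating-room hour):

```python
# Saving from a 19-minute shorter surgery at €858 per OR hour.
or_cost_per_hour = 858.0
minutes_saved = 19
saving = or_cost_per_hour * minutes_saved / 60.0  # ~ €271.70

# The unit cost is offset by the OR-time saving alone, before counting
# the shorter stay, avoided transfusions, and fewer ICU days.
gelatin_thrombin_cost = 268.40
net = saving - gelatin_thrombin_cost
```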
Mercury in US coal: Observations using the COALQUAL and ICR data
Quick, J.C.; Brill, T.C.; Tabet, D.E.
2003-01-01
The COALQUAL data set lists the mercury content of samples collected from the in-ground US coal resource, whereas the ICR data set lists the mercury content of samples collected from coal shipments delivered to US electric utilities. After selection and adjustment of records, the COALQUAL data average 0.17 µg Hg/g dry coal or 5.8 kg Hg/PJ, whereas the ICR data average 0.10 µg Hg/g dry coal or 3.5 kg Hg/PJ. Because sample frequency does not correspond to the in-ground or produced tonnage, these values are not accurate estimates of the mercury content of either in-ground or delivered US coal. Commercial US coal contains less mercury than previously estimated, and its mercury content has declined during the 1990s. Selective mining and more extensive coal washing may accelerate the current trend towards lower mercury content in coal burned at US electric utilities.
Svedbom, Axel; Borgström, Fredrik; Hernlund, Emma; Ström, Oskar; Alekna, Vidmantas; Bianchi, Maria Luisa; Clark, Patricia; Curiel, Manuel Díaz; Dimai, Hans Peter; Jürisson, Mikk; Uusküla, Anneli; Lember, Margus; Kallikorm, Riina; Lesnyak, Olga; McCloskey, Eugene; Ershova, Olga; Sanders, Kerrie M; Silverman, Stuart; Tamulaitiene, Marija; Thomas, Thierry; Tosteson, Anna N A; Jönsson, Bengt; Kanis, John A
2018-03-01
The International Costs and Utilities Related to Osteoporotic fractures Study is a multinational observational study set up to describe the costs and quality of life (QoL) consequences of fragility fracture. This paper aims to estimate and compare QoL after hip, vertebral, and distal forearm fracture using time-trade-off (TTO), the EuroQol (EQ) Visual Analogue Scale (EQ-VAS), and the EQ-5D-3L valued using the hypothetical UK value set. Data were collected at four time-points for five QoL point estimates: within 2 weeks after fracture (including pre-fracture recall), and at 4, 12, and 18 months after fracture. Health state utility values (HSUVs) were derived for each fracture type and time-point using the three approaches (TTO, EQ-VAS, EQ-5D-3L). HSUVs were used to estimate accumulated QoL loss and QoL multipliers. In total, 1410 patients (505 with hip, 316 with vertebral, and 589 with distal forearm fracture) were eligible for analysis. Across all time-points for the three fracture types, TTO provided the highest HSUVs, whereas EQ-5D-3L consistently provided the lowest HSUVs directly after fracture. Except for 13-18 months after distal forearm fracture, EQ-5D-3L generated lower QoL multipliers than the other two methods, whereas no equally clear pattern was observed between EQ-VAS and TTO. On average, the most marked differences between the three approaches were observed immediately after the fracture. The approach used to derive QoL markedly influences the estimated QoL impact of fracture; therefore, the choice of approach may be important for the outcome and interpretation of cost-effectiveness analyses of fracture prevention.
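The two summary quantities the study derives from HSUVs can be sketched directly: accumulated QoL loss is the area between the pre-fracture utility level and the post-fracture trajectory, and a QoL multiplier is utility relative to the pre-fracture level. The trajectory below is hypothetical, not the study's estimates:

```python
import numpy as np

# Hypothetical HSUV trajectory after a fracture: pre-fracture recall,
# then ~2 weeks, 4, 12, and 18 months, with times expressed in years.
times = np.array([0.0, 2 / 52, 4 / 12, 1.0, 1.5])
hsuv = np.array([0.80, 0.35, 0.55, 0.68, 0.72])
pre_fracture = hsuv[0]

# Accumulated QoL loss = area between the pre-fracture level and the
# observed trajectory (trapezoidal rule), in QALYs.
deficit = pre_fracture - hsuv
qol_loss = float(np.sum((deficit[:-1] + deficit[1:]) / 2 * np.diff(times)))

# QoL multiplier at 18 months: utility relative to the pre-fracture level.
multiplier = hsuv[-1] / pre_fracture
```

Because the three elicitation methods give different HSUVs at each time-point, both quantities shift with the chosen method, which is the paper's central point.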
Converting Parkinson-Specific Scores into Health State Utilities to Assess Cost-Utility Analysis.
Chen, Gang; Garcia-Gordillo, Miguel A; Collado-Mateo, Daniel; Del Pozo-Cruz, Borja; Adsuar, José C; Cordero-Ferrera, José Manuel; Abellán-Perpiñán, José María; Sánchez-Martínez, Fernando Ignacio
2018-06-07
The aim of this study was to compare the Parkinson's Disease Questionnaire-8 (PDQ-8) with three multi-attribute utility (MAU) instruments (EQ-5D-3L, EQ-5D-5L, and 15D) and to develop mapping algorithms that could be used to transform PDQ-8 scores into MAU scores. A cross-sectional study was conducted. A final sample of 228 evaluable patients was included in the analyses. Sociodemographic and clinical data were also collected. Two EQ-5D questionnaires were scored using Spanish tariffs. Two models and three statistical techniques were used to estimate each model in the direct mapping framework for all three MAU instruments, including the most widely used ordinary least squares (OLS), the robust MM-estimator, and the generalized linear model (GLM). For both EQ-5D-3L and EQ-5D-5L, indirect response mapping based on an ordered logit model was also conducted. Three goodness-of-fit tests were employed to compare the models: the mean absolute error (MAE), the root-mean-square error (RMSE), and the intra-class correlation coefficient (ICC) between the predicted and observed utilities. Health state utility scores ranged from 0.61 (EQ-5D-3L) to 0.74 (15D). The mean PDQ-8 score was 27.51. The correlation between overall PDQ-8 score and each MAU instrument ranged from - 0.729 (EQ-5D-5L) to - 0.752 (EQ-5D-3L). A mapping algorithm based on PDQ-8 items had better performance than using the overall score. For the two EQ-5D questionnaires, in general, the indirect mapping approach had comparable or even better performance than direct mapping based on MAE. Mapping algorithms developed in this study enable the estimation of utility values from the PDQ-8. The indirect mapping equations reported for two EQ-5D questionnaires will further facilitate the calculation of EQ-5D utility scores using other country-specific tariffs.
Capacity value of energy storage considering control strategies.
Shi, Nian; Luo, Yi
2017-01-01
In power systems, energy storage effectively improves the reliability of the system and smooths out the fluctuations of intermittent energy. However, the installed capacity of energy storage cannot effectively measure its contribution to the generation adequacy of power systems. To achieve a variety of purposes, several control strategies may be utilized in energy storage systems. The purpose of this paper is to study the influence of different energy storage control strategies on generation adequacy. This paper presents the capacity value of energy storage to quantitatively estimate the contribution of energy storage to generation adequacy. Four different control strategies are considered in the experimental method to study the capacity value of energy storage. Finally, an analysis of the factors influencing the capacity value under different control strategies is given.
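The adequacy contribution can be illustrated with a toy calculation: count loss-of-load hours with and without storage under one simple "discharge at shortfall" control strategy. All figures are invented; a real capacity-value study would sweep outage scenarios and compare strategies, as the paper does:

```python
# Hourly demand (MW), firm generation (MW), and a small storage unit.
load = [80, 90, 110, 120, 95, 85]
generation = 100
energy = 30    # MWh of stored energy
power = 20     # MW discharge rating

def lol_hours(load, gen, energy=0.0, power=0.0):
    """Loss-of-load hours under a simple discharge-at-shortfall rule."""
    soc = energy
    lol = 0
    for demand in load:
        shortfall = demand - gen
        if shortfall > 0:
            discharge = min(shortfall, power, soc)
            soc -= discharge
            if shortfall - discharge > 1e-9:
                lol += 1
    return lol

without = lol_hours(load, generation)
with_storage = lol_hours(load, generation, energy, power)
```

Here storage eliminates both shortfall hours; its capacity value would then be quantified as the extra firm capacity delivering the same reliability improvement.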
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Mcmaster, K. M.
1981-01-01
The utility-owned solar electric system methodology is generalized and updated. The net present value of the system is determined by consideration of all financial benefits and costs (including a specified return on investment). Life-cycle costs, life-cycle revenues, and residual system values are obtained. Break-even values of system parameters are estimated by setting the net present value to zero. While the model was designed for photovoltaic generators with a possible thermal energy byproduct, its applicability is not limited to such systems. The resulting owner-dependent methodology for energy generation system assessment consists of a few equations that can be evaluated without the aid of a high-speed computer.
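The break-even logic (set net present value to zero and solve for a parameter) reduces, for a constant benefit stream, to a closed form. The cash flows below are made up; the report's own parameters are not given in the abstract:

```python
def npv(rate, initial_cost, annual_benefit, years):
    """Net present value of a system: upfront cost plus a discounted
    stream of constant annual benefits."""
    return -initial_cost + sum(
        annual_benefit / (1 + rate) ** t for t in range(1, years + 1)
    )

def breakeven_cost(rate, annual_benefit, years):
    # Setting NPV = 0: the break-even system cost equals the present
    # value of the benefit stream.
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

be = breakeven_cost(0.08, 1000.0, 20)
assert abs(npv(0.08, be, 1000.0, 20)) < 1e-6  # NPV is zero at break-even
```

At an 8% discount rate over 20 years, the annuity factor is about 9.82, so a $1000/year benefit supports roughly a $9818 break-even system cost.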
First-principle calculations of electronic structures and polar properties of (κ,ε)-Ga2O3
NASA Astrophysics Data System (ADS)
Kim, Juyeong; Tahara, Daisuke; Miura, Yoshino; Kim, Bog G.
2018-06-01
Physical properties of κ- and ε-Ga2O3 are investigated using density functional theory. We utilized the supercell method considering the partial occupancies in ε-Ga2O3. The polarization values of these materials were analyzed to overcome the inconsistency between experimental and theoretical studies. The polarization values of κ- and ε-Ga2O3 were ∼26.39 and ∼24.44 µC/cm², respectively. Bandgap values of 4.62 and 4.27 eV were estimated with the hybrid functional method, which suggests that the PBEsol functional values of 2.32 and 2.06 eV for κ- and ε-Ga2O3, respectively, are underestimates.
Medical costs and quality-adjusted life years associated with smoking: a systematic review.
Feirman, Shari P; Glasser, Allison M; Teplitskaya, Lyubov; Holtgrave, David R; Abrams, David B; Niaura, Raymond S; Villanti, Andrea C
2016-07-27
Estimated medical costs ("T") and QALYs ("Q") associated with smoking are frequently used in cost-utility analyses of tobacco control interventions. The goal of this study was to understand how researchers have addressed the methodological challenges involved in estimating these parameters. Data were collected as part of a systematic review of tobacco modeling studies. We searched five electronic databases on July 1, 2013 with no date restrictions and synthesized studies qualitatively. Studies were eligible for the current analysis if they were U.S.-based, provided an estimate for Q, and used a societal perspective and lifetime analytic horizon to estimate T. We identified common methods and frequently cited sources used to obtain these estimates. Across all 18 studies included in this review, 50% cited a 1992 source to estimate the medical costs associated with smoking and 56% cited a 1996 study to derive the estimate for QALYs saved by quitting or preventing smoking. Approaches for estimating T varied dramatically among the studies included in this review. T was valued as a positive number, negative number and $0; five studies did not include estimates for T in their analyses. The most commonly cited source for Q based its estimate on the Health Utilities Index (HUI). Several papers also cited sources that based their estimates for Q on the Quality of Well-Being Scale and the EuroQol five dimensions questionnaire (EQ-5D). Current estimates of the lifetime medical care costs and the QALYs associated with smoking are dated and do not reflect the latest evidence on the health effects of smoking, nor the current costs and benefits of smoking cessation and prevention. Given these limitations, we recommend that researchers conducting economic evaluations of tobacco control interventions perform extensive sensitivity analyses around these parameter estimates.
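How T and Q enter a cost-utility ratio can be shown in a few lines. Note the review's point that T may legitimately be positive, negative, or zero; all numbers below are invented for illustration:

```python
def cost_per_qaly(program_cost, t_medical_costs, q_qalys_gained, quitters):
    """Cost-utility ratio for a cessation program.

    t_medical_costs: change in lifetime medical costs per quitter ("T",
    negative if quitting saves money); q_qalys_gained: QALYs per quitter
    ("Q").
    """
    # Societal net cost: program cost plus the (possibly negative)
    # medical-cost change attributable to the quitters.
    net_cost = program_cost + t_medical_costs * quitters
    return net_cost / (q_qalys_gained * quitters)

# 1000 quitters, T = -$2,000 (lifetime savings), Q = 1.5 QALYs each:
icer = cost_per_qaly(5_000_000.0, -2000.0, 1.5, 1000)
```

With these inputs the $5M program nets to $3M after medical-cost savings, or $2000 per QALY, which is why the sign and vintage of T matter so much to the review's conclusions.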
Nurses wanted: Is the job too harsh or is the wage too low?
Di Tommaso, M L; Strøm, S; Saether, E M
2009-05-01
When entering the job market, nurses choose among different kinds of jobs. Each job is characterized by wage, sector (primary care or hospital), and shift pattern (daytime or shift work). This paper estimates a multi-sector, multi-job-type random utility model of labor supply on data for Norwegian registered nurses (RNs) in 2000. The empirical model implies that labor supply is rather inelastic: a 10% increase in the wage rates for all nurses is estimated to yield a 3.3% increase in overall labor supply. This modest overall response masks much stronger inter-job-type responses. Our approach differs from previous studies in two ways. First, to our knowledge, it is the first time that a model of labor supply for nurses is estimated while explicitly accounting for the choices that RNs have regarding workplace and type of job. Second, it differs from previous studies with respect to the measurement of the compensation for different types of work. So far, the focus has been on wage differentials, but a job has more attributes than the wage. Based on the estimated random utility model, we therefore calculate the expected compensation that makes a utility-maximizing agent indifferent between types of jobs, here between shift work and daytime work. It turns out that Norwegian nurses may be willing to take shift work, relative to daytime work, for a lower wage than the current one.
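The indifference calculation can be sketched with a stylized random utility specification. Assume (purely for illustration; this is not the paper's estimated model) utility U = a·ln(wage) + b·1[shift]; the wage w′ making shift work as attractive as daytime work at wage w solves a·ln(w′) + b = a·ln(w), i.e. w′ = w·exp(−b/a):

```python
import math

# Invented coefficients: a is the marginal utility of log-wage, and
# b > 0 encodes a preference for shift work, echoing the paper's finding.
a, b = 2.0, 0.3

def indifference_wage(day_wage, a, b):
    """Shift-work wage at which the agent is indifferent to daytime
    work at day_wage, in the stylized model above."""
    return day_wage * math.exp(-b / a)

w_shift = indifference_wage(100.0, a, b)
```

A positive b pushes the indifference wage below the daytime wage, which is the qualitative result the abstract reports for shift work.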
Estimating Forest Canopy Heights and Aboveground Biomass with Simulated ICESat-2 Data
NASA Astrophysics Data System (ADS)
Malambo, L.; Narine, L.; Popescu, S. C.; Neuenschwander, A. L.; Sheridan, R.
2016-12-01
The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) is scheduled for launch in 2017, and one of its overall science objectives will be to measure vegetation heights, which can be used to estimate and monitor aboveground biomass (AGB) over large spatial scales. This study develops a methodology for utilizing the vegetation data collected by ICESat-2, which will fly a five-year mission, to map forest canopy heights and estimate AGB. The specific objectives are to (1) simulate ICESat-2 photon-counting lidar (PCL) data, (2) utilize simulated PCL data to estimate forest canopy heights and propose a methodology for upscaling PCL height measurements to obtain spatially contiguous coverage, and (3) estimate and map AGB using simulated PCL data. The laser pulse from ICESat-2 will be divided into three pairs of beams spaced approximately 3 km apart, with footprints measuring approximately 14 m in diameter at 70 cm along-track intervals. Using existing airborne lidar (ALS) data for Sam Houston National Forest (SHNF) and known ICESat-2 beam locations, footprints are generated along beam locations and PCL data are then simulated from discrete-return lidar points within each footprint. By applying data processing algorithms, photons are classified into top-of-canopy points and ground surface elevation points to yield tree canopy height values within each ICESat-2 footprint. AGB is then estimated using simple linear regression that utilizes AGB from a biomass map generated with ALS data for SHNF and simulated PCL height metrics for 100 m segments along ICESat-2 tracks. Two approaches are also investigated for upscaling AGB estimates to provide wall-to-wall coverage of AGB: (1) co-kriging and (2) Random Forest. Height and AGB maps, which are the outcomes of this study, will demonstrate how data acquired by ICESat-2 can be used to measure forest parameters and, by extension, estimate forest carbon for climate change initiatives.
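The segment-level step (height metrics from classified photons, then a linear AGB model) can be sketched with toy data. The photon heights and the regression coefficients below are invented, standing in for classified PCL returns and for coefficients fit against the ALS biomass map:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for classified canopy photons in one 100 m segment:
# heights above the interpolated ground surface, in metres.
canopy_photons = rng.uniform(0, 25, 500)

# Height metrics of the kind used to drive the AGB regression.
h_max = float(canopy_photons.max())
h_p90 = float(np.percentile(canopy_photons, 90))

# Simple linear model AGB = b0 + b1 * h_p90, with invented coefficients
# (Mg/ha intercept and slope).
b0, b1 = 5.0, 6.2
agb = b0 + b1 * h_p90
```

Repeating this per 100 m segment yields the along-track AGB samples that co-kriging or Random Forest would then upscale to wall-to-wall coverage.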
Edwards, Rhiannon Tudor; Yeo, Seow Tien; Russell, Daphne; Thomson, Colin E; Beggs, Ian; Gibson, J N Alastair; McMillan, Diane; Martin, Denis J; Russell, Ian T
2015-01-01
Morton's neuroma is a common foot condition affecting health-related quality of life. Though its management frequently includes steroid injections, evidence of cost-effectiveness is sparse. So, we aimed to evaluate whether steroid injection is cost-effective in treating Morton's neuroma compared with anaesthetic injection alone. We undertook incremental cost-effectiveness and cost-utility analyses from the perspective of the National Health Service, alongside a patient-blinded pragmatic randomised trial in hospital-based orthopaedic outpatient clinics in Edinburgh, UK. Of the original randomised sample of 131 participants with Morton's neuroma (including 67 controls), economic analysis focused on 109 (including 55 controls). Both groups received injections guided by ultrasound. We estimated the incremental cost per point improvement in the area under the curve of the Foot Health Thermometer (FHT-AUC) until three months after injection. We also conducted cost-utility analyses using European Quality of life-5 Dimensions-3 Levels (EQ-5D-3L), enhanced by the Foot Health Thermometer (FHT), to estimate utility and thus quality-adjusted life years (QALYs). The unit cost of an ultrasound-guided steroid injection was £149. Over the three months of follow-up, the mean cost of National Health Service resources was £280 for intervention participants and £202 for control participants - a difference of £79 [bootstrapped 95% confidence interval (CI): £18 to £152]. The corresponding estimated incremental cost-effectiveness ratio was £32 per point improvement in the FHT-AUC (bootstrapped 95% CI: £7 to £100). If decision makers value improvement of one point at £100 (the upper limit of this CI), there is 97.5% probability that steroid injection is cost-effective. As EQ-5D-3L seems unresponsive to changes in foot health, we based secondary cost-utility analysis on the FHT-enhanced EQ-5D. This estimated the corresponding incremental cost-effectiveness ratio as £6,400 per QALY. 
At the recommended UK threshold, which ranges from £20,000 to £30,000 per QALY, there is an 80%-85% probability that steroid injection is cost-effective. Steroid injections are effective and cost-effective in relieving foot pain measured by the FHT for three months. However, cost-utility analysis was initially inconclusive because the EQ-5D-3L is less responsive than the FHT to changes in foot health. By using the FHT to enhance the EQ-5D, we inferred that injections yield good value in cost per QALY. Current Controlled Trials ISRCTN13668166.
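The bootstrapped confidence interval for the incremental cost can be illustrated with synthetic per-patient costs (the trial data are not public; the gamma parameters are chosen only so the group means loosely echo the £280 vs £202 reported above):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-patient NHS costs for the two arms (54 vs 55 patients).
steroid = rng.gamma(shape=4.0, scale=70.0, size=54)   # mean ~ £280
control = rng.gamma(shape=4.0, scale=50.5, size=55)   # mean ~ £202

# Percentile-bootstrap 95% CI for the incremental cost: resample each
# arm with replacement and recompute the difference in means.
diffs = [
    rng.choice(steroid, steroid.size).mean()
    - rng.choice(control, control.size).mean()
    for _ in range(2000)
]
lo, hi = np.percentile(diffs, [2.5, 97.5])
point = steroid.mean() - control.mean()
```

Dividing such bootstrapped cost differences by bootstrapped effect differences is also how the CI around the incremental cost-effectiveness ratio is typically obtained.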
Dranitsaris, George; Truter, Ilse; Lubbe, Martie S; Sriramanakoppa, Nitin N; Mendonca, Vivian M; Mahagaonkar, Sangameshwar B
2011-01-01
Background: Decision analysis (DA) is commonly used to perform economic evaluations of new pharmaceuticals. Using multiples of Malaysia’s per capita 2010 gross domestic product (GDP) as the threshold for economic value as suggested by the World Health Organization (WHO), DA was used to estimate a price per dose for bevacizumab, a drug that provides a 1.4-month survival benefit in patients with metastatic colorectal cancer (mCRC). Methods: A decision model was developed to simulate progression-free and overall survival in mCRC patients receiving chemotherapy with and without bevacizumab. Costs for chemotherapy and management of side effects were obtained from public and private hospitals in Malaysia. Utility estimates, measured as quality-adjusted life years (QALYs), were determined by interviewing 24 oncology nurses using the time trade-off technique. The price per dose was then estimated using a target threshold of US$44 400 per QALY gained, which is 3 times the Malaysian per capita GDP. Results: A cost-effective price for bevacizumab could not be determined because the survival benefit provided was insufficient. According to the WHO criteria, if the drug was able to improve survival from 1.4 to 3 or 6 months, the price per dose would be $567 and $1258, respectively. Conclusion: The use of decision modelling for estimating drug pricing is a powerful technique to ensure value for money. Such information is of value to drug manufacturers and formulary committees because it facilitates negotiations for value-based pricing in a given jurisdiction. PMID:22589671
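The value-based pricing arithmetic behind such a model can be sketched by inverting the ICER formula: fix cost per QALY at the threshold and solve for the price per dose. Only the US$44,400 threshold comes from the abstract; the QALY gain, ancillary costs, and dose count below are invented:

```python
# WHO-style threshold: 3x Malaysia's 2010 per-capita GDP, US$ per QALY.
THRESHOLD = 44_400.0

def max_price_per_dose(qaly_gain, other_incremental_cost, doses):
    """Highest dose price keeping cost per QALY at the threshold.

    Solves threshold = (doses * price + other_cost) / qaly_gain for price.
    """
    return (THRESHOLD * qaly_gain - other_incremental_cost) / doses

# E.g. a survival benefit worth ~0.15 QALYs, US$2,000 of extra
# side-effect management cost, and 12 doses:
price = max_price_per_dose(0.15, 2000.0, 12)
```

A small QALY gain can make the numerator negative, i.e. no positive price is cost-effective, which is exactly the situation the abstract reports for the 1.4-month benefit.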
Health Literacy Impact on National Healthcare Utilization and Expenditure.
Rasu, Rafia S; Bawa, Walter Agbor; Suminski, Richard; Snella, Kathleen; Warady, Bradley
2015-08-17
Health literacy presents an enormous challenge in the delivery of effective healthcare and quality outcomes. We evaluated the impact of low health literacy (LHL) on healthcare utilization and healthcare expenditure. This database analysis used the Medical Expenditure Panel Survey (MEPS) from 2005-2008, which provides nationally representative estimates of healthcare utilization and expenditure. Health literacy scores (HLSs), ranging from 0 to 500, were calculated based on a validated, predictive model and scored according to the National Assessment of Adult Literacy (NAAL). Health literacy levels (HLLs) were categorized into 2 groups: below basic or basic (HLS <226) and above basic (HLS ≥226). Healthcare utilization was expressed as physician, nonphysician, or emergency room (ER) visits, together with healthcare spending. Expenditures were adjusted to 2010 rates using the Consumer Price Index (CPI). A P value of 0.05 or less was the criterion for statistical significance in all analyses. Multivariate regression models assessed the impact of the predicted HLLs on outpatient healthcare utilization and expenditures. All analyses were performed with SAS and STATA® 11.0 statistical software. The study evaluated 22 599 samples representing 503 374 648 weighted individuals nationally from 2005-2008. The cohort had an average age of 49 years and included more females (57%). Caucasians were the predominant racial/ethnic group (83%), and 37% of the cohort were from the South region of the United States of America. The proportion of the cohort with basic or below basic health literacy was 22.4%. Annual predicted values of physician visits, nonphysician visits, and ER visits were 6.6, 4.8, and 0.2, respectively, for basic or below basic compared to 4.4, 2.6, and 0.1 for above basic. Predicted values of office and ER visit expenditures were $1284 and $151, respectively, for basic or below basic and $719 and $100 for above basic (P < .05).
The extrapolated national estimates show that annual prescription costs alone for adults with basic or below basic health literacy could potentially reach about $172 billion. Health literacy is inversely associated with healthcare utilization and expenditure. Individuals with below basic or basic HLLs have greater healthcare utilization and expenditures, spending more on prescriptions, compared to individuals with above basic HLLs. Public health strategies promoting appropriate education among individuals with LHL may help to improve health outcomes and reduce unnecessary healthcare visits and costs. © 2015 by Kerman University of Medical Sciences.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
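The simulation side of this approach can be sketched with a discrete AR(1) surrogate for a stationary Gaussian process: generate many realizations with a chosen autocorrelation, then estimate the probability of exceeding a crossing level within a window. The parameters below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) surrogate: x_t = rho * x_{t-1} + innovation, with the innovation
# variance chosen to keep the marginal variance at 1 (stationarity).
rho, level, n_steps, n_reals = 0.9, 1.5, 200, 2000
sigma_innov = np.sqrt(1 - rho**2)

x = np.zeros((n_reals, n_steps))
x[:, 0] = rng.normal(size=n_reals)
for t in range(1, n_steps):
    x[:, t] = rho * x[:, t - 1] + sigma_innov * rng.normal(size=n_reals)

# Fraction of realizations that exceed the crossing level at least once
# within the window: a Monte Carlo estimate of the crossing probability.
p_exceed = float(np.mean((x > level).any(axis=1)))
```

Rerunning this over a grid of autocorrelation parameters and crossing levels reproduces the kind of functional dependence the abstract describes.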
Entanglement-Assisted Weak Value Amplification
NASA Astrophysics Data System (ADS)
Pang, Shengshi; Dressel, Justin; Brun, Todd A.
2014-07-01
Large weak values have been used to amplify the sensitivity of a linear response signal for detecting changes in a small parameter, which has also enabled a simple method for precise parameter estimation. However, producing a large weak value requires a low postselection probability for an ancilla degree of freedom, which limits the utility of the technique. We propose an improvement to this method that uses entanglement to increase the efficiency. We show that by entangling and postselecting n ancillas, the postselection probability can be increased by a factor of n while keeping the weak value fixed (compared to n uncorrelated attempts with one ancilla), which is the optimal scaling with n that is expected from quantum metrology. Furthermore, we show the surprising result that the quantum Fisher information about the detected parameter can be almost entirely preserved in the postselected state, which allows the sensitive estimation to approximately saturate the relevant quantum Cramér-Rao bound. To illustrate this protocol we provide simple quantum circuits that can be implemented using current experimental realizations of three entangled qubits.
Valuing the Recreational Benefits from the Creation of Nature Reserves in Irish Forests
Riccardo Scarpa; Susan M. Chilton; W. George Hutchinson; Joseph Buongiorno
2000-01-01
Data from a large-scale contingent valuation study are used to investigate the effects of forest attributes on willingness to pay for forest recreation in Ireland. In particular, the presence of a nature reserve in the forest is found to significantly increase the visitors' willingness to pay. A random utility model is used to estimate the welfare change associated...
NASA Astrophysics Data System (ADS)
Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena; Kukharsky, Alexander; Uspensky, Sergey
2015-04-01
To date, physical-mathematical modeling of land surface-atmosphere interaction processes is considered to be the most appropriate tool for obtaining reliable estimates of water and heat balance components over large territories. The model of these processes (Land Surface Model, LSM), developed for the vegetation period, is designed to simulate soil water content W, evapotranspiration Ev, vertical latent (LE) and sensible heat fluxes from the land surface, as well as vertically distributed soil temperature and moisture, soil surface Tg and foliage Tf temperatures, and land surface skin temperature (LST) Ts. The model is suitable for utilizing remote sensing data on the land surface and meteorological conditions. In the study these data have been obtained from measurements by the scanning radiometers AVHRR/NOAA, MODIS/EOS Terra and Aqua, and SEVIRI/geostationary satellites Meteosat-9, -10 (MSG-2, -3). The heterogeneity of the land surface and meteorological conditions has been taken into account in the model by using soil and vegetation characteristics as parameters and meteorological characteristics as input variables. Values of these characteristics have been determined from ground observations and remote sensing information. AVHRR data have been used to build estimates of effective land surface temperature Ts.eff and emissivity E, vegetation-air temperature (temperature at the vegetation level) Ta, the normalized vegetation index NDVI, vegetation cover fraction B, the leaf area index LAI, and precipitation. From MODIS data the values of LST Tls, E, NDVI, and LAI have been derived. From SEVIRI data, Tls, E, Ta, NDVI, LAI, and precipitation have been retrieved. All named retrievals covered the vast territory of the part of the agricultural Central Black Earth Region located in the steppe-forest zone of European Russia. This territory, with coordinates 49°30'-54°N, 31°-43°E and a total area of 227,300 km2, has been chosen for investigation. 
The study covers the vegetation seasons of the years 2009-2013. To provide the retrieval of Ts.eff, E, Ta, NDVI, B, and LAI, the previously developed technologies of AVHRR data processing have been refined and adapted to the region of interest. The updated linear regression estimators for Ts.eff and Ta have been built using representative training samples compiled for the above vegetation seasons. The updated software package has been applied to AVHRR data processing to generate estimates of the named values. To verify the accuracy of these estimates, the error statistics of Ts.eff and Ta derivation have been investigated for various days of the named seasons using comparison with in-situ ground-based measurements. On the basis of a special technology and Internet resources, the remote sensing products Tls, E, NDVI, and LAI derived from MODIS data and covering the study area have been extracted from the LP DAAC web-site for the same vegetation seasons. The reliability of the MODIS-derived Tls estimates has been confirmed via comparison with analogous and collocated ground-, AVHRR-, and SEVIRI-based ones. The prepared remote sensing dataset has also included the SEVIRI-derived estimates of Tls, E, NDVI, and Ta at daylight and night-time and daily estimates of LAI. The Tls estimates have been built utilizing the method and technology developed for the retrieval of Tls and E from 15-minute time interval SEVIRI data in IR channels 10.8 and 12.0 µm (classified as 100% cloud-free and covering the area of interest) at three successive times without accurate a priori knowledge of E. Comparison of the SEVIRI-based Tls retrievals with independent collocated Tls estimates generated at the Land Surface Analysis Satellite Applications Facility (LSA SAF, Lisbon, Portugal) has given daily- or monthly-averaged values of RMS deviation in the range of 2°C for various dates and months during the mentioned vegetation seasons, which is a quite acceptable result. 
The reliability of the SEVIRI-based Tls estimates for the study area has also been confirmed by comparison with AVHRR- and MODIS-derived LST estimates for the same seasons. The SEVIRI-derived values of Ta, considered as the temperature of the vegetation cover, have been obtained using Tls estimates and a previously found multiple linear regression relationship between Tls and Ta, formulated accounting for solar zenith angle and land elevation. A comparison with ground-based collocated Ta observations has given RMS errors of 2.5°C and lower, which can be treated as a proof of the proposed technique's functionality. SEVIRI-derived LAI estimates have been retrieved at LSA SAF from measurements by this sensor in channels 0.6, 0.8, and 1.6 μm under cloud-free conditions; using data in the 1.6 μm channel has increased the accuracy of these estimates. In the study, the AVHRR- and SEVIRI-derived estimates of daily and monthly precipitation sums for the territory under investigation for the 2009-2013 vegetation seasons have also been used. These estimates have been obtained by the improved integrated Multi Threshold Method (MTM), providing detection and identification of cloud types around the clock throughout the year as well as identification of precipitation zones and determination of instantaneous precipitation maximum intensity within the pixel, using the measurement data in different channels of the named sensors as predictors. Validation of the MTM has been performed by comparing the daily and monthly precipitation sums with appropriate values resulting from ground-based observations at the meteorological stations of the region. The probability of detecting precipitation zones from satellite data corresponding to the actual ones has amounted to 70-80%. AVHRR- and SEVIRI-derived daily and monthly precipitation sums have been in reasonable agreement with each other and with results of ground-based observations, although they are smoother than the latter values. 
Discrepancies have been noted only for local maxima, for which satellite-based estimates of precipitation have been much less than ground-based ones. This may be due to the different spatial scales of areal satellite-derived and point ground-based estimates. To utilize satellite-derived vegetation and meteorological characteristics in the model, special procedures have been developed, including: - replacement of ground-based LAI and B estimates used as model parameters by their satellite-derived estimates from AVHRR, MODIS and SEVIRI data; correctness of such replacement has been confirmed by comparing the time behavior of LAI over the period of vegetation as well as modeled and measured values of evapotranspiration Ev and soil moisture content W; - entering AVHRR-, MODIS- and SEVIRI-derived estimates of Ts.eff, Tls, and Ta into the model as input variables instead of ground-measured values, with verification of the adequacy of model operation under such a change through comparison of the calculated and measured values of W and Ev; - inputting satellite-derived estimates of precipitation during the vegetation period, retrieved from AVHRR and SEVIRI data using the MTM, into the model as input variables. When developing this procedure, algorithms and programs have been created to pass from assessment of rainfall intensity to evaluation of its daily values. The implementation of such a transition requires controlling the correctness of the estimates built at each time step. This control includes comparison of areal distributions of three-hour, daily and monthly precipitation amounts obtained from satellite data and calculated by interpolation of standard network observation data; - taking into account the spatial heterogeneity of the fields of satellite AVHRR-, MODIS- and SEVIRI-derived estimates of LAI, B, LST and precipitation. 
This has involved the development of algorithms and software for entering the values of all named characteristics into the model at each computational grid node. Values of evapotranspiration Ev, soil water content W, vertical latent and sensible heat fluxes and other water and heat balance components, as well as land surface temperature and moisture, area-distributed over the territory of interest, have resulted from the model calculations for the 2009-2013 vegetation seasons. These calculations have been carried out utilizing satellite-derived estimates of the vegetation characteristics, LST and precipitation. Ev and W calculation errors have not exceeded the standard values.
Long-term morbidity, mortality, and economics of rheumatoid arthritis.
Wong, J B; Ramey, D R; Singh, G
2001-12-01
To estimate the morbidity, mortality, and lifetime costs of care for rheumatoid arthritis (RA). We developed a Markov model based on the Arthritis, Rheumatism, and Aging Medical Information System Post-Marketing Surveillance Program cohort, involving 4,258 consecutively enrolled RA patients who were followed up for 17,085 patient-years. Markov states of health were based on drug treatment and Health Assessment Questionnaire scores. Costs were based on resource utilization, and utilities were based on visual analog scale-based general health scores. The cohort had a mean age of 57 years, 76.4% were women, and the mean duration of disease was 11.8 years. Compared with a life expectancy of 22.0 years for the general population, this cohort had a life expectancy of 18.6 years and 11.3 quality-adjusted life years. Lifetime direct medical care costs were estimated to be $93,296. Higher costs were associated with higher disability scores. A Markov model can be used to estimate lifelong morbidity, mortality, and costs associated with RA, providing a context in which to consider the potential value of new therapies for the disease.
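A cohort-style Markov model of the kind described can be sketched in a few lines. The states, transition probabilities, utilities, and annual costs below are invented for illustration and are not the values estimated from the ARAMIS cohort.

```python
import numpy as np

# Hypothetical illustration only: states = (mild, severe, dead); all
# transition probabilities, utilities, and annual costs are invented.
P = np.array([[0.85, 0.10, 0.05],
              [0.05, 0.80, 0.15],
              [0.00, 0.00, 1.00]])        # annual transition matrix
utility = np.array([0.75, 0.50, 0.0])     # per-state utility weights
cost = np.array([2000.0, 6000.0, 0.0])    # per-state annual cost ($)

def run_cohort(P, utility, cost, start=0, years=60):
    """Advance a cohort distribution year by year, accumulating
    life years, quality-adjusted life years, and expected lifetime cost."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    ly = qaly = total_cost = 0.0
    for _ in range(years):
        ly += dist[:2].sum()      # expected person-years alive this cycle
        qaly += dist @ utility    # utility-weighted life years
        total_cost += dist @ cost # expected cost this cycle
        dist = dist @ P           # advance the cohort one year
    return ly, qaly, total_cost
```

Running `run_cohort(P, utility, cost)` returns life expectancy, QALYs, and lifetime cost for the starting state, which is the same accounting the abstract reports (e.g. 18.6 life years vs. 11.3 QALYs) at cohort scale.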
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, Jason C.; Parker, Graham B.
This report is the second in a series of three reports describing the potential of GE’s DR-enabled appliances to provide benefits to the utility grid. The first report described the modeling methodology used to represent the GE appliances in the GridLAB-D simulation environment and the estimated potential for peak demand reduction at various deployment levels. The third report will explore the technical capability of aggregated group actions to positively impact grid stability, including frequency and voltage regulation and spinning reserves, and the impacts on distribution feeder voltage regulation, including mitigation of fluctuations caused by high penetration of photovoltaic distributed generation. In this report, a series of analytical methods were presented to estimate the potential cost benefit of smart appliances while utilizing demand response. Previous work estimated the potential technical benefit (i.e., peak reduction) of smart appliances, while this report focuses on the monetary value of that participation. The effects on wholesale energy cost and possible additional revenue available by participating in frequency regulation and spinning reserve markets were explored.
Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.
Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse
2018-05-01
Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of velocity estimates. On the other hand, workflow becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of the power spectral Capon estimator in quantitative Doppler analysis using data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. In vivo results demonstrated that the Capon estimator can provide spectral estimates with sufficient quality for quantitative analysis using packet-based CFI acquisitions. The calculated Doppler indices were similar to the values calculated using spectrograms estimated on a commercial ultrasound scanner.
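The Capon (minimum variance) power spectral estimate on a short slow-time packet can be sketched as follows. The snapshot length, diagonal loading, and test signal are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def capon_spectrum(x, M, freqs):
    """Capon (minimum variance) power spectrum of one slow-time packet.

    x     : complex slow-time samples
    M     : filter (snapshot) length, M < len(x)
    freqs : normalized frequencies in [-0.5, 0.5]
    """
    N = len(x)
    # Sample covariance from overlapping length-M snapshots (spatial averaging only)
    snaps = np.array([x[i:i + M] for i in range(N - M + 1)])
    R = snaps.T @ snaps.conj() / snaps.shape[0]
    R += 1e-3 * np.trace(R).real / M * np.eye(M)   # diagonal loading for stability
    Rinv = np.linalg.inv(R)
    # Steering vectors, one column per frequency
    a = np.exp(2j * np.pi * np.outer(np.arange(M), freqs))
    # Capon power: P(f) = 1 / (a^H R^-1 a)
    return 1.0 / np.real(np.einsum('ij,ij->j', a.conj(), Rinv @ a))
```

On a packet of a few tens of samples this yields one velocity spectrum per range gate, from which a spectral envelope and Doppler indices could then be computed.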
Research on dynamic creep strain and settlement prediction under the subway vibration loading.
Luo, Junhui; Miao, Linchang
2016-01-01
This research aims to explore the dynamic characteristics and settlement prediction of soft soil. Accordingly, the dynamic shear modulus formula considering the vibration frequency was utilized, and a dynamic triaxial test was conducted to verify the validity of the formula. Subsequently, the formula was applied to the dynamic creep strain function, with the factors influencing the improved dynamic creep strain curve of soft soil being analyzed. Meanwhile, the variation law of dynamic stress with sampling depth was obtained through finite element simulation of a subway foundation. Furthermore, the improved dynamic creep strain curve of the soil layer was determined based on the dynamic stress. Thereafter, the long-term settlement under subway vibration loading could be estimated according to norms. The results revealed that the dynamic shear modulus formula is straightforward and practical in terms of its application to the vibration frequency. The values predicted using the improved dynamic creep strain formula were close to the experimental values, whilst the estimated settlement was close to the measured values obtained in the field test.
Estimating the Rate of Occurrence of Renal Stones in Astronauts
NASA Technical Reports Server (NTRS)
Myers, J.; Goodenow, D.; Gokoglu, S.; Kassemi, M.
2016-01-01
Changes in urine chemistry, during and post flight, potentially increase the risk of renal stones in astronauts. Although much is known about the effects of space flight on urine chemistry, no inflight incidence of renal stones in US astronauts exists, and the question "How much does this risk change with space flight?" remains difficult to answer accurately. In this discussion, we tackle this question utilizing a combination of deterministic and probabilistic modeling that implements the physics behind free stone growth and agglomeration, speciation of urine chemistry, and published observations of population renal stone incidences to estimate changes in the rate of renal stone presentation. The modeling process utilizes a Population Balance Equation based model, developed in the companion IWS abstract by Kassemi et al. (2016), to evaluate the maximum growth and agglomeration potential from a specified set of urine chemistry values. Changes in renal stone occurrence rates are obtained from this model in a probabilistic simulation that interrogates the range of possible urine chemistries using Monte Carlo techniques. Subsequently, each randomly sampled urine chemistry undergoes speciation analysis using the well-established Joint Expert Speciation System (JESS) code to calculate critical values, such as ionic strength and relative supersaturation. The Kassemi model utilizes this information to predict the mean and maximum stone size. We close the assessment loop by using a transfer function that estimates the rate of stone formation by combining the relative supersaturation and both the mean and maximum free stone growth sizes. The transfer function is established by a simulation analysis which combines population stone formation rates and Poisson regression. Training this transfer function requires using the output of the aforementioned assessment steps with inputs from known non-stone-former and known stone-former urine chemistries. 
Established in a Monte Carlo system, the entire renal stone analysis model produces a probability distribution of the stone formation rate and an expected uncertainty in the estimate. The utility of this analysis will be demonstrated by showing the change in renal stone occurrence predicted by this method using urine chemistry distributions published in Whitson et al. 2009. A comparison of the model predictions to previous assessments of renal stone risk will be used to illustrate initial validation of the model.
Collado-Mateo, Daniel; Chen, Gang; Garcia-Gordillo, Miguel A; Iezzi, Angelo; Adsuar, José C; Olivares, Pedro R; Gusi, Narcis
2017-05-30
The revised version of the Fibromyalgia Impact Questionnaire (FIQR) is one of the most widely used specific questionnaires in FM studies. However, this questionnaire does not allow calculation of QALYs, as it is not a preference-based measure. The aim of this study was to develop mapping algorithms which enable FIQR scores to be transformed into utility scores that can be used in cost-utility analyses. A cross-sectional survey was conducted. One hundred and ninety-two Spanish women with fibromyalgia were asked to complete four general quality of life questionnaires, i.e. EQ-5D-5L, 15D, AQoL-8D and SF-12, and one disease-specific instrument, the FIQR. A direct mapping approach was adopted to derive mapping algorithms between the FIQR and each of the four multi-attribute utility (MAU) instruments. Health state utility was treated as the dependent variable in the regression analysis, whilst the FIQR score and age were predictors. The mean utility scores ranged from 0.47 (AQoL-8D) to 0.69 (15D). All correlations between the FIQR total score and MAU instrument utility scores were highly significant (p < 0.0001), with magnitudes larger than 0.5. Although very slight differences in mean absolute error were found between the ordinary least squares (OLS) estimator and the generalized linear model (GLM), models based on GLM were better for EQ-5D-5L, AQoL-8D and 15D. The mapping algorithms developed in this study enable the estimation of utility values from scores on a fibromyalgia-specific questionnaire.
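A direct mapping regression of the kind described (utility regressed on FIQR score and age) can be sketched on synthetic data. All coefficients and data below are invented for illustration; this is not the published algorithm.

```python
import numpy as np

# Synthetic illustration: sample size matches the abstract (192), but the
# data-generating coefficients and noise level are invented.
rng = np.random.default_rng(1)
n = 192
fiqr = rng.uniform(0, 100, n)            # FIQR total score (0-100, higher = worse)
age = rng.uniform(30, 70, n)
true_utility = 0.9 - 0.006 * fiqr - 0.001 * (age - 50)
utility = np.clip(true_utility + rng.normal(0, 0.05, n), -0.1, 1.0)

# OLS mapping: utility = b0 + b1*FIQR + b2*age
X = np.column_stack([np.ones(n), fiqr, age])
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)
predicted = X @ beta
mae = np.abs(predicted - utility).mean()   # mean absolute error of the mapping
```

The fitted slope on FIQR is negative (worse impact maps to lower utility), and the mean absolute error is the same model-selection criterion the study used to compare OLS against GLM fits.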
ESTABLISHMENT OF A FIBRINOGEN REFERENCE INTERVAL IN ORNATE BOX TURTLES (TERRAPENE ORNATA ORNATA).
Parkinson, Lily; Olea-Popelka, Francisco; Klaphake, Eric; Dadone, Liza; Johnston, Matthew
2016-09-01
This study sought to establish a reference interval for fibrinogen in healthy ornate box turtles (Terrapene ornata ornata). A total of 48 turtles were enrolled, with 42 turtles deemed to be noninflammatory, thus fitting the inclusion criteria, and utilized to estimate a fibrinogen reference interval. Turtles were excluded based upon physical examination and blood work abnormalities. A Shapiro-Wilk normality test indicated that the noninflammatory turtle fibrinogen values were normally distributed (Gaussian distribution), with an average of 108 mg/dl and a 95% confidence interval of the mean of 97.9-117 mg/dl. The turtles excluded from the reference interval because of abnormalities affecting their health did not have significantly different fibrinogen values (P = 0.313). A reference interval for healthy ornate box turtles was calculated. Further investigation into the utility of fibrinogen measurement for clinical usage in ornate box turtles is warranted.
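The mean with a 95% confidence interval of the mean, as reported above, follows the usual normal-approximation formula; a minimal sketch (not the authors' statistical software):

```python
import math

def mean_ci95(values):
    """Sample mean with a normal-approximation 95% confidence interval of the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)   # sample variance
    half = 1.96 * math.sqrt(var / n)                       # CI half-width
    return mean, (mean - half, mean + half)
```

Note that this is the confidence interval of the *mean*; a reference interval for individual animals would instead typically span mean ± 1.96 standard deviations once normality is confirmed.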
Gray, D T; Weinstein, M C
1998-01-01
Decision and cost-utility analyses considered the tradeoffs of treating patent ductus arteriosus (PDA) using conventional surgery versus transcatheter implantation of the Rashkind occluder. Physicians and informed lay parents assigned utility scores to procedure success/complications combinations seen in prognostically similar pediatric patients with isolated PDA treated from 1982 to 1987. Utility scores multiplied by outcome frequencies from a comparative study generated expected utility values for the two approaches. Cost-utility analyses combined these results with simulated provider cost estimates from 1989. On a 0-100 scale (worst to best observed outcome), the median expected utility for surgery was 99.96, versus 98.88 for the occluder. Results of most sensitivity analyses also slightly favored surgery. Expected utility differences based on 1987 data were minimal. With a mean overall simulated cost of $8,838 for surgery vs $12,466 for the occluder, surgery was favored in most cost-utility analyses. Use of the inherently less invasive but less successful, more risky, and more costly occluder approach conferred no apparent net advantage in this study. Analyses of comparable current data would be informative.
Health State Utility Values for Age-Related Macular Degeneration: Review and Advice.
Butt, Thomas; Tufail, Adnan; Rubin, Gary
2017-02-01
Health state utility values are a major source of uncertainty in economic evaluations of interventions for age-related macular degeneration (AMD). This review identifies and critiques published utility values and methods for eliciting de novo utility values in AMD. We describe how utility values have been used in healthcare decision making and provide guidance on the choice of utility values for future economic evaluations for AMD. Literature was searched using PubMed, and health technology assessments (HTA) were searched using HTA agency websites to identify articles reporting utility values or approaches to derive utility values in AMD and articles applying utilities for use in healthcare decision making relating to treatments for AMD. A total of 70 studies qualified for data extraction, 22 of which were classified as containing utility values and/or elicitation methods, and 48 were classified as using utility values in decision making. A large number of studies have elicited utility values for AMD, although those applied to decision making have focused on a few of these. There is an appreciation of the challenges in the measurement and valuation of health states, with recent studies addressing challenges such as the insensitivity of generic health-related quality of life (HRQoL) questionnaires and utility in the worse-seeing eye. We would encourage careful consideration when choosing utility values in decision making and an explicit critique of their applicability to the decision problem.
Primary Care Physician and Patient Perceptions of Reimbursement for Total Knee and Hip Replacement.
Wiznia, Daniel H; Kim, Chang-Yeon; Wang, Yuexin; Swami, Nishwant; Pelker, Richard R
2016-07-01
The opinions of nonspecialists and patients will be important to determining reimbursements for specialists such as orthopedic surgeons. In addition, primary care physician (PCP) perceptions of reimbursements may affect utilization of orthopedic services. We distributed a web-based survey to PCPs, asking how much they believed orthopedic surgeons were reimbursed for total hip arthroplasty (THA) and total knee arthroplasty (TKA). We also proctored a paper-based survey to postoperative patients, asking how much orthopedic surgeons should be reimbursed. There was a significant difference between perceived and actual reimbursement values for THA and TKA. Hospital-affiliated PCPs estimated higher reimbursements for both THA ($1657 vs $838, P < .0001 for Medicaid and $2246 vs $1515, P = .018 for Medicare) and TKA ($1260 vs $903, P = .052 for Medicaid and $2022 vs $1514, P = .049 for Medicare). Similarly, larger practices estimated higher reimbursements for both THA ($1861 vs $838, P < .0001 for Medicaid and $2635 vs $1515, P = .004 for Medicare) and TKA ($1583 vs $903, P = .005 for Medicaid and $2380 vs $1514, P = .011 for Medicare). Compared to PCPs, patients estimated that orthopedic surgeons should be paid 4 times higher for both THA ($9787 vs $2235, P < .0001) and TKA ($9088 vs $2134, P < .0001). PCPs believe that reimbursements for orthopedic procedures are higher than actual values. The effect that these perceptions will have on efforts at cost reform and utilization of orthopedic services requires further study. Copyright © 2016 Elsevier Inc. All rights reserved.
Estimating neighborhood variability with a binary comparison matrix.
Murphy, D.L.
1985-01-01
A technique which utilizes a binary comparison matrix has been developed to implement a neighborhood function for a raster format data base. The technique assigns an index value to the center pixel of 3- by 3-pixel neighborhoods. The binary comparison matrix provides additional information not found in two other neighborhood variability statistics; the function is sensitive to both the number of classes within the neighborhood and the frequency of pixel occurrence in each of the classes. Application of the function to a spatial data base from the Kenai National Wildlife Refuge, Alaska, demonstrates 1) the numerical distribution of the index values, and 2) the spatial patterns exhibited by the numerical values.
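One plausible reading of the binary-comparison-matrix index is the count of unequal pixel pairs in each 3x3 window, which is sensitive to both the number of classes present and their frequencies. The sketch below is an illustrative reconstruction under that assumption, not Murphy's exact formulation.

```python
import numpy as np
from itertools import combinations

def bcm_index(window):
    """Variability index for a 3x3 window: the number of unequal pixel pairs
    among all 36 pairings (0 = uniform window, 36 = nine distinct classes)."""
    vals = window.ravel()
    return sum(a != b for a, b in combinations(vals, 2))

def neighborhood_variability(grid):
    """Assign the index to the center pixel of every 3x3 neighborhood
    in a raster class grid (output shrinks by one pixel on each edge)."""
    rows, cols = grid.shape
    out = np.zeros((rows - 2, cols - 2), dtype=int)
    for i in range(rows - 2):
        for j in range(cols - 2):
            out[i, j] = bcm_index(grid[i:i + 3, j:j + 3])
    return out
```

A window of a single class scores 0; a window split 5/4 between two classes scores 20; nine distinct classes score the maximum 36, so the index rises with both class count and evenness of the class frequencies.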
Modeling global mangrove soil carbon stocks: filling the gaps in coastal environments
NASA Astrophysics Data System (ADS)
Rovai, A.; Twilley, R.
2017-12-01
We provide an overview of contemporaneous global mangrove soil organic carbon (SOC) estimates, focusing on a framework to explain disproportionate differences among observed data as a way to improve global estimates. This framework is based on a former conceptual model, the coastal environmental setting, in contrast to the more popular latitude-based hypotheses largely believed to explain hemispheric variation in mangrove ecosystem properties. To demonstrate how local and regional estimates of SOC linked to coastal environmental settings can render more realistic global mangrove SOC extrapolations, we combined published and unpublished data, yielding a total of 106 studies reporting on 552 sites from 43 countries. These sites were classified into distinct coastal environmental setting types according to two concurrent worldwide typologies of nearshore coastal systems. Mangrove SOC density varied substantially across coastal environmental settings, ranging from 14.9 ± 0.8 mg cm-3 in river-dominated (deltaic) soils to 53.9 ± 1.6 mg cm-3 (mean ± SE) in karstic coastlines. Our findings reveal striking differences between published values and contemporary global mangrove SOC extrapolations based on country-level mean reference values, particularly for karstic-dominated coastlines, where mangrove SOC stocks have been underestimated by up to 50%. Correspondingly, climate-based global estimates predicted lower mangrove SOC density values (32-41 mg C cm-3) for mangroves in karstic environments, differing from published (21-126 mg C cm-3) and unpublished (47-58 mg C cm-3) values. Moreover, climate-based projections yielded higher SOC density values (27-70 mg C cm-3) for river-dominated mangroves compared to the lower ranges reported in the literature (11-24 mg C cm-3). 
We argue that this inconsistent reporting of SOC stock estimates between river-dominated and karstic coastal environmental settings is likely due to the omission of the geomorphological and geophysical environmental drivers that control C storage in coastal wetlands. We encourage the science community to utilize coastal environmental settings and new inventories of geomorphological typologies more closely, to build more robust local and regional estimates of SOC that can be extrapolated to global C estimates.
Johnson, F R; Banzhaf, M R; Desvousges, W H
2000-06-01
This study uses stated-preference (SP) analysis to measure willingness to pay (WTP) to reduce acute episodes of respiratory and cardiovascular ill health. The SP survey employs a modified version of the health state descriptions used in the Quality of Well Being (QWB) Index. The four health state attributes are symptom, episode duration, activity restrictions and cost. Preferences are elicited using two different SP formats: graded-pair and discrete-choice. The different formats cause subjects to focus on different evaluation strategies. Combining two elicitation formats yields more valid and robust estimates than using only one approach. Estimates of indirect utility function parameters are obtained using advanced panel econometrics for each format separately and jointly. Socio-economic differences in health preferences are modelled by allowing the marginal utility of money relative to health attributes to vary across respondents. Because the joint model captures the combined preference information provided by both elicitation formats, these model estimates are used to calculate WTP. The results demonstrate the feasibility of estimating meaningful WTP values for policy-relevant respiratory and cardiac symptoms, even from subjects who never have personally experienced these conditions. Furthermore, because WTP estimates are for individual components of health improvements, estimates can be aggregated in various ways depending upon policy needs. Thus, using generic health attributes facilitates transferring WTP estimates for benefit-cost analysis of a variety of potential health interventions. Copyright 2000 John Wiley & Sons, Ltd.
Naing, Cho; Poovorawan, Yong; Mak, Joon Wah; Aung, Kyan; Kamolratankul, Pirom
2015-06-01
The present study aimed to assess the cost-utility of using adjunctive recombinant activated factor VIIa (rFVIIa) in children for controlling life-threatening bleeding in dengue haemorrhagic fever (DHF)/dengue shock syndrome (DSS). We constructed a decision-tree model comparing standard care with the use of an additional adjuvant rFVIIa for controlling life-threatening bleeding in children with DHF/DSS. Cost and utility benefit were estimated from the societal perspective. The outcome measure was cost per quality-adjusted life year (QALY). Overall, treatment with adjuvant rFVIIa gained QALYs, but the total cost was higher. The incremental cost-utility ratio for the introduction of adjuvant rFVIIa was $4241.27 per additional QALY. Sensitivity analyses showed that the utility value assigned for calculation of QALYs was the most sensitive parameter. We concluded that despite the high cost, there is a role for rFVIIa in the treatment of life-threatening bleeding in patients with DHF/DSS.
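The incremental cost-utility ratio reported here ($4241.27 per additional QALY) is simply the extra cost of the new strategy divided by the extra QALYs it gains over the comparator; a one-line sketch with hypothetical inputs:

```python
def icur(cost_new, qaly_new, cost_comparator, qaly_comparator):
    """Incremental cost-utility ratio: additional cost per additional QALY."""
    return (cost_new - cost_comparator) / (qaly_new - qaly_comparator)

# Hypothetical numbers, not the study's: $5,000 more for 0.5 extra QALYs
example = icur(10000.0, 2.5, 5000.0, 2.0)   # -> 10000.0 ($ per QALY)
```

Decision makers then compare this ratio against a willingness-to-pay threshold to judge whether the added QALYs justify the added cost.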
Soil moisture data as a constraint for groundwater recharge estimation
NASA Astrophysics Data System (ADS)
Mathias, Simon A.; Sorensen, James P. R.; Butler, Adrian P.
2017-09-01
Estimating groundwater recharge rates is important for water resource management studies. Modeling approaches to forecast groundwater recharge typically require observed historic data to assist calibration. It is generally not possible to observe groundwater recharge rates directly. Therefore, in the past, much effort has been invested to record soil moisture content (SMC) data, which can be used in a water balance calculation to estimate groundwater recharge. In this context, SMC data are measured at different depths and then typically integrated with respect to depth to obtain a single set of aggregated SMC values, which are used as an estimate of the total water stored within a given soil profile. This article investigates the value of such aggregated SMC data for conditioning groundwater recharge models. A simple modeling approach is adopted, which utilizes an emulation of Richards' equation in conjunction with a soil texture pedotransfer function. The only unknown parameters are soil texture. Monte Carlo simulation is performed for four different SMC monitoring sites. The model is used to estimate both aggregated SMC and groundwater recharge. The impact of conditioning the model to the aggregated SMC data is then explored in terms of its ability to reduce the uncertainty associated with recharge estimation. Whilst uncertainty in soil texture can lead to significant uncertainty in groundwater recharge estimation, it is found that aggregated SMC is virtually insensitive to soil texture.
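The depth-aggregation step described above can be sketched as a simple trapezoidal integration of SMC over the profile; the depths and moisture values below are illustrative, not from the cited monitoring sites:

```python
# Sketch of depth-aggregating soil moisture content (SMC) measured at several
# depths into a single stored-water value. Illustrative numbers only.

depths_m = [0.1, 0.3, 0.6, 1.0]   # sensor depths (m)
smc = [0.32, 0.28, 0.24, 0.22]    # volumetric SMC (m3/m3)

# Trapezoidal integration over depth gives total stored water (m per unit area).
stored_water = sum((smc[i] + smc[i + 1]) / 2.0 * (depths_m[i + 1] - depths_m[i])
                   for i in range(len(smc) - 1))
print(round(stored_water, 3))  # → 0.23
```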
Development of a Greek solar map based on solar model estimations
NASA Astrophysics Data System (ADS)
Kambezidis, H. D.; Psiloglou, B. E.; Kavadias, K. A.; Paliatsos, A. G.; Bartzokas, A.
2016-05-01
The realization that Renewable Energy Sources (RES) are the only environmentally friendly solution for power generation has moved solar systems to the forefront of the energy market in the last decade. Installed solar power capacity doubles almost every two years in many European countries, including Greece. This rise has brought the need for reliable predictions of meteorological data that can easily be utilized for proper RES-site allocation. The absence of solar measurements has, therefore, raised the demand for deploying a suitable model in order to create a solar map. A solar map of Greece could provide solid foundations for predicting the energy production of a solar power plant installed in the area, by providing an estimate of the solar energy received at each longitude and latitude of the map. In the present work, the well-known Meteorological Radiation Model (MRM), a broadband solar radiation model, is engaged. This model utilizes common meteorological data, such as air temperature, relative humidity, barometric pressure and sunshine duration, in order to calculate solar radiation for areas where radiation measurements are not available. Hourly values of the above meteorological parameters are acquired from 39 meteorological stations, evenly dispersed around Greece; hourly values of solar radiation are calculated with MRM. Then, by using an integrated spatial interpolation method, a Greek solar energy map is generated, providing annual solar energy values all over Greece.
NASA Technical Reports Server (NTRS)
York, P.; Labell, R. W.
1980-01-01
An aircraft wing weight estimating method based on a component buildup technique is described. A simplified analytically derived beam model, modified by a regression analysis, is used to estimate the wing box weight, utilizing a data base of 50 actual airplane wing weights. Factors representing materials and methods of construction were derived and incorporated into the basic wing box equations. Weight penalties to the wing box for fuel, engines, landing gear, stores and fold or pivot are also included. Methods for estimating the weight of additional items (secondary structure, control surfaces) have the option of using details available at the design stage (i.e., wing box area, flap area) or default values based on actual aircraft from the data base.
Halsteinli, Vidar; Kittelsen, Sverre A; Magnussen, Jon
2010-02-01
The performance of health service providers may be monitored by measuring productivity. However, the policy value of such measures may depend crucially on the accuracy of input and output measures. In particular, an important question is how to adjust adequately for case-mix in the production of health care. In this study, we assess productivity growth in Norwegian outpatient child and adolescent mental health service units (CAMHS) over a period characterized by governmental utilization of simple productivity indices, a substantial increase in capacity and a concurrent change in case-mix. We analyze the sensitivity of the productivity growth estimates using different specifications of output to adjust for case-mix differences. Case-mix adjustment is achieved by distributing patients into eight groups depending on reason for referral, age and gender, as well as correcting for the number of consultations. We utilize the nonparametric Data Envelopment Analysis (DEA) method to implicitly calculate weights that maximize each unit's efficiency. Malmquist indices of technical productivity growth are estimated and bootstrap procedures are performed to calculate confidence intervals and to test alternative specifications of outputs. The dataset consists of an unbalanced panel of 48-60 CAMHS in the period 1998-2006. The mean productivity growth estimate from a simple unadjusted patient model (one single output) is 35%; adjusting for case-mix (eight outputs) reduces the growth estimate to 15%. Adding consultations increases the estimate to 28%. The latter reflects an increase in the number of consultations per patient. We find that the governmental productivity indices strongly tend to overestimate productivity growth. Case-mix adjustment is of major importance, and governmental utilization of performance indicators necessitates careful consideration of output specifications. Copyright 2009 Elsevier Ltd. All rights reserved.
Capacity value of energy storage considering control strategies
Luo, Yi
2017-01-01
In power systems, energy storage effectively improves the reliability of the system and smooths out the fluctuations of intermittent energy. However, the installed capacity of energy storage is not an effective measure of its contribution to the generation adequacy of power systems. To achieve a variety of purposes, several control strategies may be utilized in energy storage systems. The purpose of this paper is to study the influence of different energy storage control strategies on generation adequacy. This paper presents the capacity value of energy storage to quantitatively estimate the contribution of energy storage to generation adequacy. Four different control strategies are considered in the experimental method to study the capacity value of energy storage. Finally, an analysis of the factors influencing the capacity value under different control strategies is given.
Dilokthornsakul, P; Sawangjit, R; Inprasong, C; Chunhasewee, S; Rattanapan, P; Thoopputra, T; Chaiyakunapruk, N
2016-01-01
Stevens-Johnson syndrome (SJS) and Toxic Epidermal Necrolysis (TEN) are life-threatening dermatologic conditions. Although the incidence of SJS/TEN in Thailand is high, information on cost of care for SJS/TEN is limited. This study aims to estimate healthcare resource utilization and cost of SJS/TEN in Thailand from a hospital perspective. A retrospective study using an electronic health database from a university-affiliated hospital in Thailand was undertaken. Patients admitted with SJS/TEN from 2002 to 2007 were included. Direct medical cost was estimated by the cost-to-charge ratio. Cost was converted to 2013 value by the consumer price index, and converted to $US at 31 Baht per $US. Healthcare resource utilization was also estimated. A total of 157 patients were included, with an average age of 45.3±23.0 years. Of these, 146 patients (93.0%) were diagnosed with SJS and the remainder (7.0%) with TEN. Most of the patients (83.4%) were treated with systemic corticosteroids. Overall mortality was 8.3%, while the average length of stay (LOS) was 10.1±13.2 days. The average cost of managing SJS/TEN for all patients was $1,064±$2,558. The average cost for SJS patients was $1,019±$2,601 while that for TEN patients was $1,660±$1,887. Healthcare resource utilization and cost of care for SJS/TEN in Thailand were substantial. The findings are important for policy makers to allocate healthcare resources and develop strategies to prevent SJS/TEN, which could decrease length of stay and cost of care.
A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
1997-01-01
Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to ensure safety) can result in extra material usage and hence uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither overly conservative nor unconservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. Answering the question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.
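The three-sigma method criticized above can be sketched in a few lines; the sample strength data are illustrative, not from any real test campaign, and the result is only valid under the normality assumption the abstract questions:

```python
import statistics

# Sketch of the "three-sigma" design-allowable approach: values set three
# standard deviations from the mean. Valid only for (near-)normal data.
# Sample measurements below are illustrative placeholders.

strengths = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 103.1, 100.1]
mu = statistics.mean(strengths)
sigma = statistics.stdev(strengths)      # sample standard deviation

lower_allowable = mu - 3.0 * sigma       # e.g., minimum design strength
upper_extreme = mu + 3.0 * sigma         # e.g., maximum design load
print(lower_allowable < mu < upper_extreme)  # → True
```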
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
NASA Astrophysics Data System (ADS)
Medrano, Nicolas W.
Ambient air pollution is a major issue in urban environments, causing negative health impacts and increasing costs for metropolitan economies. Vegetation has been shown to remove these pollutants at a substantial rate. This study utilizes the i-Tree Eco (UFORE) and i-Tree Canopy models to estimate air pollution removal services provided by trees in Government Canyon State Natural Area (GCSNA), an approximately 4,700-hectare area in San Antonio, Texas. For i-Tree Eco, a stratified project of the five prominent vegetation types was completed. A comparison of removal services provided by vegetation communities indicated there was no significant difference in removal rates. Total pollution removal of GCSNA was estimated to be 239.52 metric tons/year at a rate of 64.42 kg/ha of tree cover/year. By applying this value to the area within Bexar County, Texas belonging to the Balcones Canyonlands ecoregion, it was determined that for 2013 an estimated 2,598.45 metric tons/year of air pollution was removed, at a health value to society of $19.4 million. This is a reduction in pollution removal services since 2003, in which 3,050.35 metric tons/year were removed at a health value of $22.8 million. These results suggest urban sprawl taking place in San Antonio is reducing air pollution removal services provided by trees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Mabrey, J.B.
1994-07-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package
NASA Astrophysics Data System (ADS)
Cheng, L.; AghaKouchak, A.; Gilleland, E.
2013-12-01
Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and non-stationary assumptions. The Nonstationary Extreme Value Analysis (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC), which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and has the advantage of simplicity, speed of calculation and convergence over conventional MCMC. NEVA also offers confidence intervals and uncertainty bounds for estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization and visualization, explicitly designed to facilitate analysis of extremes in geosciences. The generalized input and output files of this software package make it attractive for users from across different fields. Both stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
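A minimal stationary baseline for the return-level machinery described above can be sketched with a Gumbel fit (a GEV with zero shape) by the method of moments; this is only a simple stand-in for NEVA's Bayesian GEV inference, and the synthetic annual maxima are illustrative:

```python
import math
import random

# Stationary Gumbel return-level sketch, method-of-moments fit.
# Synthetic annual maxima; all numbers illustrative.

random.seed(1)
annual_maxima = [50.0 + 10.0 * (-math.log(-math.log(random.random())))
                 for _ in range(200)]

n = len(annual_maxima)
mean = sum(annual_maxima) / n
var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
scale = math.sqrt(6.0 * var) / math.pi   # Gumbel scale from the std. deviation
loc = mean - 0.5772 * scale              # location via Euler-Mascheroni constant

def return_level(T: float) -> float:
    """Level exceeded on average once every T years under the fitted Gumbel."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))

print(return_level(100) > return_level(10) > mean)  # → True
```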
NASA Technical Reports Server (NTRS)
Houborg, Rasmus; Anderson, Martha; Kustas, Bill; Rodell, Matthew
2011-01-01
This study investigates the utility of integrating remotely sensed estimates of leaf chlorophyll (C(sub ab)) into a thermal-based Two-Source Energy Balance (TSEB) model that estimates land-surface CO2 and energy fluxes using an analytical, light-use-efficiency (LUE) based model of canopy resistance. Day to day variations in nominal LUE (LUE(sub n)) were assessed for a corn crop field in Maryland U.S.A. through model calibration with CO2 flux tower observations. The optimized daily LUE(sub n) values were then compared to estimates of C(sub ab) integrated from gridded maps of chlorophyll content weighted over the tower flux source area. Changes in Cab exhibited a curvilinear relationship with corresponding changes in daily calibrated LUE(sub n) values derived from the tower flux data, and hourly water, energy and carbon flux estimation accuracies from TSEB were significantly improved when using C(sub ab) for delineating spatio-temporal variations in LUE(sub n). The results demonstrate the synergy between thermal infrared and shortwave reflective wavebands in producing valuable remote sensing data for monitoring of carbon and water fluxes.
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
Static terrestrial laser scanning of juvenile understory trees for field phenotyping
NASA Astrophysics Data System (ADS)
Wang, Huanhuan; Lin, Yi
2014-11-01
This study attempted to apply the cutting-edge 3D remote sensing technique of static terrestrial laser scanning (TLS) to parametric 3D reconstruction of juvenile understory trees. The test data were collected with a Leica HDS6100 TLS system in single-scan mode. The geometrical structures of juvenile understory trees are extracted by model fitting. Cones are used to model trunks and branches. Principal component analysis (PCA) is adopted to calculate their major axes. Coordinate transformation and orthogonal projection are used to estimate the parameters of the cones. Then, AutoCAD is utilized to simulate the morphological characteristics of the understory trees, and to add secondary branches and leaves in a random way. Comparison of the reference and estimated values yields a regression equation showing that the proposed parameter-extraction algorithm is credible. The results basically verify the applicability of TLS for field phenotyping of juvenile understory trees.
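The PCA step for estimating a trunk or branch major axis can be sketched as follows; the synthetic point cloud below stands in for TLS returns along a tilted trunk and is not data from the study:

```python
import numpy as np

# Sketch of the PCA step: the major axis of a trunk/branch point cluster is
# the leading singular vector of the centered point cloud. Synthetic data.

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0, size=300)                    # positions along the trunk
axis_true = np.array([0.1, 0.05, 1.0])
axis_true = axis_true / np.linalg.norm(axis_true)      # true (tilted) axis
points = t[:, None] * axis_true + rng.normal(0.0, 0.02, size=(300, 3))

centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
major_axis = vt[0]                                     # leading principal direction

# Alignment with the true axis (the sign of a principal direction is arbitrary).
print(abs(float(major_axis @ axis_true)) > 0.99)  # → True
```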
Methods and statistics for combining motif match scores.
Bailey, T L; Gribskov, M
1998-01-01
Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
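The "product of p-values" combination (method 3 above) has a closed-form significance for independent uniform p-values, which can be sketched as follows; this is the standard product-of-uniforms formula, not code from MAST itself:

```python
import math

# Combined significance of a product of n independent uniform p-values:
# P(product <= p) = p * sum_{i=0}^{n-1} (ln(1/p))^i / i!

def combined_p(p_values):
    prod = math.prod(p_values)             # each p in (0, 1]
    log_inv = -math.log(prod)              # ln(1/p)
    return prod * sum(log_inv ** i / math.factorial(i)
                      for i in range(len(p_values)))

print(round(combined_p([0.01, 0.5]), 4))   # → 0.0315
```

Note that the combined significance (0.0315) is larger than the raw product (0.005): the correction accounts for how small a product of several uniforms tends to be by chance.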
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
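The final fitting step described above can be sketched with a least-squares fit to the visible boundary points; a circle (Kasa) fit is used here as a simplified stand-in for the paper's ellipse fit, and the boundary points are synthetic, with 40% of the boundary treated as occluded and discarded:

```python
import math
import numpy as np

# Kasa least-squares circle fit to a partially visible pupil boundary.
# Synthetic data; a circle stands in for the ellipse model of the algorithm.

cx_true, cy_true, R = 5.0, 5.0, 2.0
visible = [(cx_true + R * math.cos(a), cy_true + R * math.sin(a))
           for a in (i * 2.0 * math.pi / 100.0 for i in range(60))]  # 60% visible

# Fit x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense.
A = np.array([[x, y, 1.0] for x, y in visible])
rhs = np.array([-(x * x + y * y) for x, y in visible])
(a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
cx_est, cy_est = -a / 2.0, -b / 2.0
print(round(float(cx_est), 3), round(float(cy_est), 3))  # → 5.0 5.0
```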
Selection of latent variables for multiple mixed-outcome models
ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI
2014-01-01
Latent variable models have been widely used for modeling the dependence structure of multiple-outcome data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes of varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them.
[Bioimpedometry and its utilization in dialysis therapy].
Lopot, František
2016-01-01
Measurement of living tissue impedance - bioimpedometry - started to be used in medicine some 50 years ago, at first exclusively for estimation of extracellular and intracellular compartment volumes. Its simplest single-frequency (50 kHz) version works directly with the measured impedance vector. Technically more sophisticated versions convert the measured impedance into volumes of the different body fluid compartments and also calculate principal markers of nutritional status (lean body mass, adipose tissue mass). The latest version, developed specifically for application in dialysis patients, includes body composition modelling and even provides an absolute value of overhydration (excess fluid). The use of bioimpedance for more precise estimation of residual glomerular filtration is still in the experimental phase. Segmental bioimpedance measurement, which should enable separate assessment of the hydration status of the trunk segment and of the ultrafiltration capacity of the peritoneum in peritoneal dialysis patients, is also not yet standardized. Key words: assessment - bioimpedance - excess fluid - fluid status - glomerular filtration - haemodialysis - nutritional status - peritoneal dialysis.
Morlière, Camille; Verpillot, Elise; Donon, Laurence; Salmi, Louis-Rachid; Joseph, Pierre-Alain; Vignes, Jean-Rodolphe; Bénard, Antoine
2015-12-01
Sacral anterior root stimulation (SARS) and posterior sacral rhizotomy restore the ability to urinate on demand with low residual volumes, which is key to preventing urinary complications, which account for 10% of the causes of death in patients with spinal cord injury and a neurogenic bladder. Nevertheless, comparative cost-effectiveness results over a long time horizon are lacking to adequately inform reimbursement decisions. This study aimed to estimate the long-term cost-utility of SARS using the Finetech-Brindley device compared with medical treatment (anticholinergics+catheterization). The study used a Markov model with a 10-year time horizon, with four irreversible states: (1) initial treatment, (2) year 1 of surgery for urinary complication, (3) year >1 of surgery for urinary complication, and (4) death; and two reversible states: urinary calculi and Finetech-Brindley device failures. The sample consisted of theoretical cohorts of patients with a complete spinal cord lesion of ≥1 year's duration and a neurogenic bladder. Effectiveness was expressed as quality-adjusted life years (QALYs). Costs were valued in EUR 2013 from the perspective of the French health system. A systematic review and meta-analyses were performed to estimate transition probabilities and QALYs. Costs were estimated from the literature and through simulations using the 2013 French prospective payment system classification. Probabilistic analyses were conducted to handle parameter uncertainty. In the base case analysis (2.5% discount rate), the cost-utility ratio was 12,710 EUR per QALY gained. At a threshold of 30,000 EUR per QALY, the probability of SARS being cost-effective compared with medical treatment was 60%. 
If the French Healthcare System reimbursed SARS for 80 patients per year during 10 years (anticipated target population), the expected incremental net health benefit would be 174 QALYs, and the expected value of perfect information (EVPI) would be 4.735 million EUR. The highest partial EVPI is reached for utility values and costs (1.3-1.6 million EUR). Our model shows that SARS using Finetech-Brindley device offers the most important benefit and should be considered cost-effective at a cost-effectiveness threshold of 30,000 EUR per QALY. Despite a high uncertainty, EVPI and partial EVPI may indicate that further research would not be profitable to inform decision-making. Copyright © 2015 Elsevier Inc. All rights reserved.
Quantifying confidence in density functional theory predictions of magnetic ground states
NASA Astrophysics Data System (ADS)
Houchins, Gregory; Viswanathan, Venkatasubramanian
2017-10-01
Density functional theory (DFT) simulations, at the generalized gradient approximation (GGA) level, are routinely used for material discovery based on high-throughput descriptor-based searches. The success of descriptor-based material design relies on eliminating bad candidates and keeping good candidates for further investigation. While DFT has been widely successful at the former, oftentimes good candidates are lost due to the uncertainty associated with DFT-predicted material properties. Uncertainty associated with DFT predictions has gained prominence and has led to the development of exchange correlation functionals that have built-in error estimation capability. In this work, we demonstrate the use of the built-in error estimation capabilities of the BEEF-vdW exchange correlation functional for quantifying the uncertainty associated with the magnetic ground state of solids. We demonstrate this approach by calculating the uncertainty estimate for the energy difference between the different magnetic states of solids and comparing it against a range of GGA exchange correlation functionals, as is done in many first-principles calculations of materials. We show that this estimate reasonably bounds the range of values obtained with the different GGA functionals. The estimate is determined as a postprocessing step and thus provides a computationally robust and systematic approach to estimating the uncertainty associated with predictions of magnetic ground states. We define a confidence value (c-value) that incorporates all calculated magnetic states in order to quantify the concurrence of the prediction at the GGA level and argue that predictions of magnetic ground states from GGA-level DFT are incomplete without an accompanying c-value. We demonstrate the utility of this method using a case study of Li-ion and Na-ion cathode materials; the c-value metric correctly identifies that GGA-level DFT will have low predictability for NaFePO4F. 
Further, there needs to be a systematic test of a collection of plausible magnetic states, especially in identifying antiferromagnetic (AFM) ground states. We believe that our approach of estimating uncertainty can be readily incorporated into all high-throughput computational material discovery efforts and this will lead to a dramatic increase in the likelihood of finding good candidate materials.
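A confidence value in the spirit of the approach above can be sketched as the fraction of a functional ensemble agreeing with the best-estimate ground state; the sampled energies below are synthetic placeholders, not BEEF-vdW output:

```python
import numpy as np

# Illustrative c-value sketch: ensemble agreement on the magnetic ground
# state. Energies are synthetic, not actual DFT output.

rng = np.random.default_rng(7)
e_fm = rng.normal(loc=-3.50, scale=0.05, size=2000)   # ensemble energies, FM state (eV)
e_afm = rng.normal(loc=-3.45, scale=0.05, size=2000)  # ensemble energies, AFM state (eV)

fm_is_best = e_fm.mean() < e_afm.mean()               # best-estimate ground state
c_value = float(np.mean((e_fm < e_afm) == fm_is_best))  # fraction agreeing
print(bool(fm_is_best), c_value > 0.5)
```

A c-value near 1 indicates the ensemble overwhelmingly agrees on the ground state; a value near 0.5 flags a prediction that should not be trusted at the GGA level.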
Value of Landsat in urban water resources planning
NASA Technical Reports Server (NTRS)
Jackson, T. J.; Ragan, R. M.
1977-01-01
The objective of the reported investigation was to evaluate the utility of satellite multispectral remote sensing in urban water resources planning. The results are presented of a study conducted to determine the economic impact of Landsat data. The use of Landsat data to estimate hydrologic model parameters employed in urban water resources planning is discussed. A decision on employing Landsat data must consider the tradeoff between data accuracy and cost; Bayesian decision theory is used in this connection. It is concluded that computer-aided interpretation of Landsat data is a highly cost-effective method of estimating the percentage of impervious area.
Ruiz, Juan Gabriel; Charpak, Nathalie; Castillo, Mario; Bernal, Astrid; Ríos, John; Trujillo, Tammy; Córdoba, María Adelaida
2017-06-01
Although kangaroo mother care (KMC) has been shown to be safe and effective in randomized controlled trials (RCTs), there are no published complete economic evaluations including the three components of the full intervention. A cost-utility analysis was performed on the results of an RCT conducted in Bogotá, Colombia between 1993 and 1996. Hospital and ambulatory costs were estimated by microcosting in a sample of preterm infants from a University Hospital in Bogotá in 2011 and at a KMC clinic in the same period. Utility scores were assigned by experts by means of (1) direct ordering and scoring of discrete health states and (2) constructing a multi-attribute utility function. Ninety-five percent confidence intervals (CIs) for the incremental cost-utility ratios (ICURs) were computed by Fieller's theorem. One-way sensitivity analysis on price estimates for valuing costs was performed. The ICUR at 1 year of corrected age was $-1,546 per extra quality-adjusted life year gained using the KMC method (95% CI $-7,963 to $4,910). In Bogotá, the use of KMC is dominant: more effective and cost-saving. Although results from an economic analysis should not be extrapolated to different systems and communities, this dominant result suggests that KMC could be cost-effective in similar low- and middle-income country settings. Copyright © 2016 Elsevier Inc. All rights reserved.
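The Fieller-theorem confidence interval named above can be sketched as the roots of a quadratic in the ratio; the means, variances, and covariance used here are illustrative placeholders, not estimates from the trial:

```python
import math

# Fieller CI for a ratio of means R = dc/de (e.g., incremental cost per
# incremental QALY). All inputs are hypothetical.

def fieller_ci(dc, de, var_c, var_e, cov, z=1.96):
    """Roots of (dc - R*de)^2 = z^2 * (var_c - 2*R*cov + R^2*var_e) in R."""
    a = de * de - z * z * var_e
    b = -2.0 * (dc * de - z * z * cov)
    c = dc * dc - z * z * var_c
    disc = b * b - 4.0 * a * c
    if a <= 0 or disc < 0:
        return None  # denominator not significantly nonzero: CI is unbounded
    lo = (-b - math.sqrt(disc)) / (2.0 * a)
    hi = (-b + math.sqrt(disc)) / (2.0 * a)
    return lo, hi

ci = fieller_ci(dc=-1546.0, de=1.0, var_c=4.0e6, var_e=0.04, cov=0.0)
print(ci is not None and ci[0] < -1546.0 < ci[1])  # point estimate lies inside
```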
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, P.; Hummon, M.
2013-02-01
Concentrating solar power (CSP) deployed with thermal energy storage (TES) provides a dispatchable source of renewable energy. The value of CSP with TES, as with other potential generation resources, needs to be established using traditional utility planning tools. Production cost models, which simulate the operation of the grid, are often used to estimate the operational value of different generation mixes. CSP with TES has historically had limited analysis in commercial production simulations. This document describes the implementation of CSP with TES in a commercial production cost model. It also describes the simulation of grid operations with CSP in a test system consisting of two balancing areas located primarily in Colorado.
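As a minimal caricature of how a production cost model credits a dispatchable resource, the sketch below dispatches a fixed budget of stored CSP/TES energy into the highest-priced hours. Real production cost models co-optimize unit commitment and dispatch across the whole system; all prices and limits here are hypothetical.

```python
# Toy sketch of operational value for a dispatchable resource: sell a
# limited energy budget into the most expensive hours. Real production
# cost models co-optimize the whole grid; these numbers are invented.

def dispatch_value(prices, energy_mwh, max_mw):
    # Greedy: dispatch up to max_mw in the highest-priced hours until
    # the stored energy budget is exhausted.
    value = 0.0
    remaining = energy_mwh
    for p in sorted(prices, reverse=True):
        if remaining <= 0:
            break
        e = min(max_mw, remaining)
        value += e * p
        remaining -= e
    return value

hourly_prices = [20, 35, 90, 140, 60, 25]   # $/MWh over six hours
print(dispatch_value(hourly_prices, energy_mwh=200, max_mw=100))
```

The same energy delivered in the two cheapest hours would earn far less, which is the sense in which storage makes CSP output more valuable than its energy alone.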
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, P.; Hummon, M.
2012-11-01
Concentrating solar power (CSP) deployed with thermal energy storage (TES) provides a dispatchable source of renewable energy. The value of CSP with TES, as with other potential generation resources, needs to be established using traditional utility planning tools. Production cost models, which simulate the operation of the grid, are often used to estimate the operational value of different generation mixes. CSP with TES has historically had limited analysis in commercial production simulations. This document describes the implementation of CSP with TES in a commercial production cost model. It also describes the simulation of grid operations with CSP in a test system consisting of two balancing areas located primarily in Colorado.
Offshore Storage Resource Assessment - Final Scientific/Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, Bill; Ozgen, Chet
The DOE-developed volumetric equation for estimating prospective resources (CO2 storage) in oil and gas reservoirs was utilized on each depleted field in the Federal GOM. This required assessment of the in-situ hydrocarbon fluid volumes for the fields under evaluation in order to apply the DOE equation. This project utilized public data from the U.S. Department of the Interior, Bureau of Ocean Energy Management (BOEM) Reserves database and from a well-reputed large database (250,000+ wells) of GOM well and production data marketed by IHS, Inc. IHS-interpreted structure map files were also accessed for a limited number of fields. The databases were used along with geological and petrophysical software to identify depleted oil and gas fields in the Federal GOM region. BOEM arranged for access by the project team to proprietary reservoir-level maps under an NDA. Review of BOEM's Reserves database as of December 31, 2013 indicated that 675 fields in the region were depleted. NITEC identified and ranked these 675 fields, containing 3,514 individual reservoirs, based on BOEM's estimated OOIP or OGIP values available in the Reserves database. The estimated BOEM OOIP or OGIP values for five fields were validated by an independent evaluation using available petrophysical, geologic and engineering data in the databases. Once this validation was successfully completed, the BOEM-ranked list was used to calculate the estimated CO2 storage volume for each field/reservoir using the DOE CO2 Resource Estimate Equation. This calculation assumed a range for the CO2 efficiency factor in the equation, as it was not known at that point in time. NITEC then utilized reservoir simulation to further enhance and refine the DOE-equation-estimated range of CO2 storage volumes. NITEC used a purpose-built, publicly available, 4-component, compositional reservoir simulator developed under funding from DOE (DE-FE0006015) to assess CO2-EOR and CO2 storage in 73 fields/461 reservoirs.
This simulator was fast and easy to utilize and provided a valuable enhanced assessment and refinement of the estimated CO2 storage volume for each reservoir simulated. The user interface was expanded to allow for calculation of a probability-based assessment of the CO2 storage volume based on typical uncertainties in operating conditions and reservoir properties during the CO2 injection period. This modeling of the CO2 storage estimates for the simulated reservoirs resulted in the definition of correlations applicable to all reservoir types (a refined DOE equation), which can be used for predictive purposes using available public data. Application of the correlations to the 675 depleted fields yielded a total CO2 storage capacity of 4,748 MM tons. The CO2 storage assessments were supplemented with simulation modeling of eleven (11) oil reservoirs that quantified the change in stored CO2 volume with the addition of CO2-EOR (Enhanced Oil Recovery) production. Application of CO2-EOR to oil reservoirs resulted in higher volumes of CO2 storage.
Sabatelli, Lorenzo
2016-01-01
Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand. PMID:26999511
Melching, C.S.; Marquardt, J.S.
1997-01-01
Design hydrographs computed from design storms, simple models of abstractions (interception, depression storage, and infiltration), and synthetic unit hydrographs provide vital information for stormwater, flood-plain, and water-resources management throughout the United States. Rainfall and runoff data for small watersheds in Lake County collected between 1990 and 1995 were studied to develop equations for estimation of synthetic unit-hydrograph parameters on the basis of watershed and storm characteristics. The synthetic unit-hydrograph parameters of interest were the time of concentration (TC) and watershed-storage coefficient (R) for the Clark unit-hydrograph method, the unit-graph lag (UL) for the Soil Conservation Service (now known as the Natural Resources Conservation Service) dimensionless unit hydrograph, and the hydrograph-time lag (TL) for the linear-reservoir method for unit-hydrograph estimation. Data from 66 storms with effective-precipitation depths greater than 0.4 inches on 9 small watersheds (areas between 0.06 and 37 square miles (mi2)) were utilized to develop the estimation equations, and data from 11 storms on 8 of these watersheds were utilized to verify (test) the estimation equations. The synthetic unit-hydrograph parameters were determined by calibration using the U.S. Army Corps of Engineers Flood Hydrograph Package HEC-1 (TC, R, and UL) or by manual analysis of the rainfall and run-off data (TL). The relation between synthetic unit-hydrograph parameters, and watershed and storm characteristics was determined by multiple linear regression of the logarithms of the parameters and characteristics. Separate sets of equations were developed with watershed area and main channel length as the starting parameters. Percentage of impervious cover, main channel slope, and depth of effective precipitation also were identified as important characteristics for estimation of synthetic unit-hydrograph parameters. 
The estimation equations utilizing area had multiple correlation coefficients of 0.873, 0.961, 0.968, and 0.963 for TC, R, UL, and TL, respectively, and the estimation equations utilizing main channel length had multiple correlation coefficients of 0.845, 0.957, 0.961, and 0.963 for TC, R, UL, and TL, respectively. Simulation of the measured hydrographs for the verification storms utilizing TC and R obtained from the estimation equations yielded good results without calibration. The peak discharge for 8 of the 11 storms was estimated within 25 percent and the time-to-peak discharge for 10 of the 11 storms was estimated within 20 percent. Thus, application of the estimation equations to determine synthetic unit-hydrograph parameters for design-storm simulation may result in reliable design hydrographs, as long as the physical characteristics of the watersheds under consideration are within the range of those for the watersheds in this study (area: 0.06-37 mi2, main channel length: 0.33-16.6 miles, main channel slope: 3.13-55.3 feet per mile, and percentage of impervious cover: 7.32-40.6 percent). The estimation equations are most reliable when applied to watersheds with areas less than 25 mi2.
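The multiple linear regression of logarithms described above can be sketched for a single predictor: a power-law relation such as TC = a * A^b becomes linear after taking logs, so ordinary least squares on the logs recovers the coefficients. The data below are synthetic and exactly collinear; the relation and units are illustrative, not the report's fitted equations.

```python
import math

# Minimal sketch of the fitting approach described above: a power-law
# relation TC = a * A^b is linear in log space, so its coefficients can
# be estimated by ordinary least squares on the logarithms. The data
# points here are synthetic and collinear purely for illustration.

def fit_power_law(areas, tcs):
    xs = [math.log10(a) for a in areas]
    ys = [math.log10(t) for t in tcs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    log_a = my - b * mx
    return 10 ** log_a, b  # TC = a * A^b

# Synthetic data generated from TC = 2 * A^0.5 (hypothetical units).
areas = [1.0, 4.0, 16.0, 64.0]
tcs = [2.0 * a ** 0.5 for a in areas]
a, b = fit_power_law(areas, tcs)
print(a, b)  # recovers roughly a = 2, b = 0.5
```

With several predictors (area or length plus slope, impervious cover, and precipitation depth, as in the study) the same idea generalizes to multiple regression on the logarithms.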
Yoshida, Wako; Dolan, Ray J.; Friston, Karl J.
2008-01-01
This paper introduces a model of ‘theory of mind’, namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimum control and game theory to provide a ‘game theory of mind’. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and quantify their sophistication on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a ‘stag-hunt’. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents. This may be relevant ethologically in hierarchal game theory and coevolution. PMID:19112488
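The paper's recursive value-function scheme is richer than can be shown briefly, but the core idea of bounded recursion has a familiar toy form: a "level-k" player best-responds to a model of a level-(k-1) opponent. The stag-hunt payoffs below are hypothetical and the sketch covers only one-shot play, not the paper's sequential games or sophistication inference.

```python
# Toy illustration of bounded recursive reasoning ("level-k") in a
# one-shot stag hunt. The paper's model is richer (recursive value
# functions over sequential play); this sketch only shows the idea of
# best-responding to a model of the opponent's sophistication.
# Payoffs are hypothetical: my payoff given (my_action, their_action).

PAYOFF = {
    ("stag", "stag"): 9, ("stag", "hare"): 0,
    ("hare", "stag"): 8, ("hare", "hare"): 7,
}
ACTIONS = ("stag", "hare")

def best_response(opponent_policy):
    """opponent_policy: dict action -> probability."""
    def ev(a):
        return sum(p * PAYOFF[(a, o)] for o, p in opponent_policy.items())
    return max(ACTIONS, key=ev)

def level_k_action(k):
    # Level 0 acts uniformly at random; level k best-responds to k-1.
    policy = {a: 0.5 for a in ACTIONS}          # level-0 model
    action = None
    for _ in range(k):
        action = best_response(policy)
        policy = {a: float(a == action) for a in ACTIONS}
    return action

print(level_k_action(1))  # a level-1 player hedges against level-0 noise
```

Inferring an opponent's sophistication, as in the paper, amounts to comparing how well different values of k (or models with no recursion at all) explain observed choices.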
Gregory E. Frey; D. Evan Mercer; Frederick W. Cubbage; Robert C. Abt
2010-01-01
The Lower Mississippi River Alluvial Valley (LMAV), once was the largest forested bottom-land area in the continental United States, but has undergone widespread loss of forest through conversion to farmland. Restoration of forest functions and values has been a key conservation goal in the LMAV since the 1970s. This study utilizes a partial differential real options...
NASA Technical Reports Server (NTRS)
Adler, Robert; Huffman, George; Xie, Ping Ping; Rudolf, Bruno; Gruber, Arnold; Janowiak, John
1999-01-01
A new 20-year, monthly, globally complete precipitation analysis has been completed as part of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP). This Version 2 of the community-generated data set is a result of combining the procedures and data sets as described. The global, monthly, 2.5 x 2.5 degree latitude-longitude product utilizes precipitation estimates from low-orbit microwave sensors (SSM/I) and geosynchronous IR sensors, and raingauge information over land. The low-orbit microwave estimates are used to adjust or correct the geosynchronous IR estimates, thereby maximizing the utility of the more physically based microwave estimates and the finer time sampling of the geosynchronous observations. Information from raingauges is blended into the analyses over land. In the 1986-present period TOVS-based precipitation estimates are adjusted to GPCP fields and used in polar regions to produce globally complete results. The extension back to 1979 utilizes the procedures of Xie and Arkin and their OLR Precipitation Index (OPI). The 20-year climatology of the Version 2 GPCP analysis indicates the expected features of a very strong Pacific Ocean ITCZ and SPCZ with maximum 20-year means approaching 10 mm/day. A similar-strength maximum over land is evident over Borneo. Weaker maxima in the tropics occur in the Atlantic ITCZ and over South America and Africa. In mid-latitudes of the Northern Hemisphere the Western Pacific and Western Atlantic maxima have values of approximately 7 mm/day, while in the Southern Hemisphere the mid-latitude maxima are located southeast of Africa, in the mid-Pacific as an extension of the SPCZ, and southeast of South America. In terms of global totals the GPCP analysis shows 2.7 mm/day (3.0 mm/day over ocean; 2.1 mm/day over land), similar to the Jaeger climatology, but not other climatologies. Zonal averages peak at 6 mm/day at 7 deg N with mid-latitude peaks of about 3 mm/day at 40-45 deg latitude.
Poleward of 45 deg the GPCP analysis shows larger zonally averaged values than most previous satellite-based estimates, although the values are similar to the Jaeger climatology. Over both ocean areas and at high latitudes the analysis requires additional validation and comparison with special, independent data sets from field experiments and from the Tropical Rainfall Measuring Mission (TRMM) to confirm the absolute magnitude and variations of precipitation seen in the analysis. Interannual and other variations of the global fields will be shown, focusing on the recent ('97-'99) ENSO event compared with previous events, including teleconnections at mid and high latitudes. An ENSO Precipitation Index (ESPI) calculated using the new data set will be described and related to the evolution of the ENSO events during the 20-year period.
Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew
2012-01-01
The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.
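The aggregation step behind IT-TI can be sketched very simply: each independent trajectory yields one (noisy) TI estimate, and the ensemble mean with its standard error replaces any single-run value. The ΔG numbers below are invented for illustration.

```python
import statistics

# Sketch of the aggregation idea behind IT-TI: several independent TI
# runs each yield a noisy free energy estimate, and the ensemble mean
# with its standard error is reported instead of any single run. The
# numbers below are hypothetical delta-G estimates in kcal/mol.

def it_ti_estimate(run_estimates):
    mean = statistics.fmean(run_estimates)
    sem = statistics.stdev(run_estimates) / len(run_estimates) ** 0.5
    return mean, sem

runs = [-7.1, -6.4, -8.0, -7.3, -6.9]  # independent TI estimates
mean, sem = it_ti_estimate(runs)
print(f"{mean:.2f} +/- {sem:.2f}")
```

The point of the abstract is visible here: any single run could be off by the run-to-run spread, while the ensemble estimate shrinks that uncertainty roughly as 1/sqrt(n).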
Population-based utilities for upper extremity functions in the setting of tetraplegia.
Ram, Ashwin N; Curtin, Catherine M; Chung, Kevin C
2009-11-01
People with tetraplegia face substantial physical and financial hardships. Although upper extremity reconstruction has been advocated for people with tetraplegia, these procedures are markedly underused in the United States. Population-based preference evaluation of upper extremity reconstruction is important to quantify the value of these reconstructive procedures. This study sought to establish the preferences for 3 health states: tetraplegia, tetraplegia with corrected pinch function, and tetraplegia with corrected elbow extension function. A computer-based, time trade-off survey was administered to a cohort of 81 able-bodied second-year medical students who served as a surrogate for the general public. This survey instrument has undergone pilot testing and has established face validity to evaluate the 3 health states of interest. Utilities were calculated based on an estimated 20 years of remaining life. The mean utility for the tetraplegic health state was low. On average, respondents gave up 10.8 +/- 5.0 out of a hypothetical 20 years for perfect health, for a utility of tetraplegia equal to 0.46. For recovery of pinch function, respondents gave up an average of 6.5 +/- 4.3 years, with a corresponding health utility of 0.68. For recovery of elbow extension function, respondents gave up an average of 7.6 +/- 4.5 years, with a corresponding health utility of 0.74. This study established the preferences for 2 upper extremity surgical interventions: tetraplegia with pinch and tetraplegia with elbow extension. The findings from this study place a high value on upper-limb reconstructive procedures for people with tetraplegia.
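The time trade-off scores above follow the standard calculation utility = (T - years_traded) / T with T = 20 years of assumed remaining life, which can be checked directly:

```python
# Standard time trade-off utility: the fraction of the remaining-life
# horizon the respondent keeps after trading years for perfect health.

def tto_utility(years_traded, horizon=20.0):
    return (horizon - years_traded) / horizon

print(round(tto_utility(10.8), 2))  # tetraplegia: 0.46
print(tto_utility(6.5))             # restored pinch: 0.675, reported as 0.68
```

For example, trading away 10.8 of 20 years implies keeping 9.2 years, a utility of 0.46, matching the reported value.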
Robustness analysis of multirate and periodically time varying systems
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1991-01-01
A new method for analyzing the stability and robustness of multirate and periodically time varying systems is presented. It is shown that a multirate or periodically time varying system can be transformed into an equivalent time invariant system. For a SISO system, traditional gain and phase margins can be found by direct application of the Nyquist criterion to this equivalent time invariant system. For a MIMO system, structured and unstructured singular values can be used to determine the system's robustness. The limitations and implications of utilizing this equivalent time invariant system for calculating gain and phase margins, and for estimating robustness via singular value analysis are discussed.
Parameter interdependence and uncertainty induced by lumping in a hydrologic model
NASA Astrophysics Data System (ADS)
Gallagher, Mark R.; Doherty, John
2007-05-01
Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornibrook, E.R.C.; Longstaffe, F.J.; Fyfe, W.S.
Two types of distribution for α_c values are observed in anaerobic environments when δ13C-ΣCO2 and δ13C-CH4 values are measured across gradients of depth or age of organic debris. The type-I distribution involves a systematic increase in α_c values with depth as a result of decreasing δ13C-CH4 and increasing δ13C-ΣCO2 values. This behavior corresponds to a progressive increase in the prevalence of methanogenesis by the CO2 reduction pathway relative to acetate fermentation. Utilization of autotrophically formed acetate by methanogens would also cause an increase in α_c values. The type-II distribution occurs when both δ13C-CH4 and δ13C-ΣCO2 values decrease with depth, resulting in approximately constant α_c values. This condition corresponds with a strong dependence of methanogens on porewater ΣCO2 as a carbon source by way of either the CO2 reduction pathway or utilization of autotrophically formed acetate. Freshwater wetlands possess both types of α_c value distribution. Defining the type of α_c distributions in different wetlands could reduce uncertainty in estimating the δ13C value of CH4 emissions. Hence, the prevalence of type-I vs. type-II α_c distributions in wetlands may have practical importance for the refinement of global CH4 budgets that rely on 13C/12C ratios for mass balance.
Time-varying value of electric energy efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mims, Natalie A.; Eckman, Tom; Goldman, Charles
Electric energy efficiency resources save energy and may reduce peak demand. Historically, quantification of energy efficiency benefits has largely focused on the economic value of energy savings during the first year and lifetime of the installed measures. Due in part to the lack of publicly available research on end-use load shapes (i.e., the hourly or seasonal timing of electricity savings) and energy savings shapes, consideration of the impact of energy efficiency on peak demand reduction (i.e., capacity savings) has been more limited. End-use load research and the hourly valuation of efficiency savings are used for a variety of electricity planning functions, including load forecasting, demand-side management and evaluation, capacity and demand response planning, long-term resource planning, renewable energy integration, assessing potential grid modernization investments, establishing rates and pricing, and customer service. This study reviews existing literature on the time-varying value of energy efficiency savings, provides examples in four geographically diverse locations of how consideration of the time-varying value of efficiency savings impacts the calculation of power system benefits, and identifies future research needs to enhance the consideration of the time-varying value of energy efficiency in cost-effectiveness screening analysis. Findings from this study include:
- The time-varying value of individual energy efficiency measures varies across the locations studied because of the physical and operational characteristics of the individual utility system (e.g., summer or winter peaking, load factor, reserve margin) as well as the time periods during which savings from measures occur.
- Across the four locations studied, some of the largest capacity benefits from energy efficiency are derived from the deferral of transmission and distribution system infrastructure upgrades.
However, the deferred cost of such upgrades also exhibited the greatest range in value of all the components of avoided costs across the locations studied.
- Of the five energy efficiency measures studied, those targeting residential air conditioning in summer-peaking electric systems have the most significant added value when the total time-varying value is considered.
- The increased use of rooftop solar systems, storage, and demand response, and the addition of electric vehicles and other major new electricity-consuming end uses are anticipated to significantly alter the load shape of many utility systems in the future. Data used to estimate the impact of energy efficiency measures on electric system peak demands will need to be updated periodically to accurately reflect the value of savings as system load shapes change.
- Publicly available components of electric system costs avoided through energy efficiency are not uniform across states and utilities. Inclusion or exclusion of these components and differences in their value affect estimates of the time-varying value of energy efficiency.
- Publicly available data on end-use load and energy savings shapes are limited, are concentrated regionally, and should be expanded.
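The core point above can be made with toy numbers: two measures saving the same annual energy can differ sharply in value once savings are priced period by period. All prices and savings shapes below are invented.

```python
# Toy illustration: two measures each save 1,000 kWh/yr, but the one
# whose savings coincide with the system peak is worth far more when
# savings are valued by period. Prices and shapes are invented.

def savings_value_cents(kwh_by_period, cents_per_kwh):
    return sum(kwh_by_period[p] * cents_per_kwh[p] for p in kwh_by_period)

cents = {"peak": 30, "offpeak": 6}            # hypothetical avoided costs
ac_measure = {"peak": 800, "offpeak": 200}    # peak-coincident savings
lighting = {"peak": 200, "offpeak": 800}      # mostly off-peak savings

print(savings_value_cents(ac_measure, cents))  # 25200 cents = $252
print(savings_value_cents(lighting, cents))    # 10800 cents = $108
```

A flat average price would value both measures identically, which is exactly the distortion that time-varying valuation removes.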
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus-based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
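Error-driven adaptivity has a simple one-dimensional analogue, sketched below under loose assumptions: an interval's midpoint interpolation error stands in for the hierarchical surplus or adjoint error estimate, and the grid is refined greedily where that estimate is largest. This illustrates the refinement idea only; it is not the paper's sparse grid algorithm.

```python
import math

# One-dimensional caricature of error-driven adaptive refinement: keep
# bisecting the interval whose midpoint interpolation error (a stand-in
# for the hierarchical surplus / adjoint error estimate) is largest.
# The target function is arbitrary.

def adaptive_refine(f, a, b, n_refinements):
    xs = [a, b]
    for _ in range(n_refinements):
        # Estimated error on each interval: |f(mid) - linear interpolant|.
        errs = []
        for x0, x1 in zip(xs, xs[1:]):
            m = 0.5 * (x0 + x1)
            errs.append((abs(f(m) - 0.5 * (f(x0) + f(x1))), m))
        _, new_x = max(errs)
        xs.append(new_x)
        xs.sort()
    return xs

f = lambda x: math.exp(-8 * x * x)   # sharp bump near x = 0
grid = adaptive_refine(f, -1.0, 1.0, 20)
# Points cluster where the interpolation error estimate is largest,
# i.e., where the function bends, rather than uniformly.
print(grid)
```

The paper's contribution is, in part, replacing the plain surplus with adjoint-informed error estimates so refinement targets error in the quantity of interest rather than in the interpolant alone.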
Estimation of descriptive statistics for multiply censored water quality data
Helsel, Dennis R.; Cohn, Timothy A.
1988-01-01
This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than the simple substitution procedures now commonly in use. Therefore simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution, and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, “less than” values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
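A small numeric example shows why the paper discourages simple substitution: the estimated mean swings with the arbitrary substitution choice. The data values below are made up.

```python
# Illustration of why simple substitution is discouraged above: the
# estimated mean depends on the arbitrary substitution choice. Values
# are made up; "<1.0" marks results censored at a detection limit of 1.0.

detected = [2.0, 3.0, 5.0]
n_censored = 2          # two observations reported as "<1.0"
dl = 1.0

def substituted_mean(sub_value):
    data = detected + [sub_value] * n_censored
    return sum(data) / len(data)

for sub in (0.0, dl / 2, dl):   # common ad hoc choices
    print(sub, substituted_mean(sub))
# The answer ranges from 2.0 to 2.4 depending on an arbitrary choice;
# probability plotting and maximum likelihood avoid that arbitrariness
# by modeling the censored observations' distribution instead.
```

With several detection limits, as studied in the paper, the distortion compounds because each limit invites its own substitution convention.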
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose
NASA Astrophysics Data System (ADS)
Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Classifying the human face based on race and gender is a vital process in face recognition. It contributes to an index database and eases 3D synthesis of the human face. Identifying race and gender based on intrinsic factors is problematic, which makes a nonlinear model better suited to the estimation process. In this paper, we aim to estimate race and gender across varied head poses. For this purpose, we collect a dataset from the PICS and CAS-PEAL databases, detect the landmarks, and rotate them to the frontal pose. After the geometric distances are calculated, all distance values are normalized. Implementation is carried out using a Neural Network Model and a Fuzzy Logic Model, which are combined in an Adaptive Neuro-Fuzzy Model. The experimental results showed that optimizing the fuzzy membership functions gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
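The estimation idea above can be sketched compactly: choose Weibull parameters that minimize the Kolmogorov-Smirnov distance between the empirical distribution function of the failure data and the model CDF. For brevity this sketch uses a coarse grid search over a two-parameter Weibull (threshold fixed at 0) rather than Powell's method, and the failure data are hypothetical.

```python
import math

# Hedged sketch of EDF-based Weibull estimation: minimize the
# Kolmogorov-Smirnov distance between the empirical distribution
# function and the Weibull CDF. A coarse grid search stands in for
# Powell's method, and the threshold parameter is fixed at zero.

def weibull_cdf(x, shape, scale):
    return 1.0 - math.exp(-((x / scale) ** shape))

def ks_statistic(data, shape, scale):
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        c = weibull_cdf(x, shape, scale)
        # The EDF steps from i/n to (i+1)/n at each ordered observation.
        d = max(d, abs(c - i / n), abs(c - (i + 1) / n))
    return d

def fit_weibull(data, shapes, scales):
    return min(
        (ks_statistic(data, sh, sc), sh, sc)
        for sh in shapes for sc in scales
    )

failures = [0.8, 1.1, 1.4, 1.6, 1.9, 2.3, 2.8]   # hypothetical lifetimes
shapes = [0.5 + 0.25 * i for i in range(14)]      # 0.5 .. 3.75
scales = [0.5 + 0.25 * i for i in range(14)]
d, shape, scale = fit_weibull(failures, shapes, scales)
print(d, shape, scale)
```

Swapping the KS statistic for the Anderson-Darling statistic, or the grid search for a derivative-free optimizer such as Powell's method over all three parameters, recovers the structure described in the abstract.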
Lopez-Hoffman, Laura; Semmens, Darius J.; Diffendorfer, Jay
2013-01-01
Species that migrate through protected and wilderness areas and utilize their resources, deliver ecosystem services to people in faraway locations. The mismatch between the areas that most support a species and those areas where the species provides most benefits to society can lead to underestimation of the true value of protected areas such as wilderness. We present a method to communicate the “off-site” value of wilderness and protected areas in providing habitat to migratory species that, in turn, provide benefits to people in distant locations. Using northern pintail ducks (Anas acuta) as an example, the article provides a method to estimate the amount of subsidy – the value of the ecosystem services provided by a migratory species in one area versus the cost to support the species and its habitat elsewhere.
Zhang, Kan; Zhang, Jianying; Chen, Yingxu; Zhu, Yinmei
2006-10-01
Based on the Landsat TM information of land use/cover change and greenbelt distribution in Hangzhou city in 1994 and 2004, and by using the CITYgreen model, this paper estimated the eco-service value of urban greenbelt in the city under the effects of land use change and economic development. The results showed that in the 10 years from 1994 to 2004, the greenbelt area in the city decreased by 20.4%, while its eco-service value increased by 168 million yuan. The annual increment of greenbelt eco-service value and GDP was 111.92% and 5.32%, respectively. Suitable adjustment of land use pattern in the city harmonized the relationships between urban economic development and urban eco-function, and achieved higher eco-service efficiency of land utilization.
Meseret, S.; Tamir, B.; Gebreyohannes, G.; Lidauer, M.; Negussie, E.
2015-01-01
The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data used consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Covariance components were estimated using the average information restricted maximum likelihood method under a single-trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM, whilst estimates from RRM ranged from 0.17 to 0.29 for the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesian cows ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, which indicated that the assumption of LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had a higher standard deviation than LAM, indicating that the TD model makes efficient use of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different groups of sires and cows, and marked re-rankings were observed among top sires and cows in moving from the traditional LAM to RRM evaluations. PMID:26194217
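The heritability reported above is the ratio of additive genetic variance to total phenotypic variance. A minimal sketch, with assumed variance components (not the study's REML estimates) chosen so the result matches the LAM figure:

```python
# Toy sketch with assumed variance components; the study estimated these
# by average-information REML, which is not reproduced here.
def heritability(var_additive, var_residual, var_pe=0.0):
    """h^2 = additive genetic variance / total phenotypic variance."""
    return var_additive / (var_additive + var_pe + var_residual)

# Assumed components giving h^2 = 0.30, the LAM estimate quoted above.
h2 = heritability(var_additive=3.0, var_residual=7.0)
```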
Methods for Performing Survival Curve Quality-of-Life Assessments.
Sumner, Walton; Ding, Eric; Fischer, Irene D; Hagen, Michael D
2014-08-01
Many medical decisions involve an implied choice between alternative survival curves, typically with differing quality of life. Common preference assessment methods neglect this structure, creating some risk of distortions. Survival curve quality-of-life assessments (SQLA) were developed from Gompertz survival curves fitting the general population's survival. An algorithm was developed to generate relative discount rate-utility (DRU) functions from a standard survival curve and health state and an equally attractive alternative curve and state. A least mean squared distance algorithm was developed to describe how nearly 3 or more DRU functions intersect. These techniques were implemented in a program called X-Trade and tested. SQLA scenarios can portray realistic treatment choices. A side effect scenario portrays one prototypical choice: to extend life while experiencing some loss, such as an amputation. A risky treatment scenario portrays procedures with an initial mortality risk. A time trade scenario mimics conventional time tradeoffs. Each SQLA scenario yields DRU functions with distinctive shapes, such as sigmoid curves or vertical lines. One SQLA can imply a discount rate or utility if the other value is known and both values are temporally stable. Two SQLA exercises imply a unique discount rate and utility if the inferred DRU functions intersect. Three or more SQLA results can quantify uncertainty or inconsistency in discount rate and utility estimates. Pilot studies suggested that many subjects could learn to interpret survival curves and perform SQLA, although SQLA confused some people. Compared with SQLA, standard gambles quantify very low utilities more easily, and time tradeoffs are simpler for high utilities. When discount rates approach zero, time tradeoffs are as informative as and easier to do than SQLA. SQLA may complement conventional utility assessment methods. © The Author(s) 2014.
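The comparison SQLA poses can be framed numerically: two curve/health-state pairs are equally attractive when their discounted quality-adjusted survival matches. A sketch under an assumed Gompertz hazard (the parameters b and c below are illustrative, not the paper's fitted population values):

```python
import numpy as np

# Assumed Gompertz parameters for illustration only.
def gompertz_survival(t, b=1e-4, c=0.085):
    """S(t) = exp(-(b/c) * (e^{c t} - 1)) for a Gompertz hazard b * e^{c t}."""
    return np.exp(-(b / c) * (np.exp(c * t) - 1.0))

def discounted_qalys(utility, rate, horizon=110.0, step=0.1):
    """Riemann sum of utility * survival * continuous-time discount factor."""
    t = np.arange(0.0, horizon, step)
    return float(np.sum(utility * gompertz_survival(t) * np.exp(-rate * t)) * step)

full_health = discounted_qalys(1.0, 0.03)   # standard curve, full health
impaired = discounted_qalys(0.8, 0.03)      # alternative state, lower utility
```

A DRU function traces the (discount rate, utility) pairs at which the two alternatives give equal discounted QALYs.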
Saini, Komal; Singh, Parminder; Singh, Prabhjot; Bajwa, B S; Sahoo, B K
2017-02-01
A survey was conducted to estimate the equilibrium factor and unattached fractions of radon and thoron in different regions of Punjab state, India. Pin-hole-based twin cup dosimeters and direct progeny sensor techniques were utilized to estimate the concentration levels of radon, thoron and their progenies. The equilibrium factor calculated from radon, thoron and their progeny concentrations was found to vary from 0.15 to 0.80 for radon and from 0.008 to 0.101 for thoron, with average values of 0.44 and 0.036, respectively. The equilibrium factor for radon was found to be highest in the winter season and lowest in the summer season, whereas for thoron the highest values were observed in the winter and rainy seasons and the lowest in summer. The unattached fractions of radon and thoron were found to vary from 0.022 to 0.205 and 0.013 to 0.212, with average values of 0.099 and 0.071, respectively. The unattached fractions were found to be highest in the winter season and lowest in the rainy and summer seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.
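The equilibrium factor is the ratio of the equilibrium-equivalent progeny concentration to the gas concentration. A minimal sketch with assumed illustrative concentrations (chosen to reproduce the reported radon average of 0.44):

```python
# Sketch of the equilibrium-factor calculation; the concentrations below are
# hypothetical, chosen only to match the reported average F for radon.
def equilibrium_factor(progeny_eec, gas_concentration):
    """F = equilibrium-equivalent progeny concentration / gas concentration,
    both in Bq/m^3, for radon or thoron."""
    return progeny_eec / gas_concentration

f_radon = equilibrium_factor(progeny_eec=22.0, gas_concentration=50.0)
```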
Miyamoto, Shuichi; Atsuyama, Kenji; Ekino, Keisuke; Shin, Takashi
2018-01-01
The isolation of useful microbes is one of the traditional approaches for lead generation in drug discovery. As an effective technique for microbe isolation, we recently developed a multidimensional diffusion-based gradient culture system of microbes. In order to enhance the utility of the system, it is favorable to know the diffusion coefficients of nutrients such as sugars in the culture medium beforehand. We have, therefore, built a simple and convenient experimental system that uses agar gel to observe diffusion. Next, we performed computer simulations of the experimental diffusion system, based on random-walk concepts, and derived correlation formulas that relate observable diffusion data to diffusion coefficients. Finally, we applied these correlation formulas to our experimentally determined diffusion data to estimate the diffusion coefficients of sugars. Our values for these coefficients agree reasonably well with values published in the literature. The effectiveness of our simple technique, which has elucidated the diffusion coefficients of some molecules that are rarely reported (e.g., galactose, trehalose, and glycerol), is demonstrated by the strong correspondence between the literature values and those obtained in our experiments.
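The random-walk idea behind such simulations can be sketched briefly: in one dimension the mean squared displacement grows as MSD = 2 D t, so a diffusion coefficient can be recovered from simulated walkers. The parameters below are assumed for illustration (roughly the scale of a small sugar in water), not the paper's simulation settings.

```python
import numpy as np

# Minimal 1-D random-walk sketch (assumed parameters): recover the diffusion
# coefficient from the mean squared displacement, MSD = 2 * D * t.
rng = np.random.default_rng(42)
D_true = 5e-10                       # m^2/s, assumed sugar-in-water scale
dt, n_steps, n_walkers = 1.0, 1000, 2000

# Each step is Gaussian with variance 2*D*dt, the 1-D diffusion increment.
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_walkers, n_steps))
x_final = steps.sum(axis=1)          # walker positions after n_steps
D_est = np.mean(x_final**2) / (2 * n_steps * dt)
```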
Canning, Elizabeth A; Harackiewicz, Judith M
2015-03-01
Social-psychological interventions in education have used a variety of "self-persuasion" or "saying-is-believing" techniques to encourage students to articulate key intervention messages. These techniques are used in combination with more overt strategies, such as the direct communication of messages in order to promote attitude change. However, these different strategies have rarely been systematically compared, particularly in controlled laboratory settings. We focus on one intervention based in expectancy-value theory designed to promote perceptions of utility value in the classroom and test different intervention techniques to promote interest and performance. Across three laboratory studies, we used a mental math learning paradigm in which we varied whether students wrote about utility value for themselves or received different forms of directly-communicated information about the utility value of a novel mental math technique. In Study 1, we examined the difference between directly-communicated and self-generated utility-value information and found that directly-communicated utility-value information undermined performance and interest for individuals who lacked confidence, but that self-generated utility had positive effects. However, Study 2 suggests that these negative effects of directly-communicated utility value can be ameliorated when participants are also given the chance to generate their own examples of utility value, revealing a synergistic effect of directly-communicated and self-generated utility value. In Study 3, we found that individuals who lacked confidence benefited more when everyday examples of utility value were communicated, rather than career and school examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Donald; Davidson, Carolyn; Fu, Ran
The price of photovoltaic (PV) systems in the United States (i.e., the cost to the system owner) has continued to decline across all major market sectors. This report provides a Q1 2015 update regarding the prices of residential, commercial, and utility-scale PV systems, based on an objective methodology that closely approximates the book value of a PV system. Several cases are benchmarked to represent common variations in business models, labor rates, and system architecture choice. We estimate a weighted-average cash purchase price of $3.09/W for residential-scale rooftop systems, $2.15/W for commercial-scale rooftop systems, $1.77/W for utility-scale systems with fixed mounting structures, and $1.91/W for utility-scale systems using single-axis trackers. All systems are modeled assuming standard-efficiency, polycrystalline-silicon PV modules, and further assume installation within the United States.
Lieder, Falk; Griffiths, Thomas L; Hsu, Ming
2018-01-01
People's decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time, decision makers should overrepresent the most important potential consequences relative to less important, but potentially more probable, outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people's availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
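The core mechanism can be sketched as importance sampling: outcomes are simulated with probability proportional to p(o) * |u(o)|, and the expected-utility estimate is corrected with importance weights. The outcome set and probabilities below are toy assumptions, not the paper's experiments.

```python
import numpy as np

# Toy sketch of utility-weighted sampling (assumed outcomes): one extreme
# loss plus several mild outcomes.
rng = np.random.default_rng(1)
utilities = np.array([-100.0, -1.0, 0.5, 2.0])
probs = np.array([0.01, 0.29, 0.40, 0.30])

q = probs * np.abs(utilities)
q /= q.sum()                                   # utility-weighted proposal
idx = rng.choice(len(utilities), size=5000, p=q)
weights = probs[idx] / q[idx]                  # importance-weight correction
uws_estimate = np.average(utilities[idx], weights=weights)
true_eu = float(np.dot(probs, utilities))
```

Note how the rare extreme outcome (utility -100) is simulated far more often under q than its true probability of 0.01, mirroring the availability bias the model explains, while the weights keep the estimate consistent.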
Dilokthornsakul, P; Sawangjit, R; Inprasong, C; Chunhasewee, S; Rattanapan, P; Thoopputra, T; Chaiyakunapruk, N
2016-01-01
Background: Stevens-Johnson syndrome (SJS) and Toxic Epidermal Necrolysis (TEN) are life-threatening dermatologic conditions. Although the incidence of SJS/TEN in Thailand is high, information on the cost of care for SJS/TEN is limited. This study aims to estimate healthcare resource utilization and the cost of SJS/TEN in Thailand from a hospital perspective. Methods: A retrospective study using an electronic health database from a university-affiliated hospital in Thailand was undertaken. Patients admitted with SJS/TEN from 2002 to 2007 were included. Direct medical cost was estimated by the cost-to-charge ratio. Cost was converted to its 2013 value by the consumer price index, and converted to $US at 31 Baht/1 $US. Healthcare resource utilization was also estimated. Results: A total of 157 patients were included, with an average age of 45.3±23.0 years. Of these, 146 patients (93.0%) were diagnosed with SJS and the remaining (7.0%) with TEN. Most of the patients (83.4%) were treated with systemic corticosteroids. Overall, the mortality rate was 8.3%, while the average length of stay (LOS) was 10.1±13.2 days. The average cost of managing SJS/TEN for all patients was $1,064±$2,558. The average cost for SJS patients was $1,019±$2,601, while that for TEN patients was $1,660±$1,887. Conclusions: Healthcare resource utilization and the cost of care for SJS/TEN in Thailand were substantial. The findings are important for policy makers in allocating healthcare resources and developing strategies to prevent SJS/TEN, which could decrease length of stay and cost of care. PMID:27089110
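The costing pipeline described in the Methods can be sketched in three steps: charge to cost via the cost-to-charge ratio, inflation to 2013 via the consumer price index, and conversion at 31 Baht/$US. The admission charge, ratio, and CPI values below are hypothetical.

```python
# Sketch of the cost pipeline (illustrative numbers; only the 31 Baht/$US
# rate comes from the study).
def cost_in_2013_usd(charge_baht, cost_to_charge, cpi_base, cpi_2013,
                     baht_per_usd=31.0):
    cost_baht = charge_baht * cost_to_charge            # direct medical cost
    cost_2013_baht = cost_baht * (cpi_2013 / cpi_base)  # inflation adjustment
    return cost_2013_baht / baht_per_usd

# Hypothetical admission: 40,000 Baht charged, ratio 0.7, CPI 90 -> 105.
usd = cost_in_2013_usd(40000.0, 0.7, cpi_base=90.0, cpi_2013=105.0)
```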
A systematic review of utility values for chemotherapy-related adverse events.
Shabaruddin, Fatiha H; Chen, Li-Chia; Elliott, Rachel A; Payne, Katherine
2013-04-01
Chemotherapy offers cancer patients the potential benefits of improved mortality and morbidity but may cause detrimental outcomes due to adverse drug events (ADEs), some of which require time-consuming, resource-intensive and costly clinical management. To appropriately assess chemotherapy agents in an economic evaluation, ADE-related parameters such as the incidence, (dis)utility and cost of ADEs should be reflected within the model parameters. To date, there has been no systematic summary of the existing literature that quantifies the utilities of ADEs due to healthcare interventions in general and chemotherapy treatments in particular. This review aimed to summarize the current evidence base of reported utility values for chemotherapy-related ADEs. A structured electronic search combining terms for utility, utility valuation methods and generic terms for cancer treatment was conducted in MEDLINE and EMBASE in June 2011. Inclusion criteria were: (1) elicitation of utility values for chemotherapy-related ADEs and (2) primary data. Two reviewers identified studies and extracted data independently. Any disagreements were resolved by a third reviewer. Eighteen studies met the inclusion criteria from the 853 abstracts initially identified, collectively reporting 218 utility values for chemotherapy-related ADEs. All 18 studies used short descriptions (vignettes) to obtain the utility values, with nine studies presenting the vignettes used in the valuation exercises. Of the 218 utility values, 178 were elicited using standard gamble (SG) or time trade-off (TTO) approaches, while 40 were elicited using visual analogue scales (VAS).
There were 169 utility values of specific chemotherapy-related ADEs (with the top ten being anaemia [34 values], nausea and/or vomiting [32 values], neuropathy [21 values], neutropenia [12 values], diarrhoea [12 values], stomatitis [10 values], fatigue [8 values], alopecia [7 values], hand-foot syndrome [5 values] and skin reaction [5 values]) and 49 of non-specific chemotherapy-related adverse events. In most cases, it was difficult to directly compare the utility values as various definitions and study-specific vignettes were used for the ADEs of interest. This review was designed to provide an overall description of existing literature reporting utility values for chemotherapy-related ADEs. The findings were not exhaustive and were limited to publications that could be identified using the search strategy employed and those reported in the English language. This review identified wide ranges in the utility values reported for broad categories of specific chemotherapy-related ADEs. There were difficulties in comparing the values directly as various study-specific definitions were used for these ADEs and most studies did not make the vignettes used in the valuation exercises available. It is recommended that a basic minimum requirement be developed for the transparent reporting of study designs eliciting utility values, incorporating key criteria such as reporting how the vignettes were developed and presenting the vignettes used in the valuation tasks as well as valuing and reporting the utility values of the ADE-free base states. It is also recommended, in the future, for studies valuing the utilities of chemotherapy-related ADEs to define the ADEs according to the National Cancer Institute (NCI) definitions for chemotherapy-related ADEs as the use of the same definition across studies would ease the comparison and selection of utility values and make the overall inclusion of adverse events within economic models of chemotherapy agents much more straightforward.
Kaitaniemi, Pekka
2008-04-09
Allometric equations are widely used in many branches of biological science. The potential information content of the normalization constant b in allometric equations of the form Y = bX^a has, however, remained largely neglected. To demonstrate the potential for utilizing this information, I generated a large number of artificial datasets that resembled those frequently encountered in biological studies, i.e., relatively small samples including measurement error or uncontrolled variation. The value of X was allowed to vary randomly within the limits describing different data ranges, and a was set to a fixed theoretical value. The constant b was set to a range of values describing the effect of a continuous environmental variable. In addition, a normally distributed random error was added to the values of both X and Y. Two different approaches were then used to model the data. The traditional approach estimated both a and b using a regression model, whereas an alternative approach set the exponent a at its theoretical value and only estimated the value of b. Both approaches produced virtually the same model fit, with less than 0.3% difference in the coefficient of determination. Only the alternative approach was able to precisely reproduce the effect of the environmental variable, which was largely lost among noise variation when using the traditional approach. The results show how the value of b can be used as a source of valuable biological information if an appropriate regression model is selected.
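The two approaches compared above can be sketched on synthetic data: a log-log regression estimating both a and b, versus fixing a at its theoretical value and estimating b alone. The exponent, constant, data range, and noise level below are assumptions for illustration, not the paper's simulation settings.

```python
import numpy as np

# Synthetic allometric data Y = b * X^a with multiplicative noise (assumed values).
rng = np.random.default_rng(7)
a_theory, b_true = 0.75, 2.0
x = rng.uniform(1.0, 10.0, 200)
y = b_true * x**a_theory * np.exp(rng.normal(0.0, 0.05, x.size))

# Traditional approach: estimate both a and b by log-log least squares.
A = np.column_stack([np.log(x), np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
a_free, b_free = slope, np.exp(intercept)

# Alternative approach: fix a at its theoretical value, estimate only b.
b_fixed = np.exp(np.mean(np.log(y) - a_theory * np.log(x)))
```

With the exponent fixed, all sampling noise is absorbed into a single parameter, which is why variation in b is easier to relate to an environmental covariate.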
Dranitsaris, George; Ortega, Ana; Lubbe, Martie S; Truter, Ilse
2012-03-01
Several European governments have recently mandated price cuts in drugs to reduce health care spending. However, such measures without supportive evidence may compromise patient care because manufacturers may withdraw current products or not launch new agents. A value-based pricing scheme may be a better approach for determining a fair drug price and may be a medium for negotiations between the key stakeholders. To demonstrate this approach, pharmacoeconomic (PE) modeling was used from the Spanish health care system perspective to estimate a value-based price for bevacizumab, a drug that provides a 1.4-month survival benefit to patients with metastatic colorectal cancer (mCRC). The threshold used for economic value was three times the Spanish per capita GDP, as recommended by the World Health Organization (WHO). A PE model was developed to simulate outcomes in mCRC patients receiving chemotherapy ± bevacizumab. Clinical data were obtained from randomized trials and costs from a Spanish hospital. Utility estimates were determined by interviewing 24 Spanish oncology nurses and pharmacists. A price per dose of bevacizumab was then estimated using a target threshold of € 78,300 per quality-adjusted life year gained, which is three times the Spanish per capita GDP. For a 1.4-month survival benefit, a price of € 342 per dose would be considered cost effective from the Spanish public health care perspective. The price may be increased to € 733 or € 843 per dose if the drug were able to improve patient quality of life or enhance survival from 1.4 to 3 months. This study demonstrated that a value-based pricing approach using PE modeling and the WHO criteria for economic value is feasible and perhaps a better alternative to government mandated price cuts. The former approach would be a good starting point for opening dialog between European government payers and the pharmaceutical industry.
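The value-based pricing arithmetic can be sketched directly: the price per dose at which the incremental cost per QALY gained equals the willingness-to-pay threshold. The QALY gain, other incremental costs, and number of doses below are hypothetical; only the €78,300/QALY threshold (three times the Spanish per capita GDP) comes from the study.

```python
# Hedged sketch of value-based pricing (hypothetical inputs, not the study's
# pharmacoeconomic model).
def value_based_price_per_dose(threshold_per_qaly, qaly_gain,
                               other_incremental_costs, n_doses):
    """Price per dose at which ICER = threshold:
    threshold * QALYs gained, minus other incremental costs, spread over doses."""
    return (threshold_per_qaly * qaly_gain - other_incremental_costs) / n_doses

# Hypothetical: EUR 78,300/QALY threshold, 0.10 QALYs gained,
# EUR 5,000 other incremental costs, 10 doses.
price = value_based_price_per_dose(78300.0, 0.10, 5000.0, 10)
```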
NASA Astrophysics Data System (ADS)
Perger, K.; Pinter, S.; Frey, S.; Tóth, L. V.
2018-05-01
One of the most certain ways to determine the star formation rate in galaxies is based on far-infrared (FIR) measurements. To decide the origin of the observed FIR emission, subtracting the Galactic foreground is a crucial step. We utilized Herschel photometric data to determine the hydrogen column densities in three galactic latitude regions, at b = 27°, 50° and -80°. We applied a pixel-by-pixel fit to the spectral energy distribution (SED) for the images acquired from parallel PACS-SPIRE observations in all three sky areas. We determined the column densities at resolutions of 45'' and 6', and compared the results with values estimated from the IRAS dust maps. Column densities at 27° and 50° galactic latitudes determined from the Herschel data are in good agreement with the literature values. However, at the highest galactic latitude we found that the column densities from the Herschel data exceed those derived from the IRAS dust map.
Sruamsiri, Rosarin; Chaiyakunapruk, Nathorn; Pakakasama, Samart; Sirireung, Somtawin; Sripaiboonkij, Nintita; Bunworasate, Udomsak; Hongeng, Suradej
2013-02-05
Hematopoietic stem cell transplantation is the only therapeutic option that can cure thalassemia disease. Reduced intensity hematopoietic stem cell transplantation (RI-HSCT) has demonstrated a high cure rate with minimal complications compared to other options. Because RI-HSCT is very costly, economic justification for its value is needed. This study aimed to estimate the cost-utility of RI-HSCT compared with blood transfusions combined with iron chelating therapy (BT-ICT) for adolescents and young adults with severe thalassemia in Thailand. A Markov model was used to estimate the relevant costs and health outcomes over the patients' lifetimes using a societal perspective. All future costs and outcomes were discounted at a rate of 3% per annum. The efficacy of RI-HSCT was based on a clinical trial including a total of 18 thalassemia patients. Utility values were derived directly from all patients using EQ-5D and SF-6D. Primary outcomes of interest were lifetime costs, quality adjusted life-years (QALYs) gained, and the incremental cost-effectiveness ratio (ICER) in US ($) per QALY gained. One-way and probabilistic sensitivity analyses (PSA) were conducted to investigate the effect of parameter uncertainty. In the base case analysis, the RI-HSCT group had better clinical outcomes and higher lifetime costs. The incremental cost per QALY gained was US $3,236. The acceptability curve showed that the probability of RI-HSCT being cost-effective was 71% at a willingness to pay of 1 time the Thai gross domestic product per capita (GDP per capita), approximately US $4,210 per QALY gained. The most sensitive parameter was the utility of severe thalassemia patients without cardiac complications. At a societal willingness to pay of 1 GDP per capita, RI-HSCT was a cost-effective treatment for adolescents and young adults with severe thalassemia in Thailand compared to BT-ICT.
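The Markov structure behind such a cost-utility analysis can be sketched with a two-state (alive/dead) model: accumulate discounted costs and QALYs per cycle, then form the ICER. All transition probabilities, costs, and utilities below are toy assumptions; the study's model is richer, but the discounting and ICER arithmetic is the same.

```python
# Two-state Markov sketch (assumed toy inputs) with 3% annual discounting.
def discounted_totals(p_die, annual_cost, utility, horizon=60, rate=0.03):
    alive, cost, qalys = 1.0, 0.0, 0.0
    for year in range(horizon):
        disc = 1.0 / (1.0 + rate) ** year
        cost += alive * annual_cost * disc
        qalys += alive * utility * disc
        alive *= 1.0 - p_die          # annual transition to the dead state
    return cost, qalys

# Hypothetical arms: a costlier, more effective treatment vs. standard care.
cost_tx, qaly_tx = discounted_totals(p_die=0.02, annual_cost=4000.0, utility=0.85)
cost_ct, qaly_ct = discounted_totals(p_die=0.04, annual_cost=2500.0, utility=0.65)
icer = (cost_tx - cost_ct) / (qaly_tx - qaly_ct)   # cost per QALY gained
```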
Aziz, Kamran M A
2015-01-01
The current study developed a new method for utilizing spot urine protein among diabetic patients. There have been various international research efforts and strategies to detect, diagnose and monitor nephropathy/DKD. Although 24-hour urine studies are the gold standard, some controversies exist about microalbuminuria and spot urine protein. The current study was designed to utilize spot urine protein among diabetic patients and to find its association with the routine dipstick urine test for albumin, and with microalbuminuria. The study demonstrated a significant association of spot urine protein with urine dipstick albumin, with spot urine protein increasing with increasing albumin in urine (p-value < 0.0001). This study also demonstrated significantly higher levels of spot urine protein in the groups with nephropathy/DKD as compared to those without nephropathy/DKD (p-value < 0.0001). Similarly, spot urine protein and spot urine protein/creatinine were also significantly associated with microalbumin and microalbumin/creatinine in urine. Significant regression models for spot urine protein and microalbuminuria were also developed and proposed to detect and estimate microalbumin in urine while utilizing spot urine protein (p-value < 0.0001). The synthesized regression equations and models can be used confidently to detect, rule out and monitor proteinuria and DKD. ROC curves were utilized to detect spot urine protein cutoff points for nephropathy and DKD with high specificity and sensitivity. Some important patents regarding albuminuria/proteinuria detection and management were also discussed in the paper. The current study demonstrated and concluded, for the first time, that there exists a significant association of spot urine protein with routine dipstick albumin in urine and with microalbuminuria.
It is also essential to detect early, monitor and manage proteinuria, hypertension and dyslipidemia with good glycemic control to prevent diabetes complications.
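The ROC-based cutoff selection mentioned above can be sketched with a simple sweep that maximizes the Youden index (sensitivity + specificity - 1). The spot-protein values and disease labels below are hypothetical, not the study's data.

```python
import numpy as np

# Toy ROC sweep (assumed data): pick the spot-urine-protein cutoff that
# maximizes the Youden index.
def youden_cutoff(values, labels):
    best_cut, best_j = None, -1.0
    for cut in np.unique(values):
        pred = values >= cut
        sens = np.mean(pred[labels == 1])    # true-positive rate
        spec = np.mean(~pred[labels == 0])   # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut

vals = np.array([0.1, 0.2, 0.3, 0.8, 1.0, 1.5])  # hypothetical spot protein
labs = np.array([0, 0, 0, 1, 1, 1])              # 1 = nephropathy/DKD
cut = youden_cutoff(vals, labs)                  # separates the toy groups
```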
Wyld, Melanie; Morton, Rachael Lisa; Hayen, Andrew; Howard, Kirsten; Webster, Angela Claire
2012-01-01
Background Chronic kidney disease (CKD) is a common and costly condition to treat. Economic evaluations of health care often incorporate patient preferences for health outcomes using utilities. The objective of this study was to determine pooled utility-based quality of life (the numerical value attached to the strength of an individual's preference for a specific health outcome) by CKD treatment modality. Methods and Findings We conducted a systematic review, meta-analysis, and meta-regression of peer-reviewed published articles and of PhD dissertations published through 1 December 2010 that reported utility-based quality of life (utility) for adults with late-stage CKD. Studies reporting utilities by proxy (e.g., reported by a patient's doctor or family member) were excluded. In total, 190 studies reporting 326 utilities from over 56,000 patients were analysed. There were 25 utilities from pre-treatment CKD patients, 226 from dialysis patients (haemodialysis, n = 163; peritoneal dialysis, n = 44), 66 from kidney transplant patients, and three from patients treated with non-dialytic conservative care. Using time tradeoff as a referent instrument, kidney transplant recipients had a mean utility of 0.82 (95% CI: 0.74, 0.90). The mean utility was comparable in pre-treatment CKD patients (difference = −0.02; 95% CI: −0.09, 0.04), 0.11 lower in dialysis patients (95% CI: −0.15, −0.08), and 0.2 lower in conservative care patients (95% CI: −0.38, −0.01). Patients treated with automated peritoneal dialysis had a significantly higher mean utility (0.80) than those on continuous ambulatory peritoneal dialysis (0.72; p = 0.02). The mean utility of transplant patients increased over time, from 0.66 in the 1980s to 0.85 in the 2000s, an increase of 0.19 (95% CI: 0.11, 0.26). Utility varied by elicitation instrument, with standard gamble producing the highest estimates, and the SF-6D by Brazier et al., University of Sheffield, producing the lowest estimates. 
The main limitations of this study were that treatment assignments were not random, that only transplant had longitudinal data available, and that we calculated EuroQol Group EQ-5D scores from SF-36 and SF-12 health survey data, and therefore the algorithms may not reflect EQ-5D scores measured directly. Conclusions For patients with late-stage CKD, treatment with dialysis is associated with a significant decrement in quality of life compared to treatment with kidney transplantation. These findings provide evidence-based utility estimates to inform economic evaluations of kidney therapies, useful for policy makers and in individual treatment discussions with CKD patients. PMID:22984353
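Pooling utilities across studies as described above is commonly done with a random-effects meta-analysis. A minimal DerSimonian-Laird sketch with assumed toy study means and standard errors (not the review's 326 utilities):

```python
import numpy as np

# DerSimonian-Laird random-effects pooling (toy inputs; a standard method,
# not necessarily the exact estimator used in the review).
def dersimonian_laird(means, ses):
    means, ses = np.asarray(means, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    mu_fixed = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - mu_fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(means) - 1)) / c)      # between-study variance
    w_star = 1.0 / (ses**2 + tau2)
    mu = np.sum(w_star * means) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, (mu - 1.96 * se, mu + 1.96 * se)

# Hypothetical utilities for three studies (e.g., transplant vs. dialysis arms).
mu, ci = dersimonian_laird([0.82, 0.71, 0.60], [0.04, 0.05, 0.03])
```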
NASA Astrophysics Data System (ADS)
Damigos, D.; Tentes, G.; Balzarini, M.; Furlanis, F.; Vianello, A.
2017-08-01
Managed aquifer recharge (MAR) is a promising water management tool toward restoring groundwater balance and securing groundwater ecosystem services (i.e., water for drinking, industrial or irrigation use, control of land subsidence, maintenance of environmental flows to groundwater-dependent ecosystems, etc.). MAR projects can clearly improve people's quality of life in several ways. Thus, from a social perspective, the benefits of MAR cannot and should not be based only on market revenues or costs. Although the value of groundwater, from a social perspective, has been a subject of socio-economic research, literature on the value of MAR per se is very limited. This paper, focusing on Italy, a country with extensive utilization of MAR, aims to estimate the economic value of MAR and makes a first step toward filling this gap in the literature. For this purpose, the Contingent Valuation method was implemented to provide a monetary estimate and to explore the factors influencing people's attitude and willingness to pay for MAR. The results show that society holds not only use values but also significant nonuse values, which are a part of the total economic value (TEV) of groundwater according to related research efforts. To this end, MAR valuation highlights its social importance for groundwater conservation and provides a solid basis for incorporating its nonmarket benefits into groundwater management policies and assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kousba, Ahmed A.; Poet, Torka S.; Timchalk, Chuck
2007-01-01
Chlorpyrifos and diazinon are two commonly used organophosphorus (OP) insecticides, and their primary mechanism of action involves the inhibition of acetylcholinesterase (AChE) by their metabolites chlorpyrifos-oxon (CPO) and diazinon-oxon (DZO), respectively. The study objectives were to assess the in vitro age-related inhibition kinetics of neonatal rat brain cholinesterase (ChE) by estimating the bimolecular inhibitory rate constant (ki) values for CPO and DZO. Brain ChE inhibition and ki values following CPO and DZO incubation with neonatal Sprague-Dawley rat brain homogenates were determined at postnatal day (PND) -5, -12 and -17 and compared with the corresponding inhibition and ki values obtained in the adult rat. A modified Ellman method was utilized for measuring ChE activity. Chlorpyrifos-oxon resulted in greater ChE inhibition than DZO, consistent with the estimated ki values of both compounds. Neonatal brain ChE inhibition kinetics exhibited a marked age-related sensitivity to CPO, where the order of ChE inhibition was PND-5 > PND-12 > PND-17 with ki values of 0.95, 0.50 and 0.22 nM⁻¹ hr⁻¹, respectively. In contrast, DZO did not exhibit an age-related inhibition of neonatal brain ChE, and the estimated ki value at all PND ages was 0.02 nM⁻¹ hr⁻¹. These results demonstrated an age- and chemical-related OP-selective inhibition of rat brain ChE, which may be critically important in understanding the potential sensitivity of juvenile humans to specific OP exposures.
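The role of ki can be illustrated with the standard first-order model for irreversible bimolecular inhibition, where the fraction of activity remaining is exp(-ki * [oxon] * t). The model form, oxon concentration, and incubation time below are assumptions; only the ki values (0.95 for CPO at PND-5, 0.02 for DZO) come from the abstract.

```python
import math

# Sketch: first-order irreversible bimolecular inhibition (assumed model form).
def activity_remaining(ki_per_nM_hr, oxon_nM, hours):
    """Fraction of ChE activity remaining: A/A0 = exp(-ki * [I] * t)."""
    return math.exp(-ki_per_nM_hr * oxon_nM * hours)

# Reported ki values, at an assumed 1 nM oxon for 1 hour.
cpo_remaining = activity_remaining(0.95, 1.0, 1.0)   # CPO at PND-5
dzo_remaining = activity_remaining(0.02, 1.0, 1.0)   # DZO at any PND age
```

The larger ki for CPO translates directly into a smaller remaining activity, matching the greater inhibition reported for CPO.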
Carreon, Leah Y; Glassman, Steven D; McDonough, Christine M; Rampersaud, Raja; Berven, Sigurd; Shainline, Michael
2009-09-01
Cross-sectional cohort. The purpose of this study is to provide a model to allow estimation of utility from the Short Form (SF)-6D using data from the Oswestry Disability Index (ODI), Back Pain Numeric Rating Scale (BPNRS), and the Leg Pain Numeric Rating Scale (LPNRS). Cost-utility analysis provides important information about the relative value of interventions and requires a measure of utility not often available from clinical trial data. The ODI and numeric rating scales for back (BPNRS) and leg pain (LPNRS) are widely used disease-specific measures of health-related quality of life in patients with lumbar degenerative disorders. SF-36, ODI, BPNRS, and LPNRS were prospectively collected before surgery and at 12 and 24 months after surgery in 2640 patients undergoing lumbar fusion for degenerative disorders. Spearman correlation coefficients for paired observations from multiple time points between ODI, BPNRS, and LPNRS, and SF-6D utility scores were determined. Regression modeling was done to compute the SF-6D score from the ODI, BPNRS, and LPNRS. Using a separate, independent dataset of 2174 patients in which actual SF-6D and ODI scores were available, the SF-6D was estimated for each subject and compared to their actual SF-6D. In the development sample, the mean age was 52.5 +/- 15 years and 34% were male. In the validation sample, the mean age was 52.9 +/- 14.2 years and 44% were male. Correlations between the SF-6D and the ODI, BPNRS, and LPNRS were statistically significant (P < 0.0001) with correlation coefficients of 0.82, 0.78, and 0.72, respectively. The regression equation using ODI, BPNRS, and LPNRS to predict SF-6D had an R2 of 0.69 and a root mean square error of 0.076. The model using ODI alone had an R2 of 0.67 and a root mean square error of 0.078.
The correlation coefficient between the observed and estimated SF-6D score was 0.80. In the validation analysis, there was no statistically significant difference (P = 0.11) between actual mean SF-6D (0.55 +/- 0.12) and the estimated mean SF-6D score (0.55 +/- 0.10) using the ODI regression model. This regression-based algorithm may be used to predict SF-6D scores in studies of lumbar degenerative disease that have collected ODI but not utility scores.
Utilization of structural steel in buildings.
Moynihan, Muiris C; Allwood, Julian M
2014-08-08
Over one-quarter of steel produced annually is used in the construction of buildings. Making this steel causes carbon dioxide emissions, which climate change experts recommend be halved over the next 37 years. One option to achieve this is to design and build more efficiently, still delivering the same service from buildings but using less steel to do so. To estimate how much steel could be saved this way, 23 steel-framed building designs are studied, sourced from leading UK engineering firms. The utilization of each beam is found and the buildings are analysed to find patterns. The results for over 10 000 beams show that average utilization is below 50% of capacity. The primary reason for this low value is 'rationalization': providing extra material to reduce labour costs. By designing for minimum material rather than minimum cost, steel use in buildings could be drastically reduced, leading to an equivalent reduction in 'embodied' carbon emissions.
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for estimating the uncertainty of diffusion parameters in quantitative body DW-MRI assessment. Uncertainty estimation of diffusion parameters from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameters uncertainty estimation due to the non-linearity of the body diffusion model. To overcome this limitation, our method uses the unscented transform to compute the residuals rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
Canning, Elizabeth A.; Harackiewicz, Judith M.
2015-01-01
Social-psychological interventions in education have used a variety of “self-persuasion” or “saying-is-believing” techniques to encourage students to articulate key intervention messages. These techniques are used in combination with more overt strategies, such as the direct communication of messages in order to promote attitude change. However, these different strategies have rarely been systematically compared, particularly in controlled laboratory settings. We focus on one intervention based in expectancy-value theory designed to promote perceptions of utility value in the classroom and test different intervention techniques to promote interest and performance. Across three laboratory studies, we used a mental math learning paradigm in which we varied whether students wrote about utility value for themselves or received different forms of directly-communicated information about the utility value of a novel mental math technique. In Study 1, we examined the difference between directly-communicated and self-generated utility-value information and found that directly-communicated utility-value information undermined performance and interest for individuals who lacked confidence, but that self-generated utility had positive effects. However, Study 2 suggests that these negative effects of directly-communicated utility value can be ameliorated when participants are also given the chance to generate their own examples of utility value, revealing a synergistic effect of directly-communicated and self-generated utility value. In Study 3, we found that individuals who lacked confidence benefited more when everyday examples of utility value were communicated, rather than career and school examples. PMID:26495326
Park, Sun-Young; Park, Eun-Ja; Suh, Hae Sun; Ha, Dongmun; Lee, Eui-Kyung
2017-08-01
Although nonpreference-based disease-specific measures are widely used in clinical studies, they cannot generate utilities for economic evaluation. A solution to this problem is to estimate utilities from disease-specific instruments using the mapping function. This study aimed to develop a transformation model for mapping the pruritus-visual analog scale (VAS) to the EuroQol 5-Dimension 3-Level (EQ-5D-3L) utility index in pruritus. A cross-sectional survey was conducted with a sample (n = 268) drawn from the general population of South Korea. Data were randomly divided into 2 groups, one for estimating and the other for validating mapping models. To select the best model, we developed and compared 3 separate models using demographic information and the pruritus-VAS as independent variables. The predictive performance was assessed using the mean absolute deviation and root mean square error in a separate dataset. Among the 3 models, model 2 using age, age squared, sex, and the pruritus-VAS as independent variables had the best performance based on the goodness of fit and model simplicity, with a log likelihood of 187.13. The 3 models had similar precision errors based on mean absolute deviation and root mean square error in the validation dataset. No statistically significant difference was observed between the mean observed and predicted values in all models. In conclusion, model 2 was chosen as the preferred mapping model. Outcomes measured as the pruritus-VAS can be transformed into the EQ-5D-3L utility index using this mapping model, which makes an economic evaluation possible when only pruritus-VAS data are available. © 2017 John Wiley & Sons, Ltd.
Saint-Maurice, Pedro F; Welk, Gregory J; Beyler, Nicholas K; Bartee, Roderick T; Heelan, Kate A
2014-05-16
The utility of self-report measures of physical activity (PA) in youth can be greatly enhanced by calibrating self-report output against objectively measured PA data.This study demonstrates the potential of calibrating self-report output against objectively measured physical activity (PA) in youth by using a commonly used self-report tool called the Physical Activity Questionnaire (PAQ). A total of 148 participants (grades 4 through 12) from 9 schools (during the 2009-2010 school year) wore an Actigraph accelerometer for 7 days and then completed the PAQ. Multiple linear regression modeling was used on 70% of the available sample to develop a calibration equation and this was cross validated on an independent sample of participants (30% of sample). A calibration model with age, gender, and PAQ scores explained 40% of the variance in values for the percentage of time in moderate-to-vigorous PA (%MVPA) measured from the accelerometers (%MVPA = 14.56 - (sex*0.98) - (0.84*age) + (1.01*PAQ)). When tested on an independent, hold-out sample, the model estimated %MVPA values that were highly correlated with the recorded accelerometer values (r = .63) and there was no significant difference between the estimated and recorded activity values (mean diff. = 25.3 ± 18.1 min; p = .17). These results suggest that the calibrated PAQ may be a valid alternative tool to activity monitoring instruments for estimating %MVPA in groups of youth.
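The calibration model above is fully specified in the abstract, so it can be applied directly. A minimal sketch follows; note that the sex coding (0 = female, 1 = male) is an assumption, as the abstract does not state which coding was used:

```python
def estimate_pct_mvpa(sex: int, age: float, paq: float) -> float:
    """Estimate %MVPA from the reported calibration equation:

    %MVPA = 14.56 - 0.98*sex - 0.84*age + 1.01*PAQ

    The sex coding (0 = female, 1 = male) is assumed, not stated
    in the abstract.
    """
    return 14.56 - 0.98 * sex - 0.84 * age + 1.01 * paq

# Example: a 12-year-old boy with a PAQ score of 3.0
print(round(estimate_pct_mvpa(sex=1, age=12, paq=3.0), 2))  # -> 6.53
```

As the equation implies, estimated %MVPA declines with age and rises with self-reported PAQ score, consistent with the calibration intent described in the abstract.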
NASA Astrophysics Data System (ADS)
Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.
2011-10-01
Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
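The inversion methods above reconstruct a lognormal size distribution and then compute surface area from it. As an illustration of that final step only (this is not the authors' inversion routine), the standard lognormal moment relation (Hatch-Choate) gives the surface-area concentration from a number concentration, count median diameter (CMD), and geometric standard deviation (GSD):

```python
import math

def surface_area_conc(number_conc: float, cmd_um: float, gsd: float) -> float:
    """Surface-area concentration (um^2/cm^3) of a lognormal aerosol.

    Uses the lognormal moment relation E[d^2] = CMD^2 * exp(2*ln(GSD)^2)
    (Hatch-Choate), so total surface area = N * pi * E[d^2].
    number_conc in particles/cm^3, cmd_um in micrometres.
    """
    ln2_gsd = math.log(gsd) ** 2
    return number_conc * math.pi * cmd_um ** 2 * math.exp(2.0 * ln2_gsd)

# Example: 1e4 particles/cm^3, CMD 0.1 um, GSD 2.0
print(surface_area_conc(1e4, 0.1, 2.0))
```

For a fixed CMD and number concentration, a broader distribution (larger GSD) yields a larger surface area, which is why the assumed GSD in the simpler inversion (SAINV2) matters.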
NASA Astrophysics Data System (ADS)
Zaidi, N. A.; Rosli, Muhamad Farizuan; Effendi, M. S. M.; Abdullah, Mohamad Hariri
2017-09-01
For injection-molding applications of Polyethylene Terephthalate (PET) plastic, the strength, durability, and stiffness properties of jointing systems for wood furniture were analyzed using the Finite Element Method (FEM). The FEM was utilized to analyze the PET jointing system for Oak and Pine as wood-based furniture materials. Different PET pattern designs for wood furniture joints give different values of furniture strength. The results show that the wood specimen with grooves and an eclipse-pattern PET joint gives a lower global estimated error of 28.90%, compared with a global estimated error of 63.21% for the rectangular, non-grooved wood specimen.
Klaeboe, Ronny
2005-09-01
When Gardermoen replaced Fornebu as the main airport for Oslo, aircraft noise levels increased in recreational areas near Gardermoen and decreased in areas near Fornebu. Krog and Engdahl [J. Acoust. Soc. Am. 116, 323-333 (2004)] estimate that recreationists' annoyance from aircraft noise in these areas changed more than would be anticipated from the actual noise changes. However, the sizes of their estimated "situation" effects are not credible. One possible reason for the anomalous results is that standard regression assumptions become violated when motivational factors are inserted into the regression model. Standardized regression coefficients (beta values) should also not be utilized for comparisons across equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Andrew D.; Barbose, Galen L.; Seel, Joachim
The rapid growth of distributed solar photovoltaics (DPV) has critical implications for U.S. utility planning processes. This report informs utility planning through a comparative analysis of roughly 30 recent utility integrated resource plans or other generation planning studies, transmission planning studies, and distribution system plans. It reveals a spectrum of approaches to incorporating DPV across nine key planning areas, and it identifies areas where even the best current practices might be enhanced. (1) Forecasting DPV deployment: Because it explicitly captures several predictive factors, customer-adoption modeling is the most comprehensive forecasting approach. It could be combined with other forecasting methods to generate a range of potential futures. (2) Ensuring robustness of decisions to uncertain DPV quantities: using a capacity-expansion model to develop least-cost plans for various scenarios accounts for changes in net load and the generation portfolio; an innovative variation of this approach combines multiple per-scenario plans with trigger events, which indicate when conditions have changed sufficiently from the expected to trigger modifications in resource-acquisition strategy. (3) Characterizing DPV as a resource option: Today's most comprehensive plans account for all of DPV's monetary costs and benefits. An enhanced approach would address non-monetary and societal impacts as well. (4) Incorporating the non-dispatchability of DPV into planning: Rather than having a distinct innovative practice, innovation in this area is represented by evolving methods for capturing this important aspect of DPV. (5) Accounting for DPV's location-specific factors: The innovative propensity-to-adopt method employs several factors to predict future DPV locations. Another emerging utility innovation is locating DPV strategically to enhance its benefits. 
(6) Estimating DPV's impact on transmission and distribution investments: Innovative practices are being implemented to evaluate system needs, hosting capacities, and system investments needed to accommodate DPV deployment. (7) Estimating avoided losses associated with DPV: A time-differentiated marginal loss rate provides the most comprehensive estimate of avoided losses due to DPV, but no studies appear to use it. (8) Considering changes in DPV's value with higher solar penetration: Innovative methods for addressing the value changes at high solar penetrations are lacking among the studies we evaluate. (9) Integrating DPV in planning across generation, transmission, and distribution: A few states and regions have started to develop more comprehensive processes that link planning forums, but there are still many issues to address.
A Framework for Valuing Investments in a Nurturing Society: Opportunities for Prevention Research.
Crowley, Max; Jones, Damon
2017-03-01
Investing in strategies that aim to build a more nurturing society offers tremendous opportunities for the field of prevention science. Yet, scientists struggle to consistently take their research beyond effectiveness evaluations and actually value the impact of preventive strategies. Ultimately, it is clear that convincing policymakers to make meaningful investments in children and youth will require estimates of the fiscal impact of such strategies across public service systems. The framework offered here values such investments. First, we review current public spending on children and families. Then, we describe how to quantify and monetize the impact of preventive interventions. This includes a new measurement strategy for assessing multisystem service utilization and a price list for key service provision from public education, social services, criminal justice, health care and tax systems.
Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M
2016-08-01
Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS): values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, confirming the anticipated better performance of the pp-LFER model over COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.
A nondestructive method to estimate the chlorophyll content of Arabidopsis seedlings
Liang, Ying; Urano, Daisuke; Liao, Kang-Ling; ...
2017-04-14
Chlorophyll content decreases in plants under stress conditions; therefore it is commonly used as an indicator of plant health. Arabidopsis thaliana offers a convenient and fast way to test physiological phenotypes of mutations and treatments. However, chlorophyll measurements with conventional solvent extraction are not applicable to Arabidopsis leaves due to their small size, especially when grown on culture dishes. We provide a nondestructive method for chlorophyll measurement whereby the red, green and blue (RGB) values of a color leaf image are used to estimate the chlorophyll content of Arabidopsis leaves. The method accommodates different profiles of digital cameras by incorporating the ColorChecker chart to make the digital negative profiles, to adjust the white balance, and to calibrate the exposure rate differences caused by the environment, so that this method is applicable in any environment. We chose an exponential function model to estimate chlorophyll content from the RGB values, and fitted the model parameters with physical measurements of chlorophyll contents. As further proof of utility, this method was used to estimate the chlorophyll content of G protein mutants grown on different sugar-to-nitrogen ratios. Our method is a simple, fast, inexpensive, and nondestructive estimation of the chlorophyll content of Arabidopsis seedlings. This method led to the discovery that G proteins are important in sensing the C/N balance to control chlorophyll content in Arabidopsis.
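The abstract specifies an exponential model linking calibrated RGB values to chlorophyll content, but not its fitted parameters or predictor. A hedged sketch of the model form, with placeholder coefficients A and B and an assumed red-channel predictor (the paper fits its own parameters against physical chlorophyll measurements):

```python
import math

# Placeholder coefficients: the paper fits the exponential model to
# physical chlorophyll measurements; A and B here are illustrative only.
A, B = 10.0, -3.0

def chlorophyll_from_rgb(r: int, g: int, b: int) -> float:
    """Estimate relative chlorophyll content from a calibrated leaf pixel.

    Model form: chl = A * exp(B * x). The predictor x (normalized red
    intensity in [0, 1]) is an assumed choice: chlorophyll absorbs red
    light, so brighter red implies less chlorophyll. The paper derives
    its predictor from white-balanced, exposure-calibrated RGB values.
    """
    x = r / 255.0
    return A * math.exp(B * x)
```

With B negative, a redder pixel yields a lower chlorophyll estimate; any real use of this form requires refitting A and B (and the choice of predictor) against solvent-extraction measurements as the paper does.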
A Comparison of Japan and U.K. SF-6D Health-State Valuations Using a Non-Parametric Bayesian Method.
Kharroubi, Samer A
2015-08-01
There is interest in the extent to which valuations of health may differ between different countries and cultures, but few studies have compared preference values of health states obtained in different countries. We sought to estimate and compare two directly elicited valuations for SF-6D health states between the Japan and U.K. general adult populations using Bayesian methods. We analysed data from two SF-6D valuation studies where, using similar standard gamble protocols, values for 241 and 249 states were elicited from representative samples of the Japan and U.K. general adult populations, respectively. We estimate a function applicable across both countries that explicitly accounts for the differences between them, and is estimated using data from both countries. The results suggest that differences in SF-6D health-state valuations between the Japan and U.K. general populations are potentially important. The magnitude of these country-specific differences in health-state valuation depended, however, in a complex way on the levels of individual dimensions. The new Bayesian non-parametric method is a powerful approach for analysing data from multiple nationalities or ethnic groups, to understand the differences between them and potentially to estimate the underlying utility functions more efficiently.
Youssef, Noha; Sheik, Cody S.; Krumholz, Lee R.; Najar, Fares Z.; Roe, Bruce A.; Elshahed, Mostafa S.
2009-01-01
Pyrosequencing-based 16S rRNA gene surveys are increasingly utilized to study highly diverse bacterial communities, with special emphasis on utilizing the large number of sequences obtained (tens to hundreds of thousands) for species richness estimation. However, it is not yet clear how the number of operational taxonomic units (OTUs) and, hence, species richness estimates determined using shorter fragments at different taxonomic cutoffs correlates with the number of OTUs assigned using longer, nearly complete 16S rRNA gene fragments. We constructed a 16S rRNA clone library from an undisturbed tallgrass prairie soil (1,132 clones) and used it to compare species richness estimates obtained using eight pyrosequencing candidate fragments (99 to 361 bp in length) and the nearly full-length fragment. Fragments encompassing the V1 and V2 (V1+V2) region and the V6 region (generated using primer pairs 8F-338R and 967F-1046R) overestimated species richness; fragments encompassing the V3, V7, and V7+V8 hypervariable regions (generated using primer pairs 338F-530R, 1046F-1220R, and 1046F-1392R) underestimated species richness; and fragments encompassing the V4, V5+V6, and V6+V7 regions (generated using primer pairs 530F-805R, 805F-1046R, and 967F-1220R) provided estimates comparable to those obtained with the nearly full-length fragment. These patterns were observed regardless of the alignment method utilized or the parameter used to gauge comparative levels of species richness (number of OTUs observed, slope of scatter plots of pairwise distance values for short and nearly complete fragments, and nonparametric and parametric species richness estimates). Similar results were obtained when analyzing three other datasets derived from soil, adult Zebrafish gut, and basaltic formations in the East Pacific Rise. 
Regression analysis indicated that these observed discrepancies in species richness estimates within various regions could readily be explained by the proportions of hypervariable, variable, and conserved base pairs within an examined fragment. PMID:19561178
Grosse, Scott D; Chaugule, Shraddha S; Hay, Joel W
2015-01-01
Estimates of preference-weighted health outcomes or health state utilities are needed to assess improvements in health in terms of quality-adjusted life-years. Gains in quality-adjusted life-years are used to assess the cost–effectiveness of prophylactic use of clotting factor compared with on-demand treatment among people with hemophilia, a congenital bleeding disorder. Published estimates of health utilities for people with hemophilia vary, contributing to uncertainty in the estimates of cost–effectiveness of prophylaxis. Challenges in estimating utility weights for the purpose of evaluating hemophilia treatment include selection bias in observational data, difficulty in adjusting for predictors of health-related quality of life and lack of preference-based data comparing adults with lifetime or primary prophylaxis versus no prophylaxis living within the same country and healthcare system. PMID:25585817
Sex estimation from sternal measurements using multidetector computed tomography.
Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur
2014-12-01
We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification, and skeletal surveys are the main methods used in sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was performed on 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age: 44 ± 8.1 years [range: 30-60 years]) were included in the study. Manubrium length (ML), mesosternum length (MSL), sternebra 1 width (S1W), and sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between the sexes were evaluated by Student's t-test. Predictive factors for sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurements were significantly higher than female values (P < 0.001), while SI was significantly lower in males (P < 0.001). In discriminant analysis, MSL had a high accuracy rate, 80.2% in females and 80.9% in males, and also had the best sensitivity (75.9%) and specificity (87.6%). Accuracy rates were above 80% in three stepwise discriminant analyses for both sexes; stepwise 1 (ML, MSL, S1W, S3W) had the highest accuracy rate, 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum may provide important information for sex estimation.
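A discriminant analysis of this kind can be sketched with a Fisher linear discriminant. The means and spreads below are invented stand-ins (loosely mimicking sexual dimorphism in sternal measurements), not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# hypothetical sternal measurements (mm): ML, MSL, S1W, S3W
male = rng.normal([52, 95, 30, 28], 4.0, size=(n, 4))
female = rng.normal([47, 82, 27, 25], 4.0, size=(n, 4))

mu_m, mu_f = male.mean(0), female.mean(0)
# pooled within-class covariance
cov = (np.cov(male.T) + np.cov(female.T)) / 2.0
w = np.linalg.solve(cov, mu_m - mu_f)        # Fisher discriminant direction
thresh = w @ (mu_m + mu_f) / 2.0             # midpoint cutoff

pred_male = np.concatenate([male, female]) @ w > thresh
truth = np.array([True] * n + [False] * n)
acc = (pred_male == truth).mean()
print(round(acc, 3))
```

With strongly dimorphic synthetic measurements the resubstitution accuracy is high; on real data a stepwise procedure would also select which measurements to retain, as in the study.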
Brandt, Jaden; Alkabanni, Wajd; Alessi-Severini, Silvia; Leong, Christine
2018-04-04
Drug utilization research on benzodiazepines remains important for measuring trends in consumption within and across borders over time for the sake of monitoring prescribing patterns and identifying potential population safety concerns. The defined daily dose (DDD) system by the World Health Organization (WHO) remains the internationally accepted standard for measuring drug consumption; however, beyond consumption, DDD-based results are difficult to interpret when individual agents are compared with one another or are pooled into a total class-based estimate. The diazepam milligram equivalent (DME) system provides approximate conversions between benzodiazepines and Z-drugs (i.e. zopiclone, zolpidem, zaleplon) based on their pharmacologic potency. Despite this, conversion of total dispensed benzodiazepine quantities into DME values retains diazepam milligrams as the total unit of measurement, which is also impractical for population-level interpretation. In this paper, we propose the use of an integrated DME-DDD metric to obviate the limitations encountered when the component metrics are used in isolation. Through a case example, we demonstrate significant change in results between the DDD and DME-DDD method. Unlike the DDD method, the integrated DME-DDD metric offers estimation of population pharmacologic exposure, and enables superior interpretation of drug utilization results, especially for drug class summary reporting.
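The proposed integrated metric can be sketched as follows: convert each agent's dispensed milligrams to diazepam milligram equivalents, sum across the class, then divide by the WHO DDD for oral diazepam (10 mg) to express the total in DDD-like units. The equivalence factors and dispensed totals below are illustrative assumptions, as published DME conversions vary between sources:

```python
# Illustrative DME conversion factors: mg of diazepam equivalent to 1 mg
# of each agent (e.g. 1 mg lorazepam ~ 10 mg diazepam). Assumed values.
DME_PER_MG = {"diazepam": 1.0, "lorazepam": 10.0, "zopiclone": 0.667}
DIAZEPAM_DDD_MG = 10.0  # WHO DDD for oral diazepam

def dme_ddd(dispensed_mg):
    """Total class consumption in integrated DME-DDD units."""
    total_dme = sum(mg * DME_PER_MG[drug] for drug, mg in dispensed_mg.items())
    return total_dme / DIAZEPAM_DDD_MG

# hypothetical dispensed totals (mg) over some period
totals = {"diazepam": 5000.0, "lorazepam": 300.0, "zopiclone": 7500.0}
print(dme_ddd(totals))
```

Unlike a plain DDD sum, the pooled figure is potency-weighted, so a small dispensed mass of a high-potency agent contributes proportionately to the class total.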
Beresniak, Ariel; Medina-Lara, Antonieta; Auray, Jean Paul; De Wever, Alain; Praet, Jean-Claude; Tarricone, Rosanna; Torbica, Aleksandra; Dupont, Danielle; Lamure, Michel; Duru, Gerard
2015-01-01
Quality-adjusted life-years (QALYs) have been used since the 1980s as a standard health outcome measure for conducting cost-utility analyses, which are often inadequately labeled as 'cost-effectiveness analyses'. This synthetic outcome, which combines the quantity of life lived with its quality expressed as a preference score, is currently recommended as the reference case by some health technology assessment (HTA) agencies. While critics of the QALY approach have expressed concerns about equity and ethical issues, surprisingly few have tested the basic methodological assumptions supporting the QALY equation so as to establish its scientific validity. The main objective of the ECHOUTCOME European project was to test the validity of the underlying assumptions of the QALY outcome and its relevance in health decision making. An experiment was conducted with 1,361 subjects from Belgium, France, Italy, and the UK. The subjects were asked to express their preferences regarding various hypothetical health states derived from combining different health states with time durations, in order to compare the observed utility values of the couples (health state, time) with the utility values calculated using the QALY formula. Observed and calculated utility values of the couples (health state, time) differed significantly, confirming that the preferences expressed by the respondents were not consistent with the QALY theoretical assumptions. This European study contributes to establishing that the QALY multiplicative model is an invalid measure. This explains why cost/QALY estimates may vary greatly, leading to inconsistent recommendations relevant to providing access to innovative medicines and health technologies. HTA agencies should consider other, more robust methodological approaches to guide reimbursement decisions.
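The multiplicative model under test scores a (health state, time) couple as the product of the state's utility and its duration; the experiment compares that calculated value against a directly elicited one. A worked example with invented numbers:

```python
def qaly(utility, years):
    """Multiplicative QALY model: value of (state, time) = U(state) * T."""
    return utility * years

calculated = qaly(0.6, 10.0)   # model: 0.6 * 10 = 6.0 QALYs
observed = 5.2                 # invented directly elicited value for the couple
discrepancy = calculated - observed
print(calculated, round(discrepancy, 2))
```

A systematic nonzero discrepancy across many couples is precisely the pattern the study reports as evidence against the model's assumptions (e.g. constant proportional trade-off).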
Detection of alpha particles using DNA/Al Schottky junctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Ta'ii, Hassan Maktuff Jaber (Department of Physics, Faculty of Science, University of Al-Muthana, Al-Muthana 66001); Periasamy, Vengadesh
2015-09-21
Deoxyribonucleic acid (DNA) can be utilized in an organic-metallic rectifying structure to detect radiation, especially alpha particles. This has become much more important in recent years due to crucial environmental detection needs in both peace and war. In this work, we fabricated an aluminum (Al)/DNA/Al structure and generated current-voltage characteristics upon exposure to alpha radiation. Two models were utilized to investigate these current profiles: the standard conventional thermionic emission model and Cheung and Cheung's method. Using these models, the barrier height, Richardson constant, ideality factor and series resistance of the metal-DNA-metal structure were analyzed in real time. The barrier height Φ calculated using the conventional method for the non-radiated structure was 0.7149 eV, increasing to 0.7367 eV after 4 min of radiation. Barrier height values were observed to increase after 20, 30 and 40 min of radiation, except for 6, 8, and 10 min, which registered a decrease to about 0.67 eV. By comparison, Cheung and Cheung's method registered 0.6983 eV and 0.7528 eV for the non-radiated structure and 2 min of radiation, respectively, with barrier height values observed to decrease from 4 min (0.61 eV) to 40 min (0.6945 eV). The study shows that the conventional thermionic emission model could be practically utilized for estimating the diode parameters, including the effect of series resistance. These changes in the electronic properties of the Al/DNA/Al junctions could therefore be utilized in the manufacture of sensitive alpha particle sensors.
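Under the thermionic emission model, the barrier height follows from the saturation current of I = Is·(exp(qV/nkT) − 1) with Is = A·A*·T²·exp(−Φ/kT). A minimal extraction sketch; the contact area, Richardson constant, and saturation current are assumed values, not those of the Al/DNA/Al devices:

```python
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
T = 300.0               # temperature, K
AREA = 1e-2             # contact area, cm^2 (assumed)
A_STAR = 120.0          # Richardson constant, A cm^-2 K^-2 (assumed)

def barrier_height(i_sat):
    """Phi (eV) from the saturation current, inverting
    Is = AREA * A_STAR * T^2 * exp(-Phi / kT)."""
    return K_B * T * math.log(AREA * A_STAR * T * T / i_sat)

print(round(barrier_height(1e-9), 3))  # eV for an assumed Is of 1 nA
```

The logarithmic dependence means modest shifts in Is translate into small barrier height changes, consistent with the sub-0.1 eV variations reported under irradiation.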
Tu, H Y V; Pemberton, J; Lorenzo, A J; Braga, L H
2015-10-01
For infants with hydronephrosis, continuous antibiotic prophylaxis (CAP) may reduce urinary tract infections (UTIs); however, its value remains controversial. Recent studies have suggested that neonates with severe obstructive hydronephrosis are at an increased risk of UTIs, and support the use of CAP. Other studies have demonstrated the negligible risk for UTIs in the setting of suspected ureteropelvic junction obstruction and have highlighted the limited role of CAP in hydronephrosis. Furthermore, economic studies in this patient population have been sparse. This study aimed to evaluate whether the use of CAP is an efficient expenditure for preventing UTIs in children with high-grade hydronephrosis within the first 2 years of life. A decision model was used to estimate expected costs, clinical outcomes and quality-adjusted life years (QALYs) of CAP versus no CAP (Fig. 1). Cost data were collected from provincial databases and converted to 2013 Canadian dollars (CAD). Estimates of risks and health utility values were extracted from published literature. The analysis was performed over a time horizon of 2 years. One-way and probabilistic sensitivity analyses were carried out to assess uncertainty and robustness. Overall, CAP use was less costly and provided a minimal increase in health utility when compared to no CAP (Table). The mean cost over two years for CAP and no CAP was CAD$1571.19 and CAD$1956.44, respectively. The use of CAP reduced outpatient-managed UTIs by 0.21 infections and UTIs requiring hospitalization by 0.04 infections over 2 years. Cost-utility analysis revealed an increase of 0.0001 QALYs/year when using CAP. The CAP arm exhibited strong dominance over no CAP in all sensitivity analyses and across all willingness-to-pay thresholds. The use of CAP exhibited strong dominance in the economic evaluation, despite a small gain of 0.0001 QALYs/year. Whether this slight gain is clinically significant remains to be determined. 
However, small QALY gains have been reported in other pediatric economic evaluations. Strengths of this study included the use of data from a recent systematic review and meta-analysis, in addition to a comprehensive probabilistic sensitivity analysis. Limitations of this study included the use of estimates for UTI probabilities in the second year of life and health utility values, given that they were lacking in the literature. Spontaneous resolution of hydronephrosis and surgical management were also not implemented in this model. To prevent UTIs within the first 2 years of life in infants with high-grade hydronephrosis, this probabilistic model has shown that CAP use is a prudent expenditure of healthcare resources when compared to no CAP. Copyright © 2015 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
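At the top level, the two-arm comparison reduces to expected-value arithmetic on the reported point estimates (a full model would propagate the UTI probabilities through the decision tree, which is omitted here):

```python
# Point estimates reported above; QALY figures are per year vs no CAP.
cap = {"cost": 1571.19, "qaly_per_year": 0.0001}
no_cap = {"cost": 1956.44, "qaly_per_year": 0.0}

d_cost = cap["cost"] - no_cap["cost"]
d_qaly = (cap["qaly_per_year"] - no_cap["qaly_per_year"]) * 2  # 2-year horizon
dominant = d_cost < 0 and d_qaly > 0   # cheaper AND more effective
print(round(d_cost, 2), d_qaly, dominant)
```

When one arm is both less costly and more effective, no incremental cost-effectiveness ratio is needed: the arm is dominant at every willingness-to-pay threshold, as the sensitivity analyses found.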
Application of online measures to monitor and evaluate multiplatform fusion performance
NASA Astrophysics Data System (ADS)
Stubberud, Stephen C.; Kowalski, Charlene; Klamer, Dale M.
1999-07-01
A primary concern of multiplatform data fusion is assessing the quality and utility of data shared among platforms. Constraints such as platform and sensor capability and task load necessitate development of an on-line system that computes a metric to determine which other platform can provide the best data for processing. To determine data quality, we are implementing an approach based on entropy coupled with intelligent agents. Entropy measures the quality of processed information such as localization, classification, and ambiguity in measurement-to-track association; lower entropy scores imply less uncertainty about a particular target. When new information is provided, we compute the level of improvement a particular track obtains from one measurement to another. This measure permits us to evaluate the utility of the new information. We couple entropy with intelligent agents that provide two main data-gathering functions: estimation of another platform's performance and evaluation of the new measurement data's quality. Both functions result from the entropy metric. The intelligent agent on a platform makes an estimate of another platform's measurement and provides it to its own fusion system, which can then incorporate it for a particular target. A resulting entropy measure is then calculated and returned to its own agent. From this metric, the agent determines a perceived value of the offboard platform's measurement. If the value is satisfactory, the agent requests the measurement from the other platform, usually by interacting with the other platform's agent. Once the actual measurement is received, entropy is again computed and the agent assesses its estimation process and refines it accordingly.
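For the classification case, such an entropy score can be sketched as the Shannon entropy of a track's class distribution; the improvement from fusing a new measurement is the entropy reduction. The distributions below are invented:

```python
import math

def shannon_entropy(p):
    """Entropy (nats) of a discrete classification distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

before = [0.4, 0.3, 0.3]   # ambiguous target classification
after = [0.8, 0.1, 0.1]    # after fusing an offboard measurement
gain = shannon_entropy(before) - shannon_entropy(after)
print(gain > 0)   # positive gain -> the new data reduced uncertainty
```

An agent comparing the predicted gain (from its estimate of the offboard measurement) with the realized gain (from the actual measurement) has exactly the feedback signal described above for refining its estimation process.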
Frick, Kevin D; Clark, Melissa A; Steinwachs, Donald M; Langenberg, Patricia; Stovall, Dale; Munro, Malcolm G; Dickersin, Kay
2009-01-01
In this study, we sought to 1) describe elements of the financial and quality-of-life burden of dysfunctional uterine bleeding (DUB) from the perspective of women who agreed to obtain surgical treatment; 2) explore associations between DUB symptom characteristics and the financial and quality-of-life burden; 3) estimate the annual dollar value of the financial burden; and 4) estimate the most that could be spent on surgery to eliminate DUB symptoms for which medical treatment has been unsuccessful that would result in a $50,000/quality-adjusted life-year incremental cost-effectiveness ratio. We collected baseline data on DUB symptoms and aspects of the financial and quality-of-life burden for 237 women agreeing to surgery for DUB in a randomized trial comparing hysterectomy with endometrial ablation. Measures included out-of-pocket pharmaceutical expenditures, excess expenditures on pads or tampons, the value of time missed from paid work and home management activities, and health utility. We used chi2 and t tests to assess the statistical significance of associations between DUB characteristics and the financial and quality-of-life burden. The annual financial burden was estimated. Pelvic pain and cramps were associated with activity limitations and tiredness was associated with a lower health utility. Excess pharmaceutical and pad and tampon costs were $333 per patient per year (95% confidence interval [CI], $263-$403). Excess paid work and home management loss costs were $2,291 per patient per year (95% CI, $1847-$2752). Effective surgical treatment costing $40,000 would be cost-effective compared with unsuccessful medical treatment. The financial and quality-of-life effects of DUB represent a substantial burden.
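The threshold calculation in aim 4 is a rearranged incremental cost-effectiveness ratio: setting ICER = (cost_surgery − cost_medical) / QALY gain ≤ $50,000/QALY and solving for the surgery cost. A sketch with hypothetical inputs (the $10,000 comparator cost and 0.6 QALY gain are invented, not the study's figures):

```python
# ICER = (cost_surgery - cost_medical) / (qaly_surgery - qaly_medical).
# Solving ICER <= threshold for cost_surgery gives the most that could
# be spent on surgery and remain cost-effective.
def max_justifiable_cost(cost_medical, qaly_gain, threshold=50_000):
    return cost_medical + threshold * qaly_gain

print(max_justifiable_cost(cost_medical=10_000, qaly_gain=0.6))
```

The answer scales linearly in both the QALY gain and the willingness-to-pay threshold, which is why studies typically report it alongside a threshold sensitivity analysis.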
Using twig diameters to estimate browse utilization on three shrub species in southeastern Montana
Mark A. Rumble
1987-01-01
Browse utilization estimates based on twig length and twig weight were compared for skunkbush sumac, wax currant, and chokecherry. Linear regression analysis was valid for the twig length data, whereas the twig weight equations were nonlinear. Estimates based on twig weight were more accurate. Problems encountered during development of a utilization model are discussed.
A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine
NASA Astrophysics Data System (ADS)
Guo, T. H.; Musgrave, J.
1992-11-01
In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. 
The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.
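The two ingredients described above can be roughly sketched in numpy, using a linear principal-subspace reconstruction as a stand-in for the autoassociative network (its residual is the sensor-consistency check) and a fitted linear map as the mixture ratio estimator. All data are synthetic; the sensor channels, noise level, and mixture ratio relationship are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Two latent engine variables drive five synthetic "sensor" channels,
# standing in for chamber pressure, volumetric flows, temperatures, etc.
latent = rng.normal(size=(n, 2))
mix = rng.normal(size=(2, 5))
sensors = latent @ mix + 0.001 * rng.normal(size=(n, 5))
mixture_ratio = 6.0 + latent @ np.array([0.3, -0.2])

# "Autoassociative" step: reconstruct sensors from a rank-2 principal
# subspace; a large residual on one channel would flag a failed sensor.
mu = sensors.mean(axis=0)
_, _, vt = np.linalg.svd(sensors - mu, full_matrices=False)
basis = vt[:2]
recon = (sensors - mu) @ basis.T @ basis + mu
residual = np.abs(sensors - recon).max()

# Estimation step: linear map from validated sensors to mixture ratio.
A = np.c_[sensors, np.ones(n)]
coef, *_ = np.linalg.lstsq(A, mixture_ratio, rcond=None)
err = np.abs(A @ coef - mixture_ratio).max()
print(residual < 0.05, err < 0.05)
```

The real estimator is nonlinear and trained on simulation and test-bed firing data, but the structure is the same: redundancy across dissimilar sensors supports both failure detection and synthesis of the unmeasured quantity.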
A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Guo, T. H.; Musgrave, J.
1992-01-01
In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. 
The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.
Brown, Ian J.; Dyer, Alan R.; Chan, Queenie; Cogswell, Mary E.; Ueshima, Hirotsugu; Stamler, Jeremiah; Elliott, Paul
2013-01-01
High intakes of dietary sodium are associated with elevated blood pressure levels and an increased risk of cardiovascular disease. National and international guidelines recommend reduced sodium intake in the general population, which necessitates population-wide surveillance. We assessed the utility of casual (spot) urine specimens in estimating 24-hour urinary sodium excretion as a marker of sodium intake in the International Cooperative Study on Salt, Other Factors, and Blood Pressure. There were 5,693 participants recruited in 1984–1987 at the ages of 20–59 years from 29 North American and European samples. Participants were randomly assigned to test or validation data sets. Equations derived from casual urinary sodium concentration and other variables in the test data were applied to the validation data set. Correlations between observed and estimated 24-hour sodium excretion were 0.50 for individual men and 0.51 for individual women; the values were 0.79 and 0.71, respectively, for population samples. Bias in mean values (observed minus estimated) was small; for men and women, the values were −1.6 mmol per 24 hours and 2.3 mmol per 24 hours, respectively, at the individual level and −1.8 mmol per 24 hours and 2.2 mmol per 24 hours, respectively, at the population level. Proportions of individuals with urinary 24-hour sodium excretion above the recommended levels were slightly overestimated by the models. Casual urine specimens may be a useful, low-burden, low-cost alternative to 24-hour urine collections for estimation of population sodium intakes; ongoing calibration with study-specific 24-hour urinary collections is recommended to increase validity. PMID:23673246
Brown, Gary C; Brown, Melissa M; Brown, Heidi C; Kindermann, Sylvia; Sharma, Sanjay
2007-01-01
To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. 
Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy with verteporfin confers a 7.8% to 10.7% value gain for the treatment of classic subfoveal choroidal neovascularization. Intravitreal ranibizumab therapy confers greater than a 15% value gain for the treatment of subfoveal occult and minimally classic subfoveal choroidal neovascularization. The majority of cost-utility studies performed on interventions for neovascular macular degeneration are value-based medicine studies and thus are comparable. Value-based analyses of neovascular ARMD monotherapies demonstrate the power of value-based medicine to improve quality of care and concurrently maximize the efficacy of healthcare resource use in public policy. The comparability of value-based medicine cost-utility analyses has important implications for overall practice standards and public policy. The adoption of value-based medicine standards can greatly facilitate the goal of higher-quality care and maximize the best use of healthcare funds.
Brown, Gary C.; Brown, Melissa M.; Brown, Heidi C.; Kindermann, Sylvia; Sharma, Sanjay
2007-01-01
Purpose To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). Methods A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Results Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. 
Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy with verteporfin confers a 7.8% to 10.7% value gain for the treatment of classic subfoveal choroidal neovascularization. Intravitreal ranibizumab therapy confers greater than a 15% value gain for the treatment of subfoveal occult and minimally classic subfoveal choroidal neovascularization. Conclusions The majority of cost-utility studies performed on interventions for neovascular macular degeneration are value-based medicine studies and thus are comparable. Value-based analyses of neovascular ARMD monotherapies demonstrate the power of value-based medicine to improve quality of care and concurrently maximize the efficacy of healthcare resource use in public policy. The comparability of value-based medicine cost-utility analyses has important implications for overall practice standards and public policy. The adoption of value-based medicine standards can greatly facilitate the goal of higher-quality care and maximize the best use of healthcare funds. PMID:18427606
Bartoli, Francesco; Crocamo, Cristina; Biagi, Enrico; Di Carlo, Francesco; Parma, Francesca; Madeddu, Fabio; Capuzzi, Enrico; Colmegna, Fabrizia; Clerici, Massimo; Carrà, Giuseppe
2016-08-01
There is a lack of studies testing the accuracy of fast screening methods for alcohol use disorder in mental health settings. We aimed at estimating the clinical utility of a standard single-item test for case finding and screening of DSM-5 alcohol use disorder among individuals suffering from anxiety and mood disorders. We recruited adults consecutively referred, over a 12-month period, to an outpatient clinic for anxiety and depressive disorders. We assessed the National Institute on Alcohol Abuse and Alcoholism (NIAAA) single-item test, using the Mini-International Neuropsychiatric Interview (MINI), plus an additional item of the Composite International Diagnostic Interview (CIDI) for craving, as the reference standard to diagnose a current DSM-5 alcohol use disorder. We estimated the sensitivity and specificity of the single-item test, as well as positive and negative Clinical Utility Indexes (CUIs). 242 subjects with anxiety and mood disorders were included. The NIAAA single-item test showed high sensitivity (91.9%) and specificity (91.2%) for DSM-5 alcohol use disorder. The positive CUI was 0.601, whereas the negative one was 0.898, with excellent values also when accounting for main individual characteristics (age, gender, diagnosis, psychological distress levels, smoking status). Testing the relevant indexes, we found an excellent clinical utility of the NIAAA single-item test for screening true negative cases. Our findings support routine use of reliable methods for rapid screening in similar mental health settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
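The Clinical Utility Indexes combine discrimination with predictive value: CUI+ = sensitivity × PPV and CUI− = specificity × NPV. A sketch from a 2×2 table; the counts below are a hypothetical reconstruction chosen to be consistent with the reported sensitivity and specificity, not the study's published table:

```python
def clinical_utility_indexes(tp, fp, fn, tn):
    """Positive and negative Clinical Utility Index:
    CUI+ = sensitivity * PPV, CUI- = specificity * NPV."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens * ppv, spec * npv

# Hypothetical 2x2 table (n = 242) consistent with 91.9% sensitivity
# and 91.2% specificity.
cui_pos, cui_neg = clinical_utility_indexes(tp=34, fp=18, fn=3, tn=187)
print(round(cui_pos, 3), round(cui_neg, 3))
```

Because PPV and NPV depend on prevalence, CUI+ and CUI− can diverge sharply, which is why the test scores much better for ruling out (screening) than for case finding here.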
Dewitt, Barry; Feeny, David; Fischhoff, Baruch; Cella, David; Hays, Ron D; Hess, Rachel; Pilkonis, Paul A; Revicki, Dennis A; Roberts, Mark S; Tsevat, Joel; Yu, Lan; Hanmer, Janel
2018-06-01
Health-related quality of life (HRQL) preference-based scores are used to assess the health of populations and patients and for cost-effectiveness analyses. The National Institutes of Health Patient-Reported Outcomes Measurement Information System (PROMIS®) consists of patient-reported outcome measures developed using item response theory. PROMIS is in need of a direct preference-based scoring system for assigning values to health states. To produce societal preference-based scores for 7 PROMIS domains: Cognitive Function-Abilities, Depression, Fatigue, Pain Interference, Physical Function, Sleep Disturbance, and Ability to Participate in Social Roles and Activities. Online survey of a US nationally representative sample (n = 983). Preferences for PROMIS health states were elicited with the standard gamble to obtain both single-attribute scoring functions for each of the 7 PROMIS domains and a multiplicative multiattribute utility (scoring) function. The 7 single-attribute scoring functions were fit using isotonic regression with linear interpolation. The multiplicative multiattribute summary function estimates utilities for PROMIS multiattribute health states on a scale where 0 is the utility of being dead and 1 the utility of "full health." The lowest possible score is -0.022 (for a state viewed as worse than dead), and the highest possible score is 1. The online survey systematically excludes some subgroups, such as the visually impaired and illiterate. A generic societal preference-based scoring system is now available for all studies using these 7 PROMIS health domains.
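A multiplicative multiattribute utility function of the kind described takes the Keeney-Raiffa form 1 + k·U = Π(1 + k·k_i·u_i), with the scaling constant k pinned down by requiring U = 1 at full health. The sketch below solves for k by bisection; the seven single-attribute weights are invented, not the PROMIS estimates:

```python
from math import prod

def multiplicative_utility(u, k_i, k):
    """Keeney-Raiffa multiplicative form: 1 + k*U = prod(1 + k*k_i*u_i)."""
    p = prod(1.0 + k * ki * ui for ui, ki in zip(u, k_i))
    return (p - 1.0) / k

def solve_k(k_i, lo=-1.0 + 1e-9, hi=-1e-9, iters=100):
    """Bisection for k in (-1, 0) (the case sum(k_i) > 1), defined by
    1 + k = prod(1 + k*k_i)."""
    f = lambda k: prod(1.0 + k * ki for ki in k_i) - (1.0 + k)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

k_i = [0.3, 0.4, 0.25, 0.35, 0.45, 0.2, 0.3]   # invented corner weights
k = solve_k(k_i)
full_health = multiplicative_utility([1.0] * 7, k_i, k)   # scales to 1.0
mixed = multiplicative_utility([1.0, 0.5, 0.8, 1.0, 0.6, 1.0, 0.9], k_i, k)
print(round(full_health, 6), 0.0 < mixed < 1.0)
```

A negative k (when the corner weights sum to more than 1) makes the attributes substitutes: deficits on several domains hurt less than the sum of their individual decrements.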
Braun, Sabine; Schindler, Christian; Leuzinger, Sebastian
2010-09-01
For a quantitative estimate of the ozone effect on vegetation reliable models for ozone uptake through the stomata are needed. Because of the analogy of ozone uptake and transpiration it is possible to utilize measurements of water loss such as sap flow for quantification of ozone uptake. This technique was applied in three beech (Fagus sylvatica) stands in Switzerland. A canopy conductance was calculated from sap flow velocity and normalized to values between 0 and 1. It represents mainly stomatal conductance as the boundary layer resistance in forests is usually small. Based on this relative conductance, stomatal functions to describe the dependence on light, temperature, vapour pressure deficit and soil moisture were derived using multivariate nonlinear regression. These functions were validated by comparison with conductance values directly estimated from sap flow. The results corroborate the current flux parameterization for beech used in the DO3SE model. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
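Stomatal functions of the kind fitted here are commonly combined multiplicatively (Jarvis/DO3SE style), with each environmental response scaled to [0, 1]. The functional forms and parameter values below are generic assumptions for illustration, not the fitted beech parameterization:

```python
import math

def f_light(ppfd, a=0.006):
    """Saturating light response (PPFD in umol m-2 s-1)."""
    return 1.0 - math.exp(-a * ppfd)

def f_temp(t, t_min=0.0, t_opt=21.0, t_max=35.0):
    """DO3SE-style temperature response, 1.0 at t_opt, 0 outside range."""
    if not t_min < t < t_max:
        return 0.0
    b = (t_max - t_opt) / (t_opt - t_min)
    return ((t - t_min) / (t_opt - t_min)) * ((t_max - t) / (t_max - t_opt)) ** b

def f_vpd(vpd_kpa, slope=-0.3, intercept=1.3):
    """Linear decline of conductance with vapour pressure deficit."""
    return min(1.0, max(0.0, intercept + slope * vpd_kpa))

def g_rel(ppfd, t, vpd):
    """Relative canopy conductance (0-1), soil moisture term omitted."""
    return f_light(ppfd) * f_temp(t) * f_vpd(vpd)

print(round(g_rel(1000, 21.0, 1.0), 3))
```

Multiplying the relative conductance by a maximum conductance and the ambient ozone concentration then yields the stomatal ozone flux that the sap-flow-derived parameterization is validated against.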
Experimental study of influence characteristics of flue gas fly ash on acid dew point
NASA Astrophysics Data System (ADS)
Song, Jinhui; Li, Jiahu; Wang, Shuai; Yuan, Hui; Ren, Zhongqiang
2017-12-01
The long-term operating experience of a large number of utility boilers shows that the measured acid dew point is generally lower than the estimated value. This is because the influence of CaO and MgO in the flue gas fly ash is not considered in the estimation formulas for the acid dew point. Building on previous studies, an experimental device for acid dew point measurement was designed and constructed, and the acid dew point was measured under different flue gas conditions. The results show that CaO and MgO in the flue gas fly ash have an obvious influence on the acid dew point: their content in the fly ash is negatively correlated with the acid dew point temperature. The acid dew point also varies with the H2SO4 concentration in the flue gas and is positively correlated with it.
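For reference, the estimation formulas that the measured values are compared against are correlations such as the Verhoff-Banchero type below. The coefficients are as commonly cited and should be treated as an assumption; note the correlation depends only on water vapour and H2SO4 partial pressures, i.e. it ignores any fly-ash CaO/MgO effect, which is exactly the gap the experiment addresses:

```python
import math

def acid_dew_point_K(p_h2o_mmhg, p_h2so4_mmhg):
    """Verhoff-Banchero-type sulfuric acid dew point correlation
    (T in kelvin, partial pressures in mmHg). Coefficients assumed."""
    a = math.log(p_h2o_mmhg)
    b = math.log(p_h2so4_mmhg)
    inv_t = (2.276 - 0.0294 * a - 0.0858 * b + 0.0062 * a * b) / 1000.0
    return 1.0 / inv_t

# ~10 vol% water vapour and ~10 ppm H2SO4 at atmospheric pressure
t = acid_dew_point_K(76.0, 7.6e-3)
print(round(t - 273.15, 1))  # dew point in deg C
```

The positive correlation with H2SO4 concentration reported above is built into the formula (b enters with a negative coefficient on 1/T), whereas the fly-ash alkalinity effect would require an additional correction term.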
Varela Mallou, J; Rial Boubeta, A; Braña Tobío, T
2001-05-01
Brand is a product attribute that, for many types of goods or services, makes a major contribution to consumer preferences. Conjoint analysis is a useful technique for the assessment of brand values for a given consumer or group of consumers. In this paper, an application of conjoint analysis to the estimation of brand values in the Spanish daily newspaper market is reported. Four newspaper attributes were considered: brand (i.e., newspaper name), price (0.60, 1.05, or 1.50 euros), Sunday supplement (yes/no), and daily pullout (yes/no). A total of 510 regular readers of the national press, stratified by age and sex, were asked to rank 16 profiles representing an orthogonal fraction of the possible attribute-level combinations. Brand was by far the most important attribute, whereas price had a negligible effect. More generally, the results confirm the utility of conjoint analysis for assessing brand equity in the newspaper market and for estimating the relative importance of the various attributes to different subgroups of consumers.
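The part-worth estimation underlying such a conjoint study can be sketched with ordinary least squares on dummy-coded profiles. Everything below (the brand part-worths, the noiseless ratings, and a full-factorial design in place of the study's 16-profile orthogonal fraction) is hypothetical:

```python
import numpy as np
from itertools import product

brands   = [0, 1, 2, 3]          # four newspaper brands
prices   = [0.60, 1.05, 1.50]    # euros
binaries = [0, 1]                # supplement / pullout: no=0, yes=1

# assumed "true" part-worths used to simulate one respondent's scores
true_pw = {"brand": [0.0, 1.5, 0.4, 0.9], "price": -0.2, "supp": 0.3, "pull": 0.2}

profiles, y = [], []
for b, p, s, d in product(brands, prices, binaries, binaries):
    profiles.append((b, p, s, d))
    y.append(true_pw["brand"][b] + true_pw["price"] * p
             + true_pw["supp"] * s + true_pw["pull"] * d)

# design matrix: brand dummies (reference = brand 0), price, supplement, pullout
X = np.array([[b == 1, b == 2, b == 3, p, s, d] for b, p, s, d in profiles],
             dtype=float)
X = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
print(np.round(beta[1:4], 2))   # recovered brand part-worths
```

With noiseless ratings the regression recovers the part-worths exactly; with real rank data, the same design matrix is used but the fit is approximate and importances are computed from the range of each attribute's part-worths.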
Ghotbi, Nader; Iwanaga, Masako; Ohtsuru, Akira; Ogawa, Yoji; Yamashita, Shunichi
2007-01-01
The use of Positron Emission Tomography (PET) or PET/CT for voluntary cancer screening of asymptomatic individuals is becoming common in Japan, though the utility of such screening is still controversial. This study estimated the general test validity and effective radiation dose for PET/CT cancer screening of healthy Japanese people by evaluating four standard indices (sensitivity, specificity, and positive/negative predictive values), with the predictive values incorporating prevalence, using published literature and simulation-based Japanese data. CT and FDG-related dosage data were gathered from the literature and then extrapolated to the scan parameters at a model PET center. We estimated that the positive predictive value was only 3.3% in the use of PET/CT for voluntary cancer screening of asymptomatic Japanese individuals aged 50-59 years old, whose average cancer prevalence was 0.5%. The total effective radiation dose of a single whole-body PET/CT scan was estimated to be 6.34 to 9.48 mSv for the average Japanese individual at 60 kg body weight. With PET/CT cancer screening in Japan, many healthy volunteers screened as false positive are exposed to at least 6.34 mSv without gaining any real benefit. More evaluation concerning the justification of applying PET/CT to healthy people is necessary.
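The low positive predictive value at 0.5% prevalence follows directly from Bayes' rule; the sensitivity and specificity figures below are assumed for illustration only, while the prevalence is the one given in the abstract:

```python
# How low prevalence depresses the positive predictive value (PPV).
def ppv(sens, spec, prev):
    tp = sens * prev                    # P(test+, disease+)
    fp = (1.0 - spec) * (1.0 - prev)    # P(test+, disease-)
    return tp / (tp + fp)

# e.g. an assumed test with 90% sensitivity and 87% specificity
print(round(ppv(0.90, 0.87, 0.005), 3))   # same order as the 3.3% reported
```

Even an apparently accurate test yields a PPV of only a few percent when the condition is rare, which is the core of the abstract's argument about false positives.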
Developing a clinical utility framework to evaluate prediction models in radiogenomics
NASA Astrophysics Data System (ADS)
Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.
2015-03-01
Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms, SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After assigning utility values to each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on the ROC curves to achieve the maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved the sensitivity of the Gail model (0.276 vs. 0.147) and reduced its specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457, with specificity at 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel framework to evaluate prediction models in the realm of radiogenomics.
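The maximum-expected-utility step can be sketched as a search along the ROC curve. The utility values, prevalence, and the toy ROC curve below are hypothetical, not the ones assigned in the study:

```python
import numpy as np

# Assumed utilities for each finding category and an assumed prevalence.
U_TP, U_FP, U_TN, U_FN = 0.8, -0.1, 0.0, -1.0
prevalence = 0.01

fpr = np.linspace(0.0, 1.0, 101)
tpr = fpr ** 0.3                       # a concave toy ROC curve

# expected utility at each candidate operating point
eu = (prevalence * (tpr * U_TP + (1 - tpr) * U_FN)
      + (1 - prevalence) * ((1 - fpr) * U_TN + fpr * U_FP))
best = int(np.argmax(eu))
print(f"operating point: FPR={fpr[best]:.2f}, TPR={tpr[best]:.2f}, MEU={eu[best]:.4f}")
```

Because the disease is rare and false positives are penalized, the utility-maximizing operating point sits far to the left of the ROC curve, which is exactly the trade-off the study's framework makes explicit.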
A test of ecological optimality for semiarid vegetation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Salvucci, Guido D.; Eagleson, Peter S.; Turner, Edmund K.
1992-01-01
Three ecological optimality hypotheses which have utility in parameter reduction and estimation in a climate-soil-vegetation water balance model are reviewed and tested. The first hypothesis involves short-term optimization of vegetative canopy density through equilibrium soil moisture maximization. The second involves vegetation type selection, again through soil moisture maximization, and the third involves soil genesis through plant-induced modification of soil hydraulic properties to values which result in a maximum rate of biomass productivity.
Functional design specification: NASA form 1510
NASA Technical Reports Server (NTRS)
1979-01-01
The 1510 worksheet used to calculate approved facility project cost estimates is explained. Topics covered include data base considerations, program structure, relationship of the 1510 form to the 1509 form, and functions which the application must perform: WHATIF, TENENTER, TENTYPE, and data base utilities. A sample NASA form 1510 printout and a 1510 data dictionary are presented in the appendices along with the cost adjustment table, the floppy disk index, and methods for generating the calculated values (TENCALC) and for calculating cost adjustment (CONSTADJ). Storage requirements are given.
NASA Astrophysics Data System (ADS)
Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena
2016-04-01
Presently, physical-mathematical models such as SVAT (Soil-Vegetation-Atmosphere-Transfer), developed with varying degrees of detail, are among the most effective tools for evaluating the water and heat regimes of vegetated territories. The SVAT model produced here is designed to calculate soil water content, evapotranspiration (evaporation from bare soil plus transpiration), infiltration of water into the soil, vertical latent and sensible heat fluxes and other water and heat regime characteristics, as well as vegetation and soil surface temperatures and the temperature and soil moisture distributions with depth. The model is adapted to satellite-derived estimates of precipitation, land surface temperature and vegetation cover characteristics. The case study has been carried out for a territory located in the forest-steppe zone, part of the agricultural Central Black Earth Region of Russia, with coordinates 49° 30'-54° N and 31°-43° E and an area of 227,300 km2, for the vegetation seasons of 2011-2014. Soil and vegetation characteristics are used as model parameters, and meteorological characteristics are treated as input variables. These values have been obtained from ground-based observations and from satellite-based measurements by the radiometers AVHRR/NOAA, MODIS/EOS Terra and Aqua, and SEVIRI/MSG-2,-3 (Meteosat-9, -10). To support the retrieval of meteorological and vegetation cover characteristics, new and pre-existing methods and technologies for thematic processing of data from these radiometers have been developed or refined. From AVHRR data, estimates have been derived of precipitation P, effective land surface temperature (LST) Ts.eff, emissivity E, surface-air temperature at vegetation cover level Ta, normalized difference vegetation index NDVI, leaf area index LAI and vegetation cover fraction B.
The remote sensing products LST Tls, E, NDVI and LAI derived from MODIS data and covering the study area have been downloaded from the LP DAAC web site for the same vegetation seasons. The SEVIRI data have been used to retrieve P (every three hours and daily), Tls, E, Ta (at daytime and nighttime), LAI, and B (daily). All named technologies have been adapted to the territory of interest. To verify the accuracy of the AVHRR- and MODIS-based LST estimates (Ts.eff, Ta and Tls), the error statistics of their derivation have been investigated for various samples by comparison with in-situ measurements during all the considered vegetation seasons. When developing the method to derive LST from the SEVIRI data, its validation has been carried out through comparison of the Tls retrievals with independent collocated Tls estimates generated at LSA SAF (Lisbon, Portugal). A later check of the SEVIRI-derived Tls and Ta estimates has been performed by comparing them with ground-based observation data. The correctness of the LAI and B estimates has been confirmed by comparing the time behavior of satellite- and ground-based LAI and B during each vegetation season. An all-important part of the study is to improve the Multi Threshold Method (MTM) developed for assessing daily and monthly rainfall from AVHRR and SEVIRI data, to check the correctness of the calculations carried out for the considered territory, and to develop procedures for utilizing the obtained satellite-derived precipitation estimates in the SVAT model. The MTM allows automatic pixel-by-pixel classification of AVHRR- and SEVIRI-measured data for cloud detection, identification of cloud types, allocation of precipitation zones, and determination of instantaneous maximum precipitation intensities at the pixel scale, around the clock and throughout the year, independently of land surface type. Measurement data from 5 AVHRR and 11 SEVIRI channels, as well as their differences, are used in the MTM as predictors.
Calibration and verification of the MTM have been carried out using observational data on daily precipitation at agricultural meteorological stations of the region. Within this approach, the transition from rainfall intensity estimation to the calculation of daily rainfall sums has been accomplished, with two variants of this calculation realized, focusing on climate research and on operational monitoring. This transition has required verifying the accuracy of the estimates obtained in both variants at each time step. The verification has included comparison of the areal distributions of satellite-derived precipitation estimates with analogous estimates obtained by interpolation of ground-based observation data. The probability of correct precipitation zone detection from satellite data, when compared with ground-based meteorological observations, has amounted to 75-85%. In both variants of calculating precipitation for the region of interest, fields of monthly and annual rainfall sums have been built in addition to the fields of daily rainfall. All three sums are consistent with each other and with ground-based observation data, although the satellite-derived estimates are smoother than the ground-based ones. Their discrepancies are within the range of the MTM rainfall estimation errors and occur mainly at local maxima, where satellite-derived rainfall is less than ground-measured values. This may be due to the different scales of space-averaged satellite and point-wise ground-based estimates. To utilize satellite-derived estimates of meteorological and vegetation characteristics in the SVAT model, procedures for replacing the ground-based values of precipitation, LST, LAI and B by the corresponding satellite-derived values have been developed, taking into account the spatial heterogeneity of their fields.
The correctness of such replacement has been confirmed by comparing the values of soil water content W and evapotranspiration Ev modeled and measured at agricultural meteorological stations. In particular, when the difference between the precipitation sums for the vegetation season resulting from the model calculation in the two variants above was 20%, the discrepancy between the corresponding modeled values of W for the same period did not exceed 8%, and the discrepancy between the values of Ev was within 15%. Such discrepancies are within the limits of the standard W and Ev estimation errors. The final results of the SVAT model calculation utilizing satellite data are the fields of soil water content W, evapotranspiration Ev, vertical water and heat fluxes, land surface temperatures and other water and heat regime characteristics, distributed over the territory of interest and tracked in their dynamics for the 2011-2014 vegetation seasons. Discrepancies between the Ev and W calculation results and observational data (~20-25% and 10-15%, respectively) have not exceeded the standard errors of their estimation, which corresponds to the adopted accuracy criteria for such estimates.
The impact of persistent visually disabling vitreous floaters on health status utility values.
Zou, Haidong; Liu, Haiyun; Xu, Xun; Zhang, Xi
2013-08-01
To assess the time trade-off (TTO) utility values in patients with persistent visually disabling vitreous floaters (DVF) and to determine the reliability and validity of TTO methods in DVF patients. Prospective cross-sectional questionnaire survey: eligible patients with persistent DVF referred to the Shanghai First People's Hospital outpatient service between January 2006 and February 2010, and randomly selected general population residents with normal vision, were enrolled. All participants underwent TTO utility value evaluation. After 4-5 weeks, the patients were asked to undergo a second TTO utility value evaluation during a follow-up interview. The mean initial utility value of the 107 persistent DVF patients was 0.904 ± 0.054. Regression analyses revealed that length of education, visual acuity in the poorer-vision eye and employment status were associated with utility values (all P < 0.01). All patients took part in the follow-up interview; the intra-class correlation coefficient for TTO utility values at the initial and follow-up interviews was 0.855. In the 91 general population residents, the mean utility value was 0.923 ± 0.032, statistically higher than that of the study patients (t = 3.01, P < 0.01). Persistent DVF can substantially diminish patients' perceived quality of life, and this effect can be measured by TTO utility values with high reliability and construct validity.
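The time trade-off arithmetic behind such utility values is simple: if a respondent is indifferent between living t years in the health state and x years in full health, the utility of the state is u = x/t. The numbers below are illustrative, not the study's data:

```python
# Time trade-off (TTO) utility: equivalent healthy years / years in state.
def tto_utility(years_in_state, equivalent_healthy_years):
    return equivalent_healthy_years / years_in_state

# e.g. trading 1 year of a 10-year horizon to be rid of persistent floaters
print(tto_utility(10, 9))   # 0.9, close to the mean 0.904 reported
```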
Valuing recreational fishing quality at rivers and streams
NASA Astrophysics Data System (ADS)
Melstrom, Richard T.; Lupi, Frank; Esselman, Peter C.; Stevenson, R. Jan
2015-01-01
This paper describes an economic model that links the demand for recreational stream fishing to fish biomass. Useful measures of fishing quality are often difficult to obtain. In the past, economists have linked the demand for fishing sites to species presence-absence indicators or average self-reported catch rates. The demand model presented here takes advantage of a unique data set of statewide biomass estimates for several popular game fish species in Michigan, including trout, bass and walleye. These data are combined with fishing trip information from a 2008-2010 survey of Michigan anglers in order to estimate a demand model. Fishing sites are defined by hydrologic unit boundaries and information on fish assemblages so that each site corresponds to the area of a small subwatershed, about 100-200 square miles in size. The random utility model choice set includes nearly all fishable streams in the state. The results indicate a significant relationship between the site choice behavior of anglers and the biomass of certain species. Anglers are more likely to visit streams in watersheds high in fish abundance, particularly for brook trout and walleye. The paper includes estimates of the economic value of several quality change and site loss scenarios.
NASA Astrophysics Data System (ADS)
Xu, Liangfei; Hu, Junming; Cheng, Siliang; Fang, Chuan; Li, Jianqiu; Ouyang, Minggao; Lehnert, Werner
2017-07-01
A scheme for designing a second-order sliding-mode (SOSM) observer that estimates critical internal states on the cathode side of a polymer electrolyte membrane (PEM) fuel cell system is presented. A nonlinear, isothermal dynamic model for the cathode side and the membrane electrolyte assembly is first described. A nonlinear observer topology based on an SOSM algorithm is then introduced, and the equations for the SOSM observer are deduced. Online calculation of the inverse matrix produces numerical errors, so a modified matrix is introduced to eliminate their negative effects on the observer. The simulation results indicate that the SOSM observer performs well for the gas partial pressures and air stoichiometry. The estimation results follow the simulated values of the model with relative errors within ±2% in steady state. Larger errors occur during fast dynamic processes (<1 s). Moreover, the nonlinear observer shows good robustness against variations in the initial values of the internal states, but less robustness against variations in system parameters. The partial pressures are more sensitive than the air stoichiometry to system parameters. Finally, the order of the effects of parameter uncertainties on the estimation results is outlined and analyzed.
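As a minimal illustration of the SOSM machinery, a generic super-twisting (Levant-type) differentiator is sketched below; the paper's cathode-side observer is far more elaborate. The gains follow the usual textbook rule for a second derivative bounded by L, and the test signal is arbitrary:

```python
import math

# Super-twisting differentiator: estimates f(t) and its derivative from
# measurements of f alone, using discontinuous injection terms.
L_bound = 1.0                                  # assumed bound on |f''|
lam0 = 1.5 * math.sqrt(L_bound)
lam1 = 1.1 * L_bound
dt, z0, z1 = 1e-3, 0.0, 0.0                    # step, state and derivative estimates

for k in range(20000):                         # 20 s of simulated time
    t = k * dt
    f = math.sin(t)                            # "measured" signal
    e = z0 - f
    v0 = -lam0 * math.sqrt(abs(e)) * math.copysign(1.0, e) + z1
    z0 += dt * v0                              # forward-Euler integration
    z1 += dt * (-lam1 * math.copysign(1.0, e))

print(round(z1, 2), "vs true derivative", round(math.cos(20.0), 2))
```

After the finite-time transient the derivative estimate z1 tracks cos(t) despite never being measured, which is the same mechanism the fuel-cell observer uses to reconstruct unmeasured cathode states.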
Multivariate Meta-Analysis of Preference-Based Quality of Life Values in Coronary Heart Disease.
Stevanović, Jelena; Pechlivanoglou, Petros; Kampinga, Marthe A; Krabbe, Paul F M; Postma, Maarten J
2016-01-01
There are numerous health-related quality of life (HRQoL) measurements used in coronary heart disease (CHD) in the literature. However, only values assessed with preference-based instruments can be directly applied in a cost-utility analysis (CUA). To summarize and synthesize instrument-specific preference-based values in CHD and the underlying disease subgroups, stable angina and post-acute coronary syndrome (post-ACS), for developed countries, while accounting for study-level characteristics and within- and between-study correlation. A systematic review was conducted to identify studies reporting preference-based values in CHD. A multivariate meta-analysis was applied to synthesize the HRQoL values. Meta-regression analyses examined the effect of the study-level covariates age, publication year, prevalence of diabetes, and gender. A total of 40 studies providing preference-based values were identified. Synthesized estimates of HRQoL in post-ACS ranged from 0.64 (Quality of Well-Being) to 0.92 (EuroQol European "tariff"), while in stable angina they ranged from 0.64 (Short Form 6D) to 0.89 (Standard Gamble). Similar findings were observed in estimates applying to general CHD. No significant improvement in model fit was found after adjusting for study-level covariates. Large between-study heterogeneity was observed in all the models investigated. The main finding of our study is the presence of large heterogeneity both within and between instrument-specific HRQoL values. Current economic models in CHD ignore this between-study heterogeneity. Multivariate meta-analysis can quantify this heterogeneity and offers the means for uncertainty around HRQoL values to be translated into uncertainty in CUAs.
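The random-effects pooling and heterogeneity quantification behind such a synthesis can be sketched with the univariate DerSimonian-Laird estimator (the multivariate analysis in the paper generalizes this to several outcomes jointly). The study-level utilities and standard errors below are invented:

```python
import numpy as np

y  = np.array([0.64, 0.72, 0.81, 0.89, 0.70])   # hypothetical study-level HRQoL means
se = np.array([0.02, 0.03, 0.02, 0.04, 0.03])   # hypothetical standard errors

w = 1.0 / se**2                                  # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2) # Cochran's Q
df = len(y) - 1
i2 = max(0.0, (q - df) / q)                      # I^2 heterogeneity statistic
tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w**2) / w.sum()))  # DL tau^2

w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
pooled = np.sum(w_re * y) / w_re.sum()
half_ci = 1.96 / np.sqrt(w_re.sum())
print(f"pooled={pooled:.3f} (95% CI +/-{half_ci:.3f}), I^2={100*i2:.1f}%")
```

A large I^2 here plays the same role as the substantial between-study heterogeneity the authors report: it widens the random-effects interval and signals that a single pooled value hides real variation.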
NASA Astrophysics Data System (ADS)
Curioni, Giulio; Chapman, David N.; Metje, Nicole
2017-06-01
The electromagnetic (EM) soil properties are dynamic variables that can change considerably over time, and they fundamentally affect the performance of Ground Penetrating Radar (GPR). However, long-term field studies are remarkably rare, and records of the EM soil properties and their seasonal variation are largely absent from the literature. This research explores the extent of the seasonal variation of the apparent permittivity (Ka) and bulk electrical conductivity (BEC) measured by Time Domain Reflectometry (TDR) and their impact on GPR results, with a particularly important application to utility detection. A bespoke TDR field monitoring station was developed and installed in an anthropogenic sandy soil in the UK for 22 months. The relationship between the temporal variation of the EM soil properties and GPR performance has been qualitatively assessed, highlighting notable degradation of the GPR images during wet periods and for a few days after significant rainfall events following dry periods. Significantly, it was shown that assuming arbitrary average values (i.e. not extreme values) of Ka and BEC, which often do not reflect the actual conditions of the soil, can lead to significant inaccuracies in the estimation of the depth of buried targets, with errors potentially up to approximately 30% even over a depth of 0.50 m (where GPR is expected to be most accurate). It is therefore recommended to measure or assess the soil conditions during GPR surveys; if this is not possible, typical wet and dry Ka values reported in the literature for the soil expected at the site should be used to improve confidence in estimations of target depths.
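The ~30% depth error is consistent with simple propagation of a wrong permittivity through the GPR depth equation d = c·t / (2·√Ka): for a fixed two-way travel time, the relative error is √(Ka_true/Ka_assumed) − 1. The Ka values below are illustrative, not the site's measurements:

```python
import math

c = 0.2998  # propagation speed in free space, m/ns

def depth(t_ns, ka):
    """Target depth from two-way travel time and apparent permittivity."""
    return c * t_ns / (2.0 * math.sqrt(ka))

ka_assumed, ka_true = 9.0, 16.0   # e.g. an assumed average vs a wetter soil
t = 10.0                          # ns two-way travel time (~0.5 m at Ka = 9)
err = depth(t, ka_assumed) / depth(t, ka_true) - 1.0
print(f"depth overestimated by {100*err:.0f}%")
```

With these numbers the depth is overestimated by a third, the same order as the ~30% error the study reports for seasonally varying soils.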
Zhao, Fei-Li; Yue, Ming; Yang, Hua; Wang, Tian; Wu, Jiu-Hong; Li, Shu-Chuen
2011-03-01
To estimate the willingness to pay (WTP) per quality-adjusted life year (QALY) ratio with stated preference data and compare the results obtained between chronic prostatitis (CP) patients and the general population (GP). WTP per QALY was calculated from the subjects' own health-related utility and WTP values. Two widely used preference-based health-related quality of life instruments, EuroQol (EQ-5D) and Short Form 6D (SF-6D), were used to elicit utility for participants' own health. The monthly WTP values for moving from participants' current health to perfect health were elicited using a closed-ended iterative bidding contingent valuation method. A total of 268 CP patients and 364 participants from the GP completed the questionnaire. We obtained 4 WTP/QALY ratios ranging from $4700 to $7400, which is close to the lower bound of local gross domestic product per capita, a threshold proposed by the World Health Organization. Nevertheless, these values were lower than other proposed thresholds and than published empirical research on diseases with mortality risk. Furthermore, the WTP/QALY ratios from the GP were significantly lower than those from the CP patients, and different determinants were associated with the within-group variation identified by multiple linear regression. Preference elicitation methods are acceptable and feasible in the socio-cultural context of an Asian environment, and the calculation of the WTP/QALY ratio produced meaningful answers. The necessity of considering the QALY type or disease-specific QALY in estimating the WTP/QALY ratio was highlighted, and 1 to 3 times gross domestic product per capita, as recommended by the World Health Organization, could potentially serve as a benchmark threshold in this Asian context.
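A simplified version of the WTP-per-QALY arithmetic is sketched below (annual WTP to move to full health, divided by the annual QALY gain 1 − current utility). The figures are hypothetical, and the study's own calculation may differ in detail:

```python
# WTP per QALY = annual WTP / annual QALY gain from reaching full health.
def wtp_per_qaly(monthly_wtp, utility):
    annual_wtp = 12.0 * monthly_wtp
    qaly_gain = 1.0 - utility      # QALYs gained per year in full health
    return annual_wtp / qaly_gain

print(round(wtp_per_qaly(monthly_wtp=50.0, utility=0.90)))  # 6000, inside the $4700-7400 range
```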
Vision-Based Position Estimation Utilizing an Extended Kalman Filter
2016-12-01
Master's thesis by Joseph B. Testa III, December 2016 (thesis advisor: Vladimir Dobrokhodov), on vision-based position estimation utilizing an extended Kalman filter, with application to "…spots" and network relay between the boarding team and ship. Subject terms: UAV, ROS, extended Kalman filter, Matlab.
Endarti, Dwi; Riewpaiboon, Arthorn; Thavorncharoensap, Montarat; Praditsitthikorn, Naiyana; Hutubessy, Raymond; Kristina, Susi Ari
2018-05-01
To gain insight into the most suitable foreign value set among the Malaysian, Singaporean, Thai, and UK value sets for calculating the EuroQol five-dimensional questionnaire index score (utility) among patients with cervical cancer in Indonesia. Data from 87 patients with cervical cancer recruited from a referral hospital in Yogyakarta province, Indonesia, from an earlier study of health-related quality of life were used in this study. The differences among the utility scores derived from the four value sets were determined using the Friedman test. The performance of the psychometric properties of the four value sets versus the visual analogue scale (VAS) was assessed. Intraclass correlation coefficients and Bland-Altman plots were used to test the agreement among the utility scores. Spearman ρ correlation coefficients were used to assess convergent validity between utility scores and patients' sociodemographic and clinical characteristics. With respect to known-group validity, the Kruskal-Wallis test was used to examine the differences in utility according to the stages of cancer. There was a significant difference among the utility scores derived from the four value sets, among which the Malaysian value set yielded higher utility than the other three. Utility obtained from the Malaysian value set agreed more closely with the VAS than the other value sets did (intraclass correlation coefficient and Bland-Altman plot results). As for validity, the four value sets showed equivalent psychometric properties in the convergent and known-group validity tests. In the absence of an Indonesian value set, the Malaysian value set is preferable to the other value sets. Further studies on the development of an Indonesian value set need to be conducted. Copyright © 2018. Published by Elsevier Inc.
The DEP-6D, a new preference-based measure to assess health states of dependency.
Rodríguez-Míguez, E; Abellán-Perpiñán, J M; Alvarez, X C; González, X M; Sampayo, A R
2016-03-01
In the medical literature there are numerous multidimensional scales to measure health states of dependence in activities of daily living. However, these scales are not preference-based and are not able to yield QALYs. Conversely, the generic preference-based measures are not sensitive enough to measure changes in dependence states. The objective of this paper is to propose a new dependency health state classification system, called the DEP-6D, and to estimate its value set in such a way that it can be used in QALY calculations. DEP-6D states are described as a combination of 6 attributes (eating, incontinence, personal care, mobility, housework and cognition problems), with 3-4 levels each. A sample of 312 Spanish citizens was surveyed in 2011 to estimate the DEP-6D preference-scoring algorithm. Each respondent valued six out of the 24 states using time trade-off questions. After excluding those respondents who made two or more inconsistencies (6% of the sample), each state was valued between 66 and 77 times. The responses present high internal and external consistency. A random-effects model accounting for main effects was the preferred model for estimating the scoring algorithm. The DEP-6D describes, in general, more severe problems than those usually described by generic preference-based measures. The minimum score predicted by the DEP-6D algorithm is -0.84, which is considerably lower than the minimum values predicted by the EQ-5D and SF-6D algorithms. The DEP-6D value set is based on community preferences and is therefore consistent with the so-called 'societal perspective'. Moreover, DEP-6D preference weights can be used in QALY calculations and cost-utility analysis. Copyright © 2016. Published by Elsevier Ltd.
Vittorazzi, C; Amaral Junior, A T; Guimarães, A G; Viana, A P; Silva, F H L; Pena, G F; Daher, R F; Gerhardt, I F S; Oliveira, G H F; Pereira, M G
2017-09-27
Selection indices commonly utilize economic weights, which make the resulting genetic gains arbitrary. In popcorn, this is even more evident due to the negative correlation between the main characteristics of economic importance: grain yield and popping expansion. As an alternative to classical biometric selection indices, the optimal procedure restricted maximum likelihood/best linear unbiased predictor (REML/BLUP) allows the simultaneous estimation of genetic parameters and the prediction of genotypic values. Based on the mixed model methodology, the objective of this study was to investigate the comparative efficiency of eight selection indices estimated by REML/BLUP for the effective selection of superior popcorn families in the eighth intrapopulation recurrent selection cycle. We also investigated the efficiency of including the variable "expanded popcorn volume per hectare" in the selection of superior progenies. In total, 200 full-sib families were evaluated in two different areas in the North and Northwest regions of the State of Rio de Janeiro, Brazil. The REML/BLUP procedure resulted in higher estimated gains than those obtained with classical biometric selection index methodologies and should be incorporated into the selection of progenies. The following indices resulted in higher gains in the characteristics of greatest economic importance: the classical selection index with values attributed by trial, via REML/BLUP, and the greatest genotypic values for expanded popcorn volume per hectare, via REML. The expanded popcorn volume per hectare characteristic enabled satisfactory gains in grain yield and popping expansion and should be considered a super-trait in popcorn breeding programs.
Prognostic Value of Pulmonary Vascular Resistance by Magnetic Resonance in Systolic Heart Failure
Fabregat-Andrés, Óscar; Estornell-Erill, Jordi; Ridocci-Soriano, Francisco; Pérez-Boscá, José Leandro; García-González, Pilar; Payá-Serrano, Rafael; Morell, Salvador; Cortijo, Julio
2016-01-01
Background Pulmonary hypertension is associated with poor prognosis in heart failure. However, non-invasive diagnosis is still challenging in clinical practice. Objective We sought to assess the prognostic utility of non-invasive estimation of pulmonary vascular resistance (PVR) by cardiovascular magnetic resonance to predict adverse cardiovascular outcomes in heart failure with reduced ejection fraction (HFrEF). Methods Prospective registry of patients with left ventricular ejection fraction (LVEF) < 40% recently admitted for decompensated heart failure during three years. PVR was calculated based on right ventricular ejection fraction and average velocity of the pulmonary artery estimated during cardiac magnetic resonance. Readmission for heart failure and all-cause mortality were considered as adverse events at follow-up. Results 105 patients (average LVEF 26.0 ± 7.7%, ischemic etiology 43%) were included. Patients with adverse events at long-term follow-up had higher values of PVR (6.93 ± 1.9 vs. 4.6 ± 1.7 estimated Wood units (eWu), p < 0.001). In multivariate Cox regression analysis, PVR ≥ 5 eWu (cutoff value according to the ROC curve) was independently associated with increased risk of adverse events at 9-month follow-up (HR 2.98; 95% CI 1.12-7.88; p < 0.03). Conclusions In patients with HFrEF, the presence of PVR ≥ 5.0 eWu is associated with significantly worse clinical outcome at follow-up. Non-invasive estimation of PVR by cardiac magnetic resonance might be useful for risk stratification in HFrEF, irrespective of etiology, presence of late gadolinium enhancement or LVEF. PMID:26840055
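The abstract does not give the regression used to estimate PVR from RVEF and pulmonary artery average velocity. The sketch below uses the published CMR-based model of García-Álvarez et al. as an assumed stand-in, purely to illustrate how such an estimate feeds the ≥ 5 eWu risk cutoff:

```python
import math

# Assumed CMR regression (Garcia-Alvarez et al.): PVR in Wood units from
# pulmonary artery average velocity (cm/s) and RV ejection fraction (%).
def pvr_ewu(pa_avg_velocity_cms, rvef_pct):
    return 19.38 - 4.62 * math.log(pa_avg_velocity_cms) - 0.08 * rvef_pct

pvr = pvr_ewu(pa_avg_velocity_cms=15.0, rvef_pct=30.0)
print(round(pvr, 1), "eWu ->", "high risk" if pvr >= 5.0 else "lower risk")
```

Lower pulmonary artery velocities and lower RVEF both push the estimate up, which is why the non-invasive value tracks the hemodynamic burden the study associates with worse outcomes.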
NASA Astrophysics Data System (ADS)
Bansal, Sangeeta; Katyal, Deeksha; Saluja, Ridhi; Chakraborty, Monojit; Garg, J. K.
2018-02-01
Temperature and area fluctuations in wetlands greatly influence their various physico-chemical characteristics, nutrient dynamics, rates of biomass generation and decomposition, and floral and faunal composition, which in turn influence methane (CH4) emission rates. In view of this, the present study attempts to up-scale point CH4 flux from the wetlands of Uttar Pradesh (UP) by modifying a two-factor empirical process-based CH4 emission model for tropical wetlands, incorporating MODIS-derived wetland components, viz. wetland areal extent and corresponding temperature factors (Ft). This study further focuses on the utility of the remotely sensed temperature response of CH4 emission in terms of Ft. Ft is generated using MODIS land surface temperature products and provides an important semi-empirical input for up-scaling CH4 emissions in wetlands. Results reveal that annual mean Ft values for UP wetlands vary from 0.69 (2010-2011) to 0.71 (2011-2012). The total estimated area-wise CH4 emissions from the wetlands of UP vary from 66.47 Gg yr-1, with wetland areal extent and Ft values of 2564.04 km2 and 0.69 respectively in 2010-2011, to 88.39 Gg yr-1, with wetland areal extent and Ft values of 2720.16 km2 and 0.71 respectively in 2011-2012. Temporal analysis of estimated CH4 emissions showed that in the monsoon season estimated CH4 emissions are more sensitive to wetland areal extent, while in the summer season the sensitivity of estimated CH4 emissions is chiefly controlled by augmented methanogenic activities at high wetland surface temperatures.
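The up-scaling arithmetic implied by the reported totals (emission ≈ base flux × temperature factor × wetland area) can be checked numerically. The base flux below is back-calculated from the 2010-2011 figures purely for illustration and is not a quantity reported in the abstract:

```python
# Up-scaling a point CH4 flux to an area-wise total.
base_flux = 37.6        # g CH4 m^-2 yr^-1 at Ft = 1 (assumed, back-calculated)
ft = 0.69               # MODIS-derived temperature factor, 2010-2011
area_km2 = 2564.04      # wetland areal extent, 2010-2011

emission_g = base_flux * ft * area_km2 * 1e6   # km^2 -> m^2
print(round(emission_g / 1e9, 1), "Gg CH4 yr^-1")  # cf. 66.47 Gg yr^-1 reported
```

The same arithmetic with the 2011-2012 area and Ft shows why both a larger inundated area and a warmer season push the total upward.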
The costs of coping with poor water supply in rural Kenya
NASA Astrophysics Data System (ADS)
Cook, Joseph; Kimuyu, Peter; Whittington, Dale
2016-02-01
As the disease burden of poor access to water and sanitation declines around the world, the nonhealth benefits, mainly the time burden of water collection, will likely grow in importance in sector funding decisions and investment analyses. We measure the coping costs incurred by households in one area of rural Kenya. Sixty percent of the 387 households interviewed were collecting water outside the home, and household members were spending an average of 2-3 hours per day doing so. We value these time costs using an individual-level value of travel time estimate based on a stated preference experiment, and compare the results to estimates obtained assuming that the value of time saved is a fraction of unskilled wage rates. Coping cost estimates also include capital costs for storage and rainwater collection, money paid either to water vendors or at sources that charge volumetrically, costs of treating diarrhea cases, and expenditures on drinking water treatment (primarily boiling in our site). Median total coping costs are approximately US$20 per month, higher than average household water bills in many utilities in the United States, or 12% of reported monthly cash income. We estimate that coping costs are greater than 10% of income for over half of households in our sample. They are higher among larger and wealthier households, and households whose primary source is not at home. Even households with unprotected private wells or connections to an intermittent piped network spend money on water storage containers and on treating water they recognize as unsafe.
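The coping-cost accounting described above is a sum of component costs. The breakdown and numbers in this sketch are illustrative assumptions, not figures from the study:

```python
def monthly_coping_cost(hours_per_day, value_of_time_usd_per_hr,
                        capital_usd=0.0, vendor_usd=0.0,
                        treatment_usd=0.0, days=30):
    """Sum the coping-cost components named in the abstract: time
    spent collecting water valued at an hourly rate, plus monthly
    capital, vendor/volumetric, and water-treatment expenditures.
    The study's actual accounting (e.g. amortization of capital
    costs) may differ; this is a demonstration only."""
    time_cost = hours_per_day * value_of_time_usd_per_hr * days
    return time_cost + capital_usd + vendor_usd + treatment_usd

# 2.5 h/day valued at a hypothetical $0.20/h, plus $5 of other costs
cost = monthly_coping_cost(2.5, 0.20, capital_usd=2.0,
                           vendor_usd=2.0, treatment_usd=1.0)
```

Note how even a low hypothetical value of time makes collection time the dominant component, matching the abstract's emphasis on the time burden.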
Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection
NASA Astrophysics Data System (ADS)
Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu
2018-05-01
A new branch of fault detection utilizes noise, for example by enhancing, adding or estimating it, to improve the signal-to-noise ratio (SNR) and extract fault signatures. Among such methods, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise-utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potentially superior performance in detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs together with a noise estimation strategy. Independent of any artificial setup, noise estimation by minimax thresholding is improved for the low-SNR case, which shows an especially strong capability for signature enhancement. To approximate weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Here, the sliding window for projecting the phase space is optimally designed by correlation minimization, and the singular order for the local reconfiguration used to estimate the noise is determined by the inflection point of the increment trend of normalized singular entropy. Furthermore, a noise estimation strategy, i.e. the approach for selecting between the two estimation techniques along with the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm its noise estimation capability.
Finally, the method is applied to detect the local wear fault from a dual-axis stabilized platform and the gear crack from an operating electric locomotive to verify its effectiveness and feasibility.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented.
The comparisons are performed both by numerical simulation and in vivo measurements of anterior and posterior eye segment as well as in skin imaging. The new estimator shows superior performance and also shows clearer image contrast.
NPP estimation and seasonal change research of Gansu province in northwest China
NASA Astrophysics Data System (ADS)
Han, Tao; Wang, Dawei; Hao, Xiaocui; Jiang, Youyan
2018-03-01
Based on GIS and remote sensing technology, this paper estimates the annual and seasonal NPP of Gansu province in northwest China for 2015 using the CASA (Carnegie Ames Stanford Approach) light-use-efficiency model. The results show that total annual NPP gradually declines from southeast to northwest, in accordance with the water and heat conditions of the province. Summer NPP is the maximum among the seasons: the maximum summer NPP reached 695 gC m-2 season-1, compared with 473 in spring and 288 in autumn, while winter NPP was below 60 throughout the province. The wide range of NPP values reflects the diversity of ecosystem types in Gansu province, including desert, grassland, farmland and forest. Grassland occupies the largest area and is highly diverse in type and coverage; the growth of low-coverage grassland in particular is strongly affected by precipitation, temperature and other meteorological factors.
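The CASA model referenced above estimates NPP as absorbed photosynthetically active radiation (APAR) times a light-use efficiency reduced by environmental stress scalars. A minimal sketch of that form; the specific stress-scalar decomposition and all numeric values here are illustrative, not the paper's parametrization:

```python
def casa_npp(par, fpar, eps_max, t1, t2, w):
    """CASA light-use-efficiency form: NPP = APAR * eps, where
    APAR = PAR * FPAR and eps = eps_max * T1 * T2 * W, with T1, T2
    (temperature stress) and W (water stress) scalars in [0, 1].
    Units and the exact stress formulation are simplified here."""
    apar = par * fpar  # absorbed photosynthetically active radiation
    eps = eps_max * t1 * t2 * w  # realized light-use efficiency
    return apar * eps

# Hypothetical inputs: PAR = 1000, FPAR = 0.5, eps_max = 0.389,
# moderate temperature and water stress.
npp = casa_npp(1000.0, 0.5, 0.389, 0.9, 0.95, 0.8)
```

Seasonal NPP differences then follow directly from seasonal variation in PAR, FPAR, and the stress scalars.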
Health-related utility values of patients with primary Sjögren's syndrome and its predictors.
Lendrem, Dennis; Mitchell, Sheryl; McMeekin, Peter; Bowman, Simon; Price, Elizabeth; Pease, Colin T; Emery, Paul; Andrews, Jacqueline; Lanyon, Peter; Hunter, John; Gupta, Monica; Bombardieri, Michele; Sutcliffe, Nurhan; Pitzalis, Costantino; McLaren, John; Cooper, Annie; Regan, Marian; Giles, Ian; Isenberg, David; Vadivelu, Saravanan; Coady, David; Dasgupta, Bhaskar; McHugh, Neil; Young-Min, Steven; Moots, Robert; Gendi, Nagui; Akil, Mohammed; Griffiths, Bridget; Ng, Wan-Fai
2014-07-01
EuroQoL-5 dimension (EQ-5D) is a standardised preference-based tool for measurement of health-related quality of life and EQ-5D utility values can be converted to quality-adjusted life years (QALYs) to aid cost-utility analysis. This study aimed to evaluate the EQ-5D utility values of 639 patients with primary Sjögren's syndrome (PSS) in the UK. Prospective data collected using a standardised pro forma were compared with UK normative data. Relationships between utility values and the clinical and laboratory features of PSS were explored. The proportion of patients with PSS reporting any problem in mobility, self-care, usual activities, pain/discomfort and anxiety/depression were 42.2%, 16.7%, 56.6%, 80.6% and 49.4%, respectively, compared with 5.4%, 1.6%, 7.9%, 30.2% and 15.7% for the UK general population. The median EQ-5D utility value was 0.691 (IQR 0.587-0.796, range -0.239 to 1.000) with a bimodal distribution. Bivariate correlation analysis revealed significant correlations between EQ-5D utility values and many clinical features of PSS, but most strongly with pain, depression and fatigue (R values>0.5). After adjusting for age and sex differences, multiple regression analysis identified pain and depression as the two most important predictors of EQ-5D utility values, accounting for 48% of the variability. Anxiety, fatigue and body mass index were other statistically significant predictors, but they accounted for <5% in variability. This is the first report on the EQ-5D utility values of patients with PSS. These patients have significantly impaired utility values compared with the UK general population. EQ-5D utility values are significantly related to pain and depression scores in PSS. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Mihalopoulos, Cathrine; Engel, Lidia; Le, Long Khanh-Dao; Magnus, Anne; Harris, Meredith; Chatterton, Mary Lou
2018-07-01
High prevalence mental disorders including depression, anxiety and substance use disorders are associated with high economic and disease burden. However, there is little information regarding the health state utility values of such disorders according to their clinical severity using comparable instruments across all disorders. This study reports utility values for high prevalence mental disorders using data from the 2007 Australian National Survey of Mental Health and Wellbeing (NSMHWB). Utility values were derived from the AQoL-4D and analysed by disorder classification (affective only (AD), anxiety-related only (ANX), substance use only (SUB) plus four comorbidity groups), severity level (mild, moderate, severe), symptom recency (reported in the past 30 days), and comorbidity (combination of disorders). The adjusted Wald test was applied to detect statistically significant differences of weighted means and the magnitude of difference between groups was presented as a modified Cohen's d. In total, 1526 individuals met criteria for a 12-month mental disorder. The mean utility value was 0.67 (SD = 0.27), with lower utility values associated with higher severity levels and some comorbidities. Utility values for AD, ANX and SUB were 0.64 (SD = 0.25), 0.71 (SD = 0.25) and 0.81 (SD = 0.19), respectively. No differences in utility values were observed between disorders within disorder groups. Utility values were significantly lower among people with recent symptoms (within past 30 days) than those without; when examined by diagnostic group, this pattern held for people with SUB, but not for people with ANX or AD. Health state utility values of people with high prevalence mental disorders differ significantly by severity level, number of mental health comorbidities and the recency of symptoms, which provide new insights on the burden associated with high prevalence mental disorders in Australia. The derived utility values can be used to populate future economic models.
A Framework for Valuing Investments in a Nurturing Society: Opportunities for Prevention Research
Crowley, Max; Jones, Damon
2017-01-01
Investing in strategies that aim to build a more nurturing society offers tremendous opportunities for the field of prevention science. Yet, scientists struggle to consistently take their research beyond effectiveness evaluations and actually value the impact of preventive strategies. Ultimately, it is clear that convincing policymakers to make meaningful investments in children and youth will require estimates of the fiscal impact of such strategies across public service systems. The framework offered here values such investments. First, we review current public spending on children and families. Then, we describe how to quantify and monetize the impact of preventive interventions. This includes a new measurement strategy for assessing multi-system service utilization and a price list for key service provision from public education, social services, criminal justice, healthcare, and tax systems. PMID:28247294
Rampersaud, Y Raja; Tso, Peggy; Walker, Kevin R; Lewis, Stephen J; Davey, J Roderick; Mahomed, Nizar N; Coyte, Peter C
2014-02-01
Although total hip arthroplasty (THA) and total knee arthroplasty (TKA) have been widely accepted as highly cost-effective procedures, spine surgery for the treatment of degenerative conditions does not share the same perception among stakeholders. In particular, the sustainability of the outcome and cost-effectiveness following lumbar spinal stenosis (LSS) surgery compared with THA/TKA remain uncertain. The purpose of the study was to estimate the lifetime incremental cost-utility ratios for decompression and decompression with fusion for focal LSS versus THA and TKA for osteoarthritis (OA) from the perspective of the provincial health insurance system (predominantly from the hospital perspective) based on long-term health status data at a median of 5 years after surgical intervention. An incremental cost-utility analysis from a hospital perspective was based on a single-center, retrospective longitudinal matched cohort study of prospectively collected outcomes and retrospectively collected costs. Patients who had undergone primary one- to two-level spinal decompression with or without fusion for focal LSS were compared with a matched cohort of patients who had undergone elective THA or TKA for primary OA. Outcome measures included incremental cost-utility ratio (ICUR) ($/quality adjusted life year [QALY]) determined using perioperative costs (direct and indirect) and Short Form-6D (SF-6D) utility scores converted from the SF-36. Patient outcomes were collected using the SF-36 survey preoperatively and annually for a minimum of 5 years. Utility was modeled over the lifetime and QALYs were determined using the median 5-year health status data. The primary outcome measure, cost per QALY gained, was calculated by estimating the mean incremental lifetime costs and QALYs for each diagnosis group after discounting costs and QALYs at 3%. 
Sensitivity analyses adjusting for ±25% primary and revision surgery cost, ±25% revision rate, upper and lower confidence interval utility scores, variable inpatient rehabilitation rates for THA/TKA, and discounting at 5% were conducted to determine factors affecting the value of each type of surgery. At a median of 5 years (range 4-7 years), follow-up and revision surgery data were obtained for 85% (FLSS), 80% (THA), and 75% (TKA) of the cohorts. The 5-year ICURs were $21,702/QALY for THA; $28,595/QALY for TKA; $12,271/QALY for spinal decompression; and $35,897/QALY for spinal decompression with fusion. The estimated lifetime ICURs using the median 5-year follow-up data were $5,682/QALY for THA; $6,489/QALY for TKA; $2,994/QALY for spinal decompression; and $10,806/QALY for spinal decompression with fusion. The overall spine (decompression alone and decompression with fusion) ICUR was $5,617/QALY. The estimated best- and worst-case lifetime ICURs varied from $1,126/QALY for the best case (spinal decompression) to $39,323/QALY for the worst case (spinal decompression with fusion). Surgical management of primary OA of the spine, hip, and knee results in durable cost-utility ratios that are well below accepted thresholds for cost-effectiveness. Despite a significantly higher revision rate, the overall surgical management of FLSS for those who have failed medical management results in similar median 5-year and lifetime cost-utility compared with those of THA and TKA for the treatment of OA from the limited perspective of a public health insurance system. Copyright © 2014 Elsevier Inc. All rights reserved.
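The cost-utility arithmetic behind the ICURs reported above reduces to incremental cost divided by incremental QALYs, with both costs and QALYs discounted (the study discounts at 3%, with 5% in sensitivity analysis). A minimal sketch with illustrative numbers, not the study's cost data:

```python
def discounted_qalys(utility_per_year, years, rate=0.03):
    """Present value of a constant annual utility stream,
    discounted at `rate` (3% in the base case above)."""
    return sum(utility_per_year / (1 + rate) ** t
               for t in range(1, years + 1))

def icur(delta_cost, delta_qalys):
    """Incremental cost-utility ratio, in $ per QALY gained."""
    return delta_cost / delta_qalys

# Hypothetical example: surgery costs $10,000 more than the
# comparator and yields 2.0 extra discounted QALYs.
ratio = icur(10000.0, 2.0)
```

A lifetime horizon lowers the ICUR relative to the 5-year figures because the QALY gain keeps accruing (at a discount) while the surgical cost is incurred up front.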
Nair, Kavita V; Miller, Kerri; Saseen, Joseph; Wolfe, Pamela; Allen, Richard Read; Park, Jinhee
2009-01-01
To examine the impact of a value-based benefit design on utilization and expenditures. This benefit design placed all diabetes-related drugs and testing supplies on the lowest copay tier for one employer group. The sample of diabetic members was followed over a 9-month preperiod and for 2 years after the benefit design was implemented. Measured outcomes included prescription drug utilization for diabetes and medical utilization. Generalized measures were used to estimate differences between years 1 and 2 and the preperiod, adjusting for age, gender, and comorbidity risk. Diabetes prescription drug use increased by 9.5% in year 1 and by 5.5% in year 2, and mean adherence increased by 7% to 8% in year 1 and fell slightly in year 2 compared with the preperiod. Pharmacy expenditures increased by 47% and 53%, and expenditures for diabetes services increased by 16% and 32%, in years 1 and 2, respectively. Increases in adherence and use of diabetes medications were observed. There were no compensatory cost-savings for the employer through lower medical expenditures in the first 2 years. Adherent patients had fewer emergency department visits than nonadherent patients after the implementation of this benefit design.
Studies on the estimation of the postmortem interval. 3. Rigor mortis (author's transl).
Suzutani, T; Ishibashi, H; Takatori, T
1978-11-01
The authors have devised a method for classifying rigor mortis into 10 types based on its appearance and strength in various parts of a cadaver. By applying the method to the findings of 436 cadavers which were subjected to medico-legal autopsies in our laboratory during the last 10 years, it has been demonstrated that the classifying method is effective for analyzing the phenomenon of onset, persistence and disappearance of rigor mortis statistically. The investigation of the relationship between each type of rigor mortis and the postmortem interval has demonstrated that rigor mortis may be utilized as a basis for estimating the postmortem interval but the values have greater deviation than those described in current textbooks.
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Least squares techniques were applied for parameter estimation of functions to predict winter wheat phenological stage with daily maximum temperature, minimum temperature, daylength, and precipitation as independent variables. After parameter estimation, tests were conducted using independent data. It may generally be concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson triquadratic form, in general use for spring wheat, yielded good results, but special techniques and care are required. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with averaged daily environmental values as independent variables.
Color constancy using bright-neutral pixels
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Luo, Yupin
2014-03-01
An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and utilized for illuminant estimation. To assess the representing capability of pixels, bright-neutral strength (BNS) is proposed by combining pixel chroma and brightness. Accordingly, a certain percentage of pixels with the largest BNS is selected to be the representative set. For every input image, a proper percentage value is determined via an iterative strategy by seeking the optimal color-corrected image. To compare various color-corrected images of an input image, image color-cast degree (ICCD) is devised using means and standard deviations of RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
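The selection-and-average idea above can be sketched compactly: score each pixel by a bright-neutral strength, keep the top percentage, and average them as the illuminant estimate. The particular BNS formula below (brightness divided by one plus chroma) is a stand-in assumption, not the score defined in the paper, and the iterative percentage selection via ICCD is omitted:

```python
def estimate_illuminant(pixels, percent=5.0):
    """Estimate the scene illuminant from bright, near-neutral
    pixels.  `pixels` is a list of (R, G, B) tuples.  Pixels are
    ranked by an illustrative bright-neutral strength and the top
    `percent` are averaged per channel."""
    def bns(p):
        r, g, b = p
        brightness = r + g + b
        mean = brightness / 3.0
        # Chroma proxy: largest channel deviation from the mean
        chroma = max(abs(r - mean), abs(g - mean), abs(b - mean))
        return brightness / (1.0 + chroma)

    ranked = sorted(pixels, key=bns, reverse=True)
    k = max(1, int(len(ranked) * percent / 100.0))
    top = ranked[:k]
    return tuple(sum(p[i] for p in top) / len(top) for i in range(3))
```

In the full method, the percentage itself would be chosen per image by iterating over candidate values and picking the one whose corrected image minimizes the color-cast measure (ICCD).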
NASA Astrophysics Data System (ADS)
Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang
2016-05-01
In this study, a novel batch sliding window (BSW) based singularity mapping approach is proposed. Compared to the traditional sliding window (SW) technique, which requires empirical predetermination of a fixed maximum window size and relies on the outlier-sensitive least-squares (LS) linear regression method, the BSW-based approach automatically determines the optimal size of the largest window for each estimated position and utilizes robust linear regression (RLR), which is insensitive to outlier values. In the case study, tin geochemical data from Gejiu, Yunnan, were processed by the BSW-based singularity mapping approach. The results show that the BSW approach can improve the accuracy of the calculated singularity exponent values owing to the determination of the optimal maximum window size. The use of RLR in the BSW approach smooths the distribution of singularity index values, suppressing the noise-like high-fluctuation values that usually make a singularity map rough and discontinuous. Furthermore, the Student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomaly and known tin polymetallic deposits. Target areas within the high tin geochemical anomaly probably have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly areas that show strong tin geochemical anomalies but in which no tin polymetallic deposits have yet been found.
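In singularity mapping, the exponent at a position is the slope of a log-log regression of windowed mean concentration against window size, so the choice of regression matters. The abstract does not specify which RLR estimator is used; the sketch below uses Theil-Sen (median of pairwise slopes) as one common outlier-insensitive choice, on synthetic power-law data:

```python
import math

def theil_sen_slope(x, y):
    """Median of all pairwise slopes: a robust alternative to
    least squares that is insensitive to outliers."""
    slopes = sorted((y[j] - y[i]) / (x[j] - x[i])
                    for i in range(len(x))
                    for j in range(i + 1, len(x)))
    m = len(slopes)
    mid = m // 2
    return slopes[mid] if m % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])

# Synthetic windowed means decaying roughly as size**-0.5:
# the fitted log-log slope recovers the scaling exponent.
sizes = [1, 2, 4, 8, 16]
means = [10.0, 7.1, 5.0, 3.55, 2.5]
logx = [math.log(s) for s in sizes]
logy = [math.log(c) for c in means]
slope = theil_sen_slope(logx, logy)
```

A least-squares fit on the same data would be pulled by a single contaminated window value; the pairwise-median slope is not, which is the motivation for RLR in the BSW approach.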
Identifying Thresholds for Ecosystem-Based Management
Samhouri, Jameal F.; Levin, Phillip S.; Ainsworth, Cameron H.
2010-01-01
Background One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. Methodology/Principal Findings To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. Conclusions/Significance For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management. PMID:20126647
A History-based Estimation for LHCb job requirements
NASA Astrophysics Data System (ADS)
Rauschmayr, Nathalie
2015-12-01
The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as expected runtime, is defined beforehand by the Production Manager in the best case, or set to fixed arbitrary values by default. LHCb's Workload Management System provides no mechanism to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. This presents a major problem particularly in the context of multicore jobs, since single- and multicore jobs must share the same resources. Consequently, grid sites need to rely on estimates given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. Since the main reason for moving to multicore jobs is the reduction of the overall memory footprint, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm is developed for history-based prediction. The aim is to learn over time how job runtime and memory evolve under changes in experiment conditions and software versions. It is shown that the estimates can be notably improved if experiment conditions are taken into account.
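A toy version of history-based requirement estimation: key completed jobs by their features (experiment conditions, software version) and predict a new job's runtime from the history for that key. The actual LHCb estimator is a trained supervised model; this only illustrates the idea, and the key names and default are invented for the example:

```python
from collections import defaultdict

class HistoryEstimator:
    """Toy history-based predictor.  Jobs are keyed by a tuple of
    features; the estimate for a key is the running mean of past
    runtimes for that key, falling back to a fixed default when no
    history exists (mirroring the 'arbitrary default' problem)."""

    def __init__(self, default_seconds=3600.0):
        self.default = default_seconds
        self.history = defaultdict(list)

    def record(self, key, runtime_seconds):
        """Feed back the observed runtime of a completed job."""
        self.history[key].append(runtime_seconds)

    def estimate(self, key):
        runs = self.history.get(key)
        return sum(runs) / len(runs) if runs else self.default

est = HistoryEstimator()
est.record(("sim", "v42"), 1000.0)   # hypothetical job type/version
est.record(("sim", "v42"), 1200.0)
```

Including experiment conditions in the key is what lets the estimate track changes in conditions and software versions over time, as the abstract describes.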
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to improve performance in sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to minimize the computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning
NASA Astrophysics Data System (ADS)
Sadeh, I.; Abdalla, F. B.; Lahav, O.
2016-10-01
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.
Transient Stability Output Margin Estimation Based on Energy Function Method
NASA Astrophysics Data System (ADS)
Miwa, Natsuki; Tanaka, Kazuyuki
In this paper, a new method of estimating the critical generation margin (CGM) in power systems is proposed from the viewpoint of transient stability diagnosis. The proposed method can directly compute the stability-limit output for a given contingency based on the transient energy function (TEF) method. Since the CGM can be obtained directly from the limit output using estimated P-θ curves and is easy to interpret, it is more useful than the conventional critical clearing time (CCT) of the energy function method. The proposed method can also return a negative CGM, indicating instability under the present load profile; a negative CGM can be directly utilized as a generator output restriction. The accuracy and fast solution capability of the proposed method are verified by applying it to a simple 3-machine model and the IEEJ EAST 10-machine standard model. Furthermore, its application to severity ranking of transient stability over many contingency cases is discussed using the CGM.
Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia
NASA Astrophysics Data System (ADS)
Hussin, F. N.; Rahman, H. A.; Bahar, A.
2017-09-01
The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods are utilized for the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical method that uses the independence and normality of logarithmic returns, yielding the simplest parameter estimates. The discrete method, in contrast, uses the transition density function of the log-normal diffusion process, with parameters derived by the maximum likelihood method. These two methods are used to obtain parameter estimates for samples of Malaysian gold share price data: the Financial Times Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah indices. Modelling the gold share price is important because gold price fluctuations affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, as indicated by its smaller Root Mean Square Error (RMSE).
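The historical method mentioned above is simple enough to sketch: for the GBM dS = μS dt + σS dW, the log returns r_i = ln(S_i/S_{i-1}) are i.i.d. normal with mean (μ − σ²/2)Δt and variance σ²Δt, so μ and σ follow from their sample mean and standard deviation. The prices below are illustrative, not the FTSE Bursa Malaysia data:

```python
import math

def gbm_historical_estimates(prices, dt=1.0):
    """Historical-method parameter estimates for geometric
    Brownian motion, from the sample mean and (unbiased) sample
    variance of the log returns."""
    r = [math.log(prices[i] / prices[i - 1])
         for i in range(1, len(prices))]
    n = len(r)
    mean_r = sum(r) / n
    var_r = sum((x - mean_r) ** 2 for x in r) / (n - 1)
    sigma = math.sqrt(var_r / dt)
    mu = mean_r / dt + 0.5 * sigma ** 2  # drift, corrected by sigma^2/2
    return mu, sigma

# A deterministic 5%-per-step series has zero return variance,
# so sigma is 0 and mu equals the constant log return.
mu, sigma = gbm_historical_estimates([100.0, 105.0, 110.25])
```

The discrete (maximum likelihood) method replaces these moment formulas with estimates that maximize the log-normal transition density, which is what the study found to give the smaller RMSE.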
Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.
Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I
2018-06-26
The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the MLE is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
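To make the ROS idea concrete: fit the logs of the detected values against normal quantiles of their plotting positions, then impute the censored observations from the fitted line. The sketch below is a simplified single-detection-limit version under an assumed lognormal residue distribution; real PDP data sets have multiple detection limits and the paper's implementation will differ:

```python
import math
from statistics import NormalDist

def ros_impute(detects, n_censored):
    """Simplified regression-on-order-statistics (ROS) for
    left-censored data with one detection limit.  The n_censored
    non-detects occupy the lowest ranks; detected values are fit
    as log(value) = a + b * z(plotting position), and censored
    values are imputed from that line.  Assumes lognormality."""
    n = len(detects) + n_censored
    obs = sorted(detects)
    # Blom-style plotting positions; detects sit above the censored ranks
    pp = [(n_censored + i + 1 - 0.375) / (n + 0.25)
          for i in range(len(obs))]
    q = [NormalDist().inv_cdf(p) for p in pp]
    y = [math.log(v) for v in obs]
    # Ordinary least-squares line y = a + b*q
    mq, my = sum(q) / len(q), sum(y) / len(y)
    b = (sum((qi - mq) * (yi - my) for qi, yi in zip(q, y))
         / sum((qi - mq) ** 2 for qi in q))
    a = my - b * mq
    # Impute the censored observations at their plotting positions
    cens_pp = [(i + 1 - 0.375) / (n + 0.25) for i in range(n_censored)]
    imputed = [math.exp(a + b * NormalDist().inv_cdf(p)) for p in cens_pp]
    return imputed + obs

# Hypothetical residues (ppm): 4 detects, 3 non-detects.
res = ros_impute([0.05, 0.08, 0.12, 0.2], 3)
```

Unlike simple substitution (e.g. replacing non-detects with half the detection limit), ROS lets the detected tail of the distribution determine where the censored values most plausibly lie.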
NASA Astrophysics Data System (ADS)
Burgin, M. S.; van Zyl, J. J.
2017-12-01
Traditionally, substantial ancillary data is needed to parametrize complex electromagnetic models to estimate soil moisture from polarimetric radar data. The Soil Moisture Active Passive (SMAP) baseline radar soil moisture retrieval algorithm uses a data cube approach, where a cube of radar backscatter values is calculated using sophisticated models. In this work, we utilize the empirical approach by Kim and van Zyl (2009) which is an optional SMAP radar soil moisture retrieval algorithm; it expresses radar backscatter of a vegetated scene as a linear function of soil moisture, hence eliminating the need for ancillary data. We use 2.5 years of L-band Aquarius radar and radiometer derived soil moisture data to determine two coefficients of a linear model function on a global scale. These coefficients are used to estimate soil moisture with 2.5 months of L-band SMAP and L-band PALSAR-2 data. The estimated soil moisture is compared with the SMAP Level 2 radiometer-only soil moisture product; the global unbiased RMSE of the SMAP derived soil moisture corresponds to 0.06-0.07 cm3/cm3. In this study, we leverage the three diverse L-band radar data sets to investigate the impact of pixel size and pixel heterogeneity on soil moisture estimation performance. Pixel sizes range from 100 km for Aquarius, over 3, 9, 36 km for SMAP, to 10m for PALSAR-2. Furthermore, we observe seasonal variation in the radar sensitivity to soil moisture which allows the identification and quantification of seasonally changing vegetation. Utilizing this information, we further improve the estimation performance. The research described in this paper is supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Copyright 2017. All rights reserved.
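The two-coefficient linear model at the heart of this approach can be sketched as below. The numbers are hypothetical collocated samples for a single grid cell, not Aquarius or SMAP data; the point is that radar backscatter is modeled as a linear function of soil moisture, so retrieval is just an inversion of the fitted line.

```python
import numpy as np

# Hypothetical collocated samples for one grid cell: radiometer-derived
# soil moisture (cm^3/cm^3) and radar backscatter sigma0 (dB).
mv = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
sigma0_db = np.array([-14.8, -13.9, -13.1, -12.2, -11.0, -10.3])

# Fit the two coefficients of the linear model sigma0 = a + b * mv.
b, a = np.polyfit(mv, sigma0_db, 1)

# Invert the fitted model to retrieve soil moisture from a new
# backscatter observation.
def retrieve_mv(s0_db):
    return (s0_db - a) / b

print(round(retrieve_mv(-12.5), 3))
```

No ancillary vegetation or roughness data enter the fit; all scene information is absorbed into the per-cell coefficients a and b, which is what eliminates the need for parametrizing a full electromagnetic model.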
NASA Astrophysics Data System (ADS)
Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong
2012-03-01
In contrast to traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors caused in step 2, and then obtain the relative errors of the second order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
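Steps 1 and 2 of the procedure (the GM(1,1) trend component) can be sketched as follows, using a short hypothetical series of annual maxima rather than the Yuqiao records. The accumulated series is fitted by least squares, the time-response function gives trend values, and the relative errors computed at the end are what step 3 would feed into the Markov model.

```python
import numpy as np

# Hypothetical annual maximum water levels (m) at one station.
x0 = np.array([5.2, 5.6, 5.4, 5.9, 6.1, 5.8, 6.3])

# Step 1: GM(1,1) -- accumulate the series (AGO), build the
# mean-sequence matrix, and solve for a (development coefficient)
# and b (grey input) by least squares.
x1 = np.cumsum(x0)
z = 0.5 * (x1[1:] + x1[:-1])
B = np.column_stack([-z, np.ones(len(z))])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

# Step 2: trend values from the time-response function, then
# difference back to the original series (inverse AGO).
k = np.arange(len(x0) + 1)               # one step beyond the data
x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
x0_hat = np.diff(x1_hat, prepend=0.0)
x0_hat[0] = x0[0]

forecast_next = x0_hat[-1]               # one-step-ahead trend forecast
rel_err = (x0 - x0_hat[:len(x0)]) / x0   # input to the Markov step
print(round(forecast_next, 2))
```

The Markov step (steps 3-4) would partition these relative errors into states and use the state-transition probabilities to correct the trend forecast, which is what lifts the GMM above plain GM(1,1) accuracy.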
Brown, Elizabeth R; Smith, Jessi L; Thoman, Dustin B; Allen, Jill M; Muragishi, Gregg
2015-11-01
Motivating students to pursue science careers is a top priority among many science educators. We add to the growing literature by examining the impact of a utility value intervention to enhance student's perceptions that biomedical science affords important utility work values. Using an expectancy-value perspective we identify and test two types of utility value: communal (other-oriented) and agentic (self-oriented). The culture of science is replete with examples emphasizing high levels of agentic value, but communal values are often (stereotyped as) absent from science. However, people in general want an occupation that has communal utility. We predicted and found that an intervention emphasizing the communal utility value of biomedical research increased students' motivation for biomedical science (Studies 1-3). We refined whether different types of communal utility value (working with, helping, and forming relationships with others) might be more or less important, demonstrating that helping others was an especially important predictor of student motivation (Study 2). Adding agentic utility value to biomedical research did not further increase student motivation (Study 3). Furthermore, the communal value intervention indirectly impacted students' motivation because students believed that biomedical research was communal and thus subsequently more important (Studies 1-3). This is key, because enhancing student communal value beliefs about biomedical research (Studies 1-3) and science (Study 4) was associated both with momentary increases in motivation in experimental settings (Studies 1-3) and increased motivation over time among students highly identified with biomedicine (Study 4). We discuss recommendations for science educators, practitioners, and faculty mentors who want to broaden participation in science.
Estimating Power Outage Cost based on a Survey for Industrial Customers
NASA Astrophysics Data System (ADS)
Yoshida, Yoshikuni; Matsuhashi, Ryuji
A survey on power outage costs was conducted among industrial customers. 5139 factories, all designated energy-management factories in Japan, reported their power consumption and the loss of production value that a one-hour outage on a summer weekday would cause. The median unit cost of power outage across all sectors is estimated as 672 yen/kWh. The sector of services for amusement and hobbies and the sector of manufacture of information and communication electronics equipment have relatively high unit costs of power outage. Direct damage cost from power outage across all sectors reaches 77 billion yen. Then, utilizing input-output analysis, we estimated the indirect damage cost caused by the repercussion of the production halt. Indirect damage cost across all sectors reaches 91 billion yen. The sector of wholesale and retail trade has the largest direct damage cost. The sector of manufacture of transportation equipment has the largest indirect damage cost.
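The input-output step can be sketched with a standard Leontief calculation. This is a toy three-sector example with hypothetical coefficients and damage figures, not the survey's actual table or necessarily the authors' exact formulation; it shows how a direct production loss propagates through inter-sector purchases into an indirect loss.

```python
import numpy as np

# Toy 3-sector input-output table: A[i, j] is the input from sector i
# needed per unit of sector j's output (hypothetical coefficients).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.10],
              [0.05, 0.15, 0.10]])

# Direct damage: production value lost in each sector during the
# outage (hypothetical values, billion yen).
direct = np.array([30.0, 25.0, 22.0])

# Total output change implied by the loss: x = (I - A)^-1 f,
# using the Leontief inverse. Indirect damage is the excess of the
# total effect over the direct loss.
total = np.linalg.solve(np.eye(3) - A, direct)
indirect = total - direct
print(indirect.round(1), round(indirect.sum(), 1))
```

In the survey's full analysis the same logic runs over the national sector classification, which is how an indirect total (91 billion yen) can exceed the direct total (77 billion yen).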
Satellites for the study of ocean primary productivity
NASA Technical Reports Server (NTRS)
Smith, R. C.; Baker, K. S.
1983-01-01
The use of remote sensing techniques for obtaining estimates of global marine primary productivity is examined. It is shown that remote sensing and multiplatform (ship, aircraft, and satellite) sampling strategies can be used to significantly lower the variance in estimates of phytoplankton abundance and of population growth rates from the values obtained using the C-14 method. It is noted that multiplatform sampling strategies are essential to assess the mean and variance of phytoplankton biomass on a regional or on a global basis. The relative errors associated with shipboard and satellite estimates of phytoplankton biomass and primary productivity, as well as the increased statistical accuracy possible from the utilization of contemporaneous data from both sampling platforms, are examined. It is shown to be possible to follow changes in biomass and the distribution patterns of biomass as a function of time with the use of satellite imagery.
Sparse Bayesian learning for DOA estimation with mutual coupling.
Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi
2015-10-16
Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
Examining the short-run price elasticity of gasoline demand in the United States
NASA Astrophysics Data System (ADS)
Brannan, Michael James
Estimating the consumer demand response to changes in the price of gasoline has important implications regarding fuel tax policies and environmental concerns. There are reasons to believe that the short-run price elasticity of gasoline demand fluctuates due to changing structural and behavioral factors. In this paper I estimate the short-run price elasticity of gasoline demand in two time periods, from 2001 to 2006 and from 2007 to 2010. This study utilizes data at both the national and state levels to produce estimates. The short-run price elasticities range from -0.034 to -0.047 during 2001 to 2006, compared to -0.058 to -0.077 in the 2007 to 2010 period. This paper also examines whether there are regional differences in the short-run price elasticity of gasoline demand in the United States. However, there appears to only be modest variation in price elasticity values across regions.
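A short-run price elasticity of this kind is conventionally read off a log-log regression, since the slope of ln(quantity) on ln(price) is the elasticity. The sketch below uses hypothetical monthly observations, not the study's national or state panels.

```python
import numpy as np

# Hypothetical monthly observations: gasoline price ($/gal) and
# per-capita consumption (gal). The elasticity is the slope beta in
# ln(Q) = alpha + beta * ln(P).
price = np.array([2.10, 2.25, 2.40, 2.60, 2.85, 3.10])
quantity = np.array([41.0, 40.7, 40.5, 40.2, 39.9, 39.5])

beta, alpha = np.polyfit(np.log(price), np.log(quantity), 1)
print(round(beta, 3))  # short-run price elasticity estimate
```

A value near zero and negative, as in the abstract's -0.034 to -0.077 range, indicates that demand is highly inelastic in the short run: large price swings move consumption only slightly.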
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
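The kernel-weighting idea can be sketched for a time-invariant linear coefficient: every (response time, covariate time) pair enters the estimating equation, down-weighted by how far apart the two observation times are. This toy version with one subject and hypothetical data is a simplification of the paper's estimating-equation machinery, not its exact estimator.

```python
import numpy as np

# One subject's asynchronous observations (hypothetical): response Y
# and covariate X are measured at mismatched times.
t_y = np.array([0.1, 0.5, 0.9, 1.4, 2.0])
y = np.array([2.1, 2.9, 3.4, 4.2, 5.1])
t_x = np.array([0.2, 0.8, 1.5, 1.9])
x = np.array([1.0, 1.4, 2.0, 2.4])

h = 0.3  # bandwidth

def epanechnikov(u):
    return np.where(np.abs(u) < 1, 0.75 * (1 - u**2), 0.0)

# Kernel weights over all (response, covariate) pairs: pairs whose
# observation times nearly coincide get the most weight.
W = epanechnikov((t_y[:, None] - t_x[None, :]) / h)
Yp = np.repeat(y, len(x))
Xp = np.tile(x, len(y))
w = W.ravel()

# Weighted least squares for E[Y|X] = b0 + b1 * X.
X_design = np.column_stack([np.ones_like(Xp), Xp])
beta = np.linalg.lstsq(X_design * w[:, None]**0.5,
                       Yp * w**0.5, rcond=None)[0]
print(beta.round(2))  # [intercept, slope]
```

Shrinking the bandwidth uses only nearly-synchronous pairs (less bias, fewer effective observations), which is the mechanism behind the slower convergence rates the paper establishes relative to truly synchronous data.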
Kim, Jung-Wook; Lee, Chang Kyun; Rhee, Sang Youl; Oh, Chi Hyuck; Shim, Jae-Jun; Kim, Hyo Jong
2018-04-01
Data regarding health-care costs and utilization for inflammatory bowel disease (IBD) at the population level are limited in Asia. We aimed to investigate the nationwide prevalence and health-care cost and utilization of IBD in Korea. We tracked the IBD-attributable health-care costs and utilization from 2010 to 2014 using the public dataset obtained from Korean National Health Insurance Service claims. We estimated the nationwide prevalence of IBD using population census data from Statistics Korea during the same period. In total, 236 106 IBD patients were analyzed. The estimated IBD prevalence significantly increased from 85.1/100 000 in 2010 to 106/100 000 in 2014. The overall annual health-care costs for IBD increased from $23.2 million (US dollars) in 2010 to $49.7 million in 2014 (P < 0.001). During the same period, the health-care cost per capita also increased from $572.3 to $983.7 (P < 0.001). The outpatient to total cost ratio increased from 45.5% in 2010 to 66.6% in 2014. Regarding health-care utilization, the outpatient to total days of service use ratio increased from 73.1% in 2010 to 76.9% in 2014. Of the total days of service used, the proportions of tertiary, general, and community hospitals increased significantly with a concomitant decrease in that of primary clinics (all P values < 0.001). This population-based study confirmed the steadily rising rate of prevalence of IBD in Korea. It also demonstrated that the shifting to outpatient care and advanced care settings are drivers for the dramatic increase in IBD-related health-care costs in Korea. © 2017 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Kim, Hye-Lin; Kim, Dam; Jang, Eun Jin; Lee, Min-Young; Song, Hyun Jin; Park, Sun-Young; Cho, Soo-Kyung; Sung, Yoon-Kyoung; Choi, Chan-Bum; Won, Soyoung; Bang, So-Young; Cha, Hoon-Suk; Choe, Jung-Yoon; Chung, Won Tae; Hong, Seung-Jae; Jun, Jae-Bum; Kim, Jinseok; Kim, Seong-Kyu; Kim, Tae-Hwan; Kim, Tae-Jong; Koh, Eunmi; Lee, Hwajeong; Lee, Hye-Soon; Lee, Jisoo; Lee, Shin-Seok; Lee, Sung Won; Park, Sung-Hoon; Shim, Seung-Cheol; Yoo, Dae-Hyun; Yoon, Bo Young; Bae, Sang-Cheol; Lee, Eui-Kyung
2016-04-01
The aim of this study was to estimate the mapping model for EuroQol-5D (EQ-5D) utility values using the health assessment questionnaire disability index (HAQ-DI), pain visual analog scale (VAS), and disease activity score in 28 joints (DAS28) in a large, nationwide cohort of rheumatoid arthritis (RA) patients in Korea. The KORean Observational study Network for Arthritis (KORONA) registry data on 3557 patients with RA were used. Data were randomly divided into a modeling set (80% of the data) and a validation set (20% of the data). The ordinary least squares (OLS), Tobit, and two-part model methods were employed to construct a model to map to the EQ-5D index. Using combinations of HAQ-DI, pain VAS, and DAS28, four model versions were examined. To evaluate the predictive accuracy of the models, the root-mean-square error (RMSE) and mean absolute error (MAE) were calculated using the validation dataset. A model that included HAQ-DI, pain VAS, and DAS28 produced the highest adjusted R² as well as the lowest Akaike information criterion, RMSE, and MAE, regardless of the statistical method used in the modeling set. The mapping equation of the OLS method is given as EQ-5D = 0.95 - 0.21 × HAQ-DI - 0.24 × pain VAS/100 - 0.01 × DAS28 (adjusted R² = 57.6%, RMSE = 0.1654, MAE = 0.1222). In the validation set as well, its RMSE and MAE were the smallest. The model with HAQ-DI, pain VAS, and DAS28 showed the best performance, and this mapping model enables the estimation of an EQ-5D value for RA patients in whom utility values have not been measured.
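The reported OLS mapping equation is directly usable as a small function; the patient values below are hypothetical, chosen only to show the arithmetic.

```python
def map_to_eq5d(haq_di, pain_vas, das28):
    """OLS mapping from the abstract:
    EQ-5D = 0.95 - 0.21*HAQ-DI - 0.24*(pain VAS/100) - 0.01*DAS28."""
    return 0.95 - 0.21 * haq_di - 0.24 * pain_vas / 100 - 0.01 * das28

# Hypothetical patient: moderate disability (HAQ-DI 1.0),
# pain VAS 40/100, DAS28 of 4.0.
print(round(map_to_eq5d(1.0, 40, 4.0), 3))  # → 0.604
```

Note the intended use: imputing utility values for cost-effectiveness analyses when EQ-5D was not collected, with the RMSE of about 0.165 indicating the per-patient uncertainty such imputation carries.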
Huang, Xiangdong; Xue, Dong; Xue, Lian
2015-08-01
A greenhouse experiment was conducted to investigate the impact of sewage sludge compost application on functional diversity of soil microbial communities, based on carbon source utilization, and biochemical characteristics of tree peony (Paeonia suffruticosa). Functional diversity was estimated with incubations in Biolog EcoPlates and well color development was used as the functional trait for carbon source utilization. The average well color development and Shannon index based on the carbon source utilization pattern in Biolog EcoPlates significantly increased with the increasing sludge compost application in the range of 0-45%, with a decreasing trend above 45%. Principal component analysis of carbon source utilization pattern showed that sludge compost application stimulated the utilization rate of D-cellobiose and α-D-lactose, while the utilization rate of β-methyl-D-glucoside, L-asparagine, L-serine, α-cyclodextrin, γ-hydroxybutyric acid, and itaconic acid gradually increased up to a sludge compost amendment dosage of 45% and then decreased above 45%. The chlorophyll content, antioxidase (superoxide dismutase, catalase, and peroxidase) activities, plant height, flower diameter, and flower numbers per plant of tree peony increased significantly with sludge compost dosage, reaching a peak value at 45 %, and then decreased with the exception that activity of superoxide dismutase and catalase did not vary significantly.
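The two functional-diversity metrics in this abstract, average well color development (AWCD) and the Shannon index over carbon-source utilization, are simple to compute from plate readings. The optical densities below are hypothetical blank-corrected values for the 31 carbon sources of an EcoPlate, not the study's measurements.

```python
import numpy as np

# Hypothetical blank-corrected well color development (OD values)
# for the 31 carbon sources of a Biolog EcoPlate.
od = np.array([0.02, 0.15, 0.30, 0.00, 0.45, 0.22, 0.10, 0.05,
               0.33, 0.27, 0.18, 0.09, 0.40, 0.12, 0.06, 0.21,
               0.00, 0.35, 0.28, 0.14, 0.08, 0.19, 0.25, 0.31,
               0.04, 0.11, 0.16, 0.23, 0.37, 0.07, 0.13])

awcd = od.mean()                    # average well color development
p = od[od > 0] / od[od > 0].sum()   # utilization proportions
shannon = -(p * np.log(p)).sum()    # Shannon diversity index H'
print(round(awcd, 3), round(shannon, 2))
```

AWCD summarizes overall metabolic activity, while the Shannon index captures how evenly that activity is spread across carbon sources; the study found both rising with sludge compost dosage up to 45% and declining above it.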
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, K. A.; Hay, L.; Markstrom, S. L.
2014-12-01
The US Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and facilitate the application of simulations on the scale of the continental US. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort in establishing a robust NHM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khatonabadi, Maryam; Kim, Hyun J.; Lu, Peiyun
Purpose: In AAPM Task Group 204, the size-specific dose estimate (SSDE) was developed by providing size adjustment factors which are applied to the Computed Tomography (CT) standardized dose metric, CTDIvol. However, that work focused on fixed tube current scans and did not specifically address tube current modulation (TCM) scans, which are currently the majority of clinical scans performed. The purpose of this study was to extend the SSDE concept to account for TCM by investigating the feasibility of using anatomic and organ-specific regions of scanner output to improve the accuracy of dose estimates. Methods: Thirty-nine adult abdomen/pelvis and 32 chest scans from clinically indicated CT exams acquired on a multidetector CT using TCM were obtained with Institutional Review Board approval for generating voxelized models. Along with image data, raw projection data were obtained to extract TCM functions for use in Monte Carlo simulations. Patient size was calculated using the effective diameter described in TG 204. In addition, the scanner-reported CTDIvol (CTDIvol,global) was obtained for each patient, which is based on the average tube current across the entire scan. For the abdomen/pelvis scans, liver, spleen, and kidneys were manually segmented from the patient datasets; for the chest scans, lungs and, for female models only, glandular breast tissue were segmented. For each patient, organ doses were estimated using Monte Carlo methods. To investigate the utility of regional measures of scanner output, regional and organ anatomic boundaries were identified from image data and used to calculate regional and organ-specific average tube current values. From these regional and organ-specific averages, CTDIvol values, referred to as regional and organ-specific CTDIvol, were calculated for each patient.
Using an approach similar to TG 204, all CTDIvol values were used to normalize simulated organ doses, and the ability of each normalized dose to correlate with patient size was investigated. Results: For all five organs, the correlations with patient size increased when organ doses were normalized by regional and organ-specific CTDIvol values. For example, when estimating dose to the liver, CTDIvol,global yielded an R² value of 0.26, which improved to 0.77 and 0.86 when using the regional and organ-specific CTDIvol for abdomen and liver, respectively. For breast dose, the global CTDIvol yielded an R² value of 0.08, which improved to 0.58 and 0.83 when using the regional and organ-specific CTDIvol for chest and breasts, respectively. The R² values also increased once the thoracic models were separated into females and males for the analysis, indicating differences between genders in this region not explained by a simple measure of effective diameter. Conclusions: This work demonstrated the utility of regional and organ-specific CTDIvol as normalization factors when using TCM. It was demonstrated that CTDIvol,global is not an effective normalization factor in TCM exams where attenuation (and therefore tube current) varies considerably throughout the scan, such as abdomen/pelvis and even thorax. These exams can be more accurately assessed for dose using regional CTDIvol descriptors that account for local variations in scanner output present when TCM is employed.
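The correlation analysis behind those R² figures can be sketched with the TG 204-style exponential size dependence, fitted in log space. The per-patient numbers below are hypothetical, standing in for the simulated liver doses and organ-specific CTDIvol values.

```python
import numpy as np

# Hypothetical per-patient values: effective diameter (cm), simulated
# liver dose (mGy), and an organ-specific CTDIvol (mGy).
diameter = np.array([22.0, 25.0, 28.0, 31.0, 34.0, 37.0])
liver_dose = np.array([14.0, 12.1, 10.4, 9.0, 7.8, 6.7])
ctdi_organ = np.array([10.5, 10.0, 9.6, 9.3, 9.1, 9.0])

# TG 204-style model for normalized dose vs size:
# ln(D / CTDIvol) = ln(a) - b * diameter; R^2 from the log-linear fit.
y = np.log(liver_dose / ctdi_organ)
b, ln_a = np.polyfit(diameter, y, 1)
resid = y - (b * diameter + ln_a)
r2 = 1 - resid.var() / y.var()
print(round(r2, 3))
```

The study's point is that swapping CTDIvol,global for a regional or organ-specific CTDIvol in the denominator tightens exactly this kind of fit, raising R² from 0.26 to as high as 0.86 for the liver.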
Cranor, W.L.; Alvarez, D.A.; Huckins, J.N.; Petty, J.D.
2009-01-01
To fully utilize semipermeable membrane devices (SPMDs) as passive samplers in air monitoring, data are required to accurately estimate airborne concentrations of environmental contaminants. Limited uptake rate constants (kua) and no SPMD-air partitioning coefficients (Ksa) existed for vapor-phase contaminants. This research was conducted to expand the existing body of kinetic data for SPMD air sampling by determining kua and Ksa for a number of airborne contaminants, including the chemical classes polycyclic aromatic hydrocarbons, organochlorine pesticides, brominated diphenyl ethers, phthalate esters, synthetic pyrethroids, and organophosphate/organosulfur pesticides. The kua values were obtained for 48 of the 50 chemicals investigated and ranged from 0.03 to 3.07 m³ g⁻¹ d⁻¹. In cases where uptake was approaching equilibrium, Ksa values were approximated. Ksa values (unitless) were determined or estimated for 48 of the chemicals investigated, ranging from 3.84E+5 to 7.34E+7. This research utilized a test system (United States Patent 6,877,724 B1) which afforded the capability to generate and maintain constant concentrations of vapor-phase chemical mixtures. The test system and experimental design employed gave reproducible results during experimental runs spanning more than two years. This reproducibility was shown by obtaining mean kua values (n = 3) for anthracene and p,p′-DDE of 0.96 and 1.57 m³ g⁻¹ d⁻¹, with relative standard deviations of 8.4% and 8.6%, respectively.
NASA Astrophysics Data System (ADS)
Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.
2017-12-01
This paper presents a newly proposed snow depth retrieval approach for mountainous deep snow using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation using satellite PM radiance assimilation, the proposed method utilizes single-flight observations and deploys snow hydrologic models. This is promising because satellite-based retrieval methods have difficulty estimating snow depth due to their coarse resolution and computational cost. The approach combines a particle filter using combinations of multiple PM frequencies with a multi-layer snow physical model (Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and a capability to reduce the prior snow depth biases. When applying the snow depth retrieval algorithm with a combination of four PM frequencies (10.7, 18.7, 37.0, and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method was sensitive to different combinations of frequencies, model stratigraphy (i.e., the number of layers in the snow physical model), and estimation methods (particle filter versus Kalman filter). The prior RMSE values at the forest-covered areas were reduced by 37-42% even in the presence of forest cover.
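The particle filter update at the core of such a retrieval can be sketched in a few lines. Everything below is hypothetical: a one-variable state (snow depth), a toy linear observation operator in place of a radiative transfer model, and made-up observation values; the real method uses Crocus layer states and multifrequency brightness temperatures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior ensemble of snow depth (m) from the snow model (hypothetical).
particles = rng.normal(1.5, 0.4, size=500)

# Toy observation operator: brightness temperature (K) assumed to
# decrease linearly with snow depth (stand-in for radiative transfer).
def h(depth):
    return 260.0 - 25.0 * depth

obs_tb, obs_sigma = 215.0, 3.0  # one flight observation and its noise

# Particle filter update: weight each particle by the Gaussian
# likelihood of the observation, then form the posterior mean.
w = np.exp(-0.5 * ((obs_tb - h(particles)) / obs_sigma) ** 2)
w /= w.sum()
posterior_mean = (w * particles).sum()
print(round(posterior_mean, 2))
```

Because the observation is informative relative to the prior spread, the posterior mean shifts from the model prior (1.5 m) toward the depth implied by the observation (1.8 m), which is the bias-reduction mechanism the abstract reports.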
Israel, J A; May, B
2010-03-01
The utility of genetic measures for kinship reconstruction in polysomic species is not well evaluated. We developed a framework to test hypotheses about estimating breeding population size indirectly from collections of outmigrating green sturgeon juveniles. We evaluated a polysomic dataset, in allelic frequency and phenotypic formats, from green sturgeon to describe the relationship among known progeny from experimental families. The distributions of relatedness values for kin classes were used for reconstructing green sturgeon pedigrees from juveniles of unknown relationship. We compared three rarefaction functions that described the relationship between the number of kin groups and the number of samples in a pedigree to estimate the annual abundance of spawners contributing to the threatened green sturgeon Southern Distinct Population Segment in the upper Sacramento River. Results suggested the estimated abundance of breeding green sturgeon remained roughly constant in the upper Sacramento River over a 5-year period, ranging from 10 to 28 individuals depending on the year and rarefaction method. These results demonstrate that an empirical understanding of the distribution of relatedness values among individuals benefits the assessment of pedigree reconstruction methods and the identification of misclassification rates. Monitoring of rare species using these indirect methods is feasible and can provide insight into breeding and ontogenetic behaviour. While this framework was developed for specific application to studying fish populations in a riverscape, it could be advanced to improve genetic estimation of breeding population size and to identify important breeding habitats of rare species when combined with finer-scaled sampling of offspring.
Fujiwara, Yasuhiro; Maruyama, Hirotoshi; Toyomaru, Kanako; Nishizaka, Yuri; Fukamatsu, Masahiro
2018-06-01
Magnetic resonance imaging (MRI) is widely used to detect carotid atherosclerotic plaques. Although it is important to evaluate vulnerable carotid plaques containing lipids and intra-plaque hemorrhages (IPHs) using T1-weighted images, the image contrast changes depending on the imaging settings. Moreover, to distinguish between a thrombus and a hemorrhage, it is useful to evaluate the iron content of the plaque using both T1-weighted and T2*-weighted images. Therefore, a quantitative evaluation of carotid atherosclerotic plaques using T1 and T2* values may be necessary for the accurate evaluation of plaque components. The purpose of this study was to determine whether the multi-echo phase-sensitive inversion recovery (mPSIR) sequence can improve T1 contrast while simultaneously providing accurate T1 and T2* values of an IPH. T1 and T2* values measured using mPSIR were compared to values from conventional methods in phantom and in vivo studies. In the phantom study, the T1 and T2* values estimated using mPSIR were linearly correlated with those of conventional methods. In the in vivo study, mPSIR demonstrated higher T1 contrast between the IPH phantom and sternocleidomastoid muscle than the conventional method. Moreover, the T1 and T2* values of the blood vessel wall and sternocleidomastoid muscle estimated using mPSIR were correlated with values measured by conventional methods and with values reported previously. The mPSIR sequence improved T1 contrast while simultaneously providing accurate T1 and T2* values in the neck region. Although further study is required to evaluate the clinical utility, mPSIR may improve carotid atherosclerotic plaque detection and provide detailed information about plaque components.
Petroleum Refinery Jobs and Economic Development Impact (JEDI) Model User Reference Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Marshall
The Jobs and Economic Development Impact (JEDI) models, developed through the National Renewable Energy Laboratory (NREL), are user-friendly tools utilized to estimate the local economic impacts of constructing and operating fuel and power generation projects for a range of conventional and renewable energy technologies. The JEDI Petroleum Refinery Model User Reference Guide was developed to assist users in employing and understanding the model. This guide provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features, operation of the model, and a discussion of how the results should be interpreted. Based on project-specific inputs from the user, the model estimates job creation, earnings, and output (total economic activity) for a given petroleum refinery. This includes the direct, indirect, and induced economic impacts to the local economy associated with the refinery's construction and operation phases. Project cost and job data used in the model are derived from the most current cost estimations available. Local direct and indirect economic impacts are estimated using economic multipliers derived from IMPLAN software. By determining the regional economic impacts and job creation for a proposed refinery, the JEDI Petroleum Refinery model can be used to field questions about the added value refineries may bring to the local community.
An atlas-based organ dose estimator for tomosynthesis and radiography
NASA Astrophysics Data System (ADS)
Hoye, Jocelyn; Zhang, Yakun; Agasthya, Greeshma; Sturgeon, Greg; Kapadia, Anuj; Segars, W. Paul; Samei, Ehsan
2017-03-01
The purpose of this study was to provide patient-specific organ dose estimation based on an atlas of human models for twenty tomosynthesis and radiography protocols. The study utilized a library of 54 adult computational phantoms (age: 18-78 years, weight: 52-117 kg) and a validated Monte Carlo simulation (PENELOPE) of a tomosynthesis and radiography system to estimate organ dose. Positioning of patient anatomy was based on radiographic positioning handbooks. The field of view for each exam was calculated to include the relevant organs per protocol. Through simulations, the energy deposited in each organ was binned to estimate normalized organ doses in a reference database. The database can be used as the basis for a dose calculator that predicts patient-specific organ dose values based on kVp, mAs, exposure in air, and patient habitus for a given protocol. As an example of the utility of this tool, dose to an organ was studied as a function of average patient thickness in the field of view for a given exam and as a function of Body Mass Index (BMI). For tomosynthesis, organ doses can also be studied as a function of x-ray tube position. This work developed comprehensive information on organ dose dependencies across tomosynthesis and radiography. Organ dose generally decreased exponentially with increasing patient size, in a highly protocol-dependent manner. There was a wide range of variability in organ dose across the patient population, which needs to be incorporated into the metrology of organ dose.
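The exponential size dependence can be sketched with a log-linear fit of the model D = a·exp(-b·t). The thickness and dose values below are hypothetical stand-ins for one protocol's atlas entries, not the study's database.

```python
import numpy as np

# Hypothetical organ dose (mGy) vs average patient thickness (cm)
# in the field of view for one radiography protocol.
thickness = np.array([18.0, 21.0, 24.0, 27.0, 30.0, 33.0])
dose = np.array([0.92, 0.74, 0.60, 0.49, 0.40, 0.32])

# Fit the exponential model D = a * exp(-b * t) in log space.
b_neg, ln_a = np.polyfit(thickness, np.log(dose), 1)
a, b = np.exp(ln_a), -b_neg

def predict_dose(t):
    return a * np.exp(-b * t)

print(round(predict_dose(25.0), 3))
```

In a full calculator the fitted (a, b) pair would be protocol-specific and the prediction scaled by the exam's kVp, mAs, and exposure in air, per the dependencies described above.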
Potential Size of and Value Proposition for H2@Scale Concept
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruth, Mark F; Jadun, Paige; Pivovar, Bryan S
The H2@Scale concept is focused on developing hydrogen as an energy carrier and using hydrogen's properties to improve the national energy system. Specifically, hydrogen can (1) supply a clean energy source for industry and transportation and (2) increase the profitability of variable renewable electricity generators such as wind turbines and solar photovoltaic (PV) farms by providing value for otherwise potentially curtailed electricity. Thus the concept also has the potential to reduce oil dependency by providing a low-carbon fuel for fuel cell electric vehicles (FCEVs), reduce emissions of carbon dioxide and pollutants such as NOx, and support domestic energy production, manufacturing, and U.S. economic competitiveness. The analysis reported here focuses on the potential market size and value proposition for the H2@Scale concept. It involves three analysis phases: 1. an initial phase estimating the technical potential for hydrogen markets and the resources required to meet them; 2. a national-scale analysis of the economic potential for hydrogen and the interactions between the willingness to pay of hydrogen users and the cost to produce hydrogen from various sources; and 3. an in-depth analysis of spatial and economic issues impacting hydrogen production, utilization, and markets. Preliminary analysis of the technical potential indicates that the technical potential for hydrogen use is approximately 60 million metric tons (MMT) annually for light-duty FCEVs, heavy-duty vehicles, ammonia production, oil refining, biofuel hydrotreating, metals refining, and injection into the natural gas system. The technical potentials of utility-scale PV and wind generation are each much greater than that necessary to produce 60 MMT/year of hydrogen. Uranium, natural gas, and coal reserves are each sufficient to produce 60 MMT/year of hydrogen, in addition to their current uses, for decades to centuries.
National estimates are reported of the economic potential of hydrogen production using steam methane reforming of natural gas, high-temperature electrolysis coupled with nuclear power plants, and low-temperature electrolysis. To generate the estimates, supply curves for those technologies are compared with demand curves that describe the market size for hydrogen uses and the willingness to pay for that hydrogen. Scenarios are developed at prices where supply meets demand and are used to estimate energy use, emissions, and economic impacts.
Determination of the mean inner potential of cadmium telluride via electron holography
NASA Astrophysics Data System (ADS)
Cassidy, C.; Dhar, A.; Shintake, T.
2017-04-01
Mean inner potential is a fundamental material parameter in solid state physics and electron microscopy and has been experimentally measured in CdTe, a technologically important semiconductor. As a first step, the inelastic mean free path for electron scattering in CdTe was determined, using electron energy loss spectroscopy, to enable precise thickness mapping of thin CdTe lamellae. The obtained value was λi(CdTe, 300 kV) = 192 ± 10 nm. This value is relatively large, given the high density of the material, and is discussed in the text. Next, electron diffraction and specimen tilting were employed to identify weakly diffracting lattice orientations, to enable the straightforward measurement of the electron phase shift. Finally, electron holography was utilized to quantitatively map the phase shift experienced by electron waves passing through a CdTe crystal, with several different propagation vectors. Utilization of both thickness and phase data allowed computation of mean inner potential as V0 (CdTe) = 14.0 ± 0.9 V, within the range of previous theoretical estimates.
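The holographic measurement described above rests on the kinematic relation φ = C_E·V0·t between the measured phase shift, the mean inner potential, and the local thickness. A minimal sketch of the inversion step, assuming the standard interaction constant C_E at 300 kV; the sample phase and thickness values are illustrative, not the paper's data:

```python
# Hedged sketch: recovering mean inner potential V0 from a holographic phase
# map and an EELS-derived thickness map, assuming phi = C_E * V0 * t.
# C_E is the standard electron interaction constant at 300 kV.
C_E = 6.53e-3  # rad / (V * nm) at 300 kV

def mean_inner_potential(phase_rad, thickness_nm, c_e=C_E):
    """V0 in volts from unwrapped phase (rad) and local thickness (nm)."""
    return phase_rad / (c_e * thickness_nm)

# Illustrative example: a 100 nm thick region showing a 9.14 rad phase shift
v0 = mean_inner_potential(9.14, 100.0)
```

In practice the phase must first be unwrapped and the thickness map registered to the hologram; the ratio above is then taken pixel by pixel and averaged over weakly diffracting regions.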
Sex Estimation From Sternal Measurements Using Multidetector Computed Tomography
Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur
2014-01-01
Abstract We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification, and skeletal surveys are the main methods used in sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was performed on 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age: 44 ± 8.1 years; range: 30–60 years) were included in the study. Manubrium length (ML), mesosternum length (MSL), Sternebra 1 width (S1W), and Sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between sexes were evaluated by Student's t-test. Predictive factors of sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurement values were significantly higher than those of females (P < 0.001), while SI was significantly lower in males (P < 0.001). In discriminant analysis, MSL had a high accuracy rate, 80.2% in females and 80.9% in males, and also the best sensitivity (75.9%) and specificity (87.6%). Accuracy rates were above 80% in all three stepwise discriminant analyses for both sexes. Stepwise 1 (ML, MSL, S1W, S3W) had the highest accuracy rate in stepwise discriminant analysis, 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum can provide important information for sex estimation. PMID:25501090
Alcohol consumption, beverage prices and measurement error.
Young, Douglas J; Bielinska-Kwapisz, Agnieszka
2003-03-01
Alcohol price data collected by the American Chamber of Commerce Researchers Association (ACCRA) have been widely used in studies of alcohol consumption and related behaviors. A number of problems with these data suggest that they contain substantial measurement error, which biases conventional statistical estimators toward a finding of little or no effect of prices on behavior. We test for measurement error, assess the magnitude of the bias, and provide an alternative estimator that is likely to be superior. The study utilizes data on per capita alcohol consumption across U.S. states for the years 1982-1997. State and federal alcohol taxes are used as instrumental variables for prices. Formal tests strongly confirm the hypothesis of measurement error. Instrumental variable estimates of the price elasticity of demand range from -0.53 to -1.24. These estimates are substantially larger in absolute value than ordinary least squares estimates, which sometimes are not significantly different from zero or are even positive. The ACCRA price data are substantially contaminated with measurement error, but using state and federal taxes as instrumental variables mitigates the problem.
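The attenuation-and-instrument logic above can be illustrated in a few lines: when the regressor is measured with error, OLS is biased toward zero, while the instrumental-variables estimator cov(z, y)/cov(z, x) remains consistent. A minimal sketch with synthetic data (not the ACCRA/state panel used in the study), where taxes play the role of the instrument:

```python
# Hedged sketch of the IV idea: price is observed with error, so OLS is
# attenuated toward zero; a tax instrument recovers the true elasticity.
# All data below are synthetic and illustrative.
import random

random.seed(1)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

n = 5000
tax = [random.uniform(0.5, 2.0) for _ in range(n)]           # instrument z
true_price = [1.0 + 2.0 * t + random.gauss(0, 0.2) for t in tax]
log_cons = [3.0 - 0.8 * p + random.gauss(0, 0.3) for p in true_price]
obs_price = [p + random.gauss(0, 0.5) for p in true_price]   # measurement error

beta_ols = cov(obs_price, log_cons) / cov(obs_price, obs_price)
beta_iv = cov(tax, log_cons) / cov(tax, obs_price)
# beta_ols is biased toward zero; beta_iv is consistent for the true -0.8
```

The same mechanism explains the paper's finding that IV elasticity estimates are substantially larger in absolute value than their OLS counterparts.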
Baker, Robert L; Leong, Wen Fung; An, Nan; Brock, Marcus T; Rubin, Matthew J; Welch, Stephen; Weinig, Cynthia
2018-02-01
We develop Bayesian function-valued trait models that mathematically isolate genetic mechanisms underlying leaf growth trajectories by factoring out genotype-specific differences in photosynthesis. Remote sensing data can be used instead of leaf-level physiological measurements. Characterizing the genetic basis of traits that vary during ontogeny and affect plant performance is a major goal in evolutionary biology and agronomy. Describing genetic programs that specifically regulate morphological traits can be complicated by genotypic differences in physiological traits. We describe the growth trajectories of leaves using novel Bayesian function-valued trait (FVT) modeling approaches in Brassica rapa recombinant inbred lines raised in heterogeneous field settings. While frequentist approaches estimate parameter values by treating each experimental replicate discretely, Bayesian models can utilize information in the global dataset, potentially leading to more robust trait estimation. We illustrate this principle by estimating growth asymptotes in the face of missing data and comparing heritabilities of growth trajectory parameters estimated by Bayesian and frequentist approaches. Using pseudo-Bayes factors, we compare the performance of an initial Bayesian logistic growth model and a model that incorporates carbon assimilation (Amax) as a cofactor, thus statistically accounting for genotypic differences in carbon resources. We further evaluate two remotely sensed spectroradiometric indices, photochemical reflectance (pri2) and the MERIS Terrestrial Chlorophyll Index (mtci), as covariates in lieu of Amax, because these two indices were genetically correlated with Amax across years and treatments yet allow much higher throughput compared to direct leaf-level gas-exchange measurements. For leaf lengths in uncrowded settings, including Amax improves model fit over the initial model. The mtci and pri2 indices also outperform direct Amax measurements.
Of particular importance for evolutionary biologists and plant breeders, hierarchical Bayesian models estimating FVT parameters improve heritabilities compared to frequentist approaches.
Rain attenuation studies from radiometric and rain DSD measurements at two tropical locations
NASA Astrophysics Data System (ADS)
Halder, Tuhina; Adhikari, Arpita; Maitra, Animesh
2018-05-01
Efficient use of satellite communication in tropical regions demands proper characterization of rain attenuation, particularly given that the available popular propagation models are mostly based on temperate climatic data. Rain attenuation at frequencies of 22.234, 23.834, and 31.4/30 GHz over two tropical locations, Kolkata (22.57°N, 88.36°E, India) and Belem (1.45°S, 48.49°W, Brazil), has therefore been estimated for the years 2010 and 2011, respectively. The estimation utilizes ground-based disdrometer observations and radiometric measurements over an Earth-space path. The results show that rain attenuation estimates from radiometric data are reliable only at low rain rates (<30 mm/h). However, rain attenuation estimates from disdrometer measurements show good agreement with the ITU-R model even at high rain rates (up to 100 mm/h). Despite significant variability in drop size distribution (DSD), the attenuation values calculated from DSD data (disdrometer measurements) at Kolkata and Belem differ little for rain rates below 30 mm/h. However, the attenuation values obtained from radiometric measurements at the two places show significant deviations, ranging from 0.54 dB to 3.2 dB up to a rain rate of 30 mm/h, on account of the different rain heights, mean atmospheric temperatures and climatology of the two locations.
Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu
2015-04-01
An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that rely on manual sorting and mechanical automatic grading. To design an optimal algorithm, regression formulas and R² values were investigated by performing regression analyses of total length, body width, thickness, view area, and actual volume against abalone weight. The R² value between actual volume and abalone weight was 0.999, showing a very high correlation. Consequently, to estimate the actual volumes of abalones easily from computer vision, volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula relating calculated to actual volumes was derived through linear regression analysis. The final automatic abalone grading algorithm combines this volume-estimation regression formula with the regression formula relating actual volume to abalone weight. For abalones weighing from 16.51 to 128.01 g, cross-validation of the algorithm indicates root mean square and worst-case prediction errors of 2.8 g and ±8 g, respectively. © 2015 Institute of Food Technologists®
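The geometric core of the abalone algorithm above is the half-oblate-ellipsoid volume computed from 2D image measurements, which then feeds a linear volume-to-weight regression. A minimal sketch under one plausible parameterization (semi-axes of length/2 and width/2, with thickness as the vertical semi-axis); the regression coefficients are illustrative placeholders, not the paper's fitted values:

```python
# Hedged sketch of the volume step: abalone modeled as half of an oblate
# ellipsoid. Coefficients in estimated_weight are illustrative, not the
# paper's regression.
import math

def half_ellipsoid_volume(length, width, thickness):
    """Approximate volume (cm^3) from 2D-image measurements (cm)."""
    return 0.5 * (4.0 / 3.0) * math.pi * (length / 2) * (width / 2) * thickness

def estimated_weight(volume, slope=1.05, intercept=0.0):
    # slope/intercept stand in for the fitted volume-weight regression
    return slope * volume + intercept
```

Grading then reduces to thresholding the estimated weight into size classes.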
Wolowacz, Sorrel E; Briggs, Andrew; Belozeroff, Vasily; Clarke, Philip; Doward, Lynda; Goeree, Ron; Lloyd, Andrew; Norman, Richard
Cost-utility models are increasingly used in many countries to establish whether the cost of a new intervention can be justified in terms of health benefits. Health-state utility (HSU) estimates (the preference for a given state of health on a cardinal scale where 0 represents dead and 1 represents full health) are typically among the most important and uncertain data inputs in cost-utility models. Clinical trials represent an important opportunity for the collection of health-utility data. However, trials designed primarily to evaluate efficacy and safety often present challenges to the optimal collection of HSU estimates for economic models. Careful planning is needed to determine which of the HSU estimates may be measured in planned trials; to establish the optimal methodology; and to plan any additional studies needed. This report aimed to provide a framework for researchers to plan the collection of health-utility data in clinical studies to provide high-quality HSU estimates for economic modeling. Recommendations are made for early planning of health-utility data collection within a research and development program; design of health-utility data collection during protocol development for a planned clinical trial; design of prospective and cross-sectional observational studies and alternative study types; and statistical analyses and reporting. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Warren, Daniel; Andres, Tate; Hoelscher, Christian; Ricart-Hoffiz, Pedro; Bendo, John; Goldstein, Jeffrey
2013-01-01
Background Patients with cervical disc herniations resulting in radiculopathy or myelopathy from single-level disease have traditionally been treated with Anterior Cervical Discectomy and Fusion (ACDF), but Cervical Disc Arthroplasty (CDA) is a newer alternative. Experts have suggested that reduced adjacent-segment degeneration may prove to be an advantage of CDA. A cost-utility analysis of these procedures with long-term follow-up has not been previously reported. Methods We reviewed single-institution prospective data from a randomized trial comparing single-level ACDF and CDA in cervical disc disease. Medicare reimbursement schedules and actual hospital cost data for peri-operative care were separately reviewed and analyzed to estimate the cost of treatment of each patient. QALYs were calculated at 1 and 2 years based on NDI and SF-36 outcome scores, and incremental cost-effectiveness ratio (ICER) analysis was performed to determine relative cost-effectiveness. Results Patients in both groups showed improvement in NDI and SF-36 outcome scores. Medicare reimbursement rates to the hospital were $11,747 and $10,015 for ACDF and CDA, respectively; these figures rose to $16,162 and $13,171 when physician and anesthesiologist reimbursement was included. The estimated actual cost to the hospital of ACDF averaged $16,108, while CDA averaged $16,004 (p = 0.97); when estimated physicians' fees were included, total hospital costs came to $19,811 and $18,440, respectively. The cost/QALY analyses therefore varied widely with these discrepancies in cost values. The ICERs of ACDF vs CDA with Medicare reimbursements were $18,593 (NDI) and $19,940 (SF-36), while ICERs based on actual total hospital cost were $13,710 (NDI) and $9,140 (SF-36). Conclusions We confirm the efficacy of ACDF and CDA in the treatment of cervical disc disease, as our results suggest similar clinical outcomes at one- and two-year follow-up.
The ICER suggests that the non-significant added benefit of ACDF comes at a reasonable cost, whether actual hospital costs or Medicare reimbursement values are used, though the ICER values themselves vary widely depending on the costing modality used in the cost-utility analysis. Long-term follow-up may show a different profile for CDA owing to reduced cost and greater long-term utility scores. It is crucial to note that the financial model chosen plays an important role in how economic treatment dominance is portrayed. PMID:25694905
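The ICER quoted throughout the study above is simply the cost difference divided by the QALY difference between the two arms. A minimal sketch, using the reported Medicare totals but a hypothetical 0.15-QALY advantage for ACDF (the per-arm QALYs are not restated in this abstract):

```python
# Hedged sketch of the incremental cost-effectiveness ratio:
# ICER = (cost_A - cost_B) / (QALY_A - QALY_B).
# The QALY values below are hypothetical, chosen only for illustration.
def icer(cost_a, cost_b, qaly_a, qaly_b):
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Reported Medicare totals: ACDF $16,162 vs CDA $13,171
example = icer(16162, 13171, 1.40, 1.25)
```

A smaller QALY gap inflates the ratio, which is why the study's ICERs move so much between the NDI- and SF-36-derived utilities.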
Brown, Elizabeth R.; Smith, Jessi L.; Thoman, Dustin B.; Allen, Jill M.; Muragishi, Gregg
2015-01-01
Motivating students to pursue science careers is a top priority among many science educators. We add to the growing literature by examining the impact of a utility value intervention to enhance student’s perceptions that biomedical science affords important utility work values. Using an expectancy-value perspective we identify and test two types of utility value: communal (other-oriented) and agentic (self-oriented). The culture of science is replete with examples emphasizing high levels of agentic value, but communal values are often (stereotyped as) absent from science. However, people in general want an occupation that has communal utility. We predicted and found that an intervention emphasizing the communal utility value of biomedical research increased students’ motivation for biomedical science (Studies 1–3). We refined whether different types of communal utility value (working with, helping, and forming relationships with others) might be more or less important, demonstrating that helping others was an especially important predictor of student motivation (Study 2). Adding agentic utility value to biomedical research did not further increase student motivation (Study 3). Furthermore, the communal value intervention indirectly impacted students’ motivation because students believed that biomedical research was communal and thus subsequently more important (Studies 1–3). This is key, because enhancing student communal value beliefs about biomedical research (Studies 1–3) and science (Study 4) was associated both with momentary increases in motivation in experimental settings (Studies 1–3) and increased motivation over time among students highly identified with biomedicine (Study 4). We discuss recommendations for science educators, practitioners, and faculty mentors who want to broaden participation in science. PMID:26617417
NASA Astrophysics Data System (ADS)
Kurihara, Ryuji; Furue, Hirokazu; Takahashi, Taiju; Yamashita, Tomo-o; Xu, Jun; Kobayashi, Shunsuke
2001-07-01
A photoalignment technique has been utilized for fabricating zigzag-defect-free ferroelectric liquid crystal displays (FLCDs) using polyimide RN-1199, -1286, -1266 (Nissan Chem. Ind.) and adopting oblique irradiation of unpolarized UV light. A rubbing technique was also utilized for comparison. It is shown that among these polyimide materials, RN-1199 is the best for fabricating defect-free cells with C-1 uniform states, but RN-1286 requires lower energy to produce a photoaligned FLC phase. We have conducted an analytical investigation to clarify the conditions for obtaining zigzag-defect-free C-1 states, and it is theoretically shown that a zigzag-defect-free C-1 state is obtained using a low azimuthal anchoring energy at a low pretilt angle, while a zigzag-defect-free C-2 state is obtained by increasing the azimuthal anchoring energy above a critical value, also at a low pretilt angle. The estimated critical value of the azimuthal anchoring energy at which a transition from the C-1 state to the C-2 state occurs is 3×10⁻⁶ J/m² for the FLC material FELIX M4654/100 (Clariant) used in this research; this value is shown to fall in a favorable range, as measured in an independent experiment.
Exercise reduces depressive symptoms in adults with arthritis: Evidential value.
Kelley, George A; Kelley, Kristi S
2016-07-12
To determine whether evidential value exists that exercise reduces depression in adults with arthritis and other rheumatic conditions. Utilizing data derived from a prior meta-analysis of 29 randomized controlled trials comprising 2449 participants (1470 exercise, 979 control) with fibromyalgia, osteoarthritis, rheumatoid arthritis or systemic lupus erythematosus, a new method, P-curve, was utilized to assess evidentiary worth and to rule out selective reporting of statistically significant results regarding exercise and depression in adults with arthritis and other rheumatic conditions. Using the method of Stouffer, Z-scores were calculated to examine selective-reporting bias. An alpha (P) value < 0.05 was deemed statistically significant. In addition, the average power of the tests included in P-curve, adjusted for publication bias, was calculated. Fifteen of 29 studies (51.7%) reported statistically significant exercise and depression results (P < 0.05), while none of the results were statistically significant with respect to exercise increasing depression in adults with arthritis and other rheumatic conditions. Right-skew, dismissing selective reporting, was identified (Z = -5.28, P < 0.0001). In addition, the included studies did not lack evidential value (Z = 2.39, P = 0.99), nor did they lack evidential value and show P-hacking (Z = 5.28, P > 0.99). The relative frequencies of P-values were 66.7% at 0.01, 6.7% each at 0.02 and 0.03, 13.3% at 0.04 and 6.7% at 0.05. The average power of the tests included in P-curve, corrected for publication bias, was 69%. Diagnostic plot results revealed that the observed power estimate was a better fit than the alternatives. Evidential value results provide additional support that exercise reduces depression in adults with arthritis and other rheumatic conditions.
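The Stouffer combination used above converts each study's one-sided p-value to a z-score and pools them as Z = Σzᵢ/√k. A minimal sketch with illustrative p-values (not the 15 significant results analyzed in the paper):

```python
# Hedged sketch of Stouffer's method for combining p-values.
# The p-values below are illustrative only.
import math
from statistics import NormalDist

def stouffer_z(p_values):
    """Combine one-sided p-values into a single Z-score."""
    nd = NormalDist()
    zs = [nd.inv_cdf(1 - p) for p in p_values]
    return sum(zs) / math.sqrt(len(zs))

combined = stouffer_z([0.01, 0.02, 0.03, 0.04])
p_combined = 1 - NormalDist().cdf(combined)  # combined one-sided p-value
```

P-curve analyses apply this kind of pooling to the distribution of significant p-values to distinguish genuine evidential value from selective reporting.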
State energy data report: Statistical tables and technical documentation 1960 through 1979
NASA Astrophysics Data System (ADS)
1981-09-01
All data of the State Energy Data System (SEDS) are given. The data are used to estimate annual energy consumption by principal energy source (coal, natural gas, petroleum, electricity), by major end-use sector (residential, commercial, industrial, transportation, and electric utilities), and by state (50 states, the District of Columbia, and the United States). Data are organized alphabetically by energy source (fuel), by end-use sector or energy activity, by type of data, and by state. Twenty data values are associated with each fuel-sector-type-state grouping, representing positionally the years 1960 through 1979. Data values in the file are expressed as physical units, British thermal units, physical-to-Btu conversion factors, or share factors.
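The positional layout described above (twenty values per record, one per year from 1960 through 1979) maps naturally to a fixed-width array indexed by year offset. A minimal sketch; the record key fields are illustrative, not the file's actual field codes:

```python
# Hedged sketch of the SEDS record layout: 20 positional annual values
# per (fuel, sector, data-type, state) grouping, covering 1960-1979.
# Key names below are illustrative.
FIRST_YEAR, LAST_YEAR = 1960, 1979

def year_index(year):
    """Map a year to its position in the 20-value record."""
    if not FIRST_YEAR <= year <= LAST_YEAR:
        raise ValueError("year outside the SEDS 1960-1979 range")
    return year - FIRST_YEAR

record = {("coal", "industrial", "Btu", "MT"): [0.0] * 20}
record[("coal", "industrial", "Btu", "MT")][year_index(1975)] = 123.4
```

The same offset arithmetic works whether the values are physical units, Btu, conversion factors, or share factors, since the file stores all four types in the same positional shape.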
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, E.; Alleman, T. L.; McCormick, R. L.
Total acid value titration has long been used to estimate the corrosive potential of petroleum crude oil and fuel oil products. The method commonly used for this measurement, ASTM D664, utilizes KOH in isopropanol as the titrant, with potentiometric end-point determination by a pH-sensing electrode and an Ag/AgCl reference electrode with LiCl electrolyte. A natural application of the D664 method is titration of pyrolysis-derived bio-oil, which is a candidate for refinery upgrading to produce drop-in fuels. Determining the total acid value of pyrolysis-derived bio-oil has proven challenging and not necessarily amenable to the methodology employed for petroleum products due to the different nature of the acids present. We presented an acid value titration for bio-oil products in our previous publication, which also utilizes potentiometry but with tetrabutylammonium hydroxide in place of KOH as the titrant and tetraethylammonium bromide in place of LiCl as the reference electrolyte to improve the detection of these types of acids. This method was shown to detect numerous end points in samples of bio-oil that were not detected by D664. These end points were attributed to carboxylic acids and phenolics based on the results of HPLC and GC-MS studies. Additional work has led to refinement of the method, and it has been established that both carboxylic acids and phenolics can be determined accurately. Use of pH buffer calibration to determine the half-neutralization potentials of acids, in conjunction with the analysis of model compounds, has allowed us to conclude that this titration method is suitable for determining the total acid value of pyrolysis oil and can be used to differentiate and quantify weak acid species. The measurement of phenolics in bio-oil is subject to a relatively high limit of detection, which may limit the utility of titrimetric methodology for characterizing the acidic potential of pyrolysis oil and products.
A photographic utilization guide for key riparian graminoids
John W. Kinney; Warren P. Clary
1994-01-01
Photographic guides are presented to help estimate grazing utilization of important riparian grasses and grasslike plants. Graphs showing the percent of a plant's weight that has been consumed based on the percent of its height left after grazing allow utilization estimates to be refined further.
Utilization of structural steel in buildings
Moynihan, Muiris C.; Allwood, Julian M.
2014-01-01
Over one-quarter of the steel produced annually is used in the construction of buildings. Making this steel causes carbon dioxide emissions, which climate change experts recommend be halved over the next 37 years. One option for achieving this is to design and build more efficiently, still delivering the same service from buildings but using less steel to do so. To estimate how much steel could be saved this way, 23 steel-framed building designs are studied, sourced from leading UK engineering firms. The utilization of each beam is found and the buildings are analysed for patterns. The results for over 10 000 beams show that average utilization is below 50% of capacity. The primary reason for this low value is 'rationalization': providing extra material to reduce labour costs. By designing for minimum material rather than minimum cost, steel use in buildings could be drastically reduced, leading to an equivalent reduction in 'embodied' carbon emissions. PMID:25104911
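The utilization metric underlying the study above is the ratio of a beam's design load effect to its capacity, averaged over all beams in a building. A minimal sketch with illustrative values, not data from the 23 building designs studied:

```python
# Hedged sketch of the beam-utilization metric: demand divided by capacity,
# averaged over a building's beams. Values are illustrative only.
def utilization(load_effect, capacity):
    return load_effect / capacity

# (design moment demand, moment capacity) pairs, e.g. in kN*m
beams = [(120.0, 300.0), (80.0, 200.0), (150.0, 250.0)]
avg_util = sum(utilization(d, c) for d, c in beams) / len(beams)
```

An average well below 1.0 indicates spare capacity: material that could, in principle, be designed out if minimizing steel rather than labour cost were the objective.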
Optimal Sizing of a Solar-Plus-Storage System for Utility Bill Savings and Resiliency Benefits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpkins, Travis; Anderson, Kate; Cutler, Dylan
Solar-plus-storage systems can achieve significant utility savings in behind-the-meter deployments in buildings, campuses, or industrial sites. Common applications include demand charge reduction, energy arbitrage, time-shifting of excess photovoltaic (PV) production, and selling ancillary services to the utility grid. These systems can also offer some energy resiliency during grid outages. It is often difficult to quantify the amount of resiliency that these systems can provide, however, and this benefit is often undervalued or omitted during the design process. We propose a method for estimating the resiliency that a solar-plus-storage system can provide at a given location. We then present an optimization model that can optimally size the system components to minimize the lifecycle cost of electricity to the site, including the costs incurred during grid outages. The results show that including the value of resiliency during the feasibility stage can result in larger systems and increased resiliency.
Ranking and averaging independent component analysis by reproducibility (RAICAR).
Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping
2008-06-01
Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.
Relations among passive electrical properties of lumbar alpha-motoneurones of the cat.
Gustafsson, B; Pinter, M J
1984-01-01
The relations among passive membrane properties have been examined in cat motoneurones utilizing exclusively electrophysiological techniques. A significant relation was found to exist between the input resistance and the membrane time constant. The estimated electrotonic length showed no evident tendency to vary with input resistance but did show a tendency to decrease with increasing time constant. Detailed analysis of this trend suggests, however, that a variation in dendritic geometry is likely to exist among cat motoneurones, such that the dendritic trees of motoneurones projecting to fast-twitch muscle units are relatively more expansive than those of motoneurones projecting to slow-twitch units. Utilizing an expression derived from the Rall neurone model, the total capacitance of the equivalent cylinder corresponding to a motoneurone has been estimated. With the assumption of a constant and uniform specific capacitance of 1 μF/cm², the resulting values have been used as estimates of cell surface area. These estimates agree well with morphologically obtained measurements from cat motoneurones reported by others. Both membrane time constant (and thus likely specific membrane resistivity) and electrotonic length showed little tendency to vary with surface area. However, after-hyperpolarization (a.h.p.) duration showed some tendency to vary such that cells with brief a.h.p. duration were, on average, larger than those with longer a.h.p. durations. Apart from motoneurones with the lowest values, axonal conduction velocity was only weakly related to variations in estimated surface area. Input resistance and membrane time constant were found to vary systematically with the a.h.p. duration. Analysis suggested that the major part of the increase in input resistance with a.h.p. duration was related to an increase in membrane resistivity and a variation in dendritic geometry rather than to differences in surface area among the motoneurones.
The possible effects of imperfect electrode seals have been considered. According to an analysis of a passive membrane model, soma leaks caused by impalement injury will result in underestimates of input resistance and time constant and over-estimates of electrotonic length and total capacitance. Assuming a non-injured resting potential of -80 mV, a comparison of membrane potentials predicted by various relative leaks (leak conductance/input conductance) with those actually observed suggests that the magnitude of these errors in the present material will not unduly affect the presented results. PMID:6520792
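The Rall-model expression referred to above can be made explicit. The following is a standard reconstruction for an equivalent cylinder of electrotonic length L with input resistance R_N and membrane time constant tau_m (assuming a sealed-end cylinder with uniform membrane), not necessarily the authors' exact formula:

$$ C_{\text{total}} = \frac{\tau_m}{R_N}\cdot\frac{L}{\tanh L}, \qquad A \approx \frac{C_{\text{total}}}{C_m}, \qquad C_m = 1\ \mu\text{F/cm}^2 $$

Dividing the estimated total capacitance by the assumed specific membrane capacitance C_m then yields the surface-area estimate A compared with morphological measurements.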
Saving lives and saving money: hospital-based violence intervention is cost-effective.
Juillard, Catherine; Smith, Randi; Anaya, Nancy; Garcia, Arturo; Kahn, James G; Dicker, Rochelle A
2015-02-01
Victims of violence are at significant risk for injury recidivism, including fatality. We previously demonstrated that our hospital-based violence intervention program (VIP) resulted in a fourfold reduction in injury recidivism, avoiding trauma care costs of $41,000 per injury. Given limited trauma center resources, assessing cost-effectiveness of interventions is fundamental to inform use of these programs in other institutions. This study examines the cost-effectiveness of hospital-based VIP. We used a decision tree and Markov disease state modeling to analyze cost utility for a hypothetical cohort of violently injured subjects, comparing VIP versus no VIP at a trauma center. Quality-adjusted life-years (QALYs) were calculated using differences in mortality and published health state utilities. Costs of trauma care and VIP were obtained from institutional data, and risks of recidivism with and without VIP were obtained from our trial. Outcomes were QALYs gained and net costs over a 5-year horizon. Sensitivity analyses examined the impact of uncertainty in input values on results. VIP results in an estimated 25.58 QALYs and net costs (program plus trauma care) of $5,892 per patient. Without VIP, these values are 25.34 and $5,923, respectively, suggesting that VIP yields substantial health benefits (24 QALYs) and savings ($4,100) if implemented for 100 individuals. In the sensitivity analysis, net QALYs gained with VIP nearly triple when the injury recidivism rate without VIP is highest. Cost-effectiveness remained robust over a range of values; $6,000 net cost savings occur when the 5-year recidivism rate without VIP is at 7%. VIP costs less than having no VIP, with significant gains in QALYs, especially at the anticipated program scale. Across a range of plausible values at which VIP would be less cost-effective (lower injury recidivism, cost of injury, and program effectiveness), VIP still results in acceptable cost per health outcome gained.
VIP is effective and cost-effective and should be considered in any trauma center that takes care of violently injured patients. Our analyses can be used to estimate VIP costs and results in different settings. Economic and value-based evaluation, level 2.
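The headline comparison reduces to simple cohort arithmetic. The sketch below uses the per-patient point estimates quoted in the abstract (25.58 vs. 25.34 QALYs; $5,892 vs. $5,923), not the authors' full Markov model; note the rounded per-patient costs give roughly $3,100 in savings per 100 patients, slightly below the $4,100 derived from the unrounded model inputs.

```python
# Cohort-scale comparison of VIP vs. no VIP using the per-patient point
# estimates quoted in the abstract (not the full Markov model).
VIP    = {"qalys": 25.58, "cost": 5892.0}   # per patient, 5-year horizon
NO_VIP = {"qalys": 25.34, "cost": 5923.0}

d_qalys = VIP["qalys"] - NO_VIP["qalys"]    # incremental QALYs per patient
d_cost  = VIP["cost"]  - NO_VIP["cost"]     # incremental cost per patient

cohort = 100
qalys_gained = d_qalys * cohort             # ~24 QALYs per 100 patients
net_savings  = -d_cost * cohort             # ~$3,100 from rounded inputs
print(qalys_gained, net_savings)
```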
Value-based contracting innovated Medicare advantage healthcare delivery and improved survival.
Mandal, Aloke K; Tagomori, Gene K; Felix, Randell V; Howell, Scott C
2017-02-01
In Medicare Advantage (MA) with its CMS Hierarchical Condition Categories (CMS-HCC) payment model, CMS reimburses private plans (Medicare Advantage Organizations [MAOs]) with prospective, monthly, health-based or risk-adjusted, capitated payments. The effect of this payment methodology on healthcare delivery remains debatable. We studied how value-based contracting generates cost efficiencies and improves clinical outcomes in MA. A difference in contracting arrangements between an MAO and 2 provider groups facilitated an intervention-control, preintervention-postintervention, difference-in-differences approach among statistically similar, elderly, community-dwelling MA enrollees within one metropolitan statistical area. Starting in 2009, for intervention-group MA enrollees, the MAO and a provider group agreed to full-risk capitation combined with a revenue gainshare. The gainshare was based on increases in the Risk Adjustment Factor (RAF), which modified the CMS-HCC payments. For the control group, the MAO continued to reimburse another provider group through fee-for-service. RAF, utilization, and survival were followed until December 31, 2012. The intervention group's mean RAF increased significantly (P <.001), corresponding to an estimated $2,519,544 of additional revenue per 1000 members. The intervention increased office-based visits (P <.001). Emergency department visits (P <.001) and inpatient hospital admissions (P = .002) decreased. This change in utilization saved $2,071,293 per 1000 enrollees. By intensifying office-based care for these MA enrollees with multiple comorbidities, a 6% survival benefit with a 32.8% lower hazard of death (P <.001) was achieved. Value-based contracting can drive utilization patterns and improve clinical outcomes among chronically ill, elderly MA members.
Frederix, Gerardus W J; Quadri, Nuz; Hövels, Anke M; van de Wetering, Fleur T; Tamminga, Hans; Schellens, Jan H M; Lloyd, Andrew J
2013-04-01
This study aimed to estimate utility values in laypeople and productivity loss for women with breast cancer in Sweden and the Netherlands. To capture utilities, validated health state vignettes were used, which were translated into Dutch and Swedish. They described progressive disease, stable disease, and 7 grade 3/4 adverse events. One hundred members of the general public in each country rated the states using the visual analog scale and time trade-off method. To assess productivity, women who had recently completed or were currently receiving treatment for early or advanced breast cancer (the Netherlands, n = 161; Sweden, n = 52) completed the Work Productivity and Activity Impairment-General Health (WPAI-GH) questionnaire. Data were analyzed using means (SD). The utility study showed that the Swedish sample rated progressive and stable disease (mean, 0.61 [0.07] and 0.81 [0.05], respectively) higher than did the Dutch sample (0.49 [0.06] and 0.69 [0.05]). The health states incorporating the toxicities in both countries produced similar mean scores. Results of the WPAI-GH showed that those currently receiving treatment reported productivity reductions of 69% (the Netherlands) and 72% (Sweden); those who had recently completed therapy reported reductions of 41% (the Netherlands) and 40% (Sweden). The differences in the utility scores between the 2 countries underline the importance of capturing country-specific values. The significant impact of adverse events on health-related quality of life was also highlighted. The WPAI-GH results demonstrated how the negative impact of breast cancer on productivity persists after women have completed their treatment. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.
Sheingold, Steven; Nguyen, Nguyen Xuan
2014-01-01
This study estimates the effects of generic competition, increased cost-sharing, and benefit practices on utilization and spending for prescription drugs. We examined changes in Medicare price and utilization from 2007 to 2009 of all drugs in 28 therapeutic classes. The classes accounted for 80% of Medicare Part D spending in 2009 and included the 6 protected classes and 6 classes with practically no generic competition. All variables were constructed to measure each drug relative to its class at a specific plan sponsor. We estimated that the shift toward generic utilization had cut in half the rate of increase in the price of a prescription during 2007-2009. Specifically, the results showed that (1) rapid generic penetration had significantly held down costs per prescription, (2) copayment and other benefit practices shifted utilization to generics and favored brands, and (3) price increases were generally greater in less competitive classes of drugs. In many ways, Part D was implemented at a fortuitous time; since 2006, there have been relatively few new blockbuster drugs introduced, and many existing high-volume drugs used by beneficiaries were in therapeutic classes with multiple brands and generic alternatives. Under these conditions, our paper showed that plan sponsors have been able to contain costs by encouraging use of generics or drugs offering greater value within therapeutic classes. It is less clear what will happen to future Part D costs if a number of new and effective drugs for beneficiaries enter the market with no real competitors.
Pataky, Reka; Gulati, Roman; Etzioni, Ruth; Black, Peter; Chi, Kim N.; Coldman, Andrew J.; Pickles, Tom; Tyldesley, Scott; Peacock, Stuart
2015-01-01
Prostate-specific antigen (PSA) screening for prostate cancer may reduce mortality, but it incurs considerable risk of overdiagnosis and potential harm to quality of life. Our objective was to evaluate the cost-effectiveness of PSA screening, with and without adjustment for quality of life, for the British Columbia (BC) population. We adapted an existing natural history model using BC incidence, treatment, cost and mortality patterns. The modeled mortality benefit of screening derives from a stage-shift mechanism, assuming mortality reduction consistent with the European Study of Randomized Screening for Prostate Cancer. The model projected outcomes for 40 year-old men under 14 combinations of screening ages and frequencies. Cost and utility estimates were explored with deterministic sensitivity analysis. The incremental cost-effectiveness of regular screening ranged from $36,300/LYG, for screening every four years from ages 55-69, to $588,300/LYG, for screening every two years from ages 40-74. The marginal benefits of increasing screening frequency to two years or starting screening at age 40 were small and came at significant cost. After utility adjustment, all screening strategies resulted in a loss of QALYs; however, this result was very sensitive to utility estimates. Plausible outcomes under a range of screening strategies inform discussion of prostate cancer screening policy in BC and similar jurisdictions. Screening may be cost-effective but the sensitivity of results to utility values suggests individual preferences for quality versus quantity of life should be a key consideration. PMID:24443367
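The cost-per-life-year figures above come from a standard incremental calculation. A minimal sketch follows, with invented inputs chosen only to reproduce the $36,300/LYG order of magnitude, not the BC model's actual values:

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra unit
# of effect (here, dollars per life-year gained).
def icer(cost_new, effect_new, cost_old, effect_old):
    d_cost = cost_new - cost_old
    d_effect = effect_new - effect_old
    if d_effect <= 0:
        raise ValueError("new strategy is dominated or no more effective")
    return d_cost / d_effect

# Hypothetical example: screening adds 0.02 life-years at $726 extra cost.
print(icer(cost_new=1726, effect_new=10.02, cost_old=1000, effect_old=10.00))
```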
Wang, Xuetao; Salters, Kate A.; Zhang, Wen; McCandless, Lawrence; Money, Deborah; Pick, Neora; Montaner, Julio S. G.; Hogg, Robert S.; Kaida, Angela
2012-01-01
Background. HIV-infected women are disproportionately burdened by gynaecological complications, psychological disorders, and certain sexually transmitted infections that may not be adequately addressed by HIV-specific care. We estimate the prevalence and covariates of women's health care (WHC) utilization among harder-to-reach, treatment-experienced HIV-infected women in British Columbia (BC), Canada. Methods. We used survey data from 231 HIV-infected, treatment-experienced women enrolled in the Longitudinal Investigations into Supportive and Ancillary Health Services (LISA) study, which recruited harder-to-reach populations, including aboriginal people and individuals using injection drugs. Independent covariates of interest included sociodemographic, psychosocial, behavioural, individual health status, structural factors, and HIV clinical variables. Logistic regression was used to generate adjusted estimates of associations between use of WHC and covariates of interest. Results. Overall, 77% of women reported regularly utilizing WHC. WHC utilization varied significantly by region of residence (P value <0.01). In addition, women with lower annual income (AOR (95% CI) = 0.14 (0.04–0.54)), who used illicit drugs (AOR (95% CI) = 0.42 (0.19–0.92)) and who had lower provider trust (AOR (95% CI) = 0.97 (0.95–0.99)), were significantly less likely to report using WHC. Conclusion. A health service gap exists along geographical and social axes for harder-to-reach HIV-infected women in BC. Women-centered WHC and HIV-specific care should be streamlined and integrated to better address women's holistic health. PMID:23227316
A systematic literature review of health state utility values in head and neck cancer.
Meregaglia, Michela; Cairns, John
2017-09-02
Health state utility values (HSUVs) are essential parameters in model-based economic evaluations. This study systematically identifies HSUVs in head and neck cancer and provides guidance for selecting them from a growing body of health-related quality of life studies. We systematically reviewed the published literature by searching PubMed, EMBASE and The Cochrane Library using a pre-defined combination of keywords. The Tufts Cost-Effectiveness Analysis Registry and the School of Health and Related Research Health Utilities Database (ScHARRHUD) specifically containing health utilities were also queried, in addition to the Health Economics Research Centre database of mapping studies. Studies were considered for inclusion if they reported original HSUVs assessed using established techniques. The characteristics of each study, including country, design, sample size, cancer subsite addressed and demographics of responders, were summarized narratively using a data extraction form. Quality scoring and critical appraisal of the included studies were performed based on published recommendations. Of a total of 1048 records identified by the search, 28 studies qualified for data extraction and 346 unique HSUVs were retrieved from them. HSUVs were estimated using direct methods (e.g. standard gamble; n = 10 studies), multi-attribute utility instruments (MAUIs; n = 13) and mapping techniques (n = 3); two studies adopted both direct and indirect approaches. Within the MAUIs, the EuroQol 5-dimension questionnaire (EQ-5D) was the most frequently used (n = 11), followed by the Health Utility Index Mark 3 (HUI3; n = 2), the 15D (n = 2) and the Short Form-Six Dimension (SF-6D; n = 1). Different methods and types of responders (i.e. patients, healthy subjects, clinical experts) influenced the magnitude of HSUVs for comparable health states. Only one mapping study developed an original algorithm using head and neck cancer data. The identified studies were considered of intermediate quality.
This review provides a dataset of HSUVs systematically retrieved from published studies in head and neck cancer. There is currently a lack of research for some disease phases including recurrent and metastatic cancer, and treatment-related complications. In selecting HSUVs for cost-effectiveness modeling purposes, preference should be given to EQ-5D utility values; however, mapping to EQ-5D is a potentially valuable technique that should be further developed in this cancer population.
Stochastic Dominance and Analysis of ODI Batting Performance: the Indian Cricket Team, 1989-2005.
Damodaran, Uday
2006-01-01
Relative to other team games, the contribution of individual team members to the overall team performance is more easily quantifiable in cricket. Viewing players as securities and the team as a portfolio, cricket thus lends itself better to the use of analytical methods usually employed in the analysis of securities and portfolios. This paper demonstrates the use of stochastic dominance rules, normally used in investment management, to analyze the One Day International (ODI) batting performance of Indian cricketers. The data used span the years 1989 to 2005. In cricketing data, the existence of 'not out' scores poses a problem during processing. In this paper, using a Bayesian approach, the 'not-out' scores are first replaced with a conditional average. The conditional average that is used represents an estimate of the score that the player would have gone on to score, if the 'not out' innings had been completed. The data thus treated are then used in the stochastic dominance analysis. To use stochastic dominance rules we need to characterize the 'utility' of a batsman. The first derivative of the utility function, with respect to runs scored, of an ODI batsman can safely be assumed to be positive (more runs scored are preferred to less). However, the second derivative need not be negative (no diminishing marginal utility for runs scored). This means that we cannot clearly specify whether the value attached to an additional run scored is smaller at higher scores. Because of this, only first-order stochastic dominance is used to analyze the performance of the players under consideration. While this has its limitations (specifically, we cannot arrive at a complete utility value for each batsman), the approach does well in describing player performance. Moreover, the results have intuitive appeal.
Key Points:
- The problem of dealing with 'not out' scores in cricket is tackled using a Bayesian approach.
- Stochastic dominance rules are used to characterize the utility of a batsman.
- Since the marginal utility of runs scored is not diminishing in nature, only first-order stochastic dominance rules are used.
- The results, demonstrated using data for the Indian cricket team, are intuitively appealing.
- The limitation of the approach is that it cannot arrive at a complete utility value for the batsman.
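Under the assumptions above (utility increasing in runs, no sign restriction on the second derivative), first-order dominance is checked by comparing empirical CDFs. A minimal sketch with invented innings scores, not the actual 1989-2005 data:

```python
# First-order stochastic dominance (FSD): batsman A dominates B if A's
# empirical CDF lies at or below B's everywhere (A is at least as likely
# to exceed any given score), with strict inequality somewhere.
from bisect import bisect_right

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    s = sorted(sample)
    return bisect_right(s, x) / len(s)

def fsd(a, b):
    """True if innings scores `a` first-order dominate `b`."""
    grid = sorted(set(a) | set(b))
    return all(ecdf(a, x) <= ecdf(b, x) for x in grid) and \
           any(ecdf(a, x) < ecdf(b, x) for x in grid)

batsman_a = [12, 45, 67, 88, 102, 34, 56]   # hypothetical treated scores
batsman_b = [5, 20, 33, 41, 60, 15, 28]
print(fsd(batsman_a, batsman_b))            # True: A dominates B here
```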
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartsock, J.H.; Gruy, H.J.
Fair market value has its origin in law and is defined as that price that a willing buyer will pay and a willing seller will sell at some point in time, with neither the buyer nor the seller under any compulsion to buy or sell, and both having equal and reasonable knowledge of the facts. In reality, a perfect sale probably never occurs in which a willing buyer and a willing seller are under no compulsion to buy or sell and both are equally familiar with all the facts. Nonetheless, it is necessary to prepare fair market value estimates for oil and gas properties for the purpose of gift taxes, estate taxes, condemnation cases, mergers and divorce settlements. For the estimation of the fair market value of oil and gas properties, there are basically two approaches; namely, the income approach and the market data approach. The income approach requires the estimation of reserves, identification of their categories (proved, probable and possible), a detailed cash flow projection and the proper application of risk factors. The market data approach utilizes the comparable sales of properties. The comparable sales approach is preferred, but for producing oil and gas properties it is difficult to identify sales comparable in net reserves, product prices, location, operating expenses, operator expertise, etc. Consequently, for proved, probable and possible reserves, the income approach has been accepted by the courts and is more generally applied. For nonproducing mineral interests the comparable sales approach is applied using multiples of lease bonuses in the area.
ERIC Educational Resources Information Center
Brown, Elizabeth R.; Smith, Jessi L.; Thoman, Dustin B.; Allen, Jill M.; Muragishi, Gregg
2015-01-01
Motivating students to pursue science careers is a top priority among many science educators. We add to the growing literature by examining the impact of a utility value intervention to enhance student's perceptions that biomedical science affords important utility work values. Using an expectancy-value perspective, we identified and tested 2…
Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad
2013-01-14
Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during early important months of life. The objectives of this study were to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology, and to compare the results with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. A total of 70737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004 for 20 days from those attending community clinics for routine health checks as a part of a national survey. Anthropometric measurements were done by trained health staff using WHO methodology. The nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used to estimate the curves and normal values. The weight-for-age growth curves for boys and girls aged from 0 to 5 years were derived utilizing a population of children living in the northeast of Iran. The results were similar to the ones obtained by the semi-parametric LMS method in the same data. Among all age groups from 0 to 5 years, the median values of children's weight living in the northeast of Iran were lower than the corresponding values in WHO reference data. The weight curves of boys were higher than those of girls in all age groups. The differences between growth patterns of children living in the northeast of Iran versus international ones necessitate using local and regional growth charts. International normal values may not properly recognize the populations at risk for growth problems in Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for estimating reference curves and normal values.
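The estimator named above (local constant kernel estimation of conditional quantile curves) can be sketched in a few lines. The data below are synthetic, and the Gaussian kernel and bandwidth are illustrative choices, not those of the study:

```python
# Local-constant kernel estimate of a conditional quantile: at a target
# age, take the tau-th quantile of nearby observations, weighted by a
# Gaussian kernel in age.
import math, random

def kernel_quantile(ages, weights_kg, age0, tau=0.5, bandwidth=3.0):
    """Estimate the tau-quantile of weight at age0 (months)."""
    w = [math.exp(-0.5 * ((a - age0) / bandwidth) ** 2) for a in ages]
    pairs = sorted(zip(weights_kg, w))          # sort by weight value
    total = sum(w)
    cum = 0.0
    for value, wi in pairs:
        cum += wi
        if cum >= tau * total:                  # weighted quantile crossing
            return value
    return pairs[-1][0]

random.seed(0)
ages = [random.uniform(0, 60) for _ in range(500)]            # months
wts = [3.3 + 0.25 * a + random.gauss(0, 1.0) for a in ages]   # toy growth
print(round(kernel_quantile(ages, wts, age0=24.0, tau=0.5), 1))
```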
The Emerging Business Models and Value Proposition of Mobile Health Clinics
Aung, Khin-Kyemon; Hill, Caterina; Bennet, Jennifer; Song, Zirui; Oriol, Nancy E.
2018-01-01
Mobile health clinics are increasingly used to deliver healthcare to urban and rural populations. An estimated 2000 vehicles in the United States are now delivering between 5 and 6 million visits annually; however, despite this growth, mobile health clinics represent an underutilized resource that could transform the way healthcare is delivered, especially in underserved areas. Preliminary research has shown that mobile health clinics have the potential to reduce costs and improve health outcomes. Their value lies primarily in their mobility, their ability to be flexibly deployed and customized to fit the evolving needs of populations and health systems, and their ability to link clinical and community settings. Few studies have identified how mobile health clinics can be sustainably utilized. We discuss the value proposition of mobile health clinics and propose 3 potential business models for them—adoption by accountable care organizations, payers, and employers. PMID:29516055
Ock, Minsu; Park, Jeong-Yeol; Son, Woo-Seung; Lee, Hyeon-Jeong; Kim, Seon-Ha; Jo, Min-Woo
2016-11-28
A cost-utility study of a human papillomavirus (HPV) vaccine requires that the utility weights for HPV-related health states (i.e., cervical intraepithelial neoplasia (CIN), cervical cancer, and condyloma) be evaluated. The aim of the present study was to determine the utility weights for HPV-related health states. Hypothetical standardised health states related to HPV were developed based on patient education material and previous publications. To fully reflect disease progression from diagnosis to prognosis, each health state comprised four parts (diagnosis, symptoms, treatment, and progression and prognosis). Nine hundred members of the Korean general population evaluated the HPV-related health states using a visual analogue scale (VAS) and a standard gamble (SG) approach, which were administered face-to-face via computer-assisted interview. The mean utility values were calculated for each HPV-related health state. According to the VAS, the highest utility (0.73) was HPV-positive status, followed by condyloma (0.66), and CIN grade I (0.61). The lowest utility (0.18) was cervical cancer requiring chemotherapy without surgery, followed by cervical cancer requiring chemoradiation therapy (0.42). SG revealed that the highest utility (0.83) was HPV-positive status, followed by condyloma (0.78), and CIN grade I (0.77). The lowest utility (0.43) was cervical cancer requiring chemotherapy without surgery, followed by cervical cancer requiring chemoradiation therapy (0.60). This study was based on a large sample derived from the general Korean population; therefore, the calculated utility weights might be useful for evaluating the economic benefit of cancer screening and HPV vaccination programs.
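At the respondent level, the two elicitation methods named above reduce to simple calculations. The sketch below uses invented responses, not the Korean survey data:

```python
# Standard gamble: at the indifference probability p, the state's utility
# is u = p*u(full health) + (1-p)*u(death) = p. VAS ratings are rescaled
# from 0-100 to the 0-1 utility scale.
def sg_utility(p_indifference):
    return p_indifference

def vas_utility(rating, worst=0.0, best=100.0):
    return (rating - worst) / (best - worst)

sg_responses = [0.85, 0.80, 0.78, 0.82]     # hypothetical indifference points
vas_responses = [70, 75, 72, 68]            # hypothetical VAS ratings

mean_sg = sum(sg_utility(p) for p in sg_responses) / len(sg_responses)
mean_vas = sum(vas_utility(r) for r in vas_responses) / len(vas_responses)
print(mean_sg, mean_vas)    # SG means typically exceed VAS means, as above
```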
Visualizing value for money in public health interventions.
Leigh-Hunt, Nicholas; Cooper, Duncan; Furber, Andrew; Bevan, Gwyn; Gray, Muir
2018-01-23
The Socio-Technical Allocation of Resources (STAR) approach has been developed for value-for-money analysis of health services through stakeholder workshops. This article reports on its application for prioritization of interventions within public health programmes. The STAR tool was applied by identifying costs and service activity for interventions within commissioned public health programmes, with benefits estimated from the literature on economic evaluations in terms of costs per Quality-Adjusted Life Year (QALY); consensus on how these QALY values applied to local services was obtained with local commissioners. Local cost-effectiveness estimates could be made for some interventions. Methodological issues arose from gaps in the evidence base for other interventions, inability to closely match some performance monitoring data with interventions, and disparate time horizons of published QALY data. Practical adjustments for these issues included using population prevalences and utility states where intervention-specific evidence was lacking, and subdivision of large contracts into specific intervention costs using staffing ratios. The STAR approach proved useful in informing commissioning decisions and understanding the relative value of local public health interventions. Further work is needed to improve robustness of the process and develop a visualization tool for use by public health departments. © The Author(s) 2018. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Applications of harvesting system simulation to timber management and utilization analyses
John E. Baumgras; Chris B. LeDoux
1990-01-01
Applications of timber harvesting system simulation to the economic analysis of forest management and wood utilization practices are presented. These applications include estimating thinning revenue by stand age, estimating impacts of minimum merchantable tree diameter on harvesting revenue, and evaluating wood utilization alternatives relative to pulpwood quotas and...
Inter-Comparison of CHARM Data and WSR-88D Storm Integrated Rainfall
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Meyer, Paul J.; Guillory, Anthony R.; Stellman, Keith; Limaye, Ashutosh; Arnold, James E. (Technical Monitor)
2002-01-01
A localized precipitation network has been established over a 4000 sq km region of northern Alabama in support of local weather and climate research at the Global Hydrology and Climate Center (GHCC) in Huntsville. This Cooperative Huntsville-Area Rainfall Measurement (CHARM) network comprises over 80 volunteers who manually take daily rainfall measurements from 85 sites. The network also incorporates 20 automated gauges that report data at 1-5 minute intervals, 24 h a day. The average spacing of the gauges in the network is about 6 km; however, coverage in some regions benefits from gauges every 1-2 km. The 24 h rainfall totals from the CHARM network have been used to validate Stage III rainfall estimates of daily and storm totals derived from the WSR-88D radars that cover northern Alabama. The Stage III rainfall product is produced by the Lower Mississippi River Forecast Center (LMRFC) in support of their daily forecast operations. The intercomparisons between the local rain gauge and the radar estimates have been useful for understanding the accuracy and utility of the Stage III data. Recently, the Stage III and CHARM rainfall measurements have been combined to produce an hourly rainfall dataset at each CHARM observation site. The procedure matches each CHARM site with a time sequence of Stage III radar estimates of precipitation. Hourly Stage III rainfall estimates were used to partition the rain gauge values to the time interval over which they occurred. The new hourly rain gauge dataset is validated at selected points where 1-5 minute rainfall measurements have been made. This procedure greatly enhances the utility of the CHARM data for local weather and hydrologic modeling studies. The conference paper will present highlights of the Stage III intercomparison and some examples of the combined radar / rain gauge product demonstrating its accuracy and utility in deriving an hourly rainfall product from the 24 h CHARM totals.
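The partitioning step described above amounts to distributing each 24-h gauge total across hours in proportion to the collocated radar hourly estimates. A minimal sketch with invented values:

```python
# Split a daily gauge total into hourly amounts using the radar
# (Stage III) hourly estimates as the partitioning weights.
def partition_daily_total(daily_gauge_mm, radar_hourly_mm):
    radar_total = sum(radar_hourly_mm)
    if radar_total == 0:                       # radar saw no rain: leave
        return [0.0] * len(radar_hourly_mm)    # the day unpartitioned
    return [daily_gauge_mm * r / radar_total for r in radar_hourly_mm]

radar = [0, 0, 2.0, 6.0, 2.0] + [0] * 19       # radar-estimated hourly rain
hourly = partition_daily_total(12.5, radar)    # 12.5 mm daily gauge total
print(hourly[3])                               # peak hour gets 6/10: 7.5 mm
```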
Evaluation of a Soil Moisture Data Assimilation System Over West Africa
NASA Astrophysics Data System (ADS)
Bolten, J. D.; Crow, W.; Zhan, X.; Jackson, T.; Reynolds, C.
2009-05-01
A crucial requirement of global crop yield forecasts by the U.S. Department of Agriculture (USDA) International Production Assessment Division (IPAD) is the regional characterization of surface and sub-surface soil moisture. However, due to the spatial heterogeneity and dynamic nature of precipitation events and the resulting soil moisture, accurate estimation of regional land surface-atmosphere interactions based on sparse ground measurements is difficult. IPAD estimates global soil moisture using daily estimates of minimum and maximum temperature and precipitation applied to a modified Palmer two-layer soil moisture model, which calculates the daily amount of soil moisture withdrawn by evapotranspiration and replenished by precipitation. We attempt to improve upon the existing system by applying an Ensemble Kalman filter (EnKF) data assimilation system to integrate surface soil moisture retrievals from the NASA Advanced Microwave Scanning Radiometer (AMSR-E) into the USDA soil moisture model. This work aims at evaluating the utility of merging satellite-retrieved soil moisture estimates with the IPAD two-layer soil moisture model used within the DBMS. We present a quantitative analysis of the assimilated soil moisture product over West Africa (9°N-20°N, 20°W-20°E). This region contains many key agricultural areas and has a high agro-meteorological gradient from desert and semi-arid vegetation in the North to grassland, trees and crops in the South, thus providing an ideal location for evaluating the assimilated soil moisture product over multiple land cover types and conditions. A data denial experimental approach is utilized to isolate the added utility of integrating remotely-sensed soil moisture by comparing assimilated soil moisture results obtained using (relatively) low-quality precipitation products derived from real-time satellite imagery to baseline model runs forced with higher quality rainfall.
An analysis of root-zone anomalies for each model simulation suggests that the assimilation of AMSR-E surface soil moisture retrievals can add significant value to USDA root-zone predictions derived from real-time satellite precipitation products.
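The EnKF analysis step at the core of such a system can be sketched in a few lines. The two-layer state, ensemble size, error statistics, and observation value below are illustrative assumptions for a toy problem, not the configuration of the USDA/IPAD system:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_err_var):
    """One EnKF analysis step with perturbed observations.
    ensemble has shape (n_members, 2): [surface, root-zone] soil moisture;
    only the surface layer is observed."""
    n = ensemble.shape[0]
    H = np.array([[1.0, 0.0]])                  # observation operator
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                       # sample error covariance (2x2)
    S = H @ P @ H.T + obs_err_var               # innovation covariance (1x1)
    K = P @ H.T / S                             # Kalman gain (2x1)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=n)
    innov = perturbed - ensemble[:, 0]          # one innovation per member
    return ensemble + innov[:, None] * K.T      # update both layers

# 50-member prior ensemble of volumetric soil moisture (m^3/m^3)
ens = rng.normal([0.20, 0.30], [0.05, 0.03], size=(50, 2))
analysis = enkf_update(ens, obs=0.28, obs_err_var=0.04 ** 2)
```

Each ensemble member is nudged toward its own perturbed copy of the observation, and the sample cross-covariance between layers propagates the surface update into the unobserved root zone.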
Health-related quality of life measured using the EQ-5D-5L: South Australian population norms.
McCaffrey, Nikki; Kaambwa, Billingsley; Currow, David C; Ratcliffe, Julie
2016-09-20
Although a five-level version of the widely-used EuroQol 5 dimensions (EQ-5D) instrument has been developed, population norms are not yet available for Australia to inform the future valuation of health in economic evaluations. The aim of this study was to estimate HrQOL normative values for the EQ-5D-5L preference-based measure in a large, randomly selected, community sample in South Australia. The EQ-5D-5L instrument was included in the 2013 South Australian Health Omnibus Survey, an interviewer-administered, face-to-face, cross-sectional survey. Respondents rated their level of impairment across five dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression) and rated their global health on a visual analogue scale (EQ-VAS). Utility scores were derived using the newly-developed UK general population-based algorithm, and relationships between utility and EQ-VAS scores and socio-demographic factors were explored using multivariate regression analyses. Ultimately, 2,908 adults participated in the survey (63.4 % participation rate). The mean utility and EQ-VAS scores were 0.91 (95 % CI 0.90, 0.91) and 78.55 (95 % CI 77.95, 79.15), respectively. Almost half of respondents (42.8 %) reported no problems across all dimensions, whereas only 7.2 % rated their health >90 on the EQ-VAS (100 = the best health you can imagine). Younger age, male gender, longer duration of education, higher annual household income, employment and marriage/de facto relationships were all independent, statistically significant predictors of better health status (p < 0.01) measured with the EQ-VAS. Only age and employment status were associated with higher utility scores, indicating fundamental differences between these measures of health status. This is the first Australian study to apply the EQ-5D-5L in a large, community sample.
Overall, findings are consistent with EQ-5D-5L utility and VAS scores reported for other countries and indicate that the majority of South Australian adults report themselves in full health. When valuing health in Australian economic evaluations, the utility population norms can be used to estimate HrQOL. More generally, the EQ-VAS score may be a better measure of population health given its smaller ceiling effect and broader coverage of HrQOL dimensions. Further research is recommended to update the EQ-5D-5L population norms once the Australian general population-specific scoring algorithm becomes publicly available.
Methods for measuring risk-aversion: problems and solutions
NASA Astrophysics Data System (ADS)
Thomas, P. J.
2013-09-01
Risk-aversion is a fundamental parameter determining how humans act when required to operate in situations of risk. Its general applicability has been discussed in a companion presentation, and this paper examines methods that have been used in the past to measure it and their attendant problems. It needs to be borne in mind that risk-aversion varies with the size of the possible loss, growing strongly as the possible loss becomes comparable with the decision maker's assets. Hence measuring risk-aversion when the potential loss or gain is small will produce values close to the risk-neutral value of zero, irrespective of who the decision maker is. It will also be shown how the generally accepted practice of basing a measurement on the results of a three-term Taylor series will estimate a limiting value, minimum or maximum, rather than the value utilised in the decision. A solution is to match the correct utility function to the results instead.
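The point about Taylor-series estimates can be illustrated with a sketch that fits a full utility function instead. The exponential (CARA) utility form, the gamble, and the observed certainty equivalent below are illustrative assumptions, not data from the paper:

```python
import math

def ce_exponential(outcomes, probs, r):
    """Certainty equivalent of a gamble under exponential (CARA) utility
    U(x) = (1 - exp(-r*x)) / r, where r is the risk-aversion coefficient."""
    eu = sum(p * (1.0 - math.exp(-r * x)) / r for p, x in zip(probs, outcomes))
    return -math.log(1.0 - r * eu) / r

def fit_r(outcomes, probs, observed_ce, lo=1e-6, hi=1.0):
    """Bisection for the r whose certainty equivalent matches the observation
    (the CE is strictly decreasing in r)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if ce_exponential(outcomes, probs, mid) > observed_ce:
            lo = mid                      # not averse enough, raise r
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 50/50 gamble: gain 10 or lose 5 (loss comparable to the decision maker's stake)
outcomes, probs = [10.0, -5.0], [0.5, 0.5]
r_exact = fit_r(outcomes, probs, observed_ce=1.0)

# Arrow-Pratt (three-term Taylor) approximation: CE ~ mu - (r/2)*variance
mu = sum(p * x for p, x in zip(probs, outcomes))
var = sum(p * (x - mu) ** 2 for p, x in zip(probs, outcomes))
r_taylor = 2.0 * (mu - 1.0) / var
```

For this gamble the three-term Taylor estimate `r_taylor` understates the coefficient `r_exact` recovered by matching the full utility function, and the gap widens as the possible loss grows relative to the decision maker's assets.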
Predicting Stability Constants for Uranyl Complexes Using Density Functional Theory
Vukovic, Sinisa; Hay, Benjamin P.; Bryantsev, Vyacheslav S.
2015-04-02
The ability to predict the equilibrium constants for the formation of 1:1 uranyl:ligand complexes (log K1 values) provides the essential foundation for the rational design of ligands with enhanced uranyl affinity and selectivity. We use density functional theory (B3LYP) and the IEFPCM continuum solvation model to compute aqueous stability constants for UO2(2+) complexes with 18 donor ligands. Theoretical calculations permit reasonably good estimates of relative binding strengths, while the absolute log K1 values are significantly overestimated. Accurate predictions of the absolute log K1 values (root mean square deviation from experiment < 1.0 for log K1 values ranging from 0 to 16.8) can be obtained by fitting the experimental data for two groups of mono- and divalent negative oxygen donor ligands. The utility of the correlations is demonstrated for amidoxime and imide dioxime ligands, providing a useful means of screening for new ligands with strong chelating capability toward uranyl.
Elbasha, Elamin H
2005-05-01
The availability of patient-level data from clinical trials has spurred considerable interest in developing methods for quantifying and presenting uncertainty in cost-effectiveness analysis (CEA). Although the majority of this work has focused on developing methods for using sample data to estimate a confidence interval for an incremental cost-effectiveness ratio (ICER), a small strand of the literature has emphasized the importance of incorporating risk preferences and the trade-off between the mean and the variance of returns to investment in health and medicine (mean-variance analysis). This paper shows how the exponential utility-moment-generating function approach is a natural extension of this branch of the literature for modelling choices among healthcare interventions with uncertain costs and effects. The paper assumes an exponential utility function, which implies constant absolute risk aversion, and builds on the fact that the expected value of this function results in a convenient expression that depends only on the moment-generating function of the random variables. The mean-variance approach is shown to be a special case of this more general framework. The paper characterizes the solution to the resource allocation problem using standard optimization techniques, derives the summary measure researchers need to estimate for each programme when the assumption of risk neutrality does not hold, and compares it to the standard incremental cost-effectiveness ratio. The importance of choosing the correct distribution of costs and effects and issues related to estimating the parameters of the distribution are also discussed. An empirical example illustrating the methods and concepts is provided. Copyright 2004 John Wiley & Sons, Ltd.
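For normally distributed net benefits the approach reduces to a closed form, since the expected exponential utility is (minus) the moment-generating function evaluated at -r. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def certainty_equivalent_normal(mu, sigma2, r):
    """For exponential utility U(x) = -exp(-r*x) and X ~ N(mu, sigma2),
    E[U(X)] = -M_X(-r) = -exp(-r*mu + 0.5*r**2*sigma2), so the certainty
    equivalent reduces to the mean-variance form mu - (r/2)*sigma2."""
    return mu - 0.5 * r * sigma2

# Monte Carlo check that the MGF expression matches direct simulation
rng = np.random.default_rng(1)
mu, sigma2, r = 5.0, 4.0, 0.3
x = rng.normal(mu, np.sqrt(sigma2), size=200_000)
eu_mc = -np.exp(-r * x).mean()
eu_mgf = -np.exp(-r * mu + 0.5 * r ** 2 * sigma2)
ce = certainty_equivalent_normal(mu, sigma2, r)   # trades mean against variance
```

The certainty equivalent mu - (r/2)*sigma2 is exactly the mean-variance trade-off, showing how that earlier approach falls out as a special case of the MGF framework.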
Bamrungsawad, Naruemon; Chaiyakunapruk, Nathorn; Upakdee, Nilawan; Pratoomsoot, Chayanin; Sruamsiri, Rosarin; Dilokthornsakul, Piyameth
2015-05-01
Intravenous immunoglobulin (IVIG) has been shown to be effective in treating steroid-refractory dermatomyositis (DM). There remains no evidence of its cost-effectiveness in Thailand. Our objective was to estimate the cost utility of IVIG as a second-line therapy for steroid-refractory DM in Thailand. A Markov model was developed to estimate the relevant costs and health benefits of IVIG plus corticosteroids in comparison with immunosuppressant plus corticosteroids in steroid-refractory DM from a societal perspective over a patient's lifetime. The effectiveness and utility parameters were obtained from the clinical literature, meta-analyses, medical record reviews, and patient interviews, whereas cost data were obtained from an electronic hospital database and patient interviews. Costs are presented in $US, year 2012 values. All future costs and outcomes were discounted at a rate of 3% per annum. One-way and probabilistic sensitivity analyses were also performed. Over a lifetime horizon, the model estimated treatment with IVIG plus corticosteroids to be cost saving compared with immunosuppressant plus corticosteroids, with a cost saving of $US4738.92 and an incremental gain of 1.96 quality-adjusted life-years (QALYs). Sensitivity analyses revealed that the probability of response to immunosuppressant plus corticosteroids was the most influential parameter on incremental QALYs and costs. At a societal willingness-to-pay threshold in Thailand of $US5148 per QALY gained, the probability of IVIG being cost effective was 97.6%. The use of IVIG plus corticosteroids is cost saving compared with treatment with immunosuppressant plus corticosteroids in Thai patients with steroid-refractory DM. Policy makers should consider using our findings in their decision-making process for adding IVIG to corticosteroids as the second-line therapy for steroid-refractory DM patients.
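A cohort-style Markov model of the kind described can be sketched as follows. All transition probabilities, utilities, and state costs below are hypothetical placeholders for illustration, not the study's estimates:

```python
def markov_qalys_costs(trans, utilities, costs, horizon=40, disc=0.03):
    """Run a 3-state Markov cohort model (response / refractory / dead).
    trans[i][j] = annual probability of moving from state i to state j.
    Returns total discounted cost and QALYs per patient."""
    dist = [1.0, 0.0, 0.0]                      # cohort starts in 'response'
    total_cost = total_qaly = 0.0
    for year in range(horizon):
        df = 1.0 / (1.0 + disc) ** year         # annual discount factor
        total_cost += df * sum(d * c for d, c in zip(dist, costs))
        total_qaly += df * sum(d * u for d, u in zip(dist, utilities))
        dist = [sum(dist[i] * trans[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

# Illustrative (hypothetical) inputs, not the study's actual parameters
utilities = [0.80, 0.55, 0.0]                   # QALY weight per state-year
trans_ivig = [[0.90, 0.06, 0.04], [0.25, 0.65, 0.10], [0.0, 0.0, 1.0]]
trans_std = [[0.80, 0.14, 0.06], [0.12, 0.76, 0.12], [0.0, 0.0, 1.0]]
costs_ivig = [6000.0, 9000.0, 0.0]              # $US per year in each state
costs_std = [4000.0, 12000.0, 0.0]

c1, q1 = markov_qalys_costs(trans_ivig, utilities, costs_ivig)
c0, q0 = markov_qalys_costs(trans_std, utilities, costs_std)
inc_cost, inc_qaly = c1 - c0, q1 - q0
```

A strategy is "cost saving" (dominant) when the incremental cost is negative while incremental QALYs are positive, which is the comparison the abstract reports.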
Gulácsi, László; Rencz, Fanni; Péntek, Márta; Brodszky, Valentin; Lopert, Ruth; Hevér, Noémi V; Baji, Petra
2014-05-01
Several Central and Eastern European (CEE) countries require cost-utility analyses (CUAs) to support reimbursement formulary listing. However, CUAs informed by local evidence are often unavailable, and the cost-effectiveness of several currently reimbursed biologicals is unclear. We aimed to estimate the cost-effectiveness, expressed as multiples of per capita GDP per quality-adjusted life-year (QALY), of four biologicals (infliximab, etanercept, adalimumab, golimumab) currently reimbursed in six CEE countries for six inflammatory rheumatoid and bowel disease conditions. We conducted a systematic literature review of published cost-utility analyses in the selected conditions, using the United Kingdom (UK) as the reference country, with study selection criteria set to optimize the transfer of results to the CEE countries. Prices in each CEE country were pro-rated against UK prices using purchasing power parity (PPP)-adjusted per capita GDP, and local per capita GDP/QALY ratios were estimated. List prices in the CEE countries were 144-333% higher than pro rata prices. Of 85 CUAs identified by previous systematic literature reviews, 15 were selected as a convenience sample for estimating the cost-effectiveness of biologicals in the CEE countries in terms of per capita GDP/QALY. Per capita GDP/QALY values varied from 0.42 to 6.4 across countries and conditions (Bulgaria: 0.97-6.38; Czech Republic: 0.42-2.76; Hungary: 0.54-3.54; Poland: 0.59-3.90; Romania: 0.77-5.07; Slovakia: 0.55-3.61). While the results must be interpreted with caution, calculating pro rata (cost-effective) prices and per capita GDP/QALY ratios based on CUAs can aid reimbursement decision-making in the absence of analyses using local data.
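The pro-rata price calculation is simple arithmetic: scale the UK price by the ratio of PPP-adjusted per-capita GDPs, then compare the local list price against it. The figures below are hypothetical, not the study's data:

```python
def pro_rata_price(uk_price, gdp_pc_uk, gdp_pc_cee):
    """Scale a UK price by PPP-adjusted per-capita GDP, giving the price at
    which the therapy would impose the same relative burden in a CEE country."""
    return uk_price * gdp_pc_cee / gdp_pc_uk

# Hypothetical figures for illustration only
uk_price = 10_000.0                          # annual UK therapy cost
gdp_pc_uk, gdp_pc_cee = 35_000.0, 22_000.0   # PPP-adjusted per-capita GDP
pro_rata = pro_rata_price(uk_price, gdp_pc_uk, gdp_pc_cee)
list_price = 15_000.0
premium_pct = 100.0 * list_price / pro_rata  # list price as % of pro rata
```

A premium well above 100% is the pattern the review reports: CEE list prices far exceed what pro-rating by income would suggest.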
Global health resource utilization associated with pacemaker complications.
Waweru, Catherine; Steenrod, Anna; Wolff, Claudia; Eggington, Simon; Wright, David Jay; Wyrwich, Kathleen W
2017-07-01
To estimate health resource utilization (HRU) associated with the management of pacemaker complications in various healthcare systems, electrophysiologists (EPs) from four geographical regions (Western Europe, Australia, Japan, and North America) were invited to participate. Survey questions focused on HRU in the management of three chronic pacemaker complications (i.e. pacemaker infections requiring extraction, lead fractures/insulation breaches requiring replacement, and upper extremity deep venous thrombosis [DVT]). Panelists completed a maximum of two web-based surveys (iterative rounds). Mean and median values and interquartile ranges were calculated and used to establish consensus. Overall, 32 and 29 panelists participated in the first and second rounds of the Delphi panel, respectively. Consensus was reached on treatment and HRU associated with a typical pacemaker implantation and its complications. HRU was similar across regions, except for Japan, where panelists reported the longest duration of hospital stay in all scenarios. Infections were the most resource-intensive complications, characterized by 9.6-13.5 days of intravenous antibiotics for pocket infections and 21.3-29.2 days for lead infections, along with laboratory and diagnostic tests and system extraction and replacement procedures. DVT, on the other hand, was the least resource-intensive complication. The results of the panel represent the views of the respondents who participated and may not be generalizable beyond this panel. The surveys were limited in scope and, therefore, did not include questions on the management of acute complications (e.g. hematoma, pneumothorax). The Delphi technique provided a reliable and efficient approach to estimating resource utilization associated with chronic pacemaker complications. Estimates from the Delphi panel can be used to generate costs of pacemaker complications in various regions.
Dental health state utility values associated with tooth loss in two contrasting cultures.
Nassani, M Z; Locker, D; Elmesallati, A A; Devlin, H; Mohammadi, T M; Hajizamani, A; Kay, E J
2009-08-01
The study aimed to assess the value placed on oral health states by measuring the utility of mouths in which teeth had been lost, and to explore variations in utility values within and between two contrasting cultures, the UK and Iran. One hundred and fifty-eight patients, 84 from the UK and 74 from Iran, were recruited from clinics at university-based faculties of dentistry. All had experienced tooth loss and had restored or unrestored dental spaces. They were presented with 19 different scenarios of mouths with missing teeth. Fourteen involved the loss of one tooth and five involved shortened dental arches (SDAs) with varying numbers of missing posterior teeth. Each written description was accompanied by a verbal explanation and digital pictures of mouth models. Participants were asked to indicate on a standardized Visual Analogue Scale how they would value the health of their mouth if they had lost the tooth/teeth described and the resulting space was left unrestored. With a utility value of 0.0 representing the worst possible health state for a mouth and 1.0 representing the best, the mouth with the upper central incisor missing attracted the lowest utility value in both samples (UK = 0.16; Iran = 0.06), while the one with a missing upper second molar attracted the highest utility values (0.42 and 0.39, respectively). In both countries the utility value increased as the tooth in the scenario moved from the anterior towards the posterior aspect of the mouth. There were significant differences in utility values between the UK and Iranian samples for four scenarios, all involving the loss of anterior teeth. These differences remained after controlling for gender, age and the state of the dentition. With respect to the SDA scenarios, a mouth with an SDA with only the second molar teeth missing in all quadrants attracted the highest utility values, while a mouth with an extreme SDA with both molar and premolar teeth missing in all quadrants attracted the lowest utility values.
The study provided further evidence of the validity of the scaling approach to utility measurement in mouths with missing teeth. Some cross-cultural variations in values were observed but these should be viewed with due caution because the magnitude of the differences was small.
Kharroubi, Samer A; Brazier, John E; McGhee, Sarah
2014-06-01
There is interest in the extent to which valuations of health may differ between countries and cultures, but few studies have compared preference values of health states obtained in different countries. The present study applies a nonparametric model to estimate and compare HK and UK standard gamble values for the six-dimensional health state short form (SF-6D, derived from the short-form 36 health survey) using Bayesian methods. The data set comprises the HK and UK SF-6D valuation studies, in which samples of 197 and 249 states defined by the SF-6D were valued by representative samples of the HK and UK general populations, respectively, both using the standard gamble technique. We estimated a function applicable across both countries that explicitly accounts for the differences between them and is estimated using the data from both countries. The results suggest that differences in SF-6D health state valuations between the UK and HK general populations are potentially important. In particular, the valuations of Hong Kong were meaningfully higher than those of the United Kingdom for most of the selected SF-6D health states. The magnitude of these country-specific differences in health state valuation depended, however, in a complex way on the levels of individual dimensions. The new Bayesian nonparametric method is a powerful approach for analyzing data from multiple nationalities or ethnic groups to understand the differences between them and potentially to estimate the underlying utility functions more efficiently. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Cao, Luodan; Li, Jialin; Ye, Mengyao; Pu, Ruiliang; Liu, Yongchao; Guo, Qiandong; Feng, Baixiang; Song, Xiayun
2018-06-21
Gains and losses in ecosystem service values (ESV) in the coastal zones of Zhejiang Province during rapid urbanization were analyzed in terms of land-use changes. Decision-making on coastal development based on ESV estimation is significant for the sustainable utilization of coastal resources. In this study, coastal land-use changes in Zhejiang Province during rapid urbanization were discussed based on remote-sensing-derived land-use maps created for the years 1990, 2000 and 2010. The ESV changes in the coastal zones of Zhejiang Province from 1990 to 2010 were estimated using the established ESV estimation model. The analysis results demonstrate the following: (1) with the continuous acceleration of urbanization, land-use types in the coastal zones changed significantly from 1990 to 2010, demonstrated by considerable growth of urban construction land and reduction of forest land and farmland; (2) over the study period, the total ESV in the coastal zones continuously decreased from RMB 35.278 billion to 29.964 billion, a reduction of 15.06%; (3) in terms of spatial distribution, the ESVs in the coastal zones generally shifted from higher to lower values; (4) estimates of ESV for the three years 1990, 2000 and 2010 appear to be relatively stable; and (5) land-use intensity in the coastal zones continuously increased over the 20-year period. The spatial distribution of land-use intensity was consistent with that of the ESV change rate. Disordered land-use change from forest land and farmland to urban construction land was a major cause of ESV loss.
Prigent, Amélie; Kamendje-Tchokobou, Blaise; Chevreul, Karine
2017-11-01
Health-related quality of life (HRQoL) is a widely used concept in the assessment of health care. Some generic HRQoL instruments, based on specific algorithms, can generate utility scores which reflect the preferences of the general population for the different health states described by the instrument. This study aimed to investigate the relationships between utility scores and potentially associated factors in patients with mental disorders followed in inpatient and/or outpatient care settings, using two statistical methods. Patients were recruited in four psychiatric sectors in France. Patient responses to the SF-36 generic HRQoL instrument were used to calculate SF-6D utility scores. The relationships between utility scores and patient socio-demographic characteristics, clinical characteristics, and mental health care utilization, considered as potentially associated factors, were studied using OLS and quantile regressions. One hundred and seventy-six patients were included. Women, severely ill patients and those hospitalized full-time tended to report lower utility scores, whereas psychotic disorders (as opposed to mood disorders) and part-time care were associated with higher scores. The quantile regression highlighted that the size of the associations between utility scores and some patient characteristics varied along the utility score distribution, and it provided more accurate estimated values than OLS regression. Quantile regression may thus constitute a relevant complement to the analysis of factors associated with utility scores. For policy decision-making, the association of full-time hospitalization with lower utility scores, while part-time care was associated with higher scores, supports the further development of alternatives to full-time hospitalization.
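Quantile regression replaces the squared-error loss of OLS with the pinball (check) loss, so different quantiles of the utility distribution can have different coefficient estimates. A minimal sketch on simulated data (not the study's data), assuming `scipy` is available:

```python
import numpy as np
from scipy.optimize import minimize

def pinball(residuals, tau):
    """Check (pinball) loss whose minimizer is the tau-th conditional quantile."""
    return np.mean(np.where(residuals >= 0, tau * residuals,
                            (tau - 1) * residuals))

def quantile_fit(x, y, tau):
    """Linear quantile regression: minimize pinball loss over (intercept, slope)."""
    slope0, intercept0 = np.polyfit(x, y, 1)        # OLS fit as starting point
    res = minimize(lambda b: pinball(y - (b[0] + b[1] * x), tau),
                   x0=[intercept0, slope0], method="Nelder-Mead")
    return res.x                                    # [intercept, slope]

# Simulated utility scores: noise grows with severity (heteroscedastic), so
# severity depresses the lower tail of the distribution more than the upper
rng = np.random.default_rng(2)
severity = rng.uniform(0, 10, 500)
utility = 0.95 - 0.03 * severity + rng.normal(0, 0.05 + 0.02 * severity)
b25 = quantile_fit(severity, utility, 0.25)
b75 = quantile_fit(severity, utility, 0.75)
```

With heteroscedastic noise the estimated severity effect is steeper at the 25th percentile than at the 75th, the kind of distribution-dependent association the study highlights.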
NASA Astrophysics Data System (ADS)
Pankow, James F.
Gas-particle partitioning is examined using a partitioning constant Kp = (F/TSP)/A, where F (ng m^-3) and A (ng m^-3) are the particulate-associated and gas-phase concentrations, respectively, and TSP is the total suspended particulate matter level (μg m^-3). Compound-dependent values of Kp depend on temperature (T) according to Kp = mp/T + bp. Limitations in data quality can cause errors in estimates of mp and bp obtained by simple linear regression (SLR). However, within a group of similar compounds, the bp values will be similar. By pooling data, an improved set of mp and a single bp can be obtained by common y-intercept regression (CYIR). SLR estimates of mp and bp for polycyclic aromatic hydrocarbons (PAHs) sorbing to urban Osaka particulate matter are available (Yamasaki et al., 1982, Envir. Sci. Technol. 16, 189-194), as are CYIR estimates for the same particulate matter (Pankow, 1991, Atmospheric Environment 25A, 2229-2239). In this work, a comparison was conducted of the ability of these two sets of mp and bp to predict A/F ratios for PAHs based on measured T and TSP values for data obtained in other urban locations, specifically: (1) in and near the Baltimore Harbor Tunnel by Benner (1988, Ph.D. thesis, University of Maryland) and Benner et al. (1989, Envir. Sci. Technol. 23, 1269-1278); and (2) in Chicago by Cotham (1990, Ph.D. thesis, University of South Carolina). In general, the CYIR estimates of mp and bp obtained for Osaka particulate matter were found to be at least as reliable, and for some compounds more reliable, than their SLR counterparts in predicting gas-particle ratios for PAHs. This result provides further evidence of the utility of the CYIR approach in quantifying the dependence of log Kp values on 1/T.
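CYIR amounts to one pooled least-squares problem with a separate slope column per compound and a single shared intercept column. A sketch on synthetic data (the slopes, intercept, and noise level below are illustrative assumptions, not the Osaka values):

```python
import numpy as np

def cyir(invT_groups, y_groups):
    """Common y-intercept regression: fit y_c = m_c*(1/T) + b with a separate
    slope m_c for each compound but a single intercept b shared by all."""
    n_c = len(invT_groups)
    n_total = sum(len(g) for g in invT_groups)
    X = np.zeros((n_total, n_c + 1))
    start = 0
    for c, invT in enumerate(invT_groups):
        X[start:start + len(invT), c] = invT      # slope column for compound c
        start += len(invT)
    X[:, -1] = 1.0                                # shared intercept column
    coef, *_ = np.linalg.lstsq(X, np.concatenate(y_groups), rcond=None)
    return coef[:-1], coef[-1]                    # per-compound m_c, common b

# Synthetic log Kp data for two compounds sharing a common intercept bp
rng = np.random.default_rng(3)
T = rng.uniform(270.0, 310.0, 20)                 # ambient temperatures (K)
invT = 1.0 / T
true_m, true_b = [4000.0, 5200.0], -14.0
y = [m * invT + true_b + rng.normal(0.0, 0.02, invT.size) for m in true_m]
slopes, b = cyir([invT, invT], y)
```

Because 1/T spans a narrow range, a per-compound intercept is poorly constrained by SLR; pooling the compounds to estimate one shared intercept is what stabilizes both bp and the slopes.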
Obesity and fast food in urban markets: a new approach using geo-referenced micro data.
Chen, Susan Elizabeth; Florax, Raymond J; Snyder, Samantha D
2013-07-01
This paper presents a new method of assessing the relationship between features of the built environment and obesity, particularly in urban areas. Our empirical application combines georeferenced data on the location of fast-food restaurants with data about personal health, behavioral, and neighborhood characteristics. We define a 'local food environment' for every individual utilizing buffers around a person's home address. Individual food landscapes are potentially endogenous because of spatial sorting of the population and food outlets, and the body mass index (BMI) values for individuals living close to each other are likely to be spatially correlated because of observed and unobserved individual and neighborhood effects. The potential biases associated with endogeneity and spatial correlation are handled using spatial econometric estimation techniques. Our application provides quantitative estimates of the effect of proximity to fast-food restaurants on obesity in an urban food market. We also present estimates of a policy simulation that focuses on reducing the density of fast-food restaurants in urban areas. In the simulations, we account for spatial heterogeneity in both the policy instruments and individual neighborhoods and find a small effect for the hypothesized relationships between individual BMI values and the density of fast-food restaurants. Copyright © 2012 John Wiley & Sons, Ltd.
A Comparison Between Two OLS-Based Approaches to Estimating Urban Multifractal Parameters
NASA Astrophysics Data System (ADS)
Huang, Lin-Shan; Chen, Yan-Guang
Multifractal theory provides a new spatial analytical tool for urban studies, but many basic problems remain to be solved. Among the various pending issues, the most significant is how to obtain proper multifractal dimension spectra. If an algorithm is improperly used, the parameter spectra will be abnormal. This paper is devoted to investigating two ordinary least squares (OLS)-based approaches for estimating urban multifractal parameters. Using empirical study and comparative analysis, we demonstrate how to utilize the adequate linear regression to calculate multifractal parameters. The OLS regression analysis admits two different approaches: in one, the intercept is fixed to zero; in the other, the intercept is unconstrained. The results of the comparative study show that the zero-intercept regression yields proper multifractal parameter spectra within a certain scale range of moment order, while the common regression method often leads to abnormal multifractal parameter values. We conclude that fixing the intercept to zero is the more advisable regression method for multifractal parameter estimation, and that the shapes of the spectral curves and the value ranges of the fractal parameters can be employed to diagnose urban problems. This research should help scientists understand multifractal models and apply a more reasonable technique to multifractal parameter calculations.
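The two regression approaches differ only in whether the intercept is estimated. A sketch on synthetic box-counting data (the dimension value and noise level are illustrative assumptions):

```python
import numpy as np

def slope_zero_intercept(x, y):
    """OLS slope with the intercept fixed to zero: b = sum(x*y) / sum(x^2)."""
    return np.dot(x, y) / np.dot(x, x)

def slope_free_intercept(x, y):
    """Ordinary OLS slope with an unconstrained intercept."""
    return np.polyfit(x, y, 1)[0]

# Synthetic box-counting data: log N(eps) = -D * log(eps) through the origin
rng = np.random.default_rng(4)
eps = 1.0 / 2 ** np.arange(1, 8)          # box sizes 1/2 .. 1/128
D_true = 1.7
log_eps = np.log(eps / eps[0])            # normalized so the line passes 0
log_N = -D_true * log_eps + rng.normal(0.0, 0.02, eps.size)

D_zero = -slope_zero_intercept(log_eps, log_N)
D_free = -slope_free_intercept(log_eps, log_N)
```

When theory forces the scaling line through the origin, fixing the intercept uses that constraint and avoids the spurious intercepts that can distort parameter spectra at extreme moment orders.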
NASA Technical Reports Server (NTRS)
Waters, Eric D.
2013-01-01
Recent high-level interest in the capability of small launch vehicles has placed significant demand on determining the trade space these vehicles occupy. This has led to the development of a zero-level analysis tool that can quickly determine the minimum expected vehicle gross liftoff weight (GLOW) in terms of vehicle stage specific impulse (Isp) and propellant mass fraction (pmf) for any given payload value. Drawing on extensive Earth-to-orbit trajectory experience, the total delta-v the vehicle must achieve can be estimated, including relevant loss terms. This foresight into expected losses allows more specific assumptions for the initial estimates of thrust-to-weight values for each stage. The tool was further validated against a trajectory model, in this case the Program to Optimize Simulated Trajectories (POST), to determine whether the initial sizing delta-v was adequate to meet payload expectations. Presented here is a description of how the tool is set up and the approach the analyst must take when using it. Expected outputs, which depend on the type of small launch vehicle being sized, are also displayed. The method of validation is discussed, as well as where the sizing tool fits into the vehicle design process.
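A zero-level GLOW estimate of this kind follows from applying the rocket equation stage by stage, sizing from the payload down. The sketch below uses illustrative delta-v splits, Isp, and pmf values; it is not the NASA tool itself:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_gross_mass(payload, dv, isp, pmf):
    """Gross mass of one stage (propellant + structure + everything above it)
    from the rocket equation. pmf = propellant / (propellant + structure)."""
    R = math.exp(dv / (G0 * isp))        # required mass ratio m0/m1
    prop_frac = 1.0 - 1.0 / R            # propellant as a fraction of m0
    if prop_frac >= pmf:
        raise ValueError("stage cannot close: pmf too low for this delta-v")
    return payload / (1.0 - prop_frac / pmf)

def glow(payload, stage_dvs, isps, pmfs):
    """Size stages top-down; returns gross liftoff mass in kg."""
    mass = payload
    for dv, isp, pmf in zip(reversed(stage_dvs), reversed(isps), reversed(pmfs)):
        mass = stage_gross_mass(mass, dv, isp, pmf)
    return mass

# Illustrative two-stage small launcher: ~9.3 km/s total dv including losses
m0 = glow(payload=150.0, stage_dvs=[5100.0, 4200.0],
          isps=[290.0, 340.0], pmfs=[0.90, 0.88])
```

Each stage "closes" only if its pmf exceeds the propellant fraction the rocket equation demands, which is exactly the Isp/pmf trade space such a tool maps for a given payload.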
Marasco, Daniel E; Hunter, Betsy N; Culligan, Patricia J; Gaffin, Stuart R; McGillis, Wade R
2014-09-02
Quantifying green roof evapotranspiration (ET) in urban climates is important for assessing environmental benefits, including stormwater runoff attenuation and urban heat island mitigation. In this study, a dynamic chamber method was developed to quantify ET on two extensive green roofs located in New York City, NY. Hourly chamber measurements taken from July 2009 to December 2009 and April 2012 to October 2013 illustrate both diurnal and seasonal variations in ET. Observed monthly total ET depth ranged from 0.22 cm in winter to 15.36 cm in summer. Chamber results were compared to two predictive methods for estimating ET, namely the Penman-based ASCE Standardized Reference Evapotranspiration (ASCE RET) equation and an energy balance model, both parametrized using on-site environmental conditions. Dynamic chamber ET results were similar to ASCE RET estimates; however, the ASCE RET equation overestimated the lowest ET values during the winter months and underestimated peak ET values during the summer months. The energy balance method was shown to underestimate ET compared to the ASCE RET equation. The work highlights the utility of the chamber method for quantifying green roof evapotranspiration and indicates that green roof ET might be better estimated by Penman-based evapotranspiration equations than by energy balance methods.
NASA Astrophysics Data System (ADS)
Mori, Taketoshi; Ishino, Takahito; Noguchi, Hiroshi; Shimosaka, Masamichi; Sato, Tomomasa
2011-06-01
We propose a life pattern estimation method and an anomaly detection method for elderly people living alone. In our observation system, we deploy pyroelectric sensors in the house and measure the person's activities continuously in order to grasp the person's life pattern. The data are transferred successively to the operation center and displayed precisely to the nurses there, who decide whether the data indicate an anomaly. In the system, people whose life features resemble each other are categorized into the same group. Anomalies that occurred in the past are shared within the group and utilized in the anomaly detection algorithm. The algorithm is based on an "anomaly score," which is computed from the activeness of the person. This activeness is approximately proportional to the frequency of sensor responses in a minute. The anomaly score is calculated as the difference between the present activeness and the long-term average of past activeness: the score is positive if the present activeness is higher than the past average, and negative if it is lower. If the score exceeds a certain threshold, an anomaly event has occurred. Moreover, we developed an activity estimation algorithm that estimates the residents' basic activities, such as getting up and going out. These estimates are shown to the nurses together with the anomaly score of the residents, allowing the nurses to understand the residents' health conditions by combining the two kinds of information.
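The anomaly score described above can be sketched as follows, using simulated per-minute sensor counts (the Poisson activity model and the threshold value are illustrative assumptions):

```python
import numpy as np

def anomaly_score(activeness, n_history_days, minute_of_day):
    """Anomaly score at one minute of the day: today's activeness minus the
    long-term average for that same minute. Positive means more active than
    usual; negative means less active."""
    history = activeness[:n_history_days, minute_of_day]
    today = activeness[-1, minute_of_day]
    return today - history.mean()

rng = np.random.default_rng(5)
# 31 days x 1440 minutes of simulated sensor-response counts per minute
days = rng.poisson(3.0, size=(31, 1440)).astype(float)
days[-1, 600:720] = 0.0                    # today: no sensor activity 10:00-12:00

scores = [anomaly_score(days, 30, m) for m in range(600, 720)]
alerts = [s for s in scores if s < -2.0]   # flag unusually low activeness
```

A sustained run of strongly negative scores, as in the simulated quiet morning here, is what would be surfaced to the nurses as a possible anomaly.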
Suicide risk assessment: Trust an implicit probe or listen to the patient?
Harrison, Dominique P; Stritzke, Werner G K; Fay, Nicolas; Hudaib, Abdul-Rahman
2018-05-21
Previous research suggests implicit cognition can predict suicidal behavior. This study examined the utility of the death/suicide implicit association test (d/s-IAT) in acute and prospective assessment of suicide risk and protective factors, relative to clinician and patient estimates of future suicide risk. Patients (N = 128; 79 female; 111 Caucasian) presenting to an emergency department were recruited if they reported current suicidal ideation or had been admitted because of an acute suicide attempt. Patients completed the d/s-IAT and self-report measures assessing three death-promoting (e.g., suicidal ideation) and two life-sustaining (e.g., zest for life) factors, with self-report measures completed again at 3- and 6-month follow-ups. The clinician and the patient each provided an estimate of the risk of that patient making a suicide attempt within the next 6 months. Results showed that among current attempters, the d/s-IAT differentiated between first-time and multiple attempters, with multiple attempters having significantly weaker self-associations with life relative to death. The d/s-IAT was associated with concurrent suicidal ideation and zest for life, but only predicted the desire to die prospectively at 3 months. By contrast, clinician and patient estimates predicted suicide risk at 3- and 6-month follow-up, with clinician estimates predicting death-promoting factors and only patient estimates predicting life-sustaining factors. The utility of the d/s-IAT was more pronounced in the assessment of concurrent risk. Prospectively, clinician and patient predictions complemented each other in predicting suicide risk and resilience, respectively. Our findings indicate that collaborative rather than implicit approaches add greater value to the management of risk and recovery in suicidal patients. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Rao, Kareti Srinivasa; Kumar, Keshar Nargesh; Joydeep, Datta
2011-01-01
A simple stability-indicating reversed-phase HPLC method was developed and subsequently validated for estimation of cefpirome sulphate (CPS) in pharmaceutical dosage forms. The proposed RP-HPLC method utilizes a LiChroCART LiChrospher 100 C18 RP column (250 mm × 4 mm, 5 μm) in an isocratic separation mode with a mobile phase consisting of methanol and water in the proportion of 50:50 (v/v), at a flow rate of 1 ml/min, and the effluent was monitored at 270 nm. The retention time of CPS was 2.733 min. The formulation was exposed to acidic, alkaline, photolytic, thermal and oxidative stress conditions, and the stressed samples were analyzed by the proposed method. The described method was linear over a range of 0.5-200 μg/ml. The percentage recovery was 99.46%. F-test and t-test at the 95% confidence level were used to check the intermediate precision data obtained under different experimental setups; the calculated values were found to be less than the critical values.
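The linearity and percentage-recovery figures reported above can be illustrated with a hedged calibration sketch; the peak areas and sample values below are invented for illustration and are not data from the study:

```python
import numpy as np

# Hypothetical calibration data spanning the reported 0.5-200 ug/ml range;
# the peak areas are illustrative, not measured values from the paper.
conc = np.array([0.5, 10.0, 50.0, 100.0, 150.0, 200.0])   # ug/ml
area = np.array([12.6, 251.0, 1254.0, 2509.0, 3761.0, 5017.0])

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

def percent_recovery(measured_area, nominal_conc):
    """Back-calculate concentration from the calibration line and express
    it as a percentage of the nominal (spiked) concentration."""
    measured_conc = (measured_area - intercept) / slope
    return 100.0 * measured_conc / nominal_conc
```

A recovery near 100% (the paper reports 99.46%) indicates the method returns the spiked amount within experimental error.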
Advanced purification of petroleum refinery wastewater by catalytic vacuum distillation.
Yan, Long; Ma, Hongzhu; Wang, Bo; Mao, Wei; Chen, Yashao
2010-06-15
In this work, a new process, catalytic vacuum distillation (CVD), was utilized for purification of petroleum refinery wastewater characterized by high chemical oxygen demand (COD) and salinity. Various common promoters, such as FeCl3, kaolin, H2SO4 and NaOH, were investigated to improve the purification efficiency of CVD. The purification efficiency was estimated by COD testing, electrolytic conductivity, UV-vis spectroscopy, gas chromatography-mass spectrometry (GC-MS) and pH measurement. The results showed that NaOH-promoted CVD displayed higher efficiency in purification of refinery wastewater than the other systems: clear effluents with low salinity and high COD removal efficiency (99%) were obtained after treatment, and the corresponding pH values of the effluents varied from 7 to 9. Furthermore, an environmental assessment showed that the effluent had no influence on plant growth. Thus, given that satisfactory removal of COD and salinity was achieved simultaneously, the NaOH-promoted CVD process is an effective approach to purifying petroleum refinery wastewater. Copyright 2010 Elsevier B.V. All rights reserved.
Multifractal diffusion entropy analysis: Optimal bin width of probability histograms
NASA Astrophysics Data System (ADS)
Jizba, Petr; Korbel, Jan
2014-11-01
In the framework of Multifractal Diffusion Entropy Analysis we propose a method for choosing an optimal bin width in histograms generated from underlying probability distributions of interest. The method uses techniques of Rényi's entropy and mean squared error analysis to discuss the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on the scaling behavior of financial time series. In particular, we analyze the S&P500 stock index as sampled at a daily rate over the period 1950-2013. To demonstrate the strength of the proposed method, we compare the multifractal δ-spectrum for various bin widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. The connection between the δ-spectrum and Rényi's q parameter is also discussed and illustrated with a simple example of a multiscale time series.
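How a Rényi entropy estimate responds to the histogram bin width can be sketched in a few lines; this is not the paper's optimality criterion, just an illustration on synthetic Gaussian data with q = 2 (data, bin widths, and q are all assumptions for the example):

```python
import numpy as np

def renyi_entropy(samples, bin_width, q=2.0):
    """Renyi entropy H_q = log(sum_i p_i^q) / (1 - q) of a histogram
    built with the given bin width (natural log; requires q != 1)."""
    lo, hi = samples.min(), samples.max()
    nbins = max(1, int(np.ceil((hi - lo) / bin_width)))
    counts, _ = np.histogram(samples, bins=nbins)
    p = counts[counts > 0] / counts.sum()
    return np.log(np.sum(p ** q)) / (1.0 - q)

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
# Entropy grows as the binning gets finer; the optimal width must
# balance this against the estimation error in each bin.
entropies = {w: renyi_entropy(x, w) for w in (0.1, 0.5, 1.0)}
```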
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Markstrom, Steven; Hay, Lauren
2015-04-01
The U.S. Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and to facilitate the application of simulations at the scale of the continental U.S. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort of establishing a robust NHM.
Ščasný, Milan; Alberini, Anna
2012-01-01
The health impact attributable to climate change has been identified as one of the priority areas for impact assessment. The main goal of this paper is to estimate the monetary value of one key health effect, namely premature mortality. Specifically, our goal is to derive the value of a statistical life (VSL) from people's willingness to pay for avoiding the risk of dying in one post-transition country in Europe, the Czech Republic. We carried out a series of conjoint choice experiments in order to value mortality risk reductions. We found the responses to the conjoint choice questions to be reasonable and consistent with the economic paradigm. The VSL is about EUR 2.4 million, and our estimate is comparable with the value of preventing a fatality as used in one of the integrated assessment models. To investigate whether carrying out the survey through the internet may bias the welfare estimates, we administered our questionnaire to two independent samples of respondents using two different modes of survey administration. The results show that the VSLs for the two groups of respondents are €2.25 and €2.55 million, and these figures are statistically indistinguishable. However, the key parameters of indirect utility between the two modes of survey administration are statistically different for specific subgroups of the population, such as older respondents. Based on this evidence, we conclude that properly designed and administered on-line surveys are a reliable method for administering questionnaires, even when the latter are cognitively challenging. However, attention should be paid to sampling and to the choice of survey administration mode if the preferences of specific segments of the population are to be elicited. PMID:23249861
NWP model forecast skill optimization via closure parameter variations
NASA Astrophysics Data System (ADS)
Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.
2012-04-01
We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
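The propose-and-feed-back loop of steps (i) and (ii) can be caricatured in a few lines. This is not the actual EPPES algorithm, only an importance-weighting sketch on a toy scalar parameter with an invented skill function standing in for the forecast verification:

```python
import numpy as np

rng = np.random.default_rng(42)
true_param = 2.0                      # the "unknown" closure parameter

def forecast_error(theta):
    # Stand-in for evaluating forecast skill against verifying
    # observations: squared error of a toy model (an assumption).
    return (theta - true_param) ** 2

# Gaussian proposal distribution over the tunable parameter
mean, var = 0.0, 4.0
for _ in range(30):                   # successive "ensemble" cycles
    # (i) ensemble members each use a different parameter value
    ensemble = rng.normal(mean, np.sqrt(var), size=50)
    # (ii) likelihood-like weights reflect each member's relative merit
    w = np.exp(-0.5 * np.array([forecast_error(t) for t in ensemble]))
    w /= w.sum()
    # Feed the merits back into the proposal distribution
    mean = np.sum(w * ensemble)
    var = max(np.sum(w * (ensemble - mean) ** 2), 1e-3)
```

Over the cycles the proposal mean drifts toward the parameter value that yields the best forecast skill, mirroring how EPPES piggybacks estimation on the existing ensemble prediction runs.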
Impacts of Light Use Efficiency and fPAR Parameterization on Gross Primary Production Modeling
NASA Technical Reports Server (NTRS)
Cheng, Yen-Ben; Zhang, Qingyuan; Lyapustin, Alexei I.; Wang, Yujie; Middleton, Elizabeth M.
2014-01-01
This study examines the impact of parameterization of two variables, light use efficiency (LUE) and the fraction of absorbed photosynthetically active radiation (fPAR or fAPAR), on gross primary production (GPP) modeling. Carbon sequestration by terrestrial plants is a key factor in a comprehensive understanding of the carbon budget at the global scale. In this context, accurate measurements and estimates of GPP will allow us to achieve improved carbon monitoring and to quantitatively assess impacts from climate change and human activities. Spaceborne remote sensing observations can provide a variety of land surface parameterizations for modeling photosynthetic activities at various spatial and temporal scales. This study utilizes a simple GPP model based on the LUE concept and different land surface parameterizations to evaluate the model and monitor GPP. Two maize-soybean rotation fields in Nebraska, USA and the Bartlett Experimental Forest in New Hampshire, USA were selected for study. Tower-based eddy-covariance carbon exchange and PAR measurements were collected from the FLUXNET Synthesis Dataset. For the model parameterization, we utilized different values of LUE and the fPAR derived from various algorithms. We adapted the approach and parameters from the MODIS MOD17 Biome Properties Look-Up Table (BPLUT) to derive LUE. We also used a site-specific analytic approach with tower-based Net Ecosystem Exchange (NEE) and PAR to estimate maximum potential LUE (LUEmax) to derive LUE. For the fPAR parameter, the MODIS MOD15A2 fPAR product was used. We also utilized fAPARchl, a parameter accounting for the fAPAR linked to the chlorophyll-containing canopy fraction. fAPARchl was obtained by inversion of a radiative transfer model, which used the MODIS-based reflectances in bands 1-7 produced by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm.
fAPARchl exhibited seasonal dynamics more similar to the flux-tower-based GPP than MOD15A2 fPAR, especially in the spring and fall at the agricultural sites. When using the MODIS MOD17-based parameters to estimate LUE, fAPARchl produced better agreement with GPP (r2 = 0.79-0.91) than MOD15A2 fPAR (r2 = 0.57-0.84). However, underestimation of GPP was also observed, especially for the crop fields. When applying the site-specific LUEmax value to estimate in situ LUE, the magnitude of estimated GPP was closer to in situ GPP; this method produced a slight overestimation for the MOD15A2 fPAR at the Bartlett forest. This study highlights the importance of accurate land surface parameterizations for achieving reliable carbon monitoring capabilities from remote sensing information.
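The LUE concept underlying the model reduces to GPP = LUE × fPAR × PAR, which is why the choice of LUE and fPAR parameterizations drives the result. A minimal sketch, with illustrative values rather than the study's inputs:

```python
def gpp_lue(lue, fpar, par):
    """Simple light-use-efficiency GPP model: GPP = LUE * fPAR * PAR.
    Units follow the inputs, e.g. g C per MJ for LUE and MJ m-2 d-1
    for incident PAR; the numbers below are illustrative only."""
    return lue * fpar * par

# Illustrative daily estimate (values are assumptions, not the study's)
gpp = gpp_lue(lue=1.2, fpar=0.6, par=10.0)
```

Swapping MOD15A2 fPAR for fAPARchl, or a BPLUT-derived LUE for a site-calibrated one, changes only the inputs to this product, which is exactly the sensitivity the study quantifies.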
NASA Astrophysics Data System (ADS)
Epps, S. A.
2017-12-01
Suspended particulate matter (SPM) is an important agent in generating marine light conditions, which in turn have strong influences on biogeochemical systems. SPM also behaves as a vehicle for contaminant migration and is of interest for the estimation of bulk material transport in the marine environment. The measurement of inherent optical properties (IOPs) and apparent optical properties (AOPs) is becoming increasingly important in the prediction of SPM concentration. To more fully utilize data generated in bathymetric lidar surveys, modern systems such as CZMIL (the Coastal Zone Mapping Imaging LIDAR) include a hyperspectral sensor to collect the data necessary for remote sensing reflectance (Rrs), an AOP. Some IOPs can be estimated from Rrs. Additionally, a bathymetric lidar return signal contains both absorption and backscattering components (IOPs) at 532 nm which may be utilized for SPM prediction. This research utilizes IOP measurements from AC-9, AC-S, BB-9, and LISST-100X-B sensors deployed in the Northern Gulf of Mexico concurrent with SPM collection via filtration. Concomitant Rrs values were collected using a handheld hyperspectral sensor. Several hundred linearly regressed single-parameter estimates are created to predict SPM concentration using the IOPs attenuation, total scatter, backscatter, absorption, and significant amalgamations thereof. Multiple wavelengths of light are analyzed for each IOP or IOP combination. Consideration is given to the suitability of each IOP type for SPM concentration prediction. Several criteria are assessed to winnow out the best predictors, including sensor, data, and environmental limitations. The quantitative analyses of this research help to identify the best types of IOPs (and wavelengths) for SPM prediction. Rrs at multiple wavelengths is also considered for SPM prediction.
This research is focused on the functionality of IOP and AOP based SPM concentration predictions made available from the data products of bathymetric lidar surveys. It has applications for researchers with interest in IOPs, AOPs and SPM. There are also implications for monitoring estuarine, coastal, and offshore environments using bathymetric lidar and in-situ optical sensor suites to estimate SPM.
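A single-parameter, linearly regressed SPM estimate of the kind described can be sketched on synthetic data; the numerical relationship between backscatter at 532 nm and SPM below is invented for illustration, not a result of this research:

```python
import numpy as np

# Synthetic example: SPM (mg/L) regressed on particulate backscatter
# bbp(532) (1/m). The slope, intercept, and noise level are assumptions.
rng = np.random.default_rng(1)
bbp = rng.uniform(0.005, 0.05, size=40)
spm = 180.0 * bbp + 0.3 + rng.normal(0.0, 0.2, size=40)

# Fit the single-parameter predictor and score it
slope, intercept = np.polyfit(bbp, spm, 1)
pred = slope * bbp + intercept
r2 = 1.0 - np.sum((spm - pred) ** 2) / np.sum((spm - spm.mean()) ** 2)
```

Repeating this fit for each IOP (attenuation, scatter, absorption) at each wavelength, and ranking by r², is the winnowing exercise the abstract describes at scale.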
Chan, Christina K W; Gangwani, Rita A; McGhee, Sarah M; Lian, JinXiao; Wong, David S H
2015-11-01
To determine whether screening for age-related macular degeneration (AMD) during a diabetic retinopathy (DR) screening program would be cost effective in Hong Kong. We compared and evaluated the impacts of screening, grading, and vitamin treatment for intermediate AMD compared with no screening using a Markov model. It was based on the natural history of AMD in a cohort with a mean age of 62 years, followed up until 100 years of age or death. Subjects attending a DR screening program were recruited. A cost-effectiveness analysis was undertaken from a public provider perspective. It included grading for AMD using the photographs obtained for DR screening and treatment with vitamin therapy for those with intermediate AMD. The measures of effectiveness were obtained largely from a local study, but the transition probabilities and utility values were from overseas data. Costs were all from local sources. The main assumptions and estimates were tested in sensitivity analyses. The outcome was cost per quality-adjusted life year (QALY) gained. Both costs and benefits were discounted at 3%. All costs are reported in United States dollars ($). The cost per QALY gained through screening for AMD and vitamin treatment for appropriate cases was $12,712 after discounting. This would be considered highly cost effective based on the World Health Organization's threshold of willingness to pay (WTP) for a QALY, that is, less than the annual per capita gross domestic product of $29,889. Because of uncertainty regarding the utility value for those with advanced AMD, we also tested an extreme, conservative value for utility under which screening remained cost effective. One-way sensitivity analyses revealed that, besides utility values, the cost per QALY was most sensitive to the progression rate from intermediate to advanced AMD. 
The cost-effectiveness acceptability curve showed a WTP for a QALY of $29,000 or more has a more than 86% probability of being cost effective compared with no screening. Our analysis demonstrated that AMD screening carried out simultaneously with DR screening for patients with diabetes would be cost effective in a Hong Kong public healthcare setting. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
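A Markov cohort model of this general shape can be sketched as follows; the states follow the AMD natural history described above, but the transition probabilities, utilities, and cycle count are illustrative stand-ins, not the study's inputs:

```python
import numpy as np

# States: 0 = intermediate AMD, 1 = advanced AMD, 2 = dead.
# The annual transition matrix and state utilities below are
# assumptions for illustration, not values from the study.
P = np.array([[0.90, 0.07, 0.03],
              [0.00, 0.92, 0.08],
              [0.00, 0.00, 1.00]])
utility = np.array([0.80, 0.50, 0.00])
discount = 0.03                       # the study's 3% annual discount rate

def discounted_qalys(years=38):
    """Cohort simulation from mean age 62 to age 100, accumulating
    utility-weighted, discounted life years (QALYs)."""
    state = np.array([1.0, 0.0, 0.0])  # cohort starts in intermediate AMD
    total = 0.0
    for t in range(years):
        total += (state @ utility) / (1.0 + discount) ** t
        state = state @ P              # advance the cohort one year
    return total

qalys = discounted_qalys()
```

Running the model twice, with and without the screening-plus-vitamin strategy shifting the transition probabilities, and dividing the cost difference by the QALY difference yields the cost-per-QALY figure reported.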
Stephens, J Mark; Li, Xiaoyan; Reiner, Maureen; Tzivelekis, Spiros
2016-01-01
Prophylactic treatment with granulocyte-colony stimulating factors (G-CSFs) is indicated for chemotherapy patients with a significant risk of febrile neutropenia. This study estimates the annual economic burden on patients and caregivers of clinic visits for prophylactic G-CSF injections in the US. Annual clinic visits for prophylactic G-CSF injections (all cancers) were estimated from national cancer incidence, chemotherapy treatment and G-CSF utilization data, and G-CSF sales and pricing information. Patient travel times, plus time spent in the clinic, were estimated from patient survey responses collected during a large prospective cohort study (the Prospective Study of the Relationship between Chemotherapy Dose Intensity and Mortality in Early-Stage (I-III) Breast Cancer Patients). Economic models were created to estimate travel costs, patient co-pays and the economic value of time spent by patients and caregivers in G-CSF clinic visits. Estimated total clinic visits for prophylactic G-CSF injections in the US were 1.713 million for 2015. Mean (SD) travel time per visit was 62 (50) min; mean (SD) time in the clinic was 41 (68) min. Total annual time for travel to and from the clinic, plus time at the clinic, is estimated at 4.9 million hours, with patient and caregiver time valued at $91.8 million ($228 per patient). The estimated cumulative annual travel distance for G-CSF visits is 60.2 million miles, with a total transportation cost of $28.9 million ($72 per patient). Estimated patient co-pays were $61.1 million, ∼$36 per visit, $152 per patient. The total yearly economic impact on patients and caregivers is $182 million, ∼$450 per patient. Data to support model parameters were limited. Study estimates are sensitive to the assumptions used. The burden of clinic visits for G-CSF therapy is a significant addition to the total economic burden borne by cancer patients and their families.
Hutchins, Robert; Pignone, Michael P; Sheridan, Stacey L; Viera, Anthony J
2015-01-01
Objectives The utility value attributed to taking pills for prevention can have a major effect on the cost-effectiveness of interventions, but few published studies have systematically quantified this value. We sought to quantify the utility value of taking pills used for prevention of cardiovascular disease (CVD). Design Cross-sectional survey. Setting Central North Carolina. Participants 708 healthcare employees aged 18 years and older. Primary and secondary outcomes Utility values for taking 1 pill/day, assessed using time trade-off, modified standard gamble and willingness-to-pay methods. Results Mean age of respondents was 43 years (19–74). The majority of the respondents were female (83%) and Caucasian (80%). Most (80%) took at least 2 pills/day. The mean utility value for taking 1 pill/day using the time trade-off method was 0.9972 (95% CI 0.9962 to 0.9980). Values derived from the standard gamble and willingness-to-pay methods were 0.9967 (95% CI 0.9954 to 0.9979) and 0.9989 (95% CI 0.9986 to 0.9991), respectively. Utility values varied little across characteristics such as age, sex, race, education level or number of pills taken per day. Conclusions The utility value of taking pills daily in order to prevent an adverse CVD health outcome is approximately 0.997. PMID:25967985
Bair, Lucas S.; Rogowski, David L.; Neher, Christopher
2016-01-01
Glen Canyon Dam (GCD) on the Colorado River in northern Arizona provides water storage, flood control, and power system benefits to approximately 40 million people who rely on water and energy resources in the Colorado River basin. Downstream resources (e.g., angling, whitewater floating) in Glen Canyon National Recreation Area (GCNRA) and Grand Canyon National Park are impacted by the operation of GCD. The GCD Adaptive Management Program was established in 1997 to monitor and research the effects of dam operations on the downstream environment. We utilized secondary survey data and an individual observation travel cost model to estimate the net economic benefit of angling in GCNRA for each season and each type of angler. As expected, the demand for angling decreased with increasing travel cost; the annual value of angling at Lees Ferry totaled US$2.7 million at 2014 visitation levels. Demand for angling was also affected by season, with per-trip values of $210 in the summer, $237 in the spring, $261 in the fall, and $399 in the winter. This information provides insight into the ways in which anglers are potentially impacted by seasonal GCD operations and adaptive management experiments aimed at improving downstream resource conditions.
CONTRIBUTIONS OF CHEMICAL EXCHANGE TO T1ρ DISPERSION IN A TISSUE MODEL
Cobb, Jared G.; Xie, Jingping; Gore, John C.
2015-01-01
Variations in T1ρ with locking-field strength (T1ρ dispersion) may be used to estimate proton exchange rates. We developed a novel approach utilizing the second derivative of the dispersion curve to measure exchange in a model system of cross-linked polyacrylamide gels. The gels were varied in the relative composition of co-monomers, which increases stiffness, and in pH, which modifies exchange rates. MR images were recorded with a spin-locking sequence as described by Sepponen et al. These measurements were fit to a mono-exponential decay function, yielding values for T1ρ at each locking field measured. These values were then fit to a model by Chopra et al. for estimating exchange rates. For low-stiffness gels, the calculated exchange values increased by a factor of 4 as pH increased, consistent with chemical exchange being the dominant contributor to T1ρ dispersion. Interestingly, the calculated chemical exchange rates also increased with stiffness, likely due to modified side-chain exchange kinetics as the composition varied. This paper demonstrates a new method to assess the structural and chemical effects on T1ρ relaxation dispersion with a suitable model. These phenomena may be exploited in an imaging context to emphasize the presence of nuclei with specific exchange rates, rather than chemical shifts. PMID:21590720
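Fitting the mono-exponential decay S(TSL) = S0·exp(−TSL/T1ρ) to spin-lock data can be sketched as a log-linear regression; the spin-lock times and T1ρ value below are illustrative, not the study's measurements:

```python
import numpy as np

def fit_t1rho(tsl_ms, signal):
    """Fit S = S0 * exp(-TSL / T1rho) by linear regression on log(S).
    `tsl_ms` holds spin-lock durations in ms; returns (T1rho, S0)."""
    slope, log_s0 = np.polyfit(tsl_ms, np.log(signal), 1)
    return -1.0 / slope, np.exp(log_s0)

# Synthetic noiseless decay with an assumed T1rho of 80 ms
tsl = np.array([2.0, 10.0, 30.0, 60.0, 100.0])
sig = 1000.0 * np.exp(-tsl / 80.0)
t1rho, s0 = fit_t1rho(tsl, sig)
```

Repeating this fit at each locking-field strength produces the dispersion curve whose second derivative the paper analyzes.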
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions; such use can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
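The alternating use of Gaussian and Cauchy distributions to generate offspring around a niche can be sketched as follows; the odd/even alternation rule and the niche parameters are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_offspring(niche_mean, niche_std, n, generation):
    """Alternate between Gaussian (exploitation) and Cauchy (exploration)
    sampling around a niche center. The simple odd/even generation rule
    here is an illustrative stand-in for the paper's alternation."""
    if generation % 2 == 0:
        return rng.normal(niche_mean, niche_std, size=n)
    # Cauchy's heavy tails occasionally throw offspring far from the
    # niche, helping escape local optima
    return niche_mean + niche_std * rng.standard_cauchy(size=n)

kids_gauss = generate_offspring(0.0, 1.0, 1000, generation=0)
kids_cauchy = generate_offspring(0.0, 1.0, 1000, generation=1)
```

The Cauchy generations scatter much more widely than the Gaussian ones, which is the exploration/exploitation balance the alternation is meant to provide.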
Robust and transferable quantification of NMR spectral quality using IROC analysis
NASA Astrophysics Data System (ADS)
Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.
2017-12-01
Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
Doble, Brett; John, Thomas; Thomas, David; Fellowes, Andrew; Fox, Stephen; Lorgelly, Paula
2017-05-01
To identify parameters that drive the cost-effectiveness of precision medicine by comparing the use of multiplex targeted sequencing (MTS) to select targeted therapy based on tumour genomic profiles to either no further testing with chemotherapy or no further testing with best supportive care in the fourth-line treatment of metastatic lung adenocarcinoma. A combined decision tree and Markov model to compare costs, life-years, and quality-adjusted life-years over a ten-year time horizon from an Australian healthcare payer perspective. Data sources included the published literature and a population-based molecular cohort study (Cancer 2015). Uncertainty was assessed using deterministic sensitivity analyses and quantified by estimating the expected value of perfect/partial perfect information. Uncertainty due to technological/scientific advancement was assessed through a number of plausible future scenario analyses. Point-estimate incremental cost-effectiveness ratios indicate that MTS is not cost-effective for selecting fourth-line treatment of metastatic lung adenocarcinoma. Lower mortality rates during testing and for true-positive patients, lower health state utility values for progressive disease, and targeted therapy resulting in reductions in inpatient visits, however, all resulted in more favourable cost-effectiveness estimates for MTS. The expected value to decision makers of removing all current decision uncertainty was estimated to be between AUD 5,962,843 and AUD 13,196,451, indicating that additional research to reduce uncertainty may be a worthwhile investment. Plausible future scenario analyses revealed limited improvements in cost-effectiveness under scenarios of improved test performance, decreased costs of testing/interpretation, and no biopsy costs/adverse events. Reductions in off-label targeted therapy costs, when considered together with the other scenarios, did, however, indicate more favourable cost-effectiveness of MTS.
As more clinical evidence is generated for MTS, the model developed should be revisited and cost-effectiveness re-estimated under different testing scenarios to further understand the value of precision medicine and its potential impact on the overall health budget. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Brown, G C
1999-01-01
OBJECTIVE: To determine the relationship of visual acuity loss to quality of life. DESIGN: Three hundred twenty-five patients with visual loss to 20/40 or worse in at least 1 eye were interviewed in a standardized fashion using a modified VF-14 questionnaire. Utility values were also obtained using both the time trade-off and standard gamble methods of utility assessment. MAIN OUTCOME MEASURES: Best-corrected visual acuity was correlated with the visual function score on the modified VF-14 questionnaire, as well as with utility values obtained using both the time trade-off and standard gamble methods. RESULTS: Decreasing levels of vision in the eye with better acuity correlated directly with decreasing visual function scores on the modified VF-14 questionnaire, as did decreasing utility values using the time trade-off method of utility evaluation. The standard gamble method of utility evaluation was not as directly correlated with vision as the time trade-off method. Age, level of education, gender, race, length of time of visual loss, and the number of associated systemic comorbidities did not significantly affect the time trade-off utility values associated with visual loss in the better eye. The level of reduced vision in the better eye, rather than the specific disease process causing reduced vision, was related to mean utility values. The average person with 20/40 vision in the better-seeing eye was willing to trade 2 of every 10 years of life in return for perfect vision (utility value of 0.8), while the average person with counting fingers vision in the better eye was willing to trade approximately 5 of every 10 remaining years of life (utility value of 0.52) in return for perfect vision. CONCLUSIONS: The time trade-off method of utility evaluation appears to be an effective method for assessing quality of life associated with visual loss.
Time trade-off utility values decrease in direct conjunction with decreasing vision in the better-seeing eye. Unlike the modified VF-14 test and its counterparts, utility values allow the quality of life associated with visual loss to be more readily compared to the quality of life associated with other health (disease) states. This information can be employed for cost-effective analyses that objectively compare evidence-based medicine, patient-based preferences and sound econometric principles across all specialties in health care. PMID:10703139
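The time trade-off arithmetic described in this abstract reduces to a one-line formula; a minimal sketch follows. The 4.8-year figure is back-calculated from the reported 0.52 utility for illustration, not taken from the paper:

```python
def time_tradeoff_utility(years_traded: float, years_remaining: float) -> float:
    """Time trade-off utility: the fraction of remaining lifespan the
    respondent keeps when trading years of life for perfect vision."""
    return 1.0 - years_traded / years_remaining

# 20/40 vision in the better eye: trade 2 of every 10 years.
print(time_tradeoff_utility(2, 10))  # 0.8
# Counting-fingers vision: the reported utility of 0.52 corresponds to
# trading roughly 4.8 of every 10 remaining years.
print(time_tradeoff_utility(4.8, 10))
```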
Incorporating structure from motion uncertainty into image-based pose estimation
NASA Astrophysics Data System (ADS)
Ludington, Ben T.; Brown, Andrew P.; Sheffler, Michael J.; Taylor, Clark N.; Berardi, Stephen
2015-05-01
A method for generating and utilizing structure from motion (SfM) uncertainty estimates within image-based pose estimation is presented. The method is applied to a class of problems in which SfM algorithms are utilized to form a geo-registered reference model of a particular ground area using imagery gathered during flight by a small unmanned aircraft. The model is then used to form camera pose estimates in near real-time from imagery gathered later. The resulting pose estimates can be utilized by any of the other onboard systems (e.g., as a replacement for GPS data) or downstream exploitation systems, e.g., image-based object trackers. However, many of the consumers of pose estimates require an assessment of the pose accuracy. The method for generating the accuracy assessment is presented. First, the uncertainty in the reference model is estimated. Bundle Adjustment (BA) is utilized for model generation. While the high-level approach for generating a covariance matrix of the BA parameters is straightforward, typical computing hardware is not able to support the required operations due to the scale of the optimization problem within BA. Therefore, a series of sparse matrix operations is utilized to form an exact covariance matrix for only the parameters that are needed at a particular moment. Once the uncertainty in the model has been determined, it is used to augment Perspective-n-Point pose estimation algorithms to improve the pose accuracy and to estimate the resulting pose uncertainty. The implementation of the described method is presented along with results, including those gathered from flight test data.
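The sparse-covariance step described above can be sketched in a few lines: rather than forming the dense inverse of the full bundle-adjustment information matrix, solve against unit vectors to recover only the covariance columns that are needed. This is a toy illustration with a random Jacobian, not the authors' implementation:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

def marginal_covariance(J, indices):
    """Columns of Sigma = (J^T J)^-1 for the requested parameter
    indices only, via sparse solves instead of a dense inverse."""
    H = (J.T @ J).tocsc()            # information matrix (sparse)
    n = H.shape[0]
    cols = []
    for i in indices:
        e = np.zeros(n)
        e[i] = 1.0
        cols.append(spsolve(H, e))   # i-th column of H^-1
    return np.column_stack(cols)

# Toy example: a small, well-conditioned random sparse Jacobian.
rng = np.random.default_rng(0)
J = csc_matrix(rng.standard_normal((50, 6)))
Sigma_cols = marginal_covariance(J, [0, 3])
full = np.linalg.inv((J.T @ J).toarray())
print(np.allclose(Sigma_cols, full[:, [0, 3]]))  # True
```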
Estimating Price Elasticity using Market-Level Appliance Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujita, K. Sydny
This report provides an update to and expansion upon our 2008 LBNL report “An Analysis of the Price Elasticity of Demand for Appliances,” in which we estimated an average relative price elasticity of -0.34 for major household appliances (Dale and Fujita 2008). Consumer responsiveness to price change is a key component of energy efficiency policy analysis; these policies influence consumer purchases through price both explicitly and implicitly. However, few studies address appliance demand elasticity in the U.S. market and public data sources are generally insufficient for rigorous estimation. Therefore, analysts have relied on a small set of outdated papers focused on limited appliance types, assuming long-term elasticities estimated for other durables (e.g., vehicles) decades ago are applicable to current and future appliance purchasing behavior. We aim to partially rectify this problem in the context of appliance efficiency standards by revisiting our previous analysis, utilizing data released over the last ten years and identifying additional estimates of durable goods price elasticities in the literature. Reviewing the literature, we find the following ranges of market-level price elasticities: -0.14 to -0.42 for appliances; -0.30 to -1.28 for automobiles; -0.47 to -2.55 for other durable goods. Brand price elasticities are substantially higher for these product groups, with most estimates -2.0 or more elastic. Using market-level shipments, sales value, and efficiency level data for 1989-2009, we run various iterations of a log-log regression model, arriving at a recommended range of short run appliance price elasticity between -0.4 and -0.5, with a default value of -0.45.
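The log-log regression behind the recommended elasticity range can be sketched as follows. The demand function and noise level are hypothetical, chosen so that the true elasticity equals the report's default of -0.45:

```python
import numpy as np

def price_elasticity(prices, quantities):
    """OLS slope of log(quantity) on log(price); in a log-log demand
    model this slope is the (constant) price elasticity."""
    slope, _intercept = np.polyfit(np.log(prices), np.log(quantities), 1)
    return slope

# Synthetic market data with a true elasticity of -0.45.
rng = np.random.default_rng(1)
p = rng.uniform(200.0, 800.0, size=200)
q = 1e5 * p ** -0.45 * np.exp(rng.normal(0.0, 0.05, size=200))
print(price_elasticity(p, q))  # close to -0.45
```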
Pavelko, Michael T.
2004-01-01
Land subsidence related to aquifer-system compaction and ground-water withdrawals has been occurring in Las Vegas Valley, Nevada, since the 1930's, and by the late 1980's some areas in the valley had subsided more than 5 feet. Since the late 1980's, seasonal artificial-recharge programs have lessened the effects of summertime pumping on aquifer-system compaction, but the long-term trend of compaction continues in places. Since 1994, the U.S. Geological Survey has continuously monitored water-level changes in three piezometers and vertical aquifer-system deformation with a borehole extensometer at the Lorenzi site in Las Vegas, Nevada. A one-dimensional, numerical, ground-water flow model of the aquifer system below the Lorenzi site was developed for the period 1901-2000, to estimate aquitard vertical hydraulic conductivity, aquitard inelastic skeletal specific storage, and aquitard and aquifer elastic skeletal specific storage. Aquifer water-level data were used in the model as the aquifer-system stresses that controlled simulated vertical aquifer-system deformation. Nonlinear-regression methods were used to calibrate the model, utilizing estimated and measured aquifer-system deformation data to minimize a weighted least-squares objective function, and estimate optimal property values. Model results indicate that at the Lorenzi site, aquitard vertical hydraulic conductivity is 3 x 10^-6 feet per day, aquitard inelastic skeletal specific storage is 4 x 10^-5 per foot, aquitard elastic skeletal specific storage is 5 x 10^-6 per foot, and aquifer elastic skeletal specific storage is 3 x 10^-7 per foot. Regression statistics indicate that the model and data provided sufficient information to estimate the target properties, the model adequately simulated observed data, and the estimated property values are accurate and unique.
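The role of skeletal specific storage can be illustrated with the standard one-dimensional relation (compaction = specific storage × layer thickness × head decline). The thickness and head decline below are hypothetical; only the specific-storage value comes from the abstract:

```python
def compaction(ssk_per_ft, thickness_ft, head_decline_ft):
    """One-dimensional compaction of a compressible layer: skeletal
    specific storage times layer thickness times head decline."""
    return ssk_per_ft * thickness_ft * head_decline_ft

# Hypothetical 100-ft aggregate aquitard thickness and 50 ft of head
# decline, with the estimated inelastic skeletal specific storage:
print(compaction(4e-5, 100.0, 50.0))  # about 0.2 ft of inelastic compaction
```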
Vokoun, Jason C.; Rabeni, Charles F.
2005-01-01
Flathead catfish Pylodictis olivaris were radio-tracked in the Grand River and Cuivre River, Missouri, from late July until they moved to overwintering habitats in late October. Fish moved within a definable area, and although occasional long-distance movements occurred, the fish typically returned to the previously occupied area. Seasonal home range was calculated with the use of kernel density estimation, which can be interpreted as a probabilistic utilization distribution that documents the internal structure of the estimate by delineating portions of the range that were used a specified percentage of the time. A traditional linear range also was reported. Most flathead catfish (89%) had one 50% kernel-estimated core area, whereas 11% of the fish split their time between two core areas. Core areas were typically in the middle of the 90% kernel-estimated home range (58%), although several had core areas in upstream (26%) and downstream (16%) portions of the home range. Home-range size did not differ based on river, sex, or size and was highly variable among individuals. The median 95% kernel estimate was 1,085 m (range, 70–69,090 m) for all fish. The median 50% kernel-estimated core area was 135 m (10–2,260 m). The median linear range was 3,510 m (150–50,400 m). Fish pairs with core areas in the same and neighboring pools had static joint space use values of up to 49% (area of intersection index), indicating substantial overlap and use of the same area. However, all fish pairs had low dynamic joint space use values (<0.07; coefficient of association), indicating that fish pairs were temporally segregated, rarely occurring in the same location at the same time.
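A one-dimensional analogue of the kernel home-range calculation can be sketched as follows: estimate a utilization distribution along the river axis and accumulate the highest-density grid cells until the target isopleth is enclosed. The relocation data are simulated, and this is a method sketch rather than the authors' exact estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kernel_range_length(positions_m, isopleth=0.50, grid_n=2000):
    """Length of river (m) inside the given kernel-utilization
    isopleth: evaluate a 1-D KDE on a grid and keep the highest-
    density cells until the target fraction of use is enclosed."""
    kde = gaussian_kde(positions_m)
    pad = 3 * positions_m.std()
    grid = np.linspace(positions_m.min() - pad, positions_m.max() + pad, grid_n)
    dens = kde(grid)
    cell = grid[1] - grid[0]
    order = np.argsort(dens)[::-1]        # highest density first
    cum = np.cumsum(dens[order]) * cell   # enclosed probability
    n_cells = np.searchsorted(cum, isopleth) + 1
    return n_cells * cell

# Hypothetical relocations (m along the river) clustered in one core area.
rng = np.random.default_rng(2)
fixes = rng.normal(1000.0, 150.0, size=60)
core = kernel_range_length(fixes, 0.50)
home = kernel_range_length(fixes, 0.95)
print(core < home)  # True: the 50% core area nests inside the 95% range
```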
A catastrophe model for the prospect-utility theory question.
Oliva, Terence A; McDade, Sean R
2008-07-01
Anomalies have played a big part in the analysis of decision making under risk. Both expected utility and prospect theories were born out of anomalies exhibited by actual decision making behavior. Since the same individual can use both expected utility and prospect approaches at different times, it seems there should be a means of uniting the two. This paper turns to nonlinear dynamical systems (NDS), specifically a catastrophe model, to suggest an 'out of the box' line of solution toward integration. We use a cusp model to create a value surface whose control dimensions are involvement and gains versus losses. Including 'involvement' as a variable captures the importance of the individual's psychological state, and it provides a rationale for how decision makers' shifts from expected utility to prospect might occur. Additionally, it provides a possible explanation for what appear to be even more irrational decisions that individuals make when highly emotionally involved. We estimate the catastrophe model using a sample of 997 gamblers who attended a casino and compare it to the linear model using regression. Hence, we have actual data from individuals making real bets, under real conditions.
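The cusp geometry underlying the model can be illustrated directly: below a critical level of the splitting (involvement) factor the value surface has a single equilibrium, while above it two stable states appear, separated by an unstable one, permitting sudden jumps between decision modes. A minimal sketch using the canonical cusp potential, with arbitrary parameter values:

```python
import numpy as np

def cusp_equilibria(alpha, beta):
    """Real equilibria of the canonical cusp potential
    V(x) = x^4/4 - alpha*x^2/2 - beta*x, i.e. real roots of
    x^3 - alpha*x - beta = 0 (alpha ~ involvement/splitting factor,
    beta ~ gains-versus-losses/normal factor)."""
    roots = np.roots([1.0, 0.0, -alpha, -beta])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

# Low involvement: a single equilibrium (smooth value response).
print(len(cusp_equilibria(-1.0, 0.2)))  # 1
# High involvement: two stable states plus one unstable equilibrium.
print(len(cusp_equilibria(1.0, 0.1)))   # 3
```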
Bacterial carbon utilization in vertical subsurface flow constructed wetlands.
Tietz, Alexandra; Langergraber, Günter; Watzinger, Andrea; Haberl, Raimund; Kirschner, Alexander K T
2008-03-01
Subsurface vertical flow constructed wetlands with intermittent loading are considered as state of the art and can comply with stringent effluent requirements. It is usually assumed that microbial activity in the filter body of constructed wetlands, responsible for the removal of carbon and nitrogen, relies mainly on bacterially mediated transformations. However, little quantitative information is available on the distribution of bacterial biomass and production in the "black-box" constructed wetland. The spatial distribution of bacterial carbon utilization, based on bacterial (14)C-leucine incorporation measurements, was investigated for the filter body of planted and unplanted indoor pilot-scale constructed wetlands, as well as for a planted outdoor constructed wetland. A simple mass-balance approach was applied to explain the bacterially catalysed organic matter degradation in this system by comparing estimated bacterial carbon utilization rates with simultaneously measured carbon reduction values. The pilot-scale constructed wetlands proved to be a suitable model system for investigating microbial carbon utilization in constructed wetlands. Under an ideal operating mode, the bulk of bacterial productivity occurred within the first 10 cm of the filter body. Plants seemed to have no significant influence on productivity and biomass of bacteria, as well as on wastewater total organic carbon removal.
Specification of the utility function in discrete choice experiments.
van der Pol, Marjon; Currie, Gillian; Kromm, Seija; Ryan, Mandy
2014-03-01
The specification of the utility function has received limited attention within the discrete choice experiment (DCE) literature. This lack of investigation is surprising given that evidence from the contingent valuation literature suggests that welfare estimates are sensitive to different specifications of the utility function. This study investigates the effect of different specifications of the utility function on results within a DCE. The DCE elicited the public's preferences for waiting time for hip and knee replacement and estimated willingness to wait (WTW). The results showed that the WTW for the different patient profiles varied considerably across the three different specifications of the utility function. Assuming a linear utility function led to much higher estimates of marginal rates of substitution (WTWs) than with nonlinear specifications. The goodness-of-fit measures indicated that nonlinear specifications were superior. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
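With a linear-in-attributes utility specification, the willingness-to-wait (WTW) estimates discussed above are simple coefficient ratios (marginal rates of substitution). The coefficients below are hypothetical, purely to show the computation:

```python
def willingness_to_wait(beta_attr, beta_time):
    """Marginal rate of substitution between an attribute and waiting
    time under a linear utility specification: months of extra waiting
    a respondent would accept for one unit of the attribute."""
    return -beta_attr / beta_time

# Hypothetical conditional-logit coefficients (not from the paper):
# +0.9 utility for an improved health attribute, -0.06 per month waited.
print(willingness_to_wait(0.9, -0.06))  # about 15 months
```

Note that under nonlinear (e.g., quadratic) specifications this ratio varies with the attribute level, which is one way the abstract's finding of divergent WTW estimates across specifications can arise.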
Walusimbi, Simon; Kwesiga, Brendan; Rodrigues, Rashmi; Haile, Melles; de Costa, Ayesha; Bogg, Lennart; Katamba, Achilles
2016-10-10
Microscopic Observation Drug Susceptibility (MODS) and Xpert MTB/Rif (Xpert) are highly sensitive tests for diagnosis of pulmonary tuberculosis (PTB). This study evaluated the cost-effectiveness of utilizing MODS versus Xpert for diagnosis of active pulmonary TB in HIV infected patients in Uganda. A decision analysis model comparing MODS versus Xpert for TB diagnosis was used. Costs were estimated by measuring and valuing relevant resources required to perform the MODS and Xpert tests. Diagnostic accuracy data of the tests were obtained from systematic reviews involving HIV infected patients. We calculated base values for unit costs and varied several assumptions to obtain the range estimates. Cost-effectiveness was expressed as costs per TB patient diagnosed for each of the two diagnostic strategies. Base case analysis was performed using the base estimates for unit cost and diagnostic accuracy of the tests. Sensitivity analysis was performed using a range of value estimates for resources, prevalence, number of tests and diagnostic accuracy. The unit cost of MODS was US$ 6.53 versus US$ 12.41 of Xpert. Consumables accounted for 59 % (US$ 3.84 of 6.53) of the unit cost for MODS and 84 % (US$ 10.37 of 12.41) of the unit cost for Xpert. The cost-effectiveness ratio of the algorithm using MODS was US$ 34 per TB patient diagnosed compared to US$ 71 of the algorithm using Xpert. The algorithm using MODS was more cost-effective compared to the algorithm using Xpert for a wide range of different values of accuracy, cost and TB prevalence. The threshold cost at which the algorithm using Xpert became optimal over the algorithm using MODS was US$ 5.92. MODS versus Xpert was more cost-effective for the diagnosis of PTB among HIV patients in our setting. Efforts to scale-up MODS therefore need to be explored.
However, since other non-economic factors may still favour the use of Xpert, the current cost of the Xpert cartridge needs to be reduced by more than half in order to make it economically competitive with MODS.
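The headline cost-effectiveness ratios are consistent with a simple cost-per-case-detected calculation. The prevalence and sensitivity figures below are hypothetical values chosen to land near the reported ratios, not numbers from the paper's decision model:

```python
def cost_per_tb_diagnosed(unit_cost, prevalence, sensitivity):
    """Cost per TB patient diagnosed for a single-test algorithm:
    total testing cost spread over the true cases the test detects."""
    return unit_cost / (prevalence * sensitivity)

# Unit costs from the abstract; 25% prevalence among presumptive cases
# and per-test sensitivities are hypothetical illustration values.
mods = cost_per_tb_diagnosed(6.53, 0.25, 0.77)
xpert = cost_per_tb_diagnosed(12.41, 0.25, 0.70)
print(round(mods), round(xpert))  # 34 71, near the reported US$ 34 and US$ 71
```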
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K
2016-09-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.
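The "expected utility versus expected disutility" interpretation can be illustrated for the simple binary-detection special case: sweep a hypothetical binormal ROC curve and pick the operating point that maximizes linear expected utility. The prevalence, utility values, and detectability are arbitrary illustration choices, not the paper's:

```python
import numpy as np
from scipy.stats import norm

def expected_utility(tpf, fpf, prevalence, u_tp, u_fn, u_tn, u_fp):
    """Linear expected utility of a binary-detection operating point
    (standard decision-theoretic form; values here are illustrative)."""
    return (prevalence * (u_tp * tpf + u_fn * (1 - tpf))
            + (1 - prevalence) * (u_tn * (1 - fpf) + u_fp * fpf))

# Hypothetical binormal ROC curve with detectability d' = 1.5.
fpf = np.linspace(0.0, 1.0, 501)
tpf = norm.cdf(norm.ppf(fpf) + 1.5)
eu = expected_utility(tpf, fpf, 0.1, 1.0, 0.0, 1.0, 0.0)
best = np.argmax(eu)
print(0.0 < fpf[best] < 0.5)  # True: low prevalence pushes the optimum left
```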
Nguyen, Thuy Trang; Schäfer, Helmut; Timmesfeld, Nina
2013-05-01
An index measuring the utility of testing a DNA marker before deciding between two alternative treatments is proposed which can be estimated from pharmaco-epidemiological case-control or cohort studies. In the case-control design, external estimates of the prevalence of the disease and of the frequency of the genetic risk variant are required for estimating the utility index. Formulas for point and interval estimates are derived. Empirical coverage probabilities of the confidence intervals were estimated under different scenarios of disease prevalence, prevalence of drug use, and population frequency of the genetic variant. To illustrate our method, we re-analyse pharmaco-epidemiological case-control data on oral contraceptive intake and venous thrombosis in carriers and non-carriers of the factor V Leiden mutation. We also re-analyse cross-sectional data from the Framingham study on a gene-diet interaction between an APOA2 polymorphism and high saturated fat intake on obesity. We conclude that the utility index may be helpful to evaluate and appraise the potential clinical and public health relevance of gene-environment interaction effects detected in genomic and candidate gene association studies and may be a valuable decision support for designing prospective studies on the clinical utility. © 2013 Wiley Periodicals, Inc.
Distinguishing body mass and activity level from the lower limb: can entheses diagnose obesity?
Godde, Kanya; Taylor, Rebecca Wilson
2013-03-10
The ability to estimate body size from the skeleton has broad applications, but is especially important to the forensic community when identifying unknown skeletal remains. This research investigates the utility of using entheses/muscle skeletal markers of the lower limb to estimate body size and to classify individuals into average, obese, and active categories, while using a biomechanical approach to interpret the results. Eighteen muscle attachment sites of the lower limb, known to be involved in the sit-to-stand transition, were scored for robusticity and stress in 105 white males (aged 31-81 years) from the William M. Bass Donated Skeletal Collection. Both logistic regression and log linear models were applied to the data to (1) test the utility of entheses as an indicator of body weight and activity level, and (2) to generate classification percentages that speak to the accuracy of the method. Thirteen robusticity scores differed significantly between the groups, but classification percentages were only slightly greater than chance. However, clear differences could be seen between the average and obese and the average and active groups. Stress scores showed no value in discriminating between groups. These results were interpreted in relation to biomechanical forces at the microscopic and macroscopic levels. Even though robusticity alone is not able to classify individuals well, its significance may show greater value when incorporated into a model that has multiple skeletal indicators. Further research needs to evaluate a larger sample and incorporate several lines of evidence to improve classification rates. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
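The classification workflow described (entheseal scores in, group labels out, accuracy compared with chance) can be sketched with scikit-learn on synthetic data. The sample size and number of attachment sites mirror the study design, but the group effect sizes are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the study design: 105 individuals, 18 entheseal
# robusticity scores, three groups (0=average, 1=obese, 2=active).
rng = np.random.default_rng(3)
n_individuals, n_sites = 105, 18
groups = rng.integers(0, 3, size=n_individuals)
scores = rng.normal(0.0, 1.0, size=(n_individuals, n_sites)) + 0.5 * groups[:, None]

# Cross-validated multinomial logistic regression; chance accuracy is 1/3.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, scores, groups, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```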
NASA Technical Reports Server (NTRS)
Adler, Robert F.; Kidd, Christopher; Petty, Grant; Morrissey, Mark; Goodman, H. Michael; Einaudi, Franco (Technical Monitor)
2000-01-01
A set of global, monthly rainfall products has been intercompared to understand the quality and utility of the estimates. The products include 25 observational (satellite-based), four model and two climatological products. The results of the intercomparison indicate a very large range (factor of two or three) of values when all products are considered. The range of values is reduced considerably when the set of observational products is limited to those considered quasi-standard. The model products perform significantly more poorly in the tropics, but are competitive with satellite-based fields in mid-latitudes over land. Over ocean, products are compared to frequency of precipitation from ship observations. The evaluation of the observational products points to merged data products (including rain gauge information) as providing the overall best results.
Observed effects of soil organic matter content on the microwave emissivity of soils
NASA Technical Reports Server (NTRS)
O'Neill, P. E.; Jackson, T. J.
1990-01-01
In order to determine the significance of organic matter content on the microwave emissivity of soils when estimating soil moisture, field experiments were conducted in which 1.4 GHz microwave emissivity data were collected over test plots of sandy loam soil with different organic matter levels (1.8, 4.0, and 6.1 percent) for a range of soil moisture values. Analyses of the observed data show only minor variation in microwave emissivity due to a change in organic matter content at a given moisture level for soils with similar texture and structure. Predictions of microwave emissivity made using a dielectric model for aggregated soils exhibit the same trends and type of response as the measured data when appropriate values for the input parameters were utilized.
Observed effects of soil organic matter content on the microwave intensity of soils
NASA Technical Reports Server (NTRS)
Jackson, T. J.; Oneill, P. E.
1988-01-01
In order to determine the significance of organic matter content on the microwave emissivity of soils when estimating soil moisture, field experiments were conducted in which 1.4 GHz microwave emissivity data were collected over test plots of sandy loam soil with different organic matter levels (1.8, 4.0, and 6.1 percent) for a range of soil moisture values. Analyses of the observed data show only minor variation in microwave emissivity due to a change in organic matter content at a given moisture level for soils with similar texture and structure. Predictions of microwave emissivity made using a dielectric model for aggregated soils exhibit the same trends and type of response as the measured data when appropriate values for the input parameters were utilized.
Dutton, Daniel J; McLaren, Lindsay
2014-05-06
National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18-65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23–28 kg/m² BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report.
If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association.
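The sensitivity/specificity comparison between self-reported and corrected BMI can be sketched on synthetic data. The additive 1 kg/m² under-reporting bias and the one-line correction below are hypothetical stand-ins for the paper's regression-based equations:

```python
import numpy as np

def sens_spec(predicted_bmi, measured_bmi, cutoff=30.0):
    """Sensitivity and specificity of an obesity classification
    (BMI >= cutoff), with measured BMI as the reference standard."""
    pred = predicted_bmi >= cutoff
    truth = measured_bmi >= cutoff
    sens = (pred & truth).sum() / truth.sum()
    spec = (~pred & ~truth).sum() / (~truth).sum()
    return sens, spec

# Synthetic illustration of systematic under-reporting and its correction.
rng = np.random.default_rng(4)
measured = rng.normal(27.0, 5.0, size=5000)
self_report = measured - 1.0 + rng.normal(0.0, 0.5, size=5000)
corrected = self_report + 1.0          # a simple additive correction
s_raw, _ = sens_spec(self_report, measured)
s_cor, _ = sens_spec(corrected, measured)
print(s_cor > s_raw)  # True: correction recovers sensitivity lost to bias
```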