Sample records for evaluating value-at-risk models

  1. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 to July 2010 using a two-component univariate normal mixture distribution model. First, we present the application of the normal mixture distribution model in empirical finance, where we fit it to our real data. Second, we present its application in risk analysis, where we use it to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
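    For a normal mixture, VaR requires only a one-dimensional root-find on the mixture CDF, and CVaR then follows in closed form from the per-component partial means. A minimal sketch of this calculation; the two-regime weights, means, and volatilities below are illustrative placeholders, not the paper's fitted FBMKLCI parameters:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def mixture_var_cvar(w, mu, sigma, alpha=0.05):
    """VaR and CVaR (reported as positive losses) at tail probability
    alpha, for returns following a normal mixture with weights w,
    means mu and standard deviations sigma."""
    w, mu, sigma = (np.asarray(a, dtype=float) for a in (w, mu, sigma))
    mix_cdf = lambda x: float(np.sum(w * norm.cdf((x - mu) / sigma)))
    # alpha-quantile of the return distribution: the root of F(x) = alpha
    q = brentq(lambda x: mix_cdf(x) - alpha, -10.0, 10.0)
    z = (q - mu) / sigma
    # E[R 1{R <= q}] has a closed form for each normal component
    partial_mean = np.sum(w * (mu * norm.cdf(z) - sigma * norm.pdf(z)))
    return -q, -partial_mean / alpha   # (VaR, CVaR)

# illustrative two-regime parameters: a calm and a turbulent state
var95, cvar95 = mixture_var_cvar(w=[0.9, 0.1],
                                 mu=[0.001, -0.002],
                                 sigma=[0.01, 0.04])
```

    The heavier-tailed second component is what lets the mixture capture the leptokurtosis the abstract mentions; CVaR always sits beyond VaR because it averages the whole tail.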

  2. Quantile uncertainty and value-at-risk model risk.

    PubMed

    Alexander, Carol; Sarabia, José María

    2012-08-01

    This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks. © 2012 Society for Risk Analysis.

  3. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study showed that MFVaR provides more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.

  4. Estimation of Value-at-Risk for Energy Commodities via CAViaR Model

    NASA Astrophysics Data System (ADS)

    Xiliang, Zhao; Xi, Zhu

    This paper uses the Conditional Autoregressive Value at Risk (CAViaR) model proposed by Engle and Manganelli (2004) to evaluate the value-at-risk of daily spot prices of Brent and West Texas Intermediate (WTI) crude oil over the period May 21, 1987 to November 18, 2008. The accuracy of the estimates from the CAViaR, Normal-GARCH, and GED-GARCH models is then compared. The results show that all the methods do a good job at the low confidence level (95%): GED-GARCH is best for the spot WTI price, while Normal-GARCH and Adaptive-CAViaR are best for the spot Brent price. At the high confidence level (99%), Normal-GARCH does a good job for spot WTI, and GED-GARCH and the four CAViaR specifications do well for the spot Brent price, whereas Normal-GARCH does badly for spot Brent. The results suggest that CAViaR performs as well as GED-GARCH, since CAViaR directly models the quantile autoregression, but it does not outperform GED-GARCH, although it does outperform Normal-GARCH.

  5. Estimating the Value-at-Risk for some stocks at the capital market in Indonesia based on ARMA-FIGARCH models

    NASA Astrophysics Data System (ADS)

    Sukono; Lesmana, E.; Susanti, D.; Napitupulu, H.; Hidayat, Y.

    2017-11-01

    Value-at-Risk has become a standard measurement that financial institutions must carry out for both internal and regulatory purposes. In this paper, the Value-at-Risk of some stocks is estimated using an econometric modelling approach. We assume that stock returns follow a time series model: the mean is estimated with ARMA models, while the variance is estimated with FIGARCH models. The mean and variance estimators are then used to estimate the Value-at-Risk. The analysis shows that for the five stocks PRUF, BBRI, MPPA, BMRI, and INDF, the Value-at-Risk values obtained are 0.01791, 0.06037, 0.02550, 0.06030, and 0.02585, respectively. Since Value-at-Risk represents the maximum expected loss of each stock at the 95% confidence level, it can be taken into consideration when determining investment policy on stocks.
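    Once the conditional mean and variance have been forecast (here by ARMA and FIGARCH respectively), the one-step-ahead VaR is just the negated alpha-quantile of the implied conditional return distribution. A minimal sketch under a normal-innovations assumption; the numerical forecasts in the usage line are illustrative, not the paper's estimates:

```python
from scipy.stats import norm

def parametric_var(mu_hat, sigma_hat, alpha=0.05):
    """One-step-ahead VaR (as a positive loss) from a conditional-mean
    forecast (e.g. ARMA) and a conditional-volatility forecast
    (e.g. FIGARCH), assuming normal innovations."""
    return -(mu_hat + norm.ppf(alpha) * sigma_hat)

# illustrative forecasts: daily mean 0.04%, daily volatility 1.5%
var95 = parametric_var(mu_hat=0.0004, sigma_hat=0.015)
```

    Fitting FIGARCH itself requires a specialized routine (long-memory fractional differencing of the squared residuals), but the VaR step on top of any fitted mean/variance pair reduces to this one line.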

  6. Risk evaluation on leading companies in property and real estate subsector at IDX: A Value-at-Risk with ARMAX-GARCHX approach and duration test

    NASA Astrophysics Data System (ADS)

    Dwi Prastyo, Dedy; Handayani, Dwi; Fam, Soo-Fen; Puteri Rahayu, Santi; Suhartono; Luh Putu Satyaning Pradnya Paramita, Ni

    2018-03-01

    Risk assessment and evaluation is essential for financial institutions to measure the potential risk of their counterparties. From the middle of 2016 until the first quarter of 2017, the Indonesian government ran a national program called Tax Amnesty. One subsector with the potential to benefit from the Tax Amnesty program is property and real estate. This work evaluates the risk of the top five companies by capital share listed on the Indonesia Stock Exchange (IDX). To do this, Value-at-Risk (VaR) with an ARMAX-GARCHX approach is employed. The ARMAX-GARCHX model simultaneously captures the adaptive mean and variance of each company's stock return, taking into account exogenous variables, i.e. the IDR/USD exchange rate and the Jakarta Composite Index (JCI). The risk is evaluated in a moving-window scheme. Risk evaluation using the 5% quantile with a window size of 500 trading days performs better than the other scenarios. In addition, a duration test is used to test the dependency between shortfalls; it indicates that the series of shortfalls are independent.

  7. Forecasting the value-at-risk of Chinese stock market using the HARQ model and extreme value theory

    NASA Astrophysics Data System (ADS)

    Liu, Guangqiang; Wei, Yu; Chen, Yongfei; Yu, Jiang; Hu, Yang

    2018-06-01

    Using intraday data of the CSI300 index, this paper discusses value-at-risk (VaR) forecasting of the Chinese stock market from the perspective of high-frequency volatility models. First, we measure the realized volatility (RV) with 5-minute high-frequency returns of the CSI300 index and then model it with the newly introduced heterogeneous autoregressive quarticity (HARQ) model, which can handle the time-varying coefficients of the HAR model. Second, we forecast the out-of-sample VaR of the CSI300 index by combining the HARQ model and extreme value theory (EVT). Finally, using several popular backtesting methods, we compare the VaR forecasting accuracy of HARQ model with other traditional HAR-type models, such as HAR, HAR-J, CHAR, and SHAR. The empirical results show that the novel HARQ model can beat other HAR-type models in forecasting the VaR of the Chinese stock market at various risk levels.

  8. Clearing margin system in the futures markets—Applying the value-at-risk model to Taiwanese data

    NASA Astrophysics Data System (ADS)

    Chiu, Chien-Liang; Chiang, Shu-Mei; Hung, Jui-Cheng; Chen, Yu-Lung

    2006-07-01

    This article sets out to investigate whether the TAIFEX has an adequate clearing margin adjustment system, using the unconditional coverage test, the conditional coverage test and the mean relative scaled bias to assess the performance of three value-at-risk (VaR) models (the TAIFEX, RiskMetrics and GARCH-t). For the same model, original and absolute returns are compared to explore which more accurately captures the true risk. For the same return, daily and tiered adjustment methods are examined to evaluate which corresponds best to risk. The results indicate that the clearing margin adjustment of the TAIFEX cannot reflect true risks. The adjustment rules, including the use of absolute returns and tiered adjustment of the clearing margin, have distorted VaR-based margin requirements. The results suggest that the TAIFEX should use original returns to compute VaR and a daily adjustment system to set the clearing margin. This approach would improve the efficiency of funds operation and the liquidity of the futures markets.
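    The unconditional coverage test used here (and in several of the other records) is typically Kupiec's proportion-of-failures likelihood-ratio test, which asks whether the observed VaR violation rate is consistent with the nominal tail probability. A minimal sketch, assuming a simple binomial violation count:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, days, p=0.05):
    """Kupiec's proportion-of-failures (unconditional coverage) test.
    violations: number of days the loss exceeded VaR, out of `days`;
    p: the nominal VaR tail probability. Returns (LR statistic, p-value)."""
    x, pi = violations, violations / days
    def loglik(q):                  # binomial log-likelihood at rate q
        ll = 0.0
        if x > 0:
            ll += x * np.log(q)
        if x < days:
            ll += (days - x) * np.log1p(-q)
        return ll
    # LR of the nominal rate p against the observed rate pi; chi2(1) under H0
    lr = -2.0 * (loglik(p) - loglik(pi))
    return lr, chi2.sf(lr, df=1)
```

    The conditional coverage test (Christoffersen) extends this by adding an independence component for clustered violations; the LR statistics are additive.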

  9. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method for constructing stocks' reference groups; the method is called quartile method. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.

  10. Occupational health and safety: Designing and building with MACBETH a value risk-matrix for evaluating health and safety risks

    NASA Astrophysics Data System (ADS)

    Lopes, D. F.; Oliveira, M. D.; Costa, C. A. Bana e.

    2015-05-01

    Risk matrices (RMs) are commonly used to evaluate health and safety risks. Nonetheless, they violate some theoretical principles that compromise their feasibility and use. This study describes how multiple criteria decision analysis methods have been used to improve the design and the deployment of RMs to evaluate health and safety risks at the Occupational Health and Safety Unit (OHSU) of the Regional Health Administration of Lisbon and Tagus Valley. ‘Value risk-matrices’ (VRMs) are built with the MACBETH approach in four modelling steps: a) structuring risk impacts, involving the construction of descriptors of impact that link risk events with health impacts and are informed by scientific evidence; b) generating a value measurement scale of risk impacts, by applying the MACBETH-Choquet procedure; c) building a system for eliciting subjective probabilities that makes use of a numerical probability scale that was constructed with MACBETH qualitative judgments on likelihood; d) and defining a classification colouring scheme for the VRM. A VRM built with OHSU members was implemented in a decision support system which will be used by OHSU members to evaluate health and safety risks and to identify risk mitigation actions.

  11. Refining value-at-risk estimates using a Bayesian Markov-switching GJR-GARCH copula-EVT model.

    PubMed

    Sampid, Marius Galabe; Hasim, Haslifah M; Dai, Hongsheng

    2018-01-01

    In this paper, we propose a model for forecasting Value-at-Risk (VaR) using a Bayesian Markov-switching GJR-GARCH(1,1) model with skewed Student's-t innovation, copula functions and extreme value theory. A Bayesian Markov-switching GJR-GARCH(1,1) model that identifies non-constant volatility over time and allows the GARCH parameters to vary over time following a Markov process, is combined with copula functions and EVT to formulate the Bayesian Markov-switching GJR-GARCH(1,1) copula-EVT VaR model, which is then used to forecast the level of risk on financial asset returns. We further propose a new method for threshold selection in EVT analysis, which we term the hybrid method. Empirical and back-testing results show that the proposed VaR models capture VaR reasonably well in periods of calm and in periods of crisis.

  12. Exchangeability, extreme returns and Value-at-Risk forecasts

    NASA Astrophysics Data System (ADS)

    Huang, Chun-Kai; North, Delia; Zewotir, Temesgen

    2017-07-01

    In this paper, we propose a new approach to extreme value modelling for the forecasting of Value-at-Risk (VaR). In particular, the block maxima and the peaks-over-threshold methods are generalised to exchangeable random sequences. This caters for the dependencies, such as serial autocorrelation, of financial returns observed empirically. In addition, this approach allows for parameter variations within each VaR estimation window. Empirical prior distributions of the extreme value parameters are obtained by using resampling procedures. We compare the results of our VaR forecasts with those of the unconditional extreme value theory (EVT) approach and the conditional GARCH-EVT model for robust conclusions.

  13. [Establishment of risk evaluation model of peritoneal metastasis in gastric cancer and its predictive value].

    PubMed

    Zhao, Junjie; Zhou, Rongjian; Zhang, Qi; Shu, Ping; Li, Haojie; Wang, Xuefei; Shen, Zhenbin; Liu, Fenglin; Chen, Weidong; Qin, Jing; Sun, Yihong

    2017-01-25

    .2%). The neutrophil/lymphocyte ratio (NLR) of 26 patients was ≥ 2.37 (26/231, 11.3%). Multivariate analysis showed that Lauren classification (HR=8.95, 95%CI: 1.32-60.59, P=0.025), CA125 (HR=17.45, 95%CI: 5.54-54.89, P=0.001), CA72-4 (HR=20.06, 95%CI: 5.05-79.68, P=0.001), and NLR (HR=4.16, 95%CI: 1.17-14.75, P=0.032) were independent risk factors for peritoneal metastasis in gastric cancer. In the nomogram, the highest score was 241, comprising diffuse or mixed Lauren classification (54 points), CA125 ≥ 35 kU/L (66 points), CA72-4 ≥ 10 kU/L (100 points), and NLR ≥ 2.37 (21 points), representing the highest risk of peritoneal metastasis (more than 90%). The AUC of the nomogram was 0.912, which was superior to any single variable (AUC of Lauren classification: 0.678; CA125: 0.720; CA72-4: 0.792; NLR: 0.613; all P=0.000). The total score of the nomogram increased with TNM stage, and was highest in the peritoneal metastasis group (F=49.1, P=0.000). When the cut-off value calculated by ROC analysis was set at 140, the model best balanced sensitivity (0.79) and specificity (0.87). Only 5% of patients had peritoneal metastasis when their nomogram scores were lower than 140, while 58% of patients had peritoneal metastasis when their scores were ≥ 140 (χ2=69.1, P=0.000). The risk evaluation model established with Lauren classification, CA125, CA72-4 and NLR can effectively predict the risk of peritoneal metastasis in gastric cancer, and provides a reference for preoperative staging and the choice of therapeutic strategy.

  14. Multifractality and value-at-risk forecasting of exchange rates

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Kinateder, Harald; Wagner, Niklas

    2014-05-01

    This paper addresses market risk prediction for high frequency foreign exchange rates under nonlinear risk scaling behaviour. We use a modified version of the multifractal model of asset returns (MMAR) where trading time is represented by the series of volume ticks. Our dataset consists of 138,418 5-min round-the-clock observations of EUR/USD spot quotes and trading ticks during the period January 5, 2006 to December 31, 2007. Considering fat-tails, long-range dependence as well as scale inconsistency with the MMAR, we derive out-of-sample value-at-risk (VaR) forecasts and compare our approach to historical simulation as well as a benchmark GARCH(1,1) location-scale VaR model. Our findings underline that the multifractal properties in EUR/USD returns in fact have notable risk management implications. The MMAR approach is a parsimonious model which produces admissible VaR forecasts at the 12-h forecast horizon. For the daily horizon, the MMAR outperforms both alternatives based on conditional as well as unconditional coverage statistics.

  15. The social values at risk from sea-level rise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Sonia, E-mail: sonia.graham@unimelb.edu.au; Barnett, Jon, E-mail: jbarn@unimelb.edu.au; Fincher, Ruth, E-mail: r.fincher@unimelb.edu.au

    Analysis of the risks of sea-level rise favours conventionally measured metrics such as the area of land that may be subsumed, the numbers of properties at risk, and the capital values of assets at risk. Despite this, it is clear that there exist many less material but no less important values at risk from sea-level rise. This paper re-theorises these multifarious social values at risk from sea-level rise, by explaining their diverse nature, and grounding them in the everyday practices of people living in coastal places. It is informed by a review and analysis of research on social values from within the fields of social impact assessment, human geography, psychology, decision analysis, and climate change adaptation. From this we propose that it is the ‘lived values’ of coastal places that are most at risk from sea-level rise. We then offer a framework that groups these lived values into five types: those that are physiological in nature, and those that relate to issues of security, belonging, esteem, and self-actualisation. This framework of lived values at risk from sea-level rise can guide empirical research investigating the social impacts of sea-level rise, as well as the impacts of actions to adapt to sea-level rise. It also offers a basis for identifying the distribution of related social outcomes across populations exposed to sea-level rise or sea-level rise policies.

  16. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with the wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the RiskMetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number-of-violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, so this new model can be used by financial institutions as well.
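    The generalized-Pareto tail estimator that underlies all of the EVT-based VaR models in these records can be sketched as follows. This sketch uses a plain empirical quantile as the threshold (the wavelet-based threshold selection is this paper's contribution and is not reproduced here), and it assumes a non-zero fitted shape parameter:

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(losses, u, alpha=0.01):
    """Peaks-over-threshold VaR: fit a generalized Pareto distribution
    (GPD) to the excesses of `losses` over threshold u, then invert the
    tail estimator at tail probability alpha. Assumes shape xi != 0."""
    losses = np.asarray(losses, dtype=float)
    excesses = losses[losses > u] - u
    n, n_u = len(losses), len(excesses)
    xi, _, beta = genpareto.fit(excesses, floc=0)   # location fixed at 0
    # standard POT tail formula: VaR = u + (beta/xi) * ((alpha*n/n_u)^(-xi) - 1)
    return u + (beta / xi) * ((alpha * n / n_u) ** (-xi) - 1.0)
```

    A typical choice sets u at the empirical 95% quantile of the losses and then extrapolates to the 99% or 99.9% level, which is exactly where the GPD fit earns its keep over the raw empirical quantile.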

  17. Measuring Value-at-Risk and Expected Shortfall of crude oil portfolio using extreme value theory and vine copula

    NASA Astrophysics Data System (ADS)

    Yu, Wenhua; Yang, Kun; Wei, Yu; Lei, Likun

    2018-01-01

    Volatilities of crude oil prices have important impacts on the steady and sustainable development of the world real economy. It is thus of great academic and practical significance to model and measure the volatility and risk of crude oil markets accurately. This paper aims to measure the Value-at-Risk (VaR) and Expected Shortfall (ES) of a portfolio consisting of four crude oil assets by using GARCH-type models, extreme value theory (EVT) and vine copulas. The backtesting results show that the combination of GARCH-type-EVT models and vine copula methods can produce accurate risk measures of the oil portfolio. The mixed R-vine copula is more flexible and superior to the other vine copulas. However, the different GARCH-type models, which capture the long-memory and/or leverage effects of oil price volatilities, offer similar marginal distributions of the oil returns.

  18. EVALUATING RISK-PREDICTION MODELS USING DATA FROM ELECTRONIC HEALTH RECORDS.

    PubMed

    Wang, L E; Shaw, Pamela A; Mathelier, Hansie M; Kimmel, Stephen E; French, Benjamin

    2016-03-01

    The availability of data from electronic health records facilitates the development and evaluation of risk-prediction models, but estimation of prediction accuracy could be limited by outcome misclassification, which can arise if events are not captured. We evaluate the robustness of prediction accuracy summaries, obtained from receiver operating characteristic curves and risk-reclassification methods, if events are not captured (i.e., "false negatives"). We derive estimators for sensitivity and specificity if misclassification is independent of marker values. In simulation studies, we quantify the potential for bias in prediction accuracy summaries if misclassification depends on marker values. We compare the accuracy of alternative prognostic models for 30-day all-cause hospital readmission among 4548 patients discharged from the University of Pennsylvania Health System with a primary diagnosis of heart failure. Simulation studies indicate that if misclassification depends on marker values, then the estimated accuracy improvement is also biased, but the direction of the bias depends on the direction of the association between markers and the probability of misclassification. In our application, 29% of the 1143 readmitted patients were readmitted to a hospital elsewhere in Pennsylvania, which reduced prediction accuracy. Outcome misclassification can result in erroneous conclusions regarding the accuracy of risk-prediction models.

  19. Development and validation of a risk model for identification of non-neutropenic, critically ill adult patients at high risk of invasive Candida infection: the Fungal Infection Risk Evaluation (FIRE) Study.

    PubMed

    Harrison, D; Muskett, H; Harvey, S; Grieve, R; Shahin, J; Patel, K; Sadique, Z; Allen, E; Dybowski, R; Jit, M; Edgeworth, J; Kibbler, C; Barnes, R; Soni, N; Rowan, K

    2013-02-01

    There is increasing evidence that invasive fungal disease (IFD) is more likely to occur in non-neutropenic patients in critical care units. A number of randomised controlled trials (RCTs) have evaluated antifungal prophylaxis in non-neutropenic, critically ill patients, demonstrating a reduction in the risk of proven IFD and suggesting a reduction in mortality. It is necessary to establish a method to identify and target antifungal prophylaxis at those patients at highest risk of IFD, who stand to benefit most from any antifungal prophylaxis strategy. To develop and validate risk models to identify non-neutropenic, critically ill adult patients at high risk of invasive Candida infection, who would benefit from antifungal prophylaxis, and to assess the cost-effectiveness of targeting antifungal prophylaxis to high-risk patients based on these models. Systematic review, prospective data collection, statistical modelling, economic decision modelling and value of information analysis. Ninety-six UK adult general critical care units. Consecutive admissions to participating critical care units. None. Invasive fungal disease, defined as a blood culture or sample from a normally sterile site showing yeast/mould cells in a microbiological or histopathological report. For statistical and economic modelling, the primary outcome was invasive Candida infection, defined as IFD-positive for Candida species. Systematic review: Thirteen articles exploring risk factors, risk models or clinical decision rules for IFD in critically ill adult patients were identified. Risk factors reported to be significantly associated with IFD were included in the final data set for the prospective data collection. Data were collected on 60,778 admissions between July 2009 and March 2011. Overall, 383 patients (0.6%) were admitted with or developed IFD. The majority of IFD patients (94%) were positive for Candida species. The most common site of infection was blood (55%). The incidence of IFD

  20. Two-stage stochastic unit commitment model including non-generation resources with conditional value-at-risk constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yuping; Zheng, Qipeng P.; Wang, Jianhui

    2014-11-01

    This paper presents a two-stage stochastic unit commitment (UC) model, which integrates non-generation resources such as demand response (DR) and energy storage (ES) while including risk constraints to balance between cost and system reliability due to the fluctuation of variable generation such as wind and solar power. This paper uses conditional value-at-risk (CVaR) measures to model risks associated with the decisions in a stochastic environment. In contrast to chance-constrained models requiring extra binary variables, risk constraints based on CVaR only involve linear constraints and continuous variables, making the model more computationally attractive. The proposed models with risk constraints are able to avoid over-conservative solutions but still ensure system reliability, represented by loss of loads. Numerical experiments are then conducted to study the effects of non-generation resources on generator schedules and the difference in total expected generation costs with risk consideration. Sensitivity analysis based on reliability parameters is also performed to test the effects of confidence levels and load-shedding loss allowances on generation cost reduction.

  1. A Formative Evaluation of the Children, Youth, and Families at Risk Coaching Model

    ERIC Educational Resources Information Center

    Olson, Jonathan R.; Smith, Burgess; Hawkey, Kyle R.; Perkins, Daniel F.; Borden, Lynne M.

    2016-01-01

    In this article, we describe the results of a formative evaluation of a coaching model designed to support recipients of funding through the Children, Youth, and Families at Risk (CYFAR) initiative. Results indicate that CYFAR coaches draw from a variety of types of coaching and that CYFAR principal investigators (PIs) are generally satisfied with…

  2. Measuring daily Value-at-Risk of SSEC index: A new approach based on multifractal analysis and extreme value theory

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Chen, Wang; Lin, Yu

    2013-05-01

    Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and the EVT method outperform many GARCH-type models at high risk levels.

  3. Developing Risk Prediction Models for Kidney Injury and Assessing Incremental Value for Novel Biomarkers

    PubMed Central

    Kerr, Kathleen F.; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G.

    2014-01-01

    The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients’ risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. PMID:24855282

  4. Extensions of criteria for evaluating risk prediction models for public health applications.

    PubMed

    Pfeiffer, Ruth M

    2013-04-01

    We recently proposed two novel criteria to assess the usefulness of risk prediction models for public health applications. The proportion of cases followed, PCF(p), is the proportion of individuals who will develop disease who are included in the proportion p of individuals in the population at highest risk. The proportion needed to follow-up, PNF(q), is the proportion of the general population at highest risk that one needs to follow in order that a proportion q of those destined to become cases will be followed (Pfeiffer, R.M. and Gail, M.H., 2011. Two criteria for evaluating risk prediction models. Biometrics 67, 1057-1065). Here, we extend these criteria in two ways. First, we introduce two new criteria by integrating PCF and PNF over a range of values of q or p to obtain iPCF, the integrated PCF, and iPNF, the integrated PNF. A key assumption in the previous work was that the risk model is well calibrated. This assumption also underlies novel estimates of iPCF and iPNF based on observed risks in a population alone. The second extension is to propose and study estimates of PCF, PNF, iPCF, and iPNF that are consistent even if the risk models are not well calibrated. These new estimates are obtained from case-control data when the outcome prevalence in the population is known, and from cohort data, with baseline covariates and observed health outcomes. We study the efficiency of the various estimates and propose and compare tests for comparing two risk models, both of which were evaluated in the same validation data.

  5. Valuing a Lifestyle Intervention for Middle Eastern Immigrants at Risk of Diabetes.

    PubMed

    Saha, Sanjib; Gerdtham, Ulf-G; Siddiqui, Faiza; Bennet, Louise

    2018-02-27

    Willingness-to-pay (WTP) techniques are increasingly being used in the healthcare sector for assessing the value of interventions. The objective of this study was to estimate WTP and its predictors in a randomized controlled trial of a lifestyle intervention exclusively targeting Middle Eastern immigrants living in Malmö, Sweden, who are at high risk of type 2 diabetes. We used the contingent valuation method to evaluate WTP. The questionnaire was designed following the payment-scale approach, and administered at the end of the trial, giving an ex-post perspective. We performed logistic regression and linear regression techniques to identify the factors associated with zero WTP value and positive WTP values. The intervention group had significantly higher average WTP than the control group (216 SEK vs. 127 SEK; p = 0.035; 1 U.S.$ = 8.52 SEK, 2015 price year) per month. The regression models demonstrated that being in the intervention group, acculturation, and self-employment were significant factors associated with positive WTP values. Male participants and lower-educated participants had a significantly higher likelihood of zero WTP. In this era of increased migration, our findings can help policy makers to take informed decisions to implement lifestyle interventions for immigrant populations.

  6. Valuing a Lifestyle Intervention for Middle Eastern Immigrants at Risk of Diabetes

    PubMed Central

    Siddiqui, Faiza

    2018-01-01

    Willingness-to-pay (WTP) techniques are increasingly being used in the healthcare sector for assessing the value of interventions. The objective of this study was to estimate WTP and its predictors in a randomized controlled trial of a lifestyle intervention exclusively targeting Middle Eastern immigrants living in Malmö, Sweden, who are at high risk of type 2 diabetes. We used the contingent valuation method to evaluate WTP. The questionnaire was designed following the payment-scale approach, and administered at the end of the trial, giving an ex-post perspective. We performed logistic regression and linear regression techniques to identify the factors associated with zero WTP value and positive WTP values. The intervention group had significantly higher average WTP than the control group (216 SEK vs. 127 SEK; p = 0.035; 1 U.S.$ = 8.52 SEK, 2015 price year) per month. The regression models demonstrated that being in the intervention group, acculturation, and self-employment were significant factors associated with positive WTP values. Male participants and lower-educated participants had a significantly higher likelihood of zero WTP. In this era of increased migration, our findings can help policy makers to take informed decisions to implement lifestyle interventions for immigrant populations. PMID:29495529

  7. Two criteria for evaluating risk prediction models

    PubMed Central

    Pfeiffer, R.M.; Gail, M.H.

    2010-01-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data. PMID:21155746

  8. Developing risk prediction models for kidney injury and assessing incremental value for novel biomarkers.

    PubMed

    Kerr, Kathleen F; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G; Parikh, Chirag R

    2014-08-07

    The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients' risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. Copyright © 2014 by the American Society of Nephrology.

  9. Value at Risk on Composite Price Share Index Stock Data

    NASA Astrophysics Data System (ADS)

    Oktaviarina, A.

    2018-01-01

    The Financial Services Authority declared the Let's Save Campaign in commemoration of World Savings Day, which fell on October 31, 2016. The Indonesia Stock Exchange greeted the activity enthusiastically with the slogan Let's Save The Stocks. A stock is a form of investment that is expected to yield benefits in the future despite carrying risks. Value at Risk (VaR) is a method that can measure the risk of a financial investment. The Composite Stock Price Index is the stock price index used by the Indonesia Stock Exchange as a benchmark of stock volatility in Indonesia. This study aimed to estimate Value at Risk (VaR) on closing-price Composite Stock Price Index data for the period 20 September 2016 until 20 September 2017. The Box-Pierce test gives a p-value of 0.9528, which is greater than α and thus indicates homoskedasticity. Value at Risk (VaR) with the variance-covariance method is Rp.3.054.916,07, meaning that with 99% confidence someone who invests Rp.100.000.000,00 faces at most Rp.3.054.916,07 as a maximum loss.
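    The variance-covariance calculation this record reports can be sketched in a few lines; the return series and resulting figure below are illustrative stand-ins, not the index data or the study's result.

```python
# Minimal sketch of variance-covariance (parametric normal) VaR:
# VaR = z_alpha * sigma * W, with sigma estimated from a return series.
from statistics import NormalDist, stdev

def var_parametric(returns, investment, confidence=0.99):
    """One-period Value at Risk assuming normally distributed returns."""
    z = NormalDist().inv_cdf(confidence)   # ~2.326 at the 99% level
    return z * stdev(returns) * investment

# Illustrative daily returns (not FBMKLCI/IDX data)
daily_returns = [0.004, -0.012, 0.007, -0.003, 0.010,
                 -0.008, 0.002, -0.015, 0.006, -0.001]
loss = var_parametric(daily_returns, 100_000_000)
print(f"99% VaR: {loss:,.0f}")  # maximum expected one-day loss at 99% confidence
```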

  10. Building a Values-Informed Mental Model for New Orleans Climate Risk Management.

    PubMed

    Bessette, Douglas L; Mayer, Lauren A; Cwik, Bryan; Vezér, Martin; Keller, Klaus; Lempert, Robert J; Tuana, Nancy

    2017-10-01

    Individuals use values to frame their beliefs and simplify their understanding when confronted with complex and uncertain situations. The high complexity and deep uncertainty involved in climate risk management (CRM) lead to individuals' values likely being coupled to and contributing to their understanding of specific climate risk factors and management strategies. Most mental model approaches, however, which are commonly used to inform our understanding of people's beliefs, ignore values. In response, we developed a "Values-informed Mental Model" research approach, or ViMM, to elicit individuals' values alongside their beliefs and determine which values people use to understand and assess specific climate risk factors and CRM strategies. Our results show that participants consistently used one of three values to frame their understanding of risk factors and CRM strategies in New Orleans: (1) fostering a healthy economy, wealth, and job creation, (2) protecting and promoting healthy ecosystems and biodiversity, and (3) preserving New Orleans' unique culture, traditions, and historically significant neighborhoods. While the first value frame is common in analyses of CRM strategies, the latter two are often ignored, despite their mirroring commonly accepted pillars of sustainability. Other values like distributive justice and fairness were prioritized differently depending on the risk factor or strategy being discussed. These results suggest that the ViMM method could be a critical first step in CRM decision-support processes and may encourage adoption of CRM strategies more in line with stakeholders' values. © 2017 Society for Risk Analysis.

  11. Literature Review on Modeling Cyber Networks and Evaluating Cyber Risks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelic, Andjelka; Campbell, Philip L

    The National Infrastructure Simulations and Analysis Center (NISAC) conducted a literature review on modeling cyber networks and evaluating cyber risks. The literature review explores where modeling is used in the cyber regime and ways that consequence and risk are evaluated. The relevant literature clusters in three different spaces: network security, cyber-physical, and mission assurance. In all approaches, some form of modeling is utilized at varying levels of detail, while the ability to understand consequence varies, as do interpretations of risk. This document summarizes the different literature viewpoints and explores their applicability to securing enterprise networks.

  12. On The Value at Risk Using Bayesian Mixture Laplace Autoregressive Approach for Modelling the Islamic Stock Risk Investment

    NASA Astrophysics Data System (ADS)

    Miftahurrohmah, Brina; Iriawan, Nur; Fithriasari, Kartika

    2017-06-01

    Stocks are financial instruments traded in the capital market that carry a high level of risk. Their risk is reflected in the uncertainty of the returns that investors have to accept in the future. The higher the risk faced, the higher the return to be gained. Therefore, the risk needs to be measured. Value at Risk (VaR), the most popular risk measurement method, often performs poorly when the pattern of returns is not uni-modal Normal. The calculation of risk using the VaR method with the Mixture Normal Autoregressive (MNAR) approach has previously been considered. This paper proposes a VaR method coupled with the Mixture Laplace Autoregressive (MLAR) model, implemented for analysing the returns of the three largest-capitalization Islamic stocks in JII, namely PT. Astra International Tbk (ASII), PT. Telekomunikasi Indonesia Tbk (TLMK), and PT. Unilever Indonesia Tbk (UNVR). Parameter estimation is performed by employing Bayesian Markov Chain Monte Carlo (MCMC) approaches.
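    Once mixture parameters are estimated (by MCMC in the paper), VaR is the lower α-quantile of the mixture return distribution, which can be inverted numerically. The sketch below uses a two-component Laplace mixture with made-up parameters, not the paper's posterior estimates.

```python
# Hedged sketch: VaR as the lower alpha-quantile of a two-component
# Laplace mixture return distribution, found by bisection on the CDF.
import math

def laplace_cdf(x, mu, b):
    if x < mu:
        return 0.5 * math.exp((x - mu) / b)
    return 1.0 - 0.5 * math.exp(-(x - mu) / b)

def mixture_cdf(x, weights, mus, bs):
    return sum(w * laplace_cdf(x, m, b) for w, m, b in zip(weights, mus, bs))

def var_quantile(alpha, weights, mus, bs, lo=-1.0, hi=1.0, tol=1e-10):
    """Return the alpha-quantile of the mixture (a negative return for
    small alpha); VaR is its magnitude."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mixture_cdf(mid, weights, mus, bs) < alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative mixture parameters (weights, locations, scales)
w, mu, b = [0.7, 0.3], [0.001, -0.002], [0.005, 0.015]
q05 = var_quantile(0.05, w, mu, b)
print(f"5% quantile of returns: {q05:.4f}; VaR = {-q05:.4f}")
```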

  13. Risk and value analysis of SETI

    NASA Technical Reports Server (NTRS)

    Billingham, J.

    1986-01-01

    The risks, values, and costs of the SETI project are evaluated and compared with those of the Viking project. Examination of the scientific values, side benefits, and costs of the two projects reveal that both projects provide equal benefits at equal costs. The probability of scientific and technical success is analyzed.

  14. Evaluating the risk of water distribution system failure: A shared frailty model

    NASA Astrophysics Data System (ADS)

    Clark, Robert M.; Thurnau, Robert C.

    2011-12-01

    Condition assessment (CA) modeling is drawing increasing interest as a technique that can assist in managing drinking water infrastructure. This paper develops a model based on the application of a Cox proportional hazard (PH)/shared frailty model and applies it to evaluating the risk of failure in drinking water networks, using data from the Laramie Water Utility (located in Laramie, Wyoming, USA). Using the risk model, a cost/benefit analysis incorporating the inspection value method (IVM) is used to assist in making improved repair, replacement and rehabilitation decisions for selected drinking water distribution system pipes. A separate model is developed to predict failures in prestressed concrete cylinder pipe (PCCP). Various currently available inspection technologies are presented and discussed.

  15. Multilevel models for evaluating the risk of pedestrian-motor vehicle collisions at intersections and mid-blocks.

    PubMed

    Quistberg, D Alex; Howard, Eric J; Ebel, Beth E; Moudon, Anne V; Saelens, Brian E; Hurvitz, Philip M; Curtin, James E; Rivara, Frederick P

    2015-11-01

    Walking is a popular form of physical activity associated with clear health benefits. Promoting safe walking for pedestrians requires evaluating the risk of pedestrian-motor vehicle collisions at specific roadway locations in order to identify where road improvements and other interventions may be needed. The objective of this analysis was to estimate the risk of pedestrian collisions at intersections and mid-blocks in Seattle, WA. The study used 2007-2013 pedestrian-motor vehicle collision data from police reports and detailed characteristics of the microenvironment and macroenvironment at intersection and mid-block locations. The primary outcome was the number of pedestrian-motor vehicle collisions over time at each location (incident rate ratio [IRR] and 95% confidence interval [95% CI]). Multilevel mixed effects Poisson models accounted for correlation within and between locations and census blocks over time. Analysis accounted for pedestrian and vehicle activity (e.g., residential density and road classification). In the final multivariable model, intersections with 4 segments or 5 or more segments had higher pedestrian collision rates compared to mid-blocks. Non-residential roads had significantly higher rates than residential roads, with principal arterials having the highest collision rate. The pedestrian collision rate was higher by 9% per 10 feet of street width. Locations with traffic signals had twice the collision rate of locations without a signal and those with marked crosswalks also had a higher rate. Locations with a marked crosswalk also had higher risk of collision. Locations with a one-way road or those with signs encouraging motorists to cede the right-of-way to pedestrians had fewer pedestrian collisions. Collision rates were higher in locations that encourage greater pedestrian activity (more bus use, more fast food restaurants, higher employment, residential, and population densities). 
Locations with higher intersection density had a lower
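    The incidence rate ratios the abstract reports compound multiplicatively, as Poisson model effects do; for example, the 9% higher collision rate per 10 feet of street width scales as a power of the extra width. The baseline rate below is a made-up placeholder.

```python
# Reading a Poisson IRR: a 9% higher collision rate per 10 feet of
# street width compounds multiplicatively with the extra width.

def expected_rate(base_rate, extra_width_ft, irr_per_10ft=1.09):
    """Expected collisions per year at a location extra_width_ft wider
    than the baseline, given the per-10-ft incidence rate ratio."""
    return base_rate * irr_per_10ft ** (extra_width_ft / 10)

print(expected_rate(0.5, 20))  # 0.5 * 1.09**2 collisions/year
```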

  16. Multilevel models for evaluating the risk of pedestrian-motor vehicle collisions at intersections and mid-blocks

    PubMed Central

    Quistberg, D. Alex; Howard, Eric J.; Ebel, Beth E.; Moudon, Anne V.; Saelens, Brian E.; Hurvitz, Philip M.; Curtin, James E.; Rivara, Frederick P.

    2015-01-01

    Walking is a popular form of physical activity associated with clear health benefits. Promoting safe walking for pedestrians requires evaluating the risk of pedestrian-motor vehicle collisions at specific roadway locations in order to identify where road improvements and other interventions may be needed. The objective of this analysis was to estimate the risk of pedestrian collisions at intersections and mid-blocks in Seattle, WA. The study used 2007-2013 pedestrian-motor vehicle collision data from police reports and detailed characteristics of the microenvironment and macroenvironment at intersection and mid-block locations. The primary outcome was the number of pedestrian-motor vehicle collisions over time at each location (incident rate ratio [IRR] and 95% confidence interval [95% CI]). Multilevel mixed effects Poisson models accounted for correlation within and between locations and census blocks over time. Analysis accounted for pedestrian and vehicle activity (e.g., residential density and road classification). In the final multivariable model, intersections with 4 segments or 5 or more segments had higher pedestrian collision rates compared to mid-blocks. Non-residential roads had significantly higher rates than residential roads, with principal arterials having the highest collision rate. The pedestrian collision rate was higher by 9% per 10 feet of street width. Locations with traffic signals had twice the collision rate of locations without a signal and those with marked crosswalks also had a higher rate. Locations with a marked crosswalk also had higher risk of collision. Locations with a one-way road or those with signs encouraging motorists to cede the right-of-way to pedestrians had fewer pedestrian collisions. Collision rates were higher in locations that encourage greater pedestrian activity (more bus use, more fast food restaurants, higher employment, residential, and population densities). 
Locations with higher intersection density had a lower

  17. A wildfire risk modeling system for evaluating landscape fuel treatment strategies

    Treesearch

    Alan Ager; Mark Finney; Andrew McMahan

    2006-01-01

    Despite a wealth of literature and models concerning wildfire risk, field units in Federal land management agencies lack a clear framework and operational tools to measure how risk might change from proposed fuel treatments. In an actuarial context, risk is defined as the expected value change from a fire, calculated as the product of (1) probability of a fire at a...
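    The actuarial definition the abstract quotes (risk as expected value change, the probability of a fire times the resulting value change) is a one-line computation; the inputs below are hypothetical placeholders.

```python
# Actuarial wildfire risk: probability of fire times the conditional
# change in resource value (negative for a loss).

def expected_value_change(p_fire, value_change_if_fire):
    """Expected value change from fire over the period p_fire refers to."""
    return p_fire * value_change_if_fire

print(expected_value_change(0.02, -150_000))  # expected loss in value units
```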

  18. An integrated approach to evaluating alternative risk prediction strategies: a case study comparing alternative approaches for preventing invasive fungal disease.

    PubMed

    Sadique, Z; Grieve, R; Harrison, D A; Jit, M; Allen, E; Rowan, K M

    2013-12-01

    This article proposes an integrated approach to the development, validation, and evaluation of new risk prediction models illustrated with the Fungal Infection Risk Evaluation study, which developed risk models to identify non-neutropenic, critically ill adult patients at high risk of invasive fungal disease (IFD). Our decision-analytical model compared alternative strategies for preventing IFD at up to three clinical decision time points (critical care admission, after 24 hours, and end of day 3), followed with antifungal prophylaxis for those judged "high" risk versus "no formal risk assessment." We developed prognostic models to predict the risk of IFD before critical care unit discharge, with data from 35,455 admissions to 70 UK adult, critical care units, and validated the models externally. The decision model was populated with positive predictive values and negative predictive values from the best-fitting risk models. We projected lifetime cost-effectiveness and expected value of partial perfect information for groups of parameters. The risk prediction models performed well in internal and external validation. Risk assessment and prophylaxis at the end of day 3 was the most cost-effective strategy at the 2% and 1% risk threshold. Risk assessment at each time point was the most cost-effective strategy at a 0.5% risk threshold. Expected values of partial perfect information were high for positive predictive values or negative predictive values (£11 million-£13 million) and quality-adjusted life-years (£11 million). It is cost-effective to formally assess the risk of IFD for non-neutropenic, critically ill adult patients. This integrated approach to developing and evaluating risk models is useful for informing clinical practice and future research investment. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Published by International Society for Pharmacoeconomics and Outcomes Research (ISPOR) All rights reserved.

  19. At-Risk Youth Appearance and Job Performance Evaluation

    ERIC Educational Resources Information Center

    Freeburg, Beth Winfrey; Workman, Jane E.

    2008-01-01

    The goal of this study was to identify the relationship of at-risk youth workplace appearance to other job performance criteria. Employers (n = 30; each employing from 1 to 17 youths) evaluated 178 at-risk high school youths who completed a paid summer employment experience. Appearance evaluations were significantly correlated with evaluations of…

  20. Persistent hemifacial spasm after microvascular decompression: a risk assessment model.

    PubMed

    Shah, Aalap; Horowitz, Michael

    2017-06-01

    Microvascular decompression (MVD) for hemifacial spasm (HFS) provides resolution of disabling symptoms such as eyelid twitching and muscle contractions of the entire hemiface. The primary aim of this study was to evaluate the predictive value of patient demographics and spasm characteristics on long-term outcomes, with or without intraoperative lateral spread response (LSR) as an additional variable in a risk assessment model. A retrospective study was undertaken to evaluate the associations of pre-operative patient characteristics, as well as of intraoperative LSR and the need for a staged procedure, with the presence of persistent or recurrent HFS at the time of hospital discharge and at follow-up. A risk assessment model was constructed with the inclusion of six clinically or statistically significant variables from the univariate analyses. A receiver operating characteristic curve was generated, and area under the curve was calculated to determine the strength of the predictive model. A risk assessment model was first created consisting of significant pre-operative variables (Model 1) (age >50, female gender, history of botulinum toxin use, platysma muscle involvement). This model demonstrated borderline predictive value for persistent spasm at discharge (AUC .60; p=.045) and fair predictive value at follow-up (AUC .75; p=.001). Intraoperative variables (e.g. LSR persistence) demonstrated little additive value (Model 2) (AUC .67). Patients with a higher risk score (three or greater) demonstrated greater odds of persistent HFS at the time of discharge (OR 1.5 [95%CI 1.16-1.97]; p=.035), as well as greater odds of persistent or recurrent spasm at the time of follow-up (OR 3.0 [95%CI 1.52-5.95]; p=.002). Conclusions: A risk assessment model consisting of pre-operative clinical characteristics is useful in prognosticating HFS persistence at follow-up.
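    The approach described (an additive score over binary pre-operative factors, evaluated by area under the ROC curve) can be sketched as below. The patient data are invented for illustration, not the study cohort, and the AUC uses the standard rank-based (Mann-Whitney) form.

```python
# Additive risk score over binary factors, evaluated with a rank-based
# AUC: the probability that a case outranks a non-case (ties count half).

def risk_score(age_over_50, female, botulinum_history, platysma_involved):
    """One point per pre-operative factor present (Model 1 style)."""
    return sum([age_over_50, female, botulinum_history, platysma_involved])

def auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

persistent = [3, 4, 2, 3]      # scores of patients with persistent spasm
resolved   = [1, 2, 0, 1, 2]   # scores of patients whose spasm resolved
print(auc(persistent, resolved))
```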

  1. A step function model to evaluate the real monetary value of man-sievert with real GDP.

    PubMed

    Na, Seong H; Kim, Sun G

    2009-01-01

    For use in a cost-benefit analysis to establish optimum levels of radiation protection in Korea under the ALARA principle, we introduce a discrete step function model to evaluate the monetary value of the man-sievert in real economic terms. The model formula, which is unique and country-specific, is composed of real GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values in real terms, differentiated with respect to the range of individual doses, which is more realistic and informative for application to radiation protection practice. GDP deflators reflect the economic situation of society. Finally, we suggest that the Korean model can be generalized simply to other countries without normalizing any country-specific factors.
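    The step-function idea (a higher alpha value for higher individual dose bands via an aversion factor) can be sketched as follows. All numbers here, including the dose bands and aversion factors, are illustrative assumptions, not the Korean model's actual parameters.

```python
# Sketch of a discrete step-function alpha-value model: the monetary
# value of one man-sievert rises with the individual dose band.

# (lower dose bound in mSv/year, aversion factor) -- illustrative steps
DOSE_BANDS = [(0.0, 1.0), (1.0, 2.0), (5.0, 4.0), (20.0, 8.0)]

def alpha_value(gdp_per_capita, risk_coefficient, dose_msv):
    """Monetary value (currency units per man-sievert) for exposures
    in the band containing dose_msv."""
    aversion = 1.0
    for lower, factor in DOSE_BANDS:
        if dose_msv >= lower:
            aversion = factor
    return gdp_per_capita * risk_coefficient * aversion

# 0.057/Sv is an ICRP-style nominal detriment coefficient; the GDP
# figure is a placeholder.
print(alpha_value(30_000, 0.057, dose_msv=0.5))   # lowest band
print(alpha_value(30_000, 0.057, dose_msv=10.0))  # higher band, larger alpha
```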

  2. A risk evaluation model and its application in online retailing trustfulness

    NASA Astrophysics Data System (ADS)

    Ye, Ruyi; Xu, Yingcheng

    2017-08-01

    Building a general model for risk evaluation in advance can improve the convenience, consistency and comparability of results when risk evaluations are repeated in the same area and for a similar purpose. One of the most convenient and common risk evaluation models is an index system consisting of several indices, their associated weights, and a crediting method. This article proposes a method for building a risk evaluation index system that guarantees a proportional relationship between the resulting credit and the expected risk loss, and provides an application example in online retailing.
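    The kind of index system the abstract describes can be sketched as a weighted sum of index scores. The index names, weights, and scores below are invented for illustration; they are not the article's actual system.

```python
# Minimal weighted-index credit: indices scored in [0, 1], fixed
# weights summing to 1, credit as the weighted sum.

WEIGHTS = {"data_security": 0.40, "delivery_reliability": 0.35,
           "complaint_handling": 0.25}

def credit(scores):
    """Weighted-sum credit in [0, 1]; higher credit corresponds to
    lower expected risk loss."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

retailer = {"data_security": 0.9, "delivery_reliability": 0.8,
            "complaint_handling": 0.6}
print(credit(retailer))
```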

  3. Evaluating Special Educator Effectiveness: Addressing Issues Inherent to Value-Added Modeling

    ERIC Educational Resources Information Center

    Steinbrecher, Trisha D.; Selig, James P.; Cosbey, Joanna; Thorstensen, Beata I.

    2014-01-01

    States are increasingly using value-added approaches to evaluate teacher effectiveness. There is much debate regarding whether these methods should be employed and, if employed, what role such methods should play in comprehensive teacher evaluation systems. In this article, we consider the use of value-added modeling (VAM) to evaluate special…

  4. 'Weather Value at Risk': A uniform approach to describe and compare sectoral income risks from climate change.

    PubMed

    Prettenthaler, Franz; Köberl, Judith; Bird, David Neil

    2016-02-01

    We extend the concept of 'Weather Value at Risk' - initially introduced to measure the economic risks resulting from current weather fluctuations - to describe and compare sectoral income risks from climate change. This is illustrated using the examples of wheat cultivation and summer tourism in (parts of) Sardinia. Based on climate scenario data from four different regional climate models we study the change in the risk of weather-related income losses between some reference (1971-2000) and some future (2041-2070) period. Results from both examples suggest an increase in weather-related risks of income losses due to climate change, which is somewhat more pronounced for summer tourism. Nevertheless, income from wheat cultivation is at much higher risk of weather-related losses than income from summer tourism, both under reference and future climatic conditions. A weather-induced loss of at least 5% - compared to the income associated with average reference weather conditions - shows a 40% (80%) probability of occurrence in the case of wheat cultivation, but only a 0.4% (16%) probability of occurrence in the case of summer tourism, given reference (future) climatic conditions. Whereas in the agricultural example increases in the weather-related income risks mainly result from an overall decrease in average wheat yields, the heightened risk in the tourism example stems mostly from a change in the weather-induced variability of tourism incomes. Since the extended 'Weather Value at Risk' concept captures impacts from changes in both the mean and the variability of the climate, it is a powerful tool for presenting and disseminating the results of climate change impact assessments. Due to its flexibility, the concept can be applied to any economic sector and therefore provides a valuable tool for cross-sectoral comparisons of climate change impacts, but also for the assessment of the costs and benefits of adaptation measures. Copyright © 2015 Elsevier B.V.
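    The quantity the abstract reports (the probability that weather pushes income at least 5% below its average-weather level) is an empirical exceedance probability over a modelled income distribution. The income draws below are illustrative, not model output from the study.

```python
# 'Weather Value at Risk' style loss-exceedance probability: the share
# of years in which income falls short of the average-weather reference
# income by at least a given fraction.

def loss_exceedance_prob(incomes, reference_income, loss_threshold=0.05):
    losses = [(reference_income - y) / reference_income for y in incomes]
    return sum(l >= loss_threshold for l in losses) / len(losses)

simulated_incomes = [104, 99, 93, 101, 88, 97, 95, 102, 90, 100]
print(loss_exceedance_prob(simulated_incomes, reference_income=100))
```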

  5. Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model

    ERIC Educational Resources Information Center

    Duman, Serap Nur; Akbas, Oktay

    2017-01-01

    This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…

  6. Evaluating biomarkers to model cancer risk post cosmic ray exposure

    PubMed Central

    Sridharan, Deepa M.; Asaithamby, Aroumougame; Blattnig, Steve R.; Costes, Sylvain V.; Doetsch, Paul W.; Dynan, William S.; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D.; Peterson, Leif E.; Plante, Ianik; Ponomarev, Artem L.; Saha, Janapriya; Snijders, Antoine M.; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M.

    2017-01-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed for at longer points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  7. Evaluating biomarkers to model cancer risk post cosmic ray exposure

    NASA Astrophysics Data System (ADS)

    Sridharan, Deepa M.; Asaithamby, Aroumougame; Blattnig, Steve R.; Costes, Sylvain V.; Doetsch, Paul W.; Dynan, William S.; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D.; Peterson, Leif E.; Plante, Ianik; Ponomarev, Artem L.; Saha, Janapriya; Snijders, Antoine M.; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M.

    2016-06-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed for at longer points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  8. 'Scarier than another storm': values at risk in the mapping and insuring of US floodplains.

    PubMed

    Elliott, Rebecca

    2018-05-08

    How do people respond to the ways in which insurance mediates environmental risks? Socio-cultural risk research has characterized and analyzed the experiential dimension of risk, but has yet to focus on insurance, which is a key institution shaping how people understand and relate to risk. Insurance not only assesses and communicates risk; it also economizes it, making the problem on the ground not just one of risk, but also of value. This article addresses these issues with an investigation of the social life of the flood insurance rate map, the central technology of the U.S. National Flood Insurance Program (NFIP), as it grafts a new landscape of 'value at risk' onto the physical and social world of New York City in the aftermath of Hurricane Sandy. Like other risk technologies, ubiquitous in modern societies as decision-making and planning tools, the map disseminates information about value and risk in order to tame uncertainty and enable prudent action oriented toward the future. However, drawing together interview, ethnographic, and documentary data, I find that for its users on the ground, the map does not simply measure 'value at risk' in ways that produce clear strategies for protecting property values from flooding. Instead, it puts values, beyond simply the financial worth of places, at risk, and implicates past, present, and future risks beyond simply flooding. By informing and enlarging the stakes of what needs protecting, and from what, I argue that plural and interacting 'values at risk' shape how people live with and respond to environmental risks that are mediated by insurance technologies. © London School of Economics and Political Science 2018.

  9. An evaluation of Computational Fluid dynamics model for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Di Francesco, Silvia; Biscarini, Chiara; Montesarchio, Valeria

    2014-05-01

    This work presents an analysis of the hydrological-hydraulic engineering requisites for risk evaluation and efficient flood damage reduction plans. Most research efforts have been dedicated to the scientific and technical aspects of risk assessment, providing estimates of possible alternatives and of the associated risk. In the decision-making process for a mitigation plan, the contribution of scientists is crucial, because risk-damage analysis is based on evaluation of the flow field and of hydraulic risk, as well as on economic and societal considerations. The present paper focuses on the first part of the process, the mathematical modelling of flood events, which is the basis for all further considerations. Evaluating the potentially catastrophic damage consequent to a flood event, and in particular to a dam failure, requires modelling the flood in sufficient detail to capture the spatial and temporal evolution of the event, as well as the velocity field. Thus, the selection of an appropriate mathematical model to correctly simulate flood routing is an essential step. In this work we present the application of two 3D computational fluid dynamics models to a synthetic and a real case study in order to evaluate the correct evolution of the flow field and the associated flood risk. The first model is based on an open-source CFD platform called OpenFOAM. Water flow is schematized with a classical continuum approach based on the Navier-Stokes equations, coupled with the Volume of Fluid (VOF) method to take into account the multiphase character of the river bottom-water-air system. The second model is based on the lattice Boltzmann method, an innovative numerical fluid dynamics scheme based on Boltzmann's kinetic equation that represents the flow dynamics at the macroscopic level by incorporating a microscopic kinetic approach: the fluid is seen as composed of particles that can move and collide with each other. Simulation results from both models are promising and congruent to

  10. Forecasting risk along a river basin using a probabilistic and deterministic model for environmental risk assessment of effluents through ecotoxicological evaluation and GIS.

    PubMed

    Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente

    2009-12-20

    This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole-effluent toxicity testing and water flows along a specific river basin, and is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements, and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet, and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic assessment, performed for each month of one calendar year, showed a significant seasonal variation of risk and revealed that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach, using Monte Carlo analysis, was performed on 7 forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approaches reveal the presence of certain substances which might be causing sublethal effects in the aquatic species present in the Henares River.

  11. The Potential Consequence of Using Value-Added Models to Evaluate Teachers

    ERIC Educational Resources Information Center

    Shen, Zuchao; Simon, Carlee Escue; Kelcey, Ben

    2016-01-01

    Value-added models try to separate the contribution of individual teachers or schools to students' learning growth measured by standardized test scores. There is a policy trend to use value-added modeling to evaluate teachers because of its face validity and superficial objectiveness. This article investigates the potential long term consequences…

  12. Evaluating biomarkers to model cancer risk post cosmic ray exposure.

    PubMed

    Sridharan, Deepa M; Asaithamby, Aroumougame; Blattnig, Steve R; Costes, Sylvain V; Doetsch, Paul W; Dynan, William S; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D; Peterson, Leif E; Plante, Ianik; Ponomarev, Artem L; Saha, Janapriya; Snijders, Antoine M; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M

    2016-06-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed at later time points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  13. A quantile-based Time at Risk: A new approach for assessing risk in financial markets

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam; Raei, Reza

    2013-11-01

    In this paper, we provide a new measure for the evaluation of risk in financial markets. This measure is based on the return interval of critical events in financial markets or other investment situations. Our main goal was to devise a model analogous to Value at Risk (VaR). Whereas VaR, for a given financial asset, probability level and time horizon, gives a critical value such that the likelihood that the loss on the asset over the time horizon exceeds this value equals the given probability level, our concept of Time at Risk (TaR), using the probability distribution function of return intervals, provides a critical time such that the probability that the return interval of a critical event exceeds this time equals the given probability level. As an empirical application, we applied our model to data from the Tehran Stock Exchange Price Index (TEPIX) as a financial asset (market portfolio) and report the results.
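The VaR/TaR analogy above can be sketched with empirical quantiles. The return series and event intervals below are illustrative numbers, not TEPIX data, and the plain historical quantile estimator stands in for the interval-distribution fit used in the paper:

```python
import numpy as np

def historical_var(returns, alpha=0.05):
    # VaR: loss level such that P(loss > VaR) = alpha under the empirical distribution
    return -np.quantile(returns, alpha)

def time_at_risk(intervals, alpha=0.05):
    # TaR: critical time such that P(return interval > TaR) = alpha
    return np.quantile(intervals, 1 - alpha)

returns = np.array([-0.08, -0.05, -0.02, 0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06])
intervals = np.array([1, 2, 2, 3, 4, 5, 6, 8, 10, 20])  # days between critical events

print(historical_var(returns, alpha=0.10))  # loss not exceeded with 90% confidence
print(time_at_risk(intervals, alpha=0.10))  # waiting time exceeded with 10% probability
```

Both measures are the same quantile operation applied to different random variables: losses for VaR, waiting times between critical events for TaR.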

  14. [Theoretical model study about the application risk of high risk medical equipment].

    PubMed

    Shang, Changhao; Yang, Fenghui

    2014-11-01

    This study aims to establish a theoretical model for monitoring the application risk of high-risk medical equipment at the site of use. The site of use is regarded as a system containing several sub-systems, each consisting of a set of risk-estimating indicators. After quantization, each indicator value is multiplied by its corresponding weight and the products are accumulated, yielding the risk estimate of each sub-system. Following the same calculation, the sub-system risk estimates are multiplied by their corresponding weights and accumulated; the cumulative sum is the status indicator of the high-risk medical equipment at the site of use, which reflects its application risk. The resulting model can dynamically and specifically monitor the application risk of high-risk medical equipment in use.
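The two-level weighted aggregation described above can be sketched as follows; the sub-system names, indicator values, and weights are hypothetical placeholders, not parameters from the study:

```python
def subsystem_risk(indicator_values, indicator_weights):
    """Risk estimate of one sub-system: weighted sum of quantized indicators."""
    return sum(v * w for v, w in zip(indicator_values, indicator_weights))

def site_status_indicator(subsystems, subsystem_weights):
    """Status indicator of the site: weighted sum of the sub-system risk estimates."""
    risks = [subsystem_risk(vals, ws) for vals, ws in subsystems]
    return sum(r * w for r, w in zip(risks, subsystem_weights))

# Two hypothetical sub-systems (e.g. environment, operation); indicators quantized to [0, 1].
subsystems = [
    ([0.2, 0.8], [0.5, 0.5]),            # environment indicators and their weights
    ([0.6, 0.4, 0.9], [0.3, 0.3, 0.4]),  # operation indicators and their weights
]
print(site_status_indicator(subsystems, [0.4, 0.6]))
```

Each level is the same operation, a weighted sum, so the model nests naturally to any number of sub-systems.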

  15. Estimating the value of a Country's built assets: investment-based exposure modelling for global risk assessment

    NASA Astrophysics Data System (ADS)

    Daniell, James; Pomonis, Antonios; Gunasekera, Rashmin; Ishizawa, Oscar; Gaspari, Maria; Lu, Xijie; Aubrecht, Christoph; Ungar, Joachim

    2017-04-01

    In order to quantify disaster risk, there is a need for consistent and reliable economic values of the built assets exposed to natural hazards at the national or sub-national level. The value of the built stock of a city or a country is critical for risk modelling applications, as it establishes the upper bound on potential losses. Under the World Bank probabilistic disaster risk assessment Country Disaster Risk Profiles (CDRP) Program and rapid post-disaster loss analyses in CATDAT, key methodologies have been developed that quantify the asset exposure of a country. In this study, we assess two complementary methods of determining the value of the building stock: capital investment data versus aggregated ground-up values based on built area and unit construction costs. Different approaches to modelling exposure around the world have resulted in estimated built-asset values for some countries differing by one or more orders of magnitude. By comparing investment-based capital stock with bottom-up unit-cost-of-construction values per square meter of assets, a suitable range of capital stock estimates for built assets has been created. A blind test was undertaken to compare the two approaches, top-down (investment) and bottom-up (construction cost per unit). In many cases, census, demographic, engineering and construction cost data are key for the bottom-up calculations from previous years; similarly, the top-down investment approach requires distributed GFCF (Gross Fixed Capital Formation) data. Over the past few years, numerous studies adopting this methodology, initially developed by Gunasekera et al. (2015), have been undertaken through the World Bank Caribbean and Central America disaster risk assessment program. The range of building stock values is tested for around 15 countries. In addition, three types of costs - Reconstruction cost
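A minimal sketch of the two complementary estimates, with made-up figures: the bottom-up value is built area times unit construction cost, while the top-down value accumulates GFCF with a perpetual-inventory recursion. The depreciation rate, initial stock, and investment series here are illustrative assumptions, not values from the study:

```python
def bottom_up_stock(built_area_m2, unit_cost_per_m2):
    """Bottom-up estimate: total built area times unit construction cost."""
    return built_area_m2 * unit_cost_per_m2

def perpetual_inventory(initial_stock, gfcf_by_year, depreciation=0.02):
    """Top-down estimate: accumulate yearly investment (GFCF) net of depreciation."""
    stock = initial_stock
    for investment in gfcf_by_year:
        stock = stock * (1 - depreciation) + investment
    return stock

# Hypothetical country: 500 million m2 at 600 USD/m2, vs. 20 years of investment data.
bottom_up = bottom_up_stock(500e6, 600.0)
top_down = perpetual_inventory(150e9, [12e9] * 20)
print(round(bottom_up / 1e9), round(top_down / 1e9))  # both in billions of USD
```

When the two independent estimates land in the same range, as in this toy example, they bound a plausible capital stock; a large gap signals a problem in one of the input datasets.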

  16. Modelling the impact of new patient visits on risk adjusted access at 2 clinics.

    PubMed

    Kolber, Michael A; Rueda, Germán; Sory, John B

    2018-06-01

    To evaluate the effect that new outpatient clinic visits have on the availability of follow-up visits for established patients when patient visit frequency is risk adjusted. Diagnosis codes for patients from 2 internal medicine clinics were extracted through billing data. The HHS-HCC risk adjusted scores for each clinic were determined from the average of all clinic practitioners' profiles. These scores were then used to project encounter frequencies for established patients, and for new patients entering the clinic, based on risk and time of entry into the clinics. A distinct mean risk frequency distribution for physicians in each clinic could be defined, providing the model parameters. Within the model, follow-up visit utilization at the highest risk adjusted visit frequencies would require more follow-up slots than currently available once new-patient no-show rates and annual patient loss are included. Patients seen at an intermediate or lower risk adjusted visit frequency could be accommodated when new-patient no-show rates and annual patient clinic loss are considered. Value-based care is driven by control of cost while maintaining quality of care. In order to control cost, there has been a drive to increase visit frequency in primary care for those patients at increased risk. Adding new patients to primary care clinics limits the availability of follow-up slots that accrue over time for those at highest risk, thereby limiting disease and, potentially, cost control. If the frequency of established-care visits can be reduced by improved disease control, closing the practice to new patients, hiring health care extenders, or providing non-face-to-face care models, then quality and cost of care may be improved. © 2018 John Wiley & Sons, Ltd.

  17. Risk of Acute Liver Failure in Patients With Drug-Induced Liver Injury: Evaluation of Hy's Law and a New Prognostic Model.

    PubMed

    Lo Re, Vincent; Haynes, Kevin; Forde, Kimberly A; Goldberg, David S; Lewis, James D; Carbonari, Dena M; Leidl, Kimberly B F; Reddy, K Rajender; Nezamzadeh, Melissa S; Roy, Jason; Sha, Daohang; Marks, Amy R; De Boer, Jolanda; Schneider, Jennifer L; Strom, Brian L; Corley, Douglas A

    2015-12-01

    Few studies have evaluated the ability of laboratory tests to predict risk of acute liver failure (ALF) among patients with drug-induced liver injury (DILI). We aimed to develop a highly sensitive model to identify DILI patients at increased risk of ALF. We compared its performance with that of Hy's Law, which predicts severity of DILI based on levels of alanine aminotransferase or aspartate aminotransferase and total bilirubin, and validated the model in a separate sample. We conducted a retrospective cohort study of 15,353 Kaiser Permanente Northern California members diagnosed with DILI from 2004 through 2010, with liver aminotransferase levels above the upper limit of normal and no pre-existing liver disease. Thirty ALF events were confirmed by medical record review. Logistic regression was used to develop prognostic models for ALF based on laboratory results measured at DILI diagnosis. External validation was performed in a sample of 76 patients with DILI at the University of Pennsylvania. Hy's Law identified patients who developed ALF with a high level of specificity (0.92) and negative predictive value (0.99), but a low level of sensitivity (0.68) and positive predictive value (0.02). The model we developed, comprising data on platelet count and total bilirubin level, identified patients with ALF with a C statistic of 0.87 (95% confidence interval [CI], 0.76-0.96) and enabled calculation of a risk score (Drug-Induced Liver Toxicity ALF Score). We found a cut-off score that identified patients at high risk for ALF with a sensitivity value of 0.91 (95% CI, 0.71-0.99) and a specificity value of 0.76 (95% CI, 0.75-0.77). This cut-off score identified patients at high risk for ALF with a high level of sensitivity (0.89; 95% CI, 0.52-1.00) in the validation analysis. Hy's Law identifies patients with DILI at high risk for ALF with low sensitivity but high specificity.
We developed a model (the Drug-Induced Liver Toxicity ALF Score) based on platelet count and

  18. Risk assessment and remedial policy evaluation using predictive modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linkov, L.; Schell, W.R.

    1996-06-01

    As a result of nuclear industry operation and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting the radionuclide fate in forest compartments after deposition, as well as for evaluating the efficiency of remedial policies. The time of intervention and the radionuclide deposition profile were predicted to be crucial for remediation efficiency. Risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) leads to about 0.004% risk of a fatal cancer annually. Cost-benefit analysis for forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in environmental remediation of radionuclides and toxic metals, as well as in dose reconstruction and risk assessment.

  19. A Risk Score Model for Evaluation and Management of Patients with Thyroid Nodules.

    PubMed

    Zhang, Yongwen; Meng, Fanrong; Hong, Lianqing; Chu, Lanfang

    2018-06-12

    The study aimed to establish a simplified and practical tool for analyzing thyroid nodules. A novel risk score model was designed; risk factors including patient history, patient characteristics, physical examination, symptoms of compression, thyroid function, and ultrasonography (US) of the thyroid and cervical lymph nodes were evaluated and classified into high, intermediate, and low risk factors. A total of 243 thyroid nodules in 162 patients were assessed with the risk score system and the Thyroid Imaging-Reporting and Data System (TI-RADS), and the diagnostic performance of the two systems was compared. The accuracy in the diagnosis of thyroid nodules was 89.3% for the risk score system and 74.9% for TI-RADS, respectively. The specificity, accuracy and positive predictive value (PPV) of the risk score system were significantly higher than those of TI-RADS (χ²=26.287, 17.151, 11.983; p<0.05); statistically significant differences were not observed in sensitivity and negative predictive value (NPV) between the risk score system and TI-RADS (χ²=1.276, 0.290; p>0.05). The area under the curve (AUC) for the risk score diagnosis system was 0.963 (standard error 0.014, 95% confidence interval (CI)=0.934-0.991); the AUC for the TI-RADS diagnosis system was 0.912 (standard error 0.021, 95% CI=0.871-0.953); the AUC for the risk score system was significantly different from that of TI-RADS (Z=2.02; p<0.05). The risk score model is a reliable, simplified and cost-effective diagnostic tool for the diagnosis of thyroid cancer: the higher the score, the higher the risk of malignancy. © Georg Thieme Verlag KG Stuttgart · New York.

  20. On set-valued functionals: Multivariate risk measures and Aumann integrals

    NASA Astrophysics Data System (ADS)

    Ararat, Cagin

    In this dissertation, multivariate risk measures for random vectors and Aumann integrals of set-valued functions are studied. Both are set-valued functionals with values in a complete lattice of subsets of R^m. Multivariate risk measures are considered in a general d-asset financial market with trading opportunities in discrete time. Specifically, the following features of the market are incorporated in the evaluation of multivariate risk: convex transaction costs modeled by solvency regions, intermediate trading constraints modeled by convex random sets, and the requirement of liquidation into the first m ≤ d of the assets. It is assumed that the investor has a "pure" multivariate risk measure R on the space of m-dimensional random vectors which represents her risk attitude towards the assets but does not take into account the frictions of the market. Then, the investor with a d-dimensional position minimizes the set-valued functional R over all m-dimensional positions that she can reach by trading in the market subject to the frictions described above. The resulting functional R_mar on the space of d-dimensional random vectors is another multivariate risk measure, called the market-extension of R. A dual representation for R_mar that decomposes the effects of R and the frictions of the market is proved. Next, multivariate risk measures are studied in a utility-based framework. It is assumed that the investor has a complete risk preference towards each individual asset, which can be represented by a von Neumann-Morgenstern utility function. Then, an incomplete preference is considered for multivariate positions which is represented by the vector of the individual utility functions. Under this structure, multivariate shortfall and divergence risk measures are defined as the optimal values of set minimization problems. The dual relationship between the two classes of multivariate risk measures is constructed via a recent Lagrange duality for set optimization. In

  1. Risk assessment of flood disaster and forewarning model at different spatial-temporal scales

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Jin, Juliang; Xu, Jinchao; Guo, Qizhong; Hang, Qingfeng; Chen, Yaqian

    2018-05-01

    Aiming at reducing losses from flood disasters, a risk assessment and forewarning model for flood disaster is studied. The model is built upon risk indices of the flood disaster system, proceeding from the whole structure and its parts at different spatial-temporal scales. On the one hand, a long-term forewarning model for the surface area is established, with three levels: prediction, evaluation, and forewarning. A structure-adaptive back-propagation neural network with peak identification is used to simulate the indices in the prediction sub-model. Set pair analysis is employed to calculate the connection degrees of single indices, the comprehensive index, and systematic risk through the multivariate connection number, and a comprehensive assessment is made with assessment matrices in the evaluation sub-model. A comparative judging method is adopted to assign the warning degree of flood disaster from the comprehensive risk assessment index against forewarning standards in the forewarning sub-model, and the long-term local conditions are then used to propose planning schemes. On the other hand, a real-time forewarning model for the spot is set up, which introduces real-time correction by Kalman filtering based on a hydrological model with a forewarning index; the real-time local conditions are then used to present an emergency plan. This study takes the Tunxi area, Huangshan City, China, as an example. After establishing the risk assessment and forewarning model for flood disaster at different spatial-temporal scales and applying it to actual and simulated data from 1989 to 2008, the forewarning results show that the flood disaster risk trend declines on the whole from 2009 to 2013, despite a rise in 2011. At the macroscopic level, project and non-project measures are advanced, while at the microcosmic level, the time, place, and method are listed. This suggests that the proposed model is feasible in theory and application, thus

  2. Evaluating the Value of High Spatial Resolution in National Capacity Expansion Models using ReEDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat; Cole, Wesley

    2016-11-14

    Power sector capacity expansion models (CEMs) have a broad range of spatial resolutions. This paper uses the Regional Energy Deployment System (ReEDS) model, a long-term national-scale electric sector CEM, to evaluate the value of high spatial resolution for CEMs. ReEDS models the United States with 134 load balancing areas (BAs) and captures the variability in existing generation parameters, future technology costs, performance, and resource availability using very high spatial resolution data, especially for wind and solar, which are modeled at 356 resource regions. In this paper we perform planning studies at three different spatial resolutions (native resolution with 134 BAs, state level, and NERC region level) and evaluate how results change under different levels of spatial aggregation in terms of renewable capacity deployment and location, associated transmission builds, and system costs. The results are used to ascertain the value of highly geographically resolved models in terms of their impact on relative competitiveness among renewable energy resources.

  3. Inpatient Glucose Values: Determining the Nondiabetic Range and Use in Identifying Patients at High Risk for Diabetes.

    PubMed

    Rhee, Mary K; Safo, Sandra E; Jackson, Sandra L; Xue, Wenqiong; Olson, Darin E; Long, Qi; Barb, Diana; Haw, J Sonya; Tomolo, Anne M; Phillips, Lawrence S

    2018-04-01

    Many individuals with diabetes remain undiagnosed, leading to delays in treatment and higher risk for subsequent diabetes complications. Despite recommendations for diabetes screening in high-risk groups, the optimal approach is not known. We evaluated the utility of inpatient glucose levels as an opportunistic screening tool for identifying patients at high risk for diabetes. We retrospectively examined 462,421 patients in the US Department of Veterans Affairs healthcare system, hospitalized on medical/surgical services in 2000-2010 for ≥3 days, with ≥2 inpatient random plasma glucose (RPG) measurements. All had continuity of care: ≥1 primary care visit and ≥1 glucose measurement within 2 years before hospitalization and yearly for ≥3 years after discharge. Glucose levels during hospitalization and incidence of diabetes within 3 years after discharge in patients without diabetes were evaluated. Patients had a mean age of 65.0 years, body mass index of 29.9 kg/m², and were 96% male, 71% white, and 18% black. Pre-existing diabetes was present in 39.4%, 1.3% were diagnosed during hospitalization, 8.1% were diagnosed 5 years after discharge, and 51.3% were never diagnosed (NonDM). The NonDM group had the lowest mean hospital RPG value (112 mg/dL [6.2 mmol/L]). Having at least 2 RPG values >140 mg/dL (>7.8 mmol/L), the 95th percentile of NonDM hospital glucose, provided 81% specificity for identifying incident diabetes within 3 years after discharge. Screening for diabetes could be considered in patients with at least 2 hospital glucose values at/above the 95th percentile of the nondiabetic range (141 mg/dL [7.8 mmol/L]). Published by Elsevier Inc.
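The screening rule reported above (at least 2 random plasma glucose values above the 95th percentile of the nondiabetic range, 140 mg/dL) can be sketched as a simple flagging function; the patient readings below are invented examples, not study data:

```python
import numpy as np

def screening_threshold(nondiabetic_rpg, pct=95):
    """Chosen percentile of the nondiabetic inpatient glucose distribution (mg/dL)."""
    return np.percentile(nondiabetic_rpg, pct)

def flag_for_followup(patient_rpg, threshold, min_count=2):
    """Flag a patient whose hospital stay shows >= min_count readings above threshold."""
    return sum(g > threshold for g in patient_rpg) >= min_count

threshold = 140.0  # mg/dL: the study's 95th percentile of nondiabetic hospital RPG
print(flag_for_followup([118, 152, 147, 131], threshold))  # two values above -> flagged
print(flag_for_followup([110, 145, 126], threshold))       # only one above -> not flagged
```

In practice the threshold would be recomputed from the local nondiabetic population with `screening_threshold` rather than hard-coded.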

  4. Value of a facilitated quality improvement initiative on cardiovascular disease risk: findings from an evaluation of the Aggressively Treating Global Cardiometabolic Risk Factors to Reduce Cardiovascular Events (AT GOAL).

    PubMed

    Losby, Jan L; Osuji, Thearis A; House, Marnie J; Davis, Rachel; Boyce, Simone Peart; Greenberg, Michael Canter; Whitehill, John M

    2015-10-01

    In the United States, cardiovascular disease (CVD) is the leading cause of death. The US Centers for Disease Control and Prevention contracted an evaluation of the Aggressively Treating Global Cardiometabolic Risk Factors to Reduce Cardiovascular Events (AT GOAL) programme as part of its effort to identify strategies to address CVD risk factors. This study analysed patient-level data from 7527 patients in 43 primary care practices. The researchers assessed average change in control rates for CVD-related measures across practices, and then across patients between baseline and a patient's last visit during the practice's tenure in the programme (referred to as 'end line') using repeated measures analysis of variance and random effects generalized least squares, respectively. Among non-diabetic patients, there were significant increases in control rates for overall blood pressure (74.3% to 78.0%, P = 0.0002), systolic blood pressure (70.3% to 80.6%, P = 0.0099), diastolic blood pressure (90.1% to 92.7%, P = 0.0001) and low-density lipoprotein (LDL; 48.6% to 53.1%, P = 0.0001) between baseline and end line. Among diabetic patients, there was a significant increase in diastolic blood pressure control (59.8% to 61.9%, P = 0.0141). While continuous CVD-related outcomes show an overall trend between baseline and end line, patients with uncontrolled measures at baseline showed a decrease between baseline and end line relative to their counterparts who were controlled at baseline. Findings from the AT GOAL evaluation support the value of a facilitated quality improvement (QI) initiative on managing CVD risk. © 2015 John Wiley & Sons, Ltd.

  5. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy method is proposed. First, a new weight assignment model is established, founded on the compatibility matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: when the compatibility matrix analysis satisfies the consistency requirements but the subjective and objective weights still differ, both proportions are moderately adjusted, and on this basis a fuzzy evaluation matrix is constructed for performance evaluation. Simulation experiments show that, compared with the traditional entropy and compatibility matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method has higher assessment accuracy.
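For reference, the objective half of such a scheme, the classical entropy value method for indicator weights, can be sketched as follows. The decision matrix is a made-up example, and the paper's AHP compatibility adjustment and fuzzy evaluation step are not reproduced here:

```python
import numpy as np

def entropy_weights(X):
    """Objective indicator weights via the entropy value method.
    X: decision matrix, rows = alternatives, columns = indicators (positive values)."""
    P = X / X.sum(axis=0)                  # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # treat 0*log(0) as 0
    E = -(P * logs).sum(axis=0) / np.log(n)     # normalized entropy per indicator
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()                          # weights sum to 1

# Three hypothetical mining projects scored on three indicators.
X = np.array([[0.6, 0.2, 0.9],
              [0.6, 0.5, 0.1],
              [0.6, 0.8, 0.5]])
w = entropy_weights(X)
print(np.round(w, 3))  # the uniform first column carries no information -> weight ~0
```

An indicator on which all alternatives score identically has maximum entropy and receives zero weight, which is exactly the behavior the AHP-based subjective adjustment in the paper is meant to temper.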

  6. A neural network model for credit risk evaluation.

    PubMed

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in the automatic processing of credit applications.
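A from-scratch sketch of a one-hidden-layer network trained by back propagation on a synthetic approve/reject task. This is not the paper's architecture or the Australian dataset; the features, labels, layer size, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for credit applications: two normalized features
# (say, income and debt ratio) and an approve(1)/reject(0) label.
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] > X[:, 1]).astype(float).reshape(-1, 1)  # approve if income outweighs debt

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer trained by gradient descent (plain back propagation).
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 2.0
for _ in range(3000):
    H = sigmoid(X @ W1 + b1)        # forward pass, hidden activations
    out = sigmoid(H @ W2 + b2)      # forward pass, approval probability
    d_out = (out - y) / len(X)      # gradient of mean cross-entropy w.r.t. output logits
    d_H = (d_out @ W2.T) * H * (1 - H)  # backpropagated hidden-layer gradient
    W2 -= lr * (H.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_H);   b1 -= lr * d_H.sum(axis=0)

accuracy = ((out > 0.5) == (y > 0.5)).mean()
print(round(float(accuracy), 2))
```

The paper's comparison of learning schemes would correspond here to varying the learning rate, iteration count, and hidden-layer depth.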

  7. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection conventionally means 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on the linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection methods with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach performs better when we work with non-symmetric return probability distributions. Solutions of this method can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
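Historical CVaR for a lot-constrained portfolio can be sketched as follows; the scenarios, lot sizes, and prices are invented, and the genetic-algorithm search over integer lot holdings described in the paper is omitted:

```python
import numpy as np

def historical_cvar(returns, alpha=0.05):
    """CVaR (expected shortfall): mean loss in the worst alpha tail of the
    empirical return distribution."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, 1 - alpha)   # historical VaR at level alpha
    return losses[losses >= var].mean()    # average of losses at or beyond VaR

# Portfolio return scenarios implied by an integer-lot holding.
scenarios = np.array([[ 0.01, -0.02],
                      [-0.03,  0.04],
                      [ 0.02,  0.01],
                      [-0.06, -0.01],
                      [ 0.05, -0.03]])   # rows: scenarios, columns: assets
lots = np.array([3, 2])                  # integer lots held of each asset
price = np.array([10.0, 20.0])           # value per lot
w = lots * price / (lots * price).sum()  # portfolio weights implied by the lots
port = scenarios @ w                     # portfolio return in each scenario
print(round(historical_cvar(port, alpha=0.4), 4))
```

A GA would search over the integer `lots` vector to minimize this CVaR subject to a target mean return, which is what makes the lot-constrained problem non-convex.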

  8. No added value of age at menopause and the lifetime cumulative number of menstrual cycles for cardiovascular risk prediction in postmenopausal women.

    PubMed

    Atsma, Femke; van der Schouw, Yvonne T; Grobbee, Diederick E; Hoes, Arno W; Bartelink, Marie-Louise E L

    2008-11-12

    The aim of the present study was to investigate the added value of age at menopause and the lifetime cumulative number of menstrual cycles in cardiovascular risk prediction in postmenopausal women. This study included 971 women. The ankle-arm index was used as a proxy for cardiovascular morbidity and mortality. The ankle-arm index was calculated for each leg by dividing the highest ankle systolic blood pressure by the highest brachial systolic blood pressure. A cut-off value of 0.95 was used to differentiate between low- and high-risk women. Three cardiovascular risk models were constructed. In the initial model all classical predictors for cardiovascular disease were investigated. This model was then extended with age at menopause or the lifetime cumulative number of menstrual cycles to test their added value for cardiovascular risk prediction. Differences in discriminative power between the models were investigated by comparing the areas under the receiver operating characteristic (ROC) curves. The mean age was 66.0 (±5.6) years. The 6 independent predictors for cardiovascular disease were age, systolic blood pressure, total to HDL cholesterol ratio, current smoking, glucose level, and body mass index ≥30 kg/m². The ROC area was 0.69 (0.64-0.73) and did not change when age at menopause or the lifetime cumulative number of menstrual cycles was added. The findings in this study among postmenopausal women did not support the view that age at menopause or a refined estimation of lifetime endogenous estrogen exposure would improve cardiovascular risk prediction as approximated by the ankle-arm index.
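
    The ankle-arm index rule described in this abstract reduces to one division and a cutoff. A minimal sketch, with invented pressure readings (mmHg):

```python
# Ankle-arm index as described above: highest ankle systolic pressure divided
# by highest brachial systolic pressure, with 0.95 as the high-risk cutoff.
def ankle_arm_index(ankle_pressures, brachial_pressures):
    return max(ankle_pressures) / max(brachial_pressures)

def high_risk(aai, cutoff=0.95):
    return aai < cutoff

# Invented pressure readings (mmHg), one pair of measurements per leg.
left = ankle_arm_index([118, 122], [130, 128])    # 122/130, about 0.94
right = ankle_arm_index([132, 128], [130, 128])   # 132/130, about 1.02
```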

  9. Problems With Risk Reclassification Methods for Evaluating Prediction Models

    PubMed Central

    Pepe, Margaret S.

    2011-01-01

    For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
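
    The Net Reclassification Index components the author recommends reporting can be sketched as follows, here in the category-free form (counting any upward or downward movement in predicted risk); the data are hypothetical:

```python
# Category-free NRI components from hypothetical data: under a better model,
# events should move up in predicted risk and nonevents should move down.
def nri_components(old_risks, new_risks, outcomes):
    up_e = down_e = up_ne = down_ne = 0
    n_events = sum(outcomes)
    n_nonevents = len(outcomes) - n_events
    for old, new, event in zip(old_risks, new_risks, outcomes):
        if event:
            up_e += new > old
            down_e += new < old
        else:
            up_ne += new > old
            down_ne += new < old
    nri_events = (up_e - down_e) / n_events          # net up-movement of events
    nri_nonevents = (down_ne - up_ne) / n_nonevents  # net down-movement of nonevents
    return nri_events, nri_nonevents

old = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60]
new = [0.30, 0.25, 0.20, 0.30, 0.45, 0.50]
outcomes = [1, 1, 1, 0, 0, 0]
nri_e, nri_ne = nri_components(old, new, outcomes)   # 1/3 and 1.0
```

    Reporting the two components separately, as the abstract argues, shows whether an apparent improvement comes from events, nonevents, or both.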

  10. Classification models for identification of at-risk groups for incident memory complaints.

    PubMed

    van den Kommer, Tessa N; Comijs, Hannie C; Rijs, Kelly J; Heymans, Martijn W; van Boxtel, Martin P J; Deeg, Dorly J H

    2014-02-01

    Memory complaints in older adults may be a precursor of measurable cognitive decline. Causes for these complaints may vary across age groups. The goal of this study was to develop classification models for the early identification of persons at risk for memory complaints using a broad range of characteristics. Two age groups were studied, 55-65 years old (N = 1,416) and 65-75 years old (N = 471), using data from the Longitudinal Aging Study Amsterdam. Participants reporting memory complaints at baseline were excluded. Data on predictors of memory complaints were collected at baseline and analyzed using logistic regression analyses. Multiple imputation was applied to handle the missing data; missing data due to mortality were not imputed. In persons aged 55-65 years, 14.4% reported memory complaints after three years of follow-up. Persons using medication, who were former smokers and had insufficient/poor hearing, were at the highest risk of developing memory complaints, i.e., a predictive value of 33.3%. In persons 65-75 years old, the incidence of memory complaints was 22.5%. Persons with a low sense of mastery, who reported having pain, were at the highest risk of memory complaints, resulting in a final predictive value of 56.9%. In the subsample of persons without a low sense of mastery who (almost) never visited organizations and had a low level of memory performance, 46.8% reported memory complaints at follow-up. The classification models led to the identification of specific target groups at risk for memory complaints. Suggestions for person-tailored interventions may be based on these risk profiles.

  11. Value of Information Analysis Applied to the Economic Evaluation of Interventions Aimed at Reducing Juvenile Delinquency: An Illustration.

    PubMed

    Eeren, Hester V; Schawo, Saskia J; Scholte, Ron H J; Busschbach, Jan J V; Hakkaart, Leona

    2015-01-01

    To investigate whether a value of information analysis, commonly applied in health care evaluations, is feasible and meaningful in the field of crime prevention. Interventions aimed at reducing juvenile delinquency are increasingly being evaluated according to their cost-effectiveness. Results of cost-effectiveness models are subject to uncertainty in their cost and effect estimates. Further research can reduce that parameter uncertainty. The value of such further research can be estimated using a value of information analysis, as illustrated in the current study. We built upon an earlier published cost-effectiveness model that demonstrated the comparison of two interventions aimed at reducing juvenile delinquency. Outcomes were presented as costs per criminal activity free year. At a societal willingness-to-pay of €71,700 per criminal activity free year, further research to eliminate parameter uncertainty was valued at €176 million. Therefore, in this illustrative analysis, the value of information analysis determined that society should be willing to spend a maximum of €176 million in reducing decision uncertainty in the cost-effectiveness of the two interventions. Moreover, the results suggest that reducing uncertainty in some specific model parameters might be more valuable than in others. Using a value of information framework to assess the value of conducting further research in the field of crime prevention proved to be feasible. The results were meaningful and can be interpreted according to health care evaluation studies. This analysis can be helpful in justifying additional research funds to further inform the reimbursement decision in regard to interventions for juvenile delinquents.

  12. Evaluating critical uncertainty thresholds in a spatial model of forest pest invasion risk

    Treesearch

    Frank H. Koch; Denys Yemshanov; Daniel W. McKenney; William D. Smith

    2009-01-01

    Pest risk maps can provide useful decision support in invasive species management, but most do not adequately consider the uncertainty associated with predicted risk values. This study explores how increased uncertainty in a risk model’s numeric assumptions might affect the resultant risk map. We used a spatial stochastic model, integrating components for...

  13. Assessing values of air quality and visibility at risk from wildland fires.

    Treesearch

    Sue A. Ferguson; Steven J. McKay; David E. Nagel; Trent Piepho; Miriam L. Rorig; Casey Anderson; Lara Kellogg

    2003-01-01

    To assess values of air quality and visibility at risk from wildland fire in the United States, we generated a 40-year database that includes twice daily values of wind, mixing height, and a ventilation index that is the product of windspeed and mixing height. The database provides the first nationally consistent map of surface wind and ventilation index. In addition,...

  14. Field Evaluation of an Avian Risk Assessment Model

    EPA Science Inventory

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in ...

  15. Estimation of value at risk in currency exchange rate portfolio using asymmetric GJR-GARCH Copula

    NASA Astrophysics Data System (ADS)

    Nurrahmat, Mohamad Husein; Noviyanti, Lienda; Bachrudin, Achmad

    2017-03-01

    In this study, we discuss the problem of measuring the risk in a portfolio based on value at risk (VaR) using an asymmetric GJR-GARCH copula. The approach is based on the consideration that the assumption of normality of returns over time cannot be fulfilled, and that there is non-linear correlation in the dependence structure among the variables, which leads to inaccurate VaR estimates. Moreover, the leverage effect causes an asymmetric response of the dynamic variance and exposes a weakness of standard GARCH models, whose effect on the conditional variance is symmetric. Asymmetric GJR-GARCH models are used to filter the margins, while copulas are used to link them together into a multivariate distribution. We use copulas to construct flexible multivariate distributions with different marginal and dependence structures, so that the joint portfolio distribution does not depend on assumptions of normality and linear correlation. The VaR obtained by the analysis at the 95% confidence level is 0.005586. This VaR is derived from the best copula model, the Student's t copula with t-distributed margins.
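
    The asymmetry that motivates the GJR-GARCH filter is visible in its variance recursion, where a leverage term adds extra variance only after negative shocks. A sketch with illustrative (not fitted) parameters:

```python
# GJR-GARCH(1,1) conditional-variance recursion; the leverage term gamma fires
# only on negative shocks. Parameters and residuals are illustrative.
def gjr_garch_variance(residuals, omega, alpha, gamma, beta, var0):
    variances = [var0]
    for eps in residuals:
        leverage = gamma * eps * eps if eps < 0 else 0.0
        variances.append(omega + alpha * eps * eps + leverage + beta * variances[-1])
    return variances

# A negative shock raises next-period variance more than a positive one would.
variances = gjr_garch_variance([-0.02, 0.01], omega=1e-6, alpha=0.05,
                               gamma=0.10, beta=0.90, var0=1e-4)
```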

  16. Prognostic Value of Coronary Computed Tomography Imaging in Patients at High Risk Without Symptoms of Coronary Artery Disease.

    PubMed

    Dedic, Admir; Ten Kate, Gert-Jan R; Roos, Cornelis J; Neefjes, Lisan A; de Graaf, Michiel A; Spronk, Angela; Delgado, Victoria; van Lennep, Jeanine E Roeters; Moelker, Adriaan; Ouhlous, Mohamed; Scholte, Arthur J H A; Boersma, Eric; Sijbrands, Eric J G; Nieman, Koen; Bax, Jeroen J; de Feijter, Pim J

    2016-03-01

    At present, traditional risk factors are used to guide cardiovascular management of asymptomatic subjects. Intensified surveillance may be warranted in those identified as at high risk of developing cardiovascular disease (CVD). This study aims to determine the prognostic value of coronary computed tomography (CT) angiography (CCTA) beyond the coronary artery calcium score (CACS) in patients at high CVD risk without symptoms suggestive of coronary artery disease (CAD). A total of 665 patients at high risk (mean age 56 ± 9 years, 417 men), having at least one important CVD risk factor (diabetes mellitus, familial hypercholesterolemia, peripheral artery disease, or severe hypertension) or a calculated European systematic coronary risk evaluation of >10%, were included from outpatient clinics at 2 academic centers. Follow-up was performed for the occurrence of adverse events including all-cause mortality, nonfatal myocardial infarction, unstable angina, or coronary revascularization. During a median follow-up of 3.0 (interquartile range 1.3 to 4.1) years, adverse events occurred in 40 subjects (6.0%). By multivariate analysis, adjusted for age, gender, and CACS, obstructive CAD on CCTA (≥50% luminal stenosis) was a significant predictor of adverse events (hazard ratio 5.9 [CI 1.3 to 26.1]). Addition of CCTA to age, gender, plus CACS increased the C statistic from 0.81 to 0.84 and resulted in a total net reclassification index of 0.19 (p <0.01). In conclusion, CCTA has incremental prognostic value and risk reclassification benefit beyond CACS in patients without CAD symptoms but at high risk of developing CVD. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Mathematical modelling of risk reduction in reinsurance

    NASA Astrophysics Data System (ADS)

    Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.

    2017-01-01

    The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal ratio between the expected value of return and the risk of yield values falling below a certain level. The uncertainty in the return values is conditioned by the use of expert evaluations and preliminary calculations, which yield expected return values and the corresponding risk levels. The proposed method allows for computationally simple schemes and algorithms for numerical calculation of the structure of the efficient portfolios of reinsurance contracts of a given insurance company.

  18. Risk Aversion and the Value of Information.

    ERIC Educational Resources Information Center

    Eeckhoudt, Louis; Godfroid, Phillippe

    2000-01-01

    Explains why risk aversion does not always induce a greater information value, but instead may induce a lower information value when increased. Presents a basic model defining the concept of perfect information value and providing a numerical illustration. Includes references. (CMK)

  19. America's Youth Are at Risk: Developing Models for Action in the Nation's Public Libraries.

    ERIC Educational Resources Information Center

    Flum, Judith G.; Weisner, Stan

    1993-01-01

    Discussion of public library support systems for at-risk teens focuses on the Bay Area Library and Information System (BALIS) that was developed to improve library services to at-risk teenagers in the San Francisco Bay area. Highlights include needs assessment; staff training; intervention models; and project evaluation. (10 references) (LRW)

  20. Extreme value modelling of Ghana stock exchange index.

    PubMed

    Nortey, Ezekiel N N; Asare, Kwabena; Mettle, Felix Okoe

    2015-01-01

    Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for such rare events, which can lead to these crises, have become essential in the finance and risk management fields. This paper models the extreme values of the Ghana stock exchange all-shares index (2000-2010) by applying extreme value theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional EVT approach was preferred, and hence an ARMA-GARCH model was first fitted to the data to correct for the autocorrelation and conditional heteroscedasticity present in the returns series before the EVT method was applied. The Peaks Over Threshold approach of the EVT, which fits a Generalized Pareto Distribution (GPD) to excesses above a selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained, and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The sizes of the extreme daily Ghanaian stock market movements were then computed using the value at risk and expected shortfall risk measures at some high quantiles, based on the fitted GPD model.
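
    Once the GPD is fitted to the excesses, the VaR and expected shortfall quoted above follow from closed-form Peaks Over Threshold formulas. A sketch with illustrative parameters, not the values fitted to the Ghanaian index:

```python
# Closed-form POT quantile formulas for a fitted GPD (shape xi, scale sigma)
# over threshold u, with n_u excesses among n observations; ES is valid for xi < 1.
def pot_var_es(u, xi, sigma, n, n_u, q):
    var = u + (sigma / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
    es = var / (1 - xi) + (sigma - xi * u) / (1 - xi)
    return var, es

# Illustrative parameters: 100 excesses over u = 2% in 2500 daily returns.
var99, es99 = pot_var_es(u=0.02, xi=0.2, sigma=0.01, n=2500, n_u=100, q=0.99)
```

    As the abstract notes, both measures are read off at high quantiles of the fitted tail; the expected shortfall always sits above the corresponding VaR.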

  1. Robust routing for hazardous materials transportation with conditional value-at-risk on time-dependent networks.

    DOT National Transportation Integrated Search

    2012-11-01

    New methods are proposed for mitigating risk in hazardous materials (hazmat) transportation, based on Conditional : Value-at-Risk (CVaR) measure, on time-dependent vehicular networks. While the CVaR risk measure has been : popularly used in financial...

  2. A comparison of imputation techniques for handling missing predictor values in a risk model with a binary outcome.

    PubMed

    Ambler, Gareth; Omar, Rumana Z; Royston, Patrick

    2007-06-01

    Risk models that aim to predict the future course and outcome of disease processes are increasingly used in health research, and it is important that they are accurate and reliable. Most of these risk models are fitted using routinely collected data in hospitals or general practices. Clinical outcomes such as short-term mortality will be near-complete, but many of the predictors may have missing values. A common approach to dealing with this is to perform a complete-case analysis. However, this may lead to overfitted models and biased estimates if entire patient subgroups are excluded. The aim of this paper is to investigate a number of methods for imputing missing data to evaluate their effect on risk model estimation and the reliability of the predictions. Multiple imputation methods, including hotdecking and multiple imputation by chained equations (MICE), were investigated along with several single imputation methods. A large national cardiac surgery database was used to create simulated yet realistic datasets. The results suggest that complete case analysis may produce unreliable risk predictions and should be avoided. Conditional mean imputation performed well in our scenario, but may not be appropriate if using variable selection methods. MICE was amongst the best performing multiple imputation methods with regards to the quality of the predictions. Additionally, it produced the least biased estimates, with good coverage, and hence is recommended for use in practice.
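
    The contrast between complete-case analysis and a simple unconditional mean imputation can be sketched in a few lines; this toy code is not the paper's simulation and ignores the multiple-imputation machinery it evaluates:

```python
# Toy contrast between complete-case analysis (drop rows with any missing
# predictor) and mean imputation for one column; None marks a missing value.
# Data are invented.
def complete_case(rows):
    return [row for row in rows if None not in row]

def mean_impute(rows, col):
    observed = [row[col] for row in rows if row[col] is not None]
    mean = sum(observed) / len(observed)
    return [[mean if (i == col and v is None) else v for i, v in enumerate(row)]
            for row in rows]

rows = [[1.0, 2.0], [None, 4.0], [3.0, 6.0]]
kept = complete_case(rows)       # drops the middle row entirely
filled = mean_impute(rows, 0)    # fills the missing value with (1 + 3) / 2 = 2
```

    Dropping whole rows is what can exclude entire patient subgroups and bias a fitted risk model, which is why the paper recommends imputation instead.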

  3. Research on efficiency evaluation model of integrated energy system based on hybrid multi-attribute decision-making.

    PubMed

    Li, Yan

    2017-05-25

    Efficiency evaluation of an integrated energy system involves many influencing factors, and the attribute values are heterogeneous and non-deterministic: they usually cannot be given as specific numerical values or accurate probability distributions, which biases the final evaluation result. According to the characteristics of the integrated energy system, a hybrid multi-attribute decision-making model is constructed that takes the decision maker's risk preference into account. In evaluating the efficiency of an integrated energy system, some evaluation indexes take linguistic values, or the evaluation experts disagree; this makes the decision information ambiguous, usually in the form of uncertain linguistic values and numerical interval values. An interval-valued multiple-attribute decision-making method and a fuzzy linguistic multiple-attribute decision-making model are therefore proposed. Finally, the mathematical model of efficiency evaluation of an integrated energy system is constructed.

  4. Geographic Mapping as a Tool for Identifying Communities at High Risk for Fires.

    PubMed

    Fahey, Erin; Lehna, Carlee; Hanchette, Carol; Coty, Mary-Beth

    2016-01-01

    The purpose of this study was to evaluate whether the sample of older adults in a home fire safety (HFS) study captured participants living in the areas at highest risk for fire occurrence. The secondary aim was to identify high risk areas to focus future HFS interventions. Geographic information systems software was used to identify census tracts where study participants resided. Census data for these tracts were compared with participant data based on seven risk factors (ie, age greater than 65 years, nonwhite race, below high school education, low socioeconomic status, rented housing, year home built, home value) previously identified in a fire risk model. The distribution of participants and census tracts among risk categories determined how well higher risk census tracts were sampled. Of the 46 census tracts where the HFS intervention was implemented, 78% (n = 36) were identified as high or severe risk according to the fire risk model. Study participants' means for median annual family income (P < .0001) and median home value (P < .0001) were significantly lower than the census tract means (n = 46), indicating participants were at higher risk of fire occurrence. Of the 92 census tracts identified as high or severe risk in the entire county, the study intervention was implemented in 39% (n = 36), indicating 56 census tracts as potential areas for future HFS interventions. The Geographic information system-based fire risk model is an underutilized but important tool for practice that allows community agencies to develop, plan, and evaluate their outreach efforts and ensure the most effective use of scarce resources.

  5. Experimental Evaluation of the Value Added by Raising a Reader and Supplemental Parent Training in Shared Reading

    ERIC Educational Resources Information Center

    Anthony, Jason L.; Williams, Jeffrey M.; Zhang, Zhoe; Landry, Susan H.; Dunkelberger, Martha J.

    2014-01-01

    Research Findings: In an effort toward developing a comprehensive, effective, scalable, and sustainable early childhood education program for at-risk populations, we conducted an experimental evaluation of the value added by 2 family involvement programs to the Texas Early Education Model (TEEM). A total of 91 preschool classrooms that served…

  6. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    DOE PAGES

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; ...

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.

  7. [Early prediction of the neurological result at 12 months in newborns at neurological risk].

    PubMed

    Herbón, F; Garibotti, G; Moguilevsky, J

    2015-08-01

    The aim of this study was to evaluate the Amiel-Tison neurological examination (AT) and cranial ultrasound at term for predicting the neurological result at 12 months in newborns with neurological risk. The study included 89 newborns with high risk of neurological damage, who were discharged from the Neonatal Intensive Care of the Hospital Zonal Bariloche, Argentina. The assessment consisted of a neurological examination and cranial ultrasound at term, and a neurological examination and evaluation of development at 12 months. The sensitivity, specificity, and positive and negative predictive values were calculated. The relationship between perinatal factors and neurodevelopment at 12 months of age was also assessed using logistic regression models. Seventy children completed the follow-up. At 12 months of age, 14% had an abnormal neurological examination, and 17% abnormal development. The neurological examination and the cranial ultrasound at term had low sensitivity to predict abnormal neurodevelopment. At 12 months, 93% of newborns with normal AT showed normal neurological results, and 86% normal development. Among newborns with normal cranial ultrasound the percentages were 90 and 81%, respectively. Among children with three or more perinatal risk factors, the frequency of abnormalities in the neurological examination was 5.4 times higher than among those with fewer risk factors, and abnormal development was 3.5 times more frequent. The neurological examination and cranial ultrasound at term had low sensitivity but high negative predictive value for neurodevelopment at 12 months. Three or more perinatal risk factors were associated with neurodevelopment abnormalities at 12 months of age. Copyright © 2014 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
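
    The four screening measures computed in this study come directly from the counts of true and false positives and negatives. A sketch with hypothetical counts:

```python
# The four measures reported above, computed from hypothetical 2x2 counts
# (tp: abnormal outcome flagged abnormal, tn: normal outcome flagged normal).
def screening_measures(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# Invented counts: a test that misses some abnormal cases (low sensitivity)
# can still have a high NPV when abnormal outcomes are uncommon.
sens, spec, ppv, npv = screening_measures(tp=6, fp=6, fn=4, tn=54)
```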

  8. Value-at-Risk analysis using ARMAX GARCHX approach for estimating risk of banking subsector stock return’s

    NASA Astrophysics Data System (ADS)

    Dewi Ratih, Iis; Sutijo Supri Ulama, Brodjol; Prastuti, Mike

    2018-03-01

    Value at Risk (VaR) is a statistical method used to measure market risk by estimating the worst loss over a given time period at a given confidence level. The accuracy of this measure is very important in determining the amount of capital that a company must hold against possible losses, because greater risk implies greater losses to be faced with a given probability. For this reason, VaR calculation is of particular concern to researchers and practitioners in the stock market, with the aim of obtaining more accurate estimates. In this research, we analyze the risk of four banking-subsector stocks: Bank Rakyat Indonesia, Bank Mandiri, Bank Central Asia and Bank Negara Indonesia. Stock returns are expected to be influenced by exogenous variables, namely the ICI and the exchange rate. Therefore, stock risk is estimated using the ARMAX-GARCHX VaR method. Calculating the VaR with the ARMAX-GARCHX approach using a window of 500 gives more accurate results. Overall, Bank Central Asia was the only bank whose estimated maximum loss lay in the 5% quantile.

  9. Value of improved lipid control in patients at high risk for adverse cardiac events.

    PubMed

    Jena, Anupam B; Blumenthal, Daniel M; Stevens, Warren; Chou, Jacquelyn W; Ton, Thanh G N; Goldman, Dana P

    2016-06-01

    Lipid-lowering therapy (LLT) is suboptimally used in patients with hyperlipidemia in the 2 highest statin benefit groups (SBGs), as categorized by the American College of Cardiology and the American Heart Association. This study estimated the social value of reducing low-density lipoprotein cholesterol (LDL-C) levels by 50% for patients in SBGs 1 and 2 who have been treated with standard LLT but have not reached LDL-C goal, as well as the potential value of PCSK9 inhibitors for patients in these groups. Simulation model. We used National Health and Nutrition Examination Surveys (NHANES) and US Census data to project the population of SBGs 1 and 2 in the time period 2015 to 2035. We used insurance claims data to estimate incidence rates of major adverse cardiac events (MACEs), and NHANES with National Vital Statistics data to estimate cardiovascular disease mortality rates. Using established associations between LDL-C and MACE risk, we estimated the value of reducing LDL-C levels by 50%. We incorporated results from a meta-analysis to estimate the value of PCSK9 inhibitors. Among those treated with LLT with LDL-C > 70 mg/dL in SBGs 1 and 2, the cumulative value of reducing LDL-C levels by 50% would be $2.9 trillion from 2015 to 2035, resulting primarily from 1.6 million deaths averted. The cumulative value of PCSK9 inhibitors would range from $3.4 trillion to $5.1 trillion (1.9-2.8 million deaths averted), or $12,000 to $17,000 per patient-year of treatment. Lowering LDL-C in high-risk patients with hyperlipidemia has enormous potential social value. For patients in these high-risk groups, PCSK9 inhibitors may have considerable net value depending on the final prices payers ultimately select.

  10. Ambassadors: Models for At-Risk Students.

    ERIC Educational Resources Information Center

    Cahoon, Peggy

    1989-01-01

    The Ambassador Program, a partnership between Ferron Elementary School and the University of Nevada, Las Vegas, pairs university students with at-risk elementary students once a week to serve as role models. (TE)

  11. Impact of a clinical decision model for febrile children at risk for serious bacterial infections at the emergency department: a randomized controlled trial.

    PubMed

    de Vos-Kerkhof, Evelien; Nijman, Ruud G; Vergouwe, Yvonne; Polinder, Suzanne; Steyerberg, Ewout W; van der Lei, Johan; Moll, Henriëtte A; Oostenbrink, Rianne

    2015-01-01

    To assess the impact of a clinical decision model for febrile children at risk for serious bacterial infections (SBI) attending the emergency department (ED). Randomized controlled trial with 439 febrile children, aged 1 month-16 years, attending the pediatric ED of a Dutch university hospital during 2010-2012. Febrile children were randomly assigned to the intervention (clinical decision model; n = 219) or the control group (usual care; n = 220). The clinical decision model included clinical symptoms, vital signs, and C-reactive protein and provided high/low risks for "pneumonia" and "other SBI". Nurses were guided by the intervention to initiate additional tests for high-risk children. The clinical decision model was evaluated by 1) the area under the receiver operating characteristic curve (AUC), to indicate discriminative ability, and 2) feasibility, to measure nurses' compliance with the model recommendations. The primary patient outcome was defined as correct SBI diagnoses. Secondary process outcomes were defined as length of stay; diagnostic tests; antibiotic treatment; hospital admission; revisits and medical costs. The decision model had good discriminative ability for both pneumonia (n = 33; AUC 0.83 (95% CI 0.75-0.90)) and other SBI (n = 22; AUC 0.81 (95% CI 0.72-0.90)). Compliance with the model recommendations was high (86%). No differences in correct SBI determination were observed. Application of the clinical decision model resulted in fewer full blood counts (14% vs. 22%, p-value < 0.05) and more urine-dipstick testing (71% vs. 61%, p-value < 0.05). Contrary to our expectations, no substantial impact on patient outcome was observed. The clinical decision model nevertheless retained good discriminative ability to detect SBI, achieved good compliance among nurses, and resulted in a more standardized diagnostic approach towards febrile children, with fewer full blood counts and more appropriate urine-dipstick testing. Nederlands Trial Register NTR2381.

  12. Prediction impact curve is a new measure integrating intervention effects in the evaluation of risk models.

    PubMed

    Campbell, William; Ganna, Andrea; Ingelsson, Erik; Janssens, A Cecile J W

    2016-01-01

    We propose a new measure of assessing the performance of risk models, the area under the prediction impact curve (auPIC), which quantifies the performance of risk models in terms of their average health impact in the population. Using simulated data, we explain how the prediction impact curve (PIC) estimates the percentage of events prevented when a risk model is used to assign high-risk individuals to an intervention. We apply the PIC to the Atherosclerosis Risk in Communities (ARIC) Study to illustrate its application toward prevention of coronary heart disease. We estimated that if the ARIC cohort received statins at baseline, 5% of events would be prevented when the risk model was evaluated at a cutoff threshold of 20% predicted risk compared to 1% when individuals were assigned to the intervention without the use of a model. By calculating the auPIC, we estimated that an average of 15% of events would be prevented when considering performance across the entire interval. We conclude that the PIC is a clinically meaningful measure for quantifying the expected health impact of risk models that supplements existing measures of model performance. Copyright © 2016 Elsevier Inc. All rights reserved.
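
    The core quantity behind the prediction impact curve, the fraction of events prevented when everyone at or above a risk cutoff receives an intervention with a given relative risk reduction, can be sketched as follows; the risks, outcomes, and effect size are invented, not the ARIC figures:

```python
# Fraction of events prevented when individuals at or above a risk cutoff get
# an intervention with a given relative risk reduction. All numbers invented.
def events_prevented_fraction(risks, events, cutoff, relative_risk_reduction):
    treated_events = sum(e for r, e in zip(risks, events) if r >= cutoff)
    return relative_risk_reduction * treated_events / sum(events)

risks = [0.05, 0.15, 0.25, 0.30]   # model-predicted risks
events = [0, 0, 1, 1]              # observed outcomes
frac = events_prevented_fraction(risks, events, cutoff=0.20,
                                 relative_risk_reduction=0.30)
```

    Averaging this fraction over all cutoffs is, in spirit, what the auPIC summarizes.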

  13. Evaluating the Risks: A Bernoulli Process Model of HIV Infection and Risk Reduction.

    ERIC Educational Resources Information Center

    Pinkerton, Steven D.; Abramson, Paul R.

    1993-01-01

    A Bernoulli process model of human immunodeficiency virus (HIV) is used to evaluate infection risks associated with various sexual behaviors (condom use, abstinence, or monogamy). Results suggest that infection is best mitigated through measures that decrease infectivity, such as condom use. (SLD)
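
    The Bernoulli process model evaluated here treats each contact as an independent trial, so cumulative infection risk over n contacts is 1 - (1 - p)^n, with protective measures modeled as scaling the per-contact probability p. The probabilities below are illustrative, not estimates from the paper:

```python
# Cumulative infection risk from n independent Bernoulli contacts with
# per-contact probability p; a condom scales p by (1 - efficacy).
# Probabilities are illustrative, not estimates from the paper.
def cumulative_risk(p, n, condom_efficacy=0.0):
    p_eff = p * (1 - condom_efficacy)
    return 1 - (1 - p_eff) ** n

unprotected = cumulative_risk(0.001, 100)                      # about 0.095
protected = cumulative_risk(0.001, 100, condom_efficacy=0.90)  # about 0.010
```

    This is why the abstract concludes that reducing per-contact infectivity (e.g. condom use) mitigates risk effectively: the reduction compounds across every contact.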

  14. Challenges of Modeling Flood Risk at Large Scales

    NASA Astrophysics Data System (ADS)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale such that all inter-dependencies in a river network are well understood. From an insurance perspective the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  15. Non-animal approaches for toxicokinetics in risk evaluations of food chemicals.

    PubMed

    Punt, Ans; Peijnenburg, Ad A C M; Hoogenboom, Ron L A P; Bouwmeester, Hans

    2017-01-01

    The objective of the present work was to review the availability and predictive value of non-animal toxicokinetic approaches and to evaluate their current use in European risk evaluations of food contaminants, additives and food contact materials, as well as pesticides and medicines. Results revealed little use of quantitative animal or human kinetic data in risk evaluations of food chemicals, compared with pesticides and medicines. Risk evaluations of medicines provided sufficient in vivo kinetic data from different species to evaluate the predictive value of animal kinetic data for humans. These data showed a relatively poor correlation between the in vivo bioavailability in rats and dogs versus that in humans. In contrast, in vitro (human) kinetic data have been demonstrated to provide adequate predictions of the fate of compounds in humans, using appropriate in vitro-in vivo scalers and by integration of in vitro kinetic data with in silico kinetic modelling. Even though in vitro kinetic data were found to be occasionally included within risk evaluations of food chemicals, particularly results from Caco-2 absorption experiments and in vitro data on gut-microbial conversions, only minor use of in vitro methods for metabolism and quantitative in vitro-in vivo extrapolation methods was identified. Yet, such quantitative predictions are essential in the development of alternatives to animal testing as well as to increase human relevance of toxicological risk evaluations. Future research should aim at further improving and validating quantitative alternative methods for kinetics, thereby increasing regulatory acceptance of non-animal kinetic data.

  16. Field evaluation of an avian risk assessment model

    USGS Publications Warehouse

    Vyas, N.B.; Spann, J.W.; Hulse, C.S.; Borges, S.L.; Bennett, R.S.; Torrez, M.; Williams, B.I.; Leffel, R.

    2006-01-01

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in the field. We tested technical-grade diazinon and its DZN-50W (50% diazinon active ingredient wettable powder) formulation on Canada goose (Branta canadensis) goslings. Brain acetylcholinesterase activity was measured, and the feathers and skin, feet, and gastrointestinal contents were analyzed for diazinon residues. The dose-response curves showed that diazinon was significantly more toxic to goslings in the outdoor test than in the laboratory tests. The deterministic risk assessment method identified the potential for risk to birds in general, but the factors associated with extrapolating from the laboratory to the field, and from the laboratory test species to other species, resulted in the underestimation of risk to the goslings. The present study indicates that laboratory-based risk quotients should be interpreted with caution.

  17. Risk Evaluation of Railway Coal Transportation Network Based on Multi Level Grey Evaluation Model

    NASA Astrophysics Data System (ADS)

    Niu, Wei; Wang, Xifu

    2018-01-01

    Rail is currently the most important mode of coal transportation, and China’s railway coal transportation network has become increasingly well developed, but problems remain, including insufficient capacity and lines operating close to saturation. In this paper, risk assessment theory, the analytic hierarchy process and a multi-level grey evaluation model are applied to the risk evaluation of the coal railway transportation network in China. The approach is illustrated with an example analysis of the Shanxi railway coal transportation network, with the aim of improving the network’s internal structure and market competitiveness.

  18. On Value at Risk for Foreign Exchange Rates --- the Copula Approach

    NASA Astrophysics Data System (ADS)

    Jaworski, P.

    2006-11-01

    The aim of this paper is to determine the Value at Risk (VaR) of a portfolio consisting of long positions in foreign currencies on an emerging market. Based on empirical data, we restrict ourselves to the case when the tail parts of the distributions of logarithmic returns of these assets follow power laws and the lower tail of the associated copula C follows the power law of degree 1. We illustrate the practical usefulness of this approach by an analysis of the exchange rates of EUR and CHF on the Polish forex market.
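    The tail-quantile computation behind such a VaR estimate can be sketched directly: if the loss tail of logarithmic returns obeys a power law P(loss > x) ≈ C·x^(-k) for large x, then the VaR at confidence level p solves C·VaR^(-k) = 1 - p. The tail parameters C and k below are invented, not fitted to any market.

```python
# Sketch: closed-form VaR for a power-law loss tail,
# P(loss > x) ~ C * x**(-k). Solving C * VaR**(-k) = 1 - p for VaR.
# C and k are illustrative assumptions, not estimates from data.

def var_power_tail(C, k, p):
    """VaR of a power-law loss tail at confidence level p."""
    return (C / (1.0 - p)) ** (1.0 / k)

print(var_power_tail(4e-6, 3.0, 0.99))  # 99% VaR
print(var_power_tail(4e-6, 3.0, 0.95))  # 95% VaR is smaller
```

    In practice C and k would be estimated from the empirical tail of returns, and the copula's tail behavior governs how single-asset VaRs combine at the portfolio level.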

  19. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    PubMed

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

    Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and intended moment of model use were extracted. Susceptibility to measurement error for each predictor was classified into low and high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as at high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.

  20. Deficient Contractor Business Systems: Applying the Value at Risk (VAR) Model to Earned Value Management Systems

    DTIC Science & Technology

    2013-06-01

    measuring numerical risk to the government (Galway, 2004). However, quantitative risk analysis is rarely utilized in DoD acquisition programs because the...quantitative assessment of the EVMS itself. Galway (2004) practically linked project quantitative risk assessment to EVM by focusing on cost...Kindle version]. Retrieved from Amazon.com Galway, L. (2004, February). Quantitative risk analysis for project management: A critical review

  1. Areas of high conservation value at risk by plant invaders in Georgia under climate change.

    PubMed

    Slodowicz, Daniel; Descombes, Patrice; Kikodze, David; Broennimann, Olivier; Müller-Schärer, Heinz

    2018-05-01

    Invasive alien plants (IAP) are a threat to biodiversity worldwide. Understanding and anticipating invasions allow for more efficient management. In this regard, predicting potential invasion risks by IAPs is essential to support conservation planning in areas of high conservation value (AHCV), such as sites exhibiting exceptional botanical richness and assemblages of rare, threatened and/or endemic plant species. Here, we identified AHCV in Georgia, a country showing high plant richness, and assessed the susceptibility of these areas to colonization by IAPs under present and future climatic conditions. We used actual protected areas and areas of high plant endemism (identified using occurrences of 114 Georgian endemic plant species) as proxies for AHCV. Then, we assessed present and future potential distribution of 27 IAPs using species distribution models under four climate change scenarios and stacked single-species potential distribution into a consensus map representing IAPs richness. We evaluated present and future invasion risks in AHCV using IAPs richness as a metric of susceptibility. We show that the actual protected areas cover only 9.4% of the areas of high plant endemism in Georgia. IAPs are presently located at lower elevations around the large urban centers and in western Georgia. We predict a shift of IAPs toward eastern Georgia and higher altitudes and an increased susceptibility of AHCV to IAPs under future climate change. Our study provides a good baseline for decision makers and stakeholders on where and how resources should be invested in the most efficient way to protect Georgia's high plant richness from IAPs.

  2. Program Evaluation of Growin' to Win: A Latchkey and Summer Program for At-Risk Youth.

    ERIC Educational Resources Information Center

    James, William H.; And Others

    This document presents an evaluation of the effectiveness of the Growin' to Win Project, an after-school and summer program targeted at elementary and middle school aged youth at high risk of substance abuse and gang involvement. Growin' to Win is an expansion of a model latchkey program piloted at two Tacoma (Washington) schools in 1990. The…

  3. Development of a relative risk model for evaluating ecological risk of water environment in the Haihe River Basin estuary area.

    PubMed

    Chen, Qiuying; Liu, Jingling; Ho, Kin Chung; Yang, Zhifeng

    2012-03-15

    Ecological risk assessment of the water environment is important for basin-scale water resource management. Effective environmental management and restoration of systems such as the Haihe River Basin require a holistic understanding of the relative importance of various stressor-related impacts throughout the basin. As an effective technical tool for evaluating ecological risk, the relative risk model (RRM) has been applied successfully at regional scale. In this study, risk transfer from the upstream basin was considered, and the RRM was developed by introducing the source-stressor-habitat exposure filter (SSH), the endpoint-habitat exposure filter (EH) and the stressor-endpoint effect filter (SE) to make the meaning of exposure and effect more explicit. The water environment, which includes water quality, water quantity and aquatic ecosystems, was selected as the assessment endpoint. We created a conceptual model depicting potential exposure and effect pathways from source to stressor to habitat to endpoint. The Haihe River Basin estuary (HRBE) was selected as the model case. The results showed that there were two low-risk regions, one medium-risk region and two high-risk regions in the HRBE. The results also indicated that urbanization was the biggest source, followed by shipping and industry, with risk scores of 5.65, 4.71 and 3.68, respectively. Furthermore, habitat destruction was the largest stressor (risk score 2.66), followed by oxygen-consuming organic pollutants (1.75) and pathogens (1.75); these three stressors were the main drivers of ecological pressure in the study area. Among habitats, open waters (9.59) and intertidal mudflats were under the greatest pressure and warrant particular attention. Among endpoints, damaged ecological service values (30.54) and decreased biodiversity faced the greatest risk pressure. Copyright © 2011 Elsevier B.V. All rights reserved.
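    A minimal sketch of how an RRM-style score might be assembled: source and habitat ranks are multiplied through the SSH, EH and SE filters along each source-stressor-habitat-endpoint pathway, and pathway scores are summed. All ranks and filter values here are invented for illustration, not taken from the study.

```python
# Illustrative relative risk model (RRM) scoring sketch. Each pathway
# combines a source rank and a habitat rank with three filters (SSH, EH,
# SE), here coded as 0/1 or fractional weights. Values are made up.

def rrm_score(source_rank, habitat_rank, ssh, eh, se):
    """Relative risk contributed by one exposure/effect pathway."""
    return source_rank * habitat_rank * ssh * eh * se

pathways = [
    # (source rank, habitat rank, SSH, EH, SE)
    (6, 4, 1, 1, 1),    # e.g. urbanization -> open waters
    (4, 4, 1, 1, 0.5),  # e.g. shipping -> open waters, weaker effect link
    (2, 2, 1, 0, 1),    # e.g. industry -> mudflat, endpoint not exposed
]
total_risk = sum(rrm_score(*p) for p in pathways)
print(total_risk)
```

    Summing over all pathways touching a region, source, stressor or habitat yields the kind of comparative risk scores reported in the abstract.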

  4. Values in science and risk assessment.

    PubMed

    Wandall, Birgitte

    2004-09-25

    It is a widely accepted claim that scientific practice contains value judgments, i.e. decisions made on the basis of values. This paper clarifies the concepts involved in this claim and explains its implications for risk assessment. It is explained why values are necessarily a part of science and of risk assessment. A certain type of values that contribute to the aim of science, so-called epistemic values, are identified as rationally justified as basis for judgment in science. It is argued that the aims of pure science and risk assessment differ in some aspects and that consequently pure science's epistemic values are not sufficient for risk assessment. I suggest how the epistemic values may be supplemented in order to align better with the aim of risk assessment. It is concluded that since risk assessment is no less value-laden than pure science, it is important (a) that risk assessors become aware of what values they are (often implicitly) relying on, (b) that the values are justifiable, and (c) that transparency is ensured, i.e. that the values and value-based assumptions applied in particular risk assessments are explicitly acknowledged.

  5. Breeding objectives for pigs in Kenya. II: economic values incorporating risks in different smallholder production systems.

    PubMed

    Mbuthia, Jackson Mwenda; Rewe, Thomas Odiwuor; Kahi, Alexander Kigunzu

    2015-02-01

    This study estimated economic values for production traits (dressing percentage (DP), %; live weight for growers (LWg), kg; live weight for sows (LWs), kg) and functional traits (feed intake for growers (FEEDg), feed intake for sow (FEEDs), preweaning survival rate (PrSR), %; postweaning survival (PoSR), %; sow survival rate (SoSR), %, total number of piglets born (TNB) and farrowing interval (FI), days) under different smallholder pig production systems in Kenya. Economic values were estimated considering two production circumstances: fixed-herd and fixed-feed. Under the fixed-herd scenario, economic values were estimated assuming a situation where the herd cannot be increased due to other constraints apart from feed resources. The fixed-feed input scenario assumed that the herd size is restricted by limitation of feed resources available. In addition to the traditional profit model, a risk-rated bio-economic model was used to derive risk-rated economic values. This model accounted for imperfect knowledge concerning risk attitude of farmers and variance of input and output prices. Positive economic values obtained for traits DP, LWg, LWs, PoSR, PrSR, SoSR and TNB indicate that targeting them in improvement would positively impact profitability in pig breeding programmes. Under the fixed-feed basis, the risk-rated economic values for DP, LWg, LWs and SoSR were similar to those obtained under the fixed-herd situation. Accounting for risks in the EVs did not yield errors greater than ±50 % in any of the production systems or bases of evaluation, meaning there would be relatively little effect on the real genetic gain of a selection index. Therefore, both traditional and risk-rated models can be satisfactorily used to predict profitability in pig breeding programmes.

  6. The Value of Risk: Noah's Ark at the Skirball Cultural Center

    ERIC Educational Resources Information Center

    Bernstein, Sheri; Gittleman, Marni

    2010-01-01

    In this article Bernstein and Gittleman address the role of risk in creating an exhibition that is of value to the public and is aligned with their cultural institution's core values. Through an examination of the development process, the authors present lessons that can assist others who are interested in undertaking an exhibition with similar…

  7. Comparison of four contemporary risk models at predicting mortality after aortic valve replacement.

    PubMed

    Wang, Tom Kai Ming; Choi, David H M; Stewart, Ralph; Gamble, Greg; Haydock, David; Ruygrok, Peter

    2015-02-01

    Risk stratification for aortic valve replacement (AVR) is desirable given the increased demand for intervention and the introduction of transcatheter aortic valve implantation. We compared the prognostic utility of the European System for Cardiac Operative Risk Evaluation (EuroSCORE), EuroSCORE II, Society of Thoracic Surgeons (STS) score, and an Australasian model (Aus-AVR score) for AVR. We retrospectively calculated the 4 risk scores for patients undergoing isolated AVR at Auckland City Hospital from 2005 to 2012 and assessed their discrimination and calibration for short- and long-term mortality. A total of 620 patients were followed up for 3.8 ± 2.4 years, with an operative mortality of 2.9% (n = 18). The mean EuroSCORE, EuroSCORE II, STS, and Aus-AVR scores were 8.7% ± 8.3%, 3.8% ± 4.7%, 2.8% ± 2.7%, and 3.2% ± 4.8%, respectively. The corresponding C-statistics for operative mortality were 0.752 (95% confidence interval [CI], 0.652-0.852), 0.711 (95% CI, 0.607-0.815), 0.716 (95% CI, 0.593-0.837), and 0.684 (95% CI, 0.557-0.811). The corresponding Hosmer-Lemeshow test P and chi-square values for calibration were .007 and 21.1, .125 and 12.6, .753 and 5.0, and .468 and 7.7. The corresponding Brier scores were 0.0348, 0.0278, 0.0276, and 0.0294. Independent predictors of operative mortality included critical preoperative state, atrial fibrillation, extracardiac arteriopathy, and mitral stenosis. The log-rank test P values were all <.001 for mortality during follow-up for all 4 scores, stratified by quintile. All 4 risk scores discriminated operative mortality after isolated AVR. The EuroSCORE had poor calibration, overestimating operative mortality, although the other 3 scores fitted well with contemporary outcomes. The STS score was the best calibrated in the highest quintile of operative risk. Copyright © 2015 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
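    Two of the metrics used in this comparison can be sketched on toy data: the Brier score (mean squared error of predicted probabilities) and the C-statistic (the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor). The predictions and outcomes below are invented for illustration.

```python
# Sketch of two model-evaluation metrics on invented data:
# Brier score and C-statistic (equivalent to the AUC for binary outcomes).

def brier(pred, outcome):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(pred, outcome)) / len(pred)

def c_statistic(pred, outcome):
    """Probability that an event case outranks a non-event case (ties = 0.5)."""
    pos = [p for p, y in zip(pred, outcome) if y == 1]
    neg = [p for p, y in zip(pred, outcome) if y == 0]
    pairs = [(a, b) for a in pos for b in neg]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a, b in pairs)
    return wins / len(pairs)

pred = [0.02, 0.05, 0.10, 0.30, 0.60]   # toy predicted mortality risks
outcome = [0, 0, 1, 0, 1]               # toy observed deaths
print(brier(pred, outcome))
print(c_statistic(pred, outcome))
```

    Lower Brier scores indicate better overall accuracy and calibration; C-statistics near 0.7-0.75, as reported above, indicate moderate discrimination.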

  8. Impact of a Clinical Decision Model for Febrile Children at Risk for Serious Bacterial Infections at the Emergency Department: A Randomized Controlled Trial

    PubMed Central

    de Vos-Kerkhof, Evelien; Nijman, Ruud G.; Vergouwe, Yvonne; Polinder, Suzanne; Steyerberg, Ewout W.; van der Lei, Johan; Moll, Henriëtte A.; Oostenbrink, Rianne

    2015-01-01

    Objectives To assess the impact of a clinical decision model for febrile children at risk for serious bacterial infections (SBI) attending the emergency department (ED). Methods Randomized controlled trial with 439 febrile children, aged 1 month-16 years, attending the pediatric ED of a Dutch university hospital during 2010-2012. Febrile children were randomly assigned to the intervention (clinical decision model; n=219) or the control group (usual care; n=220). The clinical decision model included clinical symptoms, vital signs, and C-reactive protein and provided high/low-risks for “pneumonia” and “other SBI”. Nurses were guided by the intervention to initiate additional tests for high-risk children. The clinical decision model was evaluated by 1) area-under-the-receiver-operating-characteristic-curve (AUC) to indicate discriminative ability and 2) feasibility, to measure nurses’ compliance to model recommendations. Primary patient outcome was defined as correct SBI diagnoses. Secondary process outcomes were defined as length of stay; diagnostic tests; antibiotic treatment; hospital admission; revisits and medical costs. Results The decision model had good discriminative ability for both pneumonia (n=33; AUC 0.83 (95% CI 0.75-0.90)) and other SBI (n=22; AUC 0.81 (95% CI 0.72-0.90)). Compliance to model recommendations was high (86%). No differences in correct SBI determination were observed. Application of the clinical decision model resulted in fewer full blood counts (14% vs. 22%, p-value<0.05) and more urine-dipstick testing (71% vs. 61%, p-value<0.05). Conclusions In contrast to our expectations, no substantial impact on patient outcome was perceived. The clinical decision model preserved, however, good discriminatory ability to detect SBI, achieved good compliance among nurses and resulted in a more standardized diagnostic approach towards febrile children, with fewer full blood counts and more appropriate use of urine-dipstick testing. Trial Registration

  9. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    NASA Astrophysics Data System (ADS)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2017-06-01

    In the recent past, a risk and reliability centered maintenance (RRCM) framework was introduced with a shift in the methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for risk quantification and for ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the risk priority number (RPN) values of items of a system, and the maintenance significant precipitating factors (MSPF) of items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, reflects anticipated future hazards; its risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence random number simulation is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of critical items are then estimated. The prioritization in ranking of critical items using the developed mathematical model for risk assessment should be useful in optimizing financial losses and the timing of maintenance actions.

  10. Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints

    NASA Astrophysics Data System (ADS)

    Yan, Wei

    2012-01-01

    An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, consistent with the actual behavior of stock prices in the financial market. The short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example of M-V portfolio selection under discontinuous prices is presented.
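    The paper derives its answer analytically via the HJB equation; as a loose, purely illustrative companion, the Monte Carlo sketch below screens no-shorting portfolio weights on two simulated jump-diffusion assets, keeping only those whose empirical 95% Value at Risk stays under a cap. All dynamics and parameters are invented.

```python
import random

# Monte Carlo sketch (not the paper's HJB solution): enforce a no-shorting
# constraint (0 <= w <= 1) and an empirical VaR cap on a two-asset
# portfolio with simple jump-diffusion-style returns. All numbers invented.

random.seed(7)

def simulate_return(mu, sigma, jump_prob, jump_size):
    """Gaussian annual return with an occasional downward jump."""
    r = random.gauss(mu, sigma)
    if random.random() < jump_prob:
        r -= jump_size
    return r

def empirical_var(losses, level=0.95):
    """Empirical Value at Risk: the level-quantile of the loss distribution."""
    return sorted(losses)[int(level * len(losses))]

paths = 5000
scenarios = [(simulate_return(0.08, 0.15, 0.05, 0.3),   # risky asset
              simulate_return(0.04, 0.05, 0.02, 0.1))   # safer asset
             for _ in range(paths)]

feasible = []
for w in [i / 10 for i in range(11)]:          # no shorting: 0 <= w <= 1
    losses = [-(w * a + (1 - w) * b) for a, b in scenarios]
    if empirical_var(losses) <= 0.10:          # VaR cap of 10%
        feasible.append(w)
print(feasible)
```

    The VaR constraint trims the risky end of the frontier: heavy weights on the jumpier asset fail the cap, mirroring the paper's point that a VaR constraint reshapes the attainable efficient frontier.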

  11. Deficient Contractor Business Systems: Applying the Value at Risk (VaR) Model to Earned Value Management Systems

    DTIC Science & Technology

    2013-06-30

    QUANTITATIVE RISK ANALYSIS The use of quantitative cost risk analysis tools can be valuable in measuring numerical risk to the government (Galway, 2004...assessment of the EVMS itself. Galway (2004) practically linked project quantitative risk assessment to EVM by focusing on cost, schedule, and...www.amazon.com Galway, L. (2004, February). Quantitative risk analysis for project management: A critical review (RAND Working Paper WR-112-RC

  12. Construction and evaluation of FiND, a fall risk prediction model of inpatients from nursing data.

    PubMed

    Yokota, Shinichiroh; Ohe, Kazuhiko

    2016-04-01

    To construct and evaluate an easy-to-use fall risk prediction model based on the daily condition of inpatients, using secondary-use data from an electronic medical record system. The present authors scrutinized electronic medical record system data and created a dataset for analysis that included inpatient fall report data and Intensity of Nursing Care Needs data. The authors divided the analysis dataset into training data and testing data, constructed the fall risk prediction model FiND from the training data, and tested the model on the testing data. The dataset for analysis contained 1,230,604 records from 46,241 patients. The sensitivity of the model constructed from the training data was 71.3% and the specificity was 66.0%. The verification result on the testing dataset was almost equivalent to the theoretical value. Although the model's accuracy did not surpass that of models developed in previous research, the authors believe FiND will be useful in medical institutions all over Japan because it is composed of few variables (only age, sex, and the Intensity of Nursing Care Needs items) and its accuracy on unseen data has been demonstrated. © 2016 Japan Academy of Nursing Science.
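    The reported accuracy measures are straightforward to reproduce on toy data: sensitivity is the fraction of fallers the model flags, specificity the fraction of non-fallers it clears. The flag/outcome pairs below are invented for illustration.

```python
# Sketch of the reported accuracy measures: sensitivity and specificity
# of a binary fall-risk flag against observed falls. Data are invented.

def sens_spec(flags, outcomes):
    """Return (sensitivity, specificity) for 0/1 flags vs 0/1 outcomes."""
    tp = sum(1 for f, y in zip(flags, outcomes) if f and y)
    fn = sum(1 for f, y in zip(flags, outcomes) if not f and y)
    tn = sum(1 for f, y in zip(flags, outcomes) if not f and not y)
    fp = sum(1 for f, y in zip(flags, outcomes) if f and not y)
    return tp / (tp + fn), tn / (tn + fp)

flags    = [1, 1, 0, 0, 1, 0, 0, 1]   # model's high-risk flags
outcomes = [1, 1, 1, 0, 0, 0, 0, 0]   # observed falls
sensitivity, specificity = sens_spec(flags, outcomes)
print(sensitivity, specificity)
```

    With a model this simple (age, sex, and nursing-care-needs items), the trade-off between the two rates is set by where the risk threshold is drawn.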

  13. An Interprofessional Model for Serving Youth at Risk for Substance Abuse: The Team Case Study.

    ERIC Educational Resources Information Center

    Cobia, Debra C.; And Others

    1995-01-01

    Three models of interprofessional education appropriate for serving youth at risk for substance abuse are described. The evaluation of the team case study model indicated that the participants were more sensitive to the needs of the youths, experienced increased comfort in consulting other agents, and were more confident in their ability to select…

  14. Empirically evaluating decision-analytic models.

    PubMed

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
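    The consistency criterion described above (a model's uncertainty range overlapping the study's confidence interval) is simple to state in code; the numeric ranges below are the 30-year cumulative cancer risks quoted in the abstract.

```python
# Sketch of the consistency check used in the evaluation: a model output
# is "consistent" with an empirical study when the model's uncertainty
# range overlaps the study's confidence interval.

def overlaps(model_range, study_ci):
    """True if the two closed intervals intersect."""
    (m_lo, m_hi), (s_lo, s_hi) = model_range, study_ci
    return m_lo <= s_hi and s_lo <= m_hi

# Inadequately treated CIN: model 39.6% (30.9-49.7) vs study 37.5% (28.4-48.3)
print(overlaps((30.9, 49.7), (28.4, 48.3)))
# Appropriately treated: model 1.0% (0.7-1.3) vs study 1.5% (0.4-3.3)
print(overlaps((0.7, 1.3), (0.4, 3.3)))
```

    Both checks pass, matching the abstract's conclusion that the microsimulation's outputs were consistent with the independent study.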

  15. University of North Carolina Caries Risk Assessment Study: comparisons of high risk prediction, any risk prediction, and any risk etiologic models.

    PubMed

    Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M

    1992-12-01

    The purpose of this analysis is to compare three different statistical models for predicting children likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of disease outcome. The two "Prediction" models included both risk factor variables thought to cause dental caries and indicator variables that are associated with dental caries, but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure (none vs. any 3-yr increment) was used in the "Any Risk Etiologic Model" and the "Any Risk Prediction Model". Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction Model". The variables that are significant in these models vary across grades and sites, but are more consistent in the Etiologic model than in the Prediction models. However, among the three sets of models, the Any Risk Prediction Models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction Models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.

  16. Empirical evaluation of the market price of risk using the CIR model

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Torosantucci, L.; Uboldi, A.

    2007-03-01

    We describe a simple but effective method for the estimation of the market price of risk. The basic idea is to compare the results obtained by following two different approaches in the application of the Cox-Ingersoll-Ross (CIR) model. In the first case, we apply the non-linear least squares method to cross-sectional data (i.e., all rates of a single day). In the second case, we consider the short rate obtained by means of the first procedure as a proxy of the real market short rate. Starting from this new proxy, we evaluate the parameters of the CIR model by means of martingale estimation techniques. The estimate of the market price of risk is obtained by comparing the results of these two techniques, since this approach makes it possible to isolate the market price of risk and to evaluate, under the Local Expectations Hypothesis, the risk premium given by the market for different maturities. As a test case, we apply the method to data from the European Fixed Income Market.
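    The cross-sectional least-squares step rests on the closed-form CIR zero-coupon bond price, P(r, τ) = A(τ)·exp(−B(τ)·r), from which model yields can be compared with observed yields. The sketch below implements that formula with illustrative parameters (not estimates from any market).

```python
import math

# Sketch of the CIR zero-coupon bond price and yield used in
# cross-sectional fitting. Parameters (kappa, theta, sigma) and the short
# rate r below are illustrative assumptions, not fitted values.

def cir_bond_price(r, tau, kappa, theta, sigma):
    """Closed-form CIR price of a zero-coupon bond with maturity tau."""
    h = math.sqrt(kappa ** 2 + 2.0 * sigma ** 2)
    denom = (kappa + h) * (math.exp(h * tau) - 1.0) + 2.0 * h
    B = 2.0 * (math.exp(h * tau) - 1.0) / denom
    A = (2.0 * h * math.exp((kappa + h) * tau / 2.0) / denom) ** (2.0 * kappa * theta / sigma ** 2)
    return A * math.exp(-B * r)

def cir_yield(r, tau, kappa, theta, sigma):
    """Continuously compounded model yield for maturity tau."""
    return -math.log(cir_bond_price(r, tau, kappa, theta, sigma)) / tau

# Short maturities yield roughly the short rate; long maturities are
# pulled toward the long-run level theta.
print(cir_yield(0.03, 0.01, kappa=0.5, theta=0.05, sigma=0.1))
print(cir_yield(0.03, 30.0, kappa=0.5, theta=0.05, sigma=0.1))
```

    Fitting (kappa, theta, sigma, r) so that these model yields match one day's observed curve is the paper's first step; re-estimating under the physical measure then isolates the market price of risk.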

  17. Risk-based cost-benefit analysis for evaluating microbial risk mitigation in a drinking water system.

    PubMed

    Bergion, Viktor; Lindhe, Andreas; Sokolova, Ekaterina; Rosén, Lars

    2018-04-01

    Waterborne outbreaks of gastrointestinal diseases can cause large costs to society. Risk management needs to be holistic and transparent in order to reduce these risks in an effective manner. Microbial risk mitigation measures in a drinking water system were investigated using a novel approach combining probabilistic risk assessment and cost-benefit analysis. Lake Vomb in Sweden was used to exemplify and illustrate the risk-based decision model. Four mitigation alternatives were compared, where the first three alternatives, A1-A3, represented connecting 25, 50 and 75%, respectively, of on-site wastewater treatment systems in the catchment to the municipal wastewater treatment plant. The fourth alternative, A4, represented installing a UV-disinfection unit in the drinking water treatment plant. Quantitative microbial risk assessment was used to estimate the positive health effects in terms of quality adjusted life years (QALYs), resulting from the four mitigation alternatives. The health benefits were monetised using a unit cost per QALY. For each mitigation alternative, the net present value of health and environmental benefits and investment, maintenance and running costs was calculated. The results showed that only A4 can reduce the risk (probability of infection) below the World Health Organization guidelines of 10^-4 infections per person per year (looking at the 95th percentile). Furthermore, all alternatives resulted in a negative net present value. However, the net present value would be positive (looking at the 50th percentile using a 1% discount rate) if non-monetised benefits (e.g. increased property value divided evenly over the studied time horizon and reduced microbial risks posed to animals), estimated at 800-1200 SEK (€100-150) per connected on-site wastewater treatment system per year, were included. This risk-based decision model creates a robust and transparent decision support tool. It is flexible enough to be tailored and applied to local
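    The net present value calculation described above reduces to discounting each year's net benefit (monetised health benefits, e.g. QALYs times a unit cost, minus investment, maintenance and running costs). A minimal sketch; the cash flows in the usage comment are invented, not the study's figures:

```python
def net_present_value(net_benefits, discount_rate):
    # net_benefits[t]: monetised benefits minus costs in year t (t = 0, 1, ...)
    return sum(b / (1.0 + discount_rate) ** t
               for t, b in enumerate(net_benefits))

# e.g. an upfront cost of 100 followed by a benefit of 110 one year later
# breaks even at a 10% discount rate and is worthwhile at 1%
```

    The abstract's sensitivity of the result to a 1% discount rate falls directly out of this formula: lowering the rate raises the present value of benefits that arrive late in the time horizon.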

  18. Risk-adjusted Outcomes of Clinically Relevant Pancreatic Fistula Following Pancreatoduodenectomy: A Model for Performance Evaluation.

    PubMed

    McMillan, Matthew T; Soi, Sameer; Asbun, Horacio J; Ball, Chad G; Bassi, Claudio; Beane, Joal D; Behrman, Stephen W; Berger, Adam C; Bloomston, Mark; Callery, Mark P; Christein, John D; Dixon, Elijah; Drebin, Jeffrey A; Castillo, Carlos Fernandez-Del; Fisher, William E; Fong, Zhi Ven; House, Michael G; Hughes, Steven J; Kent, Tara S; Kunstman, John W; Malleo, Giuseppe; Miller, Benjamin C; Salem, Ronald R; Soares, Kevin; Valero, Vicente; Wolfgang, Christopher L; Vollmer, Charles M

    2016-08-01

    To evaluate surgical performance in pancreatoduodenectomy using clinically relevant postoperative pancreatic fistula (CR-POPF) occurrence as a quality indicator. Accurate assessment of surgeon and institutional performance requires (1) standardized definitions for the outcome of interest and (2) a comprehensive risk-adjustment process to control for differences in patient risk. This multinational, retrospective study of 4301 pancreatoduodenectomies involved 55 surgeons at 15 institutions. Risk for CR-POPF was assessed using the previously validated Fistula Risk Score, and pancreatic fistulas were stratified by International Study Group criteria. CR-POPF variability was evaluated and hierarchical regression analysis assessed individual surgeon and institutional performance. There was considerable variability in both CR-POPF risk and occurrence. Factors increasing the risk for CR-POPF development included increasing Fistula Risk Score (odds ratio 1.49 per point, P < 0.00001) and octreotide (odds ratio 3.30, P < 0.00001). When adjusting for risk, performance outliers were identified at the surgeon and institutional levels. Of the top 10 surgeons (≥15 cases) for nonrisk-adjusted performance, only 6 remained in this high-performing category following risk adjustment. This analysis of pancreatic fistulas following pancreatoduodenectomy demonstrates considerable variability in both the risk and occurrence of CR-POPF among surgeons and institutions. Disparities in patient risk between providers reinforce the need for comprehensive, risk-adjusted modeling when assessing performance based on procedure-specific complications. Furthermore, beyond inherent patient risk factors, surgical decision-making influences fistula outcomes.
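    Risk-adjusted provider comparison of the kind described above is often summarized by an observed-to-expected ratio, where the expected count comes from a validated risk model such as the Fistula Risk Score. This is a simplified sketch of the idea, not the study's hierarchical regression; the counts in the test are invented:

```python
def observed_to_expected(observed_events, predicted_risks):
    # predicted_risks: model-based CR-POPF probability for each of a
    # provider's cases; a ratio below 1 suggests fewer fistulas than the
    # provider's case mix predicts (better-than-expected performance)
    return observed_events / sum(predicted_risks)
```

    Ranking surgeons by raw fistula rate and by this ratio can disagree, which is exactly why only 6 of the top 10 unadjusted performers remained high performers after risk adjustment.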

  19. Human disease mortality kinetics are explored through a chain model embodying principles of extreme value theory and competing risks.

    PubMed

    Juckett, D A; Rosenberg, B

    1992-04-21

    The distributions for human disease-specific mortality exhibit two striking characteristics: survivorship curves that intersect near the longevity limit; and, the clustering of best-fitting Weibull shape parameter values into groups centered on integers. Correspondingly, we have hypothesized that the distribution intersections result from either competitive processes or population partitioning and the integral clustering in the shape parameter results from the occurrence of a small number of rare, rate-limiting events in disease progression. In this report we initiate a theoretical examination of these questions by exploring serial chain model dynamics and parameteric competing risks theory. The links in our chain models are composed of more than one bond, where the number of bonds in a link are denoted the link size and are the number of events necessary to break the link and, hence, the chain. We explored chains with all links of the same size or with segments of the chain composed of different size links (competition). Simulations showed that chain breakage dynamics depended on the weakest-link principle and followed kinetics of extreme-values which were very similar to human mortality kinetics. In particular, failure distributions for simple chains were Weibull-type extreme-value distributions with shape parameter values that were identifiable with the integral link size in the limit of infinite chain length. Furthermore, for chains composed of several segments of differing link size, the survival distributions for the various segments converged at a point in the S(t) tails indistinguishable from human data. This was also predicted by parameteric competing risks theory using Weibull underlying distributions. In both the competitive chain simulations and the parametric competing risks theory, however, the shape values for the intersecting distributions deviated from the integer values typical of human data. We conclude that rare events can be the source of
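    A minimal simulation of the single-size chain model, under stated assumptions: bond lifetimes are independent Exp(1) variates (an illustrative choice; any continuous lifetime with power-law behaviour near zero gives the same limit), a link of size k fails once all k of its bonds have failed, and the chain fails with its weakest (first-failing) link. The Weibull shape recovered from the simulated failure times should approach the integral link size, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def chain_failure_times(n_chains, n_links, link_size):
    # bond lifetimes ~ Exp(1); a link fails when all of its bonds have
    # failed; the chain fails at its weakest link (the weakest-link principle)
    bonds = rng.exponential(1.0, size=(n_chains, n_links, link_size))
    link_failures = bonds.max(axis=2)
    return link_failures.min(axis=1)

def weibull_shape(times):
    # slope of the Weibull plot, log(-log(1 - F)) vs log(t), estimates the shape
    t = np.sort(times)
    n = t.size
    F = (np.arange(1, n + 1) - 0.5) / n
    return np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)[0]
```

    Running this with link_size = 2 yields a fitted shape near 2; segments of differing link size (the competitive case) can be simulated by concatenating blocks before taking the minimum.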

  20. Technical Evaluation of the NASA Model for Cancer Risk to Astronauts Due to Space Radiation

    NASA Technical Reports Server (NTRS)

    2012-01-01

    At the request of NASA, the National Research Council's (NRC's) Committee for Evaluation of Space Radiation Cancer Risk Model reviewed a number of changes that NASA proposes to make to its model for estimating the risk of radiation-induced cancer in astronauts. The NASA model in current use was last updated in 2005, and the proposed model would incorporate recent research directed at improving the quantification and understanding of the health risks posed by the space radiation environment. NASA's proposed model is defined by the 2011 NASA report Space Radiation Cancer Risk Projections and Uncertainties--2010. The committee's evaluation is based primarily on this source, which is referred to hereafter as the 2011 NASA report, with mention of specific sections or tables. The overall process for estimating cancer risks due to low linear energy transfer (LET) radiation exposure has been fully described in reports by a number of organizations. The approaches described in the reports from all of these expert groups are quite similar. NASA's proposed space radiation cancer risk assessment model calculates, as its main output, age- and gender-specific risk of exposure-induced death (REID) for use in the estimation of mission and astronaut-specific cancer risk. The model also calculates the associated uncertainties in REID. The general approach for estimating risk and uncertainty in the proposed model is broadly similar to that used for the current (2005) NASA model and is based on recommendations by the National Council on Radiation Protection and Measurements. However, NASA's proposed model has significant changes with respect to the following: the integration of new findings and methods into its components by taking into account newer epidemiological data and analyses, new radiobiological data indicating that quality factors differ for leukemia and solid cancers, an improved method for specifying quality factors in terms of radiation track structure concepts as

  1. On a true value of risk

    NASA Astrophysics Data System (ADS)

    Kozine, Igor

    2018-04-01

    The paper suggests looking at probabilistic risk quantities and concepts through the prism of accepting one of two views: that a true value of risk exists, or that it does not. It is argued that discussions until now have been primarily focused on closely related topics that are different from the topic of the current paper. The paper examines the operational consequences of adhering to each of the views and contrasts them. It is demonstrated that the operational differences in how and which probabilistic measures can be assessed, and in how they can be interpreted, are tangible. In particular, this concerns prediction intervals, the use of Bayes' rule, models of complete ignorance, hierarchical models of uncertainty, assignment of probabilities over a possibility space and interpretation of derived probabilistic measures. Behavioural implications of favouring either view are also briefly described.

  2. Brownfields and health risks--air dispersion modeling and health risk assessment at landfill redevelopment sites.

    PubMed

    Ofungwu, Joseph; Eget, Steven

    2006-07-01

    Redevelopment of landfill sites in the New Jersey-New York metropolitan area for recreational (golf courses), commercial, and even residential purposes seems to be gaining acceptance among municipal planners and developers. Landfill gas generation, which includes methane and potentially toxic nonmethane compounds, usually continues long after closure of the landfill. It is therefore prudent to evaluate potential health risks associated with exposure to gas emissions before redevelopment of the landfill sites as recreational, commercial, and, especially, residential properties. Unacceptably high health risks would call for risk management measures such as limiting the development to commercial/recreational rather than residential uses, stringent gas control mechanisms, interior air filtration, etc. A methodology is presented for applying existing models to estimate residual landfill hazardous compounds emissions and to quantify associated health risks. Besides the toxic gas constituents of landfill emissions, other risk-related issues concerning buried waste, landfill leachate, and explosive gases were qualitatively evaluated. Five contiguously located landfill sites in New Jersey intended for residential and recreational redevelopment were used to exemplify the approach.

  3. Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model.

    PubMed

    Tang, Jiechen; Zhou, Chao; Yuan, Xinyu; Sriboonchitta, Songsak

    2015-01-01

    This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, the extreme value distribution (EVT) is fitted to the tails of the residuals to model marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels.
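    The final simulation step (estimating VaR and CVaR from simulated portfolios) can be sketched as follows. This is a simplified stand-in, not the paper's pipeline: the ARMA-GARCH filtering, EVT tail fits, and Student t-copula are omitted, and the five return series are drawn from a plain multivariate normal (a Gaussian copula with normal marginals) with hypothetical volatility and correlation.

```python
import numpy as np

rng = np.random.default_rng(42)

def var_cvar(portfolio_returns, alpha=0.95):
    # VaR: the alpha-quantile of the loss distribution;
    # CVaR: the mean loss in the tail at or beyond the VaR
    losses = -np.asarray(portfolio_returns)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

# five hypothetical daily return series, coupled through a Gaussian
# dependence structure (pairwise correlation 0.5, daily volatility 2%)
corr = np.full((5, 5), 0.5) + 0.5 * np.eye(5)
cov = (0.02 ** 2) * corr
returns = rng.multivariate_normal(np.zeros(5), cov, size=100_000)
portfolio = returns.mean(axis=1)          # equally weighted portfolio
var95, cvar95 = var_cvar(portfolio, alpha=0.95)
```

    Swapping the normal draws for draws filtered through GARCH residuals with EVT-fitted tails, and the Gaussian dependence for a Student t-copula, reproduces the paper's comparison; the t-copula's tail dependence is what drives its larger VaR and CVaR estimates.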

  5. Using School Lotteries to Evaluate the Value-Added Model

    ERIC Educational Resources Information Center

    Deutsch, Jonah

    2013-01-01

    There has been an active debate in the literature over the validity of value-added models. In this study, the author tests the central assumption of value-added models that school assignment is random relative to expected test scores conditional on prior test scores, demographic variables, and other controls. He uses a Chicago charter school's…

  6. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weight should be given to the more important classes. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
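    The class-dependent regularization idea can be sketched in an LS-SVM-style dual system. This is an illustrative reading of the "C-variable" modification, not the authors' exact formulation: each sample's squared error is penalized by a class-specific C, so the important class (here the positive one) gets the smaller ridge term. Kernel choice, parameter values, and data are all made up.

```python
import numpy as np

def cvlssvc_train(X, y, C_pos=10.0, C_neg=1.0, gamma=0.5):
    """Least squares SVM classifier with class-dependent regularization."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-gamma * sq)                     # RBF kernel matrix
    C = np.where(y > 0, C_pos, C_neg)           # per-sample regularization
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                              # constraint: alphas sum to 0
    A[1:, 0] = 1.0                              # bias term
    A[1:, 1:] = K + np.diag(1.0 / C)            # kernel plus class-wise ridge
    rhs = np.concatenate(([0.0], y.astype(float)))
    sol = np.linalg.solve(A, rhs)               # one linear system, no QP
    b, alpha = sol[0], sol[1:]

    def predict(Z):
        Kz = np.exp(-gamma * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
        return np.sign(Kz @ alpha + b)

    return predict
```

    The appeal of the LS-SVC family is visible here: training is a single linear solve rather than a quadratic program, and the class weighting costs nothing extra.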

  7. Risk modeling in prospective diabetes studies: Association and predictive value of anthropometrics.

    PubMed

    Jafari-Koshki, Tohid; Arsang-Jang, Shahram; Aminorroaya, Ashraf; Mansourian, Marjan; Amini, Masoud

    2018-04-03

    This study aimed to introduce and apply modern statistical techniques for assessing the association and predictive value of risk factors in first-degree relatives (FDRs) of patients with diabetes, using repeatedly measured diabetes data. We used data from 1319 FDRs of patients with diabetes followed for 8 years. Association and predictive performance of weight (Wt), body mass index (BMI), waist and hip circumferences (WC and HC) and their ratio (WHR), waist-height ratio (WHtR) and a body shape index (ABSI) in relation to future diabetes were evaluated by using Cox regression and joint longitudinal-survival modeling. According to Cox regression, in the total sample, WC, HC, Wt, WHtR and BMI had significant direct associations with diabetes (all p < 0.01), with the best predictive ability for WHtR (concordance probability estimate = 0.575). Joint modeling suggested direct associations between diabetes and WC, WHR, Wt, WHtR and BMI in the total sample (all p < 0.05). According to the LPML criterion, WHtR was the best predictor in both the total sample and females, with LPML of -2666.27 and -2185.67, respectively. However, according to AUC criteria, BMI had the best predictive performance, with AUC-JM = 0.7629 and dAUC-JM = 0.5883 in the total sample. In females, both AUC criteria indicated that WC was the best predictor, followed by WHtR. WC, WHR, Wt, WHtR and BMI are among candidate anthropometric measures to be monitored in diabetes prevention programs. Larger multi-ethnic, multivariate studies are warranted to assess interactions and identify the best predictors in subgroups. Copyright © 2018 Diabetes India. Published by Elsevier Ltd. All rights reserved.

  8. Predictive value of general movements' quality in low-risk infants for minor neurological dysfunction and behavioural problems at preschool age.

    PubMed

    Bennema, Anne N; Schendelaar, Pamela; Seggers, Jorien; Haadsma, Maaike L; Heineman, Maas Jan; Hadders-Algra, Mijna

    2016-03-01

    General movement (GM) assessment is a well-established tool to predict cerebral palsy in high-risk infants. Little is known on the predictive value of GM assessment in low-risk populations. To assess the predictive value of GM quality in early infancy for the development of the clinically relevant form of minor neurological dysfunction (complex MND) and behavioral problems at preschool age. Prospective cohort study. A total of 216 members of the prospective Groningen Assisted Reproductive Techniques (ART) cohort study were included in this study. ART did not affect neurodevelopmental outcome of these relatively low-risk infants born to subfertile parents. GM quality was determined at 2 weeks and 3 months. At 18 months and 4 years, the Hempel neurological examination was used to assess MND. At 4 years, parents completed the Child Behavior Checklist; this resulted in the total problem score (TPS), internalizing problem score (IPS), and externalizing problem score (EPS). Predictive values of definitely (DA) and mildly (MA) abnormal GMs were calculated. DA GMs at 2 weeks were associated with complex MND at 18 months and atypical TPS and IPS at 4 years (all p<0.05). Sensitivity and positive predictive value of DA GMs at 2 weeks were rather low (13%-60%); specificity and negative predictive value were excellent (92%-99%). DA GMs at 3 months occurred too infrequently to calculate prediction. MA GMs were not associated with outcome. GM quality as a single predictor for complex MND and behavioral problems at preschool age has limited clinical value in children at low risk for developmental disorders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
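    The four predictive values reported above (sensitivity, specificity, PPV, NPV) come directly from the 2x2 table of the binary predictor (e.g. definitely abnormal GMs) against the outcome. A minimal sketch; the counts in the test are invented, not the cohort's:

```python
def predictive_values(tp, fp, fn, tn):
    # tp, fp, fn, tn: cells of the 2x2 table of predictor vs outcome
    return {
        "sensitivity": tp / (tp + fn),   # positives among affected children
        "specificity": tn / (tn + fp),   # negatives among unaffected children
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

    The abstract's pattern (low sensitivity and PPV, excellent specificity and NPV) is typical when the predictor is rare and the outcome uncommon: most cells fall in tn, so a negative test is highly reassuring while a positive one remains uncertain.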

  9. Models for At Risk Youth. Final Report.

    ERIC Educational Resources Information Center

    Woloszyk, Carl A.

    Secondary data sources, including the ERIC and National Dropout Prevention Center databases, were reviewed to identify programs and strategies effective in keeping at-risk youth in school and helping them make successful school-to-work transitions. The dropout prevention model that was identified features a system of prevention, mediation,…

  10. Facility-specific radiation exposure risks and their implications for radiation workers at Department of Energy laboratories

    NASA Astrophysics Data System (ADS)

    Davis, Adam Christopher

    This research develops a new framework for evaluating the occupational risks of exposure to hazardous substances in any setting where As Low As Reasonably Achievable (ALARA) practices are mandated or used. The evaluation is performed by developing a hypothesis-test-based procedure for evaluating the homogeneity of various epidemiological cohorts, and thus the appropriateness of the application of aggregate data-pooling techniques to those cohorts. A statistical methodology is then developed as an alternative to aggregate pooling for situations in which individual cohorts show heterogeneity between them and are thus unsuitable for pooled analysis. These methods are then applied to estimate the all-cancer mortality risks incurred by workers at four Department of Energy nuclear weapons laboratories. Both linear, no-threshold and dose-bin averaged risks are calculated, and it is further shown that aggregate analysis tends to overestimate the risks with respect to those calculated by the methods developed in this work. The risk estimates developed in Chapter 2 are, in Chapter 3, applied to assess the risks to workers engaged in americium recovery operations at Los Alamos National Laboratory. The work described in Chapter 3 develops a full radiological protection assessment for the new americium recovery project, including development of exposure cases, creation and modification of MCNP5 models, development of a time-and-motion study, and the final synthesis of all data. This work also develops a new risk-based method of determining whether administrative controls, such as staffing increases, are ALARA-optimized. The EPA's estimate of the value of a statistical life is applied to these risk estimates to determine a monetary value for risk. The rate of change of this "risk value" (marginal risk) is then compared with the rate of change of workers' compensation as additional workers are added to the project to reduce the dose (and therefore, presumably, risk) to each

  11. Modeling Success: Using Preenrollment Data to Identify Academically At-Risk Students

    ERIC Educational Resources Information Center

    Gansemer-Topf, Ann M.; Compton, Jonathan; Wohlgemuth, Darin; Forbes, Greg; Ralston, Ekaterina

    2015-01-01

    Improving student success and degree completion is one of the core principles of strategic enrollment management. To address this principle, institutional data were used to develop a statistical model to identify academically at-risk students. The model employs multiple linear regression techniques to predict students at risk of earning below a…

  12. Evaluating the risk of death via the hematopoietic syndrome mode for prolonged exposure of nuclear workers to radiation delivered at very low rates.

    PubMed

    Scott, B R; Lyzlov, A F; Osovets, S V

    1998-05-01

    During a Phase-I effort, studies were planned to evaluate deterministic (nonstochastic) effects of chronic exposure of nuclear workers at the Mayak atomic complex in the former Soviet Union to relatively high levels (> 0.25 Gy) of ionizing radiation. The Mayak complex has been used, since the late 1940s, to produce plutonium for nuclear weapons. Workers at Site A of the complex were involved in plutonium breeding using nuclear reactors, and some were exposed to relatively large doses of gamma rays plus relatively small neutron doses. The Weibull normalized-dose model, which has been set up to evaluate the risk of specific deterministic effects of combined, continuous exposure of humans to alpha, beta, and gamma radiations, is here adapted for chronic exposure to gamma rays and neutrons during repeated 6-h work shifts--as occurred for some nuclear workers at Site A. Using the adapted model, key conclusions were reached that will facilitate a Phase-II study of deterministic effects among Mayak workers. These conclusions include the following: (1) neutron doses may be more important for Mayak workers than for Japanese A-bomb victims in Hiroshima and can be accounted for using an adjusted dose (which accounts for neutron relative biological effectiveness); (2) to account for dose-rate effects, normalized dose X (a dimensionless fraction of an LD50 or ED50) can be evaluated in terms of an adjusted dose; (3) nonlinear dose-response curves for the risk of death via the hematopoietic mode can be converted to linear dose-response curves (for low levels of risk) using a newly proposed dimensionless dose, D = X^V, in units of Oklad (where D is pronounced "deh"), and V is the shape parameter in the Weibull model; (4) for X ≤ Xo, where Xo is the threshold normalized dose, D = 0; (5) unlike absorbed dose, the dose D can be averaged over different Mayak workers in order to calculate the average risk of death via the hematopoietic mode for the population exposed at Site A
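    The Weibull normalized-dose idea can be illustrated as follows. The parameterization, LD50, shape, and threshold values below are hypothetical placeholders chosen for clarity, not the values used for Mayak workers; the point is only that the risk is a Weibull function of the normalized dose X, so at low risk it is linear in D = X^V.

```python
import math

def hematopoietic_death_risk(dose_gy, ld50_gy=3.0, shape=6.0, x_threshold=0.0):
    # hypothetical parameterization: X = dose / LD50 (normalized dose);
    # risk = 1 - exp(-ln2 * X**shape), so the risk at the LD50 is exactly 0.5;
    # for small X, risk ~ ln2 * X**shape, i.e. linear in D = X**shape
    x = dose_gy / ld50_gy
    if x <= x_threshold:          # below the threshold normalized dose, D = 0
        return 0.0
    return 1.0 - math.exp(-math.log(2.0) * x ** shape)
```

    Because D, unlike absorbed dose, enters the low-risk regime linearly, it can be averaged across workers to give a population-average risk, which is conclusion (5) above.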

  13. Technical Evaluation of the NASA Model for Cancer Risk to Astronauts Due to Space Radiation

    NASA Technical Reports Server (NTRS)

    2012-01-01

    At the request of NASA, the National Research Council's (NRC's) Committee for Evaluation of Space Radiation Cancer Risk Model reviewed a number of changes that NASA proposes to make to its model for estimating the risk of radiation-induced cancer in astronauts. The NASA model in current use was last updated in 2005, and the proposed model would incorporate recent research directed at improving the quantification and understanding of the health risks posed by the space radiation environment. NASA's proposed model is defined by the 2011 NASA report Space Radiation Cancer Risk Projections and Uncertainties--2010 (Cucinotta et al., 2011). The committee's evaluation is based primarily on this source, which is referred to hereafter as the 2011 NASA report, with mention of specific sections or tables cited more formally as Cucinotta et al. (2011). The overall process for estimating cancer risks due to low linear energy transfer (LET) radiation exposure has been fully described in reports by a number of organizations. They include, more recently: (1) the "BEIR VII Phase 2" report from the NRC's Committee on Biological Effects of Ionizing Radiation (BEIR) (NRC, 2006); (2) Studies of Radiation and Cancer from the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR, 2006); (3) the 2007 Recommendations of the International Commission on Radiological Protection (ICRP), ICRP Publication 103 (ICRP, 2007); and (4) the Environmental Protection Agency's (EPA's) report EPA Radiogenic Cancer Risk Models and Projections for the U.S. Population (EPA, 2011). The approaches described in the reports from all of these expert groups are quite similar. NASA's proposed space radiation cancer risk assessment model calculates, as its main output, age- and gender-specific risk of exposure-induced death (REID) for use in the estimation of mission and astronaut-specific cancer risk. The model also calculates the associated uncertainties in REID.
The general approach for

  14. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    PubMed

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. 
In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become
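    The c-statistic discussed above has a direct interpretation that a short computation makes concrete: it is the probability that a randomly chosen event case receives a higher predicted risk than a randomly chosen non-event case. A minimal pairwise implementation (the data in the test are invented):

```python
def c_statistic(outcomes, scores):
    # concordance probability between predicted risk and observed outcome;
    # ties count one-half, making this equivalent to the area under the ROC curve
    pos = [s for o, s in zip(outcomes, scores) if o == 1]
    neg = [s for o, s in zip(outcomes, scores) if o == 0]
    concordant = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return concordant / (len(pos) * len(neg))
```

    Seen this way, the study's finding is intuitive: restricting the case mix removes easy-to-discriminate pairs (e.g. elective vs emergent cases), so the c-statistic falls even when the model's calibration, checked graphically over risk deciles, is unchanged.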

  15. A Probabilistic Model for Cushing's Syndrome Screening in At-Risk Populations: A Prospective Multicenter Study.

    PubMed

    León-Justel, Antonio; Madrazo-Atutxa, Ainara; Alvarez-Rios, Ana I; Infantes-Fontán, Rocio; Garcia-Arnés, Juan A; Lillo-Muñoz, Juan A; Aulinas, Anna; Urgell-Rull, Eulàlia; Boronat, Mauro; Sánchez-de-Abajo, Ana; Fajardo-Montañana, Carmen; Ortuño-Alonso, Mario; Salinas-Vert, Isabel; Granada, Maria L; Cano, David A; Leal-Cerro, Alfonso

    2016-10-01

    Cushing's syndrome (CS) is challenging to diagnose. Increased prevalence of CS in specific patient populations has been reported, but routine screening for CS remains questionable. To decrease the diagnostic delay and improve disease outcomes, simple new screening methods for CS in at-risk populations are needed. To develop and validate a simple scoring system to predict CS based on clinical signs and an easy-to-use biochemical test. Observational, prospective, multicenter. Referral hospital. A cohort of 353 patients attending endocrinology units for outpatient visits. All patients were evaluated with late-night salivary cortisol (LNSC) and a low-dose dexamethasone suppression test for CS. Diagnosis or exclusion of CS. Twenty-six cases of CS were diagnosed in the cohort. A risk scoring system was developed by logistic regression analysis, and cutoff values were derived from a receiver operating characteristic curve. This risk score included clinical signs and symptoms (muscular atrophy, osteoporosis, and dorsocervical fat pad) and LNSC levels. The estimated area under the receiver operating characteristic curve was 0.93, with a sensitivity of 96.2% and specificity of 82.9%. We developed a risk score to predict CS in an at-risk population. This score may help to identify at-risk patients in non-endocrinological settings such as primary care, but external validation is warranted.

  16. Risk aversion affects economic values of blue fox breeding scheme.

    PubMed

    Peura, J; Kempe, R; Strandén, I; Rydhmer, L

    2016-12-01

    The profit and production of an average Finnish blue fox farm was simulated using a deterministic bio-economic farm model. Risk was included using the Arrow-Pratt absolute risk aversion coefficient and profit variance. Risk-rated economic values were calculated for pregnancy rate, litter loss, litter size, pelt size, pelt quality, pelt colour clarity, feed efficiency and eye infection. With high absolute risk aversion, economic values were lower than with low absolute risk aversion. Economic values were highest for litter loss (18.16 and 26.42 EUR), litter size (13.27 and 19.40 EUR), pregnancy (11.99 and 18.39 EUR) and eye infection (12.39 and 13.81 EUR). Sensitivity analysis showed that selection pressure for improved eye health depended strongly on the proportion of culled animals among infected animals and much less on the proportion of infected animals. The economic value of feed efficiency was lower than expected (6.06 and 8.03 EUR). However, it was almost the same magnitude as pelt quality (7.30 and 7.30 EUR) and higher than the economic value of pelt size (3.37 and 5.26 EUR). Risk factors should be considered in blue fox breeding schemes because they change the relative importance of traits. © 2016 Blackwell Verlag GmbH.
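    One standard way an Arrow-Pratt absolute risk aversion coefficient enters such calculations (not necessarily this paper's exact model) is through a certainty-equivalent profit, where a trait's risk-rated economic value is its effect on the mean profit minus a penalty on its contribution to profit variance. The numbers in the test are invented:

```python
def risk_rated_value(delta_mean_profit, delta_profit_variance, risk_aversion):
    # certainty-equivalent change in profit per unit trait improvement:
    # CE = E[profit] - (a / 2) * Var[profit], the mean-variance approximation
    # to expected utility with Arrow-Pratt absolute risk aversion coefficient a
    return delta_mean_profit - 0.5 * risk_aversion * delta_profit_variance
```

    This structure reproduces the abstract's qualitative finding: raising the risk aversion coefficient shrinks every economic value, and shrinks variance-heavy traits the most, reshuffling their relative importance.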

  17. Predicting Readmission at Early Hospitalization Using Electronic Clinical Data: An Early Readmission Risk Score.

    PubMed

    Tabak, Ying P; Sun, Xiaowu; Nunez, Carlos M; Gupta, Vikas; Johannes, Richard S

    2017-03-01

    Identifying patients at high risk for readmission early during hospitalization may aid efforts to reduce readmissions. We sought to develop an early readmission risk predictive model using automated clinical data available at hospital admission. We developed an early readmission risk model using a derivation cohort and validated the model with a validation cohort. We used a published Acute Laboratory Risk of Mortality Score as an aggregated measure of clinical severity at admission and the number of hospital discharges in the previous 90 days as a measure of disease progression. We then evaluated the administrative data-enhanced model by adding principal and secondary diagnoses and other variables. We examined the c-statistic change when additional variables were added to the model. There were 1,195,640 adult discharges from 70 hospitals; 39.8% were male, and the median age was 63 years (first and third quartiles: 43, 78). The 30-day readmission rate was 11.9% (n=142,211). The early readmission model yielded a graded relationship between readmission and both the Acute Laboratory Risk of Mortality Score and the number of previous discharges within 90 days. The model c-statistic was 0.697 with good calibration. When administrative variables were added to the model, the c-statistic increased to 0.722. Automated clinical data can generate a readmission risk score early in hospitalization with fair discrimination, which may have practical value in aiding early care transitions. Adding administrative data increases predictive accuracy. The administrative data-enhanced model may be used for hospital comparison and outcomes research.
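
The c-statistics reported above (0.697 for the clinical model, 0.722 with administrative data) are concordance probabilities: the chance that a randomly chosen readmitted patient is assigned a higher predicted risk than a randomly chosen non-readmitted patient. A minimal sketch with invented risks and outcomes:

```python
# Minimal c-statistic (ROC area) computed by pairwise concordance.

def c_statistic(risks, outcomes):
    """Probability a readmitted patient is assigned a higher risk than a
    non-readmitted one; ties count one half."""
    pairs = 0.0
    concordant = 0.0
    for r1, y1 in zip(risks, outcomes):
        for r0, y0 in zip(risks, outcomes):
            if y1 == 1 and y0 == 0:
                pairs += 1
                if r1 > r0:
                    concordant += 1
                elif r1 == r0:
                    concordant += 0.5
    return concordant / pairs

risks = [0.10, 0.20, 0.20, 0.40, 0.80]  # hypothetical predicted risks
outcomes = [0, 0, 1, 0, 1]              # 1 = readmitted within 30 days
auc = c_statistic(risks, outcomes)      # 4.5 concordant of 6 pairs
```

Comparing models then amounts to computing this statistic for each model's predictions on the same validation cohort, as done above when the administrative variables were added.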

  18. EVALUATION OF PHYSIOLOGY COMPUTER MODELS, AND THE FEASIBILITY OF THEIR USE IN RISK ASSESSMENT.

    EPA Science Inventory

    This project will evaluate the current state of quantitative models that simulate physiological processes, and the how these models might be used in conjunction with the current use of PBPK and BBDR models in risk assessment. The work will include a literature search to identify...

  19. Multiple attribute decision making model and application to food safety risk evaluation.

    PubMed

    Ma, Lihua; Chen, Hong; Yan, Huizhe; Yang, Lifeng; Wu, Lifeng

    2017-01-01

    Decision making for supermarket food purchases is characterized by network relationships. This paper analyzes factors that influence supermarket food selection and proposes a supplier evaluation index system based on the whole process of food production. The authors established an interval-valued intuitionistic fuzzy set evaluation model based on characteristics of the network relationships among decision makers, and validated it with a multiple attribute decision making case study. The proposed model thus provides a reliable, accurate method for multiple attribute decision making.

  20. Adverse conditions at the workplace are associated with increased suicide risk.

    PubMed

    Baumert, Jens; Schneider, Barbara; Lukaschek, Karoline; Emeny, Rebecca T; Meisinger, Christa; Erazo, Natalia; Dragano, Nico; Ladwig, Karl-Heinz

    2014-10-01

    The present study addressed potential harms of a negative working environment for employed subjects. The main aim was to evaluate whether adverse working conditions and job strain are related to an increase in suicide mortality. The study population consisted of 6817 participants drawn from the MONICA/KORA Augsburg, Germany, surveys conducted in 1984-1995, who were employed at baseline examination and followed up for 12.6 years on average. Adverse working conditions were assessed by an instrument of 16 items covering chronobiological, physical and psychosocial conditions at the workplace; job strain was assessed as defined by Karasek. Suicide risks were estimated by Cox regression adjusted for suicide-related risk factors. A total of 28 suicide cases were observed during follow-up. High levels of adversity in chronobiological/physical working conditions significantly increased the risk for suicide mortality (HR 3.28, 95% CI 1.43-7.54) compared to low/intermediate levels in a model adjusted for age, sex and survey (p value 0.005). Additional adjustment for living alone, low educational level, smoking, high alcohol consumption, obesity and depressed mood attenuated this effect (HR 2.73), but significance remained (p value 0.022). Adverse psychosocial working conditions and job strain, in contrast, had no impact on subsequent suicide mortality risk (p values > 0.200). A negative working environment concerning chronobiological or physical conditions at the workplace had an unfavourable impact on suicide mortality risk, even after controlling for relevant suicide-related risk factors. Employer interventions aimed at improving workplace conditions might be considered a suitable means to prevent suicides among employees. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Characterizing uncertainty when evaluating risk management metrics: risk assessment modeling of Listeria monocytogenes contamination in ready-to-eat deli meats.

    PubMed

    Gallagher, Daniel; Ebel, Eric D; Gallagher, Owen; Labarre, David; Williams, Michael S; Golden, Neal J; Pouillot, Régis; Dearfield, Kerry L; Kause, Janell

    2013-04-01

    This report illustrates how the uncertainty about food safety metrics may influence the selection of a performance objective (PO). To accomplish this goal, we developed a model concerning Listeria monocytogenes in ready-to-eat (RTE) deli meats. This application used a second-order Monte Carlo model that simulates L. monocytogenes concentrations through a series of steps: the food-processing establishment, transport, retail, the consumer's home and consumption. The model accounted for growth inhibitor use and retail cross contamination, and applied an FAO/WHO dose-response model for evaluating the probability of illness. An appropriate level of protection (ALOP) risk metric was selected as the average risk of illness per serving across all consumed servings per annum, and the model was used to solve for the corresponding performance objective (PO) risk metric as the maximum allowable L. monocytogenes concentration (cfu/g) at the processing establishment where regulatory monitoring would occur. Given uncertainty about model inputs, an uncertainty distribution of the PO was estimated. Additionally, we considered how RTE deli meats contaminated at levels above the PO would be handled by the industry, using three alternative approaches. Points on the PO distribution represent the probability that, if the industry complies with a particular PO, the resulting risk per serving is less than or equal to the target ALOP. For example, assuming (1) a target ALOP of -6.41 log10 risk of illness per serving, (2) industry concentrations above the PO that are re-distributed throughout the remaining concentration distribution and (3) no dose-response uncertainty, establishment POs of -4.98 and -4.39 log10 cfu/g would be required for 90% and 75% confidence that the target ALOP is met, respectively. The PO concentrations from this example scenario are more stringent than the current typical monitoring level of an absence in 25 g (i.e., -1.40 log10 cfu/g) or a stricter criterion of absence
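
The solve-for-PO idea can be illustrated with a toy one-dimensional Monte Carlo: simulate serving-level concentrations, compute the mean risk per serving under a candidate concentration ceiling, and search for the largest ceiling that still meets the ALOP. All distributions and parameters below (lognormal concentrations, exponential dose-response constant, serving size, the rule that non-compliant servings are removed) are invented placeholders, not the report's FAO/WHO model or its second-order uncertainty analysis.

```python
import math
import random

# Toy Monte Carlo sketch: find the largest concentration ceiling (PO, in
# log10 cfu/g) whose mean per-serving risk of illness meets the ALOP.

random.seed(7)
log_conc = [random.gauss(-3.0, 2.0) for _ in range(20000)]  # log10 cfu/g

SERVING_G = 50.0        # assumed serving size (g)
R_DOSE = 1e-9           # assumed exponential dose-response parameter
TARGET = 10 ** -6.41    # target ALOP: mean risk of illness per serving

def mean_risk(po_log10):
    """Mean per-serving risk if servings above the PO are removed."""
    kept = [c for c in log_conc if c <= po_log10]
    risks = [1 - math.exp(-R_DOSE * (10 ** c) * SERVING_G) for c in kept]
    return sum(risks) / len(risks)

def solve_po(lo=-6.0, hi=6.0, iters=40):
    """Bisect for the largest PO whose mean risk still meets the ALOP."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_risk(mid) <= TARGET:
            lo = mid        # ceiling can be relaxed
        else:
            hi = mid        # ceiling must tighten
    return lo

po = solve_po()
```

The report's second-order structure would repeat this solve across sampled input-parameter sets, yielding the PO uncertainty distribution discussed above.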

  2. Accurate Diabetes Risk Stratification Using Machine Learning: Role of Missing Value and Outliers.

    PubMed

    Maniruzzaman, Md; Rahman, Md Jahanur; Al-MehediHasan, Md; Suri, Harman S; Abedin, Md Menhazul; El-Baz, Ayman; Suri, Jasjit S

    2018-04-10

    Diabetes mellitus is a group of metabolic diseases in which blood sugar levels are too high. About 8.8% of the world's population was diabetic in 2017, and it is projected that this will reach nearly 10% by 2045. The major challenge is that applying machine learning-based classifiers to such data sets for risk stratification leads to lower performance. Thus, our objective was to develop an optimized and robust machine learning (ML) system under the assumption that missing values or outliers, if replaced by a median configuration, will yield higher risk stratification accuracy. This ML-based risk stratification is designed, optimized and evaluated, where the features are extracted and optimized from six feature selection techniques (random forest, logistic regression, mutual information, principal component analysis, analysis of variance, and Fisher discriminant ratio) and combined with ten different types of classifiers (linear discriminant analysis, quadratic discriminant analysis, naïve Bayes, Gaussian process classification, support vector machine, artificial neural network, Adaboost, logistic regression, decision tree, and random forest) under the hypothesis that both missing values and outliers, when replaced by computed medians, will improve the risk stratification accuracy. The Pima Indian diabetes dataset (768 patients: 268 diabetic and 500 controls) was used. Our results demonstrate that replacing the missing values and outliers by group median and median values, respectively, and further using the combination of random forest feature selection and random forest classification yields an accuracy, sensitivity, specificity, positive predictive value, negative predictive value and area under the curve of 92.26%, 95.96%, 79.72%, 91.14%, 91.20%, and 0.93, respectively. This is an improvement of 10% over previously developed techniques published in the literature. The system was validated for its stability and reliability. The RF-based model showed the best
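
The median-replacement preprocessing described above can be sketched as follows. The data, the class-wise grouping, and the 2-standard-deviation outlier rule are illustrative assumptions rather than the paper's exact procedure.

```python
from statistics import median

# Hedged sketch: missing values get the class-wise (group) median,
# outliers get the overall median.

def group_medians(values, labels):
    """Median of the observed (non-None) values per outcome class."""
    meds = {}
    for cls in set(labels):
        obs = [v for v, y in zip(values, labels) if y == cls and v is not None]
        meds[cls] = median(obs)
    return meds

def impute_and_clip(values, labels, z=2.0):
    meds = group_medians(values, labels)
    # 1) fill missing entries with the class median
    filled = [meds[y] if v is None else v for v, y in zip(values, labels)]
    # 2) replace outliers (beyond z standard deviations of the filled data)
    #    with the overall median -- one simple rule among many
    m = sum(filled) / len(filled)
    sd = (sum((v - m) ** 2 for v in filled) / len(filled)) ** 0.5
    med_all = median(filled)
    return [med_all if abs(v - m) > z * sd else v for v in filled]

glucose = [90, None, 110, 500, 95, None, 100, 105]  # hypothetical readings
outcome = [0, 0, 1, 1, 0, 1, 1, 0]                  # 1 = diabetic
clean = impute_and_clip(glucose, outcome)
```

After this cleaning step, the cleaned features would feed the feature selection and classifier combinations enumerated in the abstract.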

  3. The care of Filipino juvenile offenders in residential facilities evaluated using the risk-need-responsivity model.

    PubMed

    Spruit, Anouk; Wissink, Inge B; Stams, Geert Jan J M

    2016-01-01

    According to the risk-need-responsivity model of offender assessment and rehabilitation, treatment should target specific factors that are related to re-offending. This study evaluates the residential care of Filipino juvenile offenders using the risk-need-responsivity model. Risk analyses and criminogenic needs assessments (parenting style, aggression, relationships with peers, empathy, and moral reasoning) were conducted using data on 55 juvenile offenders in four residential facilities. The psychological care was assessed using a checklist. Statistical analyses showed that the juvenile offenders had a high risk of re-offending, high aggression, difficulties in making pro-social friends, and delayed socio-moral development. The psychological programs in the residential facilities were evaluated to be poor. The psychological care available in the facilities fitted poorly with the characteristics of the juvenile offenders and did not comply with the risk-need-responsivity model. Implications for research and practice are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Passenger Value Structure Model

    DOT National Transportation Integrated Search

    1980-07-01

    The objective of this research was to develop a model of the passenger's value structure which would reveal the role and importance of perceived security and other system characteristics on the passenger's evaluation and use of transit systems. The g...

  5. FRAMEWORK FOR EVALUATION OF PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODELS FOR USE IN SAFETY OR RISK ASSESSMENT

    EPA Science Inventory

    ABSTRACT

    Proposed applications of increasingly sophisticated biologically-based computational models, such as physiologically-based pharmacokinetic (PBPK) models, raise the issue of how to evaluate whether the models are adequate for proposed uses including safety or risk ...

  6. Prognostic Value of the Thrombolysis in Myocardial Infarction Risk Score in ST-Elevation Myocardial Infarction Patients With Left Ventricular Dysfunction (from the EPHESUS Trial).

    PubMed

    Popovic, Batric; Girerd, Nicolas; Rossignol, Patrick; Agrinier, Nelly; Camenzind, Edoardo; Fay, Renaud; Pitt, Bertram; Zannad, Faiez

    2016-11-15

    The Thrombolysis in Myocardial Infarction (TIMI) risk score remains a robust prediction tool for short-term and midterm outcome in patients with ST-elevation myocardial infarction (STEMI). However, the validity of this risk score in patients with STEMI with reduced left ventricular ejection fraction (LVEF) remains unclear. A total of 2,854 patients with STEMI with early coronary revascularization participating in the randomized EPHESUS (Eplerenone Post-Acute Myocardial Infarction Heart Failure Efficacy and Survival Study) trial were analyzed. TIMI risk score was calculated at baseline, and its predictive value was evaluated using C-indexes from Cox models. The increase in reclassification of other variables in addition to TIMI score was assessed using the net reclassification index. TIMI risk score had a poor predictive accuracy for all-cause mortality (C-index values at 30 days and 1 year ≤0.67) and recurrent myocardial infarction (MI; C-index values ≤0.60). Among TIMI score items, diabetes/hypertension/angina, heart rate >100 beats/min, and systolic blood pressure <100 mm Hg were inconsistently associated with survival, whereas none of the TIMI score items, aside from age, were significantly associated with MI recurrence. Using a constructed predictive model, lower LVEF, lower estimated glomerular filtration rate (eGFR), and previous MI were significantly associated with all-cause mortality. The predictive accuracy of this model, which included LVEF and eGFR, was fair for both 30-day and 1-year all-cause mortality (C-index values ranging from 0.71 to 0.75). In conclusion, TIMI risk score demonstrates poor discrimination in predicting mortality or recurrent MI in patients with STEMI with reduced LVEF. LVEF and eGFR are major factors that should not be ignored by predictive risk scores in this population. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Conditional Toxicity Value (CTV) Predictor: An In Silico Approach for Generating Quantitative Risk Estimates for Chemicals.

    PubMed

    Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A

    2018-05-01

    Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop the QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q2 of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from the QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.

  8. Evaluation of different radon guideline values based on characterization of ecological risk and visualization of lung cancer mortality trends in British Columbia, Canada.

    PubMed

    Branion-Calles, Michael C; Nelson, Trisalyn A; Henderson, Sarah B

    2015-11-19

    There is no safe concentration of radon gas, but guideline values provide threshold concentrations that are used to map areas at higher risk. These values vary between regions, countries, and organizations, which can lead to differential classification of risk. For example, the World Health Organization suggests a 100 Bq m(-3) value, while Health Canada recommends 200 Bq m(-3). Our objective was to describe how different thresholds characterized ecological radon risk and their visual association with lung cancer mortality trends in British Columbia, Canada. Eight threshold values between 50 and 600 Bq m(-3) were identified, and classes of radon vulnerability were defined based on whether the observed 95(th) percentile radon concentration was above or below each value. A balanced random forest algorithm was used to model vulnerability, and the results were mapped. We compared high-vulnerability areas, their estimated populations, and differences in lung cancer mortality trends stratified by smoking prevalence and sex. Classification accuracy improved as the threshold concentrations decreased and the area classified as high vulnerability increased. The majority of the population lived within areas of lower vulnerability regardless of the threshold value. Thresholds as low as 50 Bq m(-3) were associated with higher lung cancer mortality, even in areas with low smoking prevalence. Temporal trends in lung cancer mortality were increasing for women, while decreasing for men. Radon contributes to lung cancer in British Columbia. The results of the study contribute evidence supporting the use of a reference level lower than the current guideline of 200 Bq m(-3) for the province.

  9. Echocardiography and risk prediction in advanced heart failure: incremental value over clinical markers.

    PubMed

    Agha, Syed A; Kalogeropoulos, Andreas P; Shih, Jeffrey; Georgiopoulou, Vasiliki V; Giamouzis, Grigorios; Anarado, Perry; Mangalat, Deepa; Hussain, Imad; Book, Wendy; Laskar, Sonjoy; Smith, Andrew L; Martin, Randolph; Butler, Javed

    2009-09-01

    Incremental value of echocardiography over clinical parameters for outcome prediction in advanced heart failure (HF) is not well established. We evaluated 223 patients with advanced HF receiving optimal therapy (91.9% angiotensin-converting enzyme inhibitor/angiotensin receptor blocker, 92.8% beta-blockers, 71.8% biventricular pacemaker, and/or defibrillator use). The Seattle Heart Failure Model (SHFM) was used as the reference clinical risk prediction scheme. The incremental value of echocardiographic parameters for event prediction (death or urgent heart transplantation) was measured by the improvement in fit and discrimination achieved by addition of standard echocardiographic parameters to the SHFM. After a median follow-up of 2.4 years, there were 38 (17.0%) events (35 deaths; 3 urgent transplants). The SHFM had likelihood ratio (LR) chi(2) 32.0 and C statistic 0.756 for event prediction. Left ventricular end-systolic volume, stroke volume, and severe tricuspid regurgitation were independent echocardiographic predictors of events. The addition of these parameters to SHFM improved LR chi(2) to 72.0 and C statistic to 0.866 (P < .001 and P=.019, respectively). Reclassifying the SHFM-predicted risk with use of the echocardiography-added model resulted in improved prognostic separation. Addition of standard echocardiographic variables to the SHFM results in significant improvement in risk prediction for patients with advanced HF.

  10. Evaluation and Enhancement of Calibration in the American College of Surgeons NSQIP Surgical Risk Calculator.

    PubMed

    Liu, Yaoming; Cohen, Mark E; Hall, Bruce L; Ko, Clifford Y; Bilimoria, Karl Y

    2016-08-01

    The American College of Surgeon (ACS) NSQIP Surgical Risk Calculator has been widely adopted as a decision aid and informed consent tool by surgeons and patients. Previous evaluations showed excellent discrimination and combined discrimination and calibration, but model calibration alone, and potential benefits of recalibration, were not explored. Because lack of calibration can lead to systematic errors in assessing surgical risk, our objective was to assess calibration and determine whether spline-based adjustments could improve it. We evaluated Surgical Risk Calculator model calibration, as well as discrimination, for each of 11 outcomes modeled from nearly 3 million patients (2010 to 2014). Using independent random subsets of data, we evaluated model performance for the Development (60% of records), Validation (20%), and Test (20%) datasets, where prediction equations from the Development dataset were recalibrated using restricted cubic splines estimated from the Validation dataset. We also evaluated performance on data subsets composed of higher-risk operations. The nonrecalibrated Surgical Risk Calculator performed well, but there was a slight tendency for predicted risk to be overestimated for lowest- and highest-risk patients and underestimated for moderate-risk patients. After recalibration, this distortion was eliminated, and p values for miscalibration were most often nonsignificant. Calibration was also excellent for subsets of higher-risk operations, though observed calibration was reduced due to instability associated with smaller sample sizes. Performance of NSQIP Surgical Risk Calculator models was shown to be excellent and improved with recalibration. Surgeons and patients can rely on the calculator to provide accurate estimates of surgical risk. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  11. Evaluation of markers and risk prediction models: Overview of relationships between NRI and decision-analytic measures

    PubMed Central

    Calster, Ben Van; Vickers, Andrew J; Pencina, Michael J; Baker, Stuart G; Timmerman, Dirk; Steyerberg, Ewout W

    2014-01-01

    BACKGROUND For the evaluation and comparison of markers and risk prediction models, various novel measures have recently been introduced as alternatives to the commonly used difference in the area under the ROC curve (ΔAUC). The Net Reclassification Improvement (NRI) is increasingly popular for comparing predictions with one or more risk thresholds, but decision-analytic approaches have also been proposed. OBJECTIVE We aimed to identify the mathematical relationships between novel performance measures for the situation in which a single risk threshold T is used to classify patients as having the outcome or not. METHODS We considered the NRI and three utility-based measures that take misclassification costs into account: difference in Net Benefit (ΔNB), difference in Relative Utility (ΔRU), and weighted NRI (wNRI). We illustrate the behavior of these measures in 1938 women suspected of ovarian cancer (prevalence 28%). RESULTS The three utility-based measures appear to be transformations of each other, and hence always lead to consistent conclusions. On the other hand, conclusions may differ when using the standard NRI, depending on the adopted risk threshold T, the prevalence P and the obtained differences in sensitivity and specificity of the two models that are compared. In the case study, adding the CA-125 tumor marker to a baseline set of covariates yielded a negative NRI yet a positive value for the utility-based measures. CONCLUSIONS The decision-analytic measures are each appropriate to indicate the clinical usefulness of an added marker or to compare prediction models, since these measures each reflect misclassification costs. This is of practical importance as these measures may thus adjust conclusions based on purely statistical measures. A range of risk thresholds should be considered in applying these measures. PMID:23313931
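
The single-threshold relationships the authors analyze can be made concrete. With prevalence P and risk threshold T, Net Benefit is NB = P·Se − (1−P)·(1−Sp)·T/(1−T), so the difference between two models is ΔNB = P·ΔSe + (1−P)·ΔSp·T/(1−T), whereas the standard two-category NRI is simply ΔSe + ΔSp. The numbers below are invented to reproduce the qualitative finding that the two measures can disagree in sign.

```python
# Single-threshold comparison of delta Net Benefit vs the standard NRI.

def delta_net_benefit(d_se, d_sp, prevalence, threshold):
    """dNB = P*dSe + (1-P)*dSp*T/(1-T): weighs errors by threshold odds."""
    w = threshold / (1 - threshold)  # odds-based misclassification weight
    return prevalence * d_se + (1 - prevalence) * d_sp * w

def nri(d_se, d_sp):
    """Two-category NRI: unweighted sum of the changes."""
    return d_se + d_sp

P, T = 0.28, 0.10           # illustrative prevalence and risk threshold
d_se, d_sp = -0.04, 0.06    # added marker: less sensitive, more specific

dnb = delta_net_benefit(d_se, d_sp, P, T)  # negative: no clinical gain
improvement = nri(d_se, d_sp)              # positive: NRI says "better"
```

Because the NRI ignores prevalence and the threshold's misclassification odds, it can endorse a model change that the utility-based measures reject, which is exactly the discrepancy reported in the case study (with signs reversed there).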

  12. Using risk maps to link land value damage and risk as basis of flexible risk management for brownfield redevelopment.

    PubMed

    Chen, I-chun; Ma, Hwong-wen

    2013-02-01

    Brownfield redevelopment involves numerous uncertain financial risks associated with market demand and land value. To reduce the uncertainty of the specific impact of land value and social costs, this study develops small-scale risk maps to determine the relationship between population risk (PR) and damaged land value (DLV) to facilitate flexible land reutilisation plans. This study used the spatial variability of exposure parameters in each village to develop the contaminated site-specific risk maps. Considering both risk and cost, the risk level that most affected land use in this study area ranged mainly from 1.00×10(-6) to 1.00×10(-5). Village 2 showed the potential for cost-effective conversion with contaminated land development. If the risk of the remediation target were set at 5.00×10(-6), the DLV could be reduced by NT$15,005 million for the land developer. The land developer will consider the net benefit by quantifying the trade-off between changes in land value and the cost to human health. In this study, small-scale risk maps can illuminate the economic incentive potential for contaminated site redevelopment through the adjustment of land value damage and human health risk. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Distinct encoding of risk and value in economic choice between multiple risky options☆

    PubMed Central

    Wright, Nicholas D.; Symmonds, Mkael; Dolan, Raymond J.

    2013-01-01

    Neural encoding of value-based stimuli is suggested to involve representations of summary statistics, including risk and expected value (EV). A more complex, but ecologically more common, context is when multiple risky options are evaluated together. However, it is unknown whether encoding related to option evaluation in these situations involves similar principles. Here we employed fMRI during a task that parametrically manipulated EV and risk in two simultaneously presented lotteries, both of which contained either gains or losses. We found representations of EV in medial prefrontal cortex and anterior insula, an encoding that was dependent on which option was chosen (i.e. chosen and unchosen EV) and whether the choice was over gains or losses. Parietal activity reflected whether the riskier or surer option was selected, whilst activity in a network of regions that also included parietal cortex reflected both combined risk and difference in risk for the two options. Our findings provide support for the idea that summary statistics underpin a representation of value-based stimuli, and further that these summary statistics undergo distinct forms of encoding. PMID:23684860

  14. Model Continuation High Schools: Social-Cognitive Factors That Contribute to Re-Engaging At-Risk Students Emotionally, Behaviorally, and Cognitively towards Graduation

    ERIC Educational Resources Information Center

    Sumbera, Becky

    2017-01-01

    This three-phase, two-method qualitative study explored and identified policies, programs, and practices that school-site administrators perceived as most effective in reengaging at-risk students emotionally, behaviorally, and cognitively at 10 California Model Continuation High Schools (MCHS). Eccles' expectancy-value theoretical framework was…

  15. Can Predictive Modeling Identify Head and Neck Oncology Patients at Risk for Readmission?

    PubMed

    Manning, Amy M; Casper, Keith A; St Peter, Kay; Wilson, Keith M; Mark, Jonathan R; Collar, Ryan M

    2018-05-01

    Objective Unplanned readmission within 30 days is a contributor to health care costs in the United States. The use of predictive modeling during hospitalization to identify patients at risk for readmission offers a novel approach to quality improvement and cost reduction. Study Design Two-phase study including retrospective analysis of prospectively collected data followed by prospective longitudinal study. Setting Tertiary academic medical center. Subjects and Methods Prospectively collected data for patients undergoing surgical treatment for head and neck cancer from January 2013 to January 2015 were used to build predictive models for readmission within 30 days of discharge using logistic regression, classification and regression tree (CART) analysis, and random forests. One model (logistic regression) was then placed prospectively into the discharge workflow from March 2016 to May 2016 to determine the model's ability to predict which patients would be readmitted within 30 days. Results In total, 174 admissions had descriptive data. Thirty-two were excluded due to incomplete data. Logistic regression, CART, and random forest predictive models were constructed using the remaining 142 admissions. When applied to 106 consecutive prospective head and neck oncology patients at the time of discharge, the logistic regression model predicted readmissions with a specificity of 94%, a sensitivity of 47%, a negative predictive value of 90%, and a positive predictive value of 62% (odds ratio, 14.9; 95% confidence interval, 4.02-55.45). Conclusion Prospectively collected head and neck cancer databases can be used to develop predictive models that can accurately predict which patients will be readmitted. This offers valuable support for quality improvement initiatives and readmission-related cost reduction in head and neck cancer care.

  16. Long-Term Post-CABG Survival: Performance of Clinical Risk Models Versus Actuarial Predictions.

    PubMed

    Carr, Brendan M; Romeiser, Jamie; Ruan, Joyce; Gupta, Sandeep; Seifert, Frank C; Zhu, Wei; Shroyer, A Laurie

    2016-01-01

    Clinical risk models are commonly used to predict short-term coronary artery bypass grafting (CABG) mortality but are less commonly used to predict long-term mortality. The added value of long-term mortality clinical risk models over traditional actuarial models has not been evaluated. To address this, the predictive performance of a long-term clinical risk model was compared with that of an actuarial model to identify the clinical variable(s) most responsible for any differences observed. Long-term mortality for 1028 CABG patients was estimated using the Hannan New York State clinical risk model and an actuarial model (based on age, gender, and race/ethnicity). Vital status was assessed using the Social Security Death Index. Observed/expected (O/E) ratios were calculated, and the models' predictive performances were compared using a nested c-index approach. Linear regression analyses identified the subgroup of risk factors driving the differences observed. Mortality rates were 3%, 9%, and 17% at one-, three-, and five years, respectively (median follow-up: five years). The clinical risk model provided more accurate predictions. Greater divergence between model estimates occurred with increasing long-term mortality risk, with baseline renal dysfunction identified as a particularly important driver of these differences. Long-term mortality clinical risk models provide enhanced predictive power compared to actuarial models. Using the Hannan risk model, a patient's long-term mortality risk can be accurately assessed and subgroups of higher-risk patients can be identified for enhanced follow-up care. More research appears warranted to refine long-term CABG clinical risk models. © 2015 The Authors. Journal of Cardiac Surgery Published by Wiley Periodicals, Inc.

  18. Research and Evaluations of the Health Aspects of Disasters, Part IX: Risk-Reduction Framework.

    PubMed

    Birnbaum, Marvin L; Daily, Elaine K; O'Rourke, Ann P; Loretti, Alessandro

    2016-06-01

    A disaster is a failure of resilience to an event. Mitigating the risks that a hazard will progress into a destructive event, or increasing the resilience of a society-at-risk, requires careful analysis, planning, and execution. The Disaster Logic Model (DLM) is used to define the value (effects, costs, and outcome(s)), impacts, and benefits of interventions directed at risk reduction. A Risk-Reduction Framework, based on the DLM, details the processes involved in hazard mitigation and/or capacity-building interventions to augment the resilience of a community or to decrease the risk that a secondary event will develop. This Framework provides the structure to systematically undertake and evaluate risk-reduction interventions. It applies to all interventions aimed at hazard mitigation and/or increasing the absorbing, buffering, or response capacities of a community-at-risk for a primary or secondary event that could result in a disaster. The Framework utilizes the structure provided by the DLM and consists of 14 steps: (1) hazards and risks identification; (2) historical perspectives and predictions; (3) selection of hazard(s) to address; (4) selection of appropriate indicators; (5) identification of current resilience standards and benchmarks; (6) assessment of the current resilience status; (7) identification of resilience needs; (8) strategic planning; (9) selection of an appropriate intervention; (10) operational planning; (11) implementation; (12) assessments of outputs; (13) synthesis; and (14) feedback. Each of these steps is a transformation process that is described in detail. Emphasis is placed on the role of Coordination and Control during planning, implementation of risk-reduction/capacity building interventions, and evaluation. Birnbaum ML , Daily EK , O'Rourke AP , Loretti A . Research and evaluations of the health aspects of disasters, part IX: Risk-Reduction Framework. Prehosp Disaster Med. 2016;31(3):309-325.

  19. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    NASA Astrophysics Data System (ADS)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more

  20. Predictor characteristics necessary for building a clinically useful risk prediction model: a simulation study.

    PubMed

    Schummers, Laura; Himes, Katherine P; Bodnar, Lisa M; Hutcheon, Jennifer A

    2016-09-21

    Compelled by the intuitive appeal of predicting each individual patient's risk of an outcome, there is a growing interest in risk prediction models. While the statistical methods used to build prediction models are increasingly well understood, the literature offers little insight to researchers seeking to gauge a priori whether a prediction model is likely to perform well for their particular research question. The objective of this study was to inform the development of new risk prediction models by evaluating model performance under a wide range of predictor characteristics. Data from all births to overweight or obese women in British Columbia, Canada from 2004 to 2012 (n = 75,225) were used to build a risk prediction model for preeclampsia. The data were then augmented with simulated predictors of the outcome with pre-set prevalence values and univariable odds ratios. We built 120 risk prediction models that included known demographic and clinical predictors, and one, three, or five of the simulated variables. Finally, we evaluated standard model performance criteria (discrimination, risk stratification capacity, calibration, and Nagelkerke's R2) for each model. Findings from our models built with simulated predictors demonstrated the predictor characteristics required for a risk prediction model to adequately discriminate cases from non-cases and to adequately classify patients into clinically distinct risk groups. Several predictor characteristics can yield well-performing risk prediction models; however, these characteristics are not typical of predictor-outcome relationships in many population-based or clinical data sets. Novel predictors must be both strongly associated with the outcome and prevalent in the population to be useful for clinical prediction modeling (e.g., one predictor with prevalence ≥20% and odds ratio ≥8, or three predictors with prevalence ≥10% and odds ratios ≥4). Area under the receiver operating characteristic curve
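
    The prevalence/odds-ratio thresholds quoted above can be checked with a small simulation. The sketch below draws a single binary predictor with 20% prevalence and a univariable odds ratio of 8 (the baseline outcome risk and sample size are assumptions, not the study's data) and computes its stand-alone AUC, illustrating why even a "strong" predictor yields only modest discrimination on its own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

base_risk = 0.05     # assumed outcome risk for unexposed individuals
prevalence = 0.20    # predictor prevalence, as in the abstract's example
odds_ratio = 8.0     # univariable odds ratio, as in the abstract's example

# Draw the binary predictor, then the outcome with odds scaled by the OR.
x = rng.random(n) < prevalence
base_odds = base_risk / (1 - base_risk)
odds = np.where(x, base_odds * odds_ratio, base_odds)
y = rng.random(n) < odds / (1 + odds)

# For a single binary predictor, AUC reduces to (sensitivity + specificity) / 2.
sens = x[y].mean()          # P(predictor present | case)
spec = (~x)[~y].mean()      # P(predictor absent | non-case)
auc = (sens + spec) / 2
```

    With these settings the AUC lands near 0.72, which is why the abstract argues that multiple strong, prevalent predictors are needed for clinically useful discrimination.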

  1. [The model of perioperative risk assessment in elderly patients - interim analysis].

    PubMed

    Grabowska, Izabela; Ścisło, Lucyna; Pietruszka, Szymon; Walewska, Elzbieta; Paszko, Agata; Siarkiewicz, Benita; Richter, Piotr; Budzyński, Andrzej; Szczepanik, Antoni M

    2017-04-21

    Demographic changes in contemporary society require implementation of proper perioperative care of elderly patients due to an increased risk of perioperative complications in this group. Preoperative assessment of health status identifies risks and enables preventive interventions, improving outcomes of surgical treatment. The Comprehensive Geriatric Assessment contains numerous diagnostic tests and consultations, which is expensive and difficult to use in everyday practice. The development of a simplified model of perioperative assessment of elderly patients will help identify the group of patients who require further diagnostic workup. The aim of the study is to evaluate the usefulness of the tests used in a proposed model of perioperative risk assessment in elderly patients. In a group of 178 patients older than 64 years admitted for surgical procedures, a battery of tests was performed. The proposed model of perioperative risk assessment included: the Charlson Comorbidity Index, ADL (activities of daily living), the TUG test (timed "up and go" test), MNA (mini nutritional assessment), AMTS (abbreviated mental test score), spirometry, and measurement of respiratory muscle strength (Pimax, Pemax). The distribution of abnormal results of each test was analysed. A Charlson Index over 6 points was recorded in 10.1% of patients (15.1% in cancer patients). An abnormal result of the TUG test was observed in 32.1%. A risk of malnutrition in the MNA test was identified in 29.7% (39.2% in cancer patients). Abnormal test results at the level of 10-30% indicate the potential diagnostic value of the Charlson Comorbidity Index, the TUG test, and the MNA in the evaluation of perioperative risk in elderly patients.

  2. Risk-dependent reward value signal in human prefrontal cortex

    PubMed Central

    Tobler, Philippe N.; Christopoulos, George I.; O'Doherty, John P.; Dolan, Raymond J.; Schultz, Wolfram

    2009-01-01

    When making choices under uncertainty, people usually consider both the expected value and risk of each option, and choose the one with the higher utility. Expected value increases the expected utility of an option for all individuals. Risk increases the utility of an option for risk-seeking individuals, but decreases it for risk-averse individuals. In two separate experiments, one involving imperative (no-choice) situations and the other choice situations, we investigated how predicted risk and expected value aggregate into a common reward signal in the human brain. Blood oxygen level dependent responses in lateral regions of the prefrontal cortex increased monotonically with increasing reward value in the absence of risk in both experiments. Risk enhanced these responses in risk-seeking participants, but reduced them in risk-averse participants. The aggregate value and risk responses in lateral prefrontal cortex contrasted with pure value signals independent of risk in the striatum. These results demonstrate an aggregate risk and value signal in the prefrontal cortex that would be compatible with basic assumptions underlying the mean-variance approach to utility. PMID:19369207

  3. Probabilistic and deterministic evaluation of uncertainty in a local scale multi-risk analysis

    NASA Astrophysics Data System (ADS)

    Lari, S.; Frattini, P.; Crosta, G. B.

    2009-04-01

    We performed a probabilistic multi-risk analysis (QPRA) at the local scale for a 420 km2 area surrounding the town of Brescia (Northern Italy). We calculated the expected annual loss in terms of economic damage and life loss for a set of risk scenarios of flood, earthquake, and industrial accident with different occurrence probabilities and different intensities. The territorial unit used for the study was the census parcel, of variable area, for which a large amount of data was available. Due to the lack of information related to the evaluation of the hazards, to the value of the exposed elements (e.g., residential and industrial area, population, lifelines, sensitive elements such as schools and hospitals) and to the process-specific vulnerability, and to a lack of knowledge of the processes (floods, industrial accidents, earthquakes), we assigned an uncertainty to the input variables of the analysis. For some variables a homogeneous uncertainty was assigned over the whole study area, as for instance for the number of buildings of various typologies and for the event occurrence probability. In other cases, as for phenomenon intensity (e.g., depth of water during a flood) and probability of impact, the uncertainty was defined in relation to the census parcel area. In fact, assuming some variables to be homogeneously distributed or averaged over the census parcels introduces a larger error for larger parcels. We propagated the uncertainty through the analysis using three different models, describing the reliability of the output (risk) as a function of the uncertainty of the inputs (scenarios and vulnerability functions). We developed a probabilistic approach based on Monte Carlo simulation, and two deterministic models, namely First Order Second Moment (FOSM) and Point Estimate (PE). In general, similar values of expected losses are obtained with the three models. The uncertainty of the final risk value is in all three cases around 30% of the expected value. Each of the models
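
    A minimal Monte Carlo sketch of the uncertainty-propagation idea (the input distributions below are illustrative assumptions, not the Brescia inputs): the expected annual loss is the mean of probability x vulnerability x exposed value over many realizations, and the ratio of the standard deviation to the mean is the relative uncertainty of the risk estimate, the quantity reported as roughly 30% in the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative single-scenario risk: loss = P(event) x vulnerability x value.
# All three input distributions are assumptions, not the study's inputs.
p_event = rng.triangular(0.005, 0.01, 0.02, n)    # annual occurrence probability
vulnerability = rng.triangular(0.1, 0.3, 0.5, n)  # fraction of value lost given the event
value = rng.normal(1e7, 1e6, n)                   # exposed value

loss = p_event * vulnerability * value
expected_loss = loss.mean()
relative_uncertainty = loss.std() / expected_loss  # counterpart of the ~30% figure
```

    FOSM and Point Estimate methods approximate the same two moments analytically from the input means and variances, which is why the three approaches yield similar expected losses.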

  4. Using risk-adjustment models to identify high-cost risks.

    PubMed

    Meenan, Richard T; Goodman, Michael J; Fishman, Paul A; Hornbrook, Mark C; O'Keeffe-Rosetti, Maureen C; Bachman, Donald J

    2003-11-01

    We examine the ability of various publicly available risk models to identify high-cost individuals and enrollee groups using multi-HMO administrative data. Five risk-adjustment models (the Global Risk-Adjustment Model [GRAM], Diagnostic Cost Groups [DCGs], Adjusted Clinical Groups [ACGs], RxRisk, and Prior-expense) were estimated on a multi-HMO administrative data set of 1.5 million individual-level observations for 1995-1996. Models produced distributions of individual-level annual expense forecasts for comparison to actual values. Prespecified "high-cost" thresholds were set within each distribution. The area under the receiver operating characteristic curve (AUC) for "high-cost" prevalences of 1% and 0.5% was calculated, as was the proportion of "high-cost" dollars correctly identified. Results are based on a separate 106,000-observation validation dataset. For "high-cost" prevalence targets of 1% and 0.5%, ACGs, DCGs, GRAM, and Prior-expense are very comparable in overall discrimination (AUCs, 0.83-0.86). Given a 0.5% prevalence target and a 0.5% prediction threshold, DCGs, GRAM, and Prior-expense captured $963,000 (approximately 3%) more "high-cost" sample dollars than other models. DCGs captured the most "high-cost" dollars among enrollees with asthma, diabetes, and depression; predictive performance among demographic groups (Medicaid members, members over 64, and children under 13) varied across models. Risk models can efficiently identify enrollees who are likely to generate future high costs and who could benefit from case management. The dollar value of improved prediction performance of the most accurate risk models should be meaningful to decision-makers and encourage their broader use for identifying high costs.
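
    The two performance measures used in this record, discrimination (AUC) at a 1% "high-cost" prevalence and the share of "high-cost" dollars captured, can be computed without any modeling library. The sketch below uses synthetic expenses and a noisy individual-level forecast (both are assumptions, not the multi-HMO data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical annual expenses and a noisy individual-level forecast.
actual = rng.lognormal(mean=7.0, sigma=1.5, size=n)
forecast = actual * rng.lognormal(mean=0.0, sigma=1.0, size=n)

# Flag the top 1% of actual spenders as "high cost".
high_cost = actual >= np.quantile(actual, 0.99)

# Rank-based (Mann-Whitney) AUC: P(a random case outranks a random control).
ranks = forecast.argsort().argsort() + 1.0
n_pos = high_cost.sum()
n_neg = n - n_pos
auc = (ranks[high_cost].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Share of "high-cost" dollars captured when flagging the top 1% of forecasts.
flagged = forecast >= np.quantile(forecast, 0.99)
captured = actual[flagged & high_cost].sum() / actual[high_cost].sum()
```

    The dollar-capture statistic matters because, as the abstract notes, a few percentage points of additional captured spending can translate into large absolute amounts for a health plan.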

  5. STakeholder-Objective Risk Model (STORM): Determining the aggregated risk of multiple contaminant hazards in groundwater well catchments

    NASA Astrophysics Data System (ADS)

    Enzenhoefer, R.; Binning, P. J.; Nowak, W.

    2015-09-01

    Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any point in time, which then affects the pumped water quality upon transport through the aquifer. In such situations, estimating the overall risk is not trivial, and three key questions emerge: (1) How to aggregate the impacts from different contaminants and spill locations to an overall, cumulative impact on the value at risk? (2) How to properly account for the stochastic nature of spill events when converting the aggregated impact to a risk estimate? (3) How do the overall risk and subsequent decision making depend on stakeholder objectives, i.e., on the values at risk, risk attitudes, and risk metrics, which can vary between stakeholders? In this study, we provide a STakeholder-Objective Risk Model (STORM) for assessing the total aggregated risk. Our concept is a quantitative, probabilistic and modular framework for simulation-based risk estimation. It rests on the source-pathway-receptor concept, mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired by a German drinking water catchment. As one may expect, the results depend strongly on the chosen stakeholder objectives, but they are equally sensitive to different approaches for risk aggregation across different hazards, contaminant types, and over time.
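
    The mass-discharge-based aggregation over stochastically occurring spills can be sketched as a compound-Poisson Monte Carlo: each hazard generates a Poisson number of spills per year, each spill contributes an uncertain mass discharge at the well, and the risk is the probability that the aggregate exceeds a limit. Every rate, discharge, and threshold below is invented for illustration and is not taken from the STORM test case:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000   # Monte Carlo realizations of one year

# Three hypothetical hazards: annual spill rates and per-spill contaminant
# mass discharges reaching the well (kg/yr); every number here is invented.
rates = [0.1, 0.3, 0.05]
median_discharge = [2.0, 0.5, 5.0]
sigma = 0.8   # lognormal spread of the per-spill discharge

total = np.zeros(n)
for lam, med in zip(rates, median_discharge):
    k = rng.poisson(lam, n)                 # number of spills per realization
    for i in range(k.max()):                # add the i-th spill where it occurred
        draw = rng.lognormal(np.log(med), sigma, n)
        total += np.where(k > i, draw, 0.0)

# Convert aggregate discharge to a concentration proxy and a risk estimate.
pumped_volume = 500.0                       # assumed pumped volume (10^3 m3/yr)
p_exceed = (total / pumped_volume > 0.004).mean()
```

    Because all hazards are summed into one mass discharge before thresholding, this aggregation naturally answers question (1) above; swapping the threshold or the exceedance metric is where the stakeholder-specific risk metrics enter.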

  6. [Determination of prognostic value of the OESIL risk score at 6 months in a Colombian cohort with syncope evaluated in the emergency department; first Latin American experience].

    PubMed

    Díaz-Tribaldos, Diana Carolina; Mora, Guillermo; Olaya, Alejandro; Marín, Jorge; Sierra Matamoros, Fabio

    2017-07-14

    To establish the prognostic value, with sensitivity, specificity, positive predictive value, and negative predictive value, of the OESIL syncope risk score for predicting severe outcomes (death, invasive interventions, and readmission) after 6 months of observation in adults who consulted the emergency department due to syncope. Observational, prospective, and multicentre study with enrolment of subjects older than 18 years who consulted in the emergency department due to syncope. A record was made of the demographic and clinical information of all patients. The OESIL risk score was calculated, and severe patient outcomes were followed up over a 6-month period using telephone contact. A total of 161 patients met the inclusion criteria and were followed up for 6 months. A score of 2 or higher on the risk score, classified as high risk, was present in 72% of the patients. The characteristics of the risk score for predicting the combined outcome of mortality, invasive interventions, and readmission at a score of 2 or higher were 75.7, 30.5, 43.1, and 64.4% for sensitivity, specificity, positive predictive value, and negative predictive value, respectively. A score of 2 or higher on the OESIL risk score applied in a Colombian population was of limited use for predicting the studied severe outcomes, and was unable to discriminate between patients who would and would not benefit from early admission and further clinical study. Copyright © 2017 Instituto Nacional de Cardiología Ignacio Chávez. Publicado por Masson Doyma México S.A. All rights reserved.
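
    The four reported percentages are mutually consistent with a single 2x2 table, which makes the computation easy to reproduce. The counts below are reconstructed from the published percentages (161 patients, 72% scored high risk) and should be read as illustrative rather than as the study's own table:

```python
# 2x2 table reconstructed from the reported percentages (OESIL >= 2 vs the
# combined outcome in 161 patients); counts are illustrative, not the study table.
tp, fn = 50, 16   # outcome occurred: scored high risk / scored low risk
fp, tn = 66, 29   # no outcome:       scored high risk / scored low risk

sensitivity = tp / (tp + fn)   # ~0.757
specificity = tn / (tn + fp)   # ~0.305
ppv = tp / (tp + fp)           # ~0.431
npv = tn / (tn + fn)           # ~0.644
```

    The very low specificity is the crux of the abstract's conclusion: flagging 72% of all patients as high risk leaves little room to rule anyone out.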

  7. [Quantitative evaluation of health risk associated with occupational inhalation exposure to vinyl chloride at production plants in Poland].

    PubMed

    Szymczak, W

    1997-01-01

    Vinyl chloride is classified by the IARC in Group 1 (human carcinogens). In Poland, occupational exposure to vinyl chloride is found among workers employed in many branches of industry, among others the synthesis and polymerization of vinyl chloride as well as the plastics, footwear, rubber, pharmaceutical and metallurgical industries. Observed concentrations range from the non-determinable level to 90 mg/m3, against a MAC value of 5 mg/m3. Neoplasm of the liver is the major carcinogenic effect of vinyl chloride; hence, the health assessment focused on this critical risk. Four different linear dose-response models, developed by several authors and based on results of different epidemiological studies, were used to characterise the extent of cancer risk as a function of vinyl chloride concentration. The estimated risk related to a forty-year employment under exposure equal to the MAC value (5 mg/m3) fell within the range of 2.9 x 10(-4) to 2.6 x 10(-3). As the figures depict, it did not exceed the acceptable level (10(-3)).
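
    The arithmetic behind such linear dose-response estimates is simple: a unit-risk slope times the concentration, scaled from occupational hours to a continuous lifetime. The sketch below uses an invented placeholder slope, not one of the four published models, purely to show the mechanics:

```python
# Linear no-threshold sketch: excess lifetime risk = unit risk x concentration,
# scaled from occupational hours to continuous lifetime exposure. The unit-risk
# slope is an invented placeholder, not one of the four published models.
unit_risk = 1e-4   # excess risk per mg/m3 of continuous lifetime exposure (assumed)
mac = 5.0          # MAC value for vinyl chloride, mg/m3

# 40 working years, 240 days/year, 8 h/day vs a 70-year continuous lifetime.
occupational_fraction = (40 * 240 * 8) / (70 * 365 * 24)
risk = unit_risk * mac * occupational_fraction
```

    With these assumed inputs the estimate falls in the 10(-5) to 10(-4) band; the spread of published slopes is what produces the order-of-magnitude range reported above.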

  8. Predictive value of late decelerations for fetal acidemia in unselective low-risk pregnancies.

    PubMed

    Sameshima, Hiroshi; Ikenoue, Tsuyomu

    2005-01-01

    We evaluated the clinical significance of late decelerations (LD) on intrapartum fetal heart rate (FHR) monitoring for detecting low pH (< 7.1) in low-risk pregnancies. We selected two secondary and two tertiary-level institutions where 10,030 women delivered. Among them, 5522 were low-risk pregnancies. The last 2 hours of FHR patterns before delivery were interpreted according to the guidelines of the National Institute of Child Health and Human Development. The correlation between the incidence of LD (occasional, < 50%; recurrent, ≥ 50%) and the severity of LD (reduced baseline FHR accelerations and variability) on one hand, and low pH (< 7.1) on the other, was evaluated. Statistical analyses included a contingency table with chi-square and the Fisher test, and one-way analysis of variance with the Bonferroni/Dunn test. In the 5522 low-risk pregnancies, 301 showed occasional LD and 99 showed recurrent LD. Blood gases and pH values deteriorated as the incidence of LD increased and as baseline accelerations or variability decreased. Positive predictive value for low pH (< 7.1) rose steeply, from 0% with no decelerations to 1% with occasional LD and > 50% with recurrent LD accompanied by absent baseline FHR accelerations and reduced variability. In low-risk pregnancies, information on LD combined with acceleration and baseline variability enables us to predict the potential incidence of fetal acidemia.

  9. Risk evaluation of uranium mining: A geochemical inverse modelling approach

    NASA Astrophysics Data System (ADS)

    Rillard, J.; Zuddas, P.; Scislewski, A.

    2011-12-01

    It is well known that uranium extraction operations can increase risks linked to radiation exposure. The toxicity of uranium and associated heavy metals is the main environmental concern regarding exploitation and processing of U-ore. In areas where U mining is planned, a careful assessment of toxic and radioactive element concentrations is recommended before the start of mining activities. A background evaluation of harmful elements is important in order to prevent and/or quantify future water contamination resulting from possible migration of toxic metals coming from ore and waste water interaction. Controlled leaching experiments were carried out to investigate processes of ore and waste (leached ore) degradation, using samples from the uranium exploitation site located in Caetité-Bahia, Brazil. In experiments in which the reaction of waste with water was tested, we found that the water had low pH and high levels of sulphates and aluminium. On the other hand, in experiments in which ore was tested, the water had a chemical composition comparable to natural water found in the region of Caetité. On the basis of our experiments, we suggest that waste resulting from sulphuric acid treatment can induce acidification and salinization of surface and ground water. For this reason, proper storage of waste is imperative. As a tool to evaluate the risks, a geochemical inverse modelling approach was developed to estimate the water-mineral interaction involving the presence of toxic elements. We used a method described earlier by Scislewski and Zuddas (2010; Geochim. Cosmochim. Acta 74, 6996-7007) in which the reactive surface area of mineral dissolution can be estimated. We found that the reactive surface area of rock parent minerals is not constant over time but varies by several orders of magnitude within only two months of interaction. We propose that parent mineral heterogeneity and, particularly, neogenic phase formation may explain the observed variation of the

  10. Overcoming Learning Aversion in Evaluating and Managing Uncertain Risks.

    PubMed

    Cox, Louis Anthony Tony

    2015-10-01

    Decision biases can distort cost-benefit evaluations of uncertain risks, leading to risk management policy decisions with predictably high retrospective regret. We argue that well-documented decision biases encourage learning aversion, or predictably suboptimal learning and premature decision making in the face of high uncertainty about the costs, risks, and benefits of proposed changes. Biases such as narrow framing, overconfidence, confirmation bias, optimism bias, ambiguity aversion, and hyperbolic discounting of the immediate costs and delayed benefits of learning, contribute to deficient individual and group learning, avoidance of information seeking, underestimation of the value of further information, and hence needlessly inaccurate risk-cost-benefit estimates and suboptimal risk management decisions. In practice, such biases can create predictable regret in selection of potential risk-reducing regulations. Low-regret learning strategies based on computational reinforcement learning models can potentially overcome some of these suboptimal decision processes by replacing aversion to uncertain probabilities with actions calculated to balance exploration (deliberate experimentation and uncertainty reduction) and exploitation (taking actions to maximize the sum of expected immediate reward, expected discounted future reward, and value of information). We discuss the proposed framework for understanding and overcoming learning aversion and for implementing low-regret learning strategies using regulation of air pollutants with uncertain health effects as an example. © 2015 Society for Risk Analysis.
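
    The exploration/exploitation balance invoked above is the core mechanism of reinforcement learning. A toy epsilon-greedy learner (a generic sketch, not the authors' framework) shows how even minimal exploration avoids locking in an inferior option, the regulatory analogue of premature, learning-averse decision making:

```python
import numpy as np

rng = np.random.default_rng(3)
true_means = [0.4, 0.6]   # hidden payoffs of two candidate policies

def run(epsilon, steps=5000):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit."""
    counts = np.zeros(2)
    means = np.zeros(2)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            a = int(rng.integers(2))            # explore: try a random policy
        else:
            a = int(means.argmax())             # exploit: use the best estimate
        r = rng.normal(true_means[a], 0.1)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental mean update
        total += r
    return total / steps

never_learns = run(epsilon=0.0)   # commits at once; can lock in the worse policy
learner = run(epsilon=0.1)        # keeps exploring; converges near the better one
```

    The gap between the two average payoffs is the "regret" of refusing to learn; low-regret strategies such as this one formalize the value of deliberate experimentation that the abstract argues decision biases suppress.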

  11. Serving the Needs of At-Risk Refugee Youth: A Program Evaluation

    ERIC Educational Resources Information Center

    McBrien, J. Lynn

    2006-01-01

    Refugee students, although frequently subsumed under the "immigrant" heading, often suffer from effects of significant trauma that can make them more vulnerable than children of voluntary immigrant families. This study evaluated a program created specifically for refugee youth at-risk for academic failure and "social death." The program goals…

  12. Neural correlates of value, risk, and risk aversion contributing to decision making under risk.

    PubMed

    Christopoulos, George I; Tobler, Philippe N; Bossaerts, Peter; Dolan, Raymond J; Schultz, Wolfram

    2009-10-07

    Decision making under risk is central to human behavior. Economic decision theory suggests that value, risk, and risk aversion influence choice behavior. Although previous studies identified neural correlates of decision parameters, the contribution of these correlates to actual choices is unknown. In two different experiments, participants chose between risky and safe options. We identified discrete blood oxygen level-dependent (BOLD) correlates of value and risk in the ventral striatum and anterior cingulate, respectively. Notably, increasing inferior frontal gyrus activity to low risk and safe options correlated with higher risk aversion. Importantly, the combination of these BOLD responses effectively decoded the behavioral choice. Striatal value and cingulate risk responses increased the probability of a risky choice, whereas inferior frontal gyrus responses showed the inverse relationship. These findings suggest that the BOLD correlates of decision factors are appropriate for an ideal observer to detect behavioral choices. More generally, these biological data contribute to the validity of the theoretical decision parameters for actual decisions under risk.

  13. The European Thoracic Surgery Database project: modelling the risk of in-hospital death following lung resection.

    PubMed

    Berrisford, Richard; Brunelli, Alessandro; Rocco, Gaetano; Treasure, Tom; Utley, Martin

    2005-08-01

    To identify pre-operative factors associated with in-hospital mortality following lung resection and to construct a risk model that could be used prospectively to inform decisions and retrospectively to enable fair comparisons of outcomes. Data were submitted to the European Thoracic Surgery Database from 27 units in 14 countries. We analysed data concerning all patients that had a lung resection. Logistic regression was used with a random sample of 60% of cases to identify pre-operative factors associated with in-hospital mortality and to build a model of risk. The resulting model was tested on the remaining 40% of patients. A second model based on age and ppoFEV1% was developed for risk of in-hospital death amongst tumour resection patients. Of the 3426 adult patients that had a first lung resection for whom mortality data were available, 66 died within the same hospital admission. Within the data used for model development, dyspnoea (according to the Medical Research Council classification), ASA (American Society of Anaesthesiologists) score, class of procedure and age were found to be significantly associated with in-hospital death in a multivariate analysis. The logistic model developed on these data displayed predictive value when tested on the remaining data. Two models of the risk of in-hospital death amongst adult patients undergoing lung resection have been developed. The models show predictive value and can be used to discern between high-risk and low-risk patients. Amongst the test data, the model developed for all diagnoses performed well at low risk, underestimated mortality at medium risk and overestimated mortality at high risk. The second model for resection of lung neoplasms was developed after establishing the performance of the first model and so could not be tested robustly. That said, we were encouraged by its performance over the entire range of estimated risk. The first of these two models could be regarded as an evaluation based on
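
    At prediction time, a logistic risk model of this kind reduces to a weighted sum of pre-operative factors passed through the logistic function. The coefficients below are invented for illustration, in the spirit of the factors the abstract names (dyspnoea grade, ASA score, procedure class, age); they are not the published model:

```python
import math

# Hypothetical logistic model using the abstract's factor types (age, ASA
# score, dyspnoea grade, procedure class); coefficients are invented for
# illustration and are NOT the published European Thoracic Surgery model.
def inhospital_death_risk(age, asa, dyspnoea_grade, pneumonectomy):
    logit = (-9.5
             + 0.05 * age
             + 0.8 * (asa >= 3)
             + 0.6 * (dyspnoea_grade >= 2)
             + 1.0 * pneumonectomy)
    return 1.0 / (1.0 + math.exp(-logit))

low = inhospital_death_risk(age=55, asa=2, dyspnoea_grade=0, pneumonectomy=False)
high = inhospital_death_risk(age=78, asa=3, dyspnoea_grade=3, pneumonectomy=True)
```

    Comparing predicted probabilities against observed mortality within risk bands is exactly the calibration check the abstract describes (accurate at low risk, under- then over-estimating at medium and high risk).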

  14. Identifying at-risk employees: A behavioral model for predicting potential insider threats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.; Kangas, Lars J.; Noonan, Christine F.

    A psychosocial model was developed to assess an employee’s behavior associated with an increased risk of insider abuse. The model is based on case studies and research literature on factors/correlates associated with precursor behavioral manifestations of individuals committing insider crimes. In many of these crimes, managers and other coworkers observed that the offenders had exhibited signs of stress, disgruntlement, or other issues, but no alarms were raised. Barriers to using such psychosocial indicators include the inability to recognize the signs and the failure to record the behaviors so that they could be assessed by a person experienced in psychosocial evaluations. We have developed a model using a Bayesian belief network with the help of human resources staff experienced in evaluating behaviors in staff. We conducted an experiment to assess its agreement with human resources and management professionals, with positive results. If implemented in an operational setting, the model would be part of a set of management tools for employee assessment that can raise an alarm about employees who pose higher insider threat risks. In separate work, we combine this psychosocial model’s assessment with computer workstation behavior to raise the efficacy of recognizing an insider crime in the making.
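
    A Bayesian belief network over behavioral indicators updates a prior rate of concern by the likelihood ratio of each observed indicator. The two-indicator naive-Bayes sketch below uses invented prior and conditional probabilities, not the paper's HR-elicited values, to show the update mechanics:

```python
# Naive-Bayes sketch of a behavioral-indicator belief network; the prior and
# conditional probabilities are invented, not the paper's elicited values.
prior = 0.01   # assumed baseline rate of elevated insider risk

# (P(indicator | elevated risk), P(indicator | baseline)) -- both assumptions.
likelihoods = {
    "disgruntlement":    (0.60, 0.05),
    "policy_violations": (0.40, 0.02),
}

def posterior(observed):
    """Update prior odds by the likelihood ratio of each observed indicator."""
    odds = prior / (1 - prior)
    for name in observed:
        p_risk, p_base = likelihoods[name]
        odds *= p_risk / p_base   # assumes indicators independent given status
    return odds / (1 + odds)

baseline = posterior([])
flagged = posterior(["disgruntlement", "policy_violations"])
```

    A full belief network relaxes the independence assumption made here by encoding dependencies among indicators, which is what the elicitation from human resources experts supplies.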

  15. A behavioural and neural evaluation of prospective decision-making under risk

    PubMed Central

    Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J.

    2010-01-01

    Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single choice contexts, there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal pre-determined strategy, irrespective of the particular order in which options are presented. An alternative model involves continuously re-evaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and used these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of re-evaluating decision utilities, where available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously-acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of the anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes. PMID:20980595

  16. Sensitivity and bias in decision-making under risk: evaluating the perception of reward, its probability and value.

    PubMed

    Sharp, Madeleine E; Viswanathan, Jayalakshmi; Lanyon, Linda J; Barton, Jason J S

    2012-01-01

    There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a 'risk premium' of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia.
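The prospect-theory analysis invoked above attributes the risk premium to two non-linearities: a concave subjective value function and an inverse-S probability weighting function. The sketch below uses the standard Tversky-Kahneman functional forms with textbook parameter estimates (alpha ≈ 0.88, gamma ≈ 0.61); these are illustrative assumptions, not the values fitted in this study.

```python
def value(x: float, alpha: float = 0.88) -> float:
    """Concave subjective value of a gain x (diminishing sensitivity)."""
    return x ** alpha

def weight(p: float, gamma: float = 0.61) -> float:
    """Inverse-S probability weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(p: float, x: float) -> float:
    """Subjective worth of a single-gain prospect: x with probability p."""
    return weight(p) * value(x)

# The two non-linearities the abstract relies on:
print(weight(0.1))                  # > 0.1: small probabilities overweighted
print(weight(0.9))                  # < 0.9: large probabilities underweighted
print(value(100) < 2 * value(50))   # True: doubling reward < doubles value
```

With these distortions, two prospects of equal expected value need not have equal subjective worth, which is how the framework can predict a systematic bias between matched higher-probability and higher-magnitude options.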

  17. A Single Conjunction Risk Assessment Metric: the F-Value

    NASA Technical Reports Server (NTRS)

    Frigm, Ryan Clayton; Newman, Lauri K.

    2009-01-01

    The Conjunction Assessment Team at NASA Goddard Space Flight Center provides conjunction risk assessment for many NASA robotic missions. These risk assessments are based on several figures of merit, such as miss distance, probability of collision, and orbit determination solution quality. However, these individual metrics do not singly capture the overall risk associated with a conjunction, making it difficult for someone without this complete understanding to take action, such as an avoidance maneuver. The goal of this analysis is to introduce a single risk index metric that can easily convey the level of risk without all of the technical details. The proposed index is called the conjunction "F-value." This paper presents the concept of the F-value and the tuning of the metric for use in routine Conjunction Assessment operations.

  18. The value of 24-hour video-EEG in evaluating recurrence risk following a first unprovoked seizure: A prospective study.

    PubMed

    Chen, Tao; Si, Yang; Chen, Deng; Zhu, Lina; Xu, Da; Chen, Sihan; Zhou, Dong; Liu, Ling

    2016-08-01

    To evaluate the value of 24-hour video-EEG (VEEG) in assessing recurrence risk after a first unprovoked seizure. A total of 134 consecutive patients with a first unprovoked epileptic seizure were recruited from West China Hospital, Sichuan University, between January 2010 and January 2013. All patients underwent VEEG and magnetic resonance imaging (MRI) of the brain, and each patient had at least 24 months of follow-up. Seventy-six (56.7%) patients had abnormal VEEG, and VEEG abnormalities were associated with an increased risk of seizure recurrence (RR 2.84, 95% CI 1.67-4.82, p<0.001). The overall cumulative seizure recurrence risk was 51.5% in all patients and 45.6% in the generalized seizures subgroup, with no significant difference. The subgroup of VEEG with epileptiform discharges had an increased seizure recurrence risk compared with the normal VEEG group (RR 2.76, 95% CI 1.83-5.34, P<0.001) and the nonsignificant-abnormality VEEG group (RR 2.05, 95% CI 1.14-3.82, P<0.001). Within the group who showed epileptiform discharges, the recurrence rates of those with generalized epileptiform discharge abnormality and focal epileptiform discharge abnormality were not significantly different (RR 1.09, 95% CI 0.44-2.69, P=0.85). An abnormal VEEG is a risk factor for seizure recurrence in patients with a first unprovoked seizure, especially if epileptiform discharges are present. The recurrence risk was 73.2% in the group with epileptiform discharge abnormalities on VEEG, which may help the diagnosis of epilepsy according to the practical clinical definition of epilepsy. Copyright © 2016 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  19. Early identification of patients at risk of acute lung injury: evaluation of lung injury prediction score in a multicenter cohort study.

    PubMed

    Gajic, Ognjen; Dabbagh, Ousama; Park, Pauline K; Adesanya, Adebola; Chang, Steven Y; Hou, Peter; Anderson, Harry; Hoth, J Jason; Mikkelsen, Mark E; Gentile, Nina T; Gong, Michelle N; Talmor, Daniel; Bajwa, Ednan; Watkins, Timothy R; Festic, Emir; Yilmaz, Murat; Iscimen, Remzi; Kaufman, David A; Esper, Annette M; Sadikot, Ruxana; Douglas, Ivor; Sevransky, Jonathan; Malinchoc, Michael

    2011-02-15

    Accurate, early identification of patients at risk for developing acute lung injury (ALI) provides the opportunity to test and implement secondary prevention strategies. To determine the frequency and outcome of ALI development in patients at risk and validate a lung injury prediction score (LIPS). In this prospective multicenter observational cohort study, predisposing conditions and risk modifiers predictive of ALI development were identified from routine clinical data available during initial evaluation. The discrimination of the model was assessed with the area under the receiver operating characteristic curve (AUC). The risk of death from ALI was determined after adjustment for severity of illness and predisposing conditions. Twenty-two hospitals enrolled 5,584 patients at risk. ALI developed a median of 2 (interquartile range 1-4) days after initial evaluation in 377 (6.8%; 148 ALI-only, 229 adult respiratory distress syndrome) patients. The frequency of ALI varied according to predisposing conditions (from 3% in pancreatitis to 26% after smoke inhalation). LIPS discriminated patients who developed ALI from those who did not with an AUC of 0.80 (95% confidence interval, 0.78-0.82). When adjusted for severity of illness and predisposing conditions, development of ALI increased the risk of in-hospital death (odds ratio, 4.1; 95% confidence interval, 2.9-5.7). ALI occurrence varies according to predisposing conditions and carries an independently poor prognosis. Using routinely available clinical data, LIPS identifies patients at high risk for ALI early in the course of their illness. This model will alert clinicians about the risk of ALI and facilitate testing and implementation of ALI prevention strategies. Clinical trial registered with www.clinicaltrials.gov (NCT00889772).

  20. [Family at-risk situation: model of care emphasizing health education].

    PubMed

    Costa, Maria Suêuda; Santos, Míria Conceiçõ Lavinas; Martinho, Neudson Johnson; Barroso, Maria Grasiela Teixeira; Vieira, Neiva Francenely Cunha

    2007-03-01

    This case study aimed to identify family dynamics in the face of a risk situation and to propose care strategies for health education based on King's model. The case was a family considered to be at risk in the periphery of Fortaleza, Ceará, Brazil. Data were collected by domiciliary visits, participant observation, and interviews. The results showed that family care transcends the biomedical dimension, contemplates the family's perceptual field, and demands its participation in the elaboration of educational proposals aiming at the social construction of health under a participant and transforming perspective.

  1. The social value of mortality risk reduction: VSL versus the social welfare function approach.

    PubMed

    Adler, Matthew D; Hammitt, James K; Treich, Nicolas

    2014-05-01

    We examine how different welfarist frameworks evaluate the social value of mortality risk reduction. These frameworks include classical, distributively unweighted cost-benefit analysis (i.e., the "value per statistical life" (VSL) approach) and various social welfare functions (SWFs). The SWFs are either utilitarian or prioritarian, applied to policy choice under risk in either an "ex post" or "ex ante" manner. We examine the conditions on individual utility and on the SWF under which these frameworks display sensitivity to wealth and to baseline risk. Moreover, we discuss whether these frameworks satisfy related properties that have received some attention in the literature, namely equal value of risk reduction, preference for risk equity, and catastrophe aversion. We show that the particular manner in which VSL ranks risk-reduction measures is not necessarily shared by other welfarist frameworks. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. [Application of three risk assessment models in occupational health risk assessment of dimethylformamide].

    PubMed

    Wu, Z J; Xu, B; Jiang, H; Zheng, M; Zhang, M; Zhao, W J; Cheng, J

    2016-08-20

    Objective: To investigate the application of the United States Environmental Protection Agency (EPA) inhalation risk assessment model, the Singapore semi-quantitative risk assessment model, and the occupational hazards risk assessment index method to occupational health risk in enterprises using dimethylformamide (DMF) in a certain area of Jiangsu, China, and to put forward related risk control measures. Methods: The industries involving DMF exposure in Jiangsu Province were chosen as the evaluation objects in 2013 and three risk assessment models were used in the evaluation. EPA inhalation risk assessment model: HQ = EC/RfC; Singapore semi-quantitative risk assessment model: Risk = (HR × ER)^(1/2); occupational hazards risk assessment index = 2^(health effect level) × 2^(exposure ratio) × operation condition level. Results: The hazard quotient results (HQ>1) from the EPA inhalation risk assessment model suggested that all the workshops (dry method, wet method and printing) and work positions (pasting, burdening, unreeling, rolling, assisting) were high risk. The results of the Singapore semi-quantitative risk assessment model indicated that the workshop risk levels of dry method, wet method and printing were 3.5 (high), 3.5 (high) and 2.8 (general), and the position risk levels of pasting, burdening, unreeling, rolling and assisting were 4 (high), 4 (high), 2.8 (general), 2.8 (general) and 2.8 (general). The results of the occupational hazards risk assessment index method demonstrated that the position risk indices of pasting, burdening, unreeling, rolling and assisting were 42 (high), 33 (high), 23 (middle), 21 (middle) and 22 (middle). The results of the Singapore semi-quantitative risk assessment model and the occupational hazards risk assessment index method were similar, while the EPA inhalation risk assessment model indicated all the workshops and positions were high risk. Conclusion: The occupational hazards risk assessment index method fully considers health effects, exposure, and operating conditions
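The three risk formulas above are simple enough to compute directly. A minimal sketch follows; all input ratings are hypothetical, since the abstract does not publish the per-position inputs behind its reported scores.

```python
def epa_hazard_quotient(ec: float, rfc: float) -> float:
    """EPA inhalation model: HQ = EC / RfC; HQ > 1 is read as high risk."""
    return ec / rfc

def singapore_risk(hr: float, er: float) -> float:
    """Singapore semi-quantitative model: Risk = (HR x ER)^(1/2)."""
    return (hr * er) ** 0.5

def hazard_risk_index(health_effect: int, exposure_ratio: int,
                      operation_condition: int) -> int:
    """Index method: 2^(health effect level) x 2^(exposure ratio) x condition level."""
    return 2 ** health_effect * 2 ** exposure_ratio * operation_condition

# Hypothetical ratings for a single work position:
print(epa_hazard_quotient(0.9, 0.3))  # > 1, high risk under the EPA model
print(singapore_risk(4, 3))           # ~3.5, the "high" band reported above
print(hazard_risk_index(2, 2, 3))     # 48
```

The square root in the Singapore model explains the 3.5 and 2.8 bands in the abstract: sqrt(4 × 3) ≈ 3.5 and sqrt(4 × 2) ≈ 2.8.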

  3. Detection of high GS risk group prostate tumors by diffusion tensor imaging and logistic regression modelling.

    PubMed

    Ertas, Gokhan

    2018-07-01

    To assess the value of joint evaluation of diffusion tensor imaging (DTI) measures by using logistic regression modelling to detect high GS risk group prostate tumors. Fifty tumors imaged using DTI on a 3 T MRI device were analyzed. Regions of interest focusing on the center of tumor foci and noncancerous tissue on the maps of mean diffusivity (MD) and fractional anisotropy (FA) were used to extract the minimum, the maximum and the mean measures. The measure ratio was computed by dividing the tumor measure by the noncancerous tissue measure. Logistic regression models were fitted for all possible pair combinations of the measures using 5-fold cross validation. Systematic differences are present for all MD measures and also for all FA measures in distinguishing the high risk tumors [GS ≥ 7(4 + 3)] from the low risk tumors [GS ≤ 7(3 + 4)] (P < 0.05). Smaller values for MD measures and larger values for FA measures indicate high risk. The models enrolling the measures achieve good fits and good classification performances (adjusted R² = 0.55-0.60, AUC = 0.88-0.91); however, the models using the measure ratios perform better (adjusted R² = 0.59-0.75, AUC = 0.88-0.95). The model that employs the ratios of minimum MD and maximum FA accomplishes the highest sensitivity, specificity and accuracy (Se = 77.8%, Sp = 96.9% and Acc = 90.0%). Joint evaluation of MD and FA diffusion tensor imaging measures is valuable to detect high GS risk group peripheral zone prostate tumors. However, use of the ratios of the measures improves the accuracy of the detections substantially. Logistic regression modelling provides a favorable solution for the joint evaluations easily adoptable in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Evaluation and simplification of the occupational slip, trip and fall risk-assessment test

    PubMed Central

    NAKAMURA, Takehiro; OYAMA, Ichiro; FUJINO, Yoshihisa; KUBO, Tatsuhiko; KADOWAKI, Koji; KUNIMOTO, Masamizu; ODOI, Haruka; TABATA, Hidetoshi; MATSUDA, Shinya

    2016-01-01

    Objective: The purpose of this investigation is to evaluate the efficacy of the occupational slip, trip and fall (STF) risk assessment test developed by the Japan Industrial Safety and Health Association (JISHA). We further intended to simplify the test to improve efficiency. Methods: A previous cohort study was performed using 540 employees aged ≥50 years who took the JISHA’s STF risk assessment test. We conducted multivariate analysis using these previous results as baseline values and answers to questionnaire items or score on physical fitness tests as variables. The screening efficiency of each model was evaluated based on the obtained receiver operating characteristic (ROC) curve. Results: The area under the ROC obtained in multivariate analysis was 0.79 when using all items. Six of the 25 questionnaire items were selected for stepwise analysis, giving an area under the ROC curve of 0.77. Conclusion: Based on the results of follow-up performed one year after the initial examination, we successfully determined the usefulness of the STF risk assessment test. Administering a questionnaire alone is sufficient for screening subjects at risk of STF during the subsequent one-year period. PMID:27021057

  5. Incidence of upper tract abnormalities in patients with neurovesical dysfunction secondary to multiple sclerosis: analysis of risk factors at initial urologic evaluation.

    PubMed

    Lemack, Gary E; Hawker, Kathleen; Frohman, Elliot

    2005-05-01

    To determine the incidence of upper tract abnormalities on renal ultrasonography in patients with multiple sclerosis (MS) referred for urologic evaluation, as well as to identify any risk factors present on the basis of the historical information and urodynamic findings. Data were derived from all patients with MS referred to the neurourology clinic during a 4-year period. The database was specifically queried for patients found to have upper tract abnormalities on screening renal ultrasonography. Demographic parameters, as well as laboratory values (creatinine) and urodynamic results, were evaluated for risk factors associated with abnormal upper tract findings. Of the 113 patients referred and evaluated, 66 completed both urodynamic testing and renal ultrasonography. Eleven (16.7%) had abnormal ultrasound findings, with focal caliectasis the most common finding. No demographic parameter (age, sex, time since MS diagnosis, MS pattern) was associated with a greater likelihood of abnormal renal ultrasonography on univariate analysis. Neither serum creatinine nor any urodynamic finding (including the presence of dyssynergia or the threshold and amplitude of detrusor overactivity) was associated with abnormal renal ultrasound findings. No patients in our series had any indication of obstructive uropathy more severe than mild hydronephrosis. Of the 16.7% of patients with any abnormal findings, most were noted to have minor caliectasis, likely to be of little clinical significance. Although no factors identifying patients at risk of renal abnormalities at presentation were found, ongoing evaluation of patients with baseline findings will serve to identify those at risk of progression.

  6. Evaluation of the ASCO Value Framework for Anticancer Drugs at an Academic Medical Center.

    PubMed

    Wilson, Leslie; Lin, Tracy; Wang, Ling; Patel, Tanuja; Tran, Denise; Kim, Sarah; Dacey, Katie; Yuen, Courtney; Kroon, Lisa; Brodowy, Bret; Rodondi, Kevin

    2017-02-01

    Anticancer drug prices have increased by an average of 12% each year from 1996 to 2014. A major concern is that the increasing cost and responsibility of evaluating treatment options are being shifted to patients. This research compared 2 value-based pricing models that were being considered for use at the University of California, San Francisco (UCSF) Medical Center to address the growing burden of high-cost cancer drugs while improving patient-centered care. The Medication Outcomes Center (MOC) in the Department of Clinical Pharmacy, University of California, San Francisco (UCSF), School of Pharmacy focuses on assessing the value of medication-related health care interventions and disseminating findings to the UCSF Medical Center. The High Cost Oncology Drug Initiative at the MOC aims to assess and adopt tools for the critical assessment and amelioration of high-cost cancer drugs. The American Society of Clinical Oncology (ASCO) Value Framework (2016 update) and a cost-effectiveness analysis (CEA) framework were identified as potential tools for adoption. To assess 1 prominent value framework, the study investigators (a) asked 8 clinicians to complete the ASCO Value Framework for 11 anticancer medications selected by the MOC; (b) reviewed CEAs assessing the drugs; (c) generated descriptive statistics; and (d) analyzed inter-rater reliability, convergence validity, and ranking consistency. On the scale of -20 to 180, the mean ASCO net health benefit (NHB) total score across 11 drugs ranged from 7.6 (SD = 7.8) to 53 (SD = 9.8). The Kappa coefficient (κ) for NHB scores across raters was 0.11, which is categorized as "slightly reliable." The combined κ score was 0.22, which is interpreted as low to fair inter-rater reliability. Convergent validity indicates that the correlation between NHB scores and CEA-based incremental cost-effectiveness ratios (ICERs) was low (-0.215). Ranking of ICERs, ASCO scores, and wholesale acquisition costs indicated different results
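The inter-rater agreement figures quoted above are kappa coefficients. As an illustration of the statistic itself, here is the two-rater Cohen form; the study pooled eight raters, and the abstract does not specify which multi-rater variant was used, so this is a hedged sketch of the underlying idea rather than the authors' exact computation.

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement: (observed - expected) / (1 - expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters labeled independently at their
    # own marginal frequencies:
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical high/low net-health-benefit calls on six drugs by two raters:
a = ["high", "high", "low", "low", "high", "low"]
b = ["high", "low", "low", "low", "high", "high"]
print(round(cohens_kappa(a, b), 2))  # 0.33: low-to-fair agreement
```

Values near 0 mean agreement is barely better than chance, which is why the 0.11-0.22 range reported above is read as only slight-to-fair reliability.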

  7. Population at risk: using areal interpolation and Twitter messages to create population models for burglaries and robberies

    PubMed Central

    2018-01-01

    Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located. This establishes different crime opportunities for different crimes. However, there are very few efforts of modeling structures that derive spatiotemporal population models to allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include: Census data as source data for the existing population, Twitter geo-located data, and locations of schools as ancillary data to redistribute the source data more accurately in the space, and finally gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data in smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
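The density-weighted areal interpolation step described above can be sketched in a few lines: each source zone's population is split across target cells in proportion to an ancillary surface, preserving the zone total. The cell weights below are hypothetical tweet counts, not the study's data.

```python
def disaggregate(source_pop: float, ancillary: list[float]) -> list[float]:
    """Split a source zone's population across target cells in proportion
    to an ancillary density surface, preserving the zone total."""
    total = sum(ancillary)
    if total == 0:  # no ancillary signal: fall back to a uniform split
        return [source_pop / len(ancillary)] * len(ancillary)
    return [source_pop * a / total for a in ancillary]

# One census tract of 1,200 residents split over four grid cells,
# weighted by hypothetical geo-located tweet counts per cell:
cells = disaggregate(1200, [30, 10, 0, 20])
print(cells)       # [600.0, 200.0, 0.0, 400.0]
print(sum(cells))  # 1200.0: the redistribution is mass-preserving
```

The mass-preservation property is what makes the derived surface comparable against the gridded validation data mentioned in the abstract.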

  8. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models: Stop Developing, Start Evaluating.

    PubMed

    van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik

    2017-04-01

    Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Four databases were searched for HEEs of PM-based strategies. Two reviewers independently selected eligible articles. A checklist was compiled to score items focusing on general characteristics of HEEs of PMs, model characteristics and quality of HEEs, evidence on PMs typically used in the HEEs, and the specific challenges in performing HEEs of PMs. After screening 791 abstracts, 171 full texts, and reference checking, 40 eligible HEEs evaluating 60 PMs were identified. In these HEEs, PM strategies were compared with current practice (n = 32; 80%), to other stratification methods for patient management (n = 19; 48%), to an extended PM (n = 9; 23%), or to alternative PMs (n = 5; 13%). The PMs guided decisions on treatment (n = 42; 70%), further testing (n = 18; 30%), or treatment prioritization (n = 4; 7%). For 36 (60%) PMs, only a single decision threshold was evaluated. Costs of risk prediction were ignored for 28 (46%) PMs. Uncertainty in outcomes was assessed using probabilistic sensitivity analyses in 22 (55%) HEEs. Despite the huge number of PMs in the medical literature, HEE of PMs remains rare. In addition, we observed great variety in their quality and methodology, which may complicate interpretation of HEE results and implementation of PMs in practice. Guidance on HEE of PMs could encourage and standardize their application and enhance methodological quality, thereby improving adequate use of PM strategies. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  9. Sensitivity and Bias in Decision-Making under Risk: Evaluating the Perception of Reward, Its Probability and Value

    PubMed Central

    Sharp, Madeleine E.; Viswanathan, Jayalakshmi; Lanyon, Linda J.; Barton, Jason J. S.

    2012-01-01

    Background There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. Objective We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. Design/Methods Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. Results Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a ‘risk premium’ of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. Conclusions This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia. PMID:22493669

  10. Changing health behaviors to improve health outcomes after angioplasty: a randomized trial of net present value versus future value risk communication.

    PubMed

    Charlson, M E; Peterson, J C; Boutin-Foster, C; Briggs, W M; Ogedegbe, G G; McCulloch, C E; Hollenberg, J; Wong, C; Allegrante, J P

    2008-10-01

    Patients who have undergone angioplasty experience difficulty modifying at-risk behaviors for subsequent cardiac events. The purpose of this study was to test whether an innovative approach to framing of risk, based on 'net present value' economic theory, would be more effective in behavioral intervention than the standard 'future value' approach in reducing cardiovascular morbidity and mortality following angioplasty. At baseline, all patients completed a health assessment, received an individualized risk profile and selected risk factors for modification. The intervention randomized patients into two varying methods for illustrating positive effects of behavior change. For the experimental group, each selected risk factor was assigned a numeric biologic age (the net present value) that approximated the relative potential to improve current health status and quality of life when modifying that risk factor. In the control group, risk reduction was framed as the value of preventing future health problems. Ninety-four percent of patients completed 2-year follow-up. There was no difference between the rates of death, stroke, myocardial infarction, Class II-IV angina or severe ischemia (on non-invasive testing) between the net present value group and the future value group. Our results show that a net present value risk communication intervention did not result in significant differences in health outcomes.

  11. A Model To Identify Individuals at High Risk for Esophageal Squamous Cell Carcinoma and Precancerous Lesions in Regions of High Prevalence in China.

    PubMed

    Liu, Mengfei; Liu, Zhen; Cai, Hong; Guo, Chuanhai; Li, Xiang; Zhang, Chaoting; Wang, Hui; Hang, Dong; Liu, Fangfang; Deng, Qiuju; Yang, Xin; Yuan, Wenqing; Pan, Yaqi; Li, Jingjing; Zhang, Chanyuan; Shen, Na; He, Zhonghu; Ke, Yang

    2017-10-01

    We aimed to develop a population-based model to identify individuals at high risk for esophageal squamous cell carcinoma (ESCC) in regions of China with a high prevalence of this cancer. We collected findings from 15,073 permanent residents (45-69 years old) of 334 randomly selected villages in Hua County, Henan Province, China who underwent endoscopic screening (with iodine staining) for ESCC from January 2012 through September 2015. The entire esophagus and stomach were examined; biopsies were collected from all focal lesions (or from standard sites in the esophagus if no abnormalities were found) and analyzed histologically. Squamous dysplasia, carcinoma in situ, and ESCC were independently confirmed by 2 pathologists. Before endoscopy, subjects completed a questionnaire on ESCC risk factors. Variables were evaluated with unconditional univariate logistic regression analysis; variables found to be significantly associated with ESCC were then analyzed by multivariate logistic regression modeling. We used the Akaike information criterion to develop our final model structure and the coding form of variables with multiple measures. We developed 2 groups of models, separately defining severe dysplasia and above (SDA) (lesions including severe dysplasia and higher-grade lesions) and moderate dysplasia and above (lesions including moderate dysplasia and higher-grade lesions) as outcome events. Age-stratified and whole-age models were developed; their discriminative ability in the full multivariate model and the simple age model was compared. We performed area under the receiver operating characteristic curve (AUC) and the DeLong test to evaluate model performance. Our age-stratified prediction models identified individuals 60 years of age or younger with SDA with an AUC value of 0.795 (95% confidence interval, 0.736-0.854) and individuals older than 60 years with SDA with an AUC value of 0.681 (95% confidence interval, 0.618-0.743). Factors associated with SDA in
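The abstract selects its final model structure with the Akaike information criterion: for a model with k fitted parameters and maximized log-likelihood ln L, AIC = 2k - 2 ln L, and the lower value is preferred. A sketch with made-up fit results:

```python
def aic(n_params: int, log_likelihood: float) -> float:
    """Akaike information criterion: AIC = 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Two hypothetical candidate risk models for the same data:
simple = aic(2, -210.0)  # 424.0
full = aic(6, -205.0)    # 422.0: preferred despite the extra parameters
print(min(simple, full))
```

The 2k term is the penalty that keeps a fuller model from winning on likelihood alone, which is how the criterion arbitrates between candidate variable sets and coding forms.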

  12. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    PubMed

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive value. To develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value <0.25 in the univariate analysis were further evaluated in multivariate models using backward elimination. Discrimination was assessed using the receiver operating characteristic curve. Calibration was evaluated using McFadden's R². The net reclassification index (NRI) associated with incorporation of laboratory results was calculated. Results were internally validated. A model similar to existing CRC prediction models which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R² of 0.21 with an NRI of 47.6%. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R² of 0.27 and an NRI of 60.7%. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
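The discrimination figures quoted above (AUC 0.58 to 0.80) can be computed from raw risk scores without plotting a curve, via the Mann-Whitney formulation: the AUC is the probability that a randomly chosen case outscores a randomly chosen control, with ties counted as half. A self-contained sketch on made-up scores:

```python
def auc(case_scores: list[float], control_scores: list[float]) -> float:
    """AUC as P(case score > control score), ties counted as 1/2."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical model scores for 3 CRC cases and 4 matched controls:
cases = [0.9, 0.7, 0.4]
controls = [0.5, 0.3, 0.2, 0.4]
print(auc(cases, controls))  # 0.875
```

An AUC of 0.5 corresponds to a model with no discrimination at all, which is why the demographic-only model's 0.58 above is read as weak and the combined model's 0.80 as good.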

  13. Enhancing the Value of Population-Based Risk Scores for Institutional-Level Use.

    PubMed

    Raza, Sajjad; Sabik, Joseph F; Rajeswaran, Jeevanantham; Idrees, Jay J; Trezzi, Matteo; Riaz, Haris; Javadikasgari, Hoda; Nowicki, Edward R; Svensson, Lars G; Blackstone, Eugene H

    2016-07-01

    We hypothesized that factors associated with an institution's residual risk unaccounted for by population-based models may be identifiable and used to enhance the value of population-based risk scores for quality improvement. From January 2000 to January 2010, 4,971 patients underwent aortic valve replacement (AVR), either isolated (n = 2,660) or with concomitant coronary artery bypass grafting (AVR+CABG; n = 2,311). Operative mortality and major morbidity and mortality predicted by The Society of Thoracic Surgeons (STS) risk models were compared with observed values. After adjusting for patients' STS score, additional and refined risk factors were sought to explain residual risk. Differences between STS model coefficients (risk-factor strength) and those specific to our institution were calculated. Observed operative mortality was less than predicted for AVR (1.6% [42 of 2,660] vs 2.8%, p < 0.0001) and AVR+CABG (2.6% [59 of 2,311] vs 4.9%, p < 0.0001). Observed major morbidity and mortality was also lower than predicted for isolated AVR (14.6% [389 of 2,660] vs 17.5%, p < 0.0001) and AVR+CABG (20.0% [462 of 2,311] vs 25.8%, p < 0.0001). Shorter height, higher bilirubin, and lower albumin were identified as additional institution-specific risk factors, and body surface area, creatinine, glomerular filtration rate, blood urea nitrogen, and heart failure across all levels of functional class were identified as refined risk-factor variables associated with residual risk. In many instances, risk-factor strength differed substantially from that of STS models. Scores derived from population-based models can be enhanced for institutional level use by adjusting for institution-specific additional and refined risk factors. Identifying these and measuring differences in institution-specific versus population-based risk-factor strength can identify areas to target for quality improvement initiatives. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier

  14. Risk Communication, Values Clarification, and Vaccination Decisions.

    PubMed

    Witteman, Holly O; Chipenda Dansokho, Selma; Exe, Nicole; Dupuis, Audrey; Provencher, Thierry; Zikmund-Fisher, Brian J

    2015-10-01

    Many health-related decisions require choosing between two options, each with risks and benefits. When presented with such tradeoffs, people often make choices that fail to align with scientific evidence or with their own values. This study tested whether risk communication and values clarification methods could help parents and guardians make evidence-based, values-congruent decisions about children's influenza vaccinations. In 2013-2014 we conducted an online 2×2 factorial experiment in which a diverse sample of U.S. parents and guardians (n = 407) were randomly assigned to view either standard information about influenza vaccines or risk communication using absolute and incremental risk formats. Participants were then either presented or not presented with an interactive values clarification interface with constrained sliders and dynamic visual feedback. Participants randomized to the risk communication condition combined with the values clarification interface were more likely to indicate intentions to vaccinate (β = 2.10, t(399) = 2.63, p < 0.01). The effect was particularly notable among participants who had previously demonstrated less interest in having their children vaccinated against influenza (β = -2.14, t(399) = -2.06, p < 0.05). When assessing vaccination status reported by participants who agreed to participate in a follow-up study six months later (n = 116), vaccination intentions significantly predicted vaccination status (OR = 1.66, 95%CI (1.13, 2.44), p < 0.05) and rates of informed choice (OR = 1.51, 95%CI (1.07, 2.13), p < 0.012), although there were no direct effects of experimental factors on vaccination rates. Qualitative analysis suggested that logistical barriers impeded immunization rates. Risk communication and values clarification methods may contribute to increased vaccination intentions, which may, in turn, predict vaccination status if logistical barriers are also addressed. © 2015 Society for Risk Analysis.

  15. Development of a predictive model to identify inpatients at risk of re-admission within 30 days of discharge (PARR-30)

    PubMed Central

    Billings, John; Blunt, Ian; Steventon, Adam; Georghiou, Theo; Lewis, Geraint; Bardsley, Martin

    2012-01-01

Objectives To develop an algorithm for identifying inpatients at high risk of re-admission to a National Health Service (NHS) hospital in England within 30 days of discharge using information that can either be obtained from hospital information systems or from the patient and their notes. Design Multivariate statistical analysis of routinely collected hospital episode statistics (HES) data using logistic regression to build the predictive model. The model's performance was calculated using bootstrapping. Setting HES data covering all NHS hospital admissions in England. Participants NHS patients admitted to hospital between April 2008 and March 2009 (10% sample of all admissions, n=576 868). Main outcome measures Area under the receiver operating characteristic curve for the algorithm, together with its positive predictive value and sensitivity for a range of risk score thresholds. Results The algorithm produces a ‘risk score’ ranging from 0 to 1 for each admitted patient, and the percentage of patients with a re-admission within 30 days and the mean re-admission costs of all patients are provided for 20 risk bands. At a risk score threshold of 0.5, the positive predictive value (ie, percentage of inpatients identified as high risk who were subsequently re-admitted within 30 days) was 59.2% (95% CI 58.0% to 60.5%); this represented 5.4% (95% CI 5.2% to 5.6%) of all inpatients who would be re-admitted within 30 days (sensitivity). The area under the receiver operating characteristic curve was 0.70 (95% CI 0.69 to 0.70). Conclusions We have developed a method of identifying inpatients at high risk of unplanned re-admission to NHS hospitals within 30 days of discharge. Though the model had low sensitivity, we show how to identify subgroups of patients that contain a high proportion of patients who will be re-admitted within 30 days. Additional work is necessary to validate the model in practice. PMID:22885591
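The threshold mechanics the abstract reports (positive predictive value and sensitivity at a 0.5 risk-score cut-off) can be sketched in a few lines. This is an illustrative toy, not the PARR-30 algorithm; the four patients and their scores are invented:

```python
# Sketch of reading PPV and sensitivity off a risk score at a chosen
# threshold, as in the paper's 0.5 cut-off (toy data, not PARR-30).

def ppv_and_sensitivity(scores, readmitted, threshold):
    """scores: predicted re-admission probabilities in [0, 1];
    readmitted: 1 if the patient was re-admitted within 30 days."""
    flagged = [r for s, r in zip(scores, readmitted) if s >= threshold]
    tp = sum(flagged)                      # flagged and re-admitted
    fn = sum(readmitted) - tp              # re-admitted but not flagged
    ppv = tp / len(flagged) if flagged else float("nan")
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    return ppv, sensitivity

# Four hypothetical patients.
scores = [0.72, 0.55, 0.30, 0.10]
readmitted = [1, 0, 1, 0]
ppv, sens = ppv_and_sensitivity(scores, readmitted, 0.5)
# Two patients flagged, one of them re-admitted; one of the two actual
# re-admissions is caught by the threshold.
```

At the paper's scale, the same computation is simply run over the 10% HES sample for each candidate threshold to build the reported trade-off curve.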

  16. The Role of Values and Evaluation in Thinking

    ERIC Educational Resources Information Center

    House, Ernest R.

    2016-01-01

    The concept of values is the central concept in evaluation. There are several ways of looking at values, including from the perspectives of philosophy, psychology, sociology, biology, and biography. In this article Ernest House discusses how values are conceived in cognitive psychology and what that means for evaluation. Further, he discusses the…

  17. A counterfactual p-value approach for benefit-risk assessment in clinical trials.

    PubMed

    Zeng, Donglin; Chen, Ming-Hui; Ibrahim, Joseph G; Wei, Rachel; Ding, Beiying; Ke, Chunlei; Jiang, Qi

    2015-01-01

Clinical trials generally allow various efficacy and safety outcomes to be collected for health interventions. Benefit-risk assessment is an important issue when evaluating a new drug. Currently, there is a lack of standardized and validated benefit-risk assessment approaches in drug development due to various challenges. To quantify benefits and risks, we propose a counterfactual p-value (CP) approach. Our approach considers a spectrum of weights for weighting benefit-risk values and computes the extreme probabilities of observing the weighted benefit-risk value in one treatment group as if patients were treated in the other treatment group. The proposed approach is applicable to a single benefit and a single risk outcome as well as to the assessment of multiple benefit and risk outcomes. In addition, prior information on the relative importance of outcomes can be incorporated into the weight schemes. The proposed CP plot is intuitive, with a visualized weight pattern. The average area under the CP and the preferred probability over time are used for overall treatment comparison, and a bootstrap approach is applied for statistical inference. We assess the proposed approach using simulated data with multiple efficacy and safety endpoints and compare its performance with a stochastic multi-criteria acceptability analysis approach.

  18. 46 CFR 309.203 - Value at time of loss.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

Title 46 (Shipping), MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION, EMERGENCY OPERATIONS, VALUES FOR WAR RISK INSURANCE, Stores and Supplies, § 309.203 Value at time of loss. The value of unused stores and supplies on board a...

  19. Using Radiation Risk Models in Cancer Screening Simulations: Important Assumptions and Effects on Outcome Projections

    PubMed Central

    Lee, Janie M.; McMahon, Pamela M.; Lowry, Kathryn P.; Omer, Zehra B.; Eisenberg, Jonathan D.; Pandharipande, Pari V.; Gazelle, G. Scott

    2012-01-01

    Purpose: To evaluate the effect of incorporating radiation risk into microsimulation (first-order Monte Carlo) models for breast and lung cancer screening to illustrate effects of including radiation risk on patient outcome projections. Materials and Methods: All data used in this study were derived from publicly available or deidentified human subject data. Institutional review board approval was not required. The challenges of incorporating radiation risk into simulation models are illustrated with two cancer screening models (Breast Cancer Model and Lung Cancer Policy Model) adapted to include radiation exposure effects from mammography and chest computed tomography (CT), respectively. The primary outcome projected by the breast model was life expectancy (LE) for BRCA1 mutation carriers. Digital mammographic screening beginning at ages 25, 30, 35, and 40 years was evaluated in the context of screenings with false-positive results and radiation exposure effects. The primary outcome of the lung model was lung cancer–specific mortality reduction due to annual screening, comparing two diagnostic CT protocols for lung nodule evaluation. The Metropolis-Hastings algorithm was used to estimate the mean values of the results with 95% uncertainty intervals (UIs). Results: Without radiation exposure effects, the breast model indicated that annual digital mammography starting at age 25 years maximized LE (72.03 years; 95% UI: 72.01 years, 72.05 years) and had the highest number of screenings with false-positive results (2.0 per woman). When radiation effects were included, annual digital mammography beginning at age 30 years maximized LE (71.90 years; 95% UI: 71.87 years, 71.94 years) with a lower number of screenings with false-positive results (1.4 per woman). For annual chest CT screening of 50-year-old females with no follow-up for nodules smaller than 4 mm in diameter, the lung model predicted lung cancer–specific mortality reduction of 21.50% (95% UI: 20.90%, 22

  20. Psychosocial predictors of cannabis use in adolescents at risk.

    PubMed

    Hüsler, Gebhard; Plancherel, Bernard; Werlen, Egon

    2005-09-01

    This research has tested a social disintegration model in conjunction with risk and protection factors that have the power to differentiate relative, weighted interactions among variables in different socially disintegrated groups. The model was tested in a cross-sectional sample of 1082 at-risk youth in Switzerland. Structural equation analyses show significant differences between the social disintegration (low, moderate, high) groups and gender, indicating that the model works differently for groups and for gender. For the highly disintegrated adolescents results clearly show that the risk factors (negative mood, peer network, delinquency) are more important than the protective factors (family relations, secure sense of self). Family relations lose all protective value against negative peer influence, but personal variables, such as secure self, gain protective power.

  1. Risk management with substitution options: Valuing flexibility in small-scale energy systems

    NASA Astrophysics Data System (ADS)

    Knapp, Karl Eric

Several features of small-scale energy systems make them more easily adapted to a changing operating environment than large centralized designs. This flexibility is often manifested as the ability to substitute inputs. This research explores the value of this substitution flexibility and the marginal value of becoming a "little more flexible" in the context of real project investment in developing countries. The elasticity of substitution is proposed as a stylized measure of flexibility and a choice variable. A flexible alternative (elasticity > 0) can be thought of as holding a fixed-proportions "inflexible" asset plus a sequence of exchange options---the option to move to another feasible "recipe" each period. Substitutability derives value from following a contour of anticipated variations and from responding to new information. Substitutability value, a "cost savings option", increases with elasticity and price risk. However, the required premium to incrementally increase flexibility can in some cases decrease with an increase in risk. Variance is not always a measure of risk. Tools from stochastic dominance are newly applied to real options with convex payoffs to correct some misperceptions and clarify many common modeling situations that meet the criteria for increased variance to imply increased risk. The behavior of the cost savings option is explored subject to a stochastic input price process. At the point where costs are identical for all alternatives, the stochastic process for cost savings becomes deterministic, with savings directly proportional to elasticity of substitution and price variance. The option is also formulated as a derivative security via dynamic programming. The partial differential equation is solved for the special case of Cobb-Douglas (elasticity = 1) (also shown are linear (infinite elasticity), Leontief (elasticity = 0)). Risk aversion is insufficient to prefer a more flexible alternative with the same expected value.
Intertemporal

  2. A statistical approach to evaluate flood risk at the regional level: an application to Italy

    NASA Astrophysics Data System (ADS)

    Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea

    2016-04-01

Floods are frequent and widespread in Italy, causing every year multiple fatalities and extensive damage to public and private structures. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from the estimation of the underlying flood hazard. However, comprehensive and coherent information on flood prone areas, and estimates of the frequency and intensity of flood events, are often not available at scales appropriate for risk pooling and diversification. In Italy, River Basins Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the related impact. Model performances are evaluated by comparing the predicted flood prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level, which can in turn be more readily applied to assess the economic impacts of floods. 
Furthermore, under the assumption of an appropriate

  3. Worldwide Regulations of Standard Values of Pesticides for Human Health Risk Control: A Review.

    PubMed

    Li, Zijian; Jennings, Aaron

    2017-07-22

The impact of pesticide residues on human health is a worldwide problem, as human exposure to pesticides can occur through ingestion, inhalation, and dermal contact. Regulatory jurisdictions have promulgated the standard values for pesticides in residential soil, air, drinking water, and agricultural commodity for years. Until now, more than 19,400 pesticide soil regulatory guidance values (RGVs) and 5400 pesticide drinking water maximum concentration levels (MCLs) have been regulated by 54 and 102 nations, respectively. Over 90 nations have provided pesticide agricultural commodity maximum residue limits (MRLs) for at least one of the 12 most commonly consumed agricultural foods. A total of 22 pesticides have been regulated with more than 100 soil RGVs, and 25 pesticides have more than 100 drinking water MCLs. This research indicates that those RGVs and MCLs for an individual pesticide could vary over seven (DDT drinking water MCLs), eight (Lindane soil RGVs), or even nine (Dieldrin soil RGVs) orders of magnitude. Human health risk uncertainty bounds and the implied total exposure mass burden model were applied to analyze the most commonly regulated and used pesticides for human health risk control. For the top 27 commonly regulated pesticides in soil, there are at least 300 RGVs (8% of the total) that are above all of the computed upper bounds for human health risk uncertainty. For the top 29 most-commonly regulated pesticides in drinking water, at least 172 drinking water MCLs (5% of the total) exceed the computed upper bounds for human health risk uncertainty; while for the 14 most widely used pesticides, there are at least 310 computed implied dose limits (28.0% of the total) that are above the acceptable daily intake values. The results show that some worldwide standard values were not derived conservatively enough to avoid human health risk by the pesticides, and that some values were not computed comprehensively by considering all major human

  4. Worldwide Regulations of Standard Values of Pesticides for Human Health Risk Control: A Review

    PubMed Central

    Jennings, Aaron

    2017-01-01

    The impact of pesticide residues on human health is a worldwide problem, as human exposure to pesticides can occur through ingestion, inhalation, and dermal contact. Regulatory jurisdictions have promulgated the standard values for pesticides in residential soil, air, drinking water, and agricultural commodity for years. Until now, more than 19,400 pesticide soil regulatory guidance values (RGVs) and 5400 pesticide drinking water maximum concentration levels (MCLs) have been regulated by 54 and 102 nations, respectively. Over 90 nations have provided pesticide agricultural commodity maximum residue limits (MRLs) for at least one of the 12 most commonly consumed agricultural foods. A total of 22 pesticides have been regulated with more than 100 soil RGVs, and 25 pesticides have more than 100 drinking water MCLs. This research indicates that those RGVs and MCLs for an individual pesticide could vary over seven (DDT drinking water MCLs), eight (Lindane soil RGVs), or even nine (Dieldrin soil RGVs) orders of magnitude. Human health risk uncertainty bounds and the implied total exposure mass burden model were applied to analyze the most commonly regulated and used pesticides for human health risk control. For the top 27 commonly regulated pesticides in soil, there are at least 300 RGVs (8% of the total) that are above all of the computed upper bounds for human health risk uncertainty. For the top 29 most-commonly regulated pesticides in drinking water, at least 172 drinking water MCLs (5% of the total) exceed the computed upper bounds for human health risk uncertainty; while for the 14 most widely used pesticides, there are at least 310 computed implied dose limits (28.0% of the total) that are above the acceptable daily intake values. The results show that some worldwide standard values were not derived conservatively enough to avoid human health risk by the pesticides, and that some values were not computed comprehensively by considering all major human exposure

  5. WRF-based fire risk modelling and evaluation for years 2010 and 2012 in Poland

    NASA Astrophysics Data System (ADS)

    Stec, Magdalena; Szymanowski, Mariusz; Kryza, Maciej

    2016-04-01

Wildfires are one of the main disturbances affecting forested, seminatural and agricultural areas. They generate significant economic loss, especially in forest management and agriculture. Forest fire risk modeling is therefore essential, e.g. for the forestry administration. In August 2015 a new method of forest fire risk forecasting entered into force in Poland. The method predicts a fire risk level on a 4-degree scale (0 - no risk, 3 - highest risk) and consists of a set of linearized regression equations. Meteorological variables are used as predictors in the regression equations: air temperature, relative humidity, average wind speed, cloudiness and rainfall. The equations also include pine litter humidity as a measure of potential fuel characteristics. All these parameters are measured routinely in Poland at 42 basic and 94 auxiliary sites. The fire risk level is estimated for the current day (based on morning measurements) or the next day (based on midday measurements). The entire country is divided into 42 prognostic zones, and the fire risk level for each zone is taken from the closest measuring site. The first goal of this work is to assess whether the measurements needed for fire risk forecasting may be replaced by data from a mesoscale meteorological model. Additionally, a meteorological model would make it possible to account for a much more realistic spatial differentiation of the weather elements determining the fire risk level, instead of relying on discrete point measurements. Meteorological data have been calculated using the Weather Research and Forecasting (WRF) model. For the purpose of this study the WRF model is run in reanalysis mode, allowing all required meteorological data to be estimated on a 5-kilometre grid. The only parameter that cannot be directly calculated using WRF is the litter humidity, which has been estimated using an empirical formula developed by Sakowska (2007). The experiments are carried out for two selected years: 2010 and 2012. The
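The shape of such a scheme, a linear score built from weather predictors and mapped onto the 0-3 risk scale, can be sketched as below. The coefficients and cut-points are invented for illustration and are not the operational Polish equations:

```python
# Hypothetical sketch of a linearized fire-risk equation: weather
# predictors combine into a score, which is binned into levels 0-3.
# All coefficients and thresholds here are invented.

def fire_risk_level(temp_c, rel_hum, wind_ms, litter_moisture):
    score = (0.04 * temp_c - 0.02 * rel_hum
             + 0.03 * wind_ms - 0.05 * litter_moisture + 1.5)
    for level, cut in ((3, 2.0), (2, 1.0), (1, 0.5)):
        if score >= cut:
            return level
    return 0

# A hot, dry, windy day with dry litter yields an elevated level.
level = fire_risk_level(temp_c=30.0, rel_hum=20.0,
                        wind_ms=5.0, litter_moisture=10.0)
```

In the operational method one such fitted equation exists per forecast situation; the WRF reanalysis simply supplies the predictor values on a grid instead of at the 136 measuring sites.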

  6. Evaluating life-safety risk of fieldwork at New Zealand's active volcanoes

    NASA Astrophysics Data System (ADS)

    Deligne, Natalia; Jolly, Gill; Taig, Tony; Webb, Terry

    2014-05-01

Volcano observatories monitor active or potentially active volcanoes. Although the number and scope of remote monitoring instruments and methods continue to grow, in-person field data collection is still required for comprehensive monitoring. Fieldwork anywhere, and especially in mountainous areas, contains an element of risk. However, on volcanoes with signs of unrest, there is an additional risk of volcanic activity escalating while on site, with potentially lethal consequences. As an employer, a volcano observatory is morally and sometimes legally obligated to take reasonable measures to ensure staff safety and to minimise occupational risk. Here we present how GNS Science evaluates life-safety risk for volcanologists engaged in fieldwork on New Zealand volcanoes with signs of volcanic unrest. Our method includes several key elements: (1) an expert elicitation of how likely an eruption is within a given time frame, (2) quantification, based on historical data when possible, of the likelihood of exposure to near-vent processes, ballistics, or surge at various distances from the vent given a small, moderate, or large eruption, and (3) an estimate of the fatality rate given exposure to these volcanic hazards. The final product quantifies hourly fatality risk at various distances from a volcanic vent; various thresholds of risk (for example, zones with more than 10^-5 hourly fatality risk) trigger different levels of required approval to undertake work. Although an element of risk will always be present when conducting fieldwork on potentially active volcanoes, this is a first step towards providing objective guidance for go/no-go decisions for volcanic monitoring.

  7. Possibilities for Practice: An Education of Value for At Risk Students.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education and Training, Winnipeg.

    Public schools in Manitoba (Canada) are encountering an increasing number of students who are considered to be educationally at risk. This paper addresses the provision of effective services relating to at-risk youth and the prevention of early school dropouts. The paper's first section covers the circumstances and conditions associated with early…

  8. AERA Statement on Use of Value-Added Models (VAM) for the Evaluation of Educators and Educator Preparation Programs

    ERIC Educational Resources Information Center

    Educational Researcher, 2015

    2015-01-01

    The purpose of this statement is to inform those using or considering the use of value-added models (VAM) about their scientific and technical limitations in the evaluation of educators and programs that prepare teachers. The statement briefly reviews the background and current context of using VAM for evaluations, enumerates specific psychometric…

  9. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

    2017-04-01

Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty that roughness coefficient values introduce into hydraulic models used for flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte-Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM to minimise input data uncertainty and to improve the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated for their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data, from an extreme historical flash flood event, are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
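A one-dimensional Latin Hypercube draw of Manning's n, of the kind used to feed the Monte Carlo runs, might look like this. The roughness range below is a hypothetical one for a natural channel, not the study's fitted distribution:

```python
import random

def latin_hypercube(n, low, high, seed=42):
    """One uniform draw from each of n equal-probability strata of
    [low, high], shuffled so successive model runs receive strata in
    random order (a 1-D Latin Hypercube sample)."""
    rng = random.Random(seed)
    samples = [low + (high - low) * (i + rng.random()) / n
               for i in range(n)]
    rng.shuffle(samples)
    return samples

# Hypothetical Manning's n range for a natural channel reach:
# each of the 10 strata contributes exactly one value, so the
# hydraulic runs cover the whole range without clustering.
roughness = latin_hypercube(10, 0.030, 0.060)
```

Compared with plain random sampling, stratification guarantees coverage of the tails of the roughness distribution with far fewer HEC-RAS runs, which is why it is preferred for expensive hydraulic simulations.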

  10. Results from the VALUE perfect predictor experiment: process-based evaluation

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit

    2016-04-01

Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias corrected regional climate models. In general, a good performance of a model for these aspects in present climate therefore does not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reason. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, and reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. 
Such process-based evaluation helps to attribute biases in surface

  11. Modeling returns volatility: Realized GARCH incorporating realized risk measure

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye

    2018-06-01

This study applies realized GARCH models, introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides the conventional realized measures, realized volatility and realized kernel, which serve as our benchmarks, we also use generalized realized risk measures: realized absolute deviation and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better in-sample volatility estimation and substantially improved out-of-sample volatility forecasting. In particular, realized expected shortfall performs best among all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
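Simplified versions of the realized measures can be computed directly from intraday returns. The estimators below are schematic stand-ins for the paper's measures (realized volatility, realized VaR and realized expected shortfall), and the returns are synthetic:

```python
import math

def realized_measures(intraday_returns, alpha=0.05):
    """Illustrative daily realized measures from intraday returns
    (simplified forms, not the paper's exact estimators)."""
    rv = math.sqrt(sum(r * r for r in intraday_returns))
    ordered = sorted(intraday_returns)          # worst returns first
    k = max(1, int(len(ordered) * alpha))       # tail observations
    r_var = -ordered[k - 1]                     # loss at the alpha-quantile
    r_es = -sum(ordered[:k]) / k                # mean loss beyond the VaR
    return rv, r_var, r_es

# Twenty synthetic 5-minute returns for one trading day.
returns = [0.001, -0.002, 0.0005, -0.0015, 0.002, -0.004, 0.0012,
           -0.0008, 0.0021, -0.0011, 0.0003, -0.0025, 0.0018,
           -0.0006, 0.0009, -0.0013, 0.0007, -0.0019, 0.0014, -0.001]
rv, r_var, r_es = realized_measures(returns)
```

In a realized GARCH setting, one such daily measure enters the measurement equation alongside the latent conditional variance; the study's finding is that the tail-based measures (especially realized ES) carry the most forecasting information.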

  12. Using toxicokinetic-toxicodynamic modeling as an acute risk assessment refinement approach in vertebrate ecological risk assessment.

    PubMed

    Ducrot, Virginie; Ashauer, Roman; Bednarska, Agnieszka J; Hinarejos, Silvia; Thorbek, Pernille; Weyman, Gabriel

    2016-01-01

    Recent guidance identified toxicokinetic-toxicodynamic (TK-TD) modeling as a relevant approach for risk assessment refinement. Yet, its added value compared to other refinement options is not detailed, and how to conduct the modeling appropriately is not explained. This case study addresses these issues through 2 examples of individual-level risk assessment for 2 hypothetical plant protection products: 1) evaluating the risk for small granivorous birds and small omnivorous mammals of a single application, as a seed treatment in winter cereals, and 2) evaluating the risk for fish after a pulsed treatment in the edge-of-field zone. Using acute test data, we conducted the first tier risk assessment as defined in the European Food Safety Authority (EFSA) guidance. When first tier risk assessment highlighted a concern, refinement options were discussed. Cases where the use of models should be preferred over other existing refinement approaches were highlighted. We then practically conducted the risk assessment refinement by using 2 different models as examples. In example 1, a TK model accounting for toxicokinetics and relevant feeding patterns in the skylark and in the wood mouse was used to predict internal doses of the hypothetical active ingredient in individuals, based on relevant feeding patterns in an in-crop situation, and identify the residue levels leading to mortality. In example 2, a TK-TD model accounting for toxicokinetics, toxicodynamics, and relevant exposure patterns in the fathead minnow was used to predict the time-course of fish survival for relevant FOCUS SW exposure scenarios and identify which scenarios might lead to mortality. Models were calibrated using available standard data and implemented to simulate the time-course of internal dose of active ingredient or survival for different exposure scenarios. Simulation results were discussed and used to derive the risk assessment refinement endpoints used for decision. Finally, we compared the
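The toxicokinetic logic behind example 1, an internal dose that builds up during exposure and is eliminated afterwards, can be sketched with a one-compartment model. The rate constants and the exposure pulse below are invented, not calibrated values from the case study:

```python
# One-compartment TK sketch: dC/dt = k_in * exposure(t) - k_out * C,
# integrated with a simple Euler scheme. All parameters are invented.

def internal_dose(times, exposure, k_in=0.5, k_out=0.3):
    """Return the internal concentration series over `times`."""
    c, series = 0.0, []
    dt = times[1] - times[0]
    for t in times:
        c += (k_in * exposure(t) - k_out * c) * dt
        series.append(c)
    return series

times = [i * 0.1 for i in range(100)]        # 0 .. 9.9 days, 0.1-day steps
pulse = lambda t: 1.0 if t < 2.0 else 0.0    # hypothetical 2-day exposure
doses = internal_dose(times, pulse)
# Concentration peaks at the end of the pulse, then decays.
```

Comparing the peak of such a series against a residue level associated with mortality is, in outline, how the TK refinement converts exposure patterns into an individual-level risk statement.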

  13. [Modeling in value-based medicine].

    PubMed

    Neubauer, A S; Hirneiss, C; Kampik, A

    2010-03-01

    Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and, ideally, validation of the model's predictive properties. The uncertainty inherent in any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than would be possible without this additional information.
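As a minimal illustration of one of the model types named above, a three-state Markov cohort model can be run in a few lines; all transition probabilities here are invented for the sketch, not taken from any clinical evaluation.

```python
# Hypothetical three-state Markov cohort model (Well, Ill, Dead) of the kind
# used in decision-analytic VBM evaluations; all probabilities are invented.
P = [
    [0.90, 0.08, 0.02],  # from Well: stay well, progress, die
    [0.00, 0.85, 0.15],  # from Ill: stay ill, die
    [0.00, 0.00, 1.00],  # Dead is absorbing
]
state = [1.0, 0.0, 0.0]  # the whole cohort starts in the Well state
life_years = 0.0
for _ in range(40):      # 40 annual cycles
    life_years += state[0] + state[1]  # alive states accrue one life-year per cycle
    state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]
```

Attaching costs and utilities to each state turns the same loop into a cost-effectiveness model, and varying `P` entries one at a time yields the univariate sensitivity analyses behind tornado plots.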

  14. Values, Valuing, and Evaluation. Research on Evaluation Program, Paper and Report Series. Interim Draft.

    ERIC Educational Resources Information Center

    Gephart, William J.

    The paper discusses the meaning of value and valuing, their roles in evaluation, and the potency of value systems in problem solving logic. Evaluation is defined as a process for facilitating decision making. A decision making situation occurs when there are options which are impossible to treat equivalently, and there is an impact in the…

  15. Evaluation of potential risks from ash disposal site leachate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, W.B.; Loh, J.Y.; Bate, M.C.

    1999-04-01

    A risk-based approach is used to evaluate potential human health risks associated with a discharge from an ash disposal site into a small stream. The RIVRISK model was used to estimate downstream concentrations and corresponding risks. The modeling and risk analyses focus on boron, the constituent of greatest potential concern to public health at the site investigated, in Riddle Run, Pennsylvania. Prior to performing the risk assessment, the model is validated by comparing observed and predicted results. The comparison is good and an uncertainty analysis is provided to explain the comparison. The hazard quotient (HQ) for boron is predicted to be greater than 1 at presently regulated compliance points over a range of flow rates. The reference dose (RfD) currently recommended by the United States Environmental Protection Agency (US EPA) was used for the analyses. However, the toxicity of boron as expressed by the RfD is now under review by both the U.S. EPA and the World Health Organization. Alternative reference doses being examined would produce predicted boron hazard quotients of less than 1 at nearly all flow conditions.
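The hazard-quotient logic above reduces to a ratio of chronic daily intake to the reference dose. The concentration, intake rate, body weight, and RfD values below are purely illustrative assumptions, not the site's data.

```python
def hazard_quotient(conc_mg_per_L, intake_L_per_day, body_weight_kg, rfd_mg_per_kg_day):
    """HQ = chronic daily intake / reference dose; HQ > 1 flags potential concern."""
    cdi = conc_mg_per_L * intake_L_per_day / body_weight_kg  # mg/kg-day
    return cdi / rfd_mg_per_kg_day

# Illustrative inputs: 2 L/day drinking-water intake for a 70 kg adult, a
# hypothetical boron concentration, and an assumed RfD of 0.2 mg/kg-day.
hq = hazard_quotient(conc_mg_per_L=15.0, intake_L_per_day=2.0,
                     body_weight_kg=70.0, rfd_mg_per_kg_day=0.2)
```

Raising the assumed RfD lowers the HQ proportionally, which is exactly why the alternative reference doses under review would push predicted quotients below 1.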

  16. GIS-based flood risk model evaluated by Fuzzy Analytic Hierarchy Process (FAHP)

    NASA Astrophysics Data System (ADS)

    Sukcharoen, Tharapong; Weng, Jingnong; Teetat, Charoenkalunyuta

    2016-10-01

    Over the last two to three decades, many countries have developed their economies rapidly, but development has often been unbalanced because it focused on economic growth alone and lacked effective planning in the use of natural resources. This contributes to climate change, a major driver of natural disasters. Thailand has long suffered from natural disasters; flooding in particular, the most hazardous disaster in the country, causes great annual losses of life, property, environment, and economy, and national flood management is not yet efficient enough to support comprehensive flood analysis. This paper applied Geographic Information Systems (GIS) and Multi-Criteria Decision Making to create a flood risk model at regional scale, using Angthong province in Thailand as the study area. In the analysis, fuzzy logic was used to temper the specialists' assessments through fuzzy membership functions, since human judgment is flawed under uncertainty, and the AHP technique was then applied. The hierarchy structure categorized the spatial flood factors into two levels: 6 criteria (Meteorology, Geology, Topography, Hydrology, Human, and Flood history) and 8 factors (Average Rainfall, Distance from Stream, Soil drainage capability, Slope, Elevation, Land use, Distance from road, and Flooded area in the past). The validity of the pair-wise comparisons in AHP was expressed as a C.R. value, which indicated that the specialist judgments were reasonably consistent. The FAHP computation showed that Meteorology was the first-priority criterion and that Rainfall was the most influential factor for flooding. Finally, the output was displayed as a thematic map of Angthong province with flood risk levels processed by GIS tools, classified into High Risk, Moderate Risk, and Low Risk (13.20%, 75.58%, and 11.22% of the total area, respectively).
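The AHP step mentioned above can be sketched as follows: derive priority weights from a pairwise-comparison matrix via its principal eigenvector and check the consistency ratio (C.R.). The 3x3 judgment matrix below is hypothetical, not the study's actual expert data.

```python
# Hypothetical pairwise-comparison matrix over three criteria (Saaty scale).
A = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
n = len(A)
w = [1.0 / n] * n
for _ in range(100):  # power iteration for the principal eigenvector
    w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    total = sum(w)
    w = [x / total for x in w]
# Principal eigenvalue estimate, consistency index, and consistency ratio.
lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
CI = (lam - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index for matrices of size n
CR = CI / RI                         # C.R. < 0.10 => acceptably consistent judgments
```

A C.R. below 0.10 is the conventional threshold for "reasonably consistent" expert judgments; larger values suggest the pairwise comparisons should be revisited.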

  17. Validation of a predictive model for identifying febrile young infants with altered urinalysis at low risk of invasive bacterial infection.

    PubMed

    Velasco, R; Gómez, B; Hernández-Bou, S; Olaciregui, I; de la Torre, M; González, A; Rivas, A; Durán, I; Rubio, A

    2017-02-01

    In 2015, a predictive model for invasive bacterial infection (IBI) in febrile young infants with altered urine dipstick was published. The aim of this study was to externally validate a previously published set of low risk criteria for invasive bacterial infection in febrile young infants with altered urine dipstick. Retrospective multicenter study including nine Spanish hospitals. Febrile infants ≤90 days old with altered urinalysis (presence of leukocyturia and/or nitrituria) were included. According to our predictive model, an infant is classified as low-risk for IBI when meeting all the following: appearing well at arrival to the emergency department, being >21 days old, having a procalcitonin value <0.5 ng/mL and a C-reactive protein value <20 mg/L. IBI was considered as secondary to urinary tract infection if the same pathogen was isolated in the urine culture and in the blood or cerebrospinal fluid culture. A total of 391 patients with altered urine dipstick were included. Thirty (7.7 %) of them developed an IBI, with 26 (86.7 %) of them secondary to UTI. Prevalence of IBI was 2/104 (1.9 %; CI 95% 0.5-6.7) among low-risk patients vs 28/287 (9.7 %; CI 95% 6.8-13.7) among high-risk patients (p < 0.05). Sensitivity of the model was 93.3 % (CI 95% 78.7-98.2) and negative predictive value was 98.1 % (93.3-99.4). Although our predictive model was shown to be less accurate in the validation cohort, it still showed a good discriminatory ability to detect IBI. Larger prospective external validation studies, taking into account fever duration as well as the role of ED observation, should be undertaken before its implementation into clinical practice.

  18. Accidental falls in hospital inpatients: evaluation of sensitivity and specificity of two risk assessment tools.

    PubMed

    Lovallo, Carmela; Rolandi, Stefano; Rossetti, Anna Maria; Lusignani, Maura

    2010-03-01

    This paper is a report of a study comparing the effectiveness of two falls risk assessment tools (Conley Scale and Hendrich Risk Model) by using them simultaneously with the same sample of hospital inpatients. Different risk assessment tools are available in the literature. However, neither recent critical reviews nor international guidelines on fall prevention have identified tools that can be generalized to all categories of hospitalized patients. A prospective observational study was carried out in acute medical and surgical wards and in rehabilitation units. From October 2007 to January 2008, 1148 patients were assessed with both instruments, subsequently noting the occurrence of falls. The sensitivity, specificity, positive and negative predictive values, and Receiver Operating Characteristics curves were calculated. The number of patients correctly identified with the Conley Scale (n = 41) was higher than with the Hendrich Model (n = 27). The Conley Scale gave sensitivity and specificity values of 69.49% and 61% respectively. The Hendrich Model gave a sensitivity value of 45.76% and a specificity value of 71%. Positive and negative predictive values were comparable. The Conley Scale is indicated for use in the medical sector, on the strength of its high sensitivity. However, since its specificity is very low, it is deemed useful to submit individual patients giving positive results to more in-depth clinical evaluation in order to decide whether preventive measures need to be taken. In surgical sectors, the low sensitivity values given by both scales suggest that further studies are warranted.
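The four accuracy measures reported above come from a standard 2x2 table. The counts below are reconstructed to approximate the reported Conley figures (41 correctly flagged fallers among 1148 patients); they are illustrative, not the study's published table.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard 2x2 accuracy measures for a dichotomous risk assessment tool."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Reconstructed, approximate counts for the Conley Scale (assumption, see above).
m = screening_metrics(tp=41, fp=425, fn=18, tn=664)
```

With these counts, sensitivity reproduces the reported 69.49% and specificity lands near 61%, which makes concrete why a highly sensitive but unspecific scale flags many patients who never fall.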

  19. Predicting the cumulative risk of death during hospitalization by modeling weekend, weekday and diurnal mortality risks.

    PubMed

    Coiera, Enrico; Wang, Ying; Magrabi, Farah; Concha, Oscar Perez; Gallego, Blanca; Runciman, William

    2014-05-21

    Current prognostic models factor in patient and disease specific variables but do not consider cumulative risks of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day, as well as day of the week. Model performance was evaluated alone, and in combination with simple disease-specific models. Patients admitted between 2000 and 2006 from 501 public and private hospitals in NSW, Australia were used for training and 2007 data for evaluation. The impact of hospital care delivered over different days of the week and/or times of the day was modeled by separating hospitalization risk into 21 separate time periods (morning, day, night across the days of the week). Three models were developed to predict death up to 7 days post-discharge: (1) a simple background risk model using age and gender; (2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); (3) disease-specific models (Charlson co-morbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, Akaike and Bayesian information criteria. There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, as well as the well-known 'weekend effect' where mortality peaks with weekend admissions. Individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79 respectively). The combined model which included time-varying risk however yielded an average AUC of 0.92. This model performed best for stays up to 7 days (93% of admissions), peaking at days 3 to 5 (AUC 0.94). Risks of hospitalization vary not just with the day of the week but also the time of the day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Combining disease-specific models with such time-varying estimates appears to

  20. Evaluation of the DAVROS (Development And Validation of Risk-adjusted Outcomes for Systems of emergency care) risk-adjustment model as a quality indicator for healthcare

    PubMed Central

    Wilson, Richard; Goodacre, Steve W; Klingbajl, Marcin; Kelly, Anne-Maree; Rainer, Tim; Coats, Tim; Holloway, Vikki; Townend, Will; Crane, Steve

    2014-01-01

    Background and objective: Risk-adjusted mortality rates can be used as a quality indicator if it is assumed that the discrepancy between predicted and actual mortality can be attributed to the quality of healthcare (ie, the model has attributional validity). The Development And Validation of Risk-adjusted Outcomes for Systems of emergency care (DAVROS) model predicts 7-day mortality in emergency medical admissions. We aimed to test this assumption by evaluating the attributional validity of the DAVROS risk-adjustment model. Methods: We selected cases that had the greatest discrepancy between observed mortality and predicted probability of mortality from seven hospitals involved in validation of the DAVROS risk-adjustment model. Reviewers at each hospital assessed hospital records to determine whether the discrepancy between predicted and actual mortality could be explained by the healthcare provided. Results: We received 232/280 (83%) completed review forms relating to 179 unexpected deaths and 53 unexpected survivors. The healthcare system was judged to have potentially contributed to 10/179 (8%) of the unexpected deaths and 26/53 (49%) of the unexpected survivors. Failure of the model to appropriately predict risk was judged to be responsible for 135/179 (75%) of the unexpected deaths and 2/53 (4%) of the unexpected survivors. Some 10/53 (19%) of the unexpected survivors died within a few months of the 7-day period of model prediction. Conclusions: We found little evidence that deaths occurring in patients with a low predicted mortality from risk-adjustment could be attributed to the quality of healthcare provided. PMID:23605036

  1. Theory-Based Cartographic Risk Model Development and Application for Home Fire Safety.

    PubMed

    Furmanek, Stephen; Lehna, Carlee; Hanchette, Carol

    There is a gap in the use of predictive risk models to identify areas at risk for home fires and burn injury. The purpose of this study was to describe the creation, validation, and application of such a model, using a sample from an intervention study with parents of newborns in Jefferson County, KY, as an example. A literature search was performed to identify risk factors for home fires and burn injury in the target population. Risk factor data were obtained from the American Community Survey at the census tract level and synthesized to create a predictive cartographic risk model. Model validation was performed through correlation, regression, and Moran's I with fire incidence data from open records. Independent samples t-tests were used to examine the model in relation to geocoded participant addresses. Participant risk level for fire rate was determined, along with proximity to fire station service areas and hospitals. The model showed high and severe risk clustering in the northwest section of the county. Modeled risk was strongly correlated with fire rate; the best predictive model for fire risk contained home value (low), race (black), and non high school graduates. Applying the model to the intervention sample, the majority of participants were at lower risk and mostly within service areas closest to a fire department and hospital. Cartographic risk models were useful in identifying areas at risk and analyzing participant risk level. The methods outlined in this study are generalizable to other public health issues.

  2. Measuring the coupled risks: A copula-based CVaR model

    NASA Astrophysics Data System (ADS)

    He, Xubiao; Gong, Pu

    2009-01-01

    Integrated risk management for financial institutions requires an approach for aggregating risk types (such as market and credit) whose distributional shapes vary considerably. Financial institutions often ignore the coupling between risks and so underestimate their overall financial risk. We constructed a copula-based Conditional Value-at-Risk (CVaR) model for market and credit risks. This technique allows us to incorporate realistic marginal distributions that capture essential empirical features of these risks, such as skewness and fat tails, while allowing for a rich dependence structure. Finally, a numerical simulation method is used to implement the model. Our results indicate that the coupled risks for a listed company's stock may be undervalued if credit risk is ignored, especially for companies with poor credit quality.
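A minimal Monte-Carlo sketch of the idea: couple two loss components through a Gaussian copula, then read VaR and CVaR off the simulated aggregate-loss distribution. The marginals below (a lognormal "market" loss and a heavy-tailed Pareto "credit" loss) and all parameters are illustrative assumptions, not the paper's calibrated model.

```python
import math
import random

def cvar_gaussian_copula(n=50_000, rho=0.6, alpha=0.99, seed=42):
    """Monte-Carlo VaR/CVaR of an aggregate loss whose two components are
    coupled through a Gaussian copula; marginals are illustrative only."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u2 = 0.5 * (1.0 + math.erf(z2 / math.sqrt(2.0)))  # copula uniform, leg 2
        market = math.exp(z1)                        # lognormal marginal via z1
        credit = (1.0 - u2) ** (-1.0 / 3.0) - 1.0    # Pareto(3) marginal via inverse CDF
        losses.append(market + credit)
    losses.sort()
    k = int(alpha * n)
    var = losses[k]                   # alpha-quantile of the aggregate loss
    cvar = sum(losses[k:]) / (n - k)  # expected loss beyond VaR
    return var, cvar

var99, cvar99 = cvar_gaussian_copula()
```

Setting `rho = 0` in the same simulation shows how ignoring the dependence between the two legs shifts the tail of the aggregate loss, which is the coupling effect the abstract describes.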

  3. Evaluation of self-combustion risk in tire derived aggregate fills.

    PubMed

    Arroyo, Marcos; San Martin, Ignacio; Olivella, Sebastian; Saaltink, Maarten W

    2011-01-01

    Lightweight tire derived aggregate (TDA) fills are a proven recycling outlet for waste tires, requiring relatively low cost waste processing and being competitively priced against other lightweight fill alternatives. However, its value has been marred, as several TDA fills self-combusted during the early applications of this technique. An empirical review of these cases led to prescriptive guidelines from the ASTM aimed at avoiding this problem. This approach has been successful in avoiding further incidents of self-combustion. However, at present there remains no rational method available to quantify self-combustion risk in TDA fills. This means that it is not clear which aspects of the ASTM guidelines are essential and which are accessory. This hinders the practical use of TDA fills despite their inherent advantages as lightweight fill. Here a quantitative approach to self-combustion risk evaluation is developed and illustrated with a parametric analysis of an embankment case. This is later particularized to model a reported field self-combustion case. The approach is based on the available experimental observations and incorporates well-tested methodological (ISO corrosion evaluation) and theoretical tools (finite element analysis of coupled heat and mass flow). The results obtained offer clear insights into the critical aspects of the problem, allowing already some meaningful recommendations for guideline revision. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Evaluating Ecological Risk to Invertebrate Receptors from PAHs in Sediments at Hazardous Waste Sites (Final Report)

    EPA Science Inventory

    EPA's Ecological Risk Assessment Support Center (ERASC) announced the release of the final report, Evaluating Ecological Risk to Invertebrate Receptors from PAHs in Sediments at Hazardous Waste Sites. The report provides an overview of an approach for assessing risk to ...

  5. Using an extended 2D hydrodynamic model for evaluating damage risk caused by extreme rain events: Flash-Flood-Risk-Map (FFRM) Upper Austria

    NASA Astrophysics Data System (ADS)

    Humer, Günter; Reithofer, Andreas

    2016-04-01

    Considering the increase in flash flood events causing massive damage during the last years in urban but also rural areas [1-4], the requirement for hydrodynamic calculation of flash flood prone areas and possible countermeasures has arisen for many municipalities and local governments. Besides the German-based URBAS project [1], the EU-funded FP7 research project "SWITCH-ON" [5] also addresses the damage risk caused by flash floods in the sub-project "FFRM" (Flash Flood Risk Map Upper Austria) by calculating damage risk for buildings and vulnerable infrastructure like schools and hospitals caused by flash-flood driven inundation. While danger zones in riverine flooding are established as an integral part of spatial planning, flash floods caused by overland runoff from extreme rain events have long been an underrated safety hazard, not only for buildings and infrastructure but for people and animals as well. Based on the widespread 2D model "hydro_as-2D", an extension was developed which calculates the runoff formation from a spatially and temporally variable precipitation and determines two-dimensionally the land surface runoff and its concentration. The conception of the model is to preprocess the precipitation data and calculate the effective runoff volume for a short time step of e.g. five minutes. This volume is applied to the nodes of the 2D model and the calculation of the hydrodynamic model is started. At the end of each time step, the model run is stopped, the preprocessing step is repeated and the hydraulic model calculation is continued. In view of the later use for the whole of Upper Austria (12,000 km²) a model grid of 25x25 m² was established using digital elevation data. Model parameters could be estimated for the small catchment of river Ach, which was hit by an intense rain event with up to 109 mm per hour

  6. Goals and Values in School: A Model Developed for Describing, Evaluating and Changing the Social Climate of Learning Environments

    ERIC Educational Resources Information Center

    Allodi, Mara Westling

    2010-01-01

    This paper defines a broad model of the psychosocial climate in educational settings. The model was developed from a general theory of learning environments, on a theory of human values and on empirical studies of children's evaluations of their schools. The contents of the model are creativity, stimulation, achievement, self-efficacy, creativity,…

  7. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.

  8. Risk and value analysis of SETI

    NASA Technical Reports Server (NTRS)

    Billingham, J.

    1990-01-01

    This paper attempts to apply a traditional risk and value analysis to the Search for Extraterrestrial Intelligence--SETI. In view of the difficulties of assessing the probability of success, a comparison is made between SETI and a previous search for extraterrestrial life, the biological component of Project Viking. Our application of simple Utility Theory, given some reasonable assumptions, suggests that SETI is at least as worthwhile as the biological experiment on Viking.

  9. Risk and value analysis of SETI.

    PubMed

    Billingham, J

    1990-01-01

    This paper attempts to apply a traditional risk and value analysis to the Search for Extraterrestrial Intelligence--SETI. In view of the difficulties of assessing the probability of success, a comparison is made between SETI and a previous search for extraterrestrial life, the biological component of Project Viking. Our application of simple Utility Theory, given some reasonable assumptions, suggests that SETI is at least as worthwhile as the biological experiment on Viking.

  10. Evaluating the Predictive Value of Growth Prediction Models

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  11. Developing a novel risk prediction model for severe malarial anemia.

    PubMed

    Brickley, E B; Kabyemela, E; Kurtis, J D; Fried, M; Wood, A M; Duffy, P E

    2017-01-01

    As a pilot study to investigate whether personalized medicine approaches could have value for the reduction of malaria-related mortality in young children, we evaluated questionnaire and biomarker data collected from the Mother Offspring Malaria Study Project birth cohort (Muheza, Tanzania, 2002-2006) at the time of delivery as potential prognostic markers for pediatric severe malarial anemia. Severe malarial anemia, defined here as a Plasmodium falciparum infection accompanied by hemoglobin levels below 50 g/L, is a key manifestation of life-threatening malaria in high transmission regions. For this study sample, a prediction model incorporating cord blood levels of interleukin-1β provided the strongest discrimination of severe malarial anemia risk with a C-index of 0.77 (95% CI 0.70-0.84), whereas a pragmatic model based on sex, gravidity, transmission season at delivery, and bed net possession yielded a more modest C-index of 0.63 (95% CI 0.54-0.71). Although additional studies, ideally incorporating larger sample sizes and higher event per predictor ratios, are needed to externally validate these prediction models, the findings provide proof of concept that risk score-based screening programs could be developed to avert severe malaria cases in early childhood.
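The C-index reported above measures discrimination: for a binary outcome it is the probability that a randomly chosen case is assigned a higher risk score than a randomly chosen non-case (equivalent to the AUROC). A minimal pairwise implementation, with toy scores that are not the cohort's data:

```python
def c_index(scores, events):
    """Concordance (C-) index for a binary outcome: probability that a random
    case scores higher than a random non-case, counting ties as 1/2."""
    cases = [s for s, e in zip(scores, events) if e]
    controls = [s for s, e in zip(scores, events) if not e]
    pairs, concordant = 0, 0.0
    for cs in cases:
        for ct in controls:
            pairs += 1
            concordant += 1.0 if cs > ct else (0.5 if cs == ct else 0.0)
    return concordant / pairs

# Toy risk scores and outcomes (hypothetical, for illustration only).
ci = c_index([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])
```

A value of 0.5 means no discrimination and 1.0 perfect discrimination, which puts the abstract's C-indices of 0.77 and 0.63 in context.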

  12. The ACC/AHA 2013 pooled cohort equations compared to a Korean Risk Prediction Model for atherosclerotic cardiovascular disease.

    PubMed

    Jung, Keum Ji; Jang, Yangsoo; Oh, Dong Joo; Oh, Byung-Hee; Lee, Sang Hoon; Park, Seong-Wook; Seung, Ki-Bae; Kim, Hong-Kyu; Yun, Young Duk; Choi, Sung Hee; Sung, Jidong; Lee, Tae-Yong; Kim, Sung Hi; Koh, Sang Baek; Kim, Moon Chan; Chang Kim, Hyeon; Kimm, Heejin; Nam, Chungmo; Park, Sungha; Jee, Sun Ha

    2015-09-01

    To evaluate the performance of the American College of Cardiology/American Heart Association (ACC/AHA) 2013 Pooled Cohort Equations in the Korean Heart Study (KHS) population and to develop a Korean Risk Prediction Model (KRPM) for atherosclerotic cardiovascular disease (ASCVD) events. The KHS cohort included 200,010 Korean adults aged 40-79 years who were free from ASCVD at baseline. Discrimination, calibration, and recalibration of the ACC/AHA Equations in predicting 10-year ASCVD risk in the KHS cohort were evaluated. The KRPM was derived using Cox model coefficients, mean risk factor values, and mean incidences from the KHS cohort. In the discriminatory analysis, the ACC/AHA Equations' White and African-American (AA) models moderately distinguished cases from non-cases, and were similar to the KRPM: For men, the area under the receiver operating characteristic curve (AUROCs) were 0.727 (White model), 0.725 (AA model), and 0.741 (KRPM); for women, the corresponding AUROCs were 0.738, 0.739, and 0.745. Absolute 10-year ASCVD risk for men in the KHS cohort was overestimated by 56.5% (White model) and 74.1% (AA model), while the risk for women was underestimated by 27.9% (White model) and overestimated by 29.1% (AA model). Recalibration of the ACC/AHA Equations did not affect discriminatory ability but improved calibration substantially, especially in men in the White model. Of the three ASCVD risk prediction models, the KRPM showed best calibration. The ACC/AHA Equations should not be directly applied for ASCVD risk prediction in a Korean population. The KRPM showed best predictive ability for ASCVD risk. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Performance Evaluation Model for Application Layer Firewalls.

    PubMed

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
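The service-desk analysis described above rests on standard Erlang queuing results. As a sketch only (the paper's multi-layer model is more elaborate), the M/M/c Erlang-C formula gives the probability that an arriving request must wait and its mean waiting time:

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """M/M/c (Erlang-C) performance: probability an arriving request must
    wait, and its mean waiting time, given the arrival rate, per-server
    service rate, and number of servers."""
    a = arrival_rate / service_rate  # offered load in Erlangs
    rho = a / servers                # per-server utilization
    assert rho < 1.0, "system must be stable (rho < 1)"
    below = sum(a ** k / math.factorial(k) for k in range(servers))
    top = a ** servers / (math.factorial(servers) * (1.0 - rho))
    p_wait = top / (below + top)     # Erlang-C waiting probability
    mean_wait = p_wait / (servers * service_rate - arrival_rate)
    return p_wait, mean_wait
```

Sweeping `servers` under a fixed resource budget is the kind of calculation behind the resource-allocation comparison of throughput, delay, and loss described in the abstract.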

  14. [Health risk assessment of coke oven PAHs emissions].

    PubMed

    Bo, Xin; Wang, Gang; Wen, Rou; Zhao, Chun-Li; Wu, Tie; Li, Shi-Bei

    2014-07-01

    Polycyclic aromatic hydrocarbons (PAHs) produced by coke ovens are strongly toxic and carcinogenic. Taking a typical coke oven of an iron and steel enterprise as the case study, the dispersion and migration of 13 kinds of PAHs emitted from the coke oven were analyzed using the AERMOD dispersion model, and the carcinogenic and non-carcinogenic risks at the receptors within the modeling domain were evaluated using BREEZE Risk Analyst, following the Human Health Risk Assessment Protocol for Hazardous Waste Combustion (HHRAP); in this way the health risks caused by PAH emissions from the coke oven were quantitatively evaluated. The results indicated that attention should be paid to the non-carcinogenic risk of naphthalene emission (maximum value 0.97). The carcinogenic risks of each single pollutant were all below 1.0E-06, while the maximum total carcinogenic risk was 2.65E-06, which may have some influence on the health of local residents.

  15. Risk-adjusted performance evaluation in three academic thoracic surgery units using the Eurolung risk models.

    PubMed

    Pompili, Cecilia; Shargall, Yaron; Decaluwe, Herbert; Moons, Johnny; Chari, Madhu; Brunelli, Alessandro

    2018-01-03

    The objective of this study was to evaluate the performance of 3 thoracic surgery centres using the Eurolung risk models for morbidity and mortality. This was a retrospective analysis performed on data collected from 3 academic centres (2014-2016). Seven hundred and twenty-one patients in Centre 1, 857 patients in Centre 2 and 433 patients in Centre 3 who underwent anatomical lung resections were analysed. The Eurolung1 and Eurolung2 models were used to predict risk-adjusted cardiopulmonary morbidity and 30-day mortality rates. Observed and risk-adjusted outcomes were compared within each centre. The observed morbidity of Centre 1 was in line with the predicted morbidity (observed 21.1% vs predicted 22.7%, P = 0.31). Centre 2 performed better than expected (observed morbidity 20.2% vs predicted 26.7%, P < 0.001), whereas the observed morbidity of Centre 3 was higher than the predicted morbidity (observed 41.1% vs predicted 24.3%, P < 0.001). Centre 1 had higher observed mortality when compared with the predicted mortality (3.6% vs 2.1%, P = 0.005), whereas Centre 2 had an observed mortality rate significantly lower than the predicted mortality rate (1.2% vs 2.5%, P = 0.013). Centre 3 had an observed mortality rate in line with the predicted mortality rate (observed 1.4% vs predicted 2.4%, P = 0.17). The observed mortality rates in the patients with major complications were 30.8% in Centre 1 (versus predicted mortality rate 3.8%, P < 0.001), 8.2% in Centre 2 (versus predicted mortality rate 4.1%, P = 0.030) and 9.0% in Centre 3 (versus predicted mortality rate 3.5%, P = 0.014). The Eurolung models were successfully used as risk-adjusting instruments to internally audit the outcomes of 3 different centres, showing their applicability for future quality improvement initiatives. © The Author(s) 2018. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
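The centre-level comparisons above test an observed event rate against a risk-adjusted prediction. One simple way to sketch such a test is a two-sided z-test with a normal approximation; this is an illustration of the idea, not necessarily the study's exact method, and the event count 173 is reconstructed from the reported 20.2% of 857.

```python
import math

def observed_vs_expected(events, n, predicted_rate):
    """Two-sided z-test (normal approximation) of an observed event rate
    against a risk-adjusted predicted rate."""
    observed = events / n
    se = math.sqrt(predicted_rate * (1.0 - predicted_rate) / n)
    z = (observed - predicted_rate) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return observed, z, p

# Centre 2 morbidity from the abstract: ~20.2% observed vs 26.7% predicted,
# n = 857 (events reconstructed from the reported percentage).
obs, z, p = observed_vs_expected(events=173, n=857, predicted_rate=0.267)
```

With these numbers the observed rate sits significantly below the Eurolung prediction, matching the abstract's P < 0.001 for Centre 2's better-than-expected morbidity.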

  16. Breast cancer risks and risk prediction models.

    PubMed

    Engel, Christoph; Fischer, Christine

    2015-02-01

    BRCA1/2 mutation carriers have a considerably increased risk of developing breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are at 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after first cancer 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as a basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.

  17. Improving value assessment of high-risk, high-reward biotechnology research: the role of 'thick tails'.

    PubMed

    Casault, Sébastien; Groen, Aard J; Linton, Jonathan D

    2014-03-25

    This paper presents work toward improving the efficacy of financial models that describe the unique nature of biotechnology firms. We show that using a 'thick-tailed' power law distribution to describe the behavior of the value of biotechnology R&D in a Real Options Pricing model is significantly more accurate than the traditionally used Gaussian approach. A study of 287 North American biotechnology firms gives insights into common problems faced by investors, managers and other stakeholders when using traditional techniques to calculate the commercial value of R&D. This is important because specific quantitative tools to assess the value of high-risk, high-reward R&D do not currently exist. This often leads to an undervaluation of biotechnology R&D and of R&D-intensive biotechnology firms. For example, the widely used Net Present Value (NPV) method assumes a fixed risk, ignoring management flexibility and the changing environment. Real Options Pricing models, meanwhile, assume that commercial returns from R&D investments are described by a normal random walk. A normal random walk model eliminates the possibility of drastic changes to the marketplace resulting from the introduction of revolutionary products and/or services. It is possible to better understand and manage biotechnology research projects and portfolios using a model that more accurately considers large non-Gaussian price fluctuations with thick tails, which reflect the unusually large risks and opportunities associated with biotechnology R&D. Our empirical data show that opportunity overcompensates for the downside risk, making biotechnology R&D statistically more valuable than other Gaussian options investments that may otherwise appear to offer a similar combination of risk and return. Copyright © 2013 Elsevier B.V. All rights reserved.
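    The 'thick tails' argument can be made concrete by comparing tail probabilities under the two distributional assumptions. A hedged sketch with invented parameters (the paper's fitted exponent for the 287-firm sample is not reproduced here):

```python
import math

def gaussian_tail(x, mu=1.0, sigma=1.0):
    """P(X > x) for a normal distribution, via the complementary
    error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

def power_law_tail(x, x_min=1.0, alpha=2.0):
    """P(X > x) for a Pareto ('thick-tailed') distribution."""
    return (x_min / x) ** alpha

x = 10.0  # a tenfold outcome relative to the typical value
print(gaussian_tail(x))   # astronomically small under the Gaussian model
print(power_law_tail(x))  # rare but entirely plausible under a power law
```

    The Gaussian model assigns essentially zero probability to the revolutionary-product outcomes that drive biotechnology returns, which is exactly the undervaluation the paper describes.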

  18. Coronary artery calcium scoring does not add prognostic value to standard 64-section CT angiography protocol in low-risk patients suspected of having coronary artery disease.

    PubMed

    Kwon, Sung Woo; Kim, Young Jin; Shim, Jaemin; Sung, Ji Min; Han, Mi Eun; Kang, Dong Won; Kim, Ji-Ye; Choi, Byoung Wook; Chang, Hyuk-Jae

    2011-04-01

    To evaluate the prognostic outcome of cardiac computed tomography (CT) for prediction of major adverse cardiac events (MACEs) in low-risk patients suspected of having coronary artery disease (CAD) and to explore the differential prognostic values of coronary artery calcium (CAC) scoring and coronary CT angiography. Institutional review committee approval and informed consent were obtained. In 4338 patients who underwent 64-section CT for evaluation of suspected CAD, both CAC scoring and CT angiography were concurrently performed by using standard scanning protocols. Follow-up clinical outcome data regarding composite MACEs were procured. Multivariable Cox proportional hazards models were developed to predict MACEs. Risk-adjusted models incorporated traditional risk factors for CAC scoring and coronary CT angiography. During the mean follow-up of 828 days ± 380, there were 105 MACEs, for an event rate of 3%. The presence of obstructive CAD at coronary CT angiography had independent prognostic value, which escalated according to the number of stenosed vessels (P < .001). In the receiver operating characteristic (ROC) curve analysis, the superiority of coronary CT angiography to CAC scoring was demonstrated by a significantly greater area under the ROC curve (AUC) (0.892 vs 0.810, P < .001), whereas no significant incremental value for the addition of CAC scoring to coronary CT angiography was established (AUC = 0.892 for coronary CT angiography alone vs 0.902 with addition of CAC scoring, P = .198). Coronary CT angiography is better than CAC scoring in predicting MACEs in low-risk patients suspected of having CAD. Furthermore, the current standard multisection CT protocol (coronary CT angiography combined with CAC scoring) has no incremental prognostic value compared with coronary CT angiography alone. Therefore, in terms of determining prognosis, CAC scoring may no longer need to be incorporated in the cardiac CT protocol in this population. © RSNA, 2011.
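    The AUC comparison at the heart of this record has a simple probabilistic reading: AUC is the chance that a randomly chosen event case outranks a randomly chosen non-event case. A didactic O(n·m) sketch on toy scores (the study's data are not reproduced here):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random event case scores higher than a
    random non-event case, with ties counting half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy model outputs for MACE cases vs non-cases
print(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3, 0.2]))
```

    Comparing two models then amounts to comparing two such numbers, as the study does for coronary CT angiography (0.892) versus CAC scoring (0.810).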

  19. Evaluation of the prognostic value of platelet to lymphocyte ratio in patients with hepatocellular carcinoma.

    PubMed

    Wang, Yuchen; Attar, Bashar M; Fuentes, Harry E; Jaiswal, Palashkumar; Tafur, Alfonso J

    2017-12-01

    Hepatocellular carcinoma (HCC) is an increasingly common, potentially fatal cancer globally. The platelet-to-lymphocyte ratio (PLR), a biomarker of systemic inflammation, has recently been recognized as a valuable prognostic marker in multiple cancer types. The aim of the present study was to assess the prognostic value of PLR in HCC patients and determine the optimal cut-off value for risk stratification. We retrospectively analyzed patients with a diagnosis of HCC (screened by ICD-9 code, confirmed with radiographic examination and/or biopsy) at a large public hospital over 15 years (Jan 2000 through July 2015). PLR, among other serum laboratory values, was collected at diagnosis of HCC. Its association with overall survival was evaluated with a Cox proportional hazards model. Among 270 patients with HCC, 57 (21.1%) died within an average follow-up of 11.9 months. PLR at diagnosis was significantly different between survivors and non-survivors (128.9 vs. 186.7; P=0.003). In multivariate analysis, aspartate transaminase (AST) (HR 2.022, P<0.001) and PLR (HR 1.768, P=0.004) independently predicted mortality. The optimal cut-off value for PLR was determined to be 220 by receiver operating characteristic curve analysis, and the high PLR group had significantly higher mortality (HR 3.42, P<0.001). Our results indicate that a PLR above 220 at diagnosis predicts poor prognosis in HCC patients. PLR is a low-cost and convenient tool that may serve as a useful prognostic marker for HCC.
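    A common way to derive a cut-off like the PLR threshold of 220 from an ROC analysis is to maximise the Youden index; whether the authors used exactly this criterion is an assumption, and the values below are toy data, not the cohort's:

```python
def youden_cutoff(values_dead, values_alive):
    """Pick the cut-off maximising sensitivity + specificity - 1
    (the Youden index), one standard way to choose an 'optimal'
    ROC threshold."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values_dead + values_alive)):
        sens = sum(v >= cut for v in values_dead) / len(values_dead)
        spec = sum(v < cut for v in values_alive) / len(values_alive)
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut

# Toy PLR values for deceased vs surviving patients (illustrative only)
print(youden_cutoff([250, 310, 190, 280], [120, 140, 90, 230, 110]))
```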

  20. Prediction models for the risk of spontaneous preterm birth based on maternal characteristics: a systematic review and independent external validation.

    PubMed

    Meertens, Linda J E; van Montfort, Pim; Scheepers, Hubertina C J; van Kuijk, Sander M J; Aardenburg, Robert; Langenveld, Josje; van Dooren, Ivo M A; Zwaan, Iris M; Spaanderman, Marc E A; Smits, Luc J M

    2018-04-17

    Prediction models may contribute to personalized risk-based management of women at high risk of spontaneous preterm delivery. Although prediction models are published frequently, often with promising results, external validation generally is lacking. We performed a systematic review of prediction models for the risk of spontaneous preterm birth based on routine clinical parameters. Additionally, we externally validated and evaluated the clinical potential of the models. Prediction models based on routinely collected maternal parameters obtainable during the first 16 weeks of gestation were eligible for selection. Risk of bias was assessed according to the CHARMS guidelines. We validated the selected models in a Dutch multicenter prospective cohort study comprising 2614 unselected pregnant women. Information on predictors was obtained by a web-based questionnaire. Predictive performance of the models was quantified by the area under the receiver operating characteristic curve (AUC) and calibration plots for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation. Clinical value was evaluated by means of decision curve analysis and by calculating classification accuracy for different risk thresholds. Four studies describing five prediction models fulfilled the eligibility criteria. Risk of bias assessment revealed a moderate to high risk of bias in three studies. The AUC of the models ranged from 0.54 to 0.67 and from 0.56 to 0.70 for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation, respectively. A subanalysis showed that the models discriminated poorly (AUC 0.51-0.56) for nulliparous women. Although we recalibrated the models, two models retained evidence of overfitting. The decision curve analysis showed low clinical benefit for the best performing models. This review revealed several reporting and methodological shortcomings of published prediction models for spontaneous preterm birth. Our external validation study
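    Decision curve analysis, used above to judge clinical value, reduces to a net-benefit formula evaluated at each risk threshold. A minimal sketch with illustrative counts (not the review's data):

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit at a given risk threshold, the quantity plotted in
    decision curve analysis: true positives per patient minus false
    positives weighted by the odds implied by the threshold."""
    return tp / n - (fp / n) * (threshold / (1 - threshold))

# Illustrative numbers: classifying 1000 women at a 10% risk
# threshold, with 40 true positives and 200 false positives.
print(round(net_benefit(40, 200, 1000, 0.10), 4))
```

    A model is clinically useful at a threshold only if its net benefit exceeds that of the default strategies "treat all" and "treat none", which is the comparison behind the review's "low clinical benefit" conclusion.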

  1. Cardiovascular disease prevention at the workplace: assessing the prognostic value of lifestyle risk factors and job-related conditions.

    PubMed

    Veronesi, Giovanni; Borchini, Rossana; Landsbergis, Paul; Iacoviello, Licia; Gianfagna, Francesco; Tayoun, Patrick; Grassi, Guido; Cesana, Giancarlo; Ferrario, Marco Mario

    2018-05-25

    The prognostic utility of lifestyle risk factors and job-related conditions (LS&JRC) for cardiovascular disease (CVD) risk stratification remains to be clarified. We investigated the discrimination and clinical utility of LS&JRC among 2532 workers, 35-64 years old, CVD-free at the time of recruitment (1989-1996) in four prospective cohorts in Northern Italy, and followed up (median 14 years) until first major coronary event or ischemic stroke, fatal or non-fatal. From a Cox model including cigarette smoking, alcohol intake, occupational and sport physical activity and job strain, we estimated 10-year discrimination as the area under the ROC curve (AUC), and clinical utility as the Net Benefit. N = 162 events occurred during follow-up (10-year risk: 4.3%). The LS&JRC model showed the same discrimination (AUC = 0.753, 95% CI 0.700-0.780) as blood lipids, blood pressure, smoking and diabetes (AUC = 0.753), consistently across occupational classes. Among workers at low CVD risk (n = 1832, 91 CVD events), 687 were at increased LS&JRC risk; of these, 1 in every 15 was a case, resulting in a positive Net Benefit (1.27; 95% CI 0.68-2.16). LS&JRC are as accurate as clinical risk factors in identifying future cardiovascular events among working males. Our results support initiatives to improve total health at work as strategies to prevent cardiovascular disease.

  2. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    NASA Astrophysics Data System (ADS)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). The model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. CVaR treats the uncertainties of contamination injection as a probability distribution function and accounts for low-probability extreme events, the losses that occur at the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses from contamination injection (through the CVaR of the affected population and of detection time) and to minimize the two other main criteria of optimal sensor placement, the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi Criteria Decision Making (MCDM) approach, is used to rank the alternatives on the trade-off curve among the objective functions. A sensitivity analysis is also performed to investigate the importance of each criterion on the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in southwestern Iran. PROMETHEE suggests 6 sensors distributed so as to cover approximately all regions of the WDS. For the best solution, the CVaR of the affected population, the CVaR of detection time and the probability of undetected events are 17,055 persons, 31 minutes and 0.045%, respectively. The results for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value
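    The CVaR objective used for sensor placement can be estimated directly from simulated losses: take the alpha-quantile (the VaR) and average the tail at and beyond it. A minimal empirical sketch with toy losses; quantile conventions vary, so treat the indexing as one common choice:

```python
def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR: VaR is the alpha-quantile of losses,
    CVaR the mean of the losses at or beyond it (the tail average
    that the sensor-placement model minimises)."""
    s = sorted(losses)
    k = int(alpha * len(s))      # index of the alpha-quantile
    var = s[k]
    cvar = sum(s[k:]) / len(s[k:])
    return var, cvar

# Toy simulated losses, e.g. affected population in thousands
losses = [1, 2, 2, 3, 3, 4, 4, 5, 8, 20]
print(var_cvar(losses, alpha=0.80))
```

    Because CVaR averages the whole tail rather than reading off a single quantile, it is sensitive to the extreme multi-point injection scenarios that VaR alone would ignore.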

  3. Evaluating the risk of decompression sickness for a yo-yo dive using a rat model.

    PubMed

    Ofir, Dror; Yanir, Yoav; Abramovich, Amir; Bar, Ronen; Arieli, Yehuda

    2016-01-01

    The frequent ascents made during yo-yo diving may contribute to gas bubble clearance but paradoxically may also increase the risk of central nervous system decompression illness (DCI). We evaluated the risk of DCI due to yo-yo dives with very short surface intervals, using a controlled animal model. Dives were conducted on air to a depth of 90 meters (10 atmospheres absolute) for 32 minutes of bottom time, at a descent/ascent rate of 10 meters/minute. Sprague-Dawley rats weighing ~300 grams were divided randomly into three groups. Group A performed a square dive protocol without any surface intervals, Group B conducted a protocol that included two surface intervals during the dive, and Group C performed a protocol with three surface intervals. The ascent/descent rate for the surface intervals, each lasting one minute, was also 10 meters/minute. Manifestations of DCI were observed in 13 of 16 animals in Group A (81.3%), six of 12 in Group B (58.3%), and two of 12 in Group C (16.7%). Mortality rates were similar in all groups. Surface intervals during air dives significantly reduced DCI risk in the rat. Further studies are required using a larger animal model to reinforce the results of the present investigation.

  4. [Evaluation of ecosystem provisioning service and its economic value].

    PubMed

    Wu, Nan; Gao, Ji-Xi; Sudebilige; Ricketts, Taylor H; Olwero, Nasser; Luo, Zun-Lan

    2010-02-01

    Because current approaches to evaluating ecosystem provisioning services lack spatial information and do not take the accessibility of products into account, this paper established an evaluation model to simulate the spatial distribution of an ecosystem provisioning service and its economic value, based on ArcGIS 9.2 and taking the supply and demand factors of ecosystem products into account. The provision of timber in Laojunshan in 2000 was analyzed with the model. In 2000, the total physical quantity of the timber provisioning service in Laojunshan was 11.12 x 10(4) m3 x a(-1), accounting for 3.2% of the total increment of timber stock volume. The total provisioning service value of timber was 6669.27 x 10(4) yuan, of which coniferous forest contributed most (90.41%). Due to the denser distribution of populations and roads in the eastern area of Laojunshan, some parts of the area being located outside of the conservancy district, and forests being in scattered distribution, the physical quantity of the timber provisioning service was higher in the eastern than in the western area.

  5. Sensitivity Analysis of the Bone Fracture Risk Model

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach in which distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example, geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model needed enhancements to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al.), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
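    The probabilistic calculation described above, sampling parameter distributions and counting cases where the fracture risk index exceeds 1, can be sketched as a Monte Carlo loop. The lognormal parameters here are invented for illustration and are not the BFxRM's values:

```python
import random

def fracture_probability(n_trials=100_000, seed=42):
    """Monte Carlo sketch of the fracture-risk-index idea: sample an
    applied load and a bone failure load from assumed distributions
    and count how often FRI = load / strength exceeds 1."""
    rng = random.Random(seed)
    fractures = 0
    for _ in range(n_trials):
        load = rng.lognormvariate(0.0, 0.4)      # applied load (arbitrary units)
        strength = rng.lognormvariate(1.0, 0.3)  # bone failure load
        if load / strength > 1.0:                # fracture risk index > 1
            fractures += 1
    return fractures / n_trials

print(fracture_probability())
```

    A sensitivity study like the one described then repeats this simulation while perturbing each input distribution in turn and notes which perturbations move the output probability most.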

  6. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    PubMed

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  7. A Prospective Multicenter Evaluation of the Value of the On-Call Orthopedic Resident.

    PubMed

    Jackson, J Benjamin; Vincent, Scott; Davies, James; Phelps, Kevin; Cornett, Chris; Grabowski, Greg; Scannell, Brian; Stotts, Alan; Bice, Miranda

    2018-02-01

    Funding for graduate medical education is at risk despite the services provided by residents. We quantified the potential monetary value of services provided by on-call orthopedic surgery residents. We conducted a prospective, cross-sectional, multicenter cohort study. Over a 90-day period in 2014, we collected data on consults by on-call orthopedic surgery residents at 4 tertiary academic medical centers in the United States. All inpatient and emergency department consults evaluated by first-call residents during the study period were eligible for inclusion. Based on their Current Procedural Terminology codes, procedures and evaluations for each consult were assigned a relative value unit and converted into a monetary value to determine the value of services provided by residents. The primary outcome measures were the total dollar value of each consult and the percentage of resident salaries that could be funded by the generated value of the resident consult services. In total, 2644 consults seen by 33 residents from the 4 institutions were included for analysis. These yielded an average value of $81,868 per center for the 90-day study period, that is, $327,471 annually. With a median resident stipend of $53,992, the extrapolated average percentage of resident stipends that could be funded by these consult revenues was 73% of the stipends of the residents who took call, or 36% of the stipends of the overall resident cohort. The potential monetary value generated by on-call orthopedic surgery residents is substantial.

  8. Solving the Value Equation: Assessing Surgeon Performance Using Risk-Adjusted Quality-Cost Diagrams and Surgical Outcomes.

    PubMed

    Knechtle, William S; Perez, Sebastian D; Raval, Mehul V; Sullivan, Patrick S; Duwayri, Yazan M; Fernandez, Felix; Sharma, Joe; Sweeney, John F

    Quality-cost diagrams have been used previously to assess interventions and their cost-effectiveness. This study explores the use of risk-adjusted quality-cost diagrams to compare the value provided by surgeons by presenting cost and outcomes simultaneously. Colectomy cases from a single institution captured in the National Surgical Quality Improvement Program database were linked to hospital cost-accounting data to determine costs per encounter. Risk-adjustment models were developed, and observed average cost and complication rates per surgeon were compared to expected cost and complication rates using the diagrams. Surgeons were surveyed to determine if the diagrams could provide information that would result in practice adjustment. Of 55 surgeons surveyed on the utility of the diagrams, 92% believed the diagrams were useful. The diagrams seemed intuitive to interpret, and making risk-adjusted comparisons accounted for patient differences in the evaluation.

  9. Evaluation of Time- and Concentration-dependent Toxic Effect Models for use in Aquatic Risk Assessments, Oral Presentation

    EPA Science Inventory

    Various models have been proposed for describing the time- and concentration-dependence of toxic effects to aquatic organisms, which would improve characterization of risks in natural systems. Selected models were evaluated using results from a study on the lethality of copper t...

  10. Using a Systematic Conceptual Model for a Process Evaluation of a Middle School Obesity Risk-Reduction Nutrition Curriculum Intervention: Choice, Control & Change

    PubMed Central

    Lee, Heewon; Contento, Isobel R.; Koch, Pamela

    2012-01-01

    Objective To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, Choice, Control & Change, designed to promote dietary and physical activity behaviors that reduce obesity risk. Design A process evaluation study based on a systematic conceptual model. Setting Five middle schools in New York City. Participants 562 students in 20 classes and their science teachers (n=8). Main Outcome Measures Based on the model, teacher professional development, teacher implementation, and student reception were evaluated. Also measured were teacher characteristics, teachers’ curriculum evaluation, and satisfaction with teaching the curriculum. Analysis Descriptive statistics and Spearman’s Rho Correlation for quantitative analysis and content analysis for qualitative data were used. Results Mean score of the teacher professional development evaluation was 4.75 on a 5-point scale. Average teacher implementation rate was 73%, and student reception rate was 69%. Ongoing teacher support was highly valued by teachers. Teachers’ satisfaction with teaching the curriculum was highly correlated with students’ satisfaction (p <.05). Teachers’ perception of amount of student work was negatively correlated with implementation and with student satisfaction (p<.05). Conclusions and implications Use of a systematic conceptual model and comprehensive process measures improves understanding of the implementation process and helps educators to better implement interventions as designed. PMID:23321021

  11. Using a systematic conceptual model for a process evaluation of a middle school obesity risk-reduction nutrition curriculum intervention: choice, control & change.

    PubMed

    Lee, Heewon; Contento, Isobel R; Koch, Pamela

    2013-03-01

    To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, Choice, Control & Change, designed to promote dietary and physical activity behaviors that reduce obesity risk. A process evaluation study based on a systematic conceptual model. Five middle schools in New York City. Five hundred sixty-two students in 20 classes and their science teachers (n = 8). Based on the model, teacher professional development, teacher implementation, and student reception were evaluated. Also measured were teacher characteristics, teachers' curriculum evaluation, and satisfaction with teaching the curriculum. Descriptive statistics and Spearman ρ correlation for quantitative analysis and content analysis for qualitative data were used. Mean score of the teacher professional development evaluation was 4.75 on a 5-point scale. Average teacher implementation rate was 73%, and the student reception rate was 69%. Ongoing teacher support was highly valued by teachers. Teacher satisfaction with teaching the curriculum was highly correlated with student satisfaction (P < .05). Teacher perception of amount of student work was negatively correlated with implementation and with student satisfaction (P < .05). Use of a systematic conceptual model and comprehensive process measures improves understanding of the implementation process and helps educators to better implement interventions as designed. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
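    Both versions of this record report Spearman rank correlations between teacher and student measures. A minimal tie-free implementation on toy scores (not the study's data):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the
    ranks. This minimal sketch assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy data: teacher satisfaction vs student satisfaction scores
print(spearman_rho([4.5, 3.0, 5.0, 2.5], [4.0, 3.5, 4.8, 2.0]))
```

    Because it works on ranks, Spearman's rho is a natural choice for the ordinal satisfaction scales used in this process evaluation.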

  12. Cost-effectiveness and value of information analysis of nutritional support for preventing pressure ulcers in high-risk patients: implement now, research later.

    PubMed

    Tuffaha, Haitham W; Roberts, Shelley; Chaboyer, Wendy; Gordon, Louisa G; Scuffham, Paul A

    2015-04-01

    Pressure ulcers are a major cause of mortality, morbidity, and increased healthcare costs. Nutritional support may reduce the incidence of pressure ulcers in hospitalised patients who are at risk of pressure ulcers and malnutrition. To evaluate the cost-effectiveness of nutritional support in preventing pressure ulcers in high-risk hospitalised patients, and to assess the value of further research to inform the decision to implement this intervention using value of information analysis (VOI). The analysis was from the perspective of Queensland Health, Australia, using a decision model with evidence derived from a systematic review and meta-analysis. Resources were valued using 2014 prices and the time horizon of the analysis was one year. Monte Carlo simulation was used to estimate net monetary benefits (NB) and to calculate VOI measures. Compared with standard hospital diet, nutritional support was cost saving at AU$425 per patient, and more effective, with an average 0.005 quality-adjusted life years (QALY) gained. At a willingness-to-pay of AU$50,000 per QALY, the incremental NB was AU$675 per patient, with a probability of 87% that nutritional support is cost-effective. The expected value of perfect information was AU$5 million and the expected value of perfect parameter information was highest for the relative risk of developing a pressure ulcer, at AU$2.5 million. For a future trial investigating the relative effectiveness of the interventions, the expected net benefit of research would be maximised at AU$100,000 with 1,200 patients in each arm if nutritional support was perfectly implemented. The opportunity cost of withholding the decision to implement the intervention until the results of the future study are available would be AU$14 million. Nutritional support is cost-effective in preventing pressure ulcers in high-risk hospitalised patients compared with standard diet. Future research to reduce decision uncertainty is worthwhile; however, given the
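    The value-of-information quantities above follow one generic recipe: EVPI is the expected net benefit under perfect knowledge minus the best expected net benefit without it. A hedged Monte Carlo sketch; the distributions are invented, and only the AU$675 incremental net benefit is taken from the abstract:

```python
import random

def evpi(nb_samples):
    """Expected value of perfect information from simulated net
    benefits: E[max over actions] minus max over actions of E[NB]."""
    n = len(nb_samples)
    e_max = sum(max(row) for row in nb_samples) / n
    max_e = max(sum(col) / n for col in zip(*nb_samples))
    return e_max - max_e

# Each row: one Monte Carlo draw of net benefit per patient for
# (standard diet, nutritional support); spreads are assumptions.
rng = random.Random(0)
draws = [(rng.gauss(0, 300), rng.gauss(675, 300)) for _ in range(10_000)]
print(round(evpi(draws)))  # small when one action dominates most draws
```

    EVPI is small whenever one strategy wins in nearly every draw, which is the logic behind the paper's "implement now, research later" conclusion.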

  13. Non-use Economic Values for Little-Known Aquatic Species at Risk: Comparing Choice Experiment Results from Surveys Focused on Species, Guilds, and Ecosystems

    NASA Astrophysics Data System (ADS)

    Rudd, Murray A.; Andres, Sheri; Kilfoil, Mary

    2016-09-01

    Accounting for non-market economic values of biological diversity is important to fully assess the benefits of environmental policies and regulations. This study used three choice experiments (species-, guild-, and ecosystem-based surveys) in parallel to quantify non-use values for little-known aquatic species at risk in southern Ontario. Mean willingness-to-pay (WTP) ranged from $9.45 to $21.41 per listing status increment under Canada's Species at Risk Act for both named and unnamed little-known species. Given the broad range of valuable ecosystem services likely to accrue to residents from substantial increases in water quality and the rehabilitation of coastal wetlands, the difference in WTP between species- and ecosystem-based surveys seemed implausibly small. It appeared that naming species (the 'iconization' of species in two of the three surveys) had an important effect on WTP. The results suggest that reasonable annual household-level WTP values for little-known aquatic species may be $10 to $25 per species or $10 to $20 per listing status increment. The results highlighted the utility of using parallel surveys to triangulate on non-use economic values for little-known species at risk.

  14. Non-use Economic Values for Little-Known Aquatic Species at Risk: Comparing Choice Experiment Results from Surveys Focused on Species, Guilds, and Ecosystems.

    PubMed

    Rudd, Murray A; Andres, Sheri; Kilfoil, Mary

    2016-09-01

    Accounting for non-market economic values of biological diversity is important to fully assess the benefits of environmental policies and regulations. This study used three choice experiments (species-, guild-, and ecosystem-based surveys) in parallel to quantify non-use values for little-known aquatic species at risk in southern Ontario. Mean willingness-to-pay (WTP) ranged from $9.45 to $21.41 per listing status increment under Canada's Species at Risk Act for both named and unnamed little-known species. Given the broad range of valuable ecosystem services likely to accrue to residents from substantial increases in water quality and the rehabilitation of coastal wetlands, the difference in WTP between species- and ecosystem-based surveys seemed implausibly small. It appeared that naming species-the 'iconization' of species in two of the three surveys-had an important effect on WTP. The results suggest that reasonable annual household-level WTP values for little-known aquatic species may be $10 to $25 per species or $10 to $20 per listing status increment. The results highlighted the utility of using parallel surveys to triangulate on non-use economic values for little-known species at risk.
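    In choice experiments like these, WTP estimates are typically derived as the ratio of an attribute coefficient to the cost coefficient from a conditional logit model; the coefficients below are hypothetical, chosen only to illustrate the arithmetic:

```python
def wtp(beta_attribute, beta_cost):
    """Marginal willingness-to-pay implied by choice-model
    coefficients: the rate at which respondents trade money for a
    one-unit attribute improvement."""
    return -beta_attribute / beta_cost

# Hypothetical coefficients: positive taste for an improved listing
# status, negative price coefficient, implying roughly $15 per increment
print(wtp(0.30, -0.02))
```

    The negation makes WTP positive when the attribute is desirable and cost enters with a negative sign, matching the $9.45 to $21.41 range reported per listing status increment.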

  15. Unlocking Public Value: An Evaluation of the Impact of Missouri's Great Northwest Day at the Capitol Program

    ERIC Educational Resources Information Center

    Majee, Wilson; Maltsberger, Beverly A.

    2013-01-01

    The study reported here is an evaluation of the public value of a regional public policy engagement program. Data were obtained through surveys and document analysis. The study observed peer-learning and networking opportunities as some of the most impactful elements of GNWD at the Capitol in creating public value. Building coalitions of interest…

  16. Mining geriatric assessment data for in-patient fall prediction models and high-risk subgroups

    PubMed Central

    2012-01-01

    Background Hospital in-patient falls constitute a prominent problem in terms of costs and consequences. Geriatric institutions are most often affected, and common screening tools cannot predict in-patient falls consistently. Our objectives are to derive comprehensible fall risk classification models from a large data set of geriatric in-patients' assessment data and to evaluate their predictive performance (aim#1), and to identify high-risk subgroups from the data (aim#2). Methods A data set of n = 5,176 single in-patient episodes covering 1.5 years of admissions to a geriatric hospital was extracted from the hospital's database and matched with fall incident reports (n = 493). A classification tree model was induced using the C4.5 algorithm as well as a logistic regression model, and their predictive performance was evaluated. Furthermore, high-risk subgroups were identified from extracted classification rules with a support of more than 100 instances. Results The classification tree model showed an overall classification accuracy of 66%, with a sensitivity of 55.4%, a specificity of 67.1%, and positive and negative predictive values of 15% and 93.5%, respectively. Five high-risk groups were identified, defined by high age, low Barthel index, cognitive impairment, multi-medication and co-morbidity. Conclusions Our results show that a little more than half of the fallers may be identified correctly by our model, but the positive predictive value is too low to be applicable. Non-fallers, on the other hand, may be sorted out with the model quite well. The high-risk subgroups and the risk factors identified (age, low ADL score, cognitive impairment, institutionalization, polypharmacy and co-morbidity) reflect domain knowledge and may be used to screen certain subgroups of patients with a high risk of falling. Classification models derived from a large data set using data mining methods can compete with current dedicated fall risk screening tools, yet lack diagnostic precision. High-risk subgroups may be identified
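    The reported figures are internally consistent; a quick sketch (reconstructing the confusion matrix implied by the abstract's n, fall count, sensitivity, and specificity) recovers the stated accuracy and predictive values:

```python
# Reconstructing the confusion matrix implied by the abstract's figures
# (n = 5,176 episodes, 493 falls, sensitivity 55.4%, specificity 67.1%)
# and recomputing the derived metrics it reports.
n, fallers = 5176, 493
non_fallers = n - fallers
tp = round(0.554 * fallers)        # fallers correctly flagged by the tree
tn = round(0.671 * non_fallers)    # non-fallers correctly cleared
fn = fallers - tp
fp = non_fallers - tn
accuracy = (tp + tn) / n
ppv = tp / (tp + fp)               # positive predictive value
npv = tn / (tn + fn)               # negative predictive value
print(f"accuracy={accuracy:.1%}  PPV={ppv:.1%}  NPV={npv:.1%}")
```

    The low PPV next to a high NPV is exactly the authors' conclusion: the model clears non-fallers well but flags too many false positives to be clinically actionable.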

  17. Mining geriatric assessment data for in-patient fall prediction models and high-risk subgroups.

    PubMed

    Marschollek, Michael; Gövercin, Mehmet; Rust, Stefan; Gietzelt, Matthias; Schulze, Mareike; Wolf, Klaus-Hendrik; Steinhagen-Thiessen, Elisabeth

    2012-03-14

    Hospital in-patient falls constitute a prominent problem in terms of costs and consequences. Geriatric institutions are most often affected, and common screening tools cannot predict in-patient falls consistently. Our objectives are to derive comprehensible fall risk classification models from a large data set of geriatric in-patients' assessment data and to evaluate their predictive performance (aim#1), and to identify high-risk subgroups from the data (aim#2). A data set of n = 5,176 single in-patient episodes covering 1.5 years of admissions to a geriatric hospital was extracted from the hospital's database and matched with fall incident reports (n = 493). A classification tree model was induced using the C4.5 algorithm as well as a logistic regression model, and their predictive performance was evaluated. Furthermore, high-risk subgroups were identified from extracted classification rules with a support of more than 100 instances. The classification tree model showed an overall classification accuracy of 66%, with a sensitivity of 55.4%, a specificity of 67.1%, and positive and negative predictive values of 15% and 93.5%, respectively. Five high-risk groups were identified, defined by high age, low Barthel index, cognitive impairment, multi-medication and co-morbidity. Our results show that a little more than half of the fallers may be identified correctly by our model, but the positive predictive value is too low to be applicable. Non-fallers, on the other hand, may be sorted out with the model quite well. The high-risk subgroups and the risk factors identified (age, low ADL score, cognitive impairment, institutionalization, polypharmacy and co-morbidity) reflect domain knowledge and may be used to screen certain subgroups of patients with a high risk of falling. Classification models derived from a large data set using data mining methods can compete with current dedicated fall risk screening tools, yet lack diagnostic precision. High-risk subgroups may be identified

  18. EVALUATING RISK IN OLDER ADULTS USING PHYSIOLOGICALLY BASED PHARMACOKINETIC MODELS

    EPA Science Inventory

    The rapid growth in the number of older Americans has many implications for public health, including the need to better understand the risks posed by environmental exposures to older adults. An important element for evaluating risk is the understanding of the doses of environment...

  19. A quantitative risk assessment model to evaluate effective border control measures for rabies prevention.

    PubMed

    Weng, Hsin-Yi; Wu, Pei-I; Yang, Ping-Cheng; Tsai, Yi-Lun; Chang, Chao-Chin

    2010-01-01

    Border control is the primary method to prevent rabies emergence. This study developed a quantitative risk model incorporating stochastic processes to evaluate whether border control measures could efficiently prevent rabies introduction through importation of cats and dogs, using Taiwan as an example. Both legal importation and illegal smuggling were investigated. The impacts of reduced quarantine and/or waiting period on the risk of rabies introduction were also evaluated. The results showed that Taiwan's current animal importation policy could effectively prevent rabies introduction through legal importation of cats and dogs. The median risk of a rabid animal penetrating current border control measures and entering Taiwan was 5.33 × 10⁻⁸ (95th percentile: 3.20 × 10⁻⁷). However, illegal smuggling may expose Taiwan to a high risk of rabies emergence. Reduction of quarantine and/or waiting period would affect the risk differently, depending on the applied assumptions, such as increased vaccination coverage, enforced customs checking, and/or change in number of legal importations. Although the changes in the estimated risk under the assumed alternatives were not substantial except for completely abolishing quarantine, the consequences of rabies introduction may yet be considered to be significant in a rabies-free area. Therefore, a comprehensive benefit-cost analysis needs to be conducted before recommending these alternative measures.
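    The abstract describes the shape of the computation (stochastic inputs propagated to a per-animal penetration risk, summarized by a median and 95th percentile) but not the model's actual inputs. The sketch below mimics that shape only; every distribution and parameter value in it is an invented placeholder, not a figure from the study:

```python
import random
import statistics

# Hypothetical-parameter sketch of a stochastic border-control risk model:
# each iteration draws uncertain inputs and propagates them to the
# probability that a rabid animal penetrates quarantine undetected.
random.seed(1)

def one_iteration():
    prevalence = random.betavariate(2, 9998)           # rabies prevalence in source population (assumed)
    p_vaccinated = random.uniform(0.90, 0.99)          # vaccination coverage among imports (assumed)
    p_detected = random.uniform(0.95, 0.999)           # quarantine detects an incubating case (assumed)
    p_infected = prevalence * (1 - p_vaccinated)
    return p_infected * (1 - p_detected)               # rabid animal slips through

risks = sorted(one_iteration() for _ in range(10_000))
median = statistics.median(risks)
p95 = risks[int(0.95 * len(risks))]
print(f"median risk ~ {median:.2e}, 95th percentile ~ {p95:.2e}")
```

    Reporting a median and an upper percentile rather than a point estimate is what lets the study separate the typical case from the tail risk that drives policy.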

  20. A quantitative risk assessment model to evaluate effective border control measures for rabies prevention

    PubMed Central

    Weng, Hsin-Yi; Wu, Pei-I; Yang, Ping-Cheng; Tsai, Yi-Lun; Chang, Chao-Chin

    2009-01-01

    Border control is the primary method to prevent rabies emergence. This study developed a quantitative risk model incorporating stochastic processes to evaluate whether border control measures could efficiently prevent rabies introduction through importation of cats and dogs, using Taiwan as an example. Both legal importation and illegal smuggling were investigated. The impacts of reduced quarantine and/or waiting period on the risk of rabies introduction were also evaluated. The results showed that Taiwan’s current animal importation policy could effectively prevent rabies introduction through legal importation of cats and dogs. The median risk of a rabid animal penetrating current border control measures and entering Taiwan was 5.33 × 10−8 (95th percentile: 3.20 × 10−7). However, illegal smuggling may expose Taiwan to a high risk of rabies emergence. Reduction of quarantine and/or waiting period would affect the risk differently, depending on the applied assumptions, such as increased vaccination coverage, enforced customs checking, and/or change in number of legal importations. Although the changes in the estimated risk under the assumed alternatives were not substantial except for completely abolishing quarantine, the consequences of rabies introduction may yet be considered to be significant in a rabies-free area. Therefore, a comprehensive benefit-cost analysis needs to be conducted before recommending these alternative measures. PMID:19822125

  1. The Application of a Residual Risk Evaluation Technique Used for Expendable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Latimer, John A.

    2009-01-01

    This presentation provides a Residual Risk Evaluation Technique (RRET) developed by the Kennedy Space Center (KSC) Safety and Mission Assurance (S&MA) Launch Services Division. This technique is one of many procedures used by S&MA at KSC to evaluate residual risks for each Expendable Launch Vehicle (ELV) mission. RRET is a straightforward technique that incorporates the proven methodology of risk management, fault tree analysis, and reliability prediction. RRET derives a system reliability impact indicator from the system baseline reliability and the system residual risk reliability values. The system reliability impact indicator provides a quantitative measure of the reduction in the system baseline reliability due to the identified residual risks associated with the designated ELV mission. An example is discussed to provide insight into the application of RRET.
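    The abstract names the indicator's inputs (baseline reliability and residual-risk reliability) but not its formula, so the snippet below is one hypothetical reading, not KSC's actual RRET definition:

```python
# Hypothetical sketch only: one plausible reading of "a quantitative
# measure of the reduction in the system baseline reliability due to the
# identified residual risks". RRET's real formula is not given in the
# abstract and may differ.
def reliability_impact(r_baseline: float, r_residual: float) -> float:
    """Fractional reduction of baseline reliability attributed to residual risks."""
    return (r_baseline - r_residual) / r_baseline

# e.g. baseline reliability 0.98 degraded to 0.95 by residual risks
print(f"{reliability_impact(0.98, 0.95):.2%}")
```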

  2. Street Gangs: A Modeling Approach to Evaluating At Risk Youth

    DTIC Science & Technology

    2008-03-01

    Ishikawa diagrams, originated by Kaoru Ishikawa, stem from the area of quality control but have been used in many other areas (Ryan, 2000; Herrmann, 2001). Only snippet and front-matter fragments are available for this record; the recoverable figure titles are "Example Ishikawa 'Fishbone' Diagram" and "Example Value Hierarchy".

  3. Risk model for estimating the 1-year risk of deferred lesion intervention following deferred revascularization after fractional flow reserve assessment.

    PubMed

    Depta, Jeremiah P; Patel, Jayendrakumar S; Novak, Eric; Gage, Brian F; Masrani, Shriti K; Raymer, David; Facey, Gabrielle; Patel, Yogesh; Zajarias, Alan; Lasala, John M; Amin, Amit P; Kurz, Howard I; Singh, Jasvindar; Bach, Richard G

    2015-02-21

    Although lesions for which revascularization was deferred following fractional flow reserve (FFR) assessment have a low risk of adverse cardiac events, variability in risk for deferred lesion intervention (DLI) has not been previously evaluated. The aim of this study was to develop a prediction model to estimate 1-year risk of DLI for coronary lesions where revascularization was not performed following FFR assessment. A prediction model for DLI was developed from a cohort of 721 patients with 882 coronary lesions where revascularization was deferred based on FFR between 10/2002 and 7/2010. Deferred lesion intervention was defined as any revascularization of a lesion previously deferred following FFR. The final DLI model was developed using stepwise Cox regression and validated using bootstrapping techniques. An algorithm was constructed to predict the 1-year risk of DLI. During a mean (±SD) follow-up period of 4.0 ± 2.3 years, 18% of lesions deferred after FFR underwent DLI; the 1-year incidence of DLI was 5.3%, while the predicted risk of DLI varied from 1 to 40%. The final Cox model included the FFR value, age, current or former smoking, history of coronary artery disease (CAD) or prior percutaneous coronary intervention, multi-vessel CAD, and serum creatinine. The c statistic for the DLI prediction model was 0.66 (95% confidence interval, CI: 0.61-0.70). Patients for whom revascularization was deferred based on FFR have variation in their risk for DLI. A clinical prediction model consisting of five clinical variables and the FFR value can help predict the risk of DLI in the first year following FFR assessment. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
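    A Cox-based risk algorithm of the kind described here predicts 1-year risk as 1 − S₀(1)^exp(linear predictor). The abstract gives the variable list but not the coefficients, so the baseline survival and every coefficient below are invented for illustration only:

```python
import math

# Hypothetical sketch of a Cox-model 1-year risk algorithm. The six
# predictors match the abstract's list; S0_1Y and all BETA values are
# invented, not the published model.
S0_1Y = 0.97  # assumed baseline 1-year DLI-free survival
BETA = {
    "ffr_per_0.05": -0.30,        # higher FFR -> lower risk (assumed sign)
    "age_per_10y": -0.15,
    "smoker": 0.40,
    "cad_or_prior_pci": 0.50,
    "multivessel_cad": 0.35,
    "creatinine_per_mgdl": 0.25,
}

def dli_risk_1y(ffr, age, smoker, cad_or_pci, multivessel, creatinine):
    lp = (BETA["ffr_per_0.05"] * (ffr - 0.85) / 0.05
          + BETA["age_per_10y"] * (age - 65) / 10
          + BETA["smoker"] * smoker
          + BETA["cad_or_prior_pci"] * cad_or_pci
          + BETA["multivessel_cad"] * multivessel
          + BETA["creatinine_per_mgdl"] * (creatinine - 1.0))
    return 1 - S0_1Y ** math.exp(lp)   # risk = 1 - baseline survival^exp(lp)

print(f"{dli_risk_1y(0.82, 70, 1, 1, 1, 1.2):.1%}")
```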

  4. Causal modelling applied to the risk assessment of a wastewater discharge.

    PubMed

    Paul, Warren L; Rokahr, Pat A; Webb, Jeff M; Rees, Gavin N; Clune, Tim S

    2016-03-01

    Bayesian networks (BNs), or causal Bayesian networks, have become quite popular in ecological risk assessment and natural resource management because of their utility as a communication and decision-support tool. Since their development in the field of artificial intelligence in the 1980s, however, Bayesian networks have evolved and merged with structural equation modelling (SEM). Unlike BNs, which are constrained to encode causal knowledge in conditional probability tables, SEMs encode this knowledge in structural equations, which is thought to be a more natural language for expressing causal information. This merger has clarified the causal content of SEMs and generalised the method such that it can now be performed using standard statistical techniques. As it was with BNs, the utility of this new generation of SEM in ecological risk assessment will need to be demonstrated with examples to foster an understanding and acceptance of the method. Here, we applied SEM to the risk assessment of a wastewater discharge to a stream, with a particular focus on the process of translating a causal diagram (conceptual model) into a statistical model which might then be used in the decision-making and evaluation stages of the risk assessment. The process of building and testing a spatial causal model is demonstrated using data from a spatial sampling design, and the implications of the resulting model are discussed in terms of the risk assessment. It is argued that a spatiotemporal causal model would have greater external validity than the spatial model, enabling broader generalisations to be made regarding the impact of a discharge, and greater value as a tool for evaluating the effects of potential treatment plant upgrades. Suggestions are made on how the causal model could be augmented to include temporal as well as spatial information, including suggestions for appropriate statistical models and analyses.

  5. Monitoring risk-adjusted outcomes in congenital heart surgery: does the appropriateness of a risk model change with time?

    PubMed

    Tsang, Victor T; Brown, Katherine L; Synnergren, Mats Johanssen; Kang, Nicholas; de Leval, Marc R; Gallivan, Steve; Utley, Martin

    2009-02-01

    Risk adjustment of outcomes in pediatric congenital heart surgery is challenging due to the great diversity in diagnoses and procedures. We have previously shown that variable life-adjusted display (VLAD) charts provide an effective graphic display of risk-adjusted outcomes in this specialty. A question arises as to whether the risk model used remains appropriate over time. We used a recently developed graphic technique to evaluate the performance of an existing risk model among those patients at a single center during 2000 to 2003 originally used in model development. We then compared the distribution of predicted risk among these patients with that among patients in 2004 to 2006. Finally, we constructed a VLAD chart of risk-adjusted outcomes for the latter period. Among 1083 patients between April 2000 and March 2003, the risk model performed well at predicted risks above 3%, underestimated mortality at 2% to 3% predicted risk, and overestimated mortality below 2% predicted risk. There was little difference in the distribution of predicted risk among these patients and among 903 patients between June 2004 and October 2006. Outcomes for the more recent period were appreciably better than those expected according to the risk model. This finding cannot be explained by any apparent bias in the risk model combined with changes in case-mix. Risk models can, and hopefully do, become out of date. There is scope for complacency in the risk-adjusted audit if the risk model used is not regularly recalibrated to reflect changing standards and expectations.
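    A VLAD chart is the running sum of predicted risk minus observed outcome, so the curve drifts upward when outcomes beat risk-adjusted expectation. A minimal sketch with invented per-case values:

```python
# Variable life-adjusted display (VLAD): cumulative (predicted - observed)
# mortality. Per-case predicted risks and outcomes below are invented.
predicted = [0.02, 0.10, 0.01, 0.35, 0.05]   # model-predicted mortality per case
died =      [0,    0,    0,    1,    0]      # observed outcome per case

vlad, total = [], 0.0
for p, d in zip(predicted, died):
    total += p - d          # a survivor adds p; a death subtracts (1 - p)
    vlad.append(total)
print(vlad)                 # net "lives saved" vs. expectation after each case
```

    Recalibration matters for exactly the reason the abstract gives: if real performance has improved, a stale model makes the VLAD curve climb even for merely average care.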

  6. Validation of Risk Assessment Models of Venous Thromboembolism in Hospitalized Medical Patients.

    PubMed

    Greene, M Todd; Spyropoulos, Alex C; Chopra, Vineet; Grant, Paul J; Kaatz, Scott; Bernstein, Steven J; Flanders, Scott A

    2016-09-01

    Patients hospitalized for acute medical illness are at increased risk for venous thromboembolism. Although risk assessment is recommended and several at-admission risk assessment models have been developed, these have not been adequately derived or externally validated. Therefore, an optimal approach to evaluate venous thromboembolism risk in medical patients is not known. We conducted an external validation study of existing venous thromboembolism risk assessment models using data collected on 63,548 hospitalized medical patients as part of the Michigan Hospital Medicine Safety (HMS) Consortium. For each patient, cumulative venous thromboembolism risk scores and risk categories were calculated. Cox regression models were used to quantify the association between venous thromboembolism events and assigned risk categories. Model discrimination was assessed using Harrell's C-index. Venous thromboembolism incidence in hospitalized medical patients is low (1%). Although existing risk assessment models demonstrate good calibration (hazard ratios for "at-risk" range 2.97-3.59), model discrimination is generally poor for all risk assessment models (C-index range 0.58-0.64). The performance of several existing risk assessment models for predicting venous thromboembolism among acutely ill, hospitalized medical patients at admission is limited. Given the low venous thromboembolism incidence in this nonsurgical patient population, careful consideration of how best to utilize existing venous thromboembolism risk assessment models is necessary, and further development and validation of novel venous thromboembolism risk assessment models for this patient population may be warranted. Published by Elsevier Inc.

  7. Interpreting incremental value of markers added to risk prediction models.

    PubMed

    Pencina, Michael J; D'Agostino, Ralph B; Pencina, Karol M; Janssens, A Cecile J W; Greenland, Philip

    2012-09-15

    The discrimination of a risk prediction model measures that model's ability to distinguish between subjects with and without events. The area under the receiver operating characteristic curve (AUC) is a popular measure of discrimination. However, the AUC has recently been criticized for its insensitivity in model comparisons in which the baseline model has performed well. Thus, 2 other measures have been proposed to capture improvement in discrimination for nested models: the integrated discrimination improvement and the continuous net reclassification improvement. In the present study, the authors use mathematical relations and numerical simulations to quantify the improvement in discrimination offered by candidate markers of different strengths as measured by their effect sizes. They demonstrate that the increase in the AUC depends on the strength of the baseline model, which is true to a lesser degree for the integrated discrimination improvement. On the other hand, the continuous net reclassification improvement depends only on the effect size of the candidate variable and its correlation with other predictors. These measures are illustrated using the Framingham model for incident atrial fibrillation. The authors conclude that the increase in the AUC, integrated discrimination improvement, and net reclassification improvement offer complementary information and thus recommend reporting all 3 alongside measures characterizing the performance of the final model.
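    The three measures compared here can be sketched on simulated data. Everything below (the models, the marker, all effect sizes) is invented for illustration; it only demonstrates the definitions of ΔAUC, IDI, and continuous NRI for a baseline model versus the same model plus one candidate marker:

```python
import numpy as np

# Simulated illustration of delta-AUC, IDI, and continuous NRI for nested
# risk models: baseline uses x1 only, the full model adds marker x2.
rng = np.random.default_rng(0)
n = 4000
x1 = rng.normal(size=n)                       # baseline predictor
x2 = rng.normal(size=n)                       # candidate marker

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

y = rng.random(n) < sigmoid(-1.0 + 0.8 * x1 + 0.6 * x2)   # boolean events
p_base = sigmoid(-1.0 + 0.8 * x1)                          # model without marker
p_full = sigmoid(-1.0 + 0.8 * x1 + 0.6 * x2)               # model with marker

def auc(y, p):
    # Mann-Whitney form: P(score of a random event > score of a random non-event)
    pos, neg = p[y], p[~y]
    return (pos[:, None] > neg[None, :]).mean()

delta_auc = auc(y, p_full) - auc(y, p_base)
# IDI: gain in mean risk separation between events and non-events
idi = (p_full[y].mean() - p_base[y].mean()) - (p_full[~y].mean() - p_base[~y].mean())
# Continuous NRI: net proportion of cases whose risk moved the right way
up = p_full > p_base
cnri = (2 * up[y].mean() - 1) + (1 - 2 * up[~y].mean())
print(f"dAUC={delta_auc:.3f}  IDI={idi:.3f}  cNRI={cnri:.3f}")
```

    Running variants of this simulation with a stronger baseline model (say, coefficient 2.0 on x1) reproduces the abstract's point: ΔAUC shrinks while the continuous NRI stays roughly constant.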

  8. Joint and separate evaluation of risk reduction: impact on sensitivity to risk reduction magnitude in the context of 4 different risk information formats.

    PubMed

    Gyrd-Hansen, Dorte; Halvorsen, Peder; Nexøe, Jørgen; Nielsen, Jesper; Støvring, Henrik; Kristiansen, Ivar

    2011-01-01

    When people make choices, they may have multiple options presented simultaneously or, alternatively, have options presented 1 at a time. It has been shown that if decision makers have little experience with or difficulties in understanding certain attributes, these attributes will have greater impact in joint evaluations than in separate evaluations. The authors investigated the impact of separate versus joint evaluations in a health care context in which laypeople were presented with the possibility of participating in risk-reducing drug therapies. In a randomized study comprising 895 subjects aged 40 to 59 y in Odense, Denmark, subjects were randomized to receive information in terms of absolute risk reduction (ARR), relative risk reduction (RRR), number needed to treat (NNT), or prolongation of life (POL), all with respect to heart attack, and they were asked whether they would be willing to receive a specified treatment. Respondents were randomly allocated to valuing the interventions separately (either great effect or small effect) or jointly (small effect and large effect). Joint evaluation reduced the propensity to accept the intervention that offered the smallest effect. Respondents were more sensitive to scale when faced with a joint evaluation for information formats ARR, RRR, and POL but not for NNT. Evaluability bias appeared to be most pronounced for POL and ARR. Risk information appears to be prone to evaluability bias. This suggests that numeric information on health gains is difficult to evaluate in isolation. Consequently, such information may bear too little weight in separate evaluations of risk-reducing interventions.
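    Three of the four formats are arithmetic re-expressions of the same treatment effect (POL additionally requires a survival model and is omitted here). With invented baseline and treated risks:

```python
# ARR, RRR, and NNT computed from the same hypothetical effect:
# 10-year heart-attack risk 4% untreated vs. 3% treated (invented numbers).
baseline, treated = 0.04, 0.03
arr = baseline - treated        # absolute risk reduction: 1 percentage point
rrr = arr / baseline            # relative risk reduction: 25%
nnt = 1 / arr                   # number needed to treat: 100
print(f"ARR={arr:.1%}  RRR={rrr:.0%}  NNT={nnt:.0f}")
```

    The same effect reads very differently across formats ("25% reduction" vs. "treat 100 people to help one"), which is why evaluability and format matter for lay acceptance.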

  9. Dental Environmental Noise Evaluation and Health Risk Model Construction to Dental Professionals.

    PubMed

    Ma, Kuen Wai; Wong, Hai Ming; Mak, Cheuk Ming

    2017-09-19

    Occupational noise is unavoidably produced from dental equipment, building facilities, and human voices in the dental environment. The purpose of this study was to investigate the effect of occupational noise exposure on dental professionals' health condition. A psychoacoustic noise exposure assessment followed by a health risk assessment was carried out at the paediatric dentistry clinic and the dental laboratory in the Prince Philip Dental Hospital of Hong Kong. The A-weighted equivalent sound level, total loudness, and sharpness values were statistically significantly higher for the noise at the laboratory than that at the clinic. The degree of perceived influences and sharpness of noise were found to have impacts on the dental professionals' working performance and health. Moreover, the risk of a poor hearing state would be 26% and 31% higher per unit increment of the short-term and long-term impact scores, respectively. The dental professionals with a service length of more than 10 years and daily working hours of more than eight showed the highest risk to their hearing state. The worse the hearing state, the worse the health state found for the dental professionals. Also, the risk of dissatisfaction would be increased by 4.41 times for those who worked at the laboratory and by 1.22 times per unit increment of the long-term impact score. The constructed health risk model, with its scientific and statistical evidence, is hence important for the future noise management of environmental improvement.

  10. Combined Endoscopic/Sonographic-Based Risk Matrix Model for Predicting One-Year Risk of Surgery: A Prospective Observational Study of a Tertiary Center Severe/Refractory Crohn's Disease Cohort.

    PubMed

    Rispo, Antonio; Imperatore, Nicola; Testa, Anna; Bucci, Luigi; Luglio, Gaetano; De Palma, Giovanni Domenico; Rea, Matilde; Nardone, Olga Maria; Caporaso, Nicola; Castiglione, Fabiana

    2018-03-08

    In the management of Crohn's Disease (CD) patients, having a simple score combining clinical, endoscopic and imaging features to predict the risk of surgery could help to tailor treatment more effectively. The aims were to prospectively evaluate the one-year risk factors for surgery in refractory/severe CD and to generate a risk matrix for predicting the probability of surgery at one year. CD patients needing a disease re-assessment at our tertiary IBD centre underwent clinical, laboratory, endoscopy and bowel sonography (BS) examinations within one week. The optimal cut-off values in predicting surgery were identified using ROC curves for Simple Endoscopic Score for CD (SES-CD), bowel wall thickness (BWT) at BS, and small bowel CD extension at BS. Binary logistic regression and Cox's regression were then carried out. Finally, the probabilities of surgery were calculated for selected baseline levels of covariates and results were arranged in a prediction matrix. Of 100 CD patients, 30 underwent surgery within one year. SES-CD ≥ 9 (OR 15.3; p<0.001), BWT ≥ 7 mm (OR 15.8; p<0.001), small bowel CD extension at BS ≥ 33 cm (OR 8.23; p<0.001) and stricturing/penetrating behavior (OR 4.3; p<0.001) were the only independent factors predictive of surgery at one year based on binary logistic and Cox's regressions. Our matrix model combined these risk factors and the probability of surgery ranged from 0.48% to 87.5% (sixteen combinations). Our risk matrix combining clinical, endoscopic and ultrasonographic findings can accurately predict the one-year risk of surgery in patients with severe/refractory CD requiring a disease re-evaluation. This tool could be of value in clinical practice, serving as the basis for a tailored management of CD patients.
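    A sixteen-cell matrix of this kind can be laid out from a baseline probability and per-factor odds ratios. The sketch below uses the abstract's four ORs and takes its lowest cell (0.48%) as the no-factor baseline, but it combines the ORs purely multiplicatively, a simplification that will not exactly reproduce the published matrix:

```python
from itertools import product

# 2x2x2x2 surgery-probability matrix from a baseline probability and the
# four odds ratios reported in the abstract. Multiplicative combination
# and the 0.48% baseline cell are simplifying assumptions.
ORS = {"SES-CD>=9": 15.3, "BWT>=7mm": 15.8,
       "extension>=33cm": 8.23, "stricturing/penetrating": 4.3}
P_BASELINE = 0.0048                      # probability with no risk factor present

base_odds = P_BASELINE / (1 - P_BASELINE)
matrix = {}
for combo in product([0, 1], repeat=4):  # each factor absent (0) or present (1)
    odds = base_odds
    for present, or_value in zip(combo, ORS.values()):
        if present:
            odds *= or_value
    matrix[combo] = odds / (1 + odds)    # back from odds to probability

for combo, p in sorted(matrix.items(), key=lambda kv: kv[1]):
    print(combo, f"{p:.1%}")
```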

  11. Can we improve clinical prediction of at-risk older drivers?

    PubMed Central

    Bowers, Alex R.; Anastasio, R. Julius; Sheldon, Sarah S.; O’Connor, Margaret G.; Hollis, Ann M.; Howe, Piers D.; Horowitz, Todd S.

    2013-01-01

    Objectives To conduct a pilot study to evaluate the predictive value of the Montreal Cognitive Assessment test (MoCA) and a brief test of multiple object tracking (MOT) relative to other tests of cognition and attention in identifying at-risk older drivers, and to determine which combination of tests provided the best overall prediction. Methods Forty-seven currently licensed drivers (58 to 95 years), primarily from a clinical driving evaluation program, participated. Their performance was measured on: (1) a screening test battery, comprising MoCA, MOT, Mini-Mental State Examination (MMSE), Trail-Making Test, visual acuity, contrast sensitivity, and Useful Field of View (UFOV); and (2) a standardized road test. Results Eighteen participants were rated at-risk on the road test. UFOV subtest 2 was the best single predictor with an area under the curve (AUC) of .84. Neither MoCA nor MOT was a better predictor of the at-risk outcome than either MMSE or UFOV, respectively. The best four-test combination (MMSE, UFOV subtest 2, visual acuity and contrast sensitivity) was able to identify at-risk drivers with 95% specificity and 80% sensitivity (.91 AUC). Conclusions Although the best four-test combination was much better than a single test in identifying at-risk drivers, there is still much work to do in this field to establish test batteries that have both high sensitivity and specificity. PMID:23954688

  12. Value of Progression of Coronary Artery Calcification for Risk Prediction of Coronary and Cardiovascular Events: Result of the HNR Study (Heinz Nixdorf Recall).

    PubMed

    Lehmann, Nils; Erbel, Raimund; Mahabadi, Amir A; Rauwolf, Michael; Möhlenkamp, Stefan; Moebus, Susanne; Kälsch, Hagen; Budde, Thomas; Schmermund, Axel; Stang, Andreas; Führer-Sakel, Dagmar; Weimar, Christian; Roggenbuck, Ulla; Dragano, Nico; Jöckel, Karl-Heinz

    2018-02-13

    Computed tomography (CT) allows estimation of coronary artery calcium (CAC) progression. We evaluated several progression algorithms in our unselected, population-based cohort for risk prediction of coronary and cardiovascular events. In 3281 participants (45-74 years of age), free from cardiovascular disease until the second visit, risk factors and CTs at baseline (b) and after a mean of 5.1 years (5y) were measured. Hard coronary and cardiovascular events, as well as total cardiovascular events including revascularization, were recorded during a follow-up time of 7.8±2.2 years after the second CT. The added predictive value of 10 CAC progression algorithms on top of risk factors including baseline CAC was evaluated by using survival analysis, C-statistics, net reclassification improvement, and integrated discrimination index. A subgroup analysis of risk in CAC categories was performed. We observed 85 (2.6%) hard coronary, 161 (4.9%) hard cardiovascular, and 241 (7.3%) total cardiovascular events. Absolute CAC progression was higher with versus without subsequent coronary events (median, 115 [Q1-Q3, 23-360] versus 8 [0-83], P <0.0001; similar for hard/total cardiovascular events). Some progression algorithms added to the predictive value of baseline CT and risk assessment in terms of C-statistic or integrated discrimination index, especially for total cardiovascular events. However, CAC progression did not improve models including CAC5y and 5-year risk factors. An excellent prognosis was found for 921 participants with double-zero CACb = CAC5y = 0 (10-year coronary and hard/total cardiovascular risk: 1.4%, 2.0%, and 2.8%), which was 1.8%, 3.8%, and 6.6%, respectively, for participants with incident CAC. When CACb progressed from 1-399 to CAC5y ≥ 400, coronary and total cardiovascular risk were nearly 2-fold in comparison with subjects who remained below CAC5y = 400. Participants with CACb ≥ 400 had high rates of hard coronary and hard

  13. Landslide risk models for decision making.

    PubMed

    Bonachea, Jaime; Remondo, Juan; de Terán, José Ramón Díaz; González-Díez, Alberto; Cendrero, Antonio

    2009-11-01

    This contribution presents a quantitative procedure for landslide risk analysis and zoning considering hazard, exposure (or value of elements at risk), and vulnerability. The method provides the means to obtain landslide risk models (expressing expected damage due to landslides on material elements and economic activities in monetary terms, according to different scenarios and periods) useful to identify areas where mitigation efforts will be most cost effective. It allows identifying priority areas for the implementation of actions to reduce vulnerability (elements) or hazard (processes). The procedure proposed can also be used as a preventive tool, through its application to strategic environmental impact analysis (SEIA) of land-use plans. The underlying hypothesis is that reliable predictions about hazard and risk can be made using models based on a detailed analysis of past landslide occurrences in connection with conditioning factors and data on past damage. The results show that the approach proposed and the hypothesis formulated are essentially correct, providing estimates of the order of magnitude of expected losses for a given time period. Uncertainties, strengths, and shortcomings of the procedure and results obtained are discussed and potential lines of research to improve the models are indicated. Finally, comments and suggestions are provided to generalize this type of analysis.
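    The quantity being mapped here is, per zone, the product of hazard, vulnerability, and exposure. A minimal per-cell sketch with invented values:

```python
# Expected annual landslide loss for one map cell as the product of the
# three layers named in the abstract. All numbers below are invented.
def expected_loss(hazard: float, vulnerability: float, exposure: float) -> float:
    """hazard: annual landslide probability for the cell;
    vulnerability: fraction of the exposed value lost if a landslide occurs;
    exposure: monetary value of the elements at risk in the cell."""
    return hazard * vulnerability * exposure

# e.g. 1% annual probability, 30% damage ratio, 250,000 (currency units) exposed
print(expected_loss(0.01, 0.3, 250_000))   # ~750 per year for this cell
```

    Summing this cell-wise product over a zone, per scenario and period, yields exactly the monetary risk zoning the abstract describes, and makes it obvious which lever (hazard or vulnerability) mitigation should target where.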

  14. The Readmission Risk Flag: Using the Electronic Health Record to Automatically Identify Patients at Risk for 30-day Readmission

    PubMed Central

    Baillie, Charles A.; VanZandbergen, Christine; Tait, Gordon; Hanish, Asaf; Leas, Brian; French, Benjamin; Hanson, C. William; Behta, Maryam; Umscheid, Craig A.

    2015-01-01

    Background: Identification of patients at high risk for readmission is a crucial step toward improving care and reducing readmissions. The adoption of electronic health records (EHR) may prove important to strategies designed to risk stratify patients and introduce targeted interventions. Objective: To develop and implement an automated prediction model integrated into our health system’s EHR that identifies on admission patients at high risk for readmission within 30 days of discharge. Design: Retrospective and prospective cohort. Setting: Healthcare system consisting of three hospitals. Patients: All adult patients admitted from August 2009 to September 2012. Interventions: An automated readmission risk flag integrated into the EHR. Measures: Thirty-day all-cause and 7-day unplanned healthcare system readmissions. Results: Using retrospective data, a single risk factor, ≥2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%), with a c-statistic of 0.62. Sensitivity (39%), positive predictive value (30%), proportion of patients flagged (18%), and c-statistic (0.61) during the 12-month period after implementation of the risk flag were similar. There was no evidence for an effect of the intervention on 30-day all-cause and 7-day unplanned readmission rates in the 12-month period after implementation. Conclusions: An automated prediction model was effectively integrated into an existing EHR and identified patients on admission who were at risk for readmission within 30 days of discharge. PMID:24227707

  15. The readmission risk flag: using the electronic health record to automatically identify patients at risk for 30-day readmission.

    PubMed

    Baillie, Charles A; VanZandbergen, Christine; Tait, Gordon; Hanish, Asaf; Leas, Brian; French, Benjamin; Hanson, C William; Behta, Maryam; Umscheid, Craig A

    2013-12-01

    Identification of patients at high risk for readmission is a crucial step toward improving care and reducing readmissions. The adoption of electronic health records (EHR) may prove important to strategies designed to risk stratify patients and introduce targeted interventions. To develop and implement an automated prediction model integrated into our health system's EHR that identifies on admission patients at high risk for readmission within 30 days of discharge. Retrospective and prospective cohort. Healthcare system consisting of 3 hospitals. All adult patients admitted from August 2009 to September 2012. An automated readmission risk flag integrated into the EHR. Thirty-day all-cause and 7-day unplanned healthcare system readmissions. Using retrospective data, a single risk factor, ≥ 2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%), with a C statistic of 0.62. Sensitivity (39%), positive predictive value (30%), proportion of patients flagged (18%), and C statistic (0.61) during the 12-month period after implementation of the risk flag were similar. There was no evidence for an effect of the intervention on 30-day all-cause and 7-day unplanned readmission rates in the 12-month period after implementation. An automated prediction model was effectively integrated into an existing EHR and identified patients on admission who were at risk for readmission within 30 days of discharge. © 2013 Society of Hospital Medicine.

  16. Evaluation of pro-convulsant risk in the rat: spontaneous and provoked convulsions.

    PubMed

    Esneault, Elise; Peyon, Guillaume; Froger-Colléaux, Christelle; Castagné, Vincent

    2015-01-01

    The aim of the present study was to evaluate the utility of different tests, performed in the absence or presence of factors promoting seizures, for assessing the pro-convulsant effects of drugs. We studied the effects of theophylline in the rat, since this is a well-known pro-convulsant substance in humans. The occurrence of spontaneous convulsions following administration of theophylline was evaluated by observation in the Irwin test and by measuring brain activity using video-EEG recording in conscious telemetered animals. Theophylline was also tested in the electroconvulsive shock (ECS) threshold and pentylenetetrazole (PTZ)-induced convulsions tests, two commonly used models of provoked convulsions. In the Irwin test, theophylline induced convulsions in 1 out of 6 rats at 128 mg/kg. Paroxysmal/seizure activity was also observed by video-EEG recording in 4 out of the 12 animals tested at 128 mg/kg, in the presence of clonic convulsions in 3 out of the 4 rats. Paroxysmal activity was observed in two rats in the absence of clear behavioral symptoms, indicating that some precursor signs can be detected using video-EEG. Clear pro-convulsant activity was shown over the dose range 32-128 mg/kg in the ECS threshold and PTZ-induced convulsions tests. Evaluation of spontaneous convulsions provides information on the therapeutic window of a drug, and the translational value of the approach is increased by the use of video-EEG. Tests based on provoked convulsions further complement the evaluation, since they attempt to mimic high-risk situations. Measurement of both spontaneous and provoked convulsions improves the evaluation of the pro-convulsant risk of novel pharmacological substances. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Mechanistic modeling of insecticide risks to breeding birds in ...

    EPA Pesticide Factsheets

    Insecticide usage in the United States is ubiquitous in urban, suburban, and rural environments. In evaluating data for an insecticide registration application and for registration review, scientists at the United States Environmental Protection Agency (USEPA) assess the fate of the insecticide and the risk the insecticide poses to the environment and non-target wildlife. Current USEPA risk assessments do not include population-level endpoints. In this paper, we present a new mechanistic model, which allows risk assessors to estimate the effects of insecticide exposure on the survival and seasonal productivity of birds known to use agricultural fields during their breeding season. The new model was created from two existing USEPA avian risk assessment models, the Terrestrial Investigation Model (TIM v.3.0) and the Markov Chain Nest Productivity model (MCnest). The integrated TIM/MCnest model has been applied to assess the relative risk of 12 insecticides used to control corn pests on a suite of 31 avian species known to use cornfields in midwestern agroecosystems. The 12 insecticides that were assessed in this study are all used to treat major pests of corn (corn rootworm, corn borer, cutworm, and armyworm). After running the integrated TIM/MCnest model, we found extensive differences in risk to birds among insecticides, with chlorpyrifos and malathion (organophosphates) generally posing the greatest risk, and bifenthrin and λ-cyhalothrin (

  18. Determination of osteoporosis risk factors using a multiple logistic regression model in postmenopausal Turkish women.

    PubMed

    Akkus, Zeki; Camdeviren, Handan; Celik, Fatma; Gur, Ali; Nas, Kemal

    2005-09-01

    To determine the risk factors of osteoporosis using a multiple binary logistic regression method and to assess the risk variables for osteoporosis, which is a major and growing health problem in many countries. We present a case-control study consisting of 126 postmenopausal healthy women as the control group and 225 postmenopausal osteoporotic women as the case group. The study was carried out in the Department of Physical Medicine and Rehabilitation, Dicle University, Diyarbakir, Turkey between 1999-2002. The data from the 351 participants were collected using a standard questionnaire that contains 43 variables. A multiple logistic regression model was then used to evaluate the data and to find the best regression model. We classified 80.1% (281/351) of the participants correctly using the regression model. Furthermore, the specificity of the model was 67% (84/126) of the control group, while the sensitivity was 88% (197/225) of the case group. We found the distribution of standardized residuals for the final model to be exponential using the Kolmogorov-Smirnov test (p=0.193). The receiver operating characteristic curve successfully predicted patients at risk for osteoporosis. This study suggests that low levels of dietary calcium intake, physical activity, and education, as well as a longer duration of menopause, are independent predictors of the risk of low bone density in our population. Adequate dietary calcium intake in combination with maintaining daily physical activity, increasing educational level, and decreasing birth rate and duration of breast-feeding may contribute to healthy bones and play a role in the practical prevention of osteoporosis in Southeast Anatolia. In addition, the findings of the present study indicate that the use of a multivariate statistical method such as multiple logistic regression for osteoporosis, which may be influenced by many variables, is better than univariate statistical evaluation.
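    The classification figures quoted above follow directly from a 2×2 confusion matrix; a minimal sketch reproducing them (only the counts 197/225 cases and 84/126 controls come from the abstract; the function name is illustrative):

    ```python
    # Sensitivity, specificity, and overall accuracy from a 2x2
    # confusion matrix, using the counts reported in the abstract.
    def classification_metrics(tp, fn, tn, fp):
        """Return (sensitivity, specificity, accuracy) as fractions."""
        sensitivity = tp / (tp + fn)            # cases correctly flagged
        specificity = tn / (tn + fp)            # controls correctly cleared
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, specificity, accuracy

    # 197 of 225 osteoporotic cases and 84 of 126 controls were
    # classified correctly by the logistic regression model.
    sens, spec, acc = classification_metrics(tp=197, fn=28, tn=84, fp=42)
    print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.1%}")
    # prints: sensitivity=88% specificity=67% accuracy=80.1%
    ```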

  19. VTE Risk assessment - a prognostic Model: BATER Cohort Study of young women.

    PubMed

    Heinemann, Lothar Aj; Dominh, Thai; Assmann, Anita; Schramm, Wolfgang; Schürmann, Rolf; Hilpert, Jan; Spannagl, Michael

    2005-04-18

    BACKGROUND: Community-based cohort studies are not available that evaluate the predictive power of both clinical and genetic risk factors for venous thromboembolism (VTE). There is, however, a clinical need to forecast the likelihood of future occurrence of VTE, at least qualitatively, to support decisions about the intensity of diagnostic or preventive measures. MATERIALS AND METHODS: A 10-year observation period of the Bavarian Thromboembolic Risk (BATER) study, a cohort study of 4337 women (18-55 years), was used to develop a predictive model of VTE based on clinical and genetic variables at baseline (1993). The objective was to prepare a probabilistic scheme that discriminates women with virtually no VTE risk from those at higher levels of absolute VTE risk in the foreseeable future. A multivariate analysis determined which variables at baseline were the best predictors of a future VTE event, provided a ranking according to predictive power, and permitted the design of a simple graphic scheme to assess individual VTE risk using five predictor variables. RESULTS: Thirty-four new confirmed VTEs occurred during the observation period of over 32,000 women-years (WYs). A model was developed based mainly on clinical information (personal history of previous VTE, family history of VTE, age, BMI) and one composite genetic risk marker (combining Factor V Leiden and the Prothrombin G20210A mutation). Four levels of increasing VTE risk were arbitrarily defined to map the prevalence in the study population: no/low risk of VTE (61.3%), moderate risk (21.1%), high risk (6.0%), and very high risk of future VTE (0.9%). In 10.6% of the population, risk assessment was not possible due to a lack of VTE cases. The average incidence rates for VTE in these four levels were 4.1, 12.3, 47.2, and 170.5 per 10⁴ WYs for no, moderate, high, and very high risk, respectively. CONCLUSION: Our prognostic tool - containing clinical information (and if available also genetic data) - seems to be

  20. Empirical evaluation of spatial and non-spatial European-scale multimedia fate models: results and implications for chemical risk assessment.

    PubMed

    Armitage, James M; Cousins, Ian T; Hauck, Mara; Harbers, Jasper V; Huijbregts, Mark A J

    2007-06-01

    Multimedia environmental fate models are commonly-applied tools for assessing the fate and distribution of contaminants in the environment. Owing to the large number of chemicals in use and the paucity of monitoring data, such models are often adopted as part of decision-support systems for chemical risk assessment. The purpose of this study was to evaluate the performance of three multimedia environmental fate models (spatially- and non-spatially-explicit) at a European scale. The assessment was conducted for four polycyclic aromatic hydrocarbons (PAHs) and hexachlorobenzene (HCB) and compared predicted and median observed concentrations using monitoring data collected for air, water, sediments and soils. Model performance in the air compartment was reasonable for all models included in the evaluation exercise as predicted concentrations were typically within a factor of 3 of the median observed concentrations. Furthermore, there was good correspondence between predictions and observations in regions that had elevated median observed concentrations for both spatially-explicit models. On the other hand, all three models consistently underestimated median observed concentrations in sediment and soil by 1-3 orders of magnitude. Although regions with elevated median observed concentrations in these environmental media were broadly identified by the spatially-explicit models, the magnitude of the discrepancy between predicted and median observed concentrations is of concern in the context of chemical risk assessment. These results were discussed in terms of factors influencing model performance such as the steady-state assumption, inaccuracies in emission estimates and the representativeness of monitoring data.

  1. Changing Health Behaviors to Improve Health Outcomes after Angioplasty: A Randomized Trial of Net Present Value versus Future Value Risk Communication

    ERIC Educational Resources Information Center

    Charlson, M. E.; Peterson, J. C.; Boutin-Foster, C.; Briggs, W. M.; Ogedegbe, G. G.; McCulloch, C. E.; Hollenberg, J.; Wong, C.; Allegrante, J. P.

    2008-01-01

    Patients who have undergone angioplasty experience difficulty modifying at-risk behaviors for subsequent cardiac events. The purpose of this study was to test whether an innovative approach to framing of risk, based on "net present value" economic theory, would be more effective in behavioral intervention than the standard "future value approach"…

  2. What's the Value of VAM (Value-Added Modeling)?

    ERIC Educational Resources Information Center

    Scherrer, Jimmy

    2012-01-01

    The use of value-added modeling (VAM) in school accountability is expanding, but deciding how to embrace VAM is difficult. Various experts say it's too unreliable, causes more harm than good, and has a big margin for error. Others assert VAM is imperfect but useful, and provides valuable feedback. A closer look at the models, and their use,…

  3. Utility of Risk Models in Decision Making After Radical Prostatectomy: Lessons from a Natural History Cohort of Intermediate- and High-Risk Men.

    PubMed

    Ross, Ashley E; Yousefi, Kasra; Davicioni, Elai; Ghadessi, Mercedeh; Johnson, Michael H; Sundi, Debasish; Tosoian, Jeffery J; Han, Misop; Humphreys, Elizabeth B; Partin, Alan W; Walsh, Patrick C; Trock, Bruce J; Schaeffer, Edward M

    2016-03-01

    Current guidelines suggest adjuvant radiation therapy for men with adverse pathologic features (APFs) at radical prostatectomy (RP). We examine at-risk men treated only with RP until the time of metastasis. To evaluate whether clinicopathologic risk models can help guide postoperative therapeutic decision making. Men with National Comprehensive Cancer Network intermediate- or high-risk localized prostate cancer undergoing RP in the prostate-specific antigen (PSA) era were identified (n=3089). Only men with initial undetectable PSA after surgery and who received no therapy prior to metastasis were included. APFs were defined as pT3 disease or positive surgical margins. Area under the receiver operating characteristic curve (AUC) for time to event data was used to measure the discrimination performance of the risk factors. Cumulative incidence curves were constructed using Fine and Gray competing risks analysis to estimate the risk of biochemical recurrence (BCR) or metastasis, taking censoring and death due to other causes into consideration. Overall, 43% of the cohort (n=1327) had APFs at RP. Median follow-up for censored patients was 5 yr. Cumulative incidence of metastasis was 6% at 10 yr after RP for all patients. Cumulative incidence of metastasis among men with APFs was 7.5% at 10 yr after RP. Among men with BCR, the incidence of metastasis was 38% 5 yr after BCR. At 10 yr after RP, time-dependent AUC for predicting metastasis by Cancer of the Prostate Risk Assessment Postsurgical or Eggener risk models was 0.81 (95% confidence interval [CI], 0.72-0.97) and 0.78 (95% CI, 0.67-0.97) in the APF population, respectively. At 5 yr after BCR, these values were lower (0.58 [95% CI, 0.50-0.66] and 0.70 [95% CI, 0.63-0.76]) among those who developed BCR. Use of risk model cut points could substantially reduce overtreatment while minimally increasing undertreatment (ie, use of an Eggener cut point of 2.5% for treatment of men with APFs would spare 46% from treatment

  4. Evaluating risks and benefits of wildland fire at landscape scales

    Treesearch

    Carol Miller; Peter B. Landres; Paul B. Alaback

    2000-01-01

    Fire suppression has resulted in severe management challenges, especially in the wildland-urban interface zone. Fire managers seek to reduce fuels and risks in the interface zone, while striving to return the natural role of fire to wildland ecosystems. Managers must balance the benefits of wildland fire on ecosystem health against the values that need to be protected...

  5. At-Risk Students: Evaluating the Impact of School Counselors Regarding Academic Achievement

    ERIC Educational Resources Information Center

    Williams, Tuawana

    2012-01-01

    According to the American School Counselor Association (ASCA; "The ASCA National Model for School Counseling Programs," 2003), school counselors are trained to counsel students regarding academics, social and emotional issues, attendance, and so forth. Because of the growing number of students who are at risk of academic failure, it…

  6. Frontiers Of Education: Japan As ``Global Model'' Or ``Nation At Risk''?

    NASA Astrophysics Data System (ADS)

    Willis, David Blake; Yamamura, Satoshi; Rappleye, Jeremy

    2008-07-01

    The Japanese educational system is undergoing extensive change, affecting all stages from pre-school programmes to higher education. As Japan has moved from a nation at the top to "A Nation at Risk," certain dichotomies have been highlighted. Viewing Japan as either educational super-power or educational tragedy, depending on the era of research or background of the researchers, has been especially provocative for educators and policy-makers. At the same time, the controversies in America surrounding the report A Nation at Risk (National Commission on Excellence in Education) are well known, a major impetus for the report of course being Japan. Central to the question of whether Japan is best understood as a Global Model or A Nation at Risk are themes of cross-national attraction and educational transfer. What can the world learn from Japan? What does Japan need to learn from the world? The answers to these questions have particular significance for Japanese higher education, which we take up as a case study here, with its urgent task to innovate in the face of a steep demographic downward trend. For those in Japan who feel that Japanese education is in a dismal state, what are the origins of this serious decline? For observers in other national contexts who envisage Japan as a model, how do the calls to 'learn from Japan' reflect genuine attempts to improve practice at home? Or are they simply rhetorical tools in support of domestic political projects?

  7. Evaluations of Risks from the Lunar and Mars Radiation Environments

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Hayat, Matthew J.; Feiveson, Alan H.; Cucinotta, Francis A.

    2008-01-01

    Protecting astronauts from space radiation environments requires accurate projections of radiation in future space missions. Characterization of the ionizing radiation environment is challenging because the interplanetary plasma and radiation fields are modulated by solar disturbances, and the radiation doses received by astronauts in interplanetary space are likewise influenced. The galactic cosmic radiation (GCR) flux for the next solar cycle was estimated as a function of the interplanetary deceleration potential, which has been derived from GCR flux and Climax neutron monitor rate measurements over the last 4 decades. Given the chaotic nature of solar particle event (SPE) occurrence, the mean frequency of SPEs at any given proton fluence threshold during a defined mission duration was obtained from a Poisson process model using proton fluence measurements of SPEs during the past 5 solar cycles (19-23). Analytic energy spectra of 34 historically large SPEs were constructed over broad energy ranges extending to GeV. Using an integrated space radiation model (which includes the transport codes HZETRN [1] and BRYNTRN [2], and the quantum nuclear interaction model QMSFRG [3]), the propagation and interaction properties of the energetic nucleons through various media were predicted. Risk from GCR and SPEs was assessed for specific organs inside a typical spacecraft using the CAM [4] model. The representative risk level at each event size and its standard deviation were obtained from the analysis of the 34 SPEs. Risks from different event sizes and their frequencies of occurrence in a specified mission period were evaluated with regard to acute health effects, especially during extra-vehicular activities (EVA). The results will be useful for the development of an integrated strategy for optimizing radiation protection on lunar and Mars missions. Keywords: Space Radiation Environments; Galactic Cosmic Radiation; Solar Particle Event; Radiation Risk; Risk
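    The Poisson-process treatment of SPE occurrence described above implies a simple closed form for the chance of encountering at least one event above a given fluence threshold during a mission; a minimal sketch, where the mean rate is purely illustrative and not a value from the paper:

    ```python
    import math

    def prob_at_least_one_spe(mean_rate_per_year, mission_years):
        """For a homogeneous Poisson process with mean rate lambda,
        P(N >= 1 in time T) = 1 - exp(-lambda * T)."""
        return 1.0 - math.exp(-mean_rate_per_year * mission_years)

    # Illustrative only: if SPEs above some fluence threshold occur
    # ~0.5 times/year on average, a 2-year mission sees at least one with
    p = prob_at_least_one_spe(0.5, 2.0)   # 1 - exp(-1)
    print(f"P(>=1 SPE) = {p:.2f}")
    ```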

  8. Risk evaluation of highway engineering project based on the fuzzy-AHP

    NASA Astrophysics Data System (ADS)

    Yang, Qian; Wei, Yajun

    2011-10-01

    Engineering projects are social activities that integrate technology, economy, management, and organization. Uncertainties exist in every aspect of engineering projects, and risk management urgently needs strengthening. Based on an analysis of the characteristics of highway engineering and a study of the basic theory of risk evaluation, this paper builds an index system for highway project risk evaluation. In addition, based on fuzzy mathematics principles, the analytic hierarchy process (AHP) was used to set up a comprehensive fuzzy-AHP appraisal model for the risk evaluation of expressway concession projects. The validity and practicability of the risk evaluation were verified by applying the model to an actual project.
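    An AHP-based evaluation of this kind starts by deriving criterion weights from a reciprocal pairwise comparison matrix; a minimal sketch of that weighting step using the row geometric-mean approximation (the 3×3 comparison matrix is illustrative, not the paper's actual index system):

    ```python
    import math

    def ahp_weights(pairwise):
        """Approximate AHP priority weights from a reciprocal pairwise
        comparison matrix via the row geometric-mean method."""
        n = len(pairwise)
        gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
        total = sum(gms)
        return [g / total for g in gms]

    # Illustrative comparison of three risk criteria: criterion 1 judged
    # 3x as important as 2 and 5x as important as 3, etc.
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    w = ahp_weights(A)
    print([round(x, 3) for x in w])   # weights sum to 1
    ```

    In a full fuzzy-AHP model, these weights would then be combined with fuzzy membership grades of each risk factor to produce the comprehensive appraisal.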

  9. In Search of Black Swans: Identifying Students at Risk of Failing Licensing Examinations.

    PubMed

    Barber, Cassandra; Hammond, Robert; Gula, Lorne; Tithecott, Gary; Chahine, Saad

    2018-03-01

    To determine which admissions variables and curricular outcomes are predictive of being at risk of failing the Medical Council of Canada Qualifying Examination Part 1 (MCCQE1), how quickly student risk of failure can be predicted, and to what extent predictive modeling is possible and accurate in estimating future student risk. Data from five graduating cohorts (2011-2015), Schulich School of Medicine & Dentistry, Western University, were collected and analyzed using hierarchical generalized linear models (HGLMs). Area under the receiver operating characteristic curve (AUC) was used to evaluate the accuracy of predictive models and determine whether they could be used to predict future risk, using the 2016 graduating cohort. Four predictive models were developed to predict student risk of failure at admissions, year 1, year 2, and pre-MCCQE1. The HGLM analyses identified gender, MCAT verbal reasoning score, two preclerkship course mean grades, and the year 4 summative objective structured clinical examination score as significant predictors of student risk. The predictive accuracy of the models varied. The pre-MCCQE1 model was the most accurate at predicting a student's risk of failing (AUC 0.66-0.93), while the admissions model was not predictive (AUC 0.25-0.47). Key variables predictive of students at risk were found. The predictive models developed suggest, while it is not possible to identify student risk at admission, we can begin to identify and monitor students within the first year. Using such models, programs may be able to identify and monitor students at risk quantitatively and develop tailored intervention strategies.
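    AUC figures like those quoted above can be computed directly from predicted scores without plotting a ROC curve, via the Mann-Whitney formulation; a minimal sketch with made-up risk scores (function name and data are illustrative):

    ```python
    def auc(scores_pos, scores_neg):
        """AUC = P(a random positive scores higher than a random
        negative), counting ties as 1/2 (Mann-Whitney U formulation)."""
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))

    # Hypothetical model scores for at-risk vs not-at-risk students
    at_risk  = [0.9, 0.8, 0.4]
    not_risk = [0.7, 0.3, 0.2, 0.1]
    print(auc(at_risk, not_risk))   # 11 of 12 pairs ranked correctly
    ```

    An AUC of 0.5 corresponds to chance-level discrimination, which is why the admissions model's range (0.25-0.47) was judged not predictive.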

  10. Towards a better reliability of risk assessment: development of a qualitative & quantitative risk evaluation model (Q2REM) for different trades of construction works in Hong Kong.

    PubMed

    Fung, Ivan W H; Lo, Tommy Y; Tung, Karen C F

    2012-09-01

    Since safety professionals are the key decision makers dealing with project safety and risk assessment in the construction industry, their perceptions of safety risk directly affect the reliability of risk assessment. Safety professionals generally tend to rely heavily on their own past experience to make subjective risk-assessment decisions without systematic decision making. Indeed, understanding the underlying principles of risk assessment is significant. In Stage 1 of this study, qualitative analysis explores the safety professionals' beliefs about risk assessment and their perceptions towards it, including their recognition of possible accident causes, the degree to which they differentiate the risk levels of different trades of work, their recognition of the occurrence of different types of accidents, and the inter-relationships of these with safety performance in terms of accident rates. In Stage 2, the deficiencies of the current general practice for risk assessment are first identified. Based on the findings from Stage 1 and 3-year-average historical accident data from 15 large-scale construction projects, a risk evaluation model is developed quantitatively that prioritizes the risk levels of different trades of work and the types of site accident they cause due to various accident causes. With the suggested systematic accident-recording techniques, this model can be implemented in the construction industry at both the project and organizational levels. The model (Q²REM) not only acts as a useful supplementary risk assessment guideline for construction safety professionals, but also assists them to pinpoint potential on-site risks for construction workers under the respective trades of work through safety training and education, which in turn raises their awareness of safety risk. As the Q²REM can clearly show the potential accident causes leading to

  11. Value-Added Teacher Estimates as Part of Teacher Evaluations: Exploring the Effects of Data and Model Specifications on the Stability of Teacher Value-Added Scores

    ERIC Educational Resources Information Center

    Kersting, Nicole B.; Chen, Mei-kuang; Stigler, James W.

    2013-01-01

    If teacher value-added estimates (VAEs) are to be used as indicators of individual teacher performance in teacher evaluation and accountability systems, it is important to understand how much VAEs are affected by the data and model specifications used to estimate them. In this study we explored the effects of three conditions on the stability of…

  12. Risk assessment models to evaluate the necessity of prostate biopsies in North Chinese patients with 4-50 ng/mL PSA.

    PubMed

    Zhao, Jing; Liu, Shuai; Gao, Dexuan; Ding, Sentai; Niu, Zhihong; Zhang, Hui; Huang, Zhilong; Qiu, Juhui; Li, Qing; Li, Ning; Xie, Fang; Cui, Jilei; Lu, Jiaju

    2017-02-07

    Prostate-specific antigen (PSA) is widely used for prostate cancer screening, but its low specificity results in high false positive rates of prostate biopsies. To develop new risk assessment models to overcome the diagnostic limitation of PSA and reduce unnecessary prostate biopsies in North Chinese patients with 4-50 ng/mL PSA. A total of 702 patients in seven hospitals with 4-10 and 10-50 ng/mL PSA, respectively, who had undergone transrectal ultrasound-guided prostate biopsies, were assessed. An analysis-modeling stage covering several clinical indexes related to prostate cancer and renal function was carried out. Multiple logistic regression analyses were used to develop new risk assessment models of prostate cancer for both PSA ranges, 4-10 and 10-50 ng/mL. An external validation stage of the new models was performed to assess the necessity of biopsy. The new models for both PSA ranges performed significantly better than PSA alone for detecting prostate cancers. Both models showed higher areas under the curves (0.937 and 0.873, respectively) compared with PSA alone (0.624 and 0.595), at pre-determined cut-off values of 0.1067 and 0.6183, respectively. Patients above the cut-off values were recommended for immediate biopsy, while the others were actively observed. External validation of the models showed significantly increased detection rates for prostate cancer (4-10 ng/mL group, 39.29% vs 17.79%, p=0.006; 10-50 ng/mL group, 71.83% vs 50.0%, p=0.015). We developed risk assessment models for North Chinese patients with 4-50 ng/mL PSA to reduce unnecessary prostate biopsies and increase the detection rate of prostate cancer.
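    The decision rule described above (immediate biopsy if the model's output exceeds the pre-determined cut-off for the patient's PSA range) can be sketched as follows; only the two cut-off values come from the abstract, while the function name and the handling of the range boundary at 10 ng/mL are assumptions:

    ```python
    def recommend_biopsy(predicted_risk, psa_ng_ml):
        """Hypothetical helper applying the abstract's cut-offs
        (0.1067 for the 4-10 ng/mL group, 0.6183 for 10-50 ng/mL)
        to a model-predicted probability of prostate cancer."""
        if not 4 <= psa_ng_ml <= 50:
            raise ValueError("models apply only to 4-50 ng/mL PSA")
        cutoff = 0.1067 if psa_ng_ml <= 10 else 0.6183
        return predicted_risk >= cutoff

    # A predicted risk of 0.25 exceeds the 4-10 ng/mL cut-off (biopsy),
    # but falls below the 10-50 ng/mL cut-off (active observation).
    print(recommend_biopsy(0.25, 8.0), recommend_biopsy(0.25, 30.0))
    ```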

  13. Risk assessment model for development of advanced age-related macular degeneration.

    PubMed

    Klein, Michael L; Francis, Peter J; Ferris, Frederick L; Hamon, Sara C; Clemons, Traci E

    2011-12-01

    To design a risk assessment model for development of advanced age-related macular degeneration (AMD) incorporating phenotypic, demographic, environmental, and genetic risk factors. We evaluated longitudinal data from 2846 participants in the Age-Related Eye Disease Study. At baseline, these individuals had all levels of AMD, ranging from none to unilateral advanced AMD (neovascular or geographic atrophy). Follow-up averaged 9.3 years. We performed a Cox proportional hazards analysis with demographic, environmental, phenotypic, and genetic covariates and constructed a risk assessment model for development of advanced AMD. Performance of the model was evaluated using the C statistic and the Brier score and externally validated in participants in the Complications of Age-Related Macular Degeneration Prevention Trial. The final model included the following independent variables: age, smoking history, family history of AMD (first-degree member), phenotype based on a modified Age-Related Eye Disease Study simple scale score, and genetic variants CFH Y402H and ARMS2 A69S. The model did well on performance measures, with very good discrimination (C statistic = 0.872) and excellent calibration and overall performance (Brier score at 5 years = 0.08). Successful external validation was performed, and a risk assessment tool was designed for use with or without the genetic component. We constructed a risk assessment model for development of advanced AMD. The model performed well on measures of discrimination, calibration, and overall performance and was successfully externally validated. This risk assessment tool is available for online use.
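    The Brier score used above to summarize calibration and overall performance is simply the mean squared difference between predicted probabilities and observed binary outcomes; a minimal sketch with hypothetical predictions (the data are illustrative, not from the study):

    ```python
    def brier_score(probs, outcomes):
        """Mean squared error between predicted probabilities and
        observed binary outcomes (0 = no event, 1 = advanced AMD).
        Lower is better; 0 is a perfect forecast."""
        assert len(probs) == len(outcomes)
        return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

    # Hypothetical predicted 5-year risks and observed outcomes
    probs    = [0.05, 0.10, 0.40, 0.80]
    outcomes = [0,    0,    1,    1]
    print(round(brier_score(probs, outcomes), 3))
    ```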

  14. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-17

    ... Evaluation and Research (CBER) and suggestions for further development. The public workshop will include... Evaluation and Research (HFM-210), Food and Drug Administration, 1401 Rockville Pike, suite 200N, Rockville... models to generate quantitative estimates of the benefits and risks of influenza vaccination. The public...

  15. Acute radiation risk models

    NASA Astrophysics Data System (ADS)

    Smirnova, Olga

    Biologically motivated mathematical models, which describe the dynamics of the major hematopoietic lineages (the thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems) in acutely/chronically irradiated humans, are developed. These models are implemented as systems of nonlinear differential equations whose variables and constant parameters have a clear biological meaning. It is shown that the developed models are capable of reproducing clinical data on the dynamics of these systems in humans exposed to acute radiation as a result of incidents and accidents, as well as in humans exposed to low-level chronic radiation. Moreover, the averaged value of the "lethal" dose rates of chronic irradiation evaluated within the models of these four major hematopoietic lineages coincides with the real minimal dose rate of lethal chronic irradiation. The demonstrated ability of the models of the human thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems to predict the dynamical response of these systems to acute/chronic irradiation over wide ranges of doses and dose rates implies that these mathematical models form a universal tool for investigating and predicting the dynamics of the major human hematopoietic lineages under a vast range of irradiation scenarios. In particular, these models could be applied to radiation risk assessment for the health of astronauts exposed to space radiation during long-term space missions, such as voyages to Mars or lunar colonies, as well as for the health of people exposed to acute/chronic irradiation due to environmental radiological events.
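
    A toy illustration of the modeling style described, a nonlinear ODE for a single hematopoietic lineage under chronic irradiation. The one-equation model, its parameters, and the linear kill term below are invented stand-ins, not Smirnova's equations.

```python
# Toy illustration (not the author's model): a single lineage with logistic
# self-renewal and a radiation kill term proportional to dose rate,
# integrated with forward Euler.
def simulate(dose_rate, days=200, dt=0.1, a=0.3, K=1.0, kill=0.5):
    """dN/dt = a*N*(1 - N/K) - kill*dose_rate*N ; N in units of normal level."""
    n = K
    for _ in range(int(days / dt)):
        n += dt * (a * n * (1.0 - n / K) - kill * dose_rate * n)
    return n

# Below a critical dose rate the lineage settles at a depressed steady
# state; above it (dose_rate > a/kill = 0.6 here) the population collapses,
# qualitatively echoing the notion of a minimal lethal chronic dose rate.
print("chronic 0.2 Gy/day:", round(simulate(0.2), 3))
print("chronic 1.0 Gy/day:", round(simulate(1.0), 6))
```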

  16. Minimizing metastatic risk in radiotherapy fractionation schedules

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Ramakrishnan, Jagdish; Leder, Kevin

    2015-11-01

    Metastasis is the process by which cells from a primary tumor disperse and form new tumors at distant anatomical locations. The treatment and prevention of metastatic cancer remains an extremely challenging problem. This work introduces a novel biologically motivated objective function to the radiation optimization community that takes into account metastatic risk instead of the status of the primary tumor. In this work, we consider the problem of developing fractionated irradiation schedules that minimize production of metastatic cancer cells while keeping normal tissue damage below an acceptable level. A dynamic programming framework is utilized to determine the optimal fractionation scheme. We evaluated our approach on a breast cancer case using the heart and the lung as organs-at-risk (OAR). For small tumor α/β values, hypo-fractionated schedules were optimal, which is consistent with standard models. However, for relatively larger α/β values, we found the type of schedule depended on various parameters such as the time when metastatic risk was evaluated, the α/β values of the OARs, and the normal tissue sparing factors. Interestingly, in contrast to standard models, hypo-fractionated and semi-hypo-fractionated schedules (large initial doses that taper off with time) were suggested even for large tumor α/β values. Numerical results indicate the potential for significant reduction in metastatic risk.
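
    For context on the α/β terminology, here is the textbook linear-quadratic biologically effective dose, the standard quantity behind fractionation comparisons like these. This is generic LQ arithmetic, not the paper's dynamic-programming optimizer.

```python
# Linear-quadratic (LQ) model: biologically effective dose of a schedule of
# n fractions of d Gy each is BED = n*d*(1 + d/(alpha/beta)). Tissues with
# a low alpha/beta ratio are hit relatively harder by large fractions.
def bed(n_fractions, dose_per_fraction, alpha_beta):
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Same 60 Gy physical dose delivered two ways, for a tissue with alpha/beta = 3 Gy:
conventional = bed(30, 2.0, alpha_beta=3.0)   # 30 x 2 Gy
hypo         = bed(15, 4.0, alpha_beta=3.0)   # 15 x 4 Gy
print(conventional, hypo)
```

    The hypo-fractionated schedule yields the larger BED for the low-α/β tissue even though the physical dose is identical, which is why the tumor's α/β value drives which schedule an optimizer prefers.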

  17. Evaluating the Role of Intermediaries in the Electronic Value Chain.

    ERIC Educational Resources Information Center

    Janssen, Marijn; Sol, Henk G.

    2000-01-01

    Presents a business engineering methodology that supports the identification of electronic intermediary roles in the electronic value chain. The goal of this methodology is to give stakeholders insight into their current, and possible alternative, situations by means of visualization, to evaluate the added value of business models using…

  18. [Diagnostic evaluation of the developmental level in children identified at risk of delay through the Child Development Evaluation Test].

    PubMed

    Rizzoli-Córdoba, Antonio; Campos-Maldonado, Martha Carmen; Vélez-Andrade, Víctor Hugo; Delgado-Ginebra, Ismael; Baqueiro-Hernández, César Iván; Villasís-Keever, Miguel Ángel; Reyes-Morales, Hortensia; Ojeda-Lara, Lucía; Davis-Martínez, Erika Berenice; O'Shea-Cuevas, Gabriel; Aceves-Villagrán, Daniel; Carrasco-Mendoza, Joaquín; Villagrán-Muñoz, Víctor Manuel; Halley-Castillo, Elizabeth; Sidonio-Aguayo, Beatriz; Palma-Tavera, Josuha Alexander; Muñoz-Hernández, Onofre

    The Child Development Evaluation (or CDE Test) was developed in Mexico as a screening tool for child developmental problems. It yields three possible results: normal, slow development or risk of delay. The modified version was elaborated using the information obtained during the validation study but its properties according to the base population are not known. The objective of this work was to establish diagnostic confirmation of developmental delay in children 16- to 59-months of age previously identified as having risk of delay through the CDE Test in primary care facilities. A population-based cross-sectional study was conducted in one Mexican state. CDE test was administered to 11,455 children 16- to 59-months of age from December/2013 to March/2014. The eligible population represented the 6.2% of the children (n=714) who were identified at risk of delay through the CDE Test. For inclusion in the study, a block randomization stratified by sex and age group was performed. Each participant included in the study had a diagnostic evaluation using the Battelle Development Inventory, 2 nd edition. From the 355 participants included with risk of delay, 65.9% were male and 80.2% were from rural areas; 6.5% were false positives (Total Development Quotient ˃90) and 6.8% did not have any domain with delay (Domain Developmental Quotient <80). The proportion of delay for each domain was as follows: communication 82.5%; cognitive 80.8%; social-personal 33.8%; motor 55.5%; and adaptive 41.7%. There were significant differences in the percentages of delay both by age and by domain/subdomain evaluated. In 93.2% of the participants, developmental delay was corroborated in at least one domain evaluated. Copyright © 2015 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.

  19. Application and comparison of the FADES, MADIT, and SHFM-D risk models for risk stratification of prophylactic implantable cardioverter-defibrillator treatment

    PubMed Central

    van der Heijden, Aafke C.; van Rees, Johannes B.; Levy, Wayne C.; van der Bom, Johanna G.; Cannegieter, Suzanne C.; de Bie, Mihàly K.; van Erven, Lieselot; Schalij, Martin J.; Borleffs, C. Jan Willem

    2017-01-01

    Aims Implantable cardioverter-defibrillator (ICD) treatment is beneficial in selected patients. However, it remains difficult to accurately predict which patients benefit most from ICD implantation. For this purpose, different risk models have been developed. The aim was to validate and compare the FADES, MADIT, and SHFM-D models. Methods and results All patients receiving a prophylactic ICD at the Leiden University Medical Center were evaluated. Individual model performance was evaluated by C-statistics. Model performances were compared using net reclassification improvement (NRI) and integrated differentiation improvement (IDI). The primary endpoint was non-benefit of ICD treatment, defined as mortality without prior ventricular arrhythmias requiring ICD intervention. A total of 1969 patients were included (age 63 ± 11 years; 79% male). During a median follow-up of 4.5 ± 3.9 years, 318 (16%) patients died without prior ICD intervention. All three risk models were predictive for event-free mortality (all: P < 0.001). The C-statistics were 0.66, 0.69, and 0.75, respectively, for FADES, MADIT, and SHFM-D (all: P < 0.001). Application of the SHFM-D resulted in an improved IDI of 4% and NRI of 26% compared with MADIT; IDI improved 11% with the use of SHFM-D instead of FADES (all: P < 0.001), but NRI remained unchanged (P = 0.71). Patients in the highest-risk category of the MADIT and SHFM-D models had a 1.7 times higher risk of experiencing ICD non-benefit than of receiving appropriate ICD interventions [MADIT: mean difference (MD) 20% (95% CI: 7–33%), P = 0.001; SHFM-D: MD 16% (95% CI: 5–27%), P = 0.005]. Patients in the highest-risk category of FADES were as likely to experience ICD intervention as ICD non-benefit [MD 3% (95% CI: –8 to 14%), P = 0.60]. Conclusion The predictive and discriminatory value of SHFM-D to predict non-benefit of ICD treatment is superior to FADES and MADIT in patients receiving prophylactic ICD treatment. PMID:28130376
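
    Categorical net reclassification improvement (NRI), one of the comparison statistics above, can be sketched as follows with invented risk categories; this is not the study's data.

```python
# NRI compares how two models assign patients to ordinal risk categories:
# reclassification upward is credited for patients with the event and
# penalized for patients without it (and vice versa for downward moves).
def nri(old_cat, new_cat, events):
    """NRI = (up - down | event) + (down - up | non-event), as proportions."""
    up_e = down_e = up_ne = down_ne = n_e = n_ne = 0
    for o, n, e in zip(old_cat, new_cat, events):
        if e:
            n_e += 1
            up_e += n > o
            down_e += n < o
        else:
            n_ne += 1
            up_ne += n > o
            down_ne += n < o
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne

old = [0, 1, 1, 2, 0, 2]   # risk category (0=low..2=high) under model A
new = [1, 2, 1, 2, 0, 1]   # risk category under model B
evt = [1, 1, 0, 1, 0, 0]   # observed event (e.g. ICD non-benefit)
print("NRI =", nri(old, new, evt))
```

    A positive NRI means the second model moves events up and non-events down in risk on balance, the sense in which SHFM-D improved on MADIT above.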

  20. A Methodology to Evaluate Ecological Resources and Risk Using Two Case Studies at the Department of Energy's Hanford Site

    NASA Astrophysics Data System (ADS)

    Burger, Joanna; Gochfeld, Michael; Bunn, Amoret; Downs, Janelle; Jeitner, Christian; Pittfield, Taryn; Salisbury, Jennifer; Kosson, David

    2017-03-01

    An assessment of the potential risks to ecological resources from remediation activities or other perturbations should involve a quantitative evaluation of resources on the remediation site and in the surrounding environment. We developed a risk methodology to rapidly evaluate potential impact on ecological resources for the U.S. Department of Energy's Hanford Site in southcentral Washington State. We describe the application of the risk evaluation for two case studies to illustrate its applicability. The ecological assessment involves examining previous sources of information for the site, defining different resource levels from 0 to 5. We also developed a risk rating scale from non-discernable to very high. Field assessment is the critical step to determine resource levels or to determine if current conditions are the same as previously evaluated. We provide a rapid assessment method for current ecological conditions that can be compared to previous site-specific data, or that can be used to assess resource value on other sites where ecological information is not generally available. The method is applicable to other Department of Energy's sites, where its development may involve a range of state regulators, resource trustees, Tribes and other stakeholders. Achieving consistency across Department of Energy's sites for valuation of ecological resources on remediation sites will assure Congress and the public that funds and personnel are being deployed appropriately.

  2. Risk Prediction Models of Locoregional Failure After Radical Cystectomy for Urothelial Carcinoma: External Validation in a Cohort of Korean Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ku, Ja Hyeon; Kim, Myong; Jeong, Chang Wook

    2014-08-01

    Purpose: To evaluate the predictive accuracy and general applicability of the locoregional failure model in a different cohort of patients treated with radical cystectomy. Methods and Materials: A total of 398 patients were included in the analysis. Death and isolated distant metastasis were considered competing events, and patients without any events were censored at the time of last follow-up. The model included the 3 variables pT classification, the number of lymph nodes identified, and margin status, as follows: low risk (≤pT2), intermediate risk (≥pT3 with ≥10 nodes removed and negative margins), and high risk (≥pT3 with <10 nodes removed or positive margins). Results: The bootstrap-corrected concordance index of the model 5 years after radical cystectomy was 66.2%. When the risk stratification was applied to the validation cohort, the 5-year locoregional failure estimates were 8.3%, 21.2%, and 46.3% for the low-risk, intermediate-risk, and high-risk groups, respectively. The risk of locoregional failure differed significantly between the low-risk and intermediate-risk groups (subhazard ratio [SHR], 2.63; 95% confidence interval [CI], 1.35-5.11; P<.001) and between the low-risk and high-risk groups (SHR, 4.28; 95% CI, 2.17-8.45; P<.001). Although decision curves were appropriately affected by the incidence of the competing risk, decisions about the value of the models are not likely to be affected because the model remains of value over a wide range of threshold probabilities. Conclusions: The model is not completely accurate, but it demonstrates a modest level of discrimination, adequate calibration, and meaningful net benefit gain for prediction of locoregional failure after radical cystectomy.

  3. Psychometric assessment of HIV/STI sexual risk scale among MSM: a Rasch model approach.

    PubMed

    Li, Jian; Liu, Hongjie; Liu, Hui; Feng, Tiejian; Cai, Yumao

    2011-10-05

    Little research has assessed the degree of severity and ordering of different types of sexual behaviors for HIV/STI infection in a measurement scale. The purpose of this study was to apply the Rasch model to the psychometric assessment of an HIV/STI sexual risk scale among men who have sex with men (MSM). A cross-sectional study using respondent-driven sampling was conducted among 351 MSM in Shenzhen, China. The Rasch model was used to examine the psychometric properties of an HIV/STI sexual risk scale including nine types of sexual behaviors. The Rasch analysis of the nine items met the unidimensionality and local independence assumptions. Although the person reliability was low at 0.35, the item reliability was high at 0.99. The fit statistics provided acceptable infit and outfit values. Item difficulty invariance analysis showed that the item estimates of the risk behavior items were invariant (within error). The findings suggest that the Rasch model can be utilized for measuring the level of sexual risk for HIV/STI infection as a single latent construct and for establishing the relative degree of severity of each type of sexual behavior in HIV/STI transmission and acquisition among MSM. The measurement scale provides a useful tool to inform, design, and evaluate behavioral interventions for HIV/STI infection among MSM.
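
    The dichotomous Rasch model underlying such a scale has a one-line form: the probability that a person with latent propensity θ endorses an item of severity (difficulty) b. A small sketch, with illustrative values rather than the study's estimates:

```python
# Rasch model: P(endorse) = exp(theta - b) / (1 + exp(theta - b)).
# Item severity b and person propensity theta share one latent scale, which
# is what lets the model order behaviors by severity.
import math

def rasch_prob(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# At a fixed propensity, a more severe behavior (larger b) is endorsed less often:
print(rasch_prob(0.0, -1.0))  # common, low-severity item
print(rasch_prob(0.0, 2.0))   # rare, high-severity item
```

    When θ equals b the endorsement probability is exactly 0.5, which is how item severities are anchored on the person scale.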

  4. [Pressure ulcer prevention--evaluation of awareness in families of patients at risk].

    PubMed

    Kwiczala-Szydłowska, Seweryna; Skalska, Anna; Grodzicki, Tomasz

    2005-01-01

    Widespread use of risk assessment scales and standards in the health care of chronically ill patients has improved pressure ulcer prevention and treatment in institutional care. However, many bed-ridden patients depend on the awareness and preparation of families and caregivers, who provide home care after discharge from the hospital. The aim of this study was to evaluate the knowledge of pressure ulcer prevention in families of patients at risk. During a 4-month period, 62 caregivers (78% family members and 22% non-related) filled out a questionnaire enquiring about issues related to pressure ulcer prevention and treatment. Only 11% of the questioned persons knew what a pressure ulcer was, 42% of caregivers were not aware of possible pressure ulcer causes, and 54.8% were unable to name any pressure ulcer risk factor. Most caregivers did not know the basic principles of prevention, including devices useful in pressure ulcer prevention, and knew neither pressure-reducing mattresses nor dressings used in pressure ulcer treatment. Fifty-three percent of the questioned persons had never received any information about pressure ulcer prevention, and only 23% had received such information from nurses, which reflects the low involvement of professional staff in educating families of patients at risk in the principles of pressure ulcer prevention. Families and caregivers of bed-ridden patients have insufficient knowledge of pressure ulcer prevention. The contribution of medical staff to the education of families of patients at risk is minimal, indicating the need to prepare and implement an educational program for the caregivers of bed-ridden patients.

  5. Computed Tomography Angiography Evaluation of Risk Factors for Unstable Intracranial Aneurysms.

    PubMed

    Wang, Guang-Xian; Gong, Ming-Fu; Wen, Li; Liu, Lan-Lan; Yin, Jin-Bo; Duan, Chun-Mei; Zhang, Dong

    2018-03-19

    To evaluate risk factors for instability in intracranial aneurysms (IAs) using computed tomography angiography (CTA). A total of 614 consecutive patients diagnosed with 661 IAs between August 2011 and February 2016 were reviewed. Patients and IAs were divided into stable and unstable groups. Along with clinical characteristics, IA characteristics were evaluated by CTA. Multiple logistic regression analysis was used to identify the independent risk factors associated with unstable IAs. Receiver operating characteristic (ROC) curve analysis was performed on the final model, and optimal thresholds were obtained. Patient age (odds ratio [OR], 0.946), cerebral atherosclerosis (CA; OR, 0.525), and IAs located at the middle cerebral artery (OR, 0.473) or internal carotid artery (OR, 0.512) were negatively correlated with instability, whereas IAs with irregular shape (OR, 2.157), deep depth (OR, 1.557), or large flow angle (FA; OR, 1.015) were more likely to be unstable. ROC analysis revealed threshold values for age, depth, and FA of 59.5 years, 4.25 mm, and 87.8°, respectively. The stability of IAs is significantly affected by several factors, including patient age and the presence of CA. IA shape and location also have an impact on the stability of IAs. An irregular shape, a deep depth, and a large FA are risk factors for a change in IAs from stable to unstable. Copyright © 2018 Elsevier Inc. All rights reserved.
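
    Thresholds like the reported 59.5 years or 87.8° typically come out of ROC analysis; one common choice (not necessarily the study's exact criterion) maximizes Youden's J = sensitivity + specificity − 1 over candidate cut-offs. A hedged sketch on an invented sample:

```python
# Choose the cut-off that maximizes Youden's J over the observed values.
# The flow-angle data and labels below are invented for illustration.
def youden_threshold(values, labels):
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= t and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < t and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < t and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= t and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

flow_angle = [70, 95, 82, 110, 60, 90, 78, 101]   # degrees
unstable   = [0,  1,  0,  1,   0,  1,  0,  1]     # 1 = unstable IA
print(youden_threshold(flow_angle, unstable))
```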

  6. Evaluation of a model of violence risk assessment among forensic psychiatric patients.

    PubMed

    Douglas, Kevin S; Ogloff, James R P; Hart, Stephen D

    2003-10-01

    This study tested the interrater reliability and criterion-related validity of structured violence risk judgments made by using one application of the structured professional judgment model of violence risk assessment, the HCR-20 violence risk assessment scheme, which assesses 20 key risk factors in three domains: historical, clinical, and risk management. The HCR-20 was completed for a sample of 100 forensic psychiatric patients who had been found not guilty by reason of a mental disorder and were subsequently released to the community. Violence in the community was determined from multiple file-based sources. Interrater reliability of structured final risk judgments of low, moderate, or high violence risk made on the basis of the structured professional judgment model was acceptable (weighted kappa=.61). Structured final risk judgments were significantly predictive of postrelease community violence, yielding moderate to large effect sizes. Event history analyses showed that final risk judgments made with the structured professional judgment model added incremental validity to the HCR-20 used in an actuarial (numerical) sense. The findings support the structured professional judgment model of risk assessment as well as the HCR-20 specifically and suggest that clinical judgment, if made within a structured context, can contribute in meaningful ways to the assessment of violence risk.
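
    The agreement statistic reported above (weighted kappa = .61) credits near-misses between ordinal ratings. The sketch below implements linearly weighted Cohen's kappa on invented low/moderate/high ratings; it is not the study's data.

```python
# Linearly weighted Cohen's kappa for two raters on ordinal categories 0..k-1.
# Disagreements are discounted in proportion to their distance on the scale.
def weighted_kappa(r1, r2, categories=3):
    n = len(r1)
    k = categories
    w = [[1.0 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = sum(w[a][b] for a, b in zip(r1, r2)) / n              # observed agreement
    p1 = [r1.count(c) / n for c in range(k)]                    # marginal proportions
    p2 = [r2.count(c) / n for c in range(k)]
    exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (obs - exp) / (1.0 - exp)                            # chance-corrected

rater_a = [0, 1, 2, 1, 0, 2, 1, 2]   # 0=low, 1=moderate, 2=high violence risk
rater_b = [0, 1, 2, 2, 0, 2, 1, 1]
print("weighted kappa =", round(weighted_kappa(rater_a, rater_b), 2))
```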

  7. Value of the distant future: Model-independent results

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.

    2017-01-01

    This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long-run discount factor and provide a detailed comparison of the obtained result with the outcomes of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
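
    The qualitative result, effective discount rates that decline with horizon when rates are uncertain and persistent, can be illustrated with a simple certainty-equivalent calculation over fixed rate scenarios. The scenario rates below are invented and this is a standard textbook effect, not the paper's derivation.

```python
# With uncertainty about a persistent rate, the certainty-equivalent
# discount factor is E[exp(-r*t)], which at long horizons is dominated by
# the lowest-rate scenario, so the implied annual rate declines with t.
import math

def effective_rate(rates, probs, t):
    """Annualized rate implied by the probability-weighted discount factor."""
    factor = sum(p * math.exp(-r * t) for r, p in zip(rates, probs))
    return -math.log(factor) / t

rates, probs = [0.01, 0.04, 0.07], [1/3, 1/3, 1/3]
for t in (1, 50, 200):
    print(t, round(effective_rate(rates, probs, t), 4))
# The effective rate falls with horizon, approaching min(rates) = 1%.
```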

  8. Development of a Risk Prediction Model and Clinical Risk Score for Isolated Tricuspid Valve Surgery.

    PubMed

    LaPar, Damien J; Likosky, Donald S; Zhang, Min; Theurer, Patty; Fonner, C Edwin; Kern, John A; Bolling, Stephen F; Drake, Daniel H; Speir, Alan M; Rich, Jeffrey B; Kron, Irving L; Prager, Richard L; Ailawadi, Gorav

    2018-02-01

    While tricuspid valve (TV) operations remain associated with high mortality (∼8-10%), no robust prediction models exist to support clinical decision-making. We developed a preoperative clinical risk model with an easily calculable clinical risk score (CRS) to predict mortality and major morbidity after isolated TV surgery. Multi-state Society of Thoracic Surgeons database records were evaluated for 2,050 isolated TV repair and replacement operations for any etiology performed at 50 hospitals (2002-2014). Parsimonious preoperative risk prediction models were developed using multi-level mixed effects regression to estimate mortality and composite major morbidity risk. Model results were utilized to establish a novel CRS for patients undergoing TV operations. Models were evaluated for discrimination and calibration. Operative mortality and composite major morbidity rates were 9% and 42%, respectively. Final regression models performed well (both P<0.001, AUC = 0.74 and 0.76) and included preoperative factors: age, gender, stroke, hemodialysis, ejection fraction, lung disease, NYHA class, reoperation and urgent or emergency status (all P<0.05). A simple CRS from 0-10+ was highly associated (P<0.001) with incremental increases in predicted mortality and major morbidity. Predicted mortality risk ranged from 2%-34% across CRS categories, while predicted major morbidity risk ranged from 13%-71%. Mortality and major morbidity after isolated TV surgery can be predicted using preoperative patient data from the STS Adult Cardiac Database. A simple clinical risk score predicts mortality and major morbidity after isolated TV surgery. This score may facilitate perioperative counseling and identification of suitable patients for TV surgery. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
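
    To show how an additive clinical risk score of the kind described maps preoperative factors to a risk band, here is a purely hypothetical sketch: the point values and bands below are invented, not the published STS-derived score (only the roughly 2%-34% predicted mortality span comes from the abstract).

```python
# Hypothetical clinical risk score (CRS): integer points summed over
# preoperative factors, then mapped to an illustrative risk band.
def crs_points(age, nyha_class, hemodialysis, urgent, reoperation):
    points = 0
    points += 2 if age >= 70 else 1 if age >= 60 else 0
    points += 2 if nyha_class == 4 else 1 if nyha_class == 3 else 0
    points += 3 if hemodialysis else 0
    points += 2 if urgent else 0
    points += 1 if reoperation else 0
    return points

def predicted_mortality_band(points):
    # Illustrative bands spanning the ~2%-34% range reported in the abstract.
    if points <= 2:
        return "low (~2-5%)"
    if points <= 5:
        return "intermediate (~5-15%)"
    return "high (~15-34%)"

p = crs_points(age=74, nyha_class=3, hemodialysis=False, urgent=True, reoperation=False)
print(p, predicted_mortality_band(p))
```

    The appeal of such a score is bedside arithmetic: each predictor contributes whole points, so the total can be computed without software during perioperative counseling.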

  9. Applying Costs, Risks and Values Evaluation (CRAVE) methodology to Engineering Support Request (ESR) prioritization

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla N.

    1994-01-01

    Given limited budget, the problem of prioritization among Engineering Support Requests (ESR's) with varied sizes, shapes, and colors is a difficult one. At the Kennedy Space Center (KSC), the recently developed 4-Matrix (4-M) method represents a step in the right direction as it attempts to combine the traditional criteria of technical merits only with the new concern for cost-effectiveness. However, the 4-M method was not adequately successful in the actual prioritization of ESRs for the fiscal year 1995 (FY95). This research identifies a number of design issues that should help us to develop better methods. It emphasizes that given the variety and diversity of ESR's one should not expect that a single method could help in the assessment of all ESR's. One conclusion is that a methodology such as Costs, Risks, and Values Evaluation (CRAVE) should be adopted. It also is clear that the development of methods such as 4-M requires input not only from engineers with technical expertise in ESR's but also from personnel with adequate background in the theory and practice of cost-effectiveness analysis. At KSC, ESR prioritization is one part of the Ground Support Working Teams (GSWT) Integration Process. It was discovered that the more important barriers to the incorporation of cost-effectiveness considerations in ESR prioritization lie in this process. The culture of integration, and the corresponding structure of review by a committee of peers, is not conducive to the analysis and confrontation necessary in the assessment and prioritization of ESR's. Without assistance from appropriately trained analysts charged with the responsibility to analyze and be confrontational about each ESR, the GSWT steering committee will continue to make its decisions based on incomplete understanding, inconsistent numbers, and at times, colored facts. The current organizational separation of the prioritization and the funding processes is also identified as an important barrier to the

  10. Evaluating Approaches to a Coupled Model for Arctic Coastal Erosion, Infrastructure Risk, and Associated Coastal Hazards

    NASA Astrophysics Data System (ADS)

    Frederick, J. M.; Bull, D. L.; Jones, C.; Roberts, J.; Thomas, M. A.

    2016-12-01

    Arctic coastlines are receding at accelerated rates, putting existing and future activities in the developing coastal Arctic environment at extreme risk. For example, at Oliktok Long Range Radar Site, erosion that was not expected until 2040 was reached as of 2014 (Alaska Public Media). As the Arctic Ocean becomes increasingly ice-free, rates of coastal erosion will likely continue to increase as (a) increased ice-free waters generate larger waves, (b) sea levels rise, and (c) coastal permafrost soils warm and lose strength/cohesion. Due to the complex and rapidly varying nature of the Arctic region, little is known about the increasing waves, changing circulation, permafrost soil degradation, and the response of the coastline to changes in these combined conditions. However, as scientific focus has been shifting towards the polar regions, Arctic science is rapidly advancing, increasing our understanding of complex Arctic processes. Our present understanding allows us to begin to develop and evaluate the coupled models necessary for the prediction of coastal erosion in support of Arctic risk assessments. What are the best steps towards the development of a coupled model for Arctic coastal erosion? This work focuses on our current understanding of Arctic conditions and identifying the tools and methods required to develop an integrated framework capable of accurately predicting Arctic coastline erosion and assessing coastal risk and hazards. We will present a summary of the state-of-the-science, and identify existing tools and methods required to develop an integrated diagnostic and monitoring framework capable of accurately predicting and assessing Arctic coastline erosion, infrastructure risk, and coastal hazards. The summary will describe the key coastal processes to simulate, appropriate models to use, effective methods to couple existing models, and identify gaps in knowledge that require further attention to make progress in our understanding of Arctic coastal

  11. Modeling Research Project Risks with Fuzzy Maps

    ERIC Educational Resources Information Center

    Bodea, Constanta Nicoleta; Dascalu, Mariana Iuliana

    2009-01-01

    The authors propose a risks evaluation model for research projects. The model is based on fuzzy inference. The knowledge base for fuzzy process is built with a causal and cognitive map of risks. The map was especially developed for research projects, taken into account their typical lifecycle. The model was applied to an e-testing research…

  12. Risk and Protective Factors for Suicidality at 6-Month Follow-up in Adolescent Inpatients Who Attempted Suicide: An Exploratory Model

    PubMed Central

    Consoli, Angèle; Cohen, David; Bodeau, Nicolas; Guilé, Jean-Marc; Mirkovic, Bojan; Knafo, Alexandra; Mahé, Vincent; Laurent, Claudine; Renaud, Johanne; Labelle, Réal; Breton, Jean-Jacques; Gérardin, Priscille

    2015-01-01

    Objective: To assess risk and protective factors for suicidality at 6-month follow-up in adolescent inpatients after a suicide attempt. Methods: One hundred seven adolescents from 5 inpatient units who had made a suicide attempt were seen at 6-month follow-up. Baseline measures included sociodemographics, mood and suicidality, dependence, borderline symptomatology, Temperament and Character Inventory (TCI), reasons for living, spirituality, and coping scores. Results: At 6-month follow-up, 41 (38%) subjects had a relapse of suicidal behaviours. Among them, 15 (14%) had repeated a suicide attempt. Higher depression and hopelessness scores, the occurrence of a new suicide attempt, and a new hospitalization belonged to the same factorial dimension (suicidality). Derived from the best-fit structural equation modelling for suicidality as an outcome measure at 6-month follow-up, risk factors among the baseline variables included: major depressive disorder, high depression scores, and high scores for TCI self-transcendence. Only one protective factor emerged: coping–hard work and achievement. Conclusion: In this very high-risk population, some established risk factors (for example, a history of suicide attempts) may not predict suicidality. Our results suggest that adolescents who retain high scores for depression or hopelessness, who remain depressed, or who express a low value for life or an abnormally high connection with the universe are at higher risk for suicidality and should be targeted for more intense intervention. Improving adolescent motivation in school and in work may be protective. Given the sample size, the model should be regarded as exploratory. PMID:25886668

  13. Reliability and Validity of Observational Risk Screening in Evaluating Dynamic Knee Valgus

    PubMed Central

    Ekegren, Christina L.; Miller, William C.; Celebrini, Richard G.; Eng, Janice J.; MacIntyre, Donna L.

    2012-01-01

Study Design Nonexperimental methodological study. Objectives To determine the interrater and intrarater reliability and validity of using observational risk screening guidelines to evaluate dynamic knee valgus. Background A deficiency in the neuromuscular control of the hip has been identified as a key risk factor for noncontact anterior cruciate ligament (ACL) injury in postpubescent females. This deficiency can manifest as a valgus knee alignment during tasks involving hip and knee flexion. There are currently no scientifically tested methods to screen for dynamic knee valgus in the clinic or on the field. Methods Three physiotherapists used observational risk screening guidelines to rate 40 adolescent female soccer players according to their risk of ACL injury. The rating was based on the amount of dynamic knee valgus observed on a drop-jump landing. Ratings were evaluated for intrarater and interrater agreement using kappa coefficients. Sensitivity and specificity of ratings were evaluated by comparing observational ratings with measurements obtained using 3-dimensional (3D) motion analysis. Results Kappa coefficients for intrarater and interrater agreement ranged from 0.75 to 0.85, indicating that ratings were reasonably consistent over time and between physiotherapists. Sensitivity values were inadequate, ranging from 67% to 87%. This indicated that raters failed to detect up to a third of “truly high risk” individuals. Specificity values ranged from 60% to 72%, which was considered adequate for the purposes of the screen. Conclusion Observational risk screening is a practical and cost-effective method of screening for ACL injury risk. Rater agreement and specificity were acceptable for this method but sensitivity was not. To detect a greater proportion of individuals at risk of ACL injury, coaches and clinicians should ensure that they include additional tests for other high-risk characteristics in their screening protocols. PMID:19721212
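The agreement and validity statistics reported in this record follow standard definitions; a minimal sketch (illustrative helper functions, not the study's code), with 1 = rated high risk and 0 = rated low risk:

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary raters (1 = rated high risk)."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = sum(rater_a) / n, sum(rater_b) / n
    p_exp = pa * pb + (1 - pa) * (1 - pb)  # agreement expected by chance
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)

def sensitivity_specificity(screen, gold):
    """Compare screening ratings against a gold standard (the role played
    here by 3D motion analysis): sensitivity = share of truly high-risk
    individuals flagged, specificity = share of truly low-risk cleared."""
    tp = sum(s == 1 and g == 1 for s, g in zip(screen, gold))
    tn = sum(s == 0 and g == 0 for s, g in zip(screen, gold))
    fp = sum(s == 1 and g == 0 for s, g in zip(screen, gold))
    fn = sum(s == 0 and g == 1 for s, g in zip(screen, gold))
    return tp / (tp + fn), tn / (tn + fp)
```

On this scale, the study's kappa range of 0.75 to 0.85 corresponds to substantial agreement, while a sensitivity of 67% means a third of truly high-risk players were missed.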

  14. Control of Risks Through the Use of Procedures: A Method for Evaluating the Change in Risk

    NASA Technical Reports Server (NTRS)

    Praino, Gregory T.; Sharit, Joseph

    2010-01-01

not. The model used for describing the Failure Likelihood considers how well a task was established by evaluating that task on five components. The components selected to define a well-established task are: that it be defined, that it be assigned to someone capable, that that person be trained appropriately, that the actions be organized to enable proper completion, and that some form of independent monitoring be performed. Validation of the method was based on the information provided by a group of experts in Space Shuttle ground processing when they were presented with 5 scenarios that each identified a clause from a procedure. For each scenario, they recorded their perception of how important the associated rule was and how likely it was to fail. They then rated the components of Control Value and Failure Likelihood for all the scenarios. The order in which each reviewer ranked the scenarios' Control Value and Failure Likelihood was compared to the order in which they ranked the scenarios for each of the associated components: inevitability and opportunity for Control Value, and definition, assignment, training, organization, and monitoring for Failure Likelihood. This order comparison showed how the components contributed to a relative relationship with the substitute risk element. With the relationship established for Space Shuttle ground processing, this method can be used to gauge whether the introduction or removal of a particular rule will increase or decrease the risk associated with the hazard it is intended to control.

  15. Fate and risk of atrazine and sulfentrazone to nontarget species at an agriculture site.

    PubMed

    Thorngren, Jordan L; Harwood, Amanda D; Murphy, Tracye M; Huff Hartz, Kara E; Fung, Courtney Y; Lydy, Michael J

    2017-05-01

The present study evaluated the risk associated with the application and co-occurrence of 2 herbicides, atrazine and sulfentrazone, applied to a 32-ha corn and soybean rotational field. Field concentrations of the compounds were measured in soil, runoff water, and groundwater, with peak mean atrazine and sulfentrazone concentrations found in the soil (144 ng/g dry wt and 318 ng/g dry wt, respectively). Individual and mixture laboratory bioassays were conducted to determine the effects of atrazine and sulfentrazone on the survival of Daphnia magna and Pimephales promelas, the germination of Lactuca sativa, and the growth of Pseudokirchneriella subcapitata and Lemna minor. Pseudokirchneriella subcapitata and L. minor were the most susceptible species tested, and the effects of the herbicides in mixtures on growth best fit an independent action model. Risk quotients and margin of safety at 10% (MOS10) values were used to estimate risk and were calculated using runoff water concentrations. The MOS10 values were more sensitive than risk quotients in estimating risk. The MOS10 value for sulfentrazone runoff water concentration effects on P. subcapitata was 7.8, and for L. minor was 1.1, with MOS10 values < 1 indicating potential risk. Overall, the environmentally relevant concentrations fell below the effect concentrations; therefore, atrazine and sulfentrazone posed little to no risk to the nontarget species tested. Environ Toxicol Chem 2017;36:1301-1310. © 2016 SETAC.
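The two screening ratios used in this record are simple quotients; a minimal sketch under the usual conventions (the exact effect endpoints paired with each ratio in the study are an assumption here):

```python
def risk_quotient(exposure_conc, effect_conc):
    """RQ = measured environmental concentration / effect concentration.
    RQ >= 1 flags potential risk under the usual screening convention."""
    return exposure_conc / effect_conc

def margin_of_safety_10(ec10, exposure_conc):
    """MOS10 = EC10 / exposure concentration; values < 1 mean exposure
    exceeds the concentration causing a 10% effect (potential risk)."""
    return ec10 / exposure_conc
```

Both ratios compare the same two quantities, but MOS10 anchors the comparison to a low-effect (EC10) benchmark, which is why it reads as the more sensitive screen.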

  16. Modeling perceptions of climatic risk in crop production.

    PubMed

    Reinmuth, Evelyn; Parker, Phillip; Aurbacher, Joachim; Högy, Petra; Dabbert, Stephan

    2017-01-01

    In agricultural production, land-use decisions are components of economic planning that result in the strategic allocation of fields. Climate variability represents an uncertainty factor in crop production. Considering yield impact, climatic influence is perceived during and evaluated at the end of crop production cycles. In practice, this information is then incorporated into planning for the upcoming season. This process contributes to attitudes toward climate-induced risk in crop production. In the literature, however, the subjective valuation of risk is modeled as a risk attitude toward variations in (monetary) outcomes. Consequently, climatic influence may be obscured by political and market influences so that risk perceptions during the production process are neglected. We present a utility concept that allows the inclusion of annual risk scores based on mid-season risk perceptions that are incorporated into field-planning decisions. This approach is exemplified and implemented for winter wheat production in the Kraichgau, a region in Southwest Germany, using the integrated bio-economic simulation model FarmActor and empirical data from the region. Survey results indicate that a profitability threshold for this crop, the level of "still-good yield" (sgy), is 69 dt ha-1 (regional mean Kraichgau sample) for a given season. This threshold governs the monitoring process and risk estimators. We tested the modeled estimators against simulation results using ten projected future weather time series for winter wheat production. The mid-season estimators generally proved to be effective. This approach can be used to improve the modeling of planning decisions by providing a more comprehensive evaluation of field-crop response to climatic changes from an economic risk point of view. 
The methodology further provides economic insight in an agrometeorological context where prices for crops or inputs are lacking, but farmer attitudes toward risk should still be included in

  17. Modeling perceptions of climatic risk in crop production

    PubMed Central

    Parker, Phillip; Aurbacher, Joachim; Högy, Petra; Dabbert, Stephan

    2017-01-01

    In agricultural production, land-use decisions are components of economic planning that result in the strategic allocation of fields. Climate variability represents an uncertainty factor in crop production. Considering yield impact, climatic influence is perceived during and evaluated at the end of crop production cycles. In practice, this information is then incorporated into planning for the upcoming season. This process contributes to attitudes toward climate-induced risk in crop production. In the literature, however, the subjective valuation of risk is modeled as a risk attitude toward variations in (monetary) outcomes. Consequently, climatic influence may be obscured by political and market influences so that risk perceptions during the production process are neglected. We present a utility concept that allows the inclusion of annual risk scores based on mid-season risk perceptions that are incorporated into field-planning decisions. This approach is exemplified and implemented for winter wheat production in the Kraichgau, a region in Southwest Germany, using the integrated bio-economic simulation model FarmActor and empirical data from the region. Survey results indicate that a profitability threshold for this crop, the level of “still-good yield” (sgy), is 69 dt ha-1 (regional mean Kraichgau sample) for a given season. This threshold governs the monitoring process and risk estimators. We tested the modeled estimators against simulation results using ten projected future weather time series for winter wheat production. The mid-season estimators generally proved to be effective. This approach can be used to improve the modeling of planning decisions by providing a more comprehensive evaluation of field-crop response to climatic changes from an economic risk point of view. The methodology further provides economic insight in an agrometeorological context where prices for crops or inputs are lacking, but farmer attitudes toward risk should still be included

  18. Illustrative case using the RISK21 roadmap and matrix: prioritization for evaluation of chemicals found in drinking water

    PubMed Central

    Wolf, Douglas C.; Bachman, Ammie; Barrett, Gordon; Bellin, Cheryl; Goodman, Jay I.; Jensen, Elke; Moretto, Angelo; McMullin, Tami; Pastoor, Timothy P.; Schoeny, Rita; Slezak, Brian; Wend, Korinna; Embry, Michelle R.

    2016-01-01

The HESI-led RISK21 effort has developed a framework supporting the use of twenty-first century technology in obtaining and using information for chemical risk assessment. This framework represents a problem formulation-based, exposure-driven, tiered data acquisition approach that leads to an informed decision on human health safety to be made when sufficient evidence is available. It provides a transparent and consistent approach to evaluate information in order to maximize the ability of assessments to inform decisions and to optimize the use of resources. To demonstrate the application of the framework’s roadmap and matrix, this case study evaluates a large number of chemicals that could be present in drinking water. The focus is to prioritize which of these should be considered for human health risk as individual contaminants. The example evaluates 20 potential drinking water contaminants, using the tiered RISK21 approach in combination with graphical representation of information at each step, using the RISK21 matrix. Utilizing the framework, 11 of the 20 chemicals were assigned low priority based on available exposure data alone, which demonstrated that exposure was extremely low. The remaining nine chemicals were further evaluated, using refined estimates of toxicity based on readily available data, with three deemed high priority for further evaluation. In the present case study, it was determined that the greatest value of additional information would be from improved exposure models and not from additional hazard characterization. PMID:26451723

  19. Illustrative case using the RISK21 roadmap and matrix: prioritization for evaluation of chemicals found in drinking water.

    PubMed

    Wolf, Douglas C; Bachman, Ammie; Barrett, Gordon; Bellin, Cheryl; Goodman, Jay I; Jensen, Elke; Moretto, Angelo; McMullin, Tami; Pastoor, Timothy P; Schoeny, Rita; Slezak, Brian; Wend, Korinna; Embry, Michelle R

    2016-01-01

    The HESI-led RISK21 effort has developed a framework supporting the use of twenty-first century technology in obtaining and using information for chemical risk assessment. This framework represents a problem formulation-based, exposure-driven, tiered data acquisition approach that leads to an informed decision on human health safety to be made when sufficient evidence is available. It provides a transparent and consistent approach to evaluate information in order to maximize the ability of assessments to inform decisions and to optimize the use of resources. To demonstrate the application of the framework's roadmap and matrix, this case study evaluates a large number of chemicals that could be present in drinking water. The focus is to prioritize which of these should be considered for human health risk as individual contaminants. The example evaluates 20 potential drinking water contaminants, using the tiered RISK21 approach in combination with graphical representation of information at each step, using the RISK21 matrix. Utilizing the framework, 11 of the 20 chemicals were assigned low priority based on available exposure data alone, which demonstrated that exposure was extremely low. The remaining nine chemicals were further evaluated, using refined estimates of toxicity based on readily available data, with three deemed high priority for further evaluation. In the present case study, it was determined that the greatest value of additional information would be from improved exposure models and not from additional hazard characterization.

  20. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
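The tutorial's worked examples use MATLAB and R; an analogous embarrassingly parallel sketch in Python (a hypothetical toy simulation, not code from the article) distributes independent replications across worker processes:

```python
import random
from multiprocessing import Pool

def simulate_once(seed):
    """One independent replication of a toy risk simulation:
    total annual loss as the sum of 100 random shocks."""
    rng = random.Random(seed)  # per-replication RNG keeps runs reproducible
    return sum(rng.gauss(0, 1) for _ in range(100))

def run_parallel(n_reps, workers=4):
    """Distribute seeds across a pool of worker processes. Replications
    share no state, so the job is embarrassingly parallel."""
    with Pool(workers) as pool:
        return pool.map(simulate_once, range(n_reps))

if __name__ == "__main__":
    losses = run_parallel(200, workers=2)
    print(len(losses))
```

Because the replications are independent, speedup is roughly linear in the number of workers until process-startup and communication overhead dominate, which mirrors the article's discussion of when parallelization pays off.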

  1. Risk Assessment of Groundwater Contamination: A Multilevel Fuzzy Comprehensive Evaluation Approach Based on DRASTIC Model

    PubMed Central

    Zhang, Yan; Zhong, Ming

    2013-01-01

Groundwater contamination is a serious threat to water supply. Risk assessment of groundwater contamination is an effective way to protect the safety of groundwater resources. Groundwater is a complex and fuzzy system with many uncertainties, impacted by different geological and hydrological factors. In order to deal with the uncertainty in the risk assessment of groundwater contamination, we propose an approach that integrates the analytic hierarchy process and fuzzy comprehensive evaluation. Firstly, the risk factors of groundwater contamination are identified by the source-pathway-receptor-consequence method, and a corresponding index system of risk assessment based on the DRASTIC model is established. Due to the complexity of the transitions between possible pollution risks and the uncertainties of the factors, the analytic hierarchy process is applied to determine the weights of each factor, and fuzzy set theory is adopted to calculate the membership degrees of each factor. Finally, a case study is presented to illustrate and test this methodology. It is concluded that the proposed approach integrates the advantages of both the analytic hierarchy process and fuzzy comprehensive evaluation, providing a more flexible and reliable way to deal with the linguistic uncertainty and mechanism uncertainty in groundwater contamination without losing important information. PMID:24453883
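The aggregation step at the heart of such an approach can be sketched as a weighted fuzzy operator: AHP supplies the factor weights, fuzzy set theory supplies a factor-by-grade membership matrix, and their product gives the membership of the site in each risk grade. The numbers below are illustrative, not values from the paper:

```python
def fuzzy_comprehensive_evaluation(weights, membership):
    """Weighted-average fuzzy operator B = W . R.
    weights: AHP-derived factor weights (sum to 1).
    membership: one row per factor; columns are membership degrees
    in each risk grade (e.g. low / medium / high)."""
    n_grades = len(membership[0])
    return [sum(w * row[j] for w, row in zip(weights, membership))
            for j in range(n_grades)]

# Hypothetical AHP weights for three DRASTIC-style factors
weights = [0.5, 0.3, 0.2]
membership = [
    [0.7, 0.2, 0.1],  # factor 1: mostly "low risk"
    [0.1, 0.6, 0.3],  # factor 2: mostly "medium risk"
    [0.2, 0.3, 0.5],  # factor 3: mostly "high risk"
]
b = fuzzy_comprehensive_evaluation(weights, membership)
# the grade with the highest membership degree is the overall assessment
```

Because the memberships are carried through to the end rather than collapsed early, no information about borderline grades is lost until the final maximum-membership decision.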

  2. Evaluation of the carcinogenic risks at the influence of POPs.

    PubMed

    Nazhmetdinova, Aiman; Kassymbayev, Adlet; Chalginbayeva, Altinay

    2017-12-20

Kazakhstan, and Kyzylorda oblast in particular, is included in the list of environmentally vulnerable countries. This is due to its geographical, spatial-temporal, and socioeconomic features. As part of the program "Integrated approaches in the management of public health in the Aral region", we carried out an expert evaluation of many samples of natural media and food products. Samples were selected in accordance with sampling procedures specified in regulatory documents by specialists of the Pesticide Toxicology Laboratory, which is accredited by the State Standard of the Republic of Kazakhstan for compliance with ST RK ISO/IEC 17025-2007, "General requirements for the competence of testing and calibration laboratories". A gas chromatograph was used for the determination of residues of organochlorine pesticides. The determination of dioxins and polychlorinated biphenyls was conducted on a gas chromatograph-mass spectrometer with a quadrupole detector produced by Agilent, USA. To assess the risk, we carried out mathematical calculations according to the guidance on risk from chemical pollutants (No. P 2.1.10.1920-04, Russia). Calculation of the carcinogenic risk was carried out using data on the size of the exposure and values of carcinogenic potency factors (slope factor and unit risk). The evaluation of persistent organic pollutants (POPs), based on the previous research results concerning water, soil, and food products, was conducted in five population settlements in Kyzylorda oblast: the villages of Ayteke bi, Zhalagash, Zhosaly, and Shieli, and the town of Aralsk. Pollution of environmental objects with POPs, assessed by means of exposure and evaluation of the carcinogenic risk to human health, is consistent with statistical reporting on morbidity in Kyzylorda oblast, such as diseases of the skin and subcutaneous tissue, endocrine system diseases, pregnancy complications, etc. The obtained carcinogenic risk levels, which were calculated for the first time in the Republic of
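The carcinogenic-risk calculation described (exposure combined with carcinogenic potency factors) follows the standard linear low-dose form; a minimal sketch, with illustrative numbers rather than values from the study:

```python
def lifetime_cancer_risk(chronic_daily_intake, slope_factor):
    """Excess lifetime cancer risk under the linear low-dose model:
    risk = CDI (mg/kg-day) x slope factor ((mg/kg-day)^-1).
    The approximation holds for small risks (well below 1)."""
    return chronic_daily_intake * slope_factor

# Hypothetical example: intake 1e-4 mg/kg-day, slope factor 0.34 (mg/kg-day)^-1
risk = lifetime_cancer_risk(1e-4, 0.34)
```

The unit-risk form mentioned in the record works the same way, with an air or water concentration in place of the ingested dose.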

  3. Risk Assessment of Alzheimer's Disease using the Information Diffusion Model from Structural Magnetic Resonance Imaging.

    PubMed

    Beheshti, Iman; Olya, Hossain G T; Demirel, Hasan

    2016-04-05

Recently, automatic risk assessment methods have become a target for the detection of Alzheimer's disease (AD) risk. This study aims to develop an automatic computer-aided AD diagnosis technique for risk assessment of AD using information diffusion theory. Information diffusion is a set-valued fuzzy mathematics method used for risk assessment of natural phenomena, which handles fuzziness (uncertainty) and incompleteness. Data were obtained from voxel-based morphometry analysis of structural magnetic resonance imaging. The information diffusion model results revealed that the risk of AD increases with a reduction of the normalized gray matter ratio (p > 0.5, normalized gray matter ratio <40%). The information diffusion model results were evaluated by calculating the correlation with two traditional risk assessments of AD, the Mini-Mental State Examination and the Clinical Dementia Rating. The correlation results revealed that the information diffusion model findings were in line with Mini-Mental State Examination and Clinical Dementia Rating results. Application of the information diffusion model contributes to the computerization of risk assessment of AD, which has a practical implication for the early detection of AD.
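In its common "normal diffusion" form, the information diffusion estimator spreads each observation over a discrete grid with a Gaussian kernel and normalizes the result, which is what lets it extract a risk estimate from small, incomplete samples. A minimal sketch (the grid, bandwidth, and sample values below are illustrative, not from the study):

```python
import math

def normal_diffusion(samples, grid, h):
    """Spread each observation over the grid points with a Gaussian kernel
    of bandwidth h (the diffusion coefficient), then normalize so the
    result is a probability distribution over the grid."""
    raw = [sum(math.exp(-((u - x) ** 2) / (2 * h * h)) for x in samples)
           for u in grid]
    total = sum(raw)
    return [v / total for v in raw]

# Hypothetical example: diffuse two normalized gray matter ratios onto a grid
probs = normal_diffusion([0.35, 0.42], [0.30, 0.35, 0.40, 0.45], 0.03)
```

Each sample contributes mass to every grid point, so even a handful of patients yields a smooth risk profile rather than isolated spikes.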

  4. Forest landowner decisions and the value of information under fire risk.

    Treesearch

    Gregory S. Amacher; Arun S. Malik; Robert G. Haight

    2005-01-01

    We estimate the value of three types of information about fire risk to a nonindustrial forest landowner: the relationship between fire arrival rates and stand age, the magnitude of fire arrival rates, and the efficacy of fuel reduction treatment. Our model incorporates planting density and the level and timing of fuel reduction treatment as landowner decisions. These...

  5. [The evaluation of color vision and its diagnostic value in predicting the risk of diabetic retinopathy in patients with glucose metabolism disorders].

    PubMed

    Jończyk-Skórka, Katarzyna; Kowalski, Jan

    2017-07-21

The aim of the study was to evaluate color vision and its diagnostic value in predicting the risk of diabetic retinopathy in patients with glucose metabolism disorders. The study involved 197 people, 92 women and 105 men, aged 63.21 ± 8.74 years. In order to assess glucose metabolism disorders, patients were divided into three groups. The first group (DM) consisted of 60 people (16 women and 44 men aged 61.92 ± 8.46 years) with type 2 diabetes. The second group (IFG IGT) consisted of 67 people (35 women and 32 men aged 65 ± 8.5 years) who had been diagnosed with impaired fasting glucose or impaired glucose tolerance. The third group, the control group (K), consisted of 70 people (41 women and 29 men aged 62.6 ± 9.06 years) who were healthy individuals. In order to assess diabetic retinopathy, the study population was divided into two groups. The first group (BZ) consisted of 177 patients (84 women and 93 men aged 62.9 ± 8.78 years) without diabetic retinopathy. The second group (NPDR) consisted of 20 patients (8 women and 12 men aged 65.95 ± 8.17 years) with diabetic retinopathy. Glucose metabolism disorders were diagnosed with the oral glucose tolerance test (OGTT). Evaluation of retinopathy was based on eye examination. All patients underwent the binocular Farnsworth-Munsell 100 Hue color vision test (the test result is a Total Error Score - TES). In the healthy control group (K) there were fewer patients with diabetic retinopathy (p = 0.0101) and fewer patients with an abnormal color vision test (p = 0.0001) than in the other groups. The majority of patients in the K group had generalized abnormalities of color vision, while the other groups demonstrated tritanomaly (p = 0.0018). It was found that the sTES value adequately distinguishes group K from groups IFG, IGT, and DM (AUC = 0.673), group K from group DM (AUC = 0.701), and group K from group IFG IGT (AUC = 0.648); sTES does not differentiate groups IGT, IFG, and DM (AUC = 0.563). It was shown that in IGT, IFG group s

  6. Incremental Value of Repeated Risk Factor Measurements for Cardiovascular Disease Prediction in Middle-Aged Korean Adults: Results From the NHIS-HEALS (National Health Insurance System-National Health Screening Cohort).

    PubMed

    Cho, In-Jeong; Sung, Ji Min; Chang, Hyuk-Jae; Chung, Namsik; Kim, Hyeon Chang

    2017-11-01

    Increasing evidence suggests that repeatedly measured cardiovascular disease (CVD) risk factors may have an additive predictive value compared with single measured levels. Thus, we evaluated the incremental predictive value of incorporating periodic health screening data for CVD prediction in a large nationwide cohort with periodic health screening tests. A total of 467 708 persons aged 40 to 79 years and free from CVD were randomly divided into development (70%) and validation subcohorts (30%). We developed 3 different CVD prediction models: a single measure model using single time point screening data; a longitudinal average model using average risk factor values from periodic screening data; and a longitudinal summary model using average values and the variability of risk factors. The development subcohort included 327 396 persons who had 3.2 health screenings on average and 25 765 cases of CVD over 12 years. The C statistics (95% confidence interval [CI]) for the single measure, longitudinal average, and longitudinal summary models were 0.690 (95% CI, 0.682-0.698), 0.695 (95% CI, 0.687-0.703), and 0.752 (95% CI, 0.744-0.760) in men and 0.732 (95% CI, 0.722-0.742), 0.735 (95% CI, 0.725-0.745), and 0.790 (95% CI, 0.780-0.800) in women, respectively. The net reclassification index from the single measure model to the longitudinal average model was 1.78% in men and 1.33% in women, and the index from the longitudinal average model to the longitudinal summary model was 32.71% in men and 34.98% in women. Using averages of repeatedly measured risk factor values modestly improves CVD predictability compared with single measurement values. Incorporating the average and variability information of repeated measurements can lead to great improvements in disease prediction. URL: https://www.clinicaltrials.gov. Unique identifier: NCT02931500. © 2017 American Heart Association, Inc.
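The C statistic reported for these models is, for a binary outcome, the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal pairwise sketch (O(n²), fine for illustration; survival C-indices additionally account for censoring, which is omitted here):

```python
from itertools import combinations

def c_statistic(scores, events):
    """Concordance (C) statistic for a binary outcome: the fraction of
    case/non-case pairs in which the case has the higher predicted risk
    (ties count one half)."""
    concordant = ties = pairs = 0
    for i, j in combinations(range(len(scores)), 2):
        if events[i] == events[j]:
            continue  # only case vs. non-case pairs are informative
        pairs += 1
        case, ctrl = (i, j) if events[i] == 1 else (j, i)
        if scores[case] > scores[ctrl]:
            concordant += 1
        elif scores[case] == scores[ctrl]:
            ties += 1
    return (concordant + 0.5 * ties) / pairs
```

A value of 0.5 means no discrimination; the 0.69 to 0.79 range reported above is typical of cardiovascular risk models, and the jump from the longitudinal average model to the longitudinal summary model is what the large NRI reflects.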

  7. A Dynamic Risk Model for Evaluation of Space Shuttle Abort Scenarios

    NASA Technical Reports Server (NTRS)

    Henderson, Edward M.; Maggio, Gaspare; Elrada, Hassan A.; Yazdpour, Sabrina J.

    2003-01-01

The Space Shuttle is an advanced manned launch system with a respectable history of service and a demonstrated level of safety. Recent studies have shown that the Space Shuttle has a relatively low probability of having a failure that is instantaneously catastrophic during nominal flight as compared with many US and international launch systems. However, since the Space Shuttle is a manned system, a number of mission abort contingencies exist, primarily to ensure the safety of the crew during off-nominal situations and to attempt to maintain the integrity of the Orbiter. As the Space Shuttle ascends to orbit, it traverses various intact abort regions, evaluated and planned before the flight, to ensure that the Space Shuttle Orbiter, along with its crew, may be returned intact either to the original launch site or a transoceanic landing site, or returned from a substandard orbit. An intact abort may be initiated by a number of system failures, but the most likely and most challenging abort scenarios are initiated by a premature shutdown of a Space Shuttle Main Engine (SSME). The potential consequences of such a shutdown vary as a function of a number of mission parameters, but all of them may be related to mission time for a specific mission profile. This paper focuses on the Dynamic Abort Risk Evaluation (DARE) model process, applications, and its capability to evaluate the risk of Loss Of Vehicle (LOV) due to the complex systems interactions that occur during Space Shuttle intact abort scenarios. In addition, the paper examines which of the Space Shuttle subsystems are critical to ensuring a successful return of the Space Shuttle Orbiter and crew from such a situation.

  8. Combined Hydrologic (AGWA-KINEROS2) and Hydraulic (HEC2) Modeling for Post-Fire Runoff and Inundation Risk Assessment through a Set of Python Tools

    NASA Astrophysics Data System (ADS)

    Barlow, J. E.; Goodrich, D. C.; Guertin, D. P.; Burns, I. S.

    2016-12-01

Wildfires in the Western United States can alter landscapes by removing vegetation and changing soil properties. These altered landscapes produce more runoff than pre-fire landscapes, which can lead to post-fire flooding that can damage infrastructure and impair natural resources. Resources, structures, historical artifacts, and other assets that could be impacted by increased runoff are considered values at risk. The Automated Geospatial Watershed Assessment tool (AGWA) allows users to quickly set up and execute the Kinematic Runoff and Erosion model (KINEROS2 or K2) in the ESRI ArcMap environment. The AGWA-K2 workflow leverages the visualization capabilities of GIS to facilitate rapid watershed assessments for post-fire planning efforts. High relative change in peak discharge, as simulated by K2, provides a visual and numeric indicator of which channels in the watershed should be evaluated for more detailed analysis, especially if values at risk are within or near a channel. Modeling inundation extent along a channel would provide more specific guidance about risk along that channel. HEC-2 and HEC-RAS can be used for hydraulic modeling efforts at the reach and river-system scale. These models have been used to address flood boundaries and, accordingly, flood risk. However, data collection and organization for hydraulic models can be time consuming, and therefore a combined hydrologic-hydraulic modeling approach is not often employed for rapid assessments. A simplified approach could streamline this process and provide managers with a simple workflow and tool to perform a quick risk assessment for a single reach. By focusing on a single reach highlighted by a large relative change in peak discharge, data collection efforts can be minimized and the hydraulic computations can be performed to supplement risk analysis. 
The incorporation of hydraulic analysis through a suite of Python tools (as outlined by HEC-2) with AGWA-K2 will allow more rapid

  9. Admission Models for At-Risk Graduate Students in Different Academic Disciplines.

    ERIC Educational Resources Information Center

    Nelson, C. Van; Nelson, Jacquelyn S.; Malone, Bobby G.

    In this study, models were constructed for eight academic areas, including applied sciences, communication sciences, education, physical sciences, life sciences, humanities and arts, psychology, and social sciences, to predict whether or not an at-risk graduate student would be successful in obtaining a master's degree. Records were available for…

  10. Accepting uncertainty, assessing risk: decision quality in managing wildfire, forest resource values, and new technology

    Treesearch

    Jeffrey G. Borchers

    2005-01-01

    The risks, uncertainties, and social conflicts surrounding uncharacteristic wildfire and forest resource values have defied conventional approaches to planning and decision-making. Paradoxically, the adoption of technological innovations such as risk assessment, decision analysis, and landscape simulation models by land management organizations has been limited. The...

  11. Evaluation of a Genetic Risk Score to Improve Risk Prediction for Alzheimer's Disease.

    PubMed

    Chouraki, Vincent; Reitz, Christiane; Maury, Fleur; Bis, Joshua C; Bellenguez, Celine; Yu, Lei; Jakobsdottir, Johanna; Mukherjee, Shubhabrata; Adams, Hieab H; Choi, Seung Hoan; Larson, Eric B; Fitzpatrick, Annette; Uitterlinden, Andre G; de Jager, Philip L; Hofman, Albert; Gudnason, Vilmundur; Vardarajan, Badri; Ibrahim-Verbaas, Carla; van der Lee, Sven J; Lopez, Oscar; Dartigues, Jean-François; Berr, Claudine; Amouyel, Philippe; Bennett, David A; van Duijn, Cornelia; DeStefano, Anita L; Launer, Lenore J; Ikram, M Arfan; Crane, Paul K; Lambert, Jean-Charles; Mayeux, Richard; Seshadri, Sudha

    2016-06-18

Effective prevention of Alzheimer's disease (AD) requires the development of risk prediction tools permitting preclinical intervention. We constructed a genetic risk score (GRS) comprising common genetic variants associated with AD, evaluated its association with incident AD and assessed its capacity to improve risk prediction over traditional models based on age, sex, education, and APOEɛ4. In eight prospective cohorts included in the International Genomics of Alzheimer's Project (IGAP), we derived a weighted sum of risk alleles from the 19 top SNPs reported by the IGAP GWAS in participants aged 65 and older without prevalent dementia. Hazard ratios (HR) of incident AD were estimated in Cox models. Improvement in risk prediction was measured by the difference in C-index (Δ-C), the integrated discrimination improvement (IDI) and continuous net reclassification improvement (NRI>0). Overall, 19,687 participants at risk were included, of whom 2,782 developed AD. The GRS was associated with a 17% increase in AD risk (pooled HR = 1.17; 95% CI = [1.13-1.21] per standard deviation increase in GRS; p-value = 2.86×10^-16). This association was stronger among persons with at least one APOEɛ4 allele (HR_GRS = 1.24; 95% CI = [1.15-1.34]) than in others (HR_GRS = 1.13; 95% CI = [1.08-1.18]; p_interaction = 3.45×10^-2). Risk prediction after seven years of follow-up showed a small improvement when adding the GRS to age, sex, APOEɛ4, and education (Δ-C-index = 0.0043 [0.0019-0.0067]). Similar patterns were observed for IDI and NRI>0. In conclusion, a risk score incorporating common genetic variation outside the APOEɛ4 locus improved AD risk prediction and may facilitate risk stratification for prevention trials.
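The weighted sum of risk alleles described above is, per person, just allele counts times per-SNP effect sizes; a minimal sketch (the weights here are hypothetical, not the IGAP estimates):

```python
def genetic_risk_score(allele_counts, weights):
    """allele_counts: 0/1/2 copies of the risk allele at each SNP.
    weights: per-SNP effect sizes, e.g. log odds ratios from a GWAS."""
    return sum(g * w for g, w in zip(allele_counts, weights))

def standardize(scores):
    """Scale cohort scores to mean 0, SD 1, so hazard ratios can be
    reported per standard-deviation increase in GRS."""
    m = sum(scores) / len(scores)
    sd = (sum((s - m) ** 2 for s in scores) / len(scores)) ** 0.5
    return [(s - m) / sd for s in scores]
```

Standardizing within the cohort is what makes the reported HR of 1.17 interpretable as the risk increase per one-standard-deviation rise in the score.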

  12. Expected value information improves financial risk taking across the adult life span.

    PubMed

    Samanez-Larkin, Gregory R; Wagner, Anthony D; Knutson, Brian

    2011-04-01

    When making decisions, individuals must often compensate for cognitive limitations, particularly in the face of advanced age. Recent findings suggest that age-related variability in striatal activity may increase financial risk-taking mistakes in older adults. In two studies, we sought to further characterize neural contributions to optimal financial risk taking and to determine whether decision aids could improve financial risk taking. In Study 1, neuroimaging analyses revealed that individuals whose mesolimbic activation correlated with the expected value estimates of a rational actor made more optimal financial decisions. In Study 2, presentation of expected value information improved decision making in both younger and older adults, but the addition of a distracting secondary task had little impact on decision quality. Remarkably, provision of expected value information improved the performance of older adults to match that of younger adults at baseline. These findings are consistent with the notion that mesolimbic circuits play a critical role in optimal choice, and imply that providing simplified information about expected value may improve financial risk taking across the adult life span.
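The "rational actor" benchmark referenced above is simply expected value maximization: each gamble's EV is the probability-weighted sum of its outcomes, and the optimal choice has the highest EV. The gambles below are hypothetical.

```python
# Minimal expected-value (EV) sketch: EV = sum of probability * payoff,
# and the EV-maximizing option is the "optimal" financial choice.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

gambles = {
    "safe":  [(1.0, 2.00)],                # certain $2
    "risky": [(0.5, 5.00), (0.5, 0.00)],   # 50/50 chance of $5 or nothing
}
evs = {name: expected_value(g) for name, g in gambles.items()}
best = max(evs, key=evs.get)   # "risky": EV $2.50 beats the certain $2.00
```

Presenting the `evs` values alongside the options is essentially the decision aid tested in Study 2.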

  13. Role of mathematical models in assessment of risk and in attempts to define management strategy.

    PubMed

    Flamm, W G; Winbush, J S

    1984-06-01

    Risk assessment of food-borne carcinogens is becoming a common practice at FDA. Actual risk is not being estimated, only the upper limit of risk. The risk assessment process involves a large number of steps and assumptions, many of which affect the numerical value estimated. The mathematical model which is to be applied is only one of the factors which affect these numerical values. To fulfill the policy objective of using the "worst plausible case" in estimating the upper limit of risk, recognition needs to be given to a proper balancing of assumptions and decisions. Interaction between risk assessors and risk managers should avoid making or giving the appearance of making specific technical decisions such as the choice of the mathematical model. The importance of this emerging field is too great to jeopardize it by inappropriately mixing scientific judgments with policy judgments. The risk manager should understand fully the points and range of uncertainty involved in arriving at the estimates of risk which must necessarily affect the choice of the policy or regulatory options available.

  14. Development of a Value Inquiry Model in Biology Education.

    ERIC Educational Resources Information Center

    Jeong, Eun-Young; Kim, Young-Soo

    2000-01-01

    Points out the rapid advances in biology, increasing bioethical issues, and how students need to make rational decisions. Introduces a value inquiry model development that includes identifying and clarifying value problems; understanding biological knowledge related to conflict situations; considering, selecting, and evaluating each alternative;…

  15. Cultural ecosystem services of mountain regions: Modelling the aesthetic value.

    PubMed

    Schirpke, Uta; Timmermann, Florian; Tappeiner, Ulrike; Tasser, Erich

    2016-10-01

Mountain regions meet an increasing demand for pleasant landscapes, offering many cultural ecosystem services to both their residents and tourists. As a result of global change, land managers and policy makers are faced with changes to this landscape and need efficient evaluation techniques to assess cultural ecosystem services. This study provides a spatially explicit modelling approach to estimating aesthetic landscape values by relating spatial landscape patterns to human perceptions via a photo-based survey. The respondents attributed higher aesthetic values to the Alpine landscape than to areas with settlements, infrastructure or intensive agricultural use. The aesthetic value of two study areas in the Central Alps (Stubai Valley, Austria and Vinschgau, Italy) was modelled for 10,215 viewpoints along hiking trails according to current land cover and a scenario considering the spontaneous reforestation of abandoned land. Viewpoints with high aesthetic values were mainly located at high altitude, allowing long vistas, and included views of lakes or glaciers, whereas the lowest values were for viewpoints close to streets and in narrow valleys with little view. The aesthetic values of the reforestation scenario decreased mainly at higher altitudes, but the whole area was affected, reducing aesthetic value by almost 10% in Stubai Valley and 15% in Vinschgau. Our proposed modelling approach allows the estimation of aesthetic values in spatial and qualitative terms for most viewpoints in the European Alps. The resulting maps can be used as information and a basis for discussion by stakeholders, to support the decision-making process and landscape planning. This paper also discusses the role of mountain farming in preserving an attractive landscape and related cultural values.

  16. Cultural ecosystem services of mountain regions: Modelling the aesthetic value

    PubMed Central

    Schirpke, Uta; Timmermann, Florian; Tappeiner, Ulrike; Tasser, Erich

    2016-01-01

Mountain regions meet an increasing demand for pleasant landscapes, offering many cultural ecosystem services to both their residents and tourists. As a result of global change, land managers and policy makers are faced with changes to this landscape and need efficient evaluation techniques to assess cultural ecosystem services. This study provides a spatially explicit modelling approach to estimating aesthetic landscape values by relating spatial landscape patterns to human perceptions via a photo-based survey. The respondents attributed higher aesthetic values to the Alpine landscape than to areas with settlements, infrastructure or intensive agricultural use. The aesthetic value of two study areas in the Central Alps (Stubai Valley, Austria and Vinschgau, Italy) was modelled for 10,215 viewpoints along hiking trails according to current land cover and a scenario considering the spontaneous reforestation of abandoned land. Viewpoints with high aesthetic values were mainly located at high altitude, allowing long vistas, and included views of lakes or glaciers, whereas the lowest values were for viewpoints close to streets and in narrow valleys with little view. The aesthetic values of the reforestation scenario decreased mainly at higher altitudes, but the whole area was affected, reducing aesthetic value by almost 10% in Stubai Valley and 15% in Vinschgau. Our proposed modelling approach allows the estimation of aesthetic values in spatial and qualitative terms for most viewpoints in the European Alps. The resulting maps can be used as information and a basis for discussion by stakeholders, to support the decision-making process and landscape planning. This paper also discusses the role of mountain farming in preserving an attractive landscape and related cultural values. PMID:27482152

  17. A MELD-based model to determine risk of mortality among patients with acute variceal bleeding.

    PubMed

    Reverter, Enric; Tandon, Puneeta; Augustin, Salvador; Turon, Fanny; Casu, Stefania; Bastiampillai, Ravin; Keough, Adam; Llop, Elba; González, Antonio; Seijo, Susana; Berzigotti, Annalisa; Ma, Mang; Genescà, Joan; Bosch, Jaume; García-Pagán, Joan Carles; Abraldes, Juan G

    2014-02-01

Patients with cirrhosis with acute variceal bleeding (AVB) have high mortality rates (15%-20%). Previously described models are seldom used to determine prognoses of these patients, partially because they have not been validated externally and because they include subjective variables, such as bleeding during endoscopy and Child-Pugh score, which are evaluated inconsistently. We aimed to improve determination of risk for patients with AVB. We analyzed data collected from 178 patients with cirrhosis (Child-Pugh scores of A, B, and C: 15%, 57%, and 28%, respectively) and esophageal AVB who received standard therapy from 2007 through 2010. We tested the performance (discrimination and calibration) of previously described models, including the model for end-stage liver disease (MELD), and developed a new MELD calibration to predict the mortality of patients within 6 weeks of presentation with AVB. MELD-based predictions were validated in cohorts of patients from Canada (n = 240) and Spain (n = 221). Among study subjects, the 6-week mortality rate was 16%. MELD was the best model in terms of discrimination; it was recalibrated to predict the 6-week mortality rate with logistic regression (logit = −5.312 + 0.207 × MELD; bootstrapped R² = 0.3295). MELD values of 19 or greater predicted 20% or greater mortality, whereas MELD scores less than 11 predicted less than 5% mortality. The model performed well for patients from Canada at all risk levels. In the Spanish validation set, in which all patients were treated with banding ligation, MELD predictions were accurate up to the 20% risk threshold. We developed a MELD-based model that accurately predicts mortality among patients with AVB, based on objective variables available at admission. This model could be useful to evaluate the efficacy of new therapies and stratify patients in randomized trials. Copyright © 2014 AGA Institute. Published by Elsevier Inc. All rights reserved.
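The recalibrated MELD model quoted in the abstract is an ordinary logistic equation, logit(p) = −5.312 + 0.207 × MELD, so the reported thresholds can be verified directly:

```python
# 6-week mortality risk from the recalibrated MELD logistic model.
import math

def six_week_mortality(meld):
    logit = -5.312 + 0.207 * meld
    return 1.0 / (1.0 + math.exp(-logit))

p19 = six_week_mortality(19)   # ≈ 0.201, consistent with the >= 20% threshold
p10 = six_week_mortality(10)   # ≈ 0.038, below the < 5% low-risk threshold
```

Inverting the same equation gives the cut-points the authors report: the predicted risk crosses 20% at a MELD of about 19 and stays under 5% below a MELD of 11.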

  18. Evaluation of an inpatient fall risk screening tool to identify the most critical fall risk factors in inpatients.

    PubMed

    Hou, Wen-Hsuan; Kang, Chun-Mei; Ho, Mu-Hsing; Kuo, Jessie Ming-Chuan; Chen, Hsiao-Lien; Chang, Wen-Yin

    2017-03-01

To evaluate the accuracy of the inpatient fall risk screening tool and to identify the most critical fall risk factors in inpatients. Variations exist in several screening tools applied in acute care hospitals for examining risk factors for falls and identifying high-risk inpatients. Secondary data analysis. A subset of inpatient data for the period from June 2011-June 2014 was extracted from the nursing information system and adverse event reporting system of an 818-bed teaching medical centre in Taipei. Data were analysed using descriptive statistics, receiver operating characteristic curve analysis and logistic regression analysis. During the study period, 205 fallers and 37,232 nonfallers were identified. The results revealed that the inpatient fall risk screening tool (cut-off point of ≥3) had a low sensitivity level (60%), satisfactory specificity (87%), a positive predictive value of 2.0% and a negative predictive value of 99%. The receiver operating characteristic curve analysis revealed an area under the curve of 0.805 (sensitivity, 71.8%; specificity, 78%). To increase the sensitivity values, the Youden index suggests at least 1.5 points to be the most suitable cut-off point for the inpatient fall risk screening tool. Multivariate logistic regression analysis revealed a considerably increased fall risk in patients with impaired balance and impaired elimination. The fall risk factor was also significantly associated with days of hospital stay and with admission to surgical wards. The findings can raise awareness about the two most critical risk factors for falls among future clinical nurses and other healthcare professionals and thus facilitate the development of fall prevention interventions. This study highlights the need for redefining the cut-off points of the inpatient fall risk screening tool to effectively identify inpatients at a high risk of falls. Furthermore, inpatients with impaired balance and impaired elimination should be closely monitored.
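The cut-off analysis above relies on the Youden index, J = sensitivity + specificity − 1, which selects the operating point that best balances the two. A minimal sketch, using invented sensitivity/specificity pairs for candidate cut-offs (only the 60%/87% pair at a cut-off of 3 is from the abstract):

```python
# Pick the cut-off that maximizes the Youden index J = sens + spec - 1.
def youden_best(points):
    """points: {cutoff: (sensitivity, specificity)}; returns the best cutoff."""
    return max(points, key=lambda c: points[c][0] + points[c][1] - 1)

roc_points = {
    1.0: (0.90, 0.55),   # hypothetical
    1.5: (0.72, 0.78),   # hypothetical, near the operating point quoted above
    3.0: (0.60, 0.87),   # the tool's original cut-off
}
best_cutoff = youden_best(roc_points)   # 1.5 maximizes J in this sketch
```

Here J is 0.45, 0.50, and 0.47 for the three cut-offs, so the lower cut-off of 1.5 wins, mirroring the study's recommendation to lower the threshold.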

  19. Claims-based risk model for first severe COPD exacerbation.

    PubMed

    Stanford, Richard H; Nag, Arpita; Mapel, Douglas W; Lee, Todd A; Rosiello, Richard; Schatz, Michael; Vekeman, Francis; Gauthier-Loiselle, Marjolaine; Merrigan, J F Philip; Duh, Mei Sheng

    2018-02-01

To develop and validate a predictive model for first severe chronic obstructive pulmonary disease (COPD) exacerbation using health insurance claims data, and to validate the controller medication to total COPD treatment (controller plus rescue) ratio (CTR) as a risk measure. A predictive model was developed and validated in 2 managed care databases: the Truven Health MarketScan database and the Reliant Medical Group database. This secondary analysis assessed risk factors, including CTR, during the baseline period (Year 1) to predict risk of severe exacerbation in the at-risk period (Year 2). Patients with COPD who were 40 years or older and who had at least 1 COPD medication dispensed during the year following COPD diagnosis were included. Subjects with severe exacerbations in the baseline year were excluded. Risk factors in the baseline period were included as potential predictors in multivariate analysis. Performance was evaluated using C-statistics. The analysis included 223,824 patients. The greatest risk factors for first severe exacerbation were advanced age, chronic oxygen therapy usage, COPD diagnosis type, dispensing of 4 or more canisters of rescue medication, and having 2 or more moderate exacerbations. A CTR of 0.3 or greater was associated with a 14% lower risk of severe exacerbation. The model performed well, with C-statistics ranging from 0.711 to 0.714. This claims-based risk model can predict the likelihood of first severe COPD exacerbation. The CTR could also potentially be used to target populations at greatest risk for severe exacerbations. This could be relevant for providers and payers in approaches to prevent severe exacerbations and reduce costs.
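The CTR used above is simply controller dispensings divided by all COPD dispensings (controller plus rescue). A small sketch with hypothetical counts:

```python
# Controller-to-total COPD treatment ratio (CTR).
def ctr(controller, rescue):
    total = controller + rescue
    return controller / total if total else 0.0

patient_ctr = ctr(controller=3, rescue=5)   # 3 / 8 = 0.375
lower_risk = patient_ctr >= 0.3             # the 0.3 threshold from the study
```

A patient dispensed mostly rescue medication drops below the 0.3 threshold, which the study associates with higher exacerbation risk.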

  20. Developing a multi-component immune model for evaluating the risk of respiratory illness in athletes.

    PubMed

    Gleeson, Maree; Pyne, David B; Elkington, Lisa J; Hall, Sharron T; Attia, John R; Oldmeadow, Christopher; Wood, Lisa G; Callister, Robin

    2017-01-01

Clinical and laboratory identification of the underlying risk of respiratory illness in athletes has proved problematic. The aim of this study was to determine whether clinical data, combined with immune responses to standardised exercise protocols and genetic cytokine polymorphism status, could identify the risk of respiratory illness (symptoms) in a cohort of highly-trained athletes. Male endurance athletes (n=16; VO2max 66.5 ± 5.1 mL·kg⁻¹·min⁻¹) underwent a clinical evaluation of known risk factors by a physician and comprehensive laboratory analysis of immune responses both at rest and after two cycling ergometer tests: 60 min at 65% VO2max (LONG); and 6 × 3 min intervals at 90% VO2max (INTENSE). Blood tests were performed to determine Epstein-Barr virus (EBV) status, and DNA was genotyped for a panel of cytokine gene polymorphisms. Saliva was collected for measurement of IgA and detection of EBV DNA. Athletes were then followed for 9 months for self-reported episodes of respiratory illness, with confirmation of the underlying cause by a sports physician. No associations with risk of respiratory illness were identified for any parameter assessed in the clinical evaluations. The laboratory parameters associated with an increased risk of respiratory illness in highly-trained athletes were cytokine gene polymorphisms for the high expression of IL-6 and IFN-γ; expression of EBV DNA in saliva; and low salivary IgA concentration. A genetic risk score was developed from the cumulative number of minor alleles for the cytokines evaluated. Athletes prone to recurrent respiratory illness were more likely to have immune disturbances that allow viral reactivation, and a genetic predisposition to pro-inflammatory cytokine responses to intense exercise. Copyright © 2016 International Society of Exercise and Immunology. All rights reserved.

  1. Tryon Trekkers: An Evaluation of a STEM Based Afterschool Program for At-Risk Youth

    NASA Astrophysics Data System (ADS)

    Eckels Anderson, Chessa

This study contributed to the body of research that supports a holistic model of afterschool learning through the design of an afterschool intervention that benefits elementary school students of low socioeconomic status. This qualitative study evaluated a science-focused afterschool curriculum that was designed using principles from Risk and Resiliency Theory, academic motivation theories, and science core ideas from the Next Generation Science Standards, and that drew on environmental education philosophy. The research question of this study is: how does an outdoor and STEM-based afterschool program impact at-risk students' self-efficacy, belonging, and engagement, and their ability to apply conceptual knowledge of environmental science topics? The study collected information about the participants' affective experiences during the intervention using structured and ethnographic observations and semi-structured interviews. Observations and interviews were coded and analyzed to find patterns in participants' responses. Three participant profiles were developed using the structured and ethnographic observations to provide an in-depth understanding of the participant experience. The study also assessed the participants' ability to apply conceptual understanding of the program's science topics by integrating an application-of-conceptual-knowledge task into the curriculum. This task, in the form of a participant project, was assessed using an adapted version of the Portland Metro STEM Partnership's Application of Conceptual Knowledge Rubric. Results showed that participants demonstrated self-efficacy, a sense of belonging, and engagement during the program. Over half of the participants demonstrated a proficient understanding of program concepts. Overall, this holistic afterschool program demonstrated that specific instructional practices and a multi-modal science curriculum helped to support the social and emotional needs of at-risk children.

  2. Rethinking Teacher Evaluation: A Conversation about Statistical Inferences and Value-Added Models

    ERIC Educational Resources Information Center

    Callister Everson, Kimberlee; Feinauer, Erika; Sudweeks, Richard R.

    2013-01-01

    In this article, the authors provide a methodological critique of the current standard of value-added modeling forwarded in educational policy contexts as a means of measuring teacher effectiveness. Conventional value-added estimates of teacher quality are attempts to determine to what degree a teacher would theoretically contribute, on average,…

  3. Prognostic value of heart rate turbulence for risk assessment in patients with unstable angina and non-ST elevation myocardial infarction

    PubMed Central

    Harris, Patricia RE; Stein, Phyllis K; Fung, Gordon L; Drew, Barbara J

    2013-01-01

    Background We sought to examine the prognostic value of heart rate turbulence derived from electrocardiographic recordings initiated in the emergency department for patients with non-ST elevation myocardial infarction (NSTEMI) or unstable angina. Methods Twenty-four-hour Holter recordings were started in patients with cardiac symptoms approximately 45 minutes after arrival in the emergency department. Patients subsequently diagnosed with NSTEMI or unstable angina who had recordings with ≥18 hours of sinus rhythm and sufficient data to compute Thrombolysis In Myocardial Infarction (TIMI) risk scores were chosen for analysis (n = 166). Endpoints were emergent re-entry to the cardiac emergency department and/or death at 30 days and one year. Results In Cox regression models, heart rate turbulence and TIMI risk scores together were significant predictors of 30-day (model chi square 13.200, P = 0.001, C-statistic 0.725) and one-year (model chi square 31.160, P < 0.001, C-statistic 0.695) endpoints, outperforming either measure alone. Conclusion Measurement of heart rate turbulence, initiated upon arrival at the emergency department, may provide additional incremental value in the risk assessment for patients with NSTEMI or unstable angina. PMID:23976860

  4. The role of building models in the evaluation of heat-related risks

    NASA Astrophysics Data System (ADS)

    Buchin, Oliver; Jänicke, Britta; Meier, Fred; Scherer, Dieter; Ziegler, Felix

    2016-04-01

Hazard-risk relationships in epidemiological studies are generally based on the outdoor climate, despite the fact that most of humans' lifetime is spent indoors. By coupling indoor and outdoor climates with a building model, the risk concept developed here can still be based on outdoor conditions while also including exposure to the indoor climate. The influence of non-linear building physics and the impact of air conditioning on heat-related risks can be assessed in a plausible manner using this risk concept. For proof of concept, the proposed risk concept is compared to a traditional risk analysis. As an example, daily and city-wide mortality data for the age group 65 and older in Berlin, Germany, for the years 2001-2010 are used. Four building models of differing complexity are applied in a time-series regression analysis. This study shows that indoor hazard explains the variability in the risk data better than outdoor hazard, depending on the kind of building model. Simplified parameter models include the main non-linear effects and are proposed for the time-series analysis. The concept shows that the definitions of heat events, lag days, and acclimatization in a traditional hazard-risk relationship are influenced by the characteristics of the prevailing building stock.

  5. Uncertainty in surface water flood risk modelling

    NASA Astrophysics Data System (ADS)

    Butler, J. B.; Martin, D. N.; Roberts, E.; Domuah, R.

    2009-04-01

    Two thirds of the flooding that occurred in the UK during summer 2007 was as a result of surface water (otherwise known as ‘pluvial') rather than river or coastal flooding. In response, the Environment Agency and Interim Pitt Reviews have highlighted the need for surface water risk mapping and warning tools to identify, and prepare for, flooding induced by heavy rainfall events. This need is compounded by the likely increase in rainfall intensities due to climate change. The Association of British Insurers has called for the Environment Agency to commission nationwide flood risk maps showing the relative risk of flooding from all sources. At the wider European scale, the recently-published EC Directive on the assessment and management of flood risks will require Member States to evaluate, map and model flood risk from a variety of sources. As such, there is now a clear and immediate requirement for the development of techniques for assessing and managing surface water flood risk across large areas. This paper describes an approach for integrating rainfall, drainage network and high-resolution topographic data using Flowroute™, a high-resolution flood mapping and modelling platform, to produce deterministic surface water flood risk maps. Information is provided from UK case studies to enable assessment and validation of modelled results using historical flood information and insurance claims data. Flowroute was co-developed with flood scientists at Cambridge University specifically to simulate river dynamics and floodplain inundation in complex, congested urban areas in a highly computationally efficient manner. It utilises high-resolution topographic information to route flows around individual buildings so as to enable the prediction of flood depths, extents, durations and velocities. As such, the model forms an ideal platform for the development of surface water flood risk modelling and mapping capabilities. The 2-dimensional component of Flowroute employs

  6. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
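The "decomposition of mean square error" mentioned above can be illustrated with Theil's classic partition, which splits MSE between predictions P and observations O into bias, slope, and random components: MSE = (P̄ − Ō)² + (S_P − r·S_O)² + (1 − r²)·S_O², using population standard deviations S and the correlation r. The abstract does not state which partition the authors used, and the data below are invented; this only shows the mechanics.

```python
# Theil's MSE decomposition into bias, slope (systematic), and random parts.
import math

def theil_decomposition(P, O):
    n = len(P)
    mp, mo = sum(P) / n, sum(O) / n
    sp = math.sqrt(sum((p - mp) ** 2 for p in P) / n)   # population SDs
    so = math.sqrt(sum((o - mo) ** 2 for o in O) / n)
    r = sum((p - mp) * (o - mo) for p, o in zip(P, O)) / (n * sp * so)
    bias = (mp - mo) ** 2            # mean difference
    slope = (sp - r * so) ** 2       # systematic slope error
    random_ = (1 - r ** 2) * so ** 2 # unexplained ("random") error
    return bias, slope, random_

P = [1.1, 2.0, 2.9, 4.2]   # model predictions (hypothetical)
O = [1.0, 2.1, 3.0, 4.0]   # observed values (hypothetical)
mse = sum((p - o) ** 2 for p, o in zip(P, O)) / len(P)
```

The three components sum exactly to the MSE, so the share attributable to random error (the 70% figure in the abstract) is `random_ / mse`.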

  7. Early Identification of Patients at Risk of Acute Lung Injury

    PubMed Central

    Gajic, Ognjen; Dabbagh, Ousama; Park, Pauline K.; Adesanya, Adebola; Chang, Steven Y.; Hou, Peter; Anderson, Harry; Hoth, J. Jason; Mikkelsen, Mark E.; Gentile, Nina T.; Gong, Michelle N.; Talmor, Daniel; Bajwa, Ednan; Watkins, Timothy R.; Festic, Emir; Yilmaz, Murat; Iscimen, Remzi; Kaufman, David A.; Esper, Annette M.; Sadikot, Ruxana; Douglas, Ivor; Sevransky, Jonathan

    2011-01-01

    Rationale: Accurate, early identification of patients at risk for developing acute lung injury (ALI) provides the opportunity to test and implement secondary prevention strategies. Objectives: To determine the frequency and outcome of ALI development in patients at risk and validate a lung injury prediction score (LIPS). Methods: In this prospective multicenter observational cohort study, predisposing conditions and risk modifiers predictive of ALI development were identified from routine clinical data available during initial evaluation. The discrimination of the model was assessed with area under receiver operating curve (AUC). The risk of death from ALI was determined after adjustment for severity of illness and predisposing conditions. Measurements and Main Results: Twenty-two hospitals enrolled 5,584 patients at risk. ALI developed a median of 2 (interquartile range 1–4) days after initial evaluation in 377 (6.8%; 148 ALI-only, 229 adult respiratory distress syndrome) patients. The frequency of ALI varied according to predisposing conditions (from 3% in pancreatitis to 26% after smoke inhalation). LIPS discriminated patients who developed ALI from those who did not with an AUC of 0.80 (95% confidence interval, 0.78–0.82). When adjusted for severity of illness and predisposing conditions, development of ALI increased the risk of in-hospital death (odds ratio, 4.1; 95% confidence interval, 2.9–5.7). Conclusions: ALI occurrence varies according to predisposing conditions and carries an independently poor prognosis. Using routinely available clinical data, LIPS identifies patients at high risk for ALI early in the course of their illness. This model will alert clinicians about the risk of ALI and facilitate testing and implementation of ALI prevention strategies. Clinical trial registered with www.clinicaltrials.gov (NCT00889772). PMID:20802164

  8. The Common Risk Model for Dams: A Portfolio Approach to Security Risk Assessments

    DTIC Science & Technology

    2013-06-01

Describes the Common Risk Model for Dams (CRM-D), an extension of the Common Risk Model (CRM) for evaluating and comparing risks associated with the nation's critical infrastructure. This model incorporates commonly used risk variables and combines consequence, vulnerability, and threat estimates in a way that properly accounts for the relationships among these variables. The CRM-D can effectively quantify the benefits of...

  9. Research on Liquidity Risk Evaluation of Chinese A-Shares Market Based on Extension Theory

    NASA Astrophysics Data System (ADS)

    Bai-Qing, Sun; Peng-Xiang, Liu; Lin, Zhang; Yan-Ge, Li

This research defines the liquidity risk of the stock market in terms of matter-element and affair-element theory, establishes an indicator system for the forewarning of liquidity risks, and designs an early-warning model and process using the extension set method, the extension dependent function, and a comprehensive evaluation model. The paper then studies China's A-share market empirically using data on index 1A0001, demonstrating that the model can better describe the liquidity risk of the A-share market. Finally, it gives corresponding policy recommendations.

  10. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    PubMed

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-06-01

Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals), and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death
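The two validation metrics named above, area under the ROC curve and Brier score, can be computed directly. This is a generic sketch (not the study's pipeline) with invented labels and predicted risks:

```python
# AUC via pairwise comparison (Mann-Whitney form) and the Brier score.
def auc(labels, scores):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(labels, scores):
    """Mean squared error of predicted probabilities (lower is better)."""
    return sum((s - y) ** 2 for y, s in zip(labels, scores)) / len(labels)

y = [0, 0, 1, 1]            # outcomes (hypothetical)
p = [0.1, 0.4, 0.35, 0.8]   # predicted mortality risks (hypothetical)
model_auc = auc(y, p)       # 0.75
model_brier = brier(y, p)   # ≈ 0.158
```

AUC of 0.94, as reported for the Risk of Inpatient Death score, means a randomly chosen decedent receives a higher predicted risk than a randomly chosen survivor 94% of the time.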

  11. Competing risks model in screening for preeclampsia by maternal factors and biomarkers at 11-13 weeks gestation.

    PubMed

    O'Gorman, Neil; Wright, David; Syngelaki, Argyro; Akolekar, Ranjit; Wright, Alan; Poon, Leona C; Nicolaides, Kypros H

    2016-01-01

    Preeclampsia affects approximately 3% of all pregnancies and is a major cause of maternal and perinatal morbidity and death. In the last decade, extensive research has been devoted to early screening for preeclampsia with the aim of reducing the prevalence of the disease through pharmacologic intervention in the high-risk group starting from the first trimester of pregnancy. The purpose of this study was to develop a model for preeclampsia based on maternal demographic characteristics and medical history (maternal factors) and biomarkers. The data for this study were derived from prospective screening for adverse obstetric outcomes in women who attended for their routine first hospital visit at 11-13 weeks gestation in 2 maternity hospitals in England. We screened 35,948 singleton pregnancies that included 1058 pregnancies (2.9%) that experienced preeclampsia. Bayes theorem was used to combine the a priori risk from maternal factors with various combinations of uterine artery pulsatility index, mean arterial pressure, serum pregnancy-associated plasma protein-A, and placental growth factor multiple of the median values. Five-fold cross validation was used to assess the performance of screening for preeclampsia that delivered at <37 weeks gestation (preterm-preeclampsia) and ≥37 weeks gestation (term-preeclampsia) by models that combined maternal factors with individual biomarkers and their combination with screening by maternal factors alone. In pregnancies that experienced preeclampsia, the values of uterine artery pulsatility index and mean arterial pressure were increased, and the values of serum pregnancy-associated plasma protein-A and placental growth factor were decreased. For all biomarkers, the deviation from normal was greater for early than late preeclampsia; therefore, the performance of screening was related inversely to the gestational age at which delivery became necessary for maternal and/or fetal indications. Combined screening by maternal
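The Bayes-theorem step described above, combining the a priori risk from maternal factors with biomarker information, can be sketched in odds form: posterior odds = prior odds × likelihood ratio. The prior probability and likelihood ratio below are purely illustrative, not values from the study.

```python
# Bayesian risk update in odds form.
def posterior_risk(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# e.g. a 2.9% background preeclampsia risk combined with an unfavorable
# biomarker profile carrying a (hypothetical) likelihood ratio of 8
risk = posterior_risk(0.029, 8.0)   # ≈ 0.193
```

A likelihood ratio of 1 leaves the prior unchanged, so women with normal biomarker profiles keep their maternal-factor risk.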

  12. A Methodology to Evaluate Ecological Resources and Risk Using Two Case Studies at the Department of Energy’s Hanford Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burger, Joanna; Gochfeld, Michael; Bunn, Amoret

    An assessment of the potential risks to ecological resources from remediation activities or other perturbations should involve a quantitative evaluation of resources on the remediation site and in the surrounding environment. We developed a risk methodology to rapidly evaluate potential impacts on ecological resources for the U.S. Department of Energy's Hanford Site in southcentral Washington State. We describe the application of the risk evaluation for two case studies to illustrate its applicability. The ecological assessment involves examining previous sources of information for the site and defining different resource levels from 0 to 5. We also developed a risk rating scale from nondiscernable to very high. Field assessment is the critical step to determine resource levels or to determine whether current conditions are the same as previously evaluated. We provide a rapid assessment method for current ecological conditions that can be compared to previous site-specific data, or that can be used to assess resource value on other sites where ecological information is not generally available. The method is applicable to other Department of Energy sites, where its development may involve a range of state regulators, resource trustees, Tribes, and other stakeholders. Achieving consistency across Department of Energy sites for valuation of ecological resources on remediation sites will assure Congress and the public that funds and personnel are being deployed appropriately.

  13. Human Health Risk Assessment Simulations in a Distributed Environment for Shuttle Launch

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Bardina, Jorge

    2004-01-01

    During the launch of a rocket under prevailing weather conditions, commanders at Cape Canaveral Air Force Station evaluate the possibility that wind-blown toxic emissions might reach civilian and military personnel in the nearby area. In our model, we focused mainly on hydrogen chloride (HCl), nitrogen oxides (NOx), and nitric acid (HNO3), which are non-carcinogenic chemicals per the United States Environmental Protection Agency (USEPA) classification. We used the hazard quotient model to estimate the number of people at risk, based on the number of people with exposure above a reference exposure level that is unlikely to cause adverse health effects. The risk to the exposed population is calculated by multiplying the individual risk by the number of people in the exposed population. The computed risk values are compared against acceptable risk values, and a GO or NO-GO decision for the Shuttle launch is made on that basis. The entire model is simulated over the web, and different scenarios can be generated, allowing management to choose an optimum decision.
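
    The screening logic this abstract describes — a hazard quotient, population risk as individual risk times the number exposed, and a go/no-go comparison against an acceptable-risk threshold — can be sketched as below. All function names and numbers are hypothetical illustrations, not actual launch-commit criteria:

```python
def hazard_quotient(exposure, reference_level):
    """Ratio of predicted exposure to the reference exposure level.
    HQ > 1 flags a potentially harmful exposure level."""
    return exposure / reference_level

def population_risk(individual_risk, n_exposed):
    """Population risk as individual risk times the number of people exposed."""
    return individual_risk * n_exposed

def launch_decision(risk, acceptable_risk):
    """GO if the computed risk does not exceed the acceptable threshold."""
    return "GO" if risk <= acceptable_risk else "NO-GO"

# Hypothetical screening values for an HCl plume:
hq = hazard_quotient(exposure=3.0, reference_level=2.0)       # 1.5, above reference
risk = population_risk(individual_risk=1e-4, n_exposed=5000)  # 0.5 expected cases
decision = launch_decision(risk, acceptable_risk=1.0)         # "GO"
```

    In practice the exposure term would come from an atmospheric dispersion model driven by the prevailing weather, which is outside the scope of this sketch.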

  14. Proposal of a risk-factor-based analytical approach for integrating occupational health and safety into project risk evaluation.

    PubMed

    Badri, Adel; Nadeau, Sylvie; Gbodossou, André

    2012-09-01

    Excluding occupational health and safety (OHS) from project management is no longer acceptable. Numerous industrial accidents have exposed the ineffectiveness of conventional risk evaluation methods, as well as neglect of risk factors that have a major impact on the health and safety of workers and nearby residents. The lack of reliable and complete evaluations from the beginning of a project generates bad decisions that can end up threatening the very existence of an organization. This article supports a systematic approach to the evaluation of OHS risks and proposes a new procedure based on the number of risk factors identified and their relative significance. A new concept called risk factor concentration, along with weighting of risk factor categories as contributors to undesirable events, is used in an analytical hierarchy process (AHP) multi-criteria comparison model with Expert Choice© software. A case study illustrates the various steps of the risk evaluation approach and the quick and simple integration of OHS at an early stage of a project. The approach allows continual reassessment of criteria over the course of the project or when new data are acquired. It was thus possible to differentiate the OHS risks from the risk of a drop in quality in the case of the factory expansion project. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. A spatial assessment framework for evaluating flood risk under extreme climates.

    PubMed

    Chen, Yun; Liu, Rui; Barrett, Damian; Gao, Lei; Zhou, Mingwei; Renzullo, Luigi; Emelyanova, Irina

    2015-12-15

    Australian coal mines have been facing a major challenge of increasing flood risk caused by intensive rainfall events in recent years. In light of growing climate change concerns and the predicted escalation of flooding, estimating flood inundation risk becomes essential for sustainable mine water management in the Australian mining sector. This research develops a spatial multi-criteria decision-making prototype for the evaluation of flooding risk at a regional scale, using the Bowen Basin and its surroundings in Queensland as a case study. Spatial gridded data, including climate, hydrology, topography, vegetation, and soils, were collected and processed in ArcGIS. Several indices were derived from time series of observations and spatial modeling, taking account of extreme rainfall, evapotranspiration, stream flow, potential soil water retention, elevation and slope generated from a digital elevation model (DEM), as well as drainage density and proximity extracted from a river network. These spatial indices were weighted using the analytical hierarchy process (AHP) and integrated in an AHP-based suitability assessment (AHP-SA) model under the spatial risk evaluation framework. A regional flooding risk map was delineated to represent likely impacts of criterion indices at different risk levels, and was verified against the maximum inundation extent detectable in a time series of remote sensing imagery. The result provides baseline information to help Bowen Basin coal mines identify and assess flooding risk when developing adaptation strategies and implementing mitigation measures in the future. The framework and methodology developed in this research offer the Australian mining industry, and social and environmental studies around the world, an effective way to produce reliable assessments of flood risk for managing uncertainty in water availability under climate change. Copyright © 2015. Published by Elsevier B.V.
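
    The AHP weighting step used to combine such spatial indices can be illustrated with the common geometric-mean approximation of the priority vector. The comparison matrix below is a made-up three-criterion example (not the study's actual judgments):

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean method:
    take the geometric mean of each row of the pairwise comparison
    matrix, then normalize so the weights sum to 1."""
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical judgments (say, rainfall vs. slope vs. drainage density):
# rainfall 3x as important as slope, 5x as important as drainage density.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_weights(A)   # rainfall receives the largest weight
```

    A full AHP application would also compute a consistency ratio to check that the pairwise judgments are not self-contradictory; that step is omitted here.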

  16. Optimizing ACS NSQIP modeling for evaluation of surgical quality and risk: patient risk adjustment, procedure mix adjustment, shrinkage adjustment, and surgical focus.

    PubMed

    Cohen, Mark E; Ko, Clifford Y; Bilimoria, Karl Y; Zhou, Lynn; Huffman, Kristopher; Wang, Xue; Liu, Yaoming; Kraemer, Kari; Meng, Xiangju; Merkow, Ryan; Chow, Warren; Matel, Brian; Richards, Karen; Hart, Amy J; Dimick, Justin B; Hall, Bruce L

    2013-08-01

    The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) collects detailed clinical data from participating hospitals using standardized data definitions, analyzes these data, and provides participating hospitals with reports that permit risk-adjusted comparisons with a surgical quality standard. Since its inception, the ACS NSQIP has worked to refine surgical outcomes measurements and enhance statistical methods to improve the reliability and validity of this hospital profiling. From an original focus on controlling for between-hospital differences in patient risk factors with logistic regression, ACS NSQIP has added a variable to better adjust for the complexity and risk profile of surgical procedures (procedure mix adjustment) and stabilized estimates derived from small samples by using a hierarchical model with shrinkage adjustment. New models have been developed focusing on specific surgical procedures (eg, "Procedure Targeted" models), which provide opportunities to incorporate indication and other procedure-specific variables and outcomes to improve risk adjustment. In addition, comparative benchmark reports given to participating hospitals have been expanded considerably to allow more detailed evaluations of performance. Finally, procedures have been developed to estimate surgical risk for individual patients. This article describes the development of, and justification for, these new statistical methods and reporting strategies in ACS NSQIP. Copyright © 2013 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
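
    The shrinkage adjustment described above can be illustrated with a minimal empirical-Bayes-style sketch (not ACS NSQIP's actual hierarchical model): a hospital's raw event rate is pulled toward the overall rate in proportion to how little data it has, stabilizing estimates from small samples.

```python
def shrunk_rate(hospital_events, hospital_n, overall_rate, k=50.0):
    """Shrink a hospital's raw event rate toward the overall rate with
    weight n / (n + k), so small hospitals are shrunk more. k is a
    hypothetical prior-strength constant, not an NSQIP parameter."""
    raw = hospital_events / hospital_n
    w = hospital_n / (hospital_n + k)
    return w * raw + (1 - w) * overall_rate

# A 20-case hospital with 4 events (raw 20%) vs. a 2000-case hospital
# with 400 events (also raw 20%), against an overall rate of 10%:
small = shrunk_rate(4, 20, overall_rate=0.10)      # pulled strongly toward 10%
large = shrunk_rate(400, 2000, overall_rate=0.10)  # stays near its raw 20%
```

    The effect is that an extreme rate observed at a low-volume hospital is treated as weaker evidence than the same rate at a high-volume hospital.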

  17. Calibration plots for risk prediction models in the presence of competing risks.

    PubMed

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if the event can be expected to occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated, in the sense that it provides reliable predictions for all subjects. Three practical problems are often encountered when the aim is to display or test whether a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with all three problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values, combined with a nearest-neighbor smoother and a cross-validation approach. Copyright © 2014 John Wiley & Sons, Ltd.
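
    The jackknife pseudo-value construction at the heart of this approach is easy to state: subject i's pseudo-value is n * theta_hat - (n - 1) * theta_hat_without_i. The sketch below uses the sample mean as the statistic, for which pseudo-values reduce to the observations themselves; that special case shows why pseudo-values behave like per-subject outcomes. (The paper's setting uses estimators suited to censored competing-risks data, which this toy example does not cover.)

```python
def jackknife_pseudovalues(values, stat):
    """Pseudo-value for subject i: n * stat(all) - (n - 1) * stat(all but i)."""
    n = len(values)
    full = stat(values)
    return [n * full - (n - 1) * stat(values[:i] + values[i + 1:])
            for i in range(n)]

mean = lambda xs: sum(xs) / len(xs)

# Binary event indicators; for the mean, each pseudo-value equals
# the subject's own observation.
data = [0, 1, 1, 0, 1]
pv = jackknife_pseudovalues(data, mean)   # -> [0.0, 1.0, 1.0, 0.0, 1.0]
```

    With censoring, the same formula is applied to a censoring-aware estimator of the event probability, producing per-subject quantities that can be regressed or smoothed against predicted risks to draw a calibration curve.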

  18. Are Youth at Risk? Reevaluating the Deficit Model of Youth Development.

    ERIC Educational Resources Information Center

    Astroth, Kirk A.

    1993-01-01

    Puts the label "at risk" in perspective as it relates to youth. Points out that today's adolescents have lower rates of suicide, unwed pregnancy, drug abuse, smoking, and drunk driving than young and middle-aged adults. Suggests that extension youth education moves toward a condition-focused, resiliency model that recognizes the vitality and…

  19. Cumulative Risk Assessment: An Overview of Methodological Approaches for Evaluating Combined Health Effects from Exposure to Multiple Environmental Stressors

    PubMed Central

    Sexton, Ken

    2012-01-01

    Systematic evaluation of cumulative health risks from the combined effects of multiple environmental stressors is becoming a vital component of risk-based decisions aimed at protecting human populations and communities. This article briefly examines the historical development of cumulative risk assessment as an analytical tool, and discusses current approaches for evaluating cumulative health effects from exposure to both chemical mixtures and combinations of chemical and nonchemical stressors. A comparison of stressor-based and effects-based assessment methods is presented, and the potential value of focusing on viable risk management options to limit the scope of cumulative evaluations is discussed. The ultimate goal of cumulative risk assessment is to provide answers to decision-relevant questions based on organized scientific analysis; even if the answers, at least for the time being, are inexact and uncertain. PMID:22470298

  20. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    NASA Astrophysics Data System (ADS)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    Landslide risk changes continuously in time and space and is rarely a static or constant phenomenon in an affected area. One of the main challenges of quantitatively assessing changes in landslide risk, however, is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires modeling the landslide intensity (e.g., flow depth, velocities, or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking, or left out altogether, in medium- to regional-scale studies in the scientific literature. In this research we modeled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the northeastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional-scale model. To determine debris flow intensities, we used a linear relationship found between back-calibrated, physically based Flo-2D simulations (local-scale models of five debris flows from 2003) and the probability values of the Flow-R software. This made it possible to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve derived specifically for our case study area, were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with building area and number of floors
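
    The damage side of the calculation described above — building value from unit land-use value, area, and floors, then damage as a vulnerability fraction of that value at a given flow depth — can be sketched as follows. The vulnerability curve and all numbers are invented for illustration, not the study's calibrated values:

```python
def building_value(unit_value_eur_per_m2, footprint_m2, n_floors):
    """Market-value proxy: land-use unit value times building area
    times number of floors (OMI-style valuation; numbers hypothetical)."""
    return unit_value_eur_per_m2 * footprint_m2 * n_floors

def expected_damage(flow_depth_m, value_eur, vulnerability):
    """Damage = vulnerability(flow depth) * exposed value, where the
    vulnerability curve maps intensity to a damage fraction in [0, 1]."""
    return vulnerability(flow_depth_m) * value_eur

# A crude assumed vulnerability curve: 25% damage per metre of flow
# depth, capped at total loss.
vuln = lambda depth: min(1.0, 0.25 * depth)

value = building_value(1500.0, 120.0, 2)    # 360,000 EUR
damage = expected_damage(1.2, value, vuln)  # 0.30 * 360,000 EUR
```

    Summing such per-building damages, weighted by the annual probability of each intensity class, yields the quantitative risk figure the abstract aims for.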

  1. Value-based resource management: a model for best value nursing care.

    PubMed

    Caspers, Barbara A; Pickard, Beth

    2013-01-01

    With the health care environment shifting to a value-based payment system, Catholic Health Initiatives nursing leadership spearheaded an initiative with 14 hospitals to establish best nursing care at a lower cost. The implementation of technology-enabled business processes at point of care led to a new model for best value nursing care: Value-Based Resource Management. The new model integrates clinical patient data from the electronic medical record and embeds the new information in care team workflows for actionable real-time decision support and predictive forecasting. The participating hospitals reported increased patient satisfaction and cost savings in the reduction of overtime and improvement in length of stay management. New data generated by the initiative on nursing hours and cost by patient and by population (Medicare severity diagnosis-related groups), and patient health status outcomes across the acute care continuum expanded business intelligence for a value-based population health system.

  2. A nomogram based on mammary ductoscopic indicators for evaluating the risk of breast cancer in intraductal neoplasms with nipple discharge.

    PubMed

    Lian, Zhen-Qiang; Wang, Qi; Zhang, An-Qin; Zhang, Jiang-Yu; Han, Xiao-Rong; Yu, Hai-Yun; Xie, Si-Mei

    2015-04-01

    Mammary ductoscopy (MD) is commonly used to detect intraductal lesions associated with nipple discharge. This study investigated the relationships between ductoscopic image-based indicators and breast cancer risk, and developed a nomogram for evaluating breast cancer risk in intraductal neoplasms with nipple discharge. A total of 879 consecutive inpatients (916 breasts) with nipple discharge who underwent selective duct excision for intraductal neoplasms detected by MD from June 2008 to April 2014 were analyzed retrospectively. A nomogram was developed using a multivariate logistic regression model based on data from a training set (687 cases) and validated in an independent validation set (229 cases). A Youden-derived cut-off value was assigned to the nomogram for the diagnosis of breast cancer. Color of discharge, location, appearance, and surface of neoplasm, and morphology of ductal wall were independent predictors of breast cancer in multivariate logistic regression analysis. A nomogram based on these predictors performed well. The P value of the Hosmer-Lemeshow test for the prediction model was 0.36. Area under the curve values of 0.812 (95 % confidence interval (CI) 0.763-0.860) and 0.738 (95 % CI 0.635-0.841) were obtained in the training and validation sets, respectively. The accuracies of the nomogram for breast cancer diagnosis were 71.2 % in the training set and 75.5 % in the validation set. We developed a nomogram for evaluating breast cancer risk in intraductal neoplasms with nipple discharge based on MD image findings. This model may aid individual risk assessment and guide treatment in clinical practice.

  3. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    NASA Astrophysics Data System (ADS)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada, and Puerto Rico. In addition, we investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reducing the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is under development. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  4. Evaluation of risk from acts of terrorism :the adversary/defender model using belief and fuzzy sets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.

  5. Development and evaluation of a risk communication curriculum for medical students.

    PubMed

    Han, Paul K J; Joekes, Katherine; Elwyn, Glyn; Mazor, Kathleen M; Thomson, Richard; Sedgwick, Philip; Ibison, Judith; Wong, John B

    2014-01-01

    To develop, pilot, and evaluate a curriculum for teaching clinical risk communication skills to medical students. A new experience-based curriculum, "Risk Talk," was developed and piloted over a 1-year period among students at Tufts University School of Medicine. An experimental study of 2nd-year students exposed vs. unexposed to the curriculum was conducted to evaluate the curriculum's efficacy. Primary outcome measures were students' objective (observed) and subjective (self-reported) risk communication competence; the former was assessed using an Objective Structured Clinical Examination (OSCE) employing new measures. Twenty-eight 2nd-year students completed the curriculum, and exhibited significantly greater (p<.001) objective and subjective risk communication competence than a convenience sample of 24 unexposed students. New observational measures of objective competence in risk communication showed promising evidence of reliability and validity. The curriculum was resource-intensive. The new experience-based clinical risk communication curriculum was efficacious, although resource-intensive. More work is needed to improve the feasibility of curriculum delivery and the measurement of competence in clinical risk communication. Risk communication is an important advanced communication skill, and the Risk Talk curriculum provides a model educational intervention and new assessment tools to guide future efforts to teach and evaluate this skill. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Evaluation of Historical and Projected Agricultural Climate Risk Over the Continental US

    NASA Astrophysics Data System (ADS)

    Zhu, X.; Troy, T. J.; Devineni, N.

    2016-12-01

    Food demands are rising due to an increasing population with changing food preferences, which places pressure on agricultural systems. In addition, climate extremes in the past decade have highlighted the vulnerability of agricultural production to climate variability. Many studies have performed quantitative analyses in the climate-agriculture research field. However, climate risk remains difficult to evaluate at large scales, even though such evaluation holds great potential for helping us better understand historical climate change impacts and assess future risk under climate projections. In this study, we developed a framework to evaluate climate risk quantitatively by applying statistical methods such as Bayesian regression, distribution fitting, and Monte Carlo simulation. We applied the framework over different climate regions in the continental US, both historically and for modeled climate projections. The relative importance of each major growing-season climate index, such as maximum dry period or heavy precipitation, was evaluated to determine which climate indices play a role in affecting crop yields. The statistical modeling framework was applied using county yields, with irrigated and rainfed yields separated to evaluate their differing risks. This framework provides estimates of the climate risk facing agricultural production in the near term that account for the full uncertainty of climate occurrences, the range of crop response, and spatial correlation in climate. In particular, the method provides robust estimates of the importance of irrigation in mitigating agricultural climate risk. The results of this study can contribute to decision making about crop choice and water use in an uncertain climate.
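
    A Monte Carlo step of the kind named above can be sketched as follows: draw a growing-season climate index, map it to yield through an assumed response, and report the probability that yield falls below a threshold. The climate distribution, yield response, and thresholds are all invented for illustration, and irrigation is represented crudely as a lower sensitivity to dry days:

```python
import random

def simulate_yield_risk(n_sims=20000, seed=42, irrigated=False):
    """Toy Monte Carlo: probability of a 'poor harvest' (yield below 80%
    of potential) given an assumed distribution of maximum dry period."""
    rng = random.Random(seed)
    # Assumed yield loss per dry day; irrigation dampens the sensitivity.
    sensitivity = 0.004 if irrigated else 0.012
    failures = 0
    for _ in range(n_sims):
        dry_days = max(0.0, rng.gauss(25, 10))       # assumed climate index draw
        yield_frac = max(0.0, 1.0 - sensitivity * dry_days)
        if yield_frac < 0.80:
            failures += 1
    return failures / n_sims

risk_rainfed = simulate_yield_risk(irrigated=False)
risk_irrigated = simulate_yield_risk(irrigated=True)  # far lower under irrigation
```

    A real application would replace the Gaussian draw with fitted distributions per climate region and the linear response with a regression calibrated on county yields, as the abstract describes.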

  7. Mexican-Origin Youth's Risk Behavior from Adolescence to Young Adulthood: The Role of Familism Values

    PubMed Central

    Wheeler, Lorey A.; Zeiders, Katharine H.; Updegraff, Kimberly A.; Umaña-Taylor, Adriana J.; Rodríguez de Jesús, Sue A.; Perez-Brena, Norma J.

    2016-01-01

    Engagement in risk behavior has implications for individuals' academic achievement, health, and well-being, yet there is a paucity of developmental research on the role of culturally-relevant strengths in individual and family differences in risk behavior involvement among ethnic minority youth. In this study, we used a longitudinal cohort-sequential design to chart intraindividual trajectories of risk behavior and test variation by gender and familism values in 492 youth from 12 to 22 years of age. Participants were older and younger siblings from 246 Mexican-origin families who reported on their risk behaviors in interviews spaced over eight years. Multilevel cohort-sequential growth models revealed that youth reported an increase in risk behavior from 12 to 18 years of age, and then a decline to age 22. Male youth reported greater overall levels and a steeper increase in risk behavior from ages 12 to 18, compared to female youth. For familism values, on occasions when youth reported higher levels, they also reported lower levels of risk behavior (i.e., within-person effect). For sibling dyads characterized by higher average levels of familism values, youth reported lower average levels of risk behavior (i.e., between-family effect). Findings provide unique insights into risk behavior from adolescence to young adulthood among Mexican-origin youth. PMID:28026193

  8. Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values

    PubMed Central

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to avoid drawing false conclusions during hypothesis testing, our study advises caution when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method first introduced by Brown and recently modified by Hou seems to perform best among the methods investigated. PMID:24663491
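
    For context, the classical baseline that Brown's correlated-p-value method extends is Fisher's method for independent p-values: X = -2 * sum(log p_i) is chi-squared with 2k degrees of freedom under the global null. A stdlib-only sketch of the independent case (Brown's adjustment of the degrees of freedom for correlation is not implemented here):

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-squared survival function for even df = 2k, via the closed
    form P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!."""
    k = df // 2
    half = x / 2.0
    term, total = 1.0, 0.0
    for i in range(k):
        total += term
        term *= half / (i + 1)
    return math.exp(-half) * total

def fisher_combine(pvalues):
    """Fisher's method for independent p-values."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return chi2_sf_even_df(stat, 2 * len(pvalues))

combined = fisher_combine([0.04, 0.10, 0.03])   # ~0.006, stronger than any one p
```

    Applying this formula to correlated p-values understates the true combined p-value, which is exactly the inaccuracy the abstract warns about.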

  9. Phase angle assessment by bioelectrical impedance analysis and its predictive value for malnutrition risk in hospitalized geriatric patients.

    PubMed

    Varan, Hacer Dogan; Bolayir, Basak; Kara, Ozgur; Arik, Gunes; Kizilarslanoglu, Muhammet Cemal; Kilic, Mustafa Kemal; Sumer, Fatih; Kuyumcu, Mehmet Emin; Yesil, Yusuf; Yavuz, Burcu Balam; Halil, Meltem; Cankurtaran, Mustafa

    2016-12-01

    Phase angle (PhA), determined by bioelectrical impedance analysis (BIA), is an indicator of cell membrane damage and body cell mass. Recent studies have shown that low PhA values are associated with increased nutritional risk in various groups of patients. However, only a few studies worldwide have assessed the relationship between nutritional risk and PhA in hospitalized geriatric patients. The aim of this study was to evaluate the predictive value of PhA for malnutrition risk in hospitalized geriatric patients. One hundred and twenty-two hospitalized geriatric patients were included in this cross-sectional study. Comprehensive geriatric assessment tests and BIA measurements were performed within the first 48 h after admission. Nutritional risk was determined with the NRS-2002. Phase angle values of patients at malnutrition risk were compared with those of patients without such risk. The independent variables for predicting malnutrition risk were determined. SPSS version 15 was used for the statistical analyses. Patients at malnutrition risk had significantly lower phase angle values than patients without malnutrition risk (p = 0.003). ROC curve analysis suggested that the optimum PhA cut-off point for malnutrition risk was 4.7° with 79.6 % sensitivity, 64.6 % specificity, 73.9 % positive predictive value, and 73.9 % negative predictive value. BMI, prealbumin, PhA, and Mini Mental State Examination scores were the independent variables for predicting malnutrition risk. PhA can be a useful, independent indicator for predicting malnutrition risk in hospitalized geriatric patients.
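
    The Youden-derived cut-off reported here can be reproduced in miniature: scan candidate thresholds and keep the one maximizing J = sensitivity + specificity - 1. The phase-angle values below are toy data, not the study's; note that a LOW phase angle flags risk, so "test positive" means score at or below the threshold:

```python
def youden_cutoff(scores, labels, thresholds):
    """Return (threshold, J) maximizing Youden's J, where a score <= threshold
    counts as a positive test and label 1 marks a truly at-risk subject."""
    best_t, best_j = None, -1.0
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s <= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s > t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s > t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s <= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy phase angles (degrees); 1 = at malnutrition risk.
scores = [3.9, 4.2, 4.5, 4.6, 4.8, 5.1, 5.4, 5.9]
labels = [1,   1,   1,   0,   1,   0,   0,   0]
cutoff, j = youden_cutoff(scores, labels, thresholds=scores)   # -> (4.5, 0.75)
```

    The same scan over the study's 122 patients, with sensitivity and specificity computed against the NRS-2002 risk classification, is what yields the reported 4.7° cut-off.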

  10. Transitioning to a High-Value Health Care Model: Academic Accountability.

    PubMed

    Johnson, Pamela T; Alvin, Matthew D; Ziegelstein, Roy C

    2018-06-01

    Health care spending in the United States has increased to unprecedented levels, and these costs have broken medical providers' promise to do no harm. Medical debt is the leading contributor to U.S. personal bankruptcy, more than 50% of household foreclosures are secondary to medical debt and illness, and patients are choosing to avoid necessary care because of its cost. Evidence that the health care delivery model is contributing to patient hardship is a call to action for the profession to transition to a high-value model, one that delivers the highest health care quality and safety at the lowest personal and financial cost to patients. As such, value improvement work is being done at academic medical centers across the country. To promote measurable improvements in practice on a national scale, academic institutions need to align efforts and create a new model for collaboration, one that transcends cross-institutional competition, specialty divisions, and geographical constraints. Academic institutions are particularly accountable because of the importance of research and education in driving this transition. Investigations that elucidate effective implementation methodologies and evaluate safety outcomes data can facilitate transformation. Engaging trainees in quality improvement initiatives will instill high-value care into their practice. This article charges academic institutions to go beyond dissemination of best practice guidelines and demonstrate accountability for high-value quality improvement implementation. By effectively transitioning to a high-value health care system, medical providers will convincingly demonstrate that patients are their most important priority.

  11. Valuing Reductions in Fatal Illness Risks: Implications of Recent Research.

    PubMed

    Robinson, Lisa A; Hammitt, James K

    2016-08-01

    The value of mortality risk reductions, conventionally expressed as the value per statistical life, is an important determinant of the net benefits of many government policies. US regulators currently rely primarily on studies of fatal injuries, raising questions about whether different values might be appropriate for risks associated with fatal illnesses. Our review suggests that, despite the substantial expansion of the research base in recent years, few US studies of illness-related risks meet criteria for quality, and those that do yield similar values to studies of injury-related risks. Given this result, combining the findings of these few studies with the findings of the more robust literature on injury-related risks appears to provide a reasonable range of estimates for application in regulatory analysis. Our review yields estimates ranging from about $4.2 million to $13.7 million with a mid-point of $9.0 million (2013 dollars). Although the studies we identify differ from those that underlie the values currently used by Federal agencies, the resulting estimates are remarkably similar, suggesting that there is substantial consensus emerging on the values applicable to the general US population. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    NASA Astrophysics Data System (ADS)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms, SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After assigning utility values to each category of findings (true negative, false positive, false negative, and true positive), we sought the optimal operating points on the ROC curves that achieve the maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved the sensitivity of the Gail model (0.276 vs. 0.147) and reduced its specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework, comprising utility analysis and McNemar's test, provides a novel way to evaluate prediction models in the realm of radiogenomics.
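
    The utility step described above can be sketched generically. The utilities and prevalence below are illustrative placeholders, not values from the study; the sketch simply scans candidate ROC operating points for the one with the highest expected utility.

```python
# Sketch: pick the ROC operating point with maximum expected utility (MEU).
# All utilities and the prevalence are illustrative, not from the study.

def expected_utility(sens, spec, prev, u_tp, u_fn, u_tn, u_fp):
    """Expected utility of operating at (sensitivity, specificity)."""
    return (prev * (sens * u_tp + (1 - sens) * u_fn)
            + (1 - prev) * (spec * u_tn + (1 - spec) * u_fp))

def max_expected_utility(roc_points, prev, u_tp, u_fn, u_tn, u_fp):
    """roc_points: (sensitivity, specificity) pairs along the ROC curve."""
    return max(expected_utility(s, sp, prev, u_tp, u_fn, u_tn, u_fp)
               for s, sp in roc_points)

# Operating points loosely echoing the reported sensitivities/specificities
roc = [(0.0, 1.0), (0.147, 0.912), (0.276, 0.855), (0.457, 0.872), (1.0, 0.0)]
meu = max_expected_utility(roc, prev=0.01,
                           u_tp=0.8, u_fn=0.0, u_tn=1.0, u_fp=0.95)
```

    With a low prevalence and these placeholder utilities, the conservative end of the curve dominates; changing the utility assignments shifts the optimal operating point, which is exactly the trade-off the framework formalizes.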

  13. Dynamic extreme values modeling and monitoring by means of sea shores water quality biomarkers and valvometry.

    PubMed

    Durrieu, Gilles; Pham, Quang-Khoai; Foltête, Anne-Sophie; Maxime, Valérie; Grama, Ion; Tilly, Véronique Le; Duval, Hélène; Tricot, Jean-Marie; Naceur, Chiraz Ben; Sire, Olivier

    2016-07-01

    Water quality can be evaluated using biomarkers such as tissular enzymatic activities of endemic species. Measuring the activity of bivalve molluscs at high frequency over a long time period (valvometry) is another way to record animal behavior and to detect perturbations of water quality in real time. Because pollution affects the activity of oysters, we use valve opening and closing velocities to monitor water quality. We propose to model the huge volume of velocity data collected through valvometry with a new nonparametric extreme values statistical model. The objective is to estimate the tail probabilities and the extreme quantiles of the distribution of valve closing velocity. The tail of the distribution function of valve closing velocity is modeled by a Pareto distribution beyond a threshold τ, with a parameter that varies with the time t of the experiment. Our modeling approach reveals the dependence between the specific activity of two enzymatic biomarkers (glutathione-S-transferase and acetylcholinesterase) and the continuous recording of oyster valve velocity, proving the suitability of this tool for water quality assessment. Thus, valvometry allows real-time in situ analysis of bivalve behavior and appears to be an effective early-warning tool in ecological risk assessment and marine environment monitoring.
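
    As a rough illustration of tail modeling of this kind (a generic Hill/Weissman-style estimator, not the authors' time-varying nonparametric model), one can fit a Pareto tail beyond a threshold and extrapolate an extreme quantile:

```python
import math
import random

def pareto_tail_quantile(sample, tau, p):
    """Estimate the p-quantile of a heavy-tailed sample by fitting a Pareto
    tail beyond threshold tau (Hill-type MLE for the tail index, then a
    Weissman-style extrapolation). Assumes some observations exceed tau."""
    exceed = [x for x in sample if x > tau]
    k, n = len(exceed), len(sample)
    alpha = k / sum(math.log(x / tau) for x in exceed)  # tail index MLE
    # Tail model: P(X > x) ~ (k/n) * (x/tau)**(-alpha) for x > tau
    return tau * ((k / n) / (1.0 - p)) ** (1.0 / alpha)

random.seed(0)
data = [random.paretovariate(3.0) for _ in range(10000)]  # true tail index 3
q999 = pareto_tail_quantile(data, tau=2.0, p=0.999)  # true quantile is 10.0
```

    The estimated 0.999-quantile lands close to the analytic value even though only a small fraction of the sample exceeds the threshold, which is the point of tail extrapolation.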

  14. Play Activities of At-Risk and Non-At-Risk Elementary Students: Is There a Difference?

    ERIC Educational Resources Information Center

    Poidevant, John M.; Spruill, David A.

    1993-01-01

    Examined the play activities of 49 at-risk (AR) and non-at-risk (NAR) elementary school students, using the Smilansky Scale for Evaluation of Dramatic and Sociodramatic Play measure. Found that AR students displayed higher levels of hostile acting out behaviors than did NAR students, whereas the NAR group engaged in more verbal communication…

  15. A Discrete Event Simulation Model to Assess the Economic Value of a Hypothetical Pharmacogenomics Test for Statin-Induced Myopathy in Patients Initiating a Statin in Secondary Cardiovascular Prevention.

    PubMed

    Mitchell, Dominic; Guertin, Jason R; Dubois, Anick; Dubé, Marie-Pierre; Tardif, Jean-Claude; Iliza, Ange Christelle; Fanton-Aita, Fiorella; Matteau, Alexis; LeLorier, Jacques

    2018-04-01

    Statin (HMG-CoA reductase inhibitor) therapy is the mainstay dyslipidemia treatment and reduces the risk of a cardiovascular (CV) event (CVE) by up to 35%. However, adherence to statin therapy is poor. One reason patients discontinue statin therapy is musculoskeletal pain and the associated risk of rhabdomyolysis. Research is ongoing to develop a pharmacogenomics (PGx) test for statin-induced myopathy as an alternative to the current diagnosis method, which relies on creatine kinase levels. The potential economic value of a PGx test for statin-induced myopathy is unknown. We developed a lifetime discrete event simulation (DES) model for patients 65 years of age initiating a statin after a first CVE consisting of either an acute myocardial infarction (AMI) or a stroke. The model evaluates the potential economic value of a hypothetical PGx test for diagnosing statin-induced myopathy. We have assessed the model over the spectrum of test sensitivity and specificity parameters. Our model showed that a strategy with a perfect PGx test had an incremental cost-utility ratio of 4273 Canadian dollars ($Can) per quality-adjusted life year (QALY). The probabilistic sensitivity analysis shows that when the payer willingness-to-pay per QALY reaches $Can12,000, the PGx strategy is favored in 90% of the model simulations. We found that a strategy favoring patients staying on statin therapy is cost effective even if patients maintained on statin are at risk of rhabdomyolysis. Our results are explained by the fact that statins are highly effective in reducing the CV risk in patients at high CV risk, and this benefit largely outweighs the risk of rhabdomyolysis.

  16. [Evaluation of medication risk in pregnant women: methodology of evaluation and risk management].

    PubMed

    Eléfant, E; Sainte-Croix, A

    1997-01-01

    This round table discussion was devoted to the description of the tools currently available for the evaluation of drug risks and management during pregnancy. Five topics were submitted for discussion: pre-clinical data, methodological tools, benefit/risk ratio before prescription, teratogenic or fetal risk evaluation, legal comments.

  17. Predictive value of orthopedic evaluation and injury history at the NFL combine.

    PubMed

    Brophy, Robert H; Chehab, Eric L; Barnes, Ronnie P; Lyman, Stephen; Rodeo, Scott A; Warren, Russell F

    2008-08-01

    The National Football League (NFL) holds an annual combine to evaluate the physical skills of college football athletes likely to be drafted, to review their medical history, and to perform a physical examination. The athletes receive an orthopedic grade on their ability to participate in the NFL. The purpose of this study was to test the hypothesis that this orthopedic rating at the combine predicts the percentage of athletes who play in the NFL and the length of their careers. A database of all athletes reviewed at the combine by the medical staff of one team from 1987 to 2000 was created and linked to a data set containing the number of seasons and games played in the NFL by each athlete. Players were grouped by orthopedic grade: high, low, and orthopedic failure. The percentage of players who played in the NFL and the mean length of their careers were calculated and compared for these groups. The orthopedic grade assigned at the NFL combine correlated with the probability of playing in the league. Whereas 58% of athletes with a high grade and 55% of athletes with a low grade played at least one game, only 36% of athletes given a failing grade did so (P < 0.001). Players with a high grade had a mean career of 41.5 games compared with 34.2 games for players with a low grade and 19.0 games for orthopedic failures. This is the first study to report on the predictive value of a grading system for college athletes before participation in professional sports. Other professional sports may benefit from using a similar grading system for the evaluation of potential players.

  18. Evaluating Predictive Models of Software Quality

    NASA Astrophysics Data System (ADS)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this: stability and predictability are of paramount importance, and a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to approximate the risk before releasing a program, so that only software with a risk lower than an agreed threshold is delivered. In this article we evaluated two predictive quality models to identify the operational risk and the quality of several software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further study.

  19. Contract Design: Risk Management and Evaluation.

    PubMed

    Mühlbacher, Axel C; Amelung, Volker E; Juhnke, Christin

    2018-01-12

    Effective risk adjustment is given more and more weight against the background of competitive health insurance systems and viable healthcare systems. The risk structure of the providers plays a vital role in pay for performance. A prerequisite for optimal incentive-based service models is a (partial) dependence of the agent's returns on the provider's gain level. Integrated care systems as well as accountable care organisations (ACOs) in the US, and similar concepts in other countries, are advocated as an effective method of improving the performance of healthcare systems. These systems outline a payment and care delivery model that intends to tie provider reimbursements to predefined quality metrics, thereby reducing the total costs of care. Little is known about the contractual design and the main challenges of delegating "accountability" to these new kinds of organisations and/or contracts. The costs of market utilisation are highly relevant for the conception of healthcare contracts; furthermore, information asymmetries and contract-specific investments are an obstacle to the efficient operation of ACOs. A comprehensive literature review on methods of designing contracts in integrated care was conducted. The research question in this article focuses on how reimbursement strategies, evaluation of measures, and methods of risk adjustment can best be integrated in healthcare contracting. Each integrated care contract includes challenges for both payers and providers without sufficient empirical data on either side. These challenges are of a clinical, administrative, or financial nature. Risk-adjusted contracts ensure that the reimbursement roughly matches the true costs resulting from the morbidity of a population. If the reimbursement of a care provider corresponds to the actual expenses for an individual/population, the problem of risk selection is greatly reduced. The currently used methods of risk adjustment have widely differing model and forecast

  20. An inexact risk management model for agricultural land-use planning under water shortage

    NASA Astrophysics Data System (ADS)

    Li, Wei; Feng, Changchun; Dai, Chao; Li, Yongping; Li, Chunhui; Liu, Ming

    2016-09-01

    Water resources availability has a significant impact on agricultural land-use planning, especially in a water shortage area such as North China. The random nature of available water resources and other uncertainties in an agricultural system present risk for land-use planning and may lead to undesirable decisions or potential economic loss. In this study, an inexact risk management model (IRM) was developed for supporting agricultural land-use planning and risk analysis under water shortage. The IRM model was formulated by incorporating a conditional value-at-risk (CVaR) constraint into an inexact two-stage stochastic programming (ITSP) framework, and could be used to control uncertainties expressed not only as probability distributions but also as discrete intervals. A measure of risk on the second-stage penalty cost was incorporated into the model so that the trade-off between system benefit and extreme expected loss could be analyzed. The developed model was applied to a case study in the Zhangweinan River Basin, a typical agricultural region facing serious water shortage in North China. Solutions of the IRM model showed that the obtained first-stage land-use target values could be used to reflect decision-makers' opinions on the long-term development plan. The confidence level α and maximum acceptable risk loss β could be used to reflect decision-makers' preference towards system benefit and risk control. The results indicated that the IRM model was useful for reflecting decision-makers' attitudes toward risk aversion and could help seek cost-effective agricultural land-use planning strategies under complex uncertainties.
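
    For readers unfamiliar with the risk measure, a minimal scenario-based sketch of VaR and CVaR (a generic empirical estimator, not the IRM formulation) looks like:

```python
def var_cvar(losses, alpha=0.95):
    """Empirical value-at-risk and conditional value-at-risk of a loss
    sample: VaR is the alpha-quantile, CVaR the mean loss at or beyond it."""
    srt = sorted(losses)
    idx = int(alpha * len(srt))   # index of the alpha-quantile
    tail = srt[idx:]              # worst (1 - alpha) share of the losses
    return srt[idx], sum(tail) / len(tail)

losses = list(range(1, 101))      # toy loss scenarios 1..100
var95, cvar95 = var_cvar(losses, alpha=0.95)
```

    CVaR is always at least as large as VaR, which is why a CVaR constraint in the optimization controls the expected magnitude of extreme second-stage losses rather than just their frequency.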

  1. A behavioral and neural evaluation of prospective decision-making under risk.

    PubMed

    Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J

    2010-10-27

    Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single-choice contexts, there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal predetermined strategy, regardless of the particular order in which options are presented. An alternative model involves continuously reevaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and use these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of reevaluating decision utilities, in which available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance, and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes.
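
    The payoff statistics said to be tracked at each decision point (expected mean, variance, and skewness) can be computed for any discrete gamble; the gamble below is hypothetical:

```python
def payoff_moments(payoffs, probs):
    """Mean, variance, and standardized skewness of a discrete payoff
    distribution -- the three summary statistics of possible payoffs."""
    mean = sum(p * x for x, p in zip(payoffs, probs))
    var = sum(p * (x - mean) ** 2 for x, p in zip(payoffs, probs))
    skew = sum(p * (x - mean) ** 3 for x, p in zip(payoffs, probs)) / var ** 1.5
    return mean, var, skew

# Hypothetical gamble: win 10 with probability 0.2, otherwise 0
m, v, s = payoff_moments([10.0, 0.0], [0.2, 0.8])
```

    A rare large win yields positive skewness, so two gambles with equal mean and variance can still be distinguished by the third moment the study reports being tracked.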

  2. Multi -risk assessment at a national level in Georgia

    NASA Astrophysics Data System (ADS)

    Tsereteli, Nino; Varazanashvili, Otar; Amiranashvili, Avtandil; Tsereteli, Emili; Elizbarashvili, Elizbar; Saluqvadze, Manana; Dolodze, Jemal

    2013-04-01

    Work presented here was initiated by the national GNSF project "Reducing natural disasters multiple risk: a positive factor for Georgia development" and two international projects: NATO SFP 983038 "Seismic hazard and Risk assessment for Southern Caucasus-Eastern Turkey Energy Corridors" and EMME "Earthquake Model for Middle East Region". A methodology for estimating "general" vulnerability, hazards, and multiple risk from natural hazards (namely earthquakes, landslides, snow avalanches, flash floods, mudflows, drought, hurricanes, frost, and hail) was developed for Georgia. Detailed electronic databases of natural disasters were created, containing the parameters of the hazardous phenomena that caused them. The magnitude and intensity scales of the mentioned disasters are reviewed, and new magnitude and intensity scales are suggested for disasters for which the corresponding formalization has not yet been performed. The associated economic losses were evaluated and presented in monetary terms for these hazards. Based on the hazard inventory, an approach was developed that allowed the calculation of an overall vulnerability value for each individual hazard type, using the Gross Domestic Product per unit area (applied to population) as the indicator for the elements at risk exposed. The correlation between estimated economic losses, physical exposure, and magnitude for each of the hazard types was investigated in detail using multiple linear regression analysis. Economic losses for all past events and historical vulnerability were estimated. Finally, the spatial distribution of general vulnerability was assessed, the expected maximum economic loss was calculated, and a multi-risk map was set up.

  3. Risk adjustment in the American College of Surgeons National Surgical Quality Improvement Program: a comparison of logistic versus hierarchical modeling.

    PubMed

    Cohen, Mark E; Dimick, Justin B; Bilimoria, Karl Y; Ko, Clifford Y; Richards, Karen; Hall, Bruce Lee

    2009-12-01

    Although logistic regression has commonly been used to adjust for risk differences in patient and case mix to permit quality comparisons across hospitals, hierarchical modeling has been advocated as the preferred methodology, because it accounts for clustering of patients within hospitals. It is unclear whether hierarchical models would yield important differences in quality assessments compared with logistic models when applied to American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) data. Our objective was to evaluate differences in logistic versus hierarchical modeling for identifying hospitals with outlying outcomes in the ACS-NSQIP. Data from ACS-NSQIP patients who underwent colorectal operations in 2008 at hospitals that reported at least 100 operations were used to generate logistic and hierarchical prediction models for 30-day morbidity and mortality. Differences in risk-adjusted performance (ratio of observed-to-expected events) and outlier detections from the two models were compared. Logistic and hierarchical models identified the same 25 hospitals as morbidity outliers (14 low and 11 high outliers), but the hierarchical model identified 2 additional high outliers. Both models identified the same eight hospitals as mortality outliers (five low and three high outliers). The values of observed-to-expected events ratios and p values from the two models were highly correlated. Results were similar when data were included from hospitals providing < 100 patients. When applied to ACS-NSQIP data, logistic and hierarchical models provided nearly identical results with respect to identification of hospitals' observed-to-expected events ratio outliers. As hierarchical models are prone to implementation problems, logistic regression will remain an accurate and efficient method for performing risk adjustment of hospital quality comparisons.
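
    The observed-to-expected events ratio that both models produce can be sketched as follows; the patient risks and outcomes are invented for illustration, and the logistic model is shown only in generic form:

```python
import math

def predicted_prob(x, coefs, intercept):
    """Logistic-regression predicted risk for one patient (generic form)."""
    z = intercept + sum(c * xi for c, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def oe_ratio(outcomes, predicted_risks):
    """Observed-to-expected events ratio for a hospital: observed event
    count divided by the sum of model-predicted patient risks."""
    return sum(outcomes) / sum(predicted_risks)

# Invented hospital: 3 observed events against an expectation of 2.4
risks = [0.5, 0.4, 0.6, 0.3, 0.6]
outcomes = [1, 0, 1, 0, 1]
ratio = oe_ratio(outcomes, risks)   # > 1 means worse than expected
```

    Outlier detection then amounts to asking whether a hospital's ratio differs from 1 by more than sampling variability allows; logistic and hierarchical models differ mainly in how the expected risks in the denominator are estimated.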

  4. Subjective value of risky foods for individual domestic chicks: a hierarchical Bayesian model.

    PubMed

    Kawamori, Ai; Matsushima, Toshiya

    2010-05-01

    For animals to decide which prey to attack, the gain and delay of the food item must be integrated in a value function. However, the subjective value is not obtained by expected profitability when it is accompanied by risk. To estimate the subjective value, we examined choices in a cross-shaped maze with two colored feeders in domestic chicks. When tested by a reversal in food amount or delay, chicks changed choices similarly in both conditions (experiment 1). We therefore examined risk sensitivity for amount and delay (experiment 2) by supplying one feeder with food of fixed profitability and the alternative feeder with high- or low-profitability food at equal probability. Profitability varied in amount (groups 1 and 2 at high and low variance) or in delay (group 3). To find the equilibrium, the amount (groups 1 and 2) or delay (group 3) of the food in the fixed feeder was adjusted in a total of 18 blocks. The Markov chain Monte Carlo method was applied to a hierarchical Bayesian model to estimate the subjective value. Chicks undervalued the variable feeder in group 1 and were indifferent in group 2 but overvalued the variable feeder in group 3 at a population level. Re-examination without the titration procedure (experiment 3) suggested that the subjective value was not absolute for each option. When the delay was varied, the variable option was often given a paradoxically high value depending on fixed alternative. Therefore, the basic assumption of the uniquely determined value function might be questioned.

  5. Performance evaluation of GIM-TEC assimilation of the IRI-Plas model at two equatorial stations in the American sector

    NASA Astrophysics Data System (ADS)

    Adebiyi, S. J.; Adebesin, B. O.; Ikubanni, S. O.; Joshua, B. W.

    2017-05-01

    Empirical models of the ionosphere, such as the International Reference Ionosphere (IRI) model, play a vital role in evaluating environmental effects on the operation of space-based communication and navigation technologies. The IRI extended to Plasmasphere (IRI-Plas) model can be adjusted with external data to update its electron density profile while still maintaining the overall integrity of the model representations. In this paper, the performance of the total electron content (TEC) assimilation option of the IRI-Plas at two equatorial stations, Jicamarca, Peru (geographic: 12°S, 77°W, dip angle 0.8°) and Cachoeira Paulista, Brazil (geographic: 22.7°S, 45°W, dip angle -26°), is examined during quiet and disturbed conditions. The TEC, F2 layer critical frequency (foF2), and peak height (hmF2) predicted when the model is operated without external input were used as a baseline in our model evaluation. Results indicate that TEC predicted by the assimilation option generally produced smaller estimation errors than the "no extra input" option during quiet and disturbed conditions. Generally, the error is smaller at the equatorial trough than near the crest for both quiet and disturbed days. With the assimilation option, there is a substantial improvement of storm-time estimations compared with quiet-time predictions. The improvement is, however, independent of the storm's severity. Furthermore, the modeled foF2 and hmF2 are generally poor with TEC assimilation, particularly the hmF2 prediction, at the two locations during both quiet and disturbed conditions. Consequently, the IRI-Plas model assimilated with TEC values alone may not be sufficient where more realistic instantaneous values of the peak parameters are required.

  6. Risk perception in epidemic modeling

    NASA Astrophysics Data System (ADS)

    Bagnoli, Franco; Liò, Pietro; Sguanci, Luca

    2007-12-01

    We investigate the effects of risk perception in a simple model of epidemic spreading. We assume that the perception of the risk of being infected depends on the fraction of neighbors that are ill. The effect of this factor is to decrease the infectivity, which therefore becomes a dynamical component of the model. We study the problem in the mean-field approximation and by numerical simulations for regular, random, and scale-free networks. We show that for homogeneous and random networks, there is always a value of perception that stops the epidemic. In the “worst-case” scenario of a scale-free network with diverging input connectivity, a linear perception cannot stop the epidemic; however, we show that a nonlinear increase of the perceived risk may lead to the extinction of the disease. This transition is discontinuous and is not predicted by the mean-field analysis.
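
    A minimal mean-field sketch of perception-dependent infectivity, assuming an exponential damping form tau*exp(-J*i) (an illustrative choice, not necessarily the paper's exact functional form):

```python
import math

def sis_meanfield(tau, gamma, J, i0=0.01, steps=5000, dt=0.01):
    """Mean-field SIS dynamics in which perceived risk damps transmission:
    effective rate tau * exp(-J * i), where i is the infected fraction.
    The exponential damping form is an illustrative assumption."""
    i = i0
    for _ in range(steps):
        eff = tau * math.exp(-J * i)           # perception-reduced infectivity
        i += dt * (eff * i * (1.0 - i) - gamma * i)
        i = min(max(i, 0.0), 1.0)              # keep the fraction in [0, 1]
    return i

no_perception = sis_meanfield(tau=2.0, gamma=1.0, J=0.0)
strong_perception = sis_meanfield(tau=2.0, gamma=1.0, J=10.0)
```

    With no perception the infection settles at its usual endemic level; raising J pushes the endemic fraction down, illustrating how perception makes infectivity a dynamical quantity.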

  7. Training AIDS and Anger Prevention Social Skills in At-Risk Adolescents.

    ERIC Educational Resources Information Center

    Hovell, Melbourne F.; Blumberg, Elaine J.; Liles, Sandy; Powell, Linda; Morrison, Theodore C.; Duran, Gabriela; Sipan, Carol L.; Burkham, Susan; Kelley, Norma

    2001-01-01

    Tests the effectiveness of behavioral skills training based on the Behavioral-Ecological Model among a group of adolescents. Evaluates two interventions: one teaching condom use skills and the other teaching anger management skills. Changes in most skills were significant at postintervention but were not maintained at six months. Few risk-related…

  8. Evaluation of dynamically downscaled extreme temperature using a spatially-aggregated generalized extreme value (GEV) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jiali; Han, Yuefeng; Stein, Michael L.

    2016-02-10

    The Weather Research and Forecast (WRF) model downscaling skill in extreme maximum daily temperature is evaluated by using the generalized extreme value (GEV) distribution. While the GEV distribution has been used extensively in climatology and meteorology for estimating probabilities of extreme events, accurately estimating GEV parameters based on data from a single pixel can be difficult, even with fairly long data records. This work proposes a simple method assuming that the shape parameter, the most difficult of the three parameters to estimate, does not vary over a relatively large region. This approach is applied to evaluate 31-year WRF-downscaled extreme maximum temperature through comparison with North American Regional Reanalysis (NARR) data. Uncertainty in GEV parameter estimates and the statistical significance in the differences of estimates between WRF and NARR are accounted for by conducting bootstrap resampling. Despite certain biases over parts of the United States, overall, WRF shows good agreement with NARR in the spatial pattern and magnitudes of GEV parameter estimates. Both WRF and NARR show a significant increase in extreme maximum temperature over the southern Great Plains and southeastern United States in January and over the western United States in July. The GEV model shows clear benefits from the regionally constant shape parameter assumption, for example, leading to estimates of the location and scale parameters of the model that show coherent spatial patterns.
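
    The regionally constant shape idea can be illustrated with the GEV return-level formula: each pixel keeps its own location and scale, while one shared shape estimate is plugged in everywhere. All parameter values below are hypothetical.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-period return level (quantile with exceedance probability 1/T)
    of a GEV distribution with location mu, scale sigma, shape xi."""
    p = 1.0 - 1.0 / T
    if abs(xi) < 1e-9:                 # Gumbel limit as xi -> 0
        return mu - sigma * math.log(-math.log(p))
    return mu + (sigma / xi) * ((-math.log(p)) ** (-xi) - 1.0)

# Regionally constant shape: each pixel has its own location/scale, but
# all pixels share one shape estimate.  Values here are hypothetical.
shared_xi = -0.1
pixels = {"A": (35.0, 2.0), "B": (38.5, 1.5)}
levels = {name: gev_return_level(mu, sigma, shared_xi, T=50)
          for name, (mu, sigma) in pixels.items()}
```

    Pooling the hard-to-estimate shape parameter across pixels reduces its sampling noise, which is the benefit the study reports for the spatial coherence of the remaining location and scale estimates.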

  9. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagan, Ross F.

    2016-08-29

    This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight toward the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  10. [Establish research model of post-marketing clinical safety evaluation for Chinese patent medicine].

    PubMed

    Zheng, Wen-ke; Liu, Zhi; Lei, Xiang; Tian, Ran; Zheng, Rui; Li, Nan; Ren, Jing-tian; Du, Xiao-xi; Shang, Hong-cai

    2015-09-01

    The safety of Chinese patent medicine has become a focus of social concern. It is necessary to carry out post-marketing clinical safety evaluation for Chinese patent medicine. However, there are no criteria to guide the related research, so it is urgent to set up a model and method to guide practice. Drawing on a series of clinical studies, we put forward several proposals: clarify the objective and content of clinical safety evaluation, determine the work flow, make a list of items for the safety evaluation project, and adopt a three-level classification of risk control. We set up a model of post-marketing clinical safety evaluation for Chinese patent medicine. Based on this model, the list of items can be used to rank medicine risks, and steps can then be taken for the different risks with the aim of lowering the risk level. Finally, the medicine can be managed by five steps in sequence: collecting risk signals, risk recognition, risk assessment, risk management, and after-effect assessment. We hope to provide new ideas for future research.

  11. Modelling tsunami inundation for risk analysis at the Andaman Sea Coast of Thailand

    NASA Astrophysics Data System (ADS)

    Kaiser, G.; Kortenhaus, A.

    2009-04-01

    The mega-tsunami of Dec. 26, 2004 strongly impacted the Andaman Sea coast of Thailand and devastated coastal ecosystems as well as towns, settlements, and tourism resorts. In addition to the tragic loss of many lives, the destruction or damage of life-supporting infrastructure, such as buildings, roads, and water & power supply, caused high economic losses in the region. To mitigate future tsunami impacts there is a need to assess the tsunami hazard and vulnerability in flood-prone areas at the Andaman Sea coast in order to determine the spatial distribution of risk and to develop risk management strategies. In the bilateral German-Thai project TRAIT, research is performed on integrated risk assessment for the provinces Phang Nga and Phuket in southern Thailand, including a hazard analysis, i.e., modelling tsunami propagation to the coast, tsunami wave breaking, and inundation characteristics, as well as a vulnerability analysis of the socio-economic and ecological systems, in order to determine the scenario-based, specific risk for the region. In this presentation, results of the hazard analysis and the inundation simulation are presented and discussed. Numerical modelling of tsunami propagation and inundation is an indispensable tool for risk analysis, risk management, and evacuation planning. While numerous investigations have modelled tsunami wave generation and propagation in the Indian Ocean, detailed inundation patterns, i.e., water depth and flow dynamics, are still lacking, yet for risk management and evacuation planning this knowledge is essential. As the accuracy of the inundation simulation depends strongly on the available bathymetric and topographic data, a multi-scale approach is chosen in this work. The ETOPO Global Relief Model as a bathymetric basis and the Shuttle Radar Topography Mission (SRTM90) have been widely applied in tsunami modelling approaches as these data are free and almost world

  12. Eliciting change in at-risk elders (ECARE): evaluation of an elder abuse intervention program.

    PubMed

    Mariam, Lydia Morris; McClure, Regina; Robinson, J B; Yang, Janet A

    2015-01-01

    The current study evaluated the effectiveness of a community-based elder abuse intervention program that assists suspected victims of elder abuse and self-neglect through a partnership with local law enforcement. This program, Eliciting Change in At-Risk Elders, involves building alliances with the elder and family members, connecting the elder to supportive services that reduce risk of further abuse, and utilizing motivational interviewing-type skills to help elders overcome ambivalence regarding making difficult life changes. Risk factors of elder abuse decreased over the course of the intervention and nearly three-quarters of participants made progress on their treatment goal, advancing at least one of Prochaska and DiClemente's (1983) stages of change (precontemplation, contemplation, preparation, action, and maintenance). Forty-three percent of elders moved into the stages of action and maintenance regarding their goal. The usefulness of eliciting change via longer-term relationships with vulnerable elders in entrenched elder abuse situations is discussed.

  13. Using a Systematic Conceptual Model for a Process Evaluation of a Middle School Obesity Risk-Reduction Nutrition Curriculum Intervention: "Choice, Control & Change"

    ERIC Educational Resources Information Center

    Lee, Heewon; Contento, Isobel R.; Koch, Pamela

    2013-01-01

    Objective: To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, "Choice, Control & Change", designed to promote dietary and physical activity behaviors that reduce obesity risk. Design: A process evaluation study based on a systematic conceptual model. Setting: Five…

  14. Value Encounters - Modeling and Analyzing Co-creation of Value

    NASA Astrophysics Data System (ADS)

    Weigand, Hans

    Recent marketing and management literature has introduced the concept of co-creation of value. Current value modeling approaches such as e3-value focus on the exchange of value rather than co-creation. In this paper, an extension to e3-value is proposed in the form of a “value encounter”. Value encounters are defined as interaction spaces where a group of actors meet and derive value by each one bringing in some of its own resources. They can be analyzed from multiple strategic perspectives, including knowledge management, social network management and operational management. Value encounter modeling can be instrumental in the context of service analysis and design.

  15. Values Engagement in Evaluation: Ideas, Illustrations, and Implications

    ERIC Educational Resources Information Center

    Hall, Jori N.; Ahn, Jeehae; Greene, Jennifer C.

    2012-01-01

    Values-engagement in evaluation involves both describing stakeholder values and prescribing certain values. Describing stakeholder values is common practice in responsive evaluation traditions. Prescribing or advocating particular values is only "explicitly" part of democratic, culturally responsive, critical, and other openly…

  16. A Novel Early Pregnancy Risk Prediction Model for Gestational Diabetes Mellitus.

    PubMed

    Sweeting, Arianne N; Wong, Jencia; Appelblom, Heidi; Ross, Glynis P; Kouru, Heikki; Williams, Paul F; Sairanen, Mikko; Hyett, Jon A

    2018-06-13

    Accurate early risk prediction for gestational diabetes mellitus (GDM) would target intervention and prevention in women at the highest risk. We evaluated novel biomarker predictors to develop a first-trimester risk prediction model in a large multiethnic cohort. Maternal clinical, aneuploidy and pre-eclampsia screening markers (PAPP-A, free hCGβ, mean arterial pressure, uterine artery pulsatility index) were measured prospectively at 11-13+6 weeks' gestation in 980 women (248 with GDM; 732 controls). Nonfasting glucose, lipids, adiponectin, leptin, lipocalin-2, and plasminogen activator inhibitor-2 were measured on banked serum. The relationship between marker multiples-of-the-median and GDM was examined with multivariate regression. Model predictive performance for early (< 24 weeks' gestation) and overall GDM diagnosis was evaluated by receiver operating characteristic curves. Glucose, triglycerides, leptin, and lipocalin-2 were higher, while adiponectin was lower, in GDM (p < 0.05). Lipocalin-2 performed best in Caucasians, and triglycerides in South Asians with GDM. Family history of diabetes, previous GDM, South/East Asian ethnicity, parity, BMI, PAPP-A, triglycerides, and lipocalin-2 were significant independent GDM predictors (all p < 0.01), achieving an area under the curve of 0.91 (95% confidence interval [CI] 0.89-0.94) overall, and 0.93 (95% CI 0.89-0.96) for early GDM, in a combined multivariate prediction model. A first-trimester risk prediction model, which incorporates novel maternal lipid markers, accurately identifies women at high risk of GDM, including early GDM. © 2018 S. Karger AG, Basel.

  17. Risk Prediction Models in Psychiatry: Toward a New Frontier for the Prevention of Mental Illnesses.

    PubMed

    Bernardini, Francesco; Attademo, Luigi; Cleary, Sean D; Luther, Charles; Shim, Ruth S; Quartesan, Roberto; Compton, Michael T

    2017-05-01

    We conducted a systematic, qualitative review of risk prediction models designed and tested for depression, bipolar disorder, generalized anxiety disorder, posttraumatic stress disorder, and psychotic disorders. Our aim was to understand the current state of research on risk prediction models for these 5 disorders and thus future directions as our field moves toward embracing prediction and prevention. Systematic searches of the entire MEDLINE electronic database were conducted independently by 2 of the authors (from 1960 through 2013) in July 2014 using defined search criteria. Search terms included risk prediction, predictive model, or prediction model combined with depression, bipolar, manic depressive, generalized anxiety, posttraumatic, PTSD, schizophrenia, or psychosis. We identified 268 articles based on the search terms and 3 criteria: published in English, provided empirical data (as opposed to review articles), and presented results pertaining to developing or validating a risk prediction model in which the outcome was the diagnosis of 1 of the 5 aforementioned mental illnesses. We selected 43 original research reports as a final set of articles to be qualitatively reviewed. The 2 independent reviewers abstracted 3 types of data (sample characteristics, variables included in the model, and reported model statistics) and reached consensus regarding any discrepant abstracted information. Twelve reports described models developed for prediction of major depressive disorder, 1 for bipolar disorder, 2 for generalized anxiety disorder, 4 for posttraumatic stress disorder, and 24 for psychotic disorders. Most studies reported on sensitivity, specificity, positive predictive value, negative predictive value, and area under the (receiver operating characteristic) curve. Recent studies demonstrate the feasibility of developing risk prediction models for psychiatric disorders (especially psychotic disorders). The field must now advance by (1) conducting more large

  18. Mexican-origin youth's risk behavior from adolescence to young adulthood: The role of familism values.

    PubMed

    Wheeler, Lorey A; Zeiders, Katharine H; Updegraff, Kimberly A; Umaña-Taylor, Adriana J; Rodríguez de Jesús, Sue A; Perez-Brena, Norma J

    2017-01-01

    Engagement in risk behavior has implications for individuals' academic achievement, health, and well-being, yet there is a paucity of developmental research on the role of culturally relevant strengths in individual and family differences in risk behavior involvement among ethnic minority youth. In this study, we used a longitudinal cohort-sequential design to chart intraindividual trajectories of risk behavior and test variation by gender and familism values in 492 youth from 12 to 22 years of age. Participants were older and younger siblings from 246 Mexican-origin families who reported on their risk behaviors in interviews spaced over 8 years. Multilevel cohort-sequential growth models revealed that youth reported an increase in risk behavior from 12 to 18 years of age, and then a decline to age 22. Male youth reported greater overall levels and a steeper increase in risk behavior from ages 12 to 18, compared to female youth. For familism values, on occasions when youth reported higher levels, they also reported lower levels of risk behavior (i.e., within-person effect). For sibling dyads characterized by higher average levels of familism values, youth reported lower average levels of risk behavior (i.e., between-family effect). Findings provide unique insights into risk behavior from adolescence to young adulthood among Mexican-origin youth. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. The multiple stressor ecological risk assessment for the mercury-contaminated South River and upper Shenandoah River using the Bayesian network-relative risk model.

    PubMed

    Landis, Wayne G; Ayre, Kimberley K; Johns, Annie F; Summers, Heather M; Stinson, Jonah; Harris, Meagan J; Herring, Carlie E; Markiewicz, April J

    2017-01-01

    We have conducted a regional scale risk assessment using the Bayesian Network Relative Risk Model (BN-RRM) to calculate the ecological risks to the South River and upper Shenandoah River study area. Four biological endpoints (smallmouth bass, white sucker, Belted Kingfisher, and Carolina Wren) and 4 abiotic endpoints (Fishing River Use, Swimming River Use, Boating River Use, and Water Quality Standards) were included in this risk assessment, based on stakeholder input. Although mercury (Hg) contamination was the original impetus for the site being remediated, other chemical and physical stressors were evaluated. There were 3 primary conclusions from the BN-RRM results. First, risk varies according to location, type and quality of habitat, and exposure to stressors within the landscape. The patterns of risk can be evaluated with reasonable certitude. Second, overall risk to abiotic endpoints was greater than overall risk to biotic endpoints. By including both biotic and abiotic endpoints, we are able to compare risk to endpoints that represent a wide range of stakeholder values. Third, whereas Hg reduction is the regulatory priority for the South River, Hg is not the only stressor driving risk to the endpoints. Ecological and habitat stressors contribute risk to the endpoints and should be considered when managing this site. This research provides the foundation for evaluating the risks of multiple stressors of the South River to a variety of endpoints. From this foundation, tools for the evaluation of management options and adaptive management have been forged. Integr Environ Assess Manag 2017;13:85-99. © 2016 SETAC.

  20. Radiation-Induced Leukemia at Doses Relevant to Radiation Therapy: Modeling Mechanisms and Estimating Risks

    NASA Technical Reports Server (NTRS)

    Shuryak, Igor; Sachs, Rainer K.; Hlatky, Lynn; Little, Mark P.; Hahnfeldt, Philip; Brenner, David J.

    2006-01-01

    Because many cancer patients are diagnosed earlier and live longer than in the past, second cancers induced by radiation therapy have become a clinically significant issue. An earlier biologically based model that was designed to estimate risks of high-dose radiation-induced solid cancers included initiation of stem cells to a premalignant state, inactivation of stem cells at high radiation doses, and proliferation of stem cells during cellular repopulation after inactivation. This earlier model predicted the risks of solid tumors induced by radiation therapy but overestimated the corresponding leukemia risks. Methods: To extend the model to radiation-induced leukemias, we analyzed, in addition to cellular initiation, inactivation, and proliferation, a repopulation mechanism specific to the hematopoietic system: long-range migration through the blood stream of hematopoietic stem cells (HSCs) from distant locations. Parameters for the model were derived from HSC biologic data in the literature and from leukemia risks among atomic bomb survivors who were subjected to much lower radiation doses. Results: Proliferating HSCs that migrate from sites distant from the high-dose region include few preleukemic HSCs, thus decreasing the high-dose leukemia risk. The extended model for leukemia provides risk estimates that are consistent with epidemiologic data for leukemia risk associated with radiation therapy over a wide dose range. For example, when applied to an earlier case-control study of 110,000 women undergoing radiotherapy for uterine cancer, the model predicted an excess relative risk (ERR) of 1.9 for leukemia among women who received a large inhomogeneous fractionated external beam dose to the bone marrow (mean = 14.9 Gy), consistent with the measured ERR (2.0, 95% confidence interval [CI] = 0.2 to 6.4; from 3.6 cases expected and 11 cases observed). As a corresponding example for brachytherapy, the predicted ERR of 0.80 among women who received an inhomogeneous low

  1. A TCP model for external beam treatment of intermediate-risk prostate cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Sean; Putten, Wil van der

    2013-03-15

    Purpose: Biological models offer the ability to predict clinical outcomes. The authors describe a model to predict the clinical response of intermediate-risk prostate cancer to external beam radiotherapy for a variety of fractionation regimes. Methods: A fully heterogeneous population averaged tumor control probability model was fit to clinical outcome data for hyper-, standard, and hypofractionated treatments. The tumor control probability model was then employed to predict the clinical outcome of extreme hypofractionation regimes, as utilized in stereotactic body radiotherapy. Results: The tumor control probability model achieves an excellent level of fit, an R² value of 0.93 and a root mean squared error of 1.31%, to the clinical outcome data for hyper-, standard, and hypofractionated treatments using realistic values for biological input parameters. Residuals ≤ 1.0% are produced by the tumor control probability model when compared to clinical outcome data for stereotactic body radiotherapy. Conclusions: The authors conclude that this tumor control probability model, used with the optimized radiosensitivity values obtained from the fit, is an appropriate mechanistic model for the analysis and evaluation of external beam RT plans with regard to tumor control for these clinical conditions.
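    The abstract does not give the model's functional form. As a point of orientation only, the standard Poisson tumor control probability with linear-quadratic (LQ) cell kill conveys the general shape of such models; the parameter values below (alpha, beta, clonogen number) are illustrative assumptions, not the authors' fitted radiosensitivities.

```python
import math

def tcp_poisson(n_fractions, dose_per_fraction, alpha=0.15, beta=0.05, n_clonogens=1e6):
    """Poisson TCP with LQ cell kill; all parameter values are illustrative."""
    d = dose_per_fraction
    # LQ surviving fraction after n fractions: exp(-n * (alpha*d + beta*d^2))
    surviving_fraction = math.exp(-n_fractions * (alpha * d + beta * d * d))
    # TCP = probability that no clonogen survives (Poisson statistics)
    return math.exp(-n_clonogens * surviving_fraction)
```

    Under these assumptions, 39 fractions of 2 Gy yield a far higher TCP than 5 fractions of 2 Gy, illustrating the dose-response behavior that models of this family are fitted to.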

  2. ARM Data-Oriented Metrics and Diagnostics Package for Climate Model Evaluation Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chengzhu; Xie, Shaocheng

    A Python-based metrics and diagnostics package is currently being developed by the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Infrastructure Team at Lawrence Livermore National Laboratory (LLNL) to facilitate the use of long-term, high-frequency measurements from the ARM Facility in evaluating the regional climate simulation of clouds, radiation, and precipitation. This metrics and diagnostics package computes climatological means of targeted climate model simulations and generates tables and plots for comparing the model simulation with ARM observational data. The Coupled Model Intercomparison Project (CMIP) model data sets are also included in the package to enable model intercomparison, as demonstrated in Zhang et al. (2017). The mean of the CMIP models can serve as a reference for individual models. Basic performance metrics are computed to measure the accuracy of the mean state and variability of climate models. The evaluated physical quantities include cloud fraction, temperature, relative humidity, cloud liquid water path, total column water vapor, precipitation, sensible and latent heat fluxes, and radiative fluxes, with plans to extend to more fields, such as aerosol and microphysics properties. Process-oriented diagnostics focusing on individual cloud- and precipitation-related phenomena are also being developed for the evaluation and development of specific model physical parameterizations. The version 1.0 package is designed based on data collected at ARM’s Southern Great Plains (SGP) Research Facility, with the plan to extend to other ARM sites. The metrics and diagnostics package is currently built upon standard Python libraries and additional Python packages developed by DOE (such as CDMS and CDAT). The ARM metrics and diagnostics package is available publicly with the hope that it can serve as an easy entry point for climate modelers to compare their models with ARM data. In this report, we first present the input data

  3. Advantages of new cardiovascular risk-assessment strategies in high-risk patients with hypertension.

    PubMed

    Ruilope, Luis M; Segura, Julian

    2005-10-01

    Accurate assessment of cardiovascular disease (CVD) risk in patients with hypertension is important when planning appropriate treatment of modifiable risk factors. The causes of CVD are multifactorial, and hypertension seldom exists as an isolated risk factor. Classic models of risk assessment are more accurate than a simple counting of risk factors, but they are not generalizable to all populations. In addition, the risk associated with hypertension is graded, continuous, and independent of other risk factors, and this is not reflected in classic models of risk assessment. This article is intended to review both classic and newer models of CVD risk assessment. MEDLINE was searched for articles published between 1990 and 2005 that contained the terms cardiovascular disease, hypertension, or risk assessment. Articles describing major clinical trials, new data about cardiovascular risk, or global risk stratification were selected for review. Some patients at high long-term risk for CVD events (eg, patients aged <50 years with multiple risk factors) may go untreated because they do not meet the absolute risk-intervention threshold of 20% risk over 10 years with the classic model. Recognition of the limitations of classic risk-assessment models led to new guidelines, particularly those of the European Society of Hypertension-European Society of Cardiology. These guidelines view hypertension as one of many risk and disease factors that require treatment to decrease risk. These newer guidelines include a more comprehensive range of risk factors and more finely graded blood pressure ranges to stratify patients by degree of risk. Whether they accurately predict CVD risk in most populations is not known. Evidence from the Valsartan Antihypertensive Long-term Use Evaluation (VALUE) study, which stratified patients by several risk and disease factors, highlights the predictive value of some newer CVD risk assessments. Modern risk assessments, which include blood pressure

  4. Geographic exposure risk of variant Creutzfeldt-Jakob disease in US blood donors: a risk-ranking model to evaluate alternative donor-deferral policies.

    PubMed

    Yang, Hong; Huang, Yin; Gregori, Luisa; Asher, David M; Bui, Travis; Forshee, Richard A; Anderson, Steven A

    2017-04-01

    Variant Creutzfeldt-Jakob disease (vCJD) has been transmitted by blood transfusion (TTvCJD). The US Food and Drug Administration (FDA) recommends deferring blood donors who resided in or traveled to 30 European countries where they may have been exposed to bovine spongiform encephalopathy (BSE) through beef consumption. Those recommendations warrant re-evaluation, because new cases of BSE and vCJD have markedly abated. The FDA developed a risk-ranking model to calculate the geographic vCJD risk using country-specific case rates and person-years of exposure of US blood donors. We used the reported country vCJD case rates, when available, or imputed vCJD case rates from reported BSE and UK beef exports during the risk period. We estimated the risk reduction and donor loss should the deferral be restricted to a few high-risk countries. We also estimated additional risk reduction by leukocyte reduction (LR) of red blood cells (RBCs). The United Kingdom, Ireland, and France had the greatest vCJD risk, contributing approximately 95% of the total risk. The model estimated that deferring US donors who spent extended periods of time in these three countries, combined with currently voluntary LR (95% of RBC units), would reduce the vCJD risk by 89.3%, a reduction similar to that achieved under the current policy (89.8%). Limiting deferrals to exposure in these three countries would potentially allow donations from an additional 100,000 donors who are currently deferred. Our analysis suggests that a deferral option focusing on the three highest risk countries would achieve a level of blood safety similar to that achieved by the current policy. © 2016 AABB.

  5. Cross-national validation of prognostic models predicting sickness absence and the added value of work environment variables.

    PubMed

    Roelen, Corné A M; Stapelfeldt, Christina M; Heymans, Martijn W; van Rhenen, Willem; Labriola, Merete; Nielsen, Claus V; Bültmann, Ute; Jensen, Chris

    2015-06-01

    To validate Dutch prognostic models including age, self-rated health and prior sickness absence (SA) for their ability to predict high SA in Danish eldercare. The added value of work environment variables to the models' risk discrimination was also investigated. 2,562 municipal eldercare workers (95% women) participated in the Working in Eldercare Survey. Predictor variables were measured by questionnaire at baseline in 2005. Prognostic models were validated for predictions of high (≥30) SA days and high (≥3) SA episodes retrieved from employer records during 1-year follow-up. The accuracy of predictions was assessed by calibration graphs, and the ability of the models to discriminate between high- and low-risk workers was investigated by ROC analysis. The added value of work environment variables was measured with the Integrated Discrimination Improvement (IDI). 1,930 workers had complete data for analysis. The models underestimated the risk of high SA in eldercare workers and the SA episodes model had to be re-calibrated to the Danish data. Discrimination was practically useful for the re-calibrated SA episodes model, but not the SA days model. Physical workload improved the SA days model (IDI = 0.40; 95% CI 0.19-0.60) and psychosocial work factors, particularly the quality of leadership (IDI = 0.70; 95% CI 0.53-0.86), improved the SA episodes model. The prognostic model predicting high SA days showed poor performance even after physical workload was added. The prognostic model predicting high SA episodes could be used to identify high-risk workers, especially when psychosocial work factors are added as predictor variables.
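    Discrimination in validation studies like this one is commonly summarized by the area under the ROC curve, which equals the probability that a randomly chosen high-SA worker receives a higher predicted risk than a randomly chosen low-SA worker. A minimal rank-based (Mann-Whitney) sketch, using made-up scores rather than the study's data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic (illustrative)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # Count positive-negative pairs ranked correctly; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.5 corresponds to no discrimination and 1.0 to perfect separation of high- and low-risk workers.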

  6. Ecological risk assessment of depleted uranium in the environment at Aberdeen Proving Ground

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clements, W.H.; Kennedy, P.L.; Myers, O.B.

    1993-01-01

    A preliminary ecological risk assessment was conducted to evaluate the effects of depleted uranium (DU) in the Aberdeen Proving Ground (APG) ecosystem and its potential for human health effects. An ecological risk assessment of DU should include the processes of hazard identification, dose-response assessment, exposure assessment, and risk characterization. Ecological risk assessments also should explicitly examine risks incurred by nonhuman as well as human populations, because risk assessments based only on human health do not always protect other species. To begin to assess the potential ecological risk of DU release to the environment we modeled DU transport through the principal components of the aquatic ecosystem at APG. We focused on the APG aquatic system because of the close proximity of the Chesapeake Bay and concerns about potential impacts on this ecosystem. Our objective in using a model to estimate environmental fate of DU is to ultimately reduce the uncertainty about predicted ecological risks due to DU from APG. The model functions to summarize information on the structure and functional properties of the APG aquatic system, to provide an exposure assessment by estimating the fate of DU in the environment, and to evaluate the sources of uncertainty about DU transport.

  7. Evaluating STORM skills training for managing people at risk of suicide.

    PubMed

    Gask, Linda; Dixon, Clare; Morriss, Richard; Appleby, Louis; Green, Gillian

    2006-06-01

    This paper reports a study evaluating the Skills Training On Risk Management (STORM) training initiative in three mental health services in the North-West of England, UK. Training for health workers has been widely advocated as a key route to suicide prevention. However, reports of evaluations are scarce in the literature. In previous research, we have demonstrated that the STORM intervention results in acquisition of new skills and can be disseminated in a community setting. The training was delivered during a 6-month period in 2002 by three mental health nurses who were seconded part-time to the project. The quantitative evaluation, which assessed change in attitudes, confidence, acquisition of skills and satisfaction, used a pretest/post-test design, with participants acting as their own controls. Qualitative interviews were conducted with a purposive sample of 16 participants to explore the impact on clinical practice, and with the three trainers at the end of the study. Data from 458 staff members were collected during a 6-month period. Positive changes in attitudes and confidence were shown, but previous evidence of skill acquisition was not replicated. Qualitative interviews revealed important insights into changes in clinical practice, particularly for less experienced or unqualified nursing staff, but also concerns about the lack of an educational culture to foster and support such interventions in practice within the organizations. STORM training for the assessment and management of suicide risk is both feasible and acceptable in mental health trusts. However, we remain uncertain of its longer-term impact, given the lack of engagement of senior staff in the enterprise and the absence of linked supervision and support from the organizational management to reinforce skill acquisition and development. We consider that regular supervision that links STORM training to actual clinical experience would be the ideal.

  8. Evaluation of in vitro models for predicting acidosis risk of barley grain in finishing beef cattle.

    PubMed

    Anele, U Y; Swift, M-L; McAllister, T A; Galyean, M L; Yang, W Z

    2015-10-01

    Our objective was to develop a model to predict the acidosis potential of barley based on the in vitro batch culture incubation of 50 samples varying in bulk density, starch content, processing method, growing location, and agronomic practices. The model was an adaptation of the acidosis index (calculated from a combination of in situ and in vitro analyses and from several components of grain chemical composition) developed in Australia for use in the feed industry to estimate the potential for grains to increase the risk of ruminal acidosis. Of the independent variables considered, DM disappearance at 6 h of incubation (DMD6) using reduced-strength (20%) buffer in the batch culture accounted for 90.5% of the variation in the acidosis index with a root mean square error (RMSE) of 4.46%. To evaluate our model using independent datasets (derived from previous batch culture studies using full-strength [100%] buffer), we performed another batch culture study using full-strength buffer. The full-strength buffer model using in vitro DMD6 (DMD6-FS) accounted for 66.5% of the variation in the acidosis index with an RMSE of 8.30%. When the new full-strength buffer model was applied to 3 independent datasets to predict acidosis, it accounted for 20.1, 28.5, and 30.2% of the variation in the calculated acidosis index. Significant (p < 0.001) mean bias was evident in 2 of the datasets, for which the DMD6 model underpredicted the acidosis index by 46.9 and 5.73%. Ranking of samples from the most diverse independent dataset using the DMD6-FS model and the Black (2008) model (calculated using in situ starch degradation) indicated the relationship between the rankings using Spearman's rank correlation was negative (ρ = -0.30; p = 0.059). When the reduced-strength buffer model was used, however, there were similarities in the acidosis index ranking of barley samples by the models as shown by the result of a correlation analysis between calculated (using the Australian model) and

  9. Food-chain contamination evaluations in ecological risk assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linder, G.

    Food-chain models have become increasingly important within the ecological risk assessment process. This is the case particularly when acute effects are not readily apparent, or the contaminants of concern are not readily detoxified, have a high likelihood for partitioning into lipids, or have specific target organs or tissues that may increase their significance in evaluating their potential adverse effects. An overview of food-chain models -- conceptual, theoretical, and empirical -- will be considered through a series of papers that will focus on their application within the ecological risk assessment process. Whether a food-chain evaluation is being developed to address relatively simple questions related to chronic effects of toxicants on target populations, or whether a more complex food-web model is being developed to address questions related to multiple-trophic level transfers of toxicants, the elements within the food chain contamination evaluation can be generalized to address the mechanisms of toxicant accumulation in individual organisms. This can then be incorporated into more elaborate models that consider these organismal-level processes within the context of a species life-history or community-level responses that may be associated with long-term exposures.

  10. Evaluation of different mathematical models and different b-value ranges of diffusion-weighted imaging in peripheral zone prostate cancer detection using b-value up to 4500 s/mm2

    PubMed Central

    Feng, Zhaoyan; Min, Xiangde; Margolis, Daniel J. A.; Duan, Caohui; Chen, Yuping; Sah, Vivek Kumar; Chaudhary, Nabin; Li, Basen; Ke, Zan; Zhang, Peipei; Wang, Liang

    2017-01-01

    Objectives To evaluate the diagnostic performance of different mathematical models and different b-value ranges of diffusion-weighted imaging (DWI) in peripheral zone prostate cancer (PZ PCa) detection. Methods Fifty-six patients with histologically proven PZ PCa who underwent DWI magnetic resonance imaging (MRI) using 21 b-values (0–4500 s/mm2) were included. The mean signal intensities of the regions of interest (ROIs) placed in benign PZs and cancerous tissues on DWI images were fitted using mono-exponential, bi-exponential, stretched-exponential, and kurtosis models. The b-values were divided into four ranges: 0–1000, 0–2000, 0–3200, and 0–4500 s/mm2, grouped as A, B, C, and D, respectively. ADC, D, D*, f, DDC, α, Dapp, and Kapp were estimated for each group. The adjusted coefficient of determination (R2) was calculated to measure goodness-of-fit. Receiver operating characteristic curve analysis was performed to evaluate the diagnostic performance of the parameters. Results All parameters except D* showed significant differences between cancerous tissues and benign PZs in each group. The area under the curve (AUC) values of ADC were comparable in groups C and D (p = 0.980) and were significantly higher than those in groups A and B (p < 0.05 for all). The AUCs of ADC and Kapp in groups B and C were similar (p = 0.07 and p = 0.954) and were significantly higher than those of the other parameters (p < 0.001 for all). The AUC of ADC in group D was slightly higher than that of Kapp (p = 0.002), and both were significantly higher than the other parameters (p < 0.001 for all). Conclusions ADC derived from the conventional mono-exponential model at high b-values (3200 s/mm2) is an optimal parameter for PZ PCa detection. PMID:28199367
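    The four signal models named in the abstract have standard closed forms. A sketch using the abstract's parameter names (ADC for mono-exponential; f, D, D* for the bi-exponential model; DDC, α for stretched-exponential; Dapp, Kapp for kurtosis) follows; fitting these to ROI signals would use nonlinear least squares, which is not shown.

```python
import numpy as np

def mono_exp(b, S0, ADC):
    # S(b) = S0 * exp(-b * ADC)
    return S0 * np.exp(-b * ADC)

def bi_exp(b, S0, f, D, D_star):
    # Fast compartment with fraction f and pseudo-diffusion D*, slow diffusion D
    return S0 * (f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D))

def stretched_exp(b, S0, DDC, alpha):
    # Distributed diffusion coefficient DDC, stretching exponent alpha in (0, 1]
    return S0 * np.exp(-(b * DDC) ** alpha)

def kurtosis_model(b, S0, D_app, K_app):
    # Diffusional kurtosis: apparent diffusion D_app and apparent kurtosis K_app
    return S0 * np.exp(-b * D_app + (b ** 2) * (D_app ** 2) * K_app / 6)
```

    All four reduce to S0 at b = 0; they diverge in how quickly the signal decays at the high b-values compared in the study.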

  11. Risk reclassification analysis investigating the added value of fatigue to sickness absence predictions.

    PubMed

    Roelen, Corné A M; Bültmann, Ute; Groothoff, Johan W; Twisk, Jos W R; Heymans, Martijn W

    2015-11-01

    Prognostic models including age, self-rated health and prior sickness absence (SA) have been found to predict high (≥ 30) SA days and high (≥ 3) SA episodes during 1-year follow-up. More predictors of high SA are needed to improve these SA prognostic models. The purpose of this study was to investigate fatigue as a new predictor in SA prognostic models by using risk reclassification methods and measures. This was a prospective cohort study with 1-year follow-up of 1,137 office workers. Fatigue was measured at baseline with the 20-item Checklist Individual Strength and added to the existing SA prognostic models. SA days and episodes during 1-year follow-up were retrieved from an occupational health service register. The added value of fatigue was investigated with net reclassification index (NRI) and integrated discrimination improvement (IDI) measures. In total, 579 (51%) office workers had complete data for analysis. Fatigue was prospectively associated with both high SA days and episodes. The NRI revealed that adding fatigue to the SA days model correctly reclassified workers with high SA days, but incorrectly reclassified workers without high SA days. The IDI indicated no improvement in risk discrimination by the SA days model. Both NRI and IDI showed that the prognostic model predicting high SA episodes did not improve when fatigue was added as a predictor variable. In the present study, fatigue increased false-positive rates, which may reduce the cost-effectiveness of interventions for preventing SA.
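The two reclassification measures used in this record can be sketched as follows (category-free variants shown; all variable names are illustrative):

```python
import numpy as np

def idi(y, p_old, p_new):
    # Integrated Discrimination Improvement: change in the mean predicted
    # risk gap between events (y = 1) and non-events (y = 0).
    y, p_old, p_new = map(np.asarray, (y, p_old, p_new))
    ev, ne = y == 1, y == 0
    return ((p_new[ev].mean() - p_old[ev].mean())
            - (p_new[ne].mean() - p_old[ne].mean()))

def continuous_nri(y, p_old, p_new):
    # Category-free NRI: net proportion of events whose risk moves up
    # plus net proportion of non-events whose risk moves down.
    y, p_old, p_new = map(np.asarray, (y, p_old, p_new))
    ev, ne = y == 1, y == 0
    up, down = p_new > p_old, p_new < p_old
    return ((up[ev].mean() - down[ev].mean())
            + (down[ne].mean() - up[ne].mean()))
```

A positive IDI or NRI indicates that the extended model (here, with fatigue added) reclassifies subjects in the right direction on balance; the study found the opposite for workers without high SA days.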

  12. PNPLA3 I148M (rs738409) genetic variant and age at onset of at-risk alcohol consumption are independent risk factors for alcoholic cirrhosis.

    PubMed

    Burza, Maria Antonella; Molinaro, Antonio; Attilia, Maria Luisa; Rotondo, Claudia; Attilia, Fabio; Ceccanti, Mauro; Ferri, Flaminia; Maldarelli, Federica; Maffongelli, Angela; De Santis, Adriano; Attili, Adolfo Francesco; Romeo, Stefano; Ginanni Corradini, Stefano

    2014-04-01

    Environmental and genetic factors contribute to alcoholic cirrhosis onset. In particular, age at exposure to liver stressors has been shown to be important in progression to fibrosis in hepatitis C individuals. However, no definite data on the role of age at onset of at-risk alcohol consumption are available. Moreover, patatin-like phospholipase domain-containing protein 3 (PNPLA3) I148M (rs738409) variant has been associated with alcoholic cirrhosis, but only in cross-sectional studies. The aim of this study was to investigate the role of age at onset of at-risk alcohol consumption and PNPLA3 I148M variant on alcoholic cirrhosis incidence. A total of 384 at-risk alcohol drinkers were retrospectively examined. The association among age at onset of at-risk alcohol consumption, PNPLA3 I148M variant and cirrhosis incidence was tested. A higher incidence of alcoholic cirrhosis was observed in individuals with an older (≥24 years) compared with a younger (<24) age at onset of at-risk alcohol consumption (P-value < 0.001). Moreover, PNPLA3 148M allele carriers showed an increased incidence of cirrhosis (P-value < 0.001). Both age at onset of at-risk alcohol consumption and PNPLA3 148M allele were independent risk factors for developing cirrhosis (H.R. (95% C.I.): 2.76 (2.18-3.50), P-value < 0.001; 1.53 (1.07-2.19), P-value = 0.021, respectively). The 148M allele was associated with a two-fold increased risk of cirrhosis in individuals with a younger compared with an older age at onset of at-risk alcohol consumption (H.R. (95% C.I.): 3.03 (1.53-6.00) vs. 1.61 (1.09-2.38)). Age at onset of at-risk alcohol consumption and PNPLA3 I148M genetic variant are independently associated with alcoholic cirrhosis incidence. © 2013 The Authors. Liver International published by John Wiley & Sons Ltd.

  13. Accuracy, risk and the intrinsic value of diagnostic imaging: a review of the cost-utility literature.

    PubMed

    Otero, Hansel J; Fang, Chi H; Sekar, Meera; Ward, Robert J; Neumann, Peter J

    2012-05-01

    The aim of this study was to systematically review the reporting of the value of imaging unrelated to treatment consequences and test characteristics in all imaging-related published cost-utility analyses (CUAs) in the medical literature. All CUAs published between 1976 and 2008 evaluating diagnostic imaging technologies contained in the CEA Registry, a publicly available comprehensive database of health-related CUAs, were screened. Publication characteristics, imaging modality, and the inclusion of test characteristics including accuracy, costs, risks, and the potential value unrelated to treatment consequences (e.g., reassurance or anxiety) were assessed. Ninety-six published CUAs evaluating 155 different imaging technologies were included in the final sample; 27 studies were published in imaging-specialized journals. Fifty-two studies (54%) evaluated the performance of a single imaging modality, while 44 studies (46%) compared two or more different imaging modalities. The most common areas of interest were cardiovascular (45%) and neuroradiology (17%). Forty-two technologies (27%) concerned ultrasound, while 34 (22%) concerned magnetic resonance. Seventy-nine (51%) technologies used ionizing radiation. Test accuracy was reported or calculated for 90% (n = 133 and n = 5, respectively) and assumed perfect (reference test or gold-standard test without alternative testing strategy to capture false-negatives and false-positives) for 8% (n = 12) of technologies. Only 22 studies (23%) assessing 40 imaging technologies (26%) considered inconclusive or indeterminate results. The risk of testing was reported for 32 imaging technologies (21%). Fifteen studies (16%) considered the value of diagnostic imaging unrelated to treatment. Four studies incorporated it as quality-of-life adjustments, while 10 studies mentioned it only in their discussions or as a limitation. The intrinsic value of imaging (the value of imaging unrelated to treatment) has not been appropriately defined.

  14. Quantifying the predictive accuracy of time-to-event models in the presence of competing risks.

    PubMed

    Schoop, Rotraut; Beyersmann, Jan; Schumacher, Martin; Binder, Harald

    2011-02-01

    Prognostic models for time-to-event data play a prominent role in therapy assignment, risk stratification and inter-hospital quality assurance. The assessment of their prognostic value is vital not only for responsible resource allocation, but also for their widespread acceptance. The additional presence of competing risks to the event of interest requires proper handling not only on the model building side, but also during assessment. Research into methods for the evaluation of the prognostic potential of models accounting for competing risks is still needed, as most proposed methods measure either their discrimination or calibration, but do not examine both simultaneously. We adapt the prediction error proposal of Graf et al. (Statistics in Medicine 1999, 18, 2529–2545) and Gerds and Schumacher (Biometrical Journal 2006, 48, 1029–1040) to handle models with competing risks, i.e. more than one possible event type, and introduce a consistent estimator. A simulation study investigating the behaviour of the estimator in small sample size situations and for different levels of censoring together with a real data application follows.
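A stripped-down version of the adapted prediction error can be sketched as a Brier score for the cause-specific cumulative incidence; the censoring weights, which the cited proposals handle via inverse-probability-of-censoring weighting, are omitted here for brevity:

```python
import numpy as np

def brier_competing_risks(t, event_times, event_types, pred_cif, cause=1):
    # Simplified prediction error at time t for the event type of interest.
    # pred_cif: model-predicted cumulative incidence of `cause` by time t,
    # one value per subject. Censoring weights (IPCW) are omitted.
    event_times = np.asarray(event_times, dtype=float)
    event_types = np.asarray(event_types)
    pred_cif = np.asarray(pred_cif, dtype=float)
    # Observed status: 1 if the event of interest occurred by t, else 0.
    observed = ((event_times <= t) & (event_types == cause)).astype(float)
    return np.mean((observed - pred_cif) ** 2)
```

The score is zero for a perfect prediction and combines discrimination and calibration in one number, which is the property the abstract highlights over methods that measure only one of the two.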

  15. Baseline ecological risk assessment of the Calcasieu Estuary, Louisiana: 2. An evaluation of the predictive ability of effects-based sediment quality guidelines

    USGS Publications Warehouse

    MacDonald, Donald D.; Ingersoll, Christopher G.; Smorong, Dawn E.; Sinclair, Jesse A.; Lindskoog, Rebekka; Wang, Ning; Severn, Corrine; Gouguet, Ron; Meyer, John; Field, Jay

    2011-01-01

    Three sets of effects-based sediment-quality guidelines (SQGs) were evaluated to support the selection of sediment-quality benchmarks for assessing risks to benthic invertebrates in the Calcasieu Estuary, Louisiana. These SQGs included probable effect concentrations (PECs), effects range median values (ERMs), and logistic regression model (LRMs)-based T50 values. The results of this investigation indicate that all three sets of SQGs tend to underestimate sediment toxicity in the Calcasieu Estuary (i.e., relative to the national data sets), as evaluated using the results of 10-day toxicity tests with the amphipod, Hyalella azteca, or Ampelisca abdita, and 28-day whole-sediment toxicity tests with the H. azteca. These results emphasize the importance of deriving site-specific toxicity thresholds for assessing risks to benthic invertebrates.

  16. The Comprehensive Evaluation Method of Supervision Risk in Electricity Transaction Based on Unascertained Rational Number

    NASA Astrophysics Data System (ADS)

    Haining, Wang; Lei, Wang; Qian, Zhang; Zongqiang, Zheng; Hongyu, Zhou; Chuncheng, Gao

    2018-03-01

    To address the uncertainty inherent in the comprehensive evaluation of supervision risk in electricity transactions, this paper uses unascertained rational numbers to evaluate supervision risk, obtaining the possible evaluation results with their corresponding credibilities and quantifying the risk indexes. The model yields the risk degree of each index, which makes it easier for electricity transaction supervisors to identify transaction risk and determine the risk level, supporting decision-making and enabling effective supervision of the risk. The results of the case analysis verify the effectiveness of the model.

  17. Debates—Perspectives on socio-hydrology: Modeling flood risk as a public policy problem

    NASA Astrophysics Data System (ADS)

    Gober, Patricia; Wheater, Howard S.

    2015-06-01

    Socio-hydrology views human activities as endogenous to water system dynamics; it is the interaction between human and biophysical processes that threatens the viability of current water systems through positive feedbacks and unintended consequences. Di Baldassarre et al. implement socio-hydrology as a flood risk problem using the concept of social memory as a vehicle to link human perceptions to flood damage. Their mathematical model has heuristic value in comparing potential flood damages in green versus technological societies. It can also support communities in exploring the potential consequences of policy decisions and evaluating critical policy tradeoffs, for example, between flood protection and economic development. The concept of social memory does not, however, adequately capture the social processes whereby public perceptions are translated into policy action, including the pivotal role played by the media in intensifying or attenuating perceived flood risk, the success of policy entrepreneurs in keeping flood hazard on the public agenda during short windows of opportunity for policy action, and different societal approaches to managing flood risk that derive from cultural values and economic interests. We endorse the value of seeking to capture these dynamics in a simplified conceptual framework, but favor a broader conceptualization of socio-hydrology that includes a knowledge exchange component, including the way modeling insights and scientific results are communicated to floodplain managers. The social processes used to disseminate the products of socio-hydrological research are as important as the research results themselves in determining whether modeling is used for real-world decision making.

  18. LIFETIME LUNG CANCER RISKS ASSOCIATED WITH INDOOR RADON EXPOSURE BASED ON VARIOUS RADON RISK MODELS FOR CANADIAN POPULATION.

    PubMed

    Chen, Jing

    2017-04-01

    This study calculates and compares the lifetime lung cancer risks associated with indoor radon exposure based on well-known risk models in the literature; two risk models are from joint studies among miners and the other three models were developed from pooling studies on residential radon exposure from China, Europe and North America respectively. The aim of this article is to make clear that the various models are mathematical descriptions of epidemiologically observed real risks in different environmental settings. The risk from exposure to indoor radon is real and it is normal that variations could exist among different risk models even when they were applied to the same dataset. The results show that lifetime risk estimates vary significantly between the various risk models considered here: the model based on the European residential data provides the lowest risk estimates, while models based on the European miners and Chinese residential pooling with complete dosimetry give the highest values. The lifetime risk estimates based on the EPA/BEIR-VI model lie within this range and agree reasonably well with the averages of risk estimates from the five risk models considered in this study. © Crown copyright 2016.

  19. A simulation model for risk assessment of turbine wheels

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Hage, Richard T.

    1991-01-01

    A simulation model has been successfully developed to evaluate the risk of the Space Shuttle auxiliary power unit (APU) turbine wheels for a specific inspection policy. Besides being an effective tool for risk/reliability evaluation, the simulation model also allows the analyst to study the trade-offs between wheel reliability, wheel life, inspection interval, and rejection crack size. For example, in the APU application, sensitivity analysis results showed that the wheel life limit has the least effect on wheel reliability when compared to the effect of the inspection interval and the rejection crack size. In summary, the simulation model developed represents a flexible tool to predict turbine wheel reliability and study the risk under different inspection policies.
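A toy Monte Carlo sketch of the trade-off the abstract describes; crack sizes, growth rate and detection probability are invented for illustration and are not APU data:

```python
import random

def simulate_wheel(life_limit_hr, inspect_interval_hr, reject_crack_in,
                   growth_in_per_hr, detect_prob, n_trials=20_000, seed=0):
    # Toy model: a crack initiates at a random time, grows linearly, and the
    # wheel fails if the crack reaches a critical size before the wheel is
    # retired at its life limit or rejected at a periodic inspection.
    rng = random.Random(seed)
    critical_crack_in = 2.0 * reject_crack_in  # illustrative failure size
    failures = 0
    for _ in range(n_trials):
        t_init = rng.uniform(0.0, 2.0 * life_limit_hr)  # crack initiation time
        t_fail = t_init + critical_crack_in / growth_in_per_hr
        if t_fail > life_limit_hr:
            continue  # wheel retired (or crack too late) before failure
        failed = True
        t = inspect_interval_hr
        while t < t_fail:
            crack = max(0.0, t - t_init) * growth_in_per_hr
            if crack >= reject_crack_in and rng.random() < detect_prob:
                failed = False  # crack found, wheel rejected
                break
            t += inspect_interval_hr
        failures += failed
    return failures / n_trials
```

Even this toy version reproduces the qualitative finding quoted above: shortening the inspection interval or the rejection crack size reduces the failure probability far more than changing the life limit does.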

  20. A simulation model for risk assessment of turbine wheels

    NASA Astrophysics Data System (ADS)

    Safie, Fayssal M.; Hage, Richard T.

    A simulation model has been successfully developed to evaluate the risk of the Space Shuttle auxiliary power unit (APU) turbine wheels for a specific inspection policy. Besides being an effective tool for risk/reliability evaluation, the simulation model also allows the analyst to study the trade-offs between wheel reliability, wheel life, inspection interval, and rejection crack size. For example, in the APU application, sensitivity analysis results showed that the wheel life limit has the least effect on wheel reliability when compared to the effect of the inspection interval and the rejection crack size. In summary, the simulation model developed represents a flexible tool to predict turbine wheel reliability and study the risk under different inspection policies.

  1. Risk control and the minimum significant risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seiler, F.A.; Alvarez, J.L.

    1996-06-01

    Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented.

  2. The Global Earthquake Model and Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Smolka, A. J.

    2015-12-01

    Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples for how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake-engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito/Ecuador. In agreement with GEM's collaborative approach, all

  3. Identifying habitats at risk: simple models can reveal complex ecosystem dynamics.

    PubMed

    Maxwell, Paul S; Pitt, Kylie A; Olds, Andrew D; Rissik, David; Connolly, Rod M

    2015-03-01

    The relationship between ecological impact and ecosystem structure is often strongly nonlinear, so that small increases in impact levels can cause a disproportionately large response in ecosystem structure. Nonlinear ecosystem responses can be difficult to predict because locally relevant data sets can be difficult or impossible to obtain. Bayesian networks (BNs) are an emerging tool that can help managers to define ecosystem relationships using a range of data types from comprehensive quantitative data sets to expert opinion. We show how a simple BN can reveal nonlinear dynamics in seagrass ecosystems using ecological relationships sourced from the literature. We first developed a conceptual diagram by cataloguing the ecological responses of seagrasses to a range of drivers and impacts. We used the conceptual diagram to develop a BN populated with values sourced from published studies. We then applied the BN to show that the amount of initial seagrass biomass has a mitigating effect on the level of impact a meadow can withstand without loss, and that meadow recovery can often require disproportionately large improvements in impact levels. This mitigating effect resulted in the middle ranges of impact levels having a wide likelihood of seagrass presence, a situation known as bistability. Finally, we applied the model in a case study to identify the risk of loss and the likelihood of recovery for the conservation and management of seagrass meadows in Moreton Bay, Queensland, Australia. We used the model to predict the likelihood of bistability in 23 locations in the Bay. The model predicted bistability in seven locations, most of which have experienced seagrass loss at some stage in the past 25 years, providing essential information for potential future restoration efforts. Our results demonstrate the capacity of simple, flexible modeling tools to facilitate collation and synthesis of disparate information. This approach can be adopted in the initial stages of

  4. A systematic and critical review of model-based economic evaluations of pharmacotherapeutics in patients with bipolar disorder.

    PubMed

    Mohiuddin, Syed

    2014-08-01

    Bipolar disorder (BD) is a chronic and relapsing mental illness with a considerable health-related and economic burden. The primary goal of pharmacotherapeutics for BD is to improve patients' well-being. The use of decision-analytic models is key in assessing the added value of the pharmacotherapeutics aimed at treating the illness, but concerns have been expressed about the appropriateness of different modelling techniques and about the transparency in the reporting of economic evaluations. This paper aimed to identify and critically appraise published model-based economic evaluations of pharmacotherapeutics in BD patients. A systematic review combining common terms for BD and economic evaluation was conducted in MEDLINE, EMBASE, PSYCINFO and ECONLIT. Studies identified were summarised and critically appraised in terms of the use of modelling technique, model structure and data sources. Considering the prognosis and management of BD, the possible benefits and limitations of each modelling technique are discussed. Fourteen studies were identified using model-based economic evaluations of pharmacotherapeutics in BD patients. Of these 14 studies, nine used Markov, three used discrete-event simulation (DES) and two used decision-tree models. Most of the studies (n = 11) did not include the rationale for the choice of modelling technique undertaken. Half of the studies did not include the risk of mortality. Surprisingly, no study considered the risk of having a mixed bipolar episode. This review identified various modelling issues that could potentially reduce the comparability of one pharmacotherapeutic intervention with another. Better use and reporting of the modelling techniques in the future studies are essential. DES modelling appears to be a flexible and comprehensive technique for evaluating the comparability of BD treatment options because of its greater flexibility of depicting the disease progression over time. However, depending on the research question

  5. Patterning ecological risk of pesticide contamination at the river basin scale.

    PubMed

    Faggiano, Leslie; de Zwart, Dick; García-Berthou, Emili; Lek, Sovan; Gevrey, Muriel

    2010-05-01

    Ecological risk assessment was conducted to determine the risk posed by pesticide mixtures to the Adour-Garonne river basin (south-western France). The objectives of this study were to assess the general state of this basin with regard to pesticide contamination using a risk assessment procedure and to detect patterns in toxic mixture assemblages through a self-organizing map (SOM) methodology in order to identify the locations at risk. Exposure assessment, risk assessment with species sensitivity distribution, and mixture toxicity rules were used to compute six relative risk predictors for different toxic modes of action: the multi-substance potentially affected fraction of species depending on the toxic mode of action of compounds found in the mixture (msPAF CA(TMoA) values). Those predictors computed for the 131 sampling sites assessed in this study were then patterned through the SOM learning process. Four clusters of sampling sites exhibiting similar toxic assemblages were identified. In the first cluster, which comprised 83% of the sampling sites, the risk caused by pesticide mixture toward aquatic species was weak (mean msPAF value for those sites < 0.0036%), while in another cluster the risk was significant (mean msPAF < 1.09%). GIS mapping allowed an interesting spatial pattern of the distribution of sampling sites for each cluster to be highlighted, with a significant and highly localized risk in the French department called "Lot et Garonne". The combined use of the SOM methodology, mixture toxicity modelling and a clear geo-referenced representation of results not only revealed the general state of the Adour-Garonne basin with regard to contamination by pesticides but also made it possible to analyze the spatial pattern of toxic mixture assemblages in order to prioritize the locations at risk and to detect the group of compounds causing the greatest risk at the basin scale. Copyright 2010 Elsevier B.V. All rights reserved.
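The msPAF predictors named above combine species sensitivity distributions (SSDs) with the classic mixture rules; a minimal sketch, assuming log-normal SSDs with μ and σ on a log10 scale and illustrative function names:

```python
import math

def paf_lognormal_ssd(conc, log10_mu, log10_sigma):
    # Potentially Affected Fraction for one compound: the standard normal
    # CDF of the log10 concentration under a log-normal SSD.
    z = (math.log10(conc) - log10_mu) / log10_sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mspaf_ca(concs, log10_mus, shared_log10_sigma):
    # Concentration addition within one toxic mode of action: sum hazard
    # units relative to each compound's SSD median, then evaluate the
    # (shared-slope) SSD at the summed toxic pressure.
    hu = sum(c / 10.0 ** mu for c, mu in zip(concs, log10_mus))
    z = math.log10(hu) / shared_log10_sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mspaf_ra(pafs):
    # Response addition across toxic modes of action (independent action).
    prod = 1.0
    for p in pafs:
        prod *= (1.0 - p)
    return 1.0 - prod
```

A site's msPAF CA(TMoA) value would be computed per mode of action with mspaf_ca, and the per-mode fractions could then be aggregated with mspaf_ra.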

  6. Contract Design: Risk Management and Evaluation

    PubMed Central

    Amelung, Volker E.; Juhnke, Christin

    2018-01-01

    Introduction: Effective risk adjustment is an aspect that is given more and more weight against the background of competitive health insurance systems and vital healthcare systems. The risk structure of the providers plays a vital role in Pay for Performance. A prerequisite for optimal incentive-based service models is a (partial) dependence of the agent’s returns on the provider’s gain level. Integrated care systems as well as accountable care organisations (ACOs) in the US and similar concepts in other countries are advocated as an effective method of improving the performance of healthcare systems. These systems outline a payment and care delivery model that intends to tie provider reimbursements to predefined quality metrics. In this way, the total costs of care are to be reduced. Methods: Little is known about the contractual design and the main challenges of delegating “accountability” to these new kinds of organisations and/or contracts. The costs of market utilisation are highly relevant for the conception of healthcare contracts; furthermore, information asymmetries and contract-specific investments are an obstacle to the efficient operation of ACOs. A comprehensive literature review on methods of designing contracts in Integrated Care was conducted. The research question in this article focuses on how reimbursement strategies, evaluation of measures and methods of risk adjustment can best be integrated in healthcare contracting. Results: Each integrated care contract includes challenges for both payers and providers without having sufficient empirical data on both sides. These challenges are of a clinical, administrative or financial nature. Risk adjusted contracts ensure that the reimbursement roughly matches the true costs resulting from the morbidity of a population. If the reimbursement of the care provider corresponds to the actual expenses for an individual/population, the problem of risk selection is greatly reduced. The currently used methods of risk adjustment

  7. Evaluation of Two Approaches to Defining Extinction Risk under the U.S. Endangered Species Act.

    PubMed

    Thompson, Grant G; Maguire, Lynn A; Regan, Tracey J

    2018-05-01

    The predominant definition of extinction risk in conservation biology involves evaluating the cumulative distribution function (CDF) of extinction time at a particular point (the "time horizon"). Using the principles of decision theory, this article develops an alternative definition of extinction risk as the expected loss (EL) to society resulting from eventual extinction of a species. Distinct roles are identified for time preference and risk aversion. Ranges of tentative values for the parameters of the two approaches are proposed, and the performances of the two approaches are compared and contrasted for a small set of real-world species with published extinction time distributions and a large set of hypothetical extinction time distributions. Potential issues with each approach are evaluated, and the EL approach is recommended as the better of the two. The CDF approach suffers from the fact that extinctions that occur at any time before the specified time horizon are weighted equally, while extinctions that occur beyond the specified time horizon receive no weight at all. It also suffers from the fact that the time horizon does not correspond to any natural phenomenon, and so is impossible to specify nonarbitrarily; yet the results can depend critically on the specified value. In contrast, the EL approach has the advantage of weighting extinction time continuously, with no artificial time horizon, and the parameters of the approach (the rates of time preference and risk aversion) do correspond to natural phenomena, and so can be specified nonarbitrarily. © 2017 Society for Risk Analysis.
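The contrast between the two definitions can be sketched numerically; a pure exponential discount stands in for the article's loss function, and risk aversion is omitted from this simplified sketch:

```python
import math
import random

def cdf_risk(extinction_times, horizon):
    # CDF approach: P(extinction time <= horizon). Extinctions before the
    # horizon all count fully; extinctions after it count not at all.
    times = list(extinction_times)
    return sum(t <= horizon for t in times) / len(times)

def expected_loss_risk(extinction_times, discount_rate):
    # Simplified EL approach: a unit loss incurred at extinction, discounted
    # continuously at `discount_rate`, averaged over the distribution.
    times = list(extinction_times)
    return sum(math.exp(-discount_rate * t) for t in times) / len(times)
```

For exponentially distributed extinction times with mean 200 years, cdf_risk(times, 100) converges to 1 − e^(−0.5) ≈ 0.39, while expected_loss_risk(times, 0.02) converges to λ/(λ + ρ) = 0.005/0.025 = 0.2; unlike the CDF value, the EL value keeps responding to changes in the tail beyond any fixed horizon, which is the advantage the abstract argues for.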

  8. The Pedagogical Influences of a Value-Added Model Evaluation System from the Perspectives of Elementary School Teachers in North Georgia: A Phenomenological Study

    ERIC Educational Resources Information Center

    Shugart, Kyle Keller

    2017-01-01

    The purpose of this phenomenological study was to describe the pedagogical influences of the value-added model of evaluation as experienced by elementary school teachers in a North Georgia suburban school district. A transcendental phenomenological design was used to provide a voice to (N = 12) elementary school teachers evaluated with a…

  9. Refinement of arsenic attributable health risks in rural Pakistan using population specific dietary intake values.

    PubMed

    Rasheed, Hifza; Slack, Rebecca; Kay, Paul; Gong, Yun Yun

    2017-02-01

    Previous risk assessment studies have often utilised generic consumption or intake values when evaluating ingestion exposure pathways. If these values do not accurately reflect the country or scenario in question, the resulting risk assessment will not provide a meaningful representation of cancer risks in that particular country/scenario. This study sought to determine water and food intake parameters for one region in South Asia, rural Pakistan, and assess the role population-specific intake parameters play in cancer risk assessment. A questionnaire was developed to collect data on sociodemographic features and 24-h water and food consumption patterns from a rural community. The impact of dietary differences on cancer susceptibility linked to arsenic exposure was evaluated by calculating cancer risks using the data collected in the current study against standard water and food intake levels for the USA, Europe and Asia. A probabilistic cancer risk assessment was performed for each set of intake values of this study. Average daily total water intake based on drinking direct plain water and indirect water from food and beverages was found to be 3.5 L/day (95% CI: 3.38, 3.57), exceeding the US Environmental Protection Agency's default (2.5 L/day) and the World Health Organization's recommended intake value (2 L/day). Average daily rice intake (469 g/day) was found to be lower than in India and Bangladesh, whereas wheat intake (402 g/day) was higher than intake reported for the USA, Europe and Asian sub-regions. Consequently, arsenic-associated cumulative cancer risks determined for daily water intake were found to be 17 chances in children aged 3-6 years (95% CI: 0.0014, 0.0017), 14 in children aged 6-16 years (95% CI: 0.001, 0.0011) and 6 in adults aged 16-67 years (95% CI: 0.0006, 0.0006) in a population size of 10,000. This is higher than the risks estimated using the US Environmental Protection Agency's and World Health Organization's default recommended water intake levels.
Rice
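    The cumulative risks above follow the standard ingestion-exposure form, risk = (C x IR x EF x ED) / (BW x AT) x CSF. A minimal sketch, using the EPA oral slope factor for inorganic arsenic (1.5 per mg/kg-day) but otherwise hypothetical concentration, body weight and exposure values (the study's actual exposure parameters are not reproduced here):

```python
import random

def cancer_risk(c_water, intake_l_day, ef_days, ed_years, bw_kg, csf=1.5):
    """Incremental lifetime cancer risk via the standard EPA ingestion equation:
    CDI = (C * IR * EF * ED) / (BW * AT); risk = CDI * CSF.
    csf=1.5 (mg/kg-day)^-1 is the EPA oral slope factor for inorganic arsenic."""
    at_days = 70 * 365  # averaging time for carcinogens: 70-year lifetime
    cdi = (c_water * intake_l_day * ef_days * ed_years) / (bw_kg * at_days)
    return cdi * csf

def monte_carlo_risk(n=10_000, seed=1):
    """Probabilistic version: sample daily intake from a normal distribution.
    Mean/SD are hypothetical, chosen only to echo the 3.5 L/day estimate above."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        intake = max(0.5, rng.gauss(3.5, 0.5))  # L/day, truncated below at 0.5
        risks.append(cancer_risk(c_water=0.05, intake_l_day=intake,
                                 ef_days=365, ed_years=30, bw_kg=60))
    risks.sort()
    return risks[n // 2], risks[int(0.95 * n)]  # median and 95th percentile
```

    A median risk of, say, 0.0019 would read as roughly 19 chances in 10,000, the same scale used in the abstract.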

  10. An Evaluation of a School-Based, Peer-Facilitated, Healthy Relationship Program for At-Risk Adolescents

    ERIC Educational Resources Information Center

    McLeod, David Axlyn; Jones, Robin; Cramer, Elizabeth P.

    2015-01-01

    There are few evaluations of peer-facilitated teenage dating violence prevention programs in the literature. To begin to address this gap, this project assessed the effectiveness of a school-based, peer-facilitated healthy relationships program among academically at-risk students. Two hundred and ninety-one ninth graders of mixed race and gender…

  11. Uptake of Predictive Genetic Testing and Cardiac Evaluation for Children at Risk for an Inherited Arrhythmia or Cardiomyopathy.

    PubMed

    Christian, Susan; Atallah, Joseph; Clegg, Robin; Giuffre, Michael; Huculak, Cathleen; Dzwiniel, Tara; Parboosingh, Jillian; Taylor, Sherryl; Somerville, Martin

    2018-02-01

    Predictive genetic testing in minors should be considered when clinical intervention is available. Children who carry a pathogenic variant for an inherited arrhythmia or cardiomyopathy require regular cardiac screening and may be prescribed medication and/or be told to modify their physical activity. Medical genetics and pediatric cardiology charts were reviewed to identify factors associated with uptake of genetic testing and cardiac evaluation for children at risk for long QT syndrome, hypertrophic cardiomyopathy or arrhythmogenic right ventricular cardiomyopathy. The data collected included genetic diagnosis, clinical symptoms in the carrier parent, number of children under 18 years of age, age of children, family history of sudden cardiac arrest/death, uptake of cardiac evaluation and, if evaluated, phenotype for each child. We identified 97 at-risk children from 58 families found to carry a pathogenic variant for one of these conditions. Sixty-six percent of the families pursued genetic testing, and 73% underwent cardiac screening when it was recommended. Declining predictive genetic testing was significantly associated with genetic specialist recommendation (p < 0.001) and having an asymptomatic carrier father (p = 0.006). Cardiac evaluation was significantly associated with uptake of genetic testing (p = 0.007). This study provides a greater understanding of factors associated with uptake of genetic testing and cardiac evaluation in children at risk of an inherited arrhythmia or cardiomyopathy. It also identifies a need to educate families about the importance of cardiac evaluation even in the absence of genetic testing.

  12. Achieving Value in Primary Care: The Primary Care Value Model.

    PubMed

    Rollow, William; Cucchiara, Peter

    2016-03-01

    The patient-centered medical home (PCMH) model provides a compelling vision for primary care transformation, but studies of its impact have used insufficiently patient-centered metrics with inconsistent results. We propose a framework for defining patient-centered value and a new model for value-based primary care transformation: the primary care value model (PCVM). We advocate for use of patient-centered value when measuring the impact of primary care transformation, recognition, and performance-based payment; for financial support and research and development to better define primary care value-creating activities and their implementation; and for use of the model to support primary care organizations in transformation. © 2016 Annals of Family Medicine, Inc.

  13. Foraging and predation risk for larval cisco (Coregonus artedi) in Lake Superior: a modelling synthesis of empirical survey data

    USGS Publications Warehouse

    Myers, Jared T.; Yule, Daniel L.; Jones, Michael L.; Quinlan, Henry R.; Berglund, Eric K.

    2014-01-01

    The relative importance of predation and food availability as contributors to larval cisco (Coregonus artedi) mortality in Lake Superior was investigated using a visual foraging model to evaluate potential predation pressure by rainbow smelt (Osmerus mordax) and a bioenergetic model to evaluate potential starvation risk. The models were informed by observations of rainbow smelt, larval cisco, and zooplankton abundance at three Lake Superior locations during the period of spring larval cisco emergence and surface-oriented foraging. Predation risk was highest at Black Bay, ON, where average rainbow smelt densities in the uppermost 10 m of the water column were >1000 ha−1. Turbid conditions at the Twin Ports, WI-MN, affected larval cisco predation risk because rainbow smelt remained suspended in the upper water column during daylight, placing them alongside larval cisco during both day and night hours. Predation risk was low at Cornucopia, WI, owing to low smelt densities (<400 ha−1) and deep light penetration, which kept rainbow smelt near the lakebed and far from larvae during daylight. In situ zooplankton density estimates were low compared to the values used to develop the larval coregonid bioenergetics model, leading to predictions of negative growth rates for 10 mm larvae at all three locations. The model predicted that 15 mm larvae were capable of attaining positive growth at Cornucopia and the Twin Ports where low water temperatures (2–6 °C) decreased their metabolic costs. Larval prey resources were highest at Black Bay but warmer water temperatures there offset the benefit of increased prey availability. A sensitivity analysis performed on the rainbow smelt visual foraging model showed that it was relatively insensitive, while the coregonid bioenergetics model showed that the absolute growth rate predictions were highly sensitive to input parameters (i.e., 20% parameter perturbation led to order of magnitude differences in model estimates). Our

  14. Expanding pedestrian injury risk to the body region level: how to model passive safety systems in pedestrian injury risk functions.

    PubMed

    Niebuhr, Tobias; Junge, Mirko; Achmus, Stefanie

    2015-01-01

    Assessment of the effectiveness of advanced driver assistance systems (ADAS) plays a crucial role in accident research. A common way to evaluate the effectiveness of new systems is to determine their potential for injury severity reduction. Because injury risk functions describe the probability of an injury of a given severity conditional on a technical accident severity (closing speed, delta V, barrier equivalent speed, etc.), they are predestined for such evaluations. Recent work proposed an approach for modeling the pedestrian injury risk in pedestrian-to-passenger car accidents as a family of functions, giving explicit and easily interpretable formulae for the injury risk conditional on the closing speed of the car. These results are extended here to injury risk functions for pedestrian body regions. Starting with a double-checked German In-Depth Accident Study (GIDAS) pedestrian-to-car accident data set (N = 444) and a functional-anatomical definition of the body regions, investigations on the influence of specific body regions on the overall injury severity are presented. As the measure of injury severity, the ISSx, a rescaled version of the well-known Injury Severity Score (ISS), was used. Whereas the traditional ISS is computed by summing the squares of the AIS severities of the 3 most severely injured body regions, the ISSx is computed by summing the exponentials of the Abbreviated Injury Scale (AIS) severities of the 3 most severely injured body regions. The exponentials used are scaled to fit the ISS range of values between 0 and 75. Three body regions (head/face/neck, thorax, hip/legs) clearly dominated abdominal and upper extremity injuries; that is, the latter 2 body regions had no influence at all on the overall injury risk over the range of technical accident severities. Thus, the ISSx is well described by use of the injury codes from the same body regions for any pedestrian injury severity. As a mathematical consequence, the ISSx becomes explicitly
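    The traditional ISS is fully specified; the exact rescaling behind the ISSx is not given in this summary, so `issx` below is a hypothetical reconstruction that simply scales three maximal (AIS 6) injuries to 75. The paper's actual scaling may differ:

```python
import math

def iss(region_ais):
    """Traditional Injury Severity Score: sum of squares of the AIS values of
    the three most severely injured body regions (range 0-75). Any AIS 6
    (unsurvivable) injury sets ISS to 75 by convention."""
    top3 = sorted(region_ais, reverse=True)[:3]
    if top3 and top3[0] == 6:
        return 75
    return sum(a * a for a in top3)

def issx(region_ais, max_ais=6):
    """Hypothetical reconstruction of the ISSx described above: sum of
    exponentials of the three highest AIS values, scaled so that three
    AIS-6 injuries map to 75."""
    top3 = sorted(region_ais, reverse=True)[:3]
    raw = sum(math.exp(a) for a in top3)
    return 75 * raw / (3 * math.exp(max_ais))
```

    The exponential weighting makes a single severe injury dominate the score far more strongly than the quadratic ISS weighting does.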

  15. Applying spatial regression to evaluate risk factors for microbiological contamination of urban groundwater sources in Juba, South Sudan

    NASA Astrophysics Data System (ADS)

    Engström, Emma; Mörtberg, Ulla; Karlström, Anders; Mangold, Mikael

    2017-06-01

    This study developed methodology for statistically assessing groundwater contamination mechanisms. It focused on microbial water pollution in low-income regions. Risk factors for faecal contamination of groundwater-fed drinking-water sources were evaluated in a case study in Juba, South Sudan. The study was based on counts of thermotolerant coliforms in water samples from 129 sources, collected by the humanitarian aid organisation Médecins Sans Frontières in 2010. The factors included hydrogeological settings, land use and socio-economic characteristics. The results showed that the residuals of a conventional probit regression model had a significant positive spatial autocorrelation (Moran's I = 3.05, I-stat = 9.28); therefore, a spatial model was developed that had better goodness-of-fit to the observations. The most significant factor in this model (p-value 0.005) was the distance from a water source to the nearest Tukul area, an area with informal settlements that lack sanitation services. It is thus recommended that future remediation and monitoring efforts in the city be concentrated in such low-income regions. The spatial model differed from the conventional approach: in contrast to the conventional model, lowland topography was not significant at the 5% level, as the p-value was 0.074 in the spatial model and 0.040 in the traditional model. This study showed that statistical risk-factor assessments of groundwater contamination need to consider spatial interactions when the water sources are located close to each other. Future studies might further investigate the cut-off distance that reflects spatial autocorrelation. In particular, these results can inform research on urban groundwater quality.
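    The autocorrelation check mentioned above (Moran's I on the probit residuals) can be sketched as follows. The binary adjacency weight matrix in the test is hypothetical, not the study's distance-based weights:

```python
def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation, e.g. of regression
    residuals. `values`: list of n residuals; `weights`: n x n spatial
    weight matrix (weights[i][j] > 0 when sources i and j are neighbours,
    zero diagonal). I > 0 indicates spatial clustering of like values."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)          # total weight
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)
```

    A significantly positive I on the residuals, as reported above, is the usual signal that an independent-errors model should be replaced by a spatial one.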

  16. Construction of a Risk Assessment Model for Rainfall-Induced Landslides

    NASA Astrophysics Data System (ADS)

    Chen, Yie-Ruey; Tsai, Kuang-Jung; Chen, Jing-Wen; Lin, Wei-Chung

    2013-04-01

    The unstable geology and steep terrain in the mountainous regions of Taiwan make these areas vulnerable to landslides and debris flow during typhoons and heavy rains. According to the Water Resources Agency, Ministry of Economic Affairs of Taiwan, there were 500 typhoons and over one thousand storms in Taiwan between 1897 and 2011, and natural disasters caused 3.5 billion USD of damage between 1983 and 2011. Thus, the construction of a risk assessment model for landslides is essential to disaster prevention. This study employed genetic adaptive neural networks (GANN) with texture analysis in the classification of high-resolution satellite images, from which data related to surface conditions in mountainous areas of Taiwan were derived. Ten landslide hazard potential factors are included: slope, geology, elevation, distance from the fault, distance from water, terrain roughness, slope roughness, effective accumulated rainfall and developing situation. Using correlation tests, GANN, weight analysis and the dangerous value method, landslide levels and probabilities for the research areas are presented. Then, through a geographic information system (GIS), the landslide potential map is plotted to distinguish high-potential regions from low-potential regions. Through field surveys, interviews with district officials and a review of relevant literature, the probability of a sediment disaster was estimated, as well as the vulnerability of the villages concerned and the degree to which these villages were prepared, to construct a risk evaluation model. The regional risk map was plotted with the help of GIS and the landslide assessment model. The risk assessment model can be used by authorities to make provisions for high-risk areas and to reduce the number of casualties and the social costs of sediment disasters.

  17. Ecosystem Risk Assessment Using the Comprehensive Assessment of Risk to Ecosystems (CARE) Tool

    NASA Astrophysics Data System (ADS)

    Battista, W.; Fujita, R.; Karr, K.

    2016-12-01

    Effective Ecosystem Based Management requires a localized understanding of the health and functioning of a given system as well as of the various factors that may threaten the ongoing ability of the system to support the provision of valued services. Several risk assessment models are available that can provide a scientific basis for understanding these factors and for guiding management action, but these models focus mainly on single species and evaluate only the impacts of fishing in detail. We have developed a new ecosystem risk assessment model - the Comprehensive Assessment of Risk to Ecosystems (CARE) - that allows analysts to consider the cumulative impact of multiple threats, interactions among multiple threats that may result in synergistic or antagonistic impacts, and the impacts of a suite of threats on whole-ecosystem productivity and functioning, as well as on specific ecosystem services. The CARE model was designed to be completed in as little as two hours, and uses local and expert knowledge where data are lacking. The CARE tool can be used to evaluate risks facing a single site; to compare multiple sites for the suitability or necessity of different management options; or to evaluate the effects of a proposed management action aimed at reducing one or more risks. This analysis can help users identify which threats are the most important at a given site, and therefore where limited management resources should be targeted. CARE can be applied to virtually any system, and can be modified as knowledge is gained or to better match different site characteristics. CARE builds on previous ecosystem risk assessment tools to provide a comprehensive assessment of fishing and non-fishing threats that can be used to inform environmental management decisions across a broad range of systems.

  19. Evaluating significance in linear mixed-effects models in R.

    PubMed

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
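    Of the significance methods compared above, the likelihood ratio test is the simplest to illustrate. A sketch, assuming both models are fitted by maximum likelihood (not REML) and the tested fixed effect has one degree of freedom; for df = 1 the chi-square tail probability has a closed form via erfc, so no statistics library is needed:

```python
import math

def lrt_pvalue(loglik_full, loglik_reduced):
    """Likelihood ratio test for one fixed effect: fit the model with and
    without the effect, compare 2 * (ll_full - ll_reduced) to chi-square(1).
    For 1 df, P(X > stat) = erfc(sqrt(stat / 2))."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return math.erfc(math.sqrt(max(stat, 0.0) / 2.0))
```

    As the simulations above note, this test tends to be somewhat anti-conservative in small samples, which is why the Kenward-Roger and Satterthwaite approximations are preferred there.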

  20. Can Value Added Add Value to Teacher Evaluation?

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda

    2015-01-01

    The five thoughtful papers included in this issue of "Educational Researcher" ("ER") raise new questions about the use of value-added methods (VAMs) to estimate teachers' contributions to students' learning as part of personnel evaluation. The papers address both technical and implementation concerns, considering potential…

  1. What Did the Teachers Think? Teachers' Responses to the Use of Value-Added Modeling as a Tool for Evaluating Teacher Effectiveness

    ERIC Educational Resources Information Center

    Lee, Linda

    2011-01-01

    The policy discourse on improving student achievement has shifted from student outcomes to evaluating teacher effectiveness using standardized test scores. A major urban newspaper released a public database that ranked teachers' effectiveness using Value-Added Modeling. Teachers, who are generally marginalized, were given the…

  2. A Dual System Model of Preferences under Risk

    ERIC Educational Resources Information Center

    Mukherjee, Kanchan

    2010-01-01

    This article presents a dual system model (DSM) of decision making under risk and uncertainty according to which the value of a gamble is a combination of the values assigned to it independently by the affective and deliberative systems. On the basis of research on dual process theories and empirical research in Hsee and Rottenstreich (2004) and…

  3. Automated identification and predictive tools to help identify high-risk heart failure patients: pilot evaluation.

    PubMed

    Evans, R Scott; Benuzillo, Jose; Horne, Benjamin D; Lloyd, James F; Bradshaw, Alejandra; Budge, Deborah; Rasmusson, Kismet D; Roberts, Colleen; Buckway, Jason; Geer, Norma; Garrett, Teresa; Lappé, Donald L

    2016-09-01

    Develop and evaluate an automated identification and predictive risk report for hospitalized heart failure (HF) patients. Dictated free-text reports from the previous 24 h were analyzed each day with natural language processing (NLP), to help improve the early identification of hospitalized patients with HF. A second application that uses an Intermountain Healthcare-developed predictive score to determine each HF patient's risk for 30-day hospital readmission and 30-day mortality was also developed. That information was included in an identification and predictive risk report, which was evaluated at a 354-bed hospital that treats high-risk HF patients. The addition of NLP-identified HF patients increased the identification score's sensitivity from 82.6% to 95.3% and its specificity from 82.7% to 97.5%, and the model's positive predictive value was 97.45%. Daily multidisciplinary discharge planning meetings are now based on the information provided by the HF identification and predictive report, and clinicians' review of potential HF admissions takes less time compared to the previously used manual methodology (10 vs 40 min). An evaluation of the use of the HF predictive report identified a significant reduction in 30-day mortality and a significant increase in patient discharges to home care instead of to a specialized nursing facility. Using clinical decision support to help identify HF patients and automatically calculating their 30-day all-cause readmission and 30-day mortality risks, coupled with a multidisciplinary care process pathway, was found to be an effective process to improve HF patient identification, significantly reduce 30-day mortality, and significantly increase patient discharges to home care. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. An Extreme-Value Approach to Anomaly Vulnerability Identification

    NASA Technical Reports Server (NTRS)

    Everett, Chris; Maggio, Gaspare; Groen, Frank

    2010-01-01

    The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
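    The PVI idea of sweeping each uncertain parameter over its credible range to find the risk-maximising value can be sketched as a simple grid search. The risk function and grid resolution here are placeholders, not the APA risk models:

```python
def parameter_vulnerability_importance(risk_fn, p_lo, p_hi, n_grid=101):
    """Sketch of the PVI measure: sweep one uncertain risk-model parameter
    over its credible range [p_lo, p_hi] and report where conditional system
    risk peaks. `risk_fn` maps a parameter value to system risk; a high peak
    flags a potential anomaly vulnerability worth prioritising."""
    step = (p_hi - p_lo) / (n_grid - 1)
    grid = [p_lo + i * step for i in range(n_grid)]
    risks = [risk_fn(p) for p in grid]
    worst = max(range(n_grid), key=risks.__getitem__)
    return grid[worst], risks[worst]  # peak-risk parameter value, peak risk
```

    Repeating this sweep per parameter and ranking the peak risks yields the prioritised list of parameters described above.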

  5. Validation of a multifactorial risk factor model used for predicting future caries risk with Nevada adolescents.

    PubMed

    Ditmyer, Marcia M; Dounis, Georgia; Howard, Katherine M; Mobley, Connie; Cappelli, David

    2011-05-20

    The objective of this study was to measure the validity and reliability of a multifactorial Risk Factor Model developed for use in predicting future caries risk in Nevada adolescents in a public health setting. This study examined retrospective data from an oral health surveillance initiative that screened over 51,000 students aged 13-18 years, attending public/private schools in Nevada across six academic years (2002/2003-2007/2008). The Risk Factor Model included ten demographic variables: exposure to fluoridation in the municipal water supply, environmental smoke exposure, race, age, locale (metropolitan vs. rural), tobacco use, Body Mass Index, insurance status, sex, and sealant application. Multiple regression was used in a previous study to establish which variables significantly contributed to caries risk. Follow-up logistic regression ascertained the weight of contribution and odds ratios of the ten variables. Researchers in this study computed sensitivity, specificity, positive predictive value (PVP), negative predictive value (PVN), and prevalence across all six years of screening to assess the validity of the Risk Factor Model. Subjects' overall mean caries prevalence across all six years was 66%. Average sensitivity across all six years was 79%; average specificity was 81%; average PVP was 89% and average PVN was 67%. Overall, the Risk Factor Model provided a relatively constant, valid measure of caries that could be used in conjunction with a comprehensive risk assessment in population-based screenings by school nurses/nurse practitioners, health educators, and physicians to guide them in assessing potential future caries risk for use in prevention and referral practices.
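    The validity measures reported above all derive from a 2x2 confusion matrix of predicted versus observed caries. A small sketch with hypothetical counts (not the Nevada screening data):

```python
def screening_validity(tp, fp, fn, tn):
    """Validity measures of a binary risk model from a 2x2 confusion matrix:
    tp/fp/fn/tn are true-positive, false-positive, false-negative and
    true-negative counts of predicted vs. observed disease."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),            # called PVP in the abstract
        "npv": tn / (tn + fn),            # called PVN in the abstract
        "prevalence": (tp + fn) / total,
    }
```

    Note that PVP and PVN, unlike sensitivity and specificity, shift with prevalence, which is why the study reports prevalence alongside them.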

  6. A Data Model for Teleconsultation in Managing High-Risk Pregnancies: Design and Preliminary Evaluation

    PubMed Central

    Deldar, Kolsoum

    2017-01-01

    Background Teleconsultation is a guarantor for virtual supervision of clinical professors on clinical decisions made by medical residents in teaching hospitals. Type, format, volume, and quality of exchanged information have a great influence on the quality of remote clinical decisions or tele-decisions. Thus, it is necessary to develop a reliable and standard model for these clinical relationships. Objective The goal of this study was to design and evaluate a data model for teleconsultation in the management of high-risk pregnancies. Methods This study was implemented in three phases. In the first phase, a systematic review, a qualitative study, and a Delphi approach were done in selected teaching hospitals. Systematic extraction and localization of diagnostic items to develop the tele-decision clinical archetypes were performed as the second phase. Finally, the developed model was evaluated using predefined consultation scenarios. Results Our review study has shown that present medical consultations have no specific structure or template for patient information exchange. Furthermore, there are many challenges in the remote medical decision-making process, and some of them are related to the lack of the mentioned structure. The evaluation phase of our research has shown that data quality (P<.001), adequacy (P<.001), organization (P<.001), confidence (P<.001), and convenience (P<.001) had more scores in archetype-based consultation scenarios compared with routine-based ones. Conclusions Our archetype-based model could acquire better and higher scores in the data quality, adequacy, organization, confidence, and convenience dimensions than ones with routine scenarios. It is probable that the suggested archetype-based teleconsultation model may improve the quality of physician-physician remote medical consultations. PMID:29242181

  7. A comparative evaluation of risk-adjustment models for benchmarking amputation-free survival after lower extremity bypass.

    PubMed

    Simons, Jessica P; Goodney, Philip P; Flahive, Julie; Hoel, Andrew W; Hallett, John W; Kraiss, Larry W; Schanzer, Andres

    2016-04-01

    Providing patients and payers with publicly reported risk-adjusted quality metrics for the purpose of benchmarking physicians and institutions has become a national priority. Several prediction models have been developed to estimate outcomes after lower extremity revascularization for critical limb ischemia, but the optimal model to use in contemporary practice has not been defined. We sought to identify the highest-performing risk-adjustment model for amputation-free survival (AFS) at 1 year after lower extremity bypass (LEB). We used the national Society for Vascular Surgery Vascular Quality Initiative (VQI) database (2003-2012) to assess the performance of three previously validated risk-adjustment models for AFS. The Bypass versus Angioplasty in Severe Ischaemia of the Leg (BASIL), Finland National Vascular (FINNVASC) registry, and the modified Project of Ex-vivo vein graft Engineering via Transfection III (PREVENT III [mPIII]) risk scores were applied to the VQI cohort. A novel model for 1-year AFS was also derived using the VQI data set and externally validated using the PIII data set. The relative discrimination (Harrell c-index) and calibration (Hosmer-May goodness-of-fit test) of each model were compared. Among 7754 patients in the VQI who underwent LEB for critical limb ischemia, the AFS was 74% at 1 year. Each of the previously published models for AFS demonstrated similar discriminative performance: c-indices for BASIL, FINNVASC, mPIII were 0.66, 0.60, and 0.64, respectively. The novel VQI-derived model had improved discriminative ability with a c-index of 0.71 and appropriate generalizability on external validation with a c-index of 0.68. The model was well calibrated in both the VQI and PIII data sets (goodness of fit P = not significant). Currently available prediction models for AFS after LEB perform modestly when applied to national contemporary VQI data. Moreover, the performance of each model was inferior to that of the novel VQI-derived model
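    Harrell's c-index, used above to compare discrimination, reduces for a binary endpoint to a pairwise concordance count. The sketch below uses that binary simplification (e.g. AFS failure by 1 year) and omits the censoring handling of the full survival-time version:

```python
def c_index(scores, events):
    """Simplified Harrell c-index for a binary endpoint: the fraction of
    (event, non-event) pairs in which the model assigns the higher risk
    score to the event case; tied scores count one half. 0.5 is chance,
    1.0 is perfect discrimination."""
    pairs = concordant = 0.0
    for si, ei in zip(scores, events):
        for sj, ej in zip(scores, events):
            if ei == 1 and ej == 0:       # a comparable pair
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    concordant += 0.5
    return concordant / pairs
```

    On this scale the reported values (0.60-0.71) represent the modest-to-fair discrimination the abstract describes.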

  8. Developing a Model for Identifying Students at Risk of Failure in a First Year Accounting Unit

    ERIC Educational Resources Information Center

    Smith, Malcolm; Therry, Len; Whale, Jacqui

    2012-01-01

    This paper reports on the process involved in attempting to build a predictive model capable of identifying students at risk of failure in a first year accounting unit in an Australian university. Identifying attributes that contribute to students being at risk can lead to the development of appropriate intervention strategies and support…

  9. Performance evaluation of ionospheric time delay forecasting models using GPS observations at a low-latitude station

    NASA Astrophysics Data System (ADS)

    Sivavaraprasad, G.; Venkata Ratnam, D.

    2017-07-01

    Ionospheric delay is one of the major atmospheric effects on the performance of satellite-based radio navigation systems. It limits the accuracy and availability of Global Positioning System (GPS) measurements related to critical societal and safety applications. The temporal and spatial gradients of ionospheric total electron content (TEC) are driven by several a priori unknown geophysical conditions and solar-terrestrial phenomena. Thus, predicting ionospheric delay is challenging, especially over the Indian subcontinent, and an appropriate short/long-term ionospheric delay forecasting model is necessary. Hence, the intent of this paper is to forecast ionospheric delays by considering day-to-day, monthly and seasonal ionospheric TEC variations. GPS-TEC data (January 2013-December 2013) were extracted from a multi-frequency GPS receiver established at K L University, Vaddeswaram, Guntur station (geographic: 16.37°N, 80.37°E; geomagnetic: 7.44°N, 153.75°E), India. An evaluation, in terms of forecasting capabilities, of three ionospheric time delay models - an Auto Regressive Moving Average (ARMA) model, an Auto Regressive Integrated Moving Average (ARIMA) model, and a Holt-Winters model - is presented. The performances of these models are evaluated through error measurement analysis during both geomagnetically quiet and disturbed days. It is found that the ARMA model effectively forecasts the ionospheric delay with an accuracy of 82-94%, about 10% better than the ARIMA and Holt-Winters models. Moreover, the modeled VTEC derived from the International Reference Ionosphere (IRI-2012) model and the new global TEC model, the Neustrelitz TEC Model (NTCM-GL), were compared with forecasted VTEC values of the ARMA, ARIMA and Holt-Winters models during geomagnetically quiet days. The forecast results indicate that the ARMA model would be useful for setting up an early warning system for ionospheric disturbances at low-latitude regions.
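    Of the three forecasting models compared, Holt-Winters is the most compact to sketch. A minimal additive implementation with fixed smoothing parameters and a simple initialisation (the study's actual model fitting and seasonal period for TEC are not reproduced here):

```python
def holt_winters_additive(series, m, alpha, beta, gamma, horizon):
    """Minimal additive Holt-Winters smoother: level + trend + additive
    seasonal component of period m. Requires len(series) >= 2 * m for the
    simple initialisation used here; alpha/beta/gamma in (0, 1) would
    normally be optimised rather than fixed."""
    level = sum(series[:m]) / m
    trend = (sum(series[m:2 * m]) - sum(series[:m])) / (m * m)
    season = [series[i] - level for i in range(m)]
    for t in range(m, len(series)):
        last_level = level
        s = season[t % m]
        level = alpha * (series[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (series[t] - level) + (1 - gamma) * s
    n = len(series)
    return [level + (h + 1) * trend + season[(n + h) % m]
            for h in range(horizon)]
```

    For TEC one would typically use a diurnal or seasonal period; the point of the sketch is only the level/trend/season decomposition the comparison above relies on.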

  10. Flexible modeling improves assessment of prognostic value of C-reactive protein in advanced non-small cell lung cancer.

    PubMed

    Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D

    2010-03-30

    C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazards (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). We tested these two assumptions of the Cox PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). In the Cox PH model, high CRP increased the risk of death (HR=1.11 for each doubling of CRP value, 95% CI: 1.03-1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that the Cox PH model underestimates early risks associated with high CRP.

  11. Assessing the potential of economic instruments for managing drought risk at river basin scale

    NASA Astrophysics Data System (ADS)

    Pulido-Velazquez, M.; Lopez-Nicolas, A.; Macian-Sorribes, H.

    2015-12-01

    Economic instruments work as incentives to adapt individual decisions to collectively agreed goals. Different types of economic instruments have been applied to manage water resources, such as water-related taxes and charges (water pricing, environmental taxes, etc.), subsidies, markets or voluntary agreements. Hydroeconomic models (HEMs) provide useful insight into optimal strategies for coping with droughts by simultaneously analysing the engineering, hydrology and economics of water resources management. We use HEMs to evaluate the potential of economic instruments for managing drought risk at river basin scale, considering three criteria for assessing drought risk: reliability, resilience and vulnerability. HEMs allow the calculation of water scarcity costs as the economic losses due to water deliveries below the target demands, which can be used as a vulnerability descriptor of drought risk. Two generic hydroeconomic DSS tools, SIMGAMS and OPTIGAMS (both programmed in GAMS), have been developed to evaluate water scarcity costs at river basin scale based on simulation and optimization approaches. The simulation tool SIMGAMS allocates water according to the system priorities and operating rules, and evaluates the scarcity costs using economic demand functions. The optimization tool OPTIGAMS allocates water resources to maximize net benefits (minimizing total water scarcity plus the operating cost of water use). SIMGAMS can simulate incentive water pricing policies based on water availability in the system (scarcity pricing), while OPTIGAMS is used to simulate the effect of ideal water markets by economic optimization. These tools have been applied to the Jucar river system (Spain), which is highly regulated and has a high share of water use for crop irrigation (greater than 80%), and where water scarcity, irregular hydrology and groundwater overdraft cause droughts to have significant economic, social and environmental consequences.
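    The simulation logic described for SIMGAMS, priority-based allocation followed by scarcity-cost valuation with an economic demand function, can be illustrated with a toy sketch. The demands, priorities and linear demand curve below are hypothetical, not the Jucar system's data or the actual GAMS code.

```python
def allocate_by_priority(supply, demands):
    """demands: list of (name, target, priority); higher priority is served first."""
    allocations = {}
    remaining = supply
    for name, target, _ in sorted(demands, key=lambda d: -d[2]):
        given = min(target, remaining)
        allocations[name] = given
        remaining -= given
    return allocations

def scarcity_cost(target, delivered, marginal_value):
    """Simplest demand curve: price the deficit at a constant marginal value."""
    deficit = max(0.0, target - delivered)
    return deficit * marginal_value

# Hypothetical demands in hm3 with priorities; urban use served first.
demands = [("irrigation", 80.0, 1), ("urban", 30.0, 2)]
alloc = allocate_by_priority(90.0, demands)
# 0.3 EUR/m3 marginal value -> cost in million EUR for deficits in hm3.
cost = sum(scarcity_cost(t, alloc[n], 0.3) for n, t, _ in demands)
print(alloc, round(cost, 2))
```

    In the real tools the demand functions are nonlinear and the optimization variant chooses allocations to minimize total scarcity plus operating cost, but the mapping from deficit to monetary loss is the same idea.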

  12. Electricity market pricing, risk hedging and modeling

    NASA Astrophysics Data System (ADS)

    Cheng, Xu

    In this dissertation, we investigate the pricing, price risk hedging/arbitrage, and simplified system modeling for a centralized LMP-based electricity market. In an LMP-based market model, the full AC power flow model and the DC power flow model are the most widely used representations of the transmission system. We investigate the differences in dispatch results, congestion patterns, and LMPs between the two power flow models. An appropriate LMP decomposition scheme that quantifies the marginal costs of congestion and real power losses is critical for the implementation of financial risk hedging markets. However, the traditional LMP decomposition depends heavily on the slack bus selection. In this dissertation we propose a slack-independent scheme to break the LMP down into energy, congestion, and marginal loss components by analyzing the actual marginal cost of each bus at the optimal solution point. The physical and economic meanings of the marginal effect at each bus provide accurate price information for both congestion and losses, and thus the slack-dependency of the traditional scheme is eliminated. With electricity priced at the margin instead of at the average value, the market operator typically collects more revenue from power buyers than it pays to power sellers. According to the LMP decomposition results, this revenue surplus is divided into two parts: the congestion charge surplus and the marginal loss revenue surplus. We apply the LMP decomposition results to financial tools, such as the financial transmission right (FTR) and the loss hedging right (LHR), which have been introduced to hedge against price risks associated with congestion and losses, to construct a full price risk hedging portfolio. The two-settlement market structure and the introduction of financial tools inevitably create market manipulation opportunities. We investigate several possible market manipulation behaviors by virtual bidding and propose a market monitoring approach to identify and quantify such

  13. Debris flow risk mapping on medium scale and estimation of prospective economic losses

    NASA Astrophysics Data System (ADS)

    Blahut, Jan; Sterlacchini, Simone

    2010-05-01

    Delimitation of potential zones affected by debris flow hazard, mapping of areas at risk, and estimation of future economic damage provide important information for spatial planners and local administrators in all countries endangered by this type of phenomenon. This study presents a medium-scale (1:25,000-1:50,000) analysis applied in the Consortium of Mountain Municipalities of Valtellina di Tirano (Italian Alps, Lombardy Region). In this area, a debris flow hazard map was coupled with information about the elements at risk to obtain monetary values of prospective damage. Two available hazard maps were obtained from GIS medium-scale modelling. Probability estimations of debris flow occurrence were calculated using existing susceptibility maps and two sets of aerial images. Values were assigned to the elements at risk according to the official information on housing costs and land value from the Territorial Agency of the Lombardy Region. In the first risk map, vulnerability values were assumed to be 1. The second risk map used three classes of vulnerability values, qualitatively estimated according to the possible debris flow propagation. Risk curves summarizing the possible economic losses were calculated. Finally, these maps of economic risk were compared to maps derived from qualitative evaluation of the values of the elements at risk.

  14. Overstating values: medical facts, diverse values, bioethics and values-based medicine.

    PubMed

    Parker, Malcolm

    2013-02-01

    Fulford has argued that (1) the medical concepts illness, disease and dysfunction are inescapably evaluative terms, (2) illness is conceptually prior to disease, and (3) a model conforming to (2) has greater explanatory power and practical utility than the conventional value-free medical model. This 'reverse' model employs Hare's distinction between description and evaluation, and the sliding relationship between descriptive and evaluative meaning. Fulford's derivative 'Values Based Medicine' (VBM) readjusts the imbalance between the predominance of facts over values in medicine. VBM allegedly responds to the increased choices made available by, inter alia, the progress of medical science itself. VBM attributes appropriate status to evaluative meaning, where strong consensus about descriptive meaning is lacking. According to Fulford, quasi-legal bioethics, while it can be retained as a kind of deliberative framework, is outcome-based and pursues 'the right answer', while VBM approximates a democratic, process-oriented method for dealing with diverse values, in partnership with necessary contributions from evidence-based medicine (EBM). I support the non-cognitivist underpinnings of VBM, and its emphasis on the importance of values in medicine. But VBM overstates the complexity and diversity of values, misrepresents EBM and VBM as responses to scientific and evaluative complexity, and mistakenly depicts 'quasi-legal bioethics' as a space of settled descriptive meaning. Bioethical reasoning can expose strategies that attempt to reduce authentic values to scientific facts, illustrating that VBM provides no advantage over bioethics in delineating the connections between facts and values in medicine. © 2011 Blackwell Publishing Ltd.

  15. The necessity of sociodemographic status adjustment in hospital value rankings for perforated appendicitis in children.

    PubMed

    Tian, Yao; Sweeney, John F; Wulkan, Mark L; Heiss, Kurt F; Raval, Mehul V

    2016-06-01

    Hospitals are increasingly focused on demonstration of high-value care for common surgical procedures. Although sociodemographic status (SDS) factors have been tied to various surgical outcomes, the impact of SDS factors on hospital value rankings has not been well explored. Our objective was to examine effects of SDS factors on high-value surgical care at the patient level, and to illustrate the importance of SDS adjustment when evaluating hospital-level performance. Perforated appendicitis hospitalizations were identified from the 2012 Kids' Inpatient Database. The primary outcome of interest was high-value care as defined by evaluation of duration of stay and cost. SDS factors included race, health insurance type, median household income, and patient location. The impact of SDS on high-value care was estimated using regression models after accounting for hospital-level variation. Risk-adjusted value rankings were compared before and after adjustment for SDS. From 9,986 hospitalizations, 998 high-value encounters were identified. African Americans were less likely to experience high-value care compared with white patients after adjusting for all SDS variables. Although private insurance and living in nonmetro counties were associated independently with high-value care, the effects were attenuated in the fully adjusted models. For the 136 hospitals ranked according to risk-adjusted value status, 59 hospitals' rankings improved after adjustment and 53 hospitals' rankings declined. After adjustment for patient and hospital factors, SDS has a small but significant impact on risk-adjusted hospital performance ranking for pediatric appendicitis. Adjustment for SDS should be considered in future comparative performance assessment. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Value-driven ERM: making ERM an engine for simultaneous value creation and value protection.

    PubMed

    Celona, John; Driver, Jeffrey; Hall, Edward

    2011-01-01

    Enterprise risk management (ERM) began as an effort to integrate the historically disparate silos of risk management in organizations. More recently, as recognition has grown of the need to cover the upside risks in value creation (financial and otherwise), organizations and practitioners have been searching for the means to do this. Existing tools such as heat maps and risk registers are not adequate for this task. Instead, a conceptually new value-driven framework is needed to realize the promise of enterprise-wide coverage of all risks, for both value protection and value creation. The methodology of decision analysis provides the means of capturing systemic, correlated, and value-creation risks on the same basis as value protection risks and has been integrated into the value-driven approach to ERM described in this article. Stanford Hospital and Clinics Risk Consulting and Strategic Decisions Group have been working to apply this value-driven ERM at Stanford University Medical Center. © 2011 American Society for Healthcare Risk Management of the American Hospital Association.

  17. Risk assessment of brine contamination to aquatic resources from energy development in glacial drift deposits: Williston Basin, USA

    USGS Publications Warehouse

    Preston, Todd M.; Chesley-Preston, Tara

    2015-01-01

    Our goal was to improve the Sheridan County assessment (SCA) and evaluate the use of the new Williston Basin assessment (WBA) across 31 counties mantled by glacial drift in the Williston Basin. To determine whether the WBA model improved on the SCA model, results from both assessments were compared to CI values from 37 surface and groundwater samples collected to evaluate the SCA. The WBA (R2 = 0.65) outperformed the SCA (R2 = 0.52), indicating improved model performance. Applicability across the Williston Basin was evaluated by comparing WBA results to CI values from 123 surface water samples collected from 97 sections. Based on the WBA, the majority (83.5%) of sections lacked an oil well and had minimal risk. Sections with one or more oil wells comprised low (8.4%), moderate (6.5%), or high (1.7%) risk areas. The percentage of contaminated water samples, the percentage of sections with at least one contaminated sample, and the average CI value of contaminated samples all increased from low to high risk, indicating applicability across the Williston Basin. Furthermore, the WBA performed better when compared against only the contaminated samples (R2 = 0.62) than against all samples (R2 = 0.38). This demonstrates that the WBA was successful at identifying sections, but not individual aquatic resources, with an increased risk of contamination; therefore, WBA results can prioritize future sampling within areas of increased risk.

  18. Prognostic Value of High-Sensitivity Troponin-T to Identify Patients at Risk of Left Ventricular Graft Dysfunction After Heart Transplantation.

    PubMed

    Méndez, A B; Ordonez-Llanos, J; Mirabet, S; Galan, J; Maestre, M L; Brossa, V; Rivilla, M T; López, L; Koller, T; Sionis, A; Roig, E

    2016-11-01

    Primary graft dysfunction after heart transplantation (HTx) has a very high mortality rate, especially if the left ventricle is involved (PGD-LV). Early diagnosis is important to select the appropriate therapy and improve prognosis. The value of a high-sensitivity troponin T (HS-TNT) measurement obtained on patient arrival at the intensive care unit was analyzed in 71 HTx patients. Mild or moderate PGD-LV was defined by hemodynamic compromise with one of the following criteria: left ventricular ejection fraction <40%, right atrial pressure >15 mm Hg, pulmonary capillary wedge pressure >20 mm Hg, cardiac index <2.0 L/min/m², hypotension (mean arterial pressure <70 mm Hg), need for high-dose inotropes (inotrope score >10), or a newly placed intra-aortic balloon pump. The mean recipient age was 54 ± 12 years (73% men), and the mean donor age was 47 ± 11 years. Ischemic time was 200 ± 51 minutes, and coronary bypass time was 122 ± 31 minutes. Nine (13%) HTx patients were diagnosed with PGD-LV post-HTx, 8 with biventricular dysfunction. Four patients died, 2 with PGD-LV (22%) and 2 without PGD (4%). Mean HS-TNT before HTx was 158 ± 565 ng/L, and post-HTx it was 1621 ± 1269 ng/L. The area under the receiver-operator characteristic curve of HS-TNT for detecting patients at risk of PGD-LV was 0.860 (P < .003). A cutoff value of HS-TNT >2000 ng/L had a sensitivity of 75% and a specificity of 87% for identifying patients at risk of PGD-LV. Multivariate analysis identified HS-TNT >2000 ng/L (P < .02) and coronary bypass time (P < .01) as independent predictors of PGD-LV. HS-TNT >2000 ng/L at intensive care admission after HTx and a prolonged coronary bypass time were the most powerful predictors of PGD-LV. HS-TNT may be helpful for early detection of HTx patients at risk of PGD-LV. Copyright © 2016 Elsevier Inc. All rights reserved.
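    The reported sensitivity and specificity of the 2000 ng/L cutoff follow the standard confusion-matrix definitions, which can be sketched as follows. The HS-TNT values and outcomes below are hypothetical, not the study's patient data.

```python
def sens_spec(values, outcomes, cutoff):
    """outcomes: 1 = PGD-LV, 0 = no PGD-LV; test is positive when value > cutoff."""
    tp = sum(1 for v, y in zip(values, outcomes) if y == 1 and v > cutoff)
    fn = sum(1 for v, y in zip(values, outcomes) if y == 1 and v <= cutoff)
    tn = sum(1 for v, y in zip(values, outcomes) if y == 0 and v <= cutoff)
    fp = sum(1 for v, y in zip(values, outcomes) if y == 0 and v > cutoff)
    sensitivity = tp / (tp + fn)   # fraction of PGD-LV cases flagged
    specificity = tn / (tn + fp)   # fraction of non-cases correctly cleared
    return sensitivity, specificity

# Hypothetical HS-TNT values (ng/L) and PGD-LV outcomes.
hs_tnt = [2500, 1800, 3100, 900, 2200, 1500, 700, 4000]
pgd_lv = [1,    0,    1,    0,   0,    0,    0,   1]
se, sp = sens_spec(hs_tnt, pgd_lv, 2000)
print(se, sp)
```

    Sweeping the cutoff over all observed values and plotting sensitivity against 1 − specificity traces the ROC curve whose area (0.860 in the study) summarizes discrimination.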

  19. Valuing the risk reduction of coastal ecosystems in data poor environments: an application in Quintana Roo, Mexico

    NASA Astrophysics Data System (ADS)

    Reguero, B. G.; Toimil, A.; Escudero, M.; Menendez, P.; Losada, I. J.; Beck, M. W.; Secaira, F.

    2016-12-01

    Coastal risks are increasing from both economic growth and climate change. Understanding such risks is critical to assessing adaptation needs and finding cost-effective solutions for coastal sustainability. Interest is growing in the role that nature-based measures can play in adapting to climate change. Here we apply and advance a framework to value the risk reduction potential of coastal ecosystems, with an application in a large-scale domain, the coast of Quintana Roo, México, that is relevant for coastal policy and management but has limited data. We build on simple-to-use open-source tools. We first assess the hazards using stochastic simulation of historical tropical storms and inferring two scenarios of future climate change for the next 20 years, which include the effect of sea level rise and changes in the frequency and intensity of storms. For each storm, we obtain wave and surge fields using parametric models, corrected with pre-computed static wind surge numerical simulations. We then assess losses on capital stock and hotels and calculate total people flooded, after accounting for the effect of coastal ecosystems in reducing coastal hazards. We inferred the location of major barrier reefs and dune systems using available satellite imagery and sections of bathymetry and elevation data. We also digitized the surface of beaches and the location of coastal structures from satellite imagery. In a data-poor environment, where bathymetry data are not available for the whole region, we inferred representative coastal profiles of coral reef and dune sections and validated them at sections with measured data. Because we account for the effect of reefs, dunes and mangroves in coastal profiles every 200 m of shoreline, we are able to estimate the value of such ecosystems by comparing with benchmark simulations in which we take them out of the propagation and flood model. Although limited in accuracy in comparison to more complex modeling, this approach is able to

  20. Three Cultural Models of Teacher Interaction Valued by Mexican Students at a US High School

    ERIC Educational Resources Information Center

    Andrews, Micah

    2016-01-01

    Using students' interviews as data source, this study explores the interactional experiences of several Mexican students at a US high school in the Midwest with their teachers and discusses how three cultural models of teacher interaction valued by the students impact their affiliation, motivation, and engagement with school. Emphasis is given to…

  1. Monte Carlo-based evaluation of S-values in mouse models for positron-emitting radionuclides

    NASA Astrophysics Data System (ADS)

    Xie, Tianwu; Zaidi, Habib

    2013-01-01

    In addition to being a powerful clinical tool, positron emission tomography (PET) is also used in small laboratory animal research to visualize and track certain molecular processes associated with diseases such as cancer, heart disease and neurological disorders in living small animal models of disease. However, the dosimetric characteristics of small animal PET imaging are usually overlooked, though the radiation dose may not be negligible. In this work, we constructed 17 mouse models of different body mass and size based on the realistic four-dimensional MOBY mouse model. Particle (photon, electron and positron) transport using the Monte Carlo method was performed to calculate the absorbed fractions and S-values for eight positron-emitting radionuclides (C-11, N-13, O-15, F-18, Cu-64, Ga-68, Y-86 and I-124). Among these radionuclides, O-15 emits positrons with high energy and frequency and produces the highest self-absorbed S-values in each organ, while Y-86 emits γ-rays with high energy and frequency, which results in the highest cross-absorbed S-values for non-neighbouring organs. Differences between S-values for self-irradiated organs were between 2%/g and 3%/g of difference in body weight for most organs. For organs irradiating other organs outside the splanchnocoele (i.e. brain, testis and bladder), differences between S-values were lower than 1%/g. These results can be used to assess variations in small animal dosimetry as a function of total-body mass. The generated database of S-values for various radionuclides can be used to assess the radiation dose to mice from different radiotracers in small animal PET experiments, thus offering quantitative figures for comparative dosimetry research in small animal models.
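    The S-value itself follows the standard MIRD formalism: the energy emitted per decay, weighted by the fraction absorbed in the target, divided by the target mass. A minimal sketch follows; all yields, absorbed fractions and the organ mass are assumed illustrative numbers, not the paper's Monte Carlo results.

```python
MEV_TO_J = 1.602e-13  # joules per MeV

def s_value(emissions, absorbed_fractions, target_mass_kg):
    """S(target <- source) = sum_i (y_i * E_i * phi_i) / m_target.

    emissions: list of (yield per decay, mean energy in MeV).
    absorbed_fractions: phi_i for each emission, same order.
    Returns absorbed dose per decay in Gy/(Bq*s)."""
    energy = sum(y * e * phi * MEV_TO_J
                 for (y, e), phi in zip(emissions, absorbed_fractions))
    return energy / target_mass_kg

# Hypothetical positron + annihilation-photon spectrum for an F-18-like nuclide.
emissions = [(0.97, 0.25), (1.94, 0.511)]  # (yield, MeV): beta+, 511 keV photons
phi = [0.9, 0.05]                          # assumed self-dose absorbed fractions
s = s_value(emissions, phi, target_mass_kg=1.5e-4)  # ~0.15 g mouse organ
print(s)
```

    The Monte Carlo work in the paper is devoted to computing the absorbed fractions phi_i for every source-target organ pair; once those are tabulated, the S-value is this simple weighted sum.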

  2. Functional Linear Model with Zero-value Coefficient Function at Sub-regions.

    PubMed

    Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin

    2013-01-01

    We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inference for the nonparametrically estimated coefficient function without over-shrinking its values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. With a one-stage procedure, a large number of knots would be needed to precisely identify the zero-coefficient region; however, variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the oracle property: it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies. We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in

  3. Value of Earth Observation for Risk Mitigation

    NASA Astrophysics Data System (ADS)

    Pearlman, F.; Shapiro, C. D.; Grasso, M.; Pearlman, J.; Adkins, J. E.; Pindilli, E.; Geppi, D.

    2017-12-01

    Societal benefits flowing from Earth observation are intuitively obvious, as we use the information to assess natural hazards (such as storm tracks), water resources (such as flooding and droughts in coastal and riverine systems), ecosystem vitality, and other dynamics that affect the health and economic well-being of our population. The most powerful confirmation of these benefits would come from quantifying the impact and showing direct quantitative links in the value chain from data to decisions. However, our ability to identify and quantify those benefits is challenging. The impact of geospatial data on these types of decisions is not well characterized, and assigning a true value to the observations on a broad scale across disciplines has yet to be done in a systematic way. This presentation provides the outcomes of a workshop held in October 2017 as a side event of the GEO Plenary that addressed research on economic methodologies for the quantification of impacts. To achieve practical outputs during the meeting, the workshop focused on the use and value of Earth observations in risk mitigation, including ecosystem impacts, weather events, and other natural and manmade hazards. Case studies of approaches were discussed and will be part of this presentation. The presentation will also include an exchange of lessons learned and a discussion of gaps in the current understanding of the use and value of Earth observation information for risk mitigation.

  4. Risk-based economic decision analysis of remediation options at a PCE-contaminated site.

    PubMed

    Lemming, Gitte; Friis-Hansen, Peter; Bjerg, Poul L

    2010-05-01

    Remediation methods for contaminated sites cover a wide range of technical solutions with different remedial efficiencies and costs. Additionally, they may vary in their secondary impacts on the environment, i.e., the potential impacts generated by the emissions and resource use caused by the remediation activities. Increasing attention is being given to these secondary environmental impacts when evaluating remediation options. This paper presents a methodology for an integrated economic decision analysis which combines assessments of remediation costs, health risk costs and potential environmental costs. The health risk costs are associated with the residual contamination left at the site and its migration to groundwater used for drinking water. A probabilistic exposure model using first- and second-order reliability methods (FORM/SORM) is used to estimate the contaminant concentrations at a downstream groundwater well. Potential environmental impacts on the local, regional and global scales due to the site remediation activities are evaluated using life cycle assessment (LCA). The potential impacts on health and environment are converted to monetary units using a simplified cost model. A case study based upon the developed methodology is presented in which the following remediation scenarios are analyzed and compared: (a) no action, (b) excavation and off-site treatment of soil, (c) soil vapor extraction and (d) thermally enhanced soil vapor extraction by electrical heating of the soil. Ultimately, the developed methodology facilitates societal cost estimations of remediation scenarios which can be used for internal ranking of the analyzed options. Despite the inherent uncertainties of placing a value on health and environmental impacts, the presented methodology is believed to be valuable in supporting decisions on remedial interventions. Copyright 2010 Elsevier Ltd. All rights reserved.

  5. Application of probabilistic risk assessment: Evaluating remedial alternatives at the Portland Harbor Superfund Site, Portland, Oregon, USA.

    PubMed

    Ruffle, Betsy; Henderson, James; Murphy-Hagan, Clare; Kirkwood, Gemma; Wolf, Frederick; Edwards, Deborah A

    2018-01-01

    A probabilistic risk assessment (PRA) was performed to evaluate the range of potential baseline and postremedy health risks to fish consumers at the Portland Harbor Superfund Site (the "Site"). The analysis focused on risks of consuming fish resident to the Site containing polychlorinated biphenyls (PCBs), given that this exposure scenario and contaminant are the primary basis for US Environmental Protection Agency's (USEPA's) selected remedy per the January 2017 Record of Decision (ROD). The PRA used probability distributions fit to the same data sets used in the deterministic baseline human health risk assessment (BHHRA) as well as recent sediment and fish tissue data to evaluate the range and likelihood of current baseline cancer risks and noncancer hazards for anglers. Areas of elevated PCBs in sediment were identified on the basis of a geospatial evaluation of the surface sediment data, and the ranges of risks and hazards associated with pre- and postremedy conditions were calculated. The analysis showed that less active remediation (targeted to areas with the highest concentrations) compared to the remedial alternative selected by USEPA in the ROD can achieve USEPA's interim risk management benchmarks (cancer risk of 10^-4 and noncancer hazard index [HI] of 10) immediately postremediation for the vast majority of subsistence anglers that consume smallmouth bass (SMB) fillet tissue. In addition, the same targeted remedy achieves USEPA's long-term benchmarks (10^-5 and HI of 1) for the majority of recreational anglers. Additional sediment remediation would result in negligible additional risk reduction due to the influence of background. The PRA approach applied here provides a simple but adaptive framework for analysis of risks and remedial options focused on variability in exposures. It can be updated and refined with new data to evaluate and reduce uncertainty, improve understanding of the Site and target populations, and foster informed remedial decision

  6. Melanoma Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Probabilistic Modeling of Settlement Risk at Land Disposal Facilities - 12304

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foye, Kevin C.; Soong, Te-Yang

    2012-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass (caused by inconsistent compaction, void space distribution, debris-soil mix ratio, waste material stiffness, time-dependent primary compression of the fine-grained soil matrix, long-term creep settlement of the soil matrix and the debris, etc.) at most land disposal facilities. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties which control differential settlement. An alternative, probabilistic solution is to use random fields to model the waste and sub-grade properties. The modeling effort informs the design, construction, operation, and maintenance of land disposal facilities. A probabilistic method to establish design criteria for waste placement and compaction is introduced using the model. Random fields are ideally suited to problems of differential settlement modeling of highly heterogeneous foundations, such as waste. Random fields model the seemingly random spatial distribution of a design parameter, such as compressibility. When used for design, the use of these models prompts the need for probabilistic design criteria. It also allows for a statistical approach to waste placement acceptance criteria. An example design evaluation was performed, illustrating the use of the probabilistic differential settlement simulation methodology to assemble a design guidance chart. The purpose of this design evaluation is to enable the designer to select optimal initial combinations of design slopes and quality control acceptance criteria that yield an acceptable proportion of post-settlement slopes meeting some design minimum. For this
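    The probabilistic approach described, random spatial variation in compressibility propagated to a distribution of cover distortions, can be sketched with a toy Monte Carlo. The lognormal strain parameters, geometry and distortion limit below are assumptions for illustration, not the authors' model (which uses correlated random fields rather than independent cells).

```python
import random

def simulate_distortion(n_cells, span, thickness, limit, trials, seed=1):
    """Estimate P(max angular distortion > limit) over a row of waste cells."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        # Settlement of each cell = waste thickness * random strain factor.
        s = [thickness * rng.lognormvariate(mu=-2.5, sigma=0.5)
             for _ in range(n_cells)]
        # Distortion = differential settlement between neighbours / span.
        worst = max(abs(s[i + 1] - s[i]) / span for i in range(n_cells - 1))
        if worst > limit:
            exceed += 1
    return exceed / trials

p = simulate_distortion(n_cells=20, span=10.0, thickness=15.0,
                        limit=0.02, trials=2000)
print(p)
```

    Repeating the calculation over a grid of design slopes and acceptance criteria is what assembles the kind of design guidance chart the abstract describes.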

  8. Regime switching model for financial data: Empirical risk analysis

    NASA Astrophysics Data System (ADS)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to remove the delay between regime switches and their detection. This new model is applied to prices of numerous stocks exchanged on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable distributions, power laws and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power-law model while remaining practical to implement for VaR measurement.
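The "number of violations" criterion used to benchmark VaR models can be illustrated with the standard Kupiec proportion-of-failures (POF) test; this is a generic backtesting sketch, not the authors' evaluation code.

```python
import math

def kupiec_pof(returns, var_forecasts, coverage=0.01):
    """Kupiec proportion-of-failures backtest for a VaR series.

    A violation occurs when the realised return falls below the
    negated VaR forecast. Under correct coverage, the violation
    count N is Binomial(T, coverage) and the LR statistic is
    asymptotically chi-squared with 1 degree of freedom."""
    T = len(returns)
    N = sum(1 for r, v in zip(returns, var_forecasts) if r < -v)
    pi = N / T  # observed violation rate

    def loglik(p):
        # binomial log-likelihood of N violations in T trials
        if p <= 0.0 or p >= 1.0:
            return 0.0 if N == round(T * p) else float("-inf")
        return (T - N) * math.log(1.0 - p) + N * math.log(p)

    lr = -2.0 * (loglik(coverage) - loglik(pi))
    return N, lr
```

A model whose violation rate matches the nominal coverage yields an LR statistic near zero; large values indicate mis-calibrated VaR.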

  9. Solid pulmonary nodule risk assessment and decision analysis: comparison of four prediction models in 285 cases.

    PubMed

    Perandini, Simone; Soardi, Gian Alberto; Motton, Massimiliano; Rossi, Arianna; Signorini, Manuel; Montemezzi, Stefania

    2016-09-01

    The aim of this study was to compare classification results from four major risk prediction models in a wide population of incidentally detected solitary pulmonary nodules (SPNs) selected to match the inclusion criteria of the selected models. A total of 285 solitary pulmonary nodules with a definitive diagnosis were evaluated by means of four major risk assessment models developed from non-screening populations, namely the Mayo, Gurney, PKUPH and BIMC models. Accuracy was evaluated by receiver operating characteristic (ROC) area under the curve (AUC) analysis. Each model's fitness to provide reliable help in decision analysis was primarily assessed by adopting a surgical threshold of 65 % and an observation threshold of 5 % as suggested by ACCP guidelines. ROC AUC values, false positives, false negatives and indeterminate nodules were respectively 0.775, 3, 8, 227 (Mayo); 0.794, 41, 6, 125 (Gurney); 0.889, 42, 0, 144 (PKUPH); 0.898, 16, 0, 118 (BIMC). The resulting data suggest that the BIMC model may be of greater help than the Mayo, Gurney and PKUPH models in preoperative SPN characterization when using ACCP risk thresholds, because of its overall better accuracy and smaller numbers of indeterminate nodules and false positive results. • The BIMC and PKUPH models offer better characterization than older prediction models • Both the PKUPH and BIMC models completely avoided false negative results • The Mayo model suffers from a large number of indeterminate results.
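The two evaluation devices used here, ROC AUC and the ACCP two-threshold triage, are both simple to state in code. This is a generic sketch of the standard definitions, not the study's own analysis pipeline.

```python
def auc_mann_whitney(malignant_scores, benign_scores):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen malignant nodule scores above a benign one
    (ties count one half)."""
    pairs = len(malignant_scores) * len(benign_scores)
    wins = sum(
        1.0 if m > b else 0.5 if m == b else 0.0
        for m in malignant_scores
        for b in benign_scores
    )
    return wins / pairs

def accp_triage(malignancy_prob, observe_max=0.05, surgery_min=0.65):
    """Three-way decision with the ACCP-style thresholds used in the
    study: below 5% observe, at or above 65% surgery, otherwise the
    nodule is indeterminate (needing further work-up)."""
    if malignancy_prob < observe_max:
        return "observe"
    if malignancy_prob >= surgery_min:
        return "surgery"
    return "indeterminate"
```

Counting the "indeterminate" outcomes over a cohort is exactly how the 227/125/144/118 figures above arise for the four models.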

  10. Evaluation and economic value of winter weather forecasts

    NASA Astrophysics Data System (ADS)

    Snyder, Derrick W.

    State and local highway agencies spend millions of dollars each year to deploy winter operation teams to plow snow and de-ice roadways. Accurate and timely weather forecast information is critical for effective decision making. Students from Purdue University partnered with the Indiana Department of Transportation to create an experimental winter weather forecast service for the 2012-2013 winter season in Indiana to assist in achieving these goals. One forecast product, an hourly timeline of winter weather hazards produced daily, was evaluated for quality and economic value. Verification of the forecasts was performed with data from the Rapid Refresh numerical weather model. Two objective verification criteria were developed to evaluate the performance of the timeline forecasts. Using both criteria, the timeline forecasts had issues with reliability and discrimination, systematically over-forecasting the amount of winter weather that was observed while also missing significant winter weather events. Despite these quality issues, the forecasts still showed significant, but varied, economic value compared to climatology. The economic value of the forecasts was estimated to be $29.5 million or $4.1 million, depending on the verification criteria used. Limitations of this valuation system are discussed and a framework is developed for more thorough studies in the future.

  11. Evaluation of the 29-km Eta Model. Part 1; Objective Verification at Three Selected Stations

    NASA Technical Reports Server (NTRS)

    Nutter, Paul A.; Manobianco, John; Merceret, Francis J. (Technical Monitor)

    1998-01-01

    This paper describes an objective verification of the National Centers for Environmental Prediction (NCEP) 29-km eta model from May 1996 through January 1998. The evaluation was designed to assess the model's surface and upper-air point forecast accuracy at three selected locations during separate warm (May - August) and cool (October - January) season periods. In order to enhance sample sizes available for statistical calculations, the objective verification includes two consecutive warm and cool season periods. Systematic model deficiencies comprise the larger portion of the total error in most of the surface forecast variables that were evaluated. The error characteristics for both surface and upper-air forecasts vary widely by parameter, season, and station location. At upper levels, a few characteristic biases are identified. Overall however, the upper-level errors are more nonsystematic in nature and could be explained partly by observational measurement uncertainty. With a few exceptions, the upper-air results also indicate that 24-h model error growth is not statistically significant. In February and August 1997, NCEP implemented upgrades to the eta model's physical parameterizations that were designed to change some of the model's error characteristics near the surface. The results shown in this paper indicate that these upgrades led to identifiable and statistically significant changes in forecast accuracy for selected surface parameters. While some of the changes were expected, others were not consistent with the intent of the model updates and further emphasize the need for ongoing sensitivity studies and localized statistical verification efforts. Objective verification of point forecasts is a stringent measure of model performance, but when used alone, is not enough to quantify the overall value that model guidance may add to the forecast process. Therefore, results from a subjective verification of the meso-eta model over the Florida peninsula are

  12. Teacher Evaluation and Collective Bargaining: Resolving Policy at a Local Level

    ERIC Educational Resources Information Center

    Paige, Mark

    2013-01-01

    This case study analyzes controversial teacher evaluation policies in the context of collective bargaining. Dr. Jill Abrams, a new superintendent in a struggling school district, is at the center of the case. Her school board demands a form of teacher evaluation she finds problematic because it includes value-added modeling. Moreover, the board…

  13. Value Reappraisal as a Conceptual Model for Task-Value Interventions

    ERIC Educational Resources Information Center

    Acee, Taylor W.; Weinstein, Claire Ellen; Hoang, Theresa V.; Flaggs, Darolyn A.

    2018-01-01

    We discuss task-value interventions as one type of relevance intervention and propose a process model of value reappraisal whereby task-value interventions elicit cognitive-affective responses that lead to attitude change and in turn affect academic outcomes. The model incorporates a metacognitive component showing that students can intentionally…

  14. Evaluating facts and facting evaluations: On the fact-value relationship in HTA.

    PubMed

    Hofmann, Bjørn; Bond, Ken; Sandman, Lars

    2018-04-03

    Health technology assessment (HTA) is an evaluation of health technologies in terms of facts and evidence. However, the relationship between facts and values is still not clear in HTA. This is problematic in an era of "fake facts" and "truth production." Accordingly, the objective of this study is to clarify the relationship between facts and values in HTA. We start with the perspectives of the traditional positivist account of "evaluating facts" and the social-constructivist account of "facting values." Our analysis reveals diverse relationships between facts and a spectrum of values, ranging from basic human values, to the values of health professionals, and values of and in HTA, as well as for decision making. We argue for sensitivity to the relationship between facts and values on all levels of HTA, for being open and transparent about the values guiding the production of facts, and for a primacy for the values close to the principal goals of health care, ie, relieving suffering. We maintain that philosophy (in particular ethics) may have an important role in addressing the relationship between facts and values in HTA. Philosophy may help us to avoid fallacies of inferring values from facts; to disentangle the normative assumptions in the production or presentation of facts and to tease out implicit value judgements in HTA; to analyse evaluative argumentation relating to facts about technologies; to address conceptual issues of normative importance; and to promote reflection on HTA's own value system. In this we argue for a(n Aristotelian) middle way between the traditional positivist account of "evaluating facts" and the social-constructivist account of "facting values," which we call "factuation." We conclude that HTA is unique in bringing together facts and values and that being conscious and explicit about this "factuation" is key to making HTA valuable to both individual decision makers and society as a whole. © 2018 The Authors Journal of Evaluation in

  15. Fitness components and ecological risk of transgenic release: a model using Japanese medaka (Oryzias latipes).

    PubMed

    Muir, W M; Howard, R D

    2001-07-01

    Any release of transgenic organisms into nature is a concern because the ecological relationships between genetically engineered organisms and other organisms (including their wild-type conspecifics) are unknown. To address this concern, we developed a method to evaluate risk in which we input estimates of fitness parameters from a founder population into a recurrence model to predict changes in transgene frequency after a simulated transgenic release. With this method, we grouped various aspects of an organism's life cycle into six net fitness components: juvenile viability, adult viability, age at sexual maturity, female fecundity, male fertility, and mating advantage. We estimated these components for wild-type and transgenic individuals using the fish Japanese medaka (Oryzias latipes). We generalized our model's predictions using various combinations of fitness component values in addition to our experimentally derived estimates. Our model predicted that, for a wide range of parameter values, transgenes could spread in populations despite high juvenile viability costs if the transgenes also have sufficiently high positive effects on other fitness components. Sensitivity analyses indicated that transgene effects on age at sexual maturity should have the greatest impact on transgene frequency, followed by juvenile viability, mating advantage, female fecundity, and male fertility, with changes in adult viability having the least impact.
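The core of such a recurrence model is the textbook single-locus selection equation for allele frequency under random mating. The sketch below is that standard recurrence, with the paper's six measured fitness components assumed to be folded into net genotype fitnesses; it is not the authors' full model.

```python
def next_transgene_freq(p, w_tt, w_tw, w_ww):
    """One generation of the standard single-locus selection recurrence.

    p: current transgene frequency; w_tt, w_tw, w_ww: net fitnesses of
    transgenic homozygotes, heterozygotes and wild-type homozygotes.
    Net fitnesses here stand in for the six components (viability,
    fecundity, fertility, mating advantage, ...) of the paper."""
    q = 1.0 - p
    # mean population fitness under Hardy-Weinberg genotype frequencies
    w_bar = p * p * w_tt + 2.0 * p * q * w_tw + q * q * w_ww
    # frequency of the transgene among gametes of the next generation
    return p * (p * w_tt + q * w_tw) / w_bar
```

Iterating this map shows the paper's qualitative point: if the transgene's net fitness exceeds the wild type's (e.g. a mating advantage outweighing a viability cost), its frequency rises generation after generation.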

  16. A game theory approach for assessing risk value and deploying search-and-rescue resources after devastating tsunamis.

    PubMed

    Wu, Cheng-Kuang

    2018-04-01

    The current early-warning system and tsunami protection measures tend to fall short because they always underestimate the level of destruction, and it is difficult to predict the level of damage by a devastating tsunami on uncertain targets. As we know, the key to minimizing the total number of fatalities after a disaster is the deployment of search and rescue efforts in the first few hours. However, the resources available to the affected districts for emergency response may be limited. This study proposes two game theoretic models that are designed for search-and-rescue resource allocation. First, the interactions between a compounded disaster and a response agent in the affected district are modelled as a non-cooperative game, after which the risk value is derived for each district from the Nash equilibrium. The risk value represents the threat, vulnerability, and consequence of a specific disaster for the affected district. Second, the risk values for fifteen districts are collected for calculation of each district's Shapley value. Then an acceptable plan for resource deployment among all districts is made based on their expected marginal contribution. The model is verified in a simulation based upon 2011 tsunami data. The experimental results show the proposed approach to be more efficient than a proportional division of rescue resources for dealing with a compounded disaster, and feasible as a method for planning the mobilization of resources and improving relief efforts against devastating tsunamis. Copyright © 2017 Elsevier Inc. All rights reserved.
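The Shapley value used in the second stage is the average of a district's marginal contribution over all join orders. The sketch below computes it exactly by enumerating permutations, which is fine for a toy example (the full 15-district case would need the usual subset-based formula or sampling); the district names and game are illustrative, not the paper's data.

```python
import math
from itertools import permutations

def shapley_values(players, coalition_value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings of arrival.

    coalition_value: function mapping a list of players to the
    coalition's worth. Brute-force enumeration; only for small n."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = []
        prev = coalition_value(coalition)
        for p in order:
            coalition.append(p)
            cur = coalition_value(coalition)
            phi[p] += cur - prev  # marginal contribution in this order
            prev = cur
    n_fact = math.factorial(len(players))
    return {p: v / n_fact for p, v in phi.items()}
```

Deploying rescue resources in proportion to Shapley values then honors each district's expected marginal contribution, which is the allocation principle the paper advocates.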

  17. A moni-modelling approach to manage groundwater risk to pesticide leaching at regional scale.

    PubMed

    Di Guardo, Andrea; Finizio, Antonio

    2016-03-01

    Historically, the approach used to manage the risk of chemical contamination of water bodies has been based on monitoring programmes, which provide a snapshot of the presence/absence of chemicals in water bodies. Monitoring is required in the current EU regulations, such as the Water Framework Directive (WFD), as a tool to record temporal variation in the chemical status of water bodies. More recently, a number of models have been developed and used to forecast chemical contamination of water bodies. These models combine information on chemical properties, their use, and environmental scenarios. Both approaches are useful for risk assessors in decision processes. However, in our opinion, both show flaws and strengths when taken alone. This paper proposes an integrated approach (the moni-modelling approach) where monitoring data and modelling simulations work together to provide a common decision framework for the risk assessor. This approach would be very useful, particularly for the risk management of pesticides at a territorial level, and it fulfils the requirements of the recent Sustainable Use of Pesticides Directive. In fact, the moni-modelling approach could be used to identify sensitive areas in which to implement mitigation measures or limitations on pesticide use, and even to effectively re-design future monitoring networks or to better calibrate the pedo-climatic input data for the environmental fate models. A case study is presented, where the moni-modelling approach is applied in the Lombardy region (North of Italy) to identify groundwater areas vulnerable to pesticides. The approach has been applied to six active substances with different leaching behaviour, in order to highlight the advantages of using the proposed methodology. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. A model for the evaluation of systemic risk in stock markets

    NASA Astrophysics Data System (ADS)

    Caetano, Marco Antonio Leonel; Yoneyama, Takashi

    2011-06-01

    Systemic risk refers to the possibility of a collapse of an entire financial system or market, differing from the risk associated with any particular individual or group pertaining to the system, which may include banks, government, brokers, and creditors. After the 2008 financial crisis, a significant amount of effort has been directed to the study of systemic risk and its consequences around the world. Although it is very difficult to predict when people begin to lose confidence in a financial system, it is possible to model the relationships among the stock markets of different countries and perform a Monte Carlo-type analysis to study the contagion effect. Because some larger and stronger markets influence smaller ones, a model inspired by a catalytic chemical model is proposed. In chemical reactions, reagents with higher concentrations tend to favor their conversion to products; in order to modulate the conversion process, catalyzers may be used. In this work, a mathematical model is proposed based on the catalytic chemical reaction model. More specifically, the Hang Seng and Dow Jones indices are assumed to dominate Ibovespa (the Brazilian stock market index), such that the indices of strong markets are taken as being analogous to the concentrations of the reagents and the indices of smaller markets as the concentrations of products. The role of the catalyst is to model the degree of influence of one index on another. The actual data used to fit the model parameters consisted of the Hang Seng index, Dow Jones index, and Ibovespa since 1993. “What if” analyses were carried out considering some intervention policies.

  19. Age at menopause: imputing age at menopause for women with a hysterectomy with application to risk of postmenopausal breast cancer

    PubMed Central

    Rosner, Bernard; Colditz, Graham A.

    2011-01-01

    Purpose Age at menopause, a major marker in the reproductive life, may bias results for evaluation of breast cancer risk after menopause. Methods We follow 38,948 premenopausal women in 1980 and identify 2,586 who reported hysterectomy without bilateral oophorectomy, and 31,626 who reported natural menopause during 22 years of follow-up. We evaluate risk factors for natural menopause, impute age at natural menopause for women reporting hysterectomy without bilateral oophorectomy and estimate the hazard of reaching natural menopause in the next 2 years. We apply this imputed age at menopause to both increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased sample size substantially and although some risk factors after menopause were weaker in the expanded model (height, and alcohol use), use of hormone therapy is less biased. Conclusions Imputing age at menopause increases sample size, broadens generalizability making it applicable to women with hysterectomy, and reduces bias. PMID:21441037

  20. A near real time scenario at regional scale for the hydrogeological risk

    NASA Astrophysics Data System (ADS)

    Ponziani, F.; Stelluti, M.; Zauri, R.; Berni, N.; Brocca, L.; Moramarco, T.; Salciarini, D.; Tamagnini, C.

    2012-04-01

    The early warning systems dedicated to landslides and floods represent the Umbria Region Civil Protection Service's new generation of tools for hydraulic and hydrogeological risk reduction. Following past analyses performed by the Functional Centre (the part of the civil protection service dedicated to the monitoring and evaluation of natural hazards) on the relationship between saturated soil conditions and rainfall thresholds, we have developed an automated early warning system for landslide risk, called LANDWARN, which generates daily and 72-h forecast risk matrices on a dense 100 x 100 m mesh throughout the region. The system is based on: (a) the 20-day observed and 72-h predicted rainfall, provided by the local meteorological network and the local-scale meteorological model COSMO ME; (b) the assessment of soil saturation by daily extraction of ASCAT satellite data, data from a network of 16 TDR sensors, and a water balance model (developed by the Research Institute for Geo-Hydrological Protection, CNR, Perugia, Italy) that allows for the prediction of a saturation index for each point of the analysis grid up to a window of 72 h; (c) a Web-GIS platform that combines the data grids of calculated hazard indicators with layers of landslide susceptibility and vulnerability of the territory, in order to produce dynamic risk scenarios. The system is still under development and is implemented at different scales: the entire region, and a set of known high-risk landslides in Umbria. The system is monitored and regularly reviewed through back analysis of landslide reports for which the activation date is available. Up to now, the development of the system involves: a) the improvement of the reliability assessment of soil saturation conditions, a key parameter which is used to dynamically adjust the values of the rainfall thresholds used for the declaration of levels of landslide hazard. For this purpose, a procedure was created for the ASCAT

  1. Calculating excess lifetime risk in relative risk models.

    PubMed Central

    Vaeth, M; Pierce, D A

    1990-01-01

    When assessing the impact of radiation exposure it is common practice to present the final conclusions in terms of excess lifetime cancer risk in a population exposed to a given dose. The present investigation is mainly a methodological study focusing on some of the major issues and uncertainties involved in calculating such excess lifetime risks and related risk projection methods. The age-constant relative risk model used in the recent analyses of the cancer mortality that was observed in the follow-up of the cohort of A-bomb survivors in Hiroshima and Nagasaki is used to describe the effect of the exposure on the cancer mortality. In this type of model the excess relative risk is constant in age-at-risk, but depends on the age-at-exposure. Calculation of excess lifetime risks usually requires rather complicated life-table computations. In this paper we propose a simple approximation to the excess lifetime risk; the validity of the approximation for low levels of exposure is justified empirically as well as theoretically. This approximation provides important guidance in understanding the influence of the various factors involved in risk projections. Among the further topics considered are the influence of a latent period, the additional problems involved in calculations of site-specific excess lifetime cancer risks, the consequences of a leveling off or a plateau in the excess relative risk, and the uncertainties involved in transferring results from one population to another. The main part of this study relates to the situation with a single, instantaneous exposure, but a brief discussion is also given of the problem with a continuous exposure at a low-dose rate. PMID:2269245
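The life-table computation behind excess lifetime risk can be sketched with a discrete survival recursion under the age-constant relative-risk model. The hazards below are illustrative toy numbers; the function is a generic sketch, not the authors' computation.

```python
def lifetime_cancer_risk(cancer_hazard, other_hazard, err=0.0):
    """Discrete life-table computation of lifetime cancer risk.

    cancer_hazard[a], other_hazard[a]: annual cancer and competing
    mortality hazards at age a. err: a constant excess relative risk
    multiplying the cancer hazard (the age-constant relative-risk
    model, ERR fixed in age-at-risk). Illustrative sketch only."""
    alive, risk = 1.0, 0.0
    for h_cancer, h_other in zip(cancer_hazard, other_hazard):
        h = h_cancer * (1.0 + err)       # exposed cancer hazard
        risk += alive * h                # die of cancer this year
        alive *= max(0.0, 1.0 - h - h_other)  # survive to next year
    return risk
```

The excess lifetime risk is then `lifetime_cancer_risk(..., err) - lifetime_cancer_risk(..., 0.0)`; for small ERR this difference scales roughly with ERR times the baseline lifetime risk, which is the kind of simple approximation the paper justifies.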

  2. Calculating excess lifetime risk in relative risk models.

    PubMed

    Vaeth, M; Pierce, D A

    1990-07-01

    When assessing the impact of radiation exposure it is common practice to present the final conclusions in terms of excess lifetime cancer risk in a population exposed to a given dose. The present investigation is mainly a methodological study focusing on some of the major issues and uncertainties involved in calculating such excess lifetime risks and related risk projection methods. The age-constant relative risk model used in the recent analyses of the cancer mortality that was observed in the follow-up of the cohort of A-bomb survivors in Hiroshima and Nagasaki is used to describe the effect of the exposure on the cancer mortality. In this type of model the excess relative risk is constant in age-at-risk, but depends on the age-at-exposure. Calculation of excess lifetime risks usually requires rather complicated life-table computations. In this paper we propose a simple approximation to the excess lifetime risk; the validity of the approximation for low levels of exposure is justified empirically as well as theoretically. This approximation provides important guidance in understanding the influence of the various factors involved in risk projections. Among the further topics considered are the influence of a latent period, the additional problems involved in calculations of site-specific excess lifetime cancer risks, the consequences of a leveling off or a plateau in the excess relative risk, and the uncertainties involved in transferring results from one population to another. The main part of this study relates to the situation with a single, instantaneous exposure, but a brief discussion is also given of the problem with a continuous exposure at a low-dose rate.

  3. Guinea pig model for evaluating the potential public health risk of swine and avian influenza viruses.

    PubMed

    Sun, Yipeng; Bi, Yuhai; Pu, Juan; Hu, Yanxin; Wang, Jingjing; Gao, Huijie; Liu, Linqing; Xu, Qi; Tan, Yuanyuan; Liu, Mengda; Guo, Xin; Yang, Hanchun; Liu, Jinhua

    2010-11-23

    The influenza viruses circulating in animals sporadically transmit to humans and pose pandemic threats. Animal models to evaluate the potential public health risk of these viruses are needed. We investigated the guinea pig as a mammalian model for the study of the replication and transmission characteristics of selected swine H1N1, H1N2, H3N2 and avian H9N2 influenza viruses, compared to those of pandemic (H1N1) 2009 and seasonal human H1N1 and H3N2 influenza viruses. The swine and avian influenza viruses investigated were restricted to the respiratory system of guinea pigs and shed at high titers in the nasal tract without prior adaptation, similar to human strains. None of the swine and avian influenza viruses showed transmissibility among guinea pigs; in contrast, the pandemic (H1N1) 2009 virus transmitted from infected guinea pigs to all animals, and seasonal human influenza viruses could also transmit horizontally in guinea pigs. Analysis of the receptor distribution in guinea pig respiratory tissues by lectin histochemistry indicated that both SAα2,3-Gal and SAα2,6-Gal receptors are widely present in the nasal tract and the trachea, while the SAα2,3-Gal receptor is the main receptor in the lung. We propose that the guinea pig could serve as a useful mammalian model to evaluate the potential public health threat of swine and avian influenza viruses.

  4. Which risk models perform best in selecting ever-smokers for lung cancer screening?

    Cancer.gov

    A new analysis by scientists at NCI evaluates nine different individualized lung cancer risk prediction models based on their selections of ever-smokers for computed tomography (CT) lung cancer screening.

  5. Application of Hellison's Teaching Personal and Social Responsibility Model in physical education to improve self-efficacy for adolescents at risk of dropping-out of school.

    PubMed

    Escartí, Amparo; Gutiérrez, Melchor; Pascual, Carmina; Marín, Diana

    2010-11-01

    This study evaluated improvement in self-efficacy and personal and social responsibility among adolescents at risk of dropping out of school who participated in a program in which Hellison's Teaching Personal and Social Responsibility Model was applied in physical education classes during the course of an academic year. Thirty at-risk adolescents aged 13-14 years (23 boys, 7 girls) were assigned to an intervention group (12 boys and 3 girls) or a comparison group (11 boys, 4 girls), the latter of which did not participate in the program. Quantitative results showed a significant improvement in the students' self-efficacy for enlisting social resources and in self-efficacy for self-regulated learning. Qualitative results showed an improvement in responsibility behaviors of participants in the intervention group. This suggests that the model could be effective for improving psychological and social development in at-risk adolescents, and that physical education classes may be an appropriate arena for working with these young people.

  6. Multi Criteria Evaluation Module for RiskChanges Spatial Decision Support System

    NASA Astrophysics Data System (ADS)

    Olyazadeh, Roya; Jaboyedoff, Michel; van Westen, Cees; Bakker, Wim

    2015-04-01

    The Multi-Criteria Evaluation (MCE) module is one of the five modules of the RiskChanges spatial decision support system. The RiskChanges web-based platform aims to analyze changes in hydro-meteorological risk and provides tools for selecting the best risk reduction alternative. It was developed under the CHANGES framework (changes-itn.eu) and the INCREO project (increo-fp7.eu). The MCE tool helps decision makers and spatial planners to evaluate, sort and rank decision alternatives. Users can choose among different indicators that are defined within the system using risk and cost-benefit analysis results, and they can also add their own indicators. Subsequently the system standardizes and prioritizes them. Finally, the best decision alternative is selected using the weighted sum model (WSM). This work applies MCE to analyze changing risk over time under different scenarios and future years, putting group decision making into practice and comparing the results in numeric and graphical views within the system. We believe that this study helps decision makers to achieve the best solution by expressing their preferences for strategies under future scenarios. Keywords: Multi-Criteria Evaluation, Spatial Decision Support System, Weighted Sum Model, Natural Hazard Risk Management
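The weighted sum model at the end of the pipeline is a one-liner once indicators are standardized. The sketch below shows the generic WSM ranking step; alternative names, criteria and weights are made-up illustrations, not RiskChanges data.

```python
def weighted_sum_rank(alternatives, weights):
    """Rank risk-reduction alternatives with the weighted sum model.

    alternatives: {name: {criterion: score}} with scores already
    standardized to a common 0-1 scale; weights: {criterion: weight}.
    Returns (name, total) pairs, best first."""
    totals = {
        name: sum(weights[c] * s for c, s in scores.items())
        for name, scores in alternatives.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

In a group-decision setting, each stakeholder supplies a weight vector and the rankings can then be compared or aggregated across participants.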

  7. Value of screening endoscopy in evaluation of esophageal, gastric and colon cancers

    PubMed Central

    Ro, Tae H; Mathew, Michelle A; Misra, Subhasis

    2015-01-01

    Esophageal, gastric, and colorectal cancers are deadly diseases that continue to plague our world today. The value of screening endoscopy in evaluating these types of cancers is a critical area of discussion due to a potential reduction in morbidity and mortality. This article describes how to identify a good screening test and explains what are important criteria in the field of screening endoscopy. Furthermore, the current status and progress of screening endoscopy for esophageal, gastric, and colorectal cancer will be evaluated and discussed. Mass screening programs have not been implemented for esophageal and gastric carcinomas in those with average or low risk populations. However, studies of high-risk populations have found value and a cost-benefit in conducting screening endoscopy. Colorectal cancer, on the other hand, has had mass screening programs in place for many years due to the clear evidence of improved outcomes. As the role of endoscopy as a screening tool has continued to develop, newer technology and techniques have emerged to improve its utility. Many new image enhancement techniques and computer processing programs have shown promise and may have a significant role in the future of endoscopic screening. These developments are paving the way for improving the diagnostic and therapeutic capability of endoscopy in the field of gastroenterology. PMID:26361416

  8. Value of screening endoscopy in evaluation of esophageal, gastric and colon cancers.

    PubMed

    Ro, Tae H; Mathew, Michelle A; Misra, Subhasis

    2015-09-07

    Esophageal, gastric, and colorectal cancers are deadly diseases that continue to plague our world today. The value of screening endoscopy in evaluating these types of cancers is a critical area of discussion due to a potential reduction in morbidity and mortality. This article describes how to identify a good screening test and explains what are important criteria in the field of screening endoscopy. Furthermore, the current status and progress of screening endoscopy for esophageal, gastric, and colorectal cancer will be evaluated and discussed. Mass screening programs have not been implemented for esophageal and gastric carcinomas in those with average or low risk populations. However, studies of high-risk populations have found value and a cost-benefit in conducting screening endoscopy. Colorectal cancer, on the other hand, has had mass screening programs in place for many years due to the clear evidence of improved outcomes. As the role of endoscopy as a screening tool has continued to develop, newer technology and techniques have emerged to improve its utility. Many new image enhancement techniques and computer processing programs have shown promise and may have a significant role in the future of endoscopic screening. These developments are paving the way for improving the diagnostic and therapeutic capability of endoscopy in the field of gastroenterology.

  9. [Pollution Evaluation and Risk Assessment of Heavy Metals from Atmospheric Deposition in the Parks of Nanjing].

    PubMed

    Wang, Cheng; Qian, Xin; Li, Hui-ming; Sun, Yi-xuan; Wang, Jin-hua

    2016-05-15

    Contents of heavy metals, including As, Cd, Cr, Cu, Ni, Pb and Zn, from atmospheric deposition in 10 parks of Nanjing were analyzed. The pollution level, ecological risk and health risk were evaluated using the Geoaccumulation Index, the Potential Ecological Risk Index and the US EPA Health Risk Assessment Model, respectively. The results showed that the pollution levels of heavy metals in Swallow Rock Park and Mochou Lake Park were higher than in the other parks. Compared with other cities such as Changchun, Wuhan and Beijing, the contents of heavy metals in atmospheric deposition in the parks of Nanjing were higher. The Geoaccumulation Index showed that Pb was at a moderate pollution level, Zn and Cu were between moderate and serious levels, and Cd was between serious and extreme levels. The ecological risk level of Cd was high. The Health Risk Assessment Model indicated that none of the seven heavy metals posed a non-carcinogenic risk. For carcinogenic risk, Cd, Cr and Ni were all negligible (Risk < 1 × 10⁻⁶), whereas As posed a possible but acceptable carcinogenic risk (10⁻⁶ < Risk < 10⁻⁴).
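
    The Geoaccumulation Index and the carcinogenic-risk bands used in this record follow standard definitions (Igeo = log₂(Cₙ / 1.5·Bₙ); US EPA bands at 10⁻⁶ and 10⁻⁴). A minimal sketch; the Cd concentration and background value below are hypothetical, not the study's data:

```python
import math

def igeo(measured, background):
    """Geoaccumulation Index: Igeo = log2(C_n / (1.5 * B_n));
    the 1.5 factor buffers natural variation in background values."""
    return math.log2(measured / (1.5 * background))

def igeo_class(value):
    """Map Igeo to Muller's classes 0 (unpolluted) through 6 (extreme)."""
    for cls, bound in enumerate([0, 1, 2, 3, 4, 5]):
        if value <= bound:
            return cls
    return 6

def carcinogenic_band(risk):
    """Classify lifetime carcinogenic risk against the usual EPA bands."""
    if risk < 1e-6:
        return "negligible"
    if risk < 1e-4:
        return "acceptable"
    return "unacceptable"

# Hypothetical Cd content (mg/kg) against a hypothetical soil background.
print(igeo(1.2, 0.1))               # log2(1.2 / 0.15) ≈ 3.0
print(igeo_class(igeo(1.2, 0.1)))   # class 3 (moderate-to-serious)
print(carcinogenic_band(3e-5))      # acceptable
```
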

  10. European approaches to work-related stress: a critical review on risk evaluation.

    PubMed

    Zoni, Silvia; Lucchini, Roberto G

    2012-03-01

    In recent years, various international organizations have raised awareness regarding psychosocial risks and work-related stress. European stakeholders have also taken action on these issues by producing important documents, such as position papers and government regulations, which are reviewed in this article. In particular, 4 European models that have been developed for the assessment and management of work-related stress are considered here. Although important advances have been made in the understanding of work-related stress, there are still gaps in the translation of this knowledge into effective practice at the enterprise level. There are additional problems regarding the methodology for the evaluation of work-related stress. The European models described in this article are based on holistic, global and participatory approaches, in which the active role and involvement of workers are always emphasized. The limitations of these models lie in the lack of clarity on preventive intervention and, for two of them, the lack of instrument standardization for risk evaluation. The comparison among the European approaches to work-related stress, despite limitations and socio-cultural differences, offers the possibility of developing a social dialogue that is important in defining a correct and practical methodology for work-stress evaluation and prevention.

  11. A School-Based Evaluation Model for Accelerating the Education of Students At-Risk.

    ERIC Educational Resources Information Center

    Fetterman, David M.; Haertel, Edward H.

    This paper presents ideas for the development and utilization of a comprehensive evaluation plan for an accelerated school. It contains information about the purposes of a comprehensive evaluation, the evaluation design, and the kinds of data that might be gathered and used. The first section, "An Approach to Evaluation: Multiple Purposes and…

  12. Are youth mentoring programs good value-for-money? An evaluation of the Big Brothers Big Sisters Melbourne Program.

    PubMed

    Moodie, Marjory L; Fisher, Jane

    2009-01-30

    The Big Brothers Big Sisters (BBBS) program matches vulnerable young people with a trained, supervised adult volunteer as mentor. The young people are typically seriously disadvantaged, with multiple psychosocial problems. Threshold analysis was undertaken to determine whether investment in the program was a worthwhile use of limited public funds. The potential cost savings were based on US estimates of the lifetime costs associated with high-risk youth who drop out of school and become adult criminals. The intervention was modelled for children aged 10-14 years residing in Melbourne in 2004. If the program serviced 2,208 of the most vulnerable young people, it would cost AUD 39.5 M. Assuming 50% were high-risk, the associated costs of their adult criminality would be AUD 3.3 billion. To break even, the program would need to avert high-risk behaviours in only 1.3% (14/1,104) of participants. This indicative evaluation suggests that the BBBS program represents excellent 'value for money'.
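
    The break-even threshold in this abstract is a simple ratio of program cost to avoidable lifetime costs. A sketch reproducing the arithmetic from the figures quoted above (the function name is ours, not the study's):

```python
def break_even_fraction(program_cost, avoidable_cost_per_person, n_high_risk):
    """Fraction of high-risk participants whose adult-criminality costs must
    be averted for the program to pay for itself."""
    return program_cost / (avoidable_cost_per_person * n_high_risk)

program_cost = 39.5e6       # AUD, servicing 2,208 young people
n_high_risk = 1104          # assumed 50% of participants are high-risk
total_avoidable = 3.3e9     # AUD, lifetime criminality costs for the 1,104
per_person = total_avoidable / n_high_risk

f = break_even_fraction(program_cost, per_person, n_high_risk)
print(f"{f:.1%}")   # 1.2%; the abstract rounds up to 14 participants (1.3%)
```
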

  13. Human Health Toxicity Values in Superfund Risk Assessments

    EPA Pesticide Factsheets

    This memorandum revises the hierarchy of human health toxicity values generally recommended for use in risk assessments, originally presented in Risk Assessment Guidance for Superfund, Volume I, Part A.

  14. Added prognostic value of CT characteristics and IASLC/ATS/ERS histologic subtype in surgically resected lung adenocarcinomas.

    PubMed

    Suh, Young Joo; Lee, Hyun-Ju; Kim, Young Tae; Kang, Chang Hyun; Park, In Kyu; Jeon, Yoon Kyung; Chung, Doo Hyun

    2018-06-01

    Our study investigates the added value of computed tomography (CT) characteristics, the histologic subtype classification of the International Association for the Study of Lung Cancer (IASLC)/American Thoracic Society (ATS)/European Respiratory Society (ERS), and genetic mutation status for predicting postoperative prognosis in patients who underwent curative surgical resection for lung adenocarcinoma. We retrospectively enrolled 988 patients who underwent curative resection for invasive lung adenocarcinoma between October 2007 and December 2013. The Cox proportional hazards model was used to explore the risk of recurrence-free survival, based on combinations of conventional prognostic factors, CT characteristics, IASLC/ATS/ERS histologic subtype, and epidermal growth factor receptor (EGFR) mutations. The incremental prognostic value of CT characteristics, histologic subtype, and EGFR mutations over conventional risk factors was measured by C-statistics. During a median follow-up period of 44.7 months (25th to 75th percentile 24.6-59.7 months), postoperative recurrence occurred in 248 patients (25.1%). In the univariate Cox proportional hazards model, female sex, tumor size and stage, CT characteristics, and predominant histologic subtype were associated with tumor recurrence (P < 0.05). In the multivariate Cox regression model adjusted for tumor size and stage, both CT characteristics and histologic subtype were independent predictors of tumor recurrence (P < 0.05). Cox proportional hazards models combining CT characteristics or histologic subtype with tumor size and stage showed higher C-indices (0.763 and 0.767, respectively) than the size- and stage-only model (C-index 0.759, P > 0.05). CT characteristics and histologic subtype therefore have relatively limited added prognostic value over tumor size and stage in surgically resected lung adenocarcinomas. Copyright © 2018 Elsevier B.V. All rights reserved.
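
    The C-index reported above is Harrell's concordance statistic: among comparable patient pairs, the fraction in which the model assigns the higher risk score to the patient who recurs earlier. A self-contained sketch with made-up follow-up data, not the study's:

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's C: a pair (i, j) is usable when subject i has the strictly
    earlier time and that time is an observed event; it is concordant when
    the model scores i as higher risk. Ties in score count as 0.5."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical follow-up times (months), recurrence indicators, model scores.
times  = [12, 30, 45, 60]
events = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.4, 0.2]
print(harrell_c_index(times, events, scores))  # 1.0: perfectly concordant
```
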

  15. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    USGS Publications Warehouse

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
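
    The LUPM outputs mentioned here (losses avoided through mitigation and the rate of return on mitigation investment) reduce to simple arithmetic once Hazus supplies scenario loss estimates. A sketch with hypothetical dollar figures; the function names are ours, not the LUPM's API:

```python
def losses_avoided(loss_without, loss_with):
    """Scenario losses averted by the mitigation action."""
    return loss_without - loss_with

def return_on_mitigation(loss_without, loss_with, mitigation_cost):
    """Avoided losses net of cost, per dollar of mitigation spending."""
    return (losses_avoided(loss_without, loss_with) - mitigation_cost) / mitigation_cost

# Hypothetical Hazus loss estimates (USD) for one earthquake scenario.
baseline_loss  = 120e6   # expected loss with no mitigation
mitigated_loss = 70e6    # expected loss after a retrofit policy
policy_cost    = 20e6

print(losses_avoided(baseline_loss, mitigated_loss))                    # 50,000,000
print(return_on_mitigation(baseline_loss, mitigated_loss, policy_cost)) # 1.5
```
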

  16. Application of wildfire simulation models for risk analysis

    NASA Astrophysics Data System (ADS)

    Ager, A.; Finney, M.

    2009-04-01

    Wildfire simulation models are being widely used by fire and fuels specialists in the U.S. to support tactical and strategic decisions related to the mitigation of wildfire risk. Much of this application has resulted from the development of a minimum travel time (MTT) fire spread algorithm (M. Finney) that makes it computationally feasible to simulate thousands of fires and generate burn probability and intensity maps over large areas (10,000 - 2,000,000 ha). The MTT algorithm is parallelized for multi-threaded processing and is embedded in a number of research and applied fire modeling applications. High-performance computers (e.g., 32-way 64-bit SMP) are typically used for MTT simulations, although the algorithm is also implemented in the 32-bit desktop FlamMap3 program (www.fire.org). Extensive testing has shown that this algorithm can replicate large fire boundaries in the heterogeneous landscapes that typify much of the wildlands in the western U.S. In this paper, we describe the application of the MTT algorithm to understand spatial patterns of burn probability (BP), and to analyze wildfire risk to key human and ecological values. The work is focused on a federally managed 2,000,000 ha landscape in the central interior region of Oregon State, USA. The fire-prone study area encompasses a wide array of topography and fuel types and a number of highly valued resources that are susceptible to fire. We quantitatively defined risk as the product of the probability of a fire and the resulting consequence. Burn probabilities at specific intensity classes were estimated for each 100 x 100 m pixel by simulating 100,000 wildfires under burn conditions that replicated recent severe wildfire events that occurred under conditions where fire suppression was generally ineffective (97th percentile, August weather). We repeated the simulation under milder weather (70th percentile, August weather) to replicate a "wildland fire use scenario" where suppression is minimized to
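
    The risk definition in this record (probability of a fire at a given intensity times the consequence to the value at that location) is typically evaluated per pixel by summing over intensity classes. A sketch; the intensity classes, probabilities, and response values below are hypothetical, not the study's:

```python
def expected_net_value_change(burn_probs, response):
    """Risk for one pixel: sum over intensity classes of
    P(burn at class c) * consequence(class c)."""
    return sum(p * response[c] for c, p in burn_probs.items())

# Hypothetical burn probabilities by flame-length class for one 100 m pixel,
# and the corresponding relative loss (-) or benefit (+) to the resource.
burn_probs = {"low": 0.020, "moderate": 0.008, "high": 0.002}
response   = {"low": -5.0, "moderate": -40.0, "high": -90.0}

print(expected_net_value_change(burn_probs, response))  # ≈ -0.6
```
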

  17. Evaluation of fetal anthropometric measures to predict the risk for shoulder dystocia.

    PubMed

    Burkhardt, T; Schmidt, M; Kurmanavicius, J; Zimmermann, R; Schäffer, L

    2014-01-01

    To evaluate the quality of anthropometric measures to improve the prediction of shoulder dystocia by combining different sonographic biometric parameters. This was a retrospective cohort study of 12,794 vaginal deliveries with complete sonographic biometry data obtained within 7 days before delivery. Receiver-operating characteristics (ROC) curves of various combinations of the biometric parameters, namely, biparietal diameter (BPD), occipitofrontal diameter (OFD), head circumference, abdominal diameter (AD), abdominal circumference (AC) and femur length were analyzed. The influences of independent risk factors were calculated and their combination used in a predictive model. The incidence of shoulder dystocia was 1.14%. Different combinations of sonographic parameters showed comparable ROC curves without advantage for a particular combination. The difference between AD and BPD (AD - BPD) (area under the curve (AUC) = 0.704) revealed a significant increase in risk (odds ratio (OR) 7.6 (95% CI 4.2-13.9), sensitivity 8.2%, specificity 98.8%) at a suggested cut-off ≥ 2.6 cm. However, the positive predictive value (PPV) was low (7.5%). The AC as a single parameter (AUC = 0.732) with a cut-off ≥ 35 cm performed worse (OR 4.6 (95% CI 3.3-6.5), PPV 2.6%). BPD/OFD (a surrogate for fetal cranial shape) was not significantly different between those with and those without shoulder dystocia. The combination of estimated fetal weight, maternal diabetes, gender and AD - BPD provided a reasonable estimate of the individual risk. Sonographic fetal anthropometric measures appear not to be a useful tool to screen for the risk of shoulder dystocia due to a low PPV. However, AD - BPD appears to be a relevant risk factor. While risk stratification including different known risk factors may aid in counseling, shoulder dystocia cannot effectively be predicted. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
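
    The screening statistics quoted above (sensitivity, specificity, PPV, odds ratio) all derive from a single 2×2 table of cut-off result versus shoulder dystocia. A sketch with hypothetical counts chosen to roughly reproduce the abstract's figures; they are not the study's actual table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening metrics for a dichotomized predictor."""
    sens = tp / (tp + fn)            # cases flagged by the cut-off
    spec = tn / (tn + fp)            # non-cases correctly passed
    ppv  = tp / (tp + fp)            # flagged deliveries that are cases
    odds_ratio = (tp * tn) / (fp * fn)
    return sens, spec, ppv, odds_ratio

# Hypothetical counts for an AD - BPD >= 2.6 cm cut-off in ~12,794 deliveries.
sens, spec, ppv, or_ = diagnostic_metrics(tp=12, fp=148, fn=134, tn=12500)
print(f"sens={sens:.1%} spec={spec:.1%} ppv={ppv:.1%} OR={or_:.1f}")
# sens=8.2% spec=98.8% ppv=7.5% OR=7.6
```
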

  18. An Integrated Risk Management Model for Source Water Protection Areas

    PubMed Central

    Chiueh, Pei-Te; Shang, Wei-Ting; Lo, Shang-Lien

    2012-01-01

    Watersheds are recognized as the most effective management unit for the protection of water resources. For surface water supplies that use water from upstream watersheds, evaluating threats to water quality and implementing a watershed management plan are crucial for the maintenance of drinking water safe for humans. The aim of this article is to establish a risk assessment model that provides basic information for identifying critical pollutants and areas at high risk for degraded water quality. In this study, a quantitative risk model that uses hazard quotients for each water quality parameter was combined with a qualitative risk model that uses the relative risk level of potential pollution events in order to characterize the current condition and potential risk of watersheds providing drinking water. In a case study of Taipei Source Water Area in northern Taiwan, total coliforms and total phosphorus were the top two pollutants of concern. Intensive tea-growing and recreational activities around the riparian zone may contribute the greatest pollution to the watershed. Our risk assessment tool may be enhanced by developing, recording, and updating information on pollution sources in the water supply watersheds. Moreover, management authorities could use the resultant information to create watershed risk management plans. PMID:23202770
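
    The hazard-quotient half of the model above is the ratio of a measured water-quality value to its standard, with HQ > 1 flagging a parameter of concern. A sketch; all concentrations and standards below are hypothetical, not the Taipei monitoring data:

```python
def hazard_quotient(measured, standard):
    """HQ = measured value / water-quality standard; HQ > 1 means the
    parameter exceeds its acceptable level."""
    return measured / standard

# Hypothetical monitoring values vs. hypothetical standards
# (mg/L for nutrients, CFU/100 mL for coliforms).
params = {
    "total_phosphorus": (0.09, 0.05),
    "total_coliforms": (320, 200),
    "ammonia_n": (0.3, 0.5),
}
hq = {name: hazard_quotient(m, s) for name, (m, s) in params.items()}

# Parameters of concern, worst first.
flagged = sorted((name for name, q in hq.items() if q > 1), key=lambda n: -hq[n])
print(flagged)  # ['total_phosphorus', 'total_coliforms']
```
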

  19. Two-scale evaluation of remediation technologies for a contaminated site by applying economic input-output life cycle assessment: risk-cost, risk-energy consumption and risk-CO2 emission.

    PubMed

    Inoue, Yasushi; Katayama, Arata

    2011-09-15

    A two-scale evaluation concept of remediation technologies for a contaminated site was expanded by introducing life cycle costing (LCC) and economic input-output life cycle assessment (EIO-LCA). The expanded evaluation index, the rescue number for soil (RN(SOIL)) with LCC and EIO-LCA, comprises two scales, such as the risk-cost, risk-energy consumption or risk-CO₂ emission of a remediation. The effectiveness of RN(SOIL) with LCC and EIO-LCA was examined in a typical contamination and remediation scenario in which dieldrin contaminated an agricultural field. Remediation was simulated using four technologies: disposal, high-temperature thermal desorption, biopile and landfarming. Energy consumption and CO₂ emission were determined from a life cycle inventory analysis using monetary-based intensity derived from an input-output table. The values of RN(SOIL) based on risk-cost, risk-energy consumption and risk-CO₂ emission were calculated, and rankings of the candidate technologies were compiled according to the RN(SOIL) values. A comparison of the three rankings showed different ranking orders, indicating that the scales are not mutually interchangeable for two-scale evaluation and that each scale should be used independently. RN(SOIL) with LCA will be helpful in selecting a technology, provided an appropriate scale is determined. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. An Evaluation of a Comprehensive Mentoring Program on Selected At-Risk Students with Specific School Problems

    ERIC Educational Resources Information Center

    Washington, Taren L.

    2015-01-01

    This study evaluated a comprehensive mentoring program on selected at-risk students with specific school problems (attendance, discipline referrals, and core area grades). The sample included youths in Grades 4-8 who differed on some characteristics including grade-level, ethnicity, and gender. For the purpose of this mixed methods study, the…